List of Archived Posts

2004 Newsgroup Postings (03/20 - 04/22)

IBM 360 memory
IBM 360 memory
Microsoft source leak
IBM 360 memory
IBM 360 memory
IBM 360 memory
Memory Affinity
Digital Signature Standards
Digital Signature Standards
IBM 360 memory
IBM 360 memory
Do we really need all three of these newsgroups?
real multi-tasking, multi-programming
JSX 328x printing (portrait)
The SOB that helped IT jobs move to India is dead!
"360 revolution" at computer history museuam (x-post)
IBM 360 memory
REXX still going strong after 25 years
The SOB that helped IT jobs move to India is dead!
REXX still going strong after 25 years
REXX still going strong after 25 years
REXX still going strong after 25 years
System/360 40th Anniversary
Xquery might have some things right
who were the original fortran installations?
System/360 40th Anniversary
REXX still going strong after 25 years
who were the original fortran installations?
360 and You Bet Your Company
cheaper low quality drives
cheaper low quality drives
someone looking to donate IBM magazines and stuff
System/360 40th Anniversary
someone looking to donate IBM magazines and stuff
System/360 40th Anniversary
50 years of computer payroll
Omniscience Protocol Requirements
ANNOUNCE: NIST Considers Schneier Public Key Algorithm
FC1 & FC2
System/360 40th Anniversary
RFC-2898 Appendix B
REXX still going strong after 25 years
REXX still going strong after 25 years
[OT] Microsoft aggressive search plans revealed
who were the original fortran installations?
who were the original fortran installations?
ok, today's updates for FC2 test2 breaks
ok, today's updates for FC2 test2 breaks
ok, today's updates for FC2 test2 breaks
Has the Redhat ntp time server gone off-line?
ok, today's updates for FC2 test2 breaks
ok, today's updates for FC2 test2 breaks
Was it ever thus?
COMPUTER RELATED WORLD'S RECORDS?
[OT] Computer Proof of the Kepler Conjecture
If there had been no MS-DOS
If there had been no MS-DOS
If you're going to bullshit, eschew moderation
How secure is 2048 bit RSA?
Happy Birthday Mainframe
How secure is 2048 bit RSA?
If you're going to bullshit, eschew moderation
microsoft antitrust
System/360 40 years old today
System/360 40 years old today
System/360 40 years old today
System/360 40 years old today
Happy Birthday Mainframe
bits, bytes, half-duplex, dual-simplex, etc
A POX on you, Dennis Ritchie!!!
If you're going to bullshit, eschew moderation
What terminology reflects the "first" computer language ?
ibm mainframe or unix
DASD Architecture of the future
DASD Architecture of the future
DASD Architecture of the future

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Sat, 20 Mar 2004 13:10:00 -0700
Anne & Lynn Wheeler writes:
tss/360 had the concept of virtual memory and one-level store ... but almost no concept of performance techniques and dynamic adaptive anything. besides tss/360's horribly long pathlengths, there were some static things that they just didn't stop and think about. CMS used a

a slight further digression: standard cms didn't have a virtual memory file system ... it had a somewhat CTSS-looking filesystem using effectively real i/o semantics that got simulated in cp/67's virtual memory environment .... a recent slight digression regarding CCWTRANS & UNTRANS simulating real I/O CCW programs in a virtual memory environment:
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming

in the early '70s, i did do a page-mapped infrastructure for the CMS filesystem ... which supported general semantics ... being able to somewhat do the one-level store stuff ... ala tss/360 ... but also supporting the existing CMS filesystem semantics ... aka real I/O operations were translated into block page read/write. From that standpoint it supported the high-level block read/write semantics standard in the cms filesystem ... but the underlying implementation was significantly more efficient, since it avoided the whole business of virtual-to-real translation, with the page fixing/unfixing that had to go on during the period of the simulated real i/o.

random past postings about paged/memory mapped filesystem:
https://www.garlic.com/~lynn/submain.html#mmap

it had the efficiency of the paging implementation along with tight & efficient block read/write, leveraging the higher-level CMS filesystem conventions.
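
the original implementation is long gone ... but for a rough modern flavor of the idea (a block-read api whose underlying implementation is the paging machinery), a minimal sketch in posix-style C; mmap stands in for the cp paging subsystem, and the names/layout are mine, purely illustrative:

#include <stddef.h>
#include <string.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

#define BLKSIZE 4096            /* one cms 4k block == one page */

struct mapfs {
    char  *base;                /* whole file mapped once */
    size_t len;
};

int mapfs_open(struct mapfs *fs, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    fs->len = (size_t)lseek(fd, 0, SEEK_END);
    fs->base = mmap(NULL, fs->len, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                  /* the mapping persists after close */
    return fs->base == MAP_FAILED ? -1 : 0;
}

/* same high-level semantics as a block read ... but the "i/o" is a
   page fault handled by the paging subsystem: no channel program,
   no ccw translation, no page fix/unfix */
int mapfs_readblk(struct mapfs *fs, size_t blkno, void *buf)
{
    size_t off = blkno * BLKSIZE;
    if (off >= fs->len) return -1;
    memcpy(buf, fs->base + off, BLKSIZE);
    return 0;
}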

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 20 Mar 2004 19:03:05 -0700
johnl@iecc.com (John R. Levine) writes:
My paperweight is clear lucite with a blue back. It says

the paperweight is actually my wife's, who did a stint as manager of 6000 engineering architecture ... before we started the ha/cmp project. The guy she reported to then went on to head up somerset, and later served a stint as president of MIPS.

also in a bottom desk drawer is a clear plastic ball about 3in in diameter; embedded in the plastic is a flat stylized map of the world with network links in north america, europe, south america, asia, australia, africa ... and the notation 1000 nodes, vnet ibm.

it was created for the 1000th node on the internal network (a little after the time the internet went over 256 nodes).

misc. reference:
https://www.garlic.com/~lynn/internet.htm#22
https://www.garlic.com/~lynn/99.html#112

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Microsoft source leak

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft source leak
Newsgroups: alt.folklore.computers
Date: Sat, 20 Mar 2004 21:01:50 -0700
KR Williams writes:
Later. IBM got rights to manufacture x86 processors in trade for stock. Intel was in trouble financially and IBM needed Intel. IBM sold the stock in short order for a couple of bux. Intel also got IBM's Engineering Design System (EDS), which they revamped with a usable front-end.

during the '70s, a fair number of electronics shops in the valley were running vm/370 (some having even started out running cp/67). in the above period ... somebody from the palo alto branch office tried to get a whole load of mainframes into intel running mvs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Sat, 20 Mar 2004 21:45:17 -0700
glen herrmannsfeldt writes:
My understanding is that when running under MVS it is better to run the compilers without overlays. One can, for example, load the Fortran H compiler into the link editor, and write it out again without any overlay statements. This will remove the overlay structure that it would otherwise have.

let's say you have a 768kbyte real machine ... with 104 4k pages available after fixed kernel requirements (400kbytes). If this is being heavily time-shared with lots of other users ... and the compilers have an aggregate code size of 1mbyte to 3mbytes ... then it would be somewhat more efficient to do it in 100-200kbyte chunks (and in a real-memory environment it isn't possible to use chunks larger than the available real storage).

If you have 1gbyte (or more) of real storage with the same 1mbyte to 3mbytes aggregate compiler size, then it would be more efficient to read/fetch it as one operation ... trading off real memory utilization against disk arm utilization.

attached are several URLs to lengthy posts about the observation that over a 10-15 year period, relative system disk/dasd performance had declined by a factor of five to ten times. when i first started making the statement, GPD (disk division) assigned several people from their performance & modeling group to refute the claims. They eventually came back with the conclusion that I had slightly understated the problem. This eventually culminated in GPD doing a user group presentation with recommendations regarding disk allocation and optimization (i.e. to compensate for the declining relative system disk performance).

whether you are doing large block fetches in a real memory system or in a virtual memory system (even when there has to be emulated real I/O with page fixing/unfixing and ccw translation) ... it still tends to be better than a simplistic memory-mapped implementation that does most of its fetches one 4k page fault at a time.

the memory mapped file system that i had done for cms (30+ years ago):
https://www.garlic.com/~lynn/submain.html#mmap

had an API where CMS specified the virtual address range to be mapped to a filesystem area. there was some dynamic adaptive stuff that looked at contention for real storage and decided how best to fulfill the request. the base CMS filesystem also didn't have the concept of contiguous allocation ... pretty much scatter allocation, somewhat inherited from CTSS (and not all that different from various dos, unix, etc implementations). As part of the memory-mapped enhancements, I also added semantics to the filesystem that attempted to get contiguous records on disk, and modified the application that generated program executables to invoke the contiguous option.
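
a rough sketch, in modern posix terms, of what such a "decide how best to fulfill the request" policy can look like; madvise stands in for the dynamic adaptive fetch logic, and the contention figure is a made-up stand-in for whatever real-storage measurement the system keeps:

#include <stddef.h>
#include <sys/mman.h>

/* hypothetical policy hook, called when a file has just been mapped:
   contention is 0.0 (real storage idle) to 1.0 (thrashing) */
void advise_fetch(void *addr, size_t len, double contention)
{
    if (contention < 0.5) {
        /* plenty of free real storage: schedule one big contiguous
           read-ahead instead of taking a 4k fault per page */
        madvise(addr, len, MADV_WILLNEED);
    } else {
        /* real storage is tight: fall back to pure demand paging,
           one page fault at a time */
        madvise(addr, len, MADV_RANDOM);
    }
}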

observations that relative system disk performance was declining (aka disks were getting faster, but the rest of the system was getting much faster than disks were):
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003.html#21 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
https://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 20 Mar 2004 22:10:45 -0700
"del cecchi" writes:
Well, sitting on top of my TV is a plastic cube with a blue gene compute chip imbedded in it. :-)

yes, but my selectric typing element apl "golf ball" is something like 35 years old ... and presumably would still work, if there was a 2741 selectric typewriter to put it into.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 21 Mar 2004 09:39:23 -0700
jmfbahciv writes:
There was a third way to do things: multiple sharable high segments. This method eliminated overlays which, IIRC, always had bugs.

there were OVERLAY linkedit statements for doing program overlays from the early 360 days. in theory, large multi-module programs could be compiled and then, during the linkedit step, different modules placed in different overlays ... and the system would magically handle fetching/replacing whichever overlay needed to be resident at any given time.

Most of the compilers, etc ... just packaged things into phases and linkedited the phases as different executables, with a phase doing something like an XCTL between phases. Fortran H had relatively few such phases, as large executables. PLI was possibly the worst; I have memories of an early PLI compiler where the number of distinct compiler executables ran to the hundred(s) (the aggregate PLI compiler size was larger than Fortran H, but it was able to work in a smaller real storage domain).
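
the XCTL-style phase chaining has a rough unix analogue ... each phase is a separate executable that finishes by replacing itself with the next one, so only one phase occupies storage at a time. a minimal sketch in C (the phase names are invented):

#include <stdio.h>
#include <unistd.h>

/* phase1: do this phase's work, then hand control to the next phase,
   XCTL-style -- the current program image is replaced, so the two
   phases never occupy storage at the same time */
int main(int argc, char **argv)
{
    (void)argc; (void)argv;
    /* ... phase 1 work, e.g. write intermediate text to a work file ... */
    execl("./phase2", "phase2", (char *)NULL);
    perror("execl");            /* reached only if the chain breaks */
    return 1;
}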

overlay statements from a recent IBM linkedit manual (note: not recommended):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IGG3L100/2.5.13?DT=19911220122242
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IGG3L100/2.5.13.1?SHELF=&DT=19911220122242&CASE=

comment about creating overlay programs:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IGG3L100/2.2.1.7?DT=19911220122242
comment about creating multiple load modules
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IGG3L100/2.2.1.8?DT=19911220122242

more than you ever wanted to know about the linkage editor & loader:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/IGG3L100/CCONTENTS?DT=19911220122242

With the memory mapped filesystem,
https://www.garlic.com/~lynn/submain.html#mmap

I had done virtual memory management changes for both cp and cms; a subset, discontiguous shared segments, shipped in vm/370 release 3. I had modified some amount of CMS code to be read-only and packaged it as virtual memory "shared" segments ... that could float ... aka different virtual address spaces could have the same shared segments at different virtual addresses.
https://www.garlic.com/~lynn/submain.html#adcon
It was also possible for the image of the shared segments to be resident in the memory mapped filesystem.

The ability to float shared segments (and some number of other features) was dropped in the product release.

Also, the images for shared segments were restricted to a special, common, system-wide repository. The combination of a common system-wide repository and the requirement that each shared segment have a predefined, common, fixed address then created its own kinds of problems.

370s were limited to 24bit (16mbyte) addresses, both virtual and real. shared segments (loaded high) tended to be relatively persistent ... and several of the applications, when laid out as discontiguous shared segments, could run several hundred kilobytes to a megabyte. The issue was that eventually potentially dozens of applications accumulated in the shared segment library. Different users might want different combinations of shared-segment applications loaded into their address space. Not knowing the possible combinations in advance then required that each application in the shared segment library have a unique, system-wide, predefined address ... and eventually the total number of shared-segment applications started to eat up the total 16mbyte virtual address space (which also had to accommodate the user programs and the cms kernel).
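
rough arithmetic on why that blows up (the specific split is illustrative, not from any particular configuration):

   16mbytes   total virtual address space
 -  4-6mbytes (say) for the cms kernel and user program area
 = 10-12mbytes left for fixed-address shared segments

at several hundred kbytes to a mbyte per application ... with every application in the system-wide library permanently reserving its own unique address range (whether or not a given user has it loaded) ... something like 15-30 applications is enough to exhaust the space.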

A webpage that talks about "loading and overlays" and, further down the page, about tss/360 position-independent code:
http://www.iecc.com/linker/linker08.html

Note that the os/360/370/390 linkage editor only provides position independence up to the point where the executable image is loaded from disk. The executable image on disk has "RLD" entries for all the address constants in the image that must be swizzled as the executable is loaded. Once the executable image is part of the virtual address space, the swizzled address constants bind it to that address location (and preclude the same, exact image from appearing at different address locations in different virtual address spaces).
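
a minimal sketch in C of the swizzle itself (the rld record layout here is invented for illustration, not the real os/360 object format):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* each rld entry here is just the image offset of one 4-byte address
   constant, link-edited as if the program were loaded at link_addr */
void swizzle(uint8_t *image, const size_t *rld, size_t nrld,
             uint32_t link_addr, uint32_t load_addr)
{
    uint32_t delta = load_addr - link_addr;
    for (size_t i = 0; i < nrld; i++) {
        uint32_t adcon;
        memcpy(&adcon, image + rld[i], sizeof adcon);
        adcon += delta;             /* bind the adcon to the load address */
        memcpy(image + rld[i], &adcon, sizeof adcon);
    }
    /* after this, the image is bound to load_addr -- the same pages can
       no longer appear at a different virtual address, which is exactly
       why such an image can't float the way the shared segments did */
}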

the method I used in the early '70s modifications for position-independent code was basically a variation on the "ELF" scheme described in the above "loading" overview ... using displacement addresses.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Memory Affinity

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Memory Affinity
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 21 Mar 2004 21:51:56 -0700
Andi Kleen writes:
The commercial pioneer may have been Sequent Dynix/PTX, who also had a workload manager (this means they could move memory between nodes). It works kind of like swapping, except that the memory is not written to the swap partition, but moved to other nodes. Deciding when to do this is not trivial, the easiest is to just let the user program tell you (using a "NUMA API").

a commercial pioneer may have been IDC, circa mid-70s. they were the second cp/67 time-sharing service bureau in '68, and later moved to vm/370:
https://www.garlic.com/~lynn/submain.html#timeshare

in the mid-70s, IDC had datacenters in waltham and sanfran. They provided 7x24 time-sharing to customers around the world ... and therefore there was no time for preventive maintenance when customers weren't connected. they had several vm/370 enhancements and supported "swap out" of one machine and "swap in" to another machine within the waltham cluster of loosely-coupled machines (shared disk/dasd). more interesting, however, is that they also supported migration of address spaces between nodes in waltham and sanfran over a 56kbit leased line (made somewhat easier because a lot of their service was information queries to financial databases that were replicated in both sanfran and waltham).

misc. past posts referencing idc
https://www.garlic.com/~lynn/97.html#14 Galaxies
https://www.garlic.com/~lynn/99.html#10 IBM S/360
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2002f.html#17 Blade architectures
https://www.garlic.com/~lynn/2002l.html#56 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002l.html#66 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003h.html#31 OT What movies have taught us about Computers
https://www.garlic.com/~lynn/2003k.html#17 Dealing with complexity

sequent, dg, and convex all did SCI NUMA implementations ... dg & sequent with relatively standard dolphin parts. all three worked on various workload management and partitioning solutions for a complex ... aka partitioning a full complex into multiple sub-complexes of processor groups, with each sub-complex having its own system/kernel image. workload management could mean changing the number of processors in a sub-complex and/or moving workload between sub-complexes (as an aside, mainframe "hardware" LPAR partitioning of a complex predates the SCI NUMA implementations by possibly ten years).

various old SCI posts:
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect speeds )
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth

from some past material:
ORACLE SUPPORTS SEQUENT'S ENTERPRISE ARCHITECTURE

Oct. 20, 1995, Redwood Shores, Calif. -- Oracle Corp. will support Sequent's NUMA-Q architecture for large-scale enterprise computing -- which includes an intelligent, high-speed interconnect and scales to more than 250 processors. This SMP-based technology will support Oracle products including the Oracle7 database, Oracle7 Parallel Server (OPS) and Oracle7 InterNode Parallel Query (IPQ). OPS and IPQ are innovative approaches to accelerating processes for customers operating very large databases. The basic building block of the NUMA-Q architecture is the four-processor Intel Pentium Pro baseboard, enhanced with extra redundancy and robustness for increased availability in enterprise computing environments. Sequent connects multiple Pentium Pro "quads" with its new IQ-Link, an intelligent, high-speed interconnect which moves data between the quads quickly.

.....

and from summer '97, sequent to support ia-64
http://www.tgc.com/hpc-bin/artread.pl?direction=Current&articlenumber=11494

I have some recollection of sequent pitching a high-availability clustering configuration ... four 256-processor NUMA machines tied together in a no-single-point-of-failure configuration for high-end commercial processing (along with fiber-channel support) ... cluster announce jan. 1998
http://www.tgc.com/hpc-bin/artread.pl?direction=Current&articlenumber=12462

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

Digital Signature Standards

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Digital Signature Standards
Newsgroups: sci.crypt
Date: Mon, 22 Mar 2004 09:35:12 -0700
gorilla_nerfball@hotmail.com (Gorilla Nerfball) writes:
The last info I found on the subject is from 1996 and talks about the debate between RSA and its derivatives (ISO 9796), and NIST DSA/DSS. But I have not found any concrete info to point me to one, or the other, or something else altogether. Any references would be useful. What's happened in the last 8 years?

In the financial industry, there were a couple issues related to digital signatures.

In the early 90s, there was a big push for RSA signatures in conjunction with x.509 identity certificates. The issue by the mid-90s was that the identity information carried in an x.509 identity certificate represented complex and serious privacy and liability issues. As a result you saw retrenching to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo
that just contained an account number, a public key, and a bunch of administrative gorp.

In the early & mid-90s ... there is also the issue of using hardware tokens for digital signatures. The problem then was that the chips seldom had reasonable and reliable random number generation. DSA (& ECDSA) requires high quality random numbers for both key generation as well as every digital signature operation. A typical RSA hardware token from the era had key generation done by a reliable external hardware box and the keys injected into the token during some personalization phase. But basically, early to mid-90s hardware tokens with questionable random number generation put DSA signature operations at risk. By the late-90s, there were starting to be chips generally available that had reasonably trusted random number facilities, providing some comfort for DSA (& ECDSA) signature operations.

A supposedly desirable application for signatures in the mid-90s was financial transactions: take a standard financial transaction, ASN.1 encode it and digitally sign it; then package up the transaction, the signature, and the certificate and send it on its way to the financial institution. The issue was that a typical financial transaction of the period totaled (and still totals) 60-80 bytes, a 1024-bit RSA signature is 128 bytes, and a typical relying-party-only certificate ran 4kbytes to 12kbytes; aka the standard, accepted digital signature process (for the supposedly main, driving market force for digital signatures) results in a two-orders-of-magnitude size bloat (aka an increase of roughly one hundred times) in typical financial transaction size.

Now there was a little bit of business process analysis that went on. Look at a relying-party-only certificate: you go to your financial institution and present your public key ... they record it in your account and give you back a relying-party-only certificate. You then use that certificate to repeatedly send your financial institution digitally signed transactions that have been bloated by a factor of one hundred ... because you are constantly sending them back a copy of a certificate that they already have the original of. So my assertion has been that such certificates are redundant and superfluous (for supposedly the primary certificate market purpose), in addition to representing a serious operational 100-times size bloat.
https://www.garlic.com/~lynn/x959.html#aads

Somewhat in parallel with this, X9 had been working on X9.62, ECDSA (which is referenced by the FIPS 186-2 document and can be found on the nist.gov web site). A 163-bit ECDSA key results in a 42-byte digital signature and gives at least the security of a 1024-bit RSA key with its 128-byte digital signature ... a reference to the ietf internet draft on key strengths:
https://www.garlic.com/~lynn/2004.html#38 When rsa vs dsa

Also, X9 has been working on compressed ECDSA digital certificates, X9.68, for high-volume transaction operations ... certificates which might be as small as 300-600 bytes. Then a 60-80 byte transaction would have a 42-byte signature and a 500-byte certificate ... which would reduce the size bloat from 100-times to something less than 10-times.

Note, however, one possible x9.68 certificate compression technique is to eliminate fields from the certificate that the recipient is known to already have. Since a relying-party-only certificate was generated by the recipient, it can be shown that the recipient already has all the fields, and the appended certificate can be reduced to zero bytes. So the alternative assertion is that, rather than saying redundant and superfluous certificates needn't be appended to transactions, it is possible to say that zero-byte certificates are appended. This can result in ECDSA digitally signed transactions that possibly only double the size of a financial transaction (instead of the 10-times or 100-times size bloat):
https://www.garlic.com/~lynn/x959.html#x959
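
back-of-envelope, using mid-range figures from the above (exact numbers vary with key size and certificate gorp):

   70-byte transaction + 128-byte rsa signature + 8kbyte certificate
      = ~8.4kbytes ... roughly 120 times the original transaction

   70-byte transaction + 42-byte ecdsa signature + 500-byte x9.68 cert
      = ~600 bytes ... roughly 9 times

   70-byte transaction + 42-byte ecdsa signature + zero-byte certificate
      = 112 bytes ... roughly 1.6 times (the "possibly only double")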

Now, going back to the thing that arguably really launched the certificate industry ... and digital signatures ... this thing called SSL and e-commerce. Some specific references to the SSL/e-commerce innovation:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
and a lot more general postings about SSL certificates and digital signatures:
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

Digital Signature Standards

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Digital Signature Standards
Newsgroups: sci.crypt
Date: Mon, 22 Mar 2004 13:04:14 -0700
... and a slightly related hardware token issue ... the analysis is from late '98 ... but should still not be too dated.

RSA signatures are extremely compute intensive and power hungry; while doable on a standard 7816 hardware token, a signature runs to many seconds elapsed time.

there are also industrial engineering issues with 7816 contacts ... which are being addressed by the iso 14443 proximity standard ... as well as consumer ease-of-use issues (although in '98 there are still price issues with 14443 technology vs. 7816 technology).

some upcoming hardware chips:

1) emerging chips with trusted random number generation ... support for DSA & ECDSA ... with ECDSA implementations leveraging commonly available "DES accelerators" allowing ECDSA signatures in under a second and within the iso 14443 proximity power profile

2) emerging chips with RSA accelerators that hold promise for RSA signatures being done in a second or two ... the problem is that the added circuits don't particularly reduce the power requirements for RSA signatures ... they just compress the amount of power needed into a smaller time period. RSA without acceleration circuits wasn't very practical within the ISO 14443 power profile ... and accelerator circuits compressing the power requirements into an even shorter period of time make it even less practical for the ISO 14443 power profile.

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 22 Mar 2004 19:55:30 -0700
Peter Flass writes:
I'm not an expert, but I believe TSS (Time Sharing System) was an IBM OS for the 360/67, I would guess an upgrade of CTSS for the 70xx machines.

As people have said it was supposedly somewhat bloated, and eventually lost out to CP67/CMS and later VM.


some of the history, extracted from melinda's history paper:
https://www.garlic.com/~lynn/2004c.html#11 40yrs, science center, feb. 1964

another extract:
http://listserv.uark.edu/scripts/wa.exe?A2=ind9803&L=vmesa-l&F=&S=&P=40304

a (linux) paper talking about VM and lots of extracts from Melinda's paper (after you get thru the front part):
http://www.itworld.com/Comp/1369/LWD000606S390/

a copy of one of the extracts from the above ... itself an extract from melinda's paper:
"Throughout 1967 and very early 1968, IBM's Systems Development Division, the guys who brought you TSS/360 and OS/360, continued its effort to have CP-67 killed, sometimes with the help of some IBM Research staff. Substantial amounts of Norm Rasmussen's, John Harmon's, and my time was spent participating in technical audits which attempted to prove we were leading IBM's customers down the wrong path and that for their (the customers'!) good, all work on CP-67 should be stopped and IBM's support of existing installations withdrawn." (R. U. Bayles quoted in Varian, p. 97).

melinda's website (with lots of the details):
https://www.leeandmelindavarian.com/Melinda#VMHist

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 22 Mar 2004 21:31:49 -0700
adam@fsf.net (Adam Thornton) writes:
And, of course, for what it's worth, Melinda is married to Lee, who was one of the earliest TSS users.

i was an undergraduate who got to play in the datacenter ... the first task was writing a replacement (in 360 assembler) for 1401 MPIO (front end to the 709) ... which would run on the 360/30 (as opposed to running the 360/30 in 1401 emulation mode and directly executing MPIO).

the university then replaced the 709 and the 360/30 with a 360/67, intended for TSS/360. pending tss/360 becoming really operational ... the university ran the 67 in straight 360/65 mode with os/360.

the ibm se did get some time on weekends for testing tss/360 ... and I would be in there also ... sometimes working around him and/or playing with tss/360 myself. I remember one time when he had worked out something like 60 bug fixes on release 0.68 and sent them in to mohansic. he got an answer back to the effect that mohansic was just shipping release 0.71, and would he please reverify the bug fixes against release 0.71 and resubmit them.

eventually, along the way, cp/67 was installed the last week in january, 1968 ... mostly for testing purposes ... the 360/67 continued to run os/360 in 360/65 mode the majority of the time. The ibm se and I did put together a fortran edit, compile & test benchmark with simulated terminal response ... and ran the script against both tss/360 and cp/67 on the same hardware. tss/360 managed four users running the script with second-plus response for trivial interactive operations ... while cp67/cms managed something like 30 users running effectively the same script with subsecond response for trivial interactive operations (and this was before I really got rolling on rewriting major cp/67 pathlengths).

i did get to attend the spring '68 (ibm user group) share meeting in houston, where cp/67 was "officially" announced. minor reference:
https://www.garlic.com/~lynn/2003d.html#72 CP/67 35th anniversary

There is this story from the houston share meeting where I got into an argument with one of the lead IBM TSS programmers after four hrs of drinking at SCIDS (Society for Continuous Inebriation During Share), and one of his cohorts grabbed his arm as he was pulling it back for a really good punch. A special meeting was then set up the next day in the Houston astrodome for us to meet and both agree that it never happened.

I also made a presentation at the fall '68 share meeting in Atlantic City on some of my os/360 performance optimization work ... as well as some of my cp/67 performance optimization work ... including major pathlength rewrites, fastpath introduction, etc (still an undergraduate).

past postings (from 10 years ago) about my '68 Atlantic City share presentation:
https://www.garlic.com/~lynn/94.html#18 CP/67 and OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 and OS MFT14

photo from the VM/370 30th b'day party at SHARE 99 (in sanfran)
https://www.garlic.com/~lynn/LynnWheeler023.jpg


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Do we really need all three of these newsgroups?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Do we really need all three of these newsgroups?
Newsgroups: linux.redhat,comp.os.linux.redhat,alt.os.linux.redhat
Date: Tue, 23 Mar 2004 12:38:49 -0700
Brian Chase writes:
I find this so scattered, I just pay more attention to the Fedora mailing list more than newsgroups. Shame, cause just one newsgroup for redhat would be the perfect number and I'd be sure and check it.

As it is now, I sometimes check only one, or two, but I usually overlook at least one when looking at my mail. What a pain.

Who can fix this, two of them need to go...


i find gnus virtual newsgroups useful for this ... i have a virtual redhat newsgroup which collects 13 different redhat-related newsgroups, eliminates duplicates, and presents the postings as if they were a single newsgroup. it handles all the admin/bookkeeping behind the scenes.

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

real multi-tasking, multi-programming

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: real multi-tasking, multi-programming
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 23 Mar 2004 15:56:17 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
Important? Yes. Defining? No.

Was there multiprogramming on, e.g., the 2030, 2040, 2050, 3145, 3155? Lots of processors implemented I/O channels with cycle stealing. I know of no respected authority in CS who would claim that there was no multiprogramming on those processors.


and the 3158. the 3158 microcode basically time-shared between the 370 instruction implementation and the channel implementation.

moving to 303x ... they took the 3158 processor engine, removed the 370 instruction implementation, just leaving the channel implementation and called it a channel director.

3031 was a 3158 processor engine (with just the 370 microcode) dedicated to 370 instruction implementation and a second 3158 processor engine (with just the channel microcode) called a channel director.

3032 was a 3168 processing engine reconfigured to use the 3158 processor engine as a channel director.

3033 was a new technology configured to use the 3158 processor engine as a channel director.

the 3158 channel microcode supported six channels ... as did the channel director. with 303x, you got up to 12 channels by having two (3158 processor engine) channel directors. you got 16 channels by having three (3158 processor engine) channel directors.

in effect, the standard 370/158 processor was doing (real?) microcode multi-tasking between the 370 instruction implementation and the channel implementation ... which then sort of makes even a basic 3031 a multiprocessor ... since a minimum 3031 configuration involved two 3158 processor engines: one dedicated to 370 instruction microcode and one dedicated to channel microcode.

another extreme was the 370 115/125. The basic architecture was a shared memory bus with positions for up to nine processors. a hardware configuration would tend to have four or more microprocessors ... with different (otherwise identical) processor engines having different microcode loaded: 370 instruction set, disk controller, telecom controller, etc. The difference between a 125 and a 115 was that in a 115, ALL microprocessor engines were identical, while in a 125, the processor engine running the 370 microcode was about 50 percent faster than the other processor engines.

i worked on a 370/125 multiprocessor project (which never shipped to customers) that would load up to five 125 processor engines (running the 370 microcode) onto the memory bus ... for what appeared to be a 5-way 370 multiprocessing system.

now, i did do a twist for this system: the dispatcher that actually dispatched different tasks on different real engines was implemented in the microcode. in that sense, it was a little like the intel 432 (that came later). the 370 kernel code could add/remove tasks from the dispatch list ... but the actual code that dispatched tasks on processors was all done in the microcode.
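
a rough sketch in C of that division of labor (obviously not the actual 125 microcode ... pthreads stand in for the processor engines, and a mutex for whatever interlock the shared memory bus provided):

#include <pthread.h>
#include <stddef.h>

/* shared dispatch list: the "370 kernel" code only adds/removes tasks;
   the loop below stands in for the microcode on each engine that
   actually picked a task off the list and ran it */
struct task { struct task *next; void (*run)(void *); void *arg; };

static struct task *runq;
static pthread_mutex_t runq_lock = PTHREAD_MUTEX_INITIALIZER;

void kernel_add_task(struct task *t)      /* called from kernel code */
{
    pthread_mutex_lock(&runq_lock);
    t->next = runq;
    runq = t;
    pthread_mutex_unlock(&runq_lock);
}

void *engine_dispatch(void *unused)       /* one loop per 370 engine */
{
    (void)unused;
    for (;;) {                            /* real microcode would idle/wait */
        pthread_mutex_lock(&runq_lock);
        struct task *t = runq;
        if (t) runq = t->next;
        pthread_mutex_unlock(&runq_lock);
        if (t) t->run(t->arg);            /* "dispatch" on this engine */
    }
}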

misc. old 5-way posts:
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#10 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000e.html#7 Ridiculous
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001j.html#18 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#19 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#48 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2002i.html#80 HONE
https://www.garlic.com/~lynn/2002i.html#82 HONE
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2003.html#4 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#16 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

JSX 328x printing (portrait)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: JSX 328x printing (portrait)
Newsgroups: bit.listserv.ibm-main
Date: Tue, 23 Mar 2004 16:24:06 -0700
R.Skorupka@ibm-main.bremultibank.com.pl (R.S.) writes:
Laser printers are unrelated to TCP/IP - agreed. Laser printers are older than TCP/IP - IMHO not true. Copiers are older (in Poland usually we call them Xerocopier - guess why), but printers are probably a little bit younger.

the big, fast 3800 is probably from about the same time as TCP on the arpanet ... maybe a little earlier. the arpanet wasn't internetworking (aka IP). It had IMPs (somewhat like 3705 boxes) that managed all the networking gorp, talking to other imps and connecting to hosts. Work on defining and developing IP was done during the 70s, but the big networking switch-over to IP was 1/1/83.

there were laser copiers during the 60s & 70s ... ibm had the copier3 in the 70s. there was a project that took effectively a copier3 and produced a computer-attached laser printer called the 6670 (which predated the 1/1/83 switch-over).

somewhat from its copier heritage, the 6670 offered an advantage over the 3800, it could duplex (print on both sides of the same paper).

misc. collected networking posts:
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The SOB that helped IT jobs move to India is dead!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The SOB that helped IT jobs move to India is dead!
Newsgroups: alt.folklore.computers
Date: Tue, 23 Mar 2004 17:08:56 -0700
Anne & Lynn Wheeler writes:
a related thread ran recently in comp.arch regarding social security benefits not being fully funded (and future generations will have to make up the difference):
https://www.garlic.com/~lynn/2004b.html#9 A hundred subjects: 64-bit OS2/eCs, Innoteck Products
https://www.garlic.com/~lynn/2004b.html#21 A hundred subjects: 64-bit OS2/eCs, Innoteck Products


news blurb today on latest social security figures
http://interestalert.com/brand/siteia.shtml?Story=st/sn/03230002aaa010ce.upi&Sys=rmmiller&Fid=NATIONAL&Type=News&Filter=National%20News

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

"360 revolution" at computer history museuam (x-post)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: "360 revolution" at computer history museuam (x-post)
Newsgroups: bit.listserv.ibm-main,bit.listserv.vmesa-l
Date: Wed, 24 Mar 2004 08:18:21 -0700
From: "Computer History Museum" <event@computerhistory.org>
Dear Computer History Fans,

YOU ARE INVITED TO TWO FASCINATING MILESTONE EVENTS...

"360 REVOLUTION"

Computer pioneers and National Medal of Technology awardees Erich Bloch, Fred Brooks, Jr. and Bob Evans, current IBM technology chief Nick Donofrio and the Computer History Museum cordially invite you to a conversation about the extraordinary System/360 project. Heralded by Fortune Magazine in 1965 as the "$5,000,000,000 Gamble," the System/360, launched on April 7, 1964, created a compatible computer family that helped revolutionize the computer industry.

This event, hosted by the Computer History Museum and sponsored by IBM, provides a behind-the-scenes view of the tough decisions made by some of the people who made them. Learn how the System/360 helped transform the government, science and commercial landscape.

The event will be held on April 7, 2004 at the Computer History Museum, 1401 N. Shoreline Boulevard, Mountain View, California. A 6:00 PM member reception will be followed by the program at 7:00 PM.

Admission is free but advance reservations are required. Please RSVP by March 31, 2004. For more information and to register on-line, please go to
http://www.computerhistory.org/ibmS360_04072004
or call 650 810 1019.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 360 memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 360 memory
Newsgroups: alt.folklore.computers
Date: Wed, 24 Mar 2004 14:05:43 -0700
Dave Daniels writes:
Three years ago 'hack' (hack at watson decimal ibm decimal com) posted a message on comp.lang.asm370 about an IBM operating system he used called 'EM/370'. I don't think this was an April Fools' Day thing but it sounded rather interesting to me. Google for 'EM/370' for Hack's description of it. Has anybody else heard of EM/370?

chris stephenson built (much of?) em/yms ... sort of a take-off on vm/cms. it had a lot of very advanced features ... and chris's work has shown up in CMS at various times. The CMS EDF filesystem, introduced in the product in the mid-70s, was from chris.

they built a deskside 370 (37t) that was/is used to run em/yms as a personal computer. I believe most of the em/yms group are gone ... but michel (hack) still runs it. I have a copy of Chris' farewell note from 7Dec1997 ... and Chris' memorial announcement from May of 1999.

minor reference from long ago and far away
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000

abstracts for the talks mentioned in the above conference agenda (2/26-2/28, 1980):
Walt Daniels - Individual Computing

"Individual Computing" is a new project at IBM Research, the central theme of which is the pursuit of advanced functions for a single user operating system based on EM/YMS, which provides a productive environment for writing, testing and running programs. The system runs in a virtual machine under CP, or could run on a small stand-alone (individual) computer, closely coupled with a highly interactive display, and connected either closely or loosely to service machines which provide access to shared files and global networks. An overview and future plans for displays and shared files will be given. 30 mins.

C.J. Stephenson - AN EXTENDED MACHINE, AND A NEW ONE-USER OPERATING SYSTEM.

Operating systems have traditionally been constructed with their basic services (such as file I/O, device support and interrupt handling) implemented by programs which reside in the same address space as the higher level programs (such as compilers, interpreters, editors and other application programs). The Extended Machine (EM/370 for short) is an experimental system which embeds some of these services under what appears to be the machine interface itself. One of the aims is to facilitate the implementation of new and special-purpose operating systems, which are relieved of the burden of supporting the hardware from scratch. YMS (Yorktown Monitor System) is a simple one-user operating system which runs on EM/370 and supports a user interface which is comparable to that of CMS, though somewhat more general. An outline of these systems will be given, with digressions into some of the more novel features. 30 minutes.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

REXX still going strong after 25 years

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: REXX still going strong after 25 years
Newsgroups: alt.folklore.computers
Date: Wed, 24 Mar 2004 21:40:37 -0700

http://www.rexxla.org/
http://www.rexxla.org/Symposium/2004/announcement.html
and from slashdot
http://developers.slashdot.org/developers/04/03/24/0034224.shtml?tid=126&tid=136&tid=156&tid=187

and the 2/26/80 conference referenced in a posting earlier today
https://www.garlic.com/~lynn/2004d.html#16 IBM 360 memory thread

the conference also had a presentation on rex(x) by Mike

of all the weird things to trip across, i have an

H-assembly listing of DMSRVA ASSEMBLE that has a munged date but is probably sometime in 1983

and

H-assembly listing done 13may83 of DMSREX ASSEMBLE dated 15apr83

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

The SOB that helped IT jobs move to India is dead!

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The SOB that helped IT jobs move to India is dead!
Newsgroups: alt.folklore.computers
Date: Wed, 24 Mar 2004 23:08:32 -0700
Anne & Lynn Wheeler writes:
the other thing from the us census article in the early '90s ... besides half of the 18 year olds being functionally illiterate was that (at the time) over half of the (us) manufacturing jobs were in some way subsidized ... aka the claim that over half of the employees in manufacturing jobs were receiving total benefits (salary, retirement, insurance, medical, etc) in excess of the value of the work they performed (the difference in the value they provided and the benefits they received had to be made up in some way).

outsourcing report blames schools & related articles today
http://www.wired.com/news/business/0,1367,62780,00.html?tw=wn_tophead_3
http://www.aeanet.org/Publications/id_OffshoreOutsourcingMain.asp
http://www.mercurynews.com/mld/mercurynews/news/breaking_news/15207887.htm
http://www.cra.org/

previous postings in thread
https://www.garlic.com/~lynn/2004b.html#2 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#16 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#23 Health care and lies
https://www.garlic.com/~lynn/2004b.html#24 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#29 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#32 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#37 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#38 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#42 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#43 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#50 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004b.html#52 The SOB that helped IT jobs move to India is dead!
https://www.garlic.com/~lynn/2004c.html#18 IT jobs move to India
https://www.garlic.com/~lynn/2004c.html#19 IT jobs move to India
https://www.garlic.com/~lynn/2004d.html#14 The SOB that helped IT jobs move to India is dead!

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

REXX still going strong after 25 years

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REXX still going strong after 25 years
Newsgroups: alt.folklore.computers
Date: Sat, 27 Mar 2004 09:02:38 -0700
"David Wade" writes:
If its "H" is it part of VM or an PRPQ add one or what ?

original post:
https://www.garlic.com/~lynn/2004d.html#17 REXX still going strong after 25 years

H-assembler was commonly available in the 70s & 80s on (ibm) mainframe platforms. the original commonly available 360 assembler was the f-assembler. then came h-assembler, with a lot of additional features and better performance. then there were the slac-mods to h-assembler ... posts on h-assembler & the slac-mods:
https://www.garlic.com/~lynn/2003n.html#34 Macros and base register question
https://www.garlic.com/~lynn/2004c.html#13 Yakaota
other refs to the slac mods and assembler H
http://www.xephon.com/arcinframe.php/m090a06
this makes reference to H, XF, HLASM, slac-mods, etc ... all in their MVS incarnations (and their proclib procedures):
http://docweb.nerdc.ufl.edu/docweb.nsf/0/e754c7d0eddc8e5285256bf900674d74?OpenDocument

from comment section for DMSRVA:
Handle all interfaces to the current generation of variables.

... in this time frame, REXX was still internal-use only, and customers had hardly even heard of it ... and it was still called REX. The name change to REXX didn't occur until it was released as a product to customers (if i remember correctly, there was an issue with somebody already having some rights to the name REX).

I had done a SHARE presentation on DUMPRX ... sort of stressing that it had all been done in REX (except for about 100 assembler instructions) and therefore got around the OCO issue, had ten times the function of the (assembler-based) product, was ten times faster than the (assembler-based) product, and took about half my time over 3 months to develop ... and that therefore others should be able to do something similar.

misc. past posts on dumprx (and dump readers, hung/zombie processes in general):
https://www.garlic.com/~lynn/submain.html#dumprx

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

REXX still going strong after 25 years

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REXX still going strong after 25 years
Newsgroups: alt.folklore.computers
Date: Sat, 27 Mar 2004 16:22:34 -0700
arargh403NOSPAM writes:
I thought that 'f' came with the O/S, and 'H' was a program product that cost extra.

And then there was 'g', a non-IBM version, of which my tape may still be readable after 30 years.


a table i found at:
http://www-306.ibm.com/software/awdtools/hlasm/history.html

Date              Product                               Action
31 March 1983     Assembler H V2                        Released for GA
26 June 1992      High Level Assembler V1R1             Released for GA
9 August 1994     Assembler H V2                        Withdrawn
24 March 1995     High Level Assembler V1R2             Released for GA
15 December 1995  High Level Assembler Toolkit Feature  Released for GA
31 October 1995   Assembler H V2                        End of service
31 December 1995  High Level Assembler V1R1             End of service
20 February 1996  S/390 Software Version                Version promotion
6 August 1996     S/390 Software Version                Version promotion
4 September 1996  029 Card Punch                        Withdrawn
18 February 1997  S/390 Software Version                Version promotion
1 August 1997     HLASM Toolkit Feature Upgrade 1       Released for GA
15 October 1997   HLASM Toolkit Feature Upgrade 2       Released for GA
25 September 1998 High Level Assembler and Toolkit      Released for GA
                  Feature V1R3
30 June 2000      High Level Assembler V1R2             Marketing withdrawn
                                                        (VSE)
29 September 2000 High Level Assembler and Toolkit      Released for GA
                  Feature V1R4
29 September 2001 HLASM V1R4 ASMIDF/MVS 64-bit support  Released for GA
31 December 2001  High Level Assembler V1R2             Service withdrawn
6 August 2002     High Level Assembler and Toolkit      Announcement of
                  Feature V1R3                          service withdrawal
                                                        for 6 Oct 2003

"balstyle memo" from vmshare archives .... started 1/16/84 by Mike to discuss assembler coding style used in rexx source
http://vm.marist.edu/~vmshare/browse.cgi?fn=BALSTYLE&ft=MEMO
note above includes some people making references to TSS/360 assembler

"rexx89 memo" from vmshare archives ... posts prior to 1989, originally created 1/24/86
http://vm.marist.edu/~vmshare/browse.cgi?fn=REXX89&ft=MEMO
"rexx memo" from vmshare archives
http://vm.marist.edu/~vmshare/browse.cgi?fn=REXX&ft=MEMO
"rexx90 prob" from vmshare archives ... posts prior to 1990, originally created 3/11/84:
http://vm.marist.edu/~vmshare/browse.cgi?fn=REXX90&ft=PROB
"rexx prob" from vmshare archives (first post 5/31/91)
http://vm.marist.edu/~vmshare/browse.cgi?fn=REXX&ft=PROB

and for something completely different, "wylbur memo" from vmshare archives:
http://vm.marist.edu/~vmshare/browse.cgi?fn=WYLBUR&ft=MEMO

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

REXX still going strong after 25 years

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REXX still going strong after 25 years
Newsgroups: alt.folklore.computers
Date: Sun, 28 Mar 2004 08:48:11 -0700
Brian Inglis writes:
What's the difference between the High Level Assembler and Toolkit and Assemblers F and H?

some of the diffs seem to be picking up stuff from the slac-mods ... but I think most of that actually happened for XF.

almost all the detailed descriptions are pdf files. the previous history URL has a pointer to a feature overview:
http://www-306.ibm.com/software/awdtools/hlasm/
summary:
http://www-306.ibm.com/software/awdtools/hlasm/more.html
from above:


High Level Assembler provides:

• Extensions to the basic assembler language.
• Extensions to the macro and conditional assembly language, including
  external function calls and built-in functions.
• Enhancements to the assembly listing, including a new macro and copy
  code member cross reference section, and a new section that lists
  all the unreferenced symbols defined in CSECTs.
• New assembler options, such as:
  o a new associated data file, the ADATA file, containing both
    language-dependent and language-independent records, that can be
    used by debugging and other tools;
  o a DOS operation code table to assist in migration from DOS/VSE
    assembler;
  o the use of 31-bit addressing for most working storage requirements;
  o a generalized object format data set; and
  o internal performance enhancements and diagnostic capabilities.

High Level Assembler generates object programs from assembler language
programs that use the following machine instructions:

• System/370
• System/370 Extended Architecture (370-XA)
• Enterprise Systems Architecture/370 (ESA/370)
• Enterprise Systems Architecture/390 (ESA/390®).

some more features:
http://www-306.ibm.com/software/awdtools/hlasm/about.html

more details are in a share presentation:
http://www.share.org/proceedings/sh98/data/S8165B.PDF

the program understanding tool from the HLASM toolkit:
http://www-1.ibm.com/servers/eserver/zseries/os/vse/pdf/orlando2000/E29.pdf

which sounds a little bit like a PLI program that I wrote in the early 70s to analyze/understand 370 assembler listings: extracting code flow and register use/set, and building looping and if/then/else/when/etc logic structures. There was some ambiguity in analyzing the listing file since the address fields didn't identify the domain space the address existed in. That was one difference between F/X/XF assembler listings and tss/360/370 listings, where every displacement/address field was prefixed by its csect/dsect index (aka there was less ambiguity analyzing tss/370 listings)
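for flavor, a small sketch of that kind of listing analysis ... nothing like the original PLI program; the statement shape and the tiny opcode table are assumptions (RR-form instructions only):

import re
from collections import defaultdict

# which operand register gets set and which get used, for a few RR-form
# instructions (OP R1,R2); e.g. LR loads R2 into R1, AR adds R2 into R1,
# BALR puts the return address in R1 and branches via R2
RR_SETS_USES = {
    "LR":   ([1], [2]),
    "AR":   ([1], [1, 2]),
    "SR":   ([1], [1, 2]),
    "BALR": ([1], [2]),
}

def register_usage(listing_lines):
    # map each general register to the statement numbers that set/use it
    sets, uses = defaultdict(list), defaultdict(list)
    stmt = re.compile(r"^\s*(?:\w+\s+)?(LR|AR|SR|BALR)\s+(\d+),(\d+)")
    for num, line in enumerate(listing_lines, 1):
        m = stmt.match(line)
        if not m:
            continue
        regs = {1: int(m.group(2)), 2: int(m.group(3))}
        set_ops, use_ops = RR_SETS_USES[m.group(1)]
        for i in set_ops:
            sets[regs[i]].append(num)
        for i in use_ops:
            uses[regs[i]].append(num)
    return dict(sets), dict(uses)

sets, uses = register_usage(["         LR    12,15",
                             "LOOP     AR    3,4",
                             "         BALR  14,15"])
print(sets)   # {12: [1], 3: [2], 14: [3]}
print(uses)   # {15: [1, 3], 3: [2], 4: [2]}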

more detailed posting:
https://www.garlic.com/~lynn/94.html#12 360 "OS" & "TSS" assemblers
https://www.garlic.com/~lynn/2000d.html#36 Assembly language formatting on IBM systems

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

System/360 40th Anniversary

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40th Anniversary
Newsgroups: alt.folklore.computers
Date: Sun, 28 Mar 2004 20:58:45 -0700
hancock4@bbs.cpcn.com (Jeff nor Lisa) writes:
The gamble by IBM resulted in a much more efficient computer world. It was a big improvement in cost/performance yet preserved older technologies (still used 30-40 years later). I think it allowed larger corporations to expand functions that were computerized, including more online processing, and smaller corporations to afford to get in with a relatively cheap model 30.

in the 70s ... i remember somebody relating testimony at the gov. trial by somebody from one of the bunch ... possibly rca or univac. the testimony supposedly was that ALL of the major computer vendors had realized by the late 50s that the SINGLE most important item necessary to be successful in the computer market was a compatible line across all machines. the issues were 1) there was lots of corporate growth going on and customers were buying computers and then upgrading and 2) the amount of money customers were pouring into developing applications was more than they were pouring into the hardware. Most of the bunch attempted to address that single most important item ... but ibm, for one reason or another, was much more successful than the others. the issue wasn't that the other vendors didn't realize it and/or didn't try ... it was that ibm happened to pull it off better/first.

Amdahl left ibm to do a plug-compatible processor ... in part because of direction ibm was taking in the early 70s to do FS
https://www.garlic.com/~lynn/submain.html#futuresys
which was a radical, incompatible departure ... and FS was in large part a reaction to plug-compatible controllers ... something that i've gotten blamed for helping create with a project I worked on as an undergraduate:
https://www.garlic.com/~lynn/submain.html#360pcm

I was at an mit talk in the early 70s (possibly spring 71 or 72?) that Amdahl gave on forming his new company. some of the student audience asked about the business plan for getting funding. he made reference to customers having something like $100b already invested in ibm mainframe software (at that time, within 8 years of 360 introduction) and even if ibm totally walked away from 360/370 (somewhat veiled reference to the internal FS direction) ... there would be customers running that same software for at least the next 30 years (aka if ibm stopped making 370s totally, Amdahl would still be able to sell 370-compatible machines for at least the next 30 years, just based on the current existing software). Some of the audience also gave him a lot of heat about selling out to foreign interests (between outright investment and manufacturing arrangement).

similar past posts mentioning the trial testimony.
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/96.html#20 1401 series emulation still running?
https://www.garlic.com/~lynn/99.html#231 Why couldn't others compete against IBM?
https://www.garlic.com/~lynn/2001j.html#33 Big black helicopters
https://www.garlic.com/~lynn/2001j.html#38 Big black helicopters
https://www.garlic.com/~lynn/2001j.html#39 Big black helicopters
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2002c.html#0 Did Intel Bite Off More Than It Can Chew?
https://www.garlic.com/~lynn/2003.html#71 Card Columns
https://www.garlic.com/~lynn/2003o.html#43 Computer folklore - forecasting Sputnik's orbit with

past postings mentioning bunch:
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2003.html#71 Card Columns
https://www.garlic.com/~lynn/2003b.html#61 difference between itanium and alpha

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Xquery might have some things right

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Xquery might have some things right
Newsgroups: comp.databases.theory
Date: Sun, 28 Mar 2004 21:48:39 -0700
Christopher Browne writes:
The oral history I have heard from various people that have been watching SGML since its beginnings have indicated this.

SGML was a standardization of GML, done at the cambridge science center circa 1970 by "G", "M", and "L" (the choice of generalized markup language comes from the initials of the three people that worked on it). GML processing was added to the existing implementation of the CMS SCRIPT command (i.e. cambridge monitor system ... also done at the cambridge science center).

misc. past posts about (s)gml history:
https://www.garlic.com/~lynn/2000e.html#1 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2001i.html#1 History of Microsoft Word (and wordprocessing in general)
https://www.garlic.com/~lynn/2002h.html#17 disk write caching (was: ibm icecube -- return of
https://www.garlic.com/~lynn/2002o.html#4 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002o.html#54 XML, AI, Cyc, psych, and literature
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?

At least "L" went on to work on RDBMS BLOBs ... towards the end of the System/R (possibly transition to R-star?) .. there were half dozen or so of us that transferred out of cambridge to san jose during that time frame

some specifics about the gml history
https://web.archive.org/web/20230804173255/http://www.sgmlsource.com/history/
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/20230402212558/http://www.sgmlsource.com/history/jasis.htm
http://www.sgmlsource.com/history/G320-2094/G320-2094.htm

other science center postings:
https://www.garlic.com/~lynn/subtopic.html#545tech

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

who were the original fortran installations?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: who were the original fortran installations?
Newsgroups: alt.folklore.computers
Date: Sun, 28 Mar 2004 23:36:41 -0700
i've been asked about helping track down some of the original fortran material. i've asked some people that i knew at boeing in the '60s who used the original fortran (circa 1960) ... and who still have some stuff.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360 40th Anniversary

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40th Anniversary
Newsgroups: alt.folklore.computers
Date: Mon, 29 Mar 2004 10:03:50 -0700
Charles Richmond writes:
Maybe the announcement is "the 360 is *40* years old"...and Surprise!!! We are laying off another 10,000 and sending the jobs to India!!!

I believe IBM had peaked at something over 480k employees sometime in the 80s. I believe in the 92/93 time-frame they were down to a little over 200k employees (ibm reported a couple billion loss in 92). I have some vague recollection that the purchase of Lotus brought it back up to 217k. Later in the 90s, there were acquisitions of some number of consulting & service businesses, Sequent computers, Informix database, etc. They are around 320k-330k now. try:
http://www.sequent.com/
http://www.informix.com/

IBM's gross revenue 25 years or so ago ran something like 60/40, changing to 40/60 (US/non-US); although frequently influenced by dollar valuation (a low dollar results in a larger valuation of the world-trade business). Over the years, employees have been heavily US oriented even though much of the business is outside the US.

I believe in the 60s, it was divided into domestic and world trade. In the early 70s(?), world trade was split into EMEA (europe, middle east, africa) and AFE (americas and far east). In that time-frame, not only were lots of branch office and field people using HONE for their business ... so was hdqtrs (at least hdqtrs marketing people). As part of moving EMEA hdqtrs from the US to La Defense (Paris) in the early '70s, I hand-carried a HONE installation over. misc. HONE:
https://www.garlic.com/~lynn/subtopic.html#hone

global services, 2002, 150k employees, total 320k
http://news.com.com/2100-1001-927845.html

330k employees in 2004
http://www.forbes.com/technology/newswire/2004/01/17/rtr1215724.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

REXX still going strong after 25 years

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REXX still going strong after 25 years
Newsgroups: alt.folklore.computers
Date: Mon, 29 Mar 2004 10:32:53 -0700
Dave Daniels writes:
About twenty years ago the VM systems programmer where I worked gave me a printed listing of a full assembly of Rexx. It was about a box of paper. I do not have it now.

need a 117-character-wide display ... from some place:


DDDDDDDDD      MM        MM    SSSSSSSSSS    RRRRRRRRRRR    EEEEEEEEEEEE   XX        XX
DDDDDDDDDD     MMM      MMM   SSSSSSSSSSSS   RRRRRRRRRRRR   EEEEEEEEEEEE   XX        XX
DD       DD    MMMM    MMMM   SS        SS   RR        RR   EE              XX      XX
DD        DD   MM MM  MM MM   SS             RR        RR   EE               XX    XX
DD        DD   MM  MMMM  MM   SSS            RR        RR   EE                XX  XX
DD        DD   MM   MM   MM    SSSSSSSSS     RRRRRRRRRRRR   EEEEEEEE           XXXX
DD        DD   MM        MM     SSSSSSSSS    RRRRRRRRRRR    EEEEEEEE           XXXX
DD        DD   MM        MM            SSS   RR    RR       EE                XX  XX
DD        DD   MM        MM             SS   RR     RR      EE               XX    XX
DD       DD    MM        MM   SS        SS   RR      RR     EE              XX      XX
DDDDDDDDDD     MM        MM   SSSSSSSSSSSS   RR       RR    EEEEEEEEEEEE   XX        XX
DDDDDDDDD      MM        MM    SSSSSSSSSS    RR        RR   EEEEEEEEEEEE   XX        XX


[second block-letter banner page, flattened in archiving: "ASSEMBLY"]
DMSREX assembled from REXPAK (2099 records, 04/15/83 17:19:55) Printed by Userid MFC, on 13 May 1983 at 16:45:40
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

who were the original fortran installations?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: who were the original fortran installations?
Newsgroups: alt.folklore.computers
Date: Mon, 29 Mar 2004 13:21:44 -0700
"Jim Mehl" writes:
What kind of material are you looking for? I think I still have some IBM 704 Fortran manuals from about 1961.

one of the people working with backus dredging up stuff contacted me ... and one of the people I've contacted from boeing also mentioned having some 704 stuff from 1960 (as well as some univac non-fortran stuff from the same era). they are looking for stuff from original installations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 and You Bet Your Company

From: Anne & Lynn Wheeler <lynn@garlic.com>
Newsgroups: bit.listserv.vmesa-l
Subject: 360 and You Bet Your Company
Date: Sun, 28 Mar 2004 23:24:27 -0700
At 5:39 3/27/2004, wrote:
During 1983 I had the privilege of listening to an IBM senior V.P. who was instrumental in the 360's development deliver a lecture at the Systems Research Institute in Manhattan. The V.P. literally referred to the project as, "You bet your company."

slightly related thread in a.f.c
https://www.garlic.com/~lynn/2004d.html#22 System/360 40th Anniversary

supposedly FS was another such "you bet your company"
https://www.garlic.com/~lynn/submain.html#futuresys
which was canceled and not even announced.

old FS post with some specific references:
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS = IBM Future System

there was some reference that the money spent on unannounced, canceled FS would have bankrupted any other company.

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/

cheaper low quality drives

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: cheaper low quality drives
Newsgroups: alt.folklore.computers
Date: Mon, 29 Mar 2004 14:46:06 -0700
krw writes:
Redundant Arrays of Independent Disks (RAID) A way of storing the same data in different places on multiple hard disks. Storing data on multiple disks can improve performance by balancing input and output operations. Since using multiple disks increases the mean time between failure, storing data redundantly also increases fault-tolerance.

slightly related posting
https://www.garlic.com/~lynn/2002e.html#4 Mainframers: take back the light

it makes reference to a website with various disk history
http://www.papyrusweb.ch/Syspinner/IBMHistoryOfFirsts.asp
from above (note the patent predates the work sponsored at UCB by nearly ten years):
1978

First patent for RAID (Redundant Arrays of Independent Disks) technology. IBM subsequently co-sponsored the research by the University of California at Berkeley that led to the initial definition of RAID levels in 1987. The first two-speed tape unit, raising streaming speeds to 160 kb/second.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

cheaper low quality drives

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: cheaper low quality drives
Newsgroups: alt.folklore.computers
Date: Mon, 29 Mar 2004 15:18:51 -0700
the 88 sigmod paper
http://portal.acm.org/citation.cfm?id=971701.50214&coll=GUIDE&dl=ACM

the case for redundant arrays of inexpensive disks (RAID) ... as opposed to independent disks

the abstract ....
Increasing performance of CPUs and memories will be squandered if not matched by a similar performance increase in I/O. While the capacity of Single Large Expensive Disks (SLED) has grown rapidly, the performance improvement of SLED has been modest. Redundant Arrays of Inexpensive Disks (RAID), based on the magnetic disk technology developed for personal computers, offers an attractive alternative to SLED, promising improvements of an order of magnitude in performance, reliability, power consumption, and scalability. This paper introduces five levels of RAIDs, giving their relative cost/performance, and compares RAID to an IBM 3380 and a Fujitsu Super Eagle.

... some of my postings about noticing that disk relative system performance was declining:
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002l.html#34 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004b.html#54 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

someone looking to donate IBM magazines and stuff

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: someone looking to donate IBM magazines and stuff
Newsgroups: alt.folklore.computers
Date: Mon, 29 Mar 2004 16:10:42 -0700
cstacy@news.dtpq.com (Christopher C. Stacy) writes:
I have some old books, mostly technical and financial texts, and also the past forty years of the IBM Systems Journal. I'm cleaning out, and will take them to our dump, but it pains me to throw them away without giving someone who might want them a chance. I've called a few libraries (Babson, Olin, local) about the IBM Systems Journal and have elicited no interest. Does anyone have suggestions as to what organization in the greater Boston area might have some interest? (I've also contacted a few organizations, such as Books for Africa, and have found no interest in old texts.)

ask him if he has any old LLMPS stuff or other things from lincoln labs

minor reference to LLMPS manual (co-author winett & belvin)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

System/360 40th Anniversary

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40th Anniversary
Newsgroups: alt.folklore.computers
Date: Mon, 29 Mar 2004 19:59:51 -0700
hancock4@bbs.cpcn.com (Jeff nor Lisa) writes:
Also, as mentioned in the Aspray Cambell book, IBM was superior in mechanical engineering. Today that isn't so important, but back when card readers and line printers were used, good performance was vital to keeping the rest of the machine going. The IBM 2540 reader and 1403 printer were superb. Why the others couldn't have a printer without wavy lines I don't know.

i found that, in general, ibm paid a lot of attention to manufacturing engineering, industrial engineering, yields, etc. (whether it was mechanical or other kinds, although some of it may reflect the mechanical manufacturing heritage). there were times when I saw ibm let the competition have a product ... in part because of manufacturing or yield issues; there were times when manufacturing technology was as important or more important than the product technology.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

someone looking to donate IBM magazines and stuff

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: someone looking to donate IBM magazines and stuff
Newsgroups: alt.folklore.computers
Date: Tue, 30 Mar 2004 13:31:30 -0700
note that Frank was at lincoln labs, which ran cp/67. the first cp/67 time-sharing service bureau
https://www.garlic.com/~lynn/submain.html#timeshare
was NCSS, which had some of the people from the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
and at least one or two people from Lincoln Labs.

The next CP/67 service bureau formed was IDC ... Arnow and Belvin from Lincoln Labs and at least Bob Seawright.

Bob Seawright was a customer at Union Carbide and Love Seawright was the IBM SE on the Union Carbide account. Union Carbide/IBM sent the couple on assignment to the Cambridge Science Center (working on CP/67). Bob did a hack on OS/360 PCP called Online/OS, effectively trying to create a CMS-like environment using a PCP base ... i.e. a stripped down PCP running in a virtual machine ... a conversational monitor interacting with the "operator's console" ... and a "saved image" of PCP set up after the kernel had completed most of the initialization. Bob joined IDC and Love stayed on with IBM and became a member of the VM/370 development group.

misc. extracts from melinda's history:
https://www.leeandmelindavarian.com/Melinda#VMHist
At about the same time that Lincoln decided to run CP-67, another influential customer, Union Carbide, made the same decision. In February, 1967, Union Carbide sent two of its system programmers, Bob Seawright and Bill Newell, to Cambridge to assist in the development of the system. They both subsequently made important contributions to CP. Union Carbide's IBM SE, Love Seawright, was sent to Cambridge at the same time to learn to support the system. Love tackled the job of documenting the system, figuring out how it worked by using it and reading the listings. As her temporary assignment kept being extended, she worked at documenting, testing, debugging, and giving demonstrations. Later, she would package Version 1 of CP-67 and then help to support it by teaching courses, answering the hotline, and editing the CP-67 Newsletter.

...
In mid-1965, I [Walt Doherty] was assigned to be T.J. Watson's man in TSS land at Mohansic. While there, I participated in a number of design meetings and met Lee [Varian], Ted Dolotta, Oliver Selfridge, Jack Arnow, Frank Belvin, and Joel Winett. The last four were at Lincoln Labs. Jack Arnow was Director of Computing there. Frank Belvin and Joel Winett worked for him. Oliver Selfridge was in the Psychology Department. Oliver suggested that I come work with them for a while on an editor project, called the Byte Stream Editor.... I went up to Lincoln for about a year.

...
Version 1 of CP-67 was released to eight installations in May, 1968, and became available as a TYPE III Program in June. Almost immediately after that, two ''spinoff'' companies were formed by former employees of Lincoln Lab, Union Carbide, and the IBM Cambridge Scientific Center, to provide commercial services based on CP/CMS. Dick Bayles, Mike Field, Hal Feinleib, and Bob Jay went to the company that became National CSS. Harit Nanavati, Bob Seawright, Jack Arnow, Frank Belvin, and Jim March went to IDC (Interactive Data Corporation). Although the loss of so many talented people was a blow, the CSC people felt that the success of the two new companies greatly increased the credibility of CP-67.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360 40th Anniversary

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40th Anniversary
Newsgroups: alt.folklore.computers
Date: Tue, 30 Mar 2004 16:36:24 -0700
J Ahlstrom writes:
Burroughs perpetuated the successors to the B5000/5500 and introduced the completely incompatible B2500/3500 et seq.

GE introduced/perpetuated their incompatible 200/400/600 series.

Some did get it: RCA immediately copied the (non-privileged) ISA of 360. Univac ADDED 360 knock-offs to their line and perpetuated only the 1108 et seq. NCR introduced a new family of machines replacing their previous ones. SDS introduced a new family of machines replacing their previous ones

I am not sure what Honeywell did before it bought GE (whose incompatible families, it perpetuated I believe.)

Not to mention the non-US manufacturers.


the other part of the testimony was that ibm hdqtrs/watson was almost unique in forcing the local plant & engineering people to toe the line and effectively sacrifice localized tactical optimizations (with whatever technology was being used by that specific line) for overall corporate strategic objectives.

the business people may have believed in the requirement for compatibility across the product line ... but for the engineering, plant, and product managers, it was quite radical to sacrifice tactical product advantages for strategic corporate objectives.

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

50 years of computer payroll

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 50 years of computer payroll.
Newsgroups: alt.folklore.computers
Date: Wed, 31 Mar 2004 09:41:28 -0700
bv@wjv.comREMOVE (Bill Vermillion) writes:
What does the zinc do, in regards to symptom and problems.

I know it's a normal mineral supplement in many foods.


i have some vague recollection of a study about artificial vitamin C showing more benefits than natural vitamin C ... and they eventually found that it was trace zinc that was used in the manufacturing process.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Omniscience Protocol Requirements

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Omniscience Protocol Requirements
Newsgroups: alt.folklore.computers
Date: Thu, 01 Apr 2004 00:19:54 -0700
the latest RFC in long tradition is available.

repeat of an old tale ... slightly related
https://www.garlic.com/~lynn/2001d.html#51 A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001d.html#52 A beautiful morning in AFM.
https://www.garlic.com/~lynn/2001d.html#53 April First
https://www.garlic.com/~lynn/2001d.html#62 A beautiful morning in AFM.

in any case, for a list of similar RFCs
https://www.garlic.com/~lynn/rfcietff.htm
click on Term (term->RFC#) and scroll down to the "April1" keyword. clicking on an RFC number brings up the summary in the lower frame; clicking on the ".txt=nnnn" field retrieves the actual RFC.

....................


Network Working Group                                         S. Bradner
Request for Comments: 3751                                    Harvard U.
Category: Informational                                     1 April 2004

Omniscience Protocol Requirements

   Copyright (C) The Internet Society (2004).  All Rights Reserved.

Abstract

There have been a number of legislative initiatives in the U.S. and
elsewhere over the past few years to use the Internet to actively
interfere with allegedly illegal activities of Internet users.  This
memo proposes a number of requirements for a new protocol, the
Omniscience Protocol, that could be used to enable such efforts.

1.  Introduction

In a June 17, 2003 U.S. Senate Judiciary Committee hearing, entitled
"The Dark Side of a Bright Idea: Could Personal and National Security
Risks Compromise the Potential of Peer-to-Peer File-Sharing
Networks?," U.S. Senator Orrin Hatch (R-Utah), the chair of the
committee, said he was interested in the ability to destroy the
computers of people who illegally download copyrighted material.  He
said this "may be the only way you can teach somebody about
copyrights."  "If we can find some way to do this without destroying
their machines, we'd be interested in hearing about that," Mr Hatch
was quoted as saying during a Senate hearing.  He went on to say "If
that's the only way, then I'm all for destroying their machines."

[Guardian]

Mr. Hatch was not the first U.S. elected official to propose something
along this line.  A year earlier, representatives, Howard Berman
(D-Calif.) and Howard Coble (R-N.C.), introduced a bill that would
have immunized groups such as the Motion Picture Association of
America (MPAA) and the Recording Industry Association of America
(RIAA) from all state and federal laws if they disable, block, or
otherwise impair a "publicly accessible peer-to-peer file-trading
network."

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ANNOUNCE: NIST Considers Schneier Public Key Algorithm

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANNOUNCE:  NIST Considers Schneier Public Key Algorithm
Newsgroups: sci.crypt
Date: Thu, 01 Apr 2004 08:09:10 -0700
Grumble writes:
I hate April 1st with all my heart.

not wanting to be outdone by venerable IETF tradition:
https://www.garlic.com/~lynn/2004d.html#36 Omniscience Protocol Requirements

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FC1 & FC2

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: FC1 & FC2
Newsgroups: linux.redhat
Date: Thu, 01 Apr 2004 14:50:46 -0700
I have a 128mbyte laptop and a 256mbyte dell precision 410 (two processor) for testing fedora. FC1 installed on both machines w/o any problems ... although there was a problem getting the SMP kernel to boot on the 410.

FC2 test2 installation also went w/o a hitch ... although it did complain a little during the laptop installation about only having 128mbyte.

There does seem to be a significant difference between FC1 (2.4 kernel) and FC2 (2.6 kernel) with respect to real memory usage (displayed by the system monitor). FC1 was constantly at 250mbytes in use and flowing over into swap. Doing the same stuff on FC2 tends to show anywhere from 160-200mbytes real memory in use. The feel of FC2 seems to be quicker. It is almost as if the FC2 kernel is running/operating with 50-60mbytes less real storage requirements.

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

System/360 40th Anniversary

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40th Anniversary
Newsgroups: alt.folklore.computers
Date: Fri, 02 Apr 2004 08:18:55 -0700
Lars Poulsen writes:
IBM bid several systems. - one of them was an MVT system with WYLBUR (strange, because the 360/65 at the technical university was very happy with a Waterloo inspired system that included WATFIV and WITS) - one was a TSS system (which would never have worked!) - they never proposed VM/CMS

in the '60s ... it was still CP67/CMS running on these special 360/67s ... one could make the claim that CP67 was "only" a type-III program ... but then so was HASP.

the other issue was that in many sections of IBM, VM was very non-strategic; doing things like branding copies (internally available within the corporation) of the CERN (& share report) comparison of TSO and CMS as "confidential, restricted" ... i.e. available on a need-to-know basis only.

however, IBM Denmark did get active with vm/cms later in the '70s ... and there was a conference or two at the university in Lyngby that i remember attending. Somebody that was very active over the years was John Hartmann (who I remember commenting that even other people from Denmark had problems pronouncing his name). minor past reference:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party

picture of john:
http://vm.marist.edu/~piper/party/jph-08.html
also (with various pictures and stories from Denmark thrown in):
http://vm.marist.edu/~piper/party/jph-01.html

and from above:
Preben Thomsen

In Denmark the VM story began in the summer of 1973. Two System Engineers were working on a demonstration of this 'strange new Operating system' they called VM/370. I was a system programmer busy writing modifications to HASP (that's what they call JES2 nowadays). My boss saw some possibilities in VM, so he asked me to join the project. Eventually we got some machine time every afternoon. When I was playing in the terminal room this young guy began circling around me. I knew him. He began at IBM as a rather irritating student. Irritating because he was always right. After university he was hired - and he began asking questions about VM. Looking back, it is hard to believe that I could teach him anything.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RFC-2898 Appendix B

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RFC-2898 Appendix B
Newsgroups: sci.crypt
Date: Fri, 02 Apr 2004 10:02:05 -0700
(Kev) writes:
Some may consider this heresy, but I don't think writing down passwords is such a bad thing. You can write down a much stronger password than you can memorise. And when you write it down, the piece of paper effectively becomes an access token. So long as you keep it well hidden, and change the password regularly, I think you end up with better security than a weaker password committed to memory that you rarely change.

password-based (shared-secret) infrastructure, from a purely theoretical myopic standpoint, isn't in itself necessarily bad ... the issue is the requirement for a unique shared-secret for each distinct security domain ... and what happens when a person becomes involved in scores of distinct security domains ... each requiring its own, unique, provably secure shared-secret.

As long as the number of distinct electronic, online environments that a person had to deal with was limited to a very few, shared-secrets weren't horribly difficult. The problem is the proliferation of electronic, online environments ... such that a person is dealing with scores of different environments, all requiring their own unique authentication shared-secret.

So, I have a hundred different pieces of paper, each well hidden, and each needing to be changed every month.
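the RFC in the subject line is PKCS#5 ... for illustration only, a minimal sketch of its PBKDF2 (via python's standard hashlib) deriving a distinct secret per security domain from a single memorized password; the domain names are hypothetical, and folding the domain into the password input is this sketch's simplification, not the RFC's salting scheme:

import hashlib, os

def per_domain_secret(password, domain, salt, iterations=100000):
    # RFC 2898 PBKDF2; the domain goes into the password input here
    # only to keep the sketch short
    return hashlib.pbkdf2_hmac("sha256",
                               (password + ":" + domain).encode(),
                               salt, iterations)

salt = os.urandom(16)    # stored per domain; need not be secret
k1 = per_domain_secret("one memorized password", "bank.example", salt)
k2 = per_domain_secret("one memorized password", "mail.example", salt)
assert k1 != k2          # each domain sees a different derived secret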

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

REXX still going strong after 25 years

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REXX still going strong after 25 years
Newsgroups: alt.folklore.computers
Date: Fri, 02 Apr 2004 14:20:35 -0700
Charles Richmond writes:
I always thought that Assembler G and Assembler H were from IBM directly...and ASSIST was from Waterloo...

fortran e, fortran g, fortran h

assembler e, assembler f, assembler h, etc

website with list of (old) products & product codes:
http://mywebpages.comcast.net/gsf/tools/product.codes.html
from above:


ASSEMBLER
360S-AS-036 S/360 OS ASSEMBLER (E)
360S-AS-037 S/360 OS ASSEMBLER (F)           360SAS037  IEU
5734-AS1    OS ASSEMBLER H                   5734AS100  IEV
5752-SC103  OS/VS ASSEMBLER (XF)             5741SC103  IFO,IFN
5668-962    ASSEMBLER H V2                   566896201  IEV
5696-234    HIGH-LEVEL ASSEMBLER             569623400  ASM

FORTRAN
360S-FO-092 S/360 OS FORTRAN IV (E)
360S-FO-520 S/360 OS FORTRAN IV (G)          ''         IEY,IHC
360S-FO-500 S/360 OS FORTRAN IV (H)          ''         IEK,IHC
5734-FO1    FORTRAN CODE AND GO COMPILER
5734-FO2    FORTRAN IV G1                               IGI
5734-FO3    FORTRAN IV H EXTENDED                       IFE
5799-AAW    FORTRAN IV H EXTENDED PLUS
5748-FO3    VS FORTRAN V1                               IFX,IFY
5668-806    VS FORTRAN V2 (COMP/LIB/DEBUG)   5668-806   ???,AFB
5688-087    VS FORTRAN V2 (COMP/LIB)                    ???,AFB
5796-PKR    Ext. Exponent Range for FORTRAN  5796-PKR

old RFC mentioning assembler g
http://www.faqs.org/rfcs/rfc90.html

XREF40 product that converts each translator signature into a 2-character code:
http://gsf-soft.com/Products/XREF40.html
NOTE: renamed LOADXREF:
http://gsf-soft.com/Products/LOADXREF.shtml
from above (even a reference to the rexx compiler):


Most compilers include their own "compiler signature", or "Translator
ID", in the object code they generate. These signatures
(e.g. 5740CB100 0204) are stored by the linkage-editor or binder into
the IDR records of the load-module or program object.

XREF40 converts each translator signature to a 2-character code,
referred to as the "abbreviated translator code". As not all compilers
or assemblers provide a signature, XREF40 can still recognize certain
translators using other criteria when no signature is present for a
given module (CSECT).

Code   Translator name

AE     S/360 OS ASSEMBLER (E)
AF     S/360 OS ASSEMBLER (F)
AG     WATERLOO ASSEMBLER (G)
AL     S/360 OS ALGOL (F)
A1     APL/360
A1     APL2 V1
A2     APL2 VERSION 2
BA     VS BASIC
C      C FOR SYSTEM/370 (MVS)
C      C/370 COMPILER AND LIBRARY V2
C      C/370 COMPILER V1 V2
C      SAA AD/CYCLE C/370 V1 V2
CA     OS FULL ANS COBOL V3
CA     OS FULL ANS COBOL V4
CA     S/360 OS FULL ANS COBOL V1 V2
CE     S/360 OS COBOL (E)
C1     VS COBOL FOR OS/VS (R2M2)
C1     VS COBOL FOR OS/VS (R2M3)
C2     VS COBOL II
C3     COBOL/370 and COBOL for MVS (5688-197)
C3     COBOL for OS/390 (5648-A25)
C3     Enterprise COBOL (5655-G53)
E+     EASYTRIEVE PLUS (EZPDRIVR)
F?     FORTRAN IV (H EXTENDED PLUS)
FC     OS FORTRAN CODE AND GO COMPILER
FE     S/360 OS SYSTEM FORTRAN IV (E)
FG     OS FORTRAN IV G1
FH     OS FORTRAN IV H EXTENDED
F2     VS FORTRAN V2 (COMP/LIB)
F2     VS FORTRAN V2 (COMP/LIB/DEBUG)
F3     VS FORTRAN R3
HL     HIGH-LEVEL ASSEMBLER
H1     ASSEMBLER H V1
H2     ASSEMBLER H V2
PA     VS PASCAL
PF     S/360 OS PL/1 (F)
PG     Visual Age PL/I
PK     OS PL/I CHECKOUT COMPILER
PM     PL/I FOR MVS AND VM
PV     PASCAL/VS
P1     OS PL/I OPTIMIZING COMPILER V1
P2     OS PL/I V2
P3     Enterprise PL/I
RG     RPG II
RG     S/360 OS RPG
RX     REXX/370
XF     OS/VS ASSEMBLER (XF)
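fwiw, a small sketch of the kind of lookup that implies ... the signature/product pairings are read off the two lists above, and the prefix matching is just an assumption about how such signatures line up with product numbers:

# IDR signature prefixes (product number with the dash dropped) mapped
# to the 2-character codes; pairings taken from the lists above
TRANSLATORS = {
    "5734AS1":   "H1",   # OS ASSEMBLER H -> ASSEMBLER H V1
    "566896":    "H2",   # ASSEMBLER H V2 (566896201)
    "5696234":   "HL",   # HIGH-LEVEL ASSEMBLER (569623400)
    "5741SC103": "XF",   # OS/VS ASSEMBLER (XF)
}

def translator_code(idr_signature):
    # map an IDR signature like "5734AS100 0204" to its 2-character code
    sig = idr_signature.replace("-", "")
    for prefix, code in TRANSLATORS.items():
        if sig.startswith(prefix):
            return code
    return "??"   # no signature match; XREF40 falls back to other criteria

print(translator_code("5734AS100 0204"))   # H1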

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

REXX still going strong after 25 years

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: REXX still going strong after 25 years
Newsgroups: alt.folklore.computers
Date: Fri, 02 Apr 2004 19:43:49 -0700
"NoSpam" writes:
I thought that ASSIST was written by John Mashey at Penn State.

wikipedia:
https://en.wikipedia.org/wiki/John_Mashey
https://en.wikipedia.org/wiki/ASSIST

and it is available here:
http://mstack.cs.niu.edu/pub/ASSIST/
from above:
http://mstack.cs.niu.edu/pub/ASSIST/ASREPLGD.HTML
http://mstack.cs.niu.edu/pub/ASSIST/ASUSERGD.HTML

and for something completely different ... one university's computer history that is fairly typical (which happens to include an extraneous reference to ASSIST):
http://www.wvnet.edu/divisions/systems/history/events.html

in the above history, they mention in the first entry (sept. 69) running CPS (conversational programming system). CPS was done by the Boston Programming Center ... which was on the 3rd floor of 545 tech sq ... other postings about 545 tech sq:
https://www.garlic.com/~lynn/subtopic.html#545tech

nat rochester, jean sammet and others were at the Boston programming center. In the growth of CP/67 and the transition to VM/370, the vm/370 group was spun off from the science center and eventually took over all of the third floor, as well as absorbing the boston programming center (and most of its people). As the vm/370 group continued to grow, it eventually had to move out to the old SBC bldg at Burlington Mall (SBC had earlier been spun off to CSC as part of some legal action). CPS included optional support for a special microcode option for the 360/50 that sped up a number of CPS operations.

The CMS COPYFILE command is notorious as having been implemented by a former boston programming individual.

and for true topic drift, random past sammet references:
https://www.garlic.com/~lynn/2000d.html#37 S/360 development burnout?
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002o.html#76 (old) list of (old) books
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003c.html#1 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2004.html#20 BASIC Language History?

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

[OT] Microsoft aggressive search plans revealed

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] Microsoft aggressive search plans revealed
Newsgroups: comp.arch
Date: Mon, 05 Apr 2004 09:45:46 -0600
Joe Seigh writes:
Well, you can always copy the entire data structure and swap it in with a single pointer update so you only have to execute one memory barrier. And for things like linked queues where items are only added to the front of the queue, you can get away with a single memory barrier after loading the only pointer that can be pointed to new items. But there are data structures where making a deep copy would be considered expensive. That's why we have linked lists as an alternative to using arrays for everything.

that was one of the examples that came up for compare and swap. most of the work at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
on compare and swap was charlie's (that's why the mnemonic is his initials, CAS ... the challenge was coming up with an instruction name that corresponded to his initials).

The problem/opportunity for the science center in trying to get it into the 370 hardware was the statement from the POK redbook (i.e. 370 architecture) owners that a multiprocessor-specific operation was not sufficient justification for a new instruction. there needed to be a use for the instruction in both multiprocessor and non-multiprocessor environments. that gave rise to the description of being able to coordinate multi-threaded processes (which might be user-level and/or enabled for interrupts) that needed coordination regardless of whether running in a multiprocessor or strictly uniprocessor (aka non-SMP) environment.
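the usage those programming notes describe is the by-now familiar fetch/compute/compare-and-swap retry loop; a minimal sketch in python (not from the original documents ... on 370 the compare/replace is a single atomic instruction, simulated here with a lock):

import threading

class Cell:
    # one word of storage with a compare-and-swap primitive; the lock
    # stands in for the atomicity the hardware instruction provides
    def __init__(self, value=0):
        self.value = value
        self._lock = threading.Lock()

    def cas(self, expected, new):
        # replace value with new only if it still equals expected;
        # returns False when some other thread got there first
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

def add_one(cell):
    # fetch, compute, attempt the swap, retry on interference ... works
    # whether the threads share one processor or run on several
    while True:
        old = cell.value
        if cell.cas(old, old + 1):
            return

counter = Cell()
threads = [threading.Thread(target=lambda: [add_one(counter) for _ in range(10000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter.value == 40000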

That became incorporated into the programming notes that were at the end of the instruction description in the principles of operation ... much of which has since been moved to the appendix. esa/390 table of contents:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CCONTENTS?SHELF=EZ2HW125&DN=SA22-7201-04&DT=19970613131822

current compare and swap:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.22?SHELF=EZ2HW125&DT=19970613131822
see programming notes section at end of above page.

new perform locked operation:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.22?SHELF=EZ2HW125&DT=19970613131822
long description included in the above page

multiprogramming and multiprocessing examples (some wording essentially carry-over from the original):
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.22?SHELF=EZ2HW125&DT=19970613131822

one of the things developed at the Uithoorn HONE center (and incorporated into the rest of the HONE centers around the world):
https://www.garlic.com/~lynn/subtopic.html#hone
was a CKD disk I/O sequence that emulated the compare-and-swap operation for disks in a large complex of loosely-coupled processors with shared access to the same disk farm.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

who were the original fortran installations?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: who were the original fortran installations?
Newsgroups: alt.folklore.computers
Date: Mon, 05 Apr 2004 12:32:18 -0600
hancock4@bbs.cpcn.com (Jeff nor Lisa) writes:
This has been discussed before, but allow me to repeat: The IBM 1130--to a programmer--seemed to be the slowest computer ever built, due to its very slow punch and printer. I think someone here said the printer ran at 80 lines per minute which seems about right. But I thought the printer was a reconditioned 407 which could do 150 lines a minute (as well as counting). The Fortran II didn't help.

there was a 1443 which had a bar with slugs on it, and the bar would move back and forth (something like a saw, a "flying type bar") ... which did 150 lines/min

the 407 had a reader and a printer ... plus a programmable plugboard which would do all sorts of accounting stuff. the 407 in the student keypunch area had a plugboard set up to just print (student) card decks (self-service). at some point, i took it upon myself to play with the plugboard late at night and/or on weekends ... before they started letting me have the whole datacenter on weekends.

we had a weird situation in the datacenter one day. the university business office ran this 360 cobol accounting program every day ... that ended by printing out emulated 407 switch settings. Apparently the program had gone thru an evolution from 407 plugboard to some sort of autocoder(?) that emulated the 407 plugboard, that was translated into 709 cobol, that was translated into 360 cobol ... and the end of the program still printed out emulated 407 switch settings.

One day the operator noticed that the program ended with some value that they never saw before. The whole batch stream (& machine) was put on hold (idle) ... while they tried to track somebody down in the administration that knew what happened. After about 90 minutes, they weren't able to find anybody ... so they made the decision to run the program again and see if the switch settings came out the same.

columbia reference to plugboards referencing a may03 a.f.c posting
http://www.columbia.edu/cu/computinghistory/plugboard.html

other plugboard and/or 407 refs:
http://www.columbia.edu/cu/computinghistory/407.html
http://www.columbia.nyc.ny.us/acis/history/cpc.html
http://www.columbia.nyc.ny.us/acis/history/
http://mywebpages.comcast.net/georgetrimble/A.html
http://www.columbia.edu/cu/computinghistory/tabulator.html

various 1443 references (some attached to 1620):
http://www.computerhistory.org/old/IBM1620/PAGES/ibm_documents.html
http://www.computerhistory.org/old/IBM1620/PAGES/photos_system.html
http://www-1.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP1440.html
http://www.informationheadquarters.com/History_of_computing/IBM_1620.shtml
http://www.angelfire.com/or/paulrogers/Ibm1620.html
http://www.columbia.edu/cu/computinghistory/1620.html
http://www.geocities.com/rpn01/360h.html

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

who were the original fortran installations?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: who were the original fortran installations?
Newsgroups: alt.folklore.computers
Date: Mon, 05 Apr 2004 12:39:03 -0600
Jonathan Griffitts writes:
Way back then, the rumor was that the 1130 was not originally intended as a product. Supposedly it was a technology experiment that then "escaped" to the market. I've never seen any evidence to confirm or refute that, but it was a strange machine.

the university had a 2250m1 ... which was a (360) channel-attached vector graphics device. there was this thing called a 2250m4 ... which was a 2250 using an 1130 as a "controller".

the science center had 2250m4 (1130 + 2250)
https://www.garlic.com/~lynn/subtopic.html#545tech

and somebody ported the (pdp1) spacewar game to it. in the early 70s my kids would play it on the weekends. the 2250 keyboard was split into left & right halves ... and the keys were then used to control various operations.

that 1130 was also somewhat the genesis of the internal network ... the person that came up with the concept of effectively gateways in every node for heterogeneous computing ... recent posting for minor thread drift:
https://www.garlic.com/~lynn/aadsm17.htm#17 PKI International Consortium

did the first "networking" code between the 1130 and cp/67.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ok, today's updates for FC2 test2 breaks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: ok, today's updates for FC2 test2 breaks
Newsgroups: linux.redhat.install,linux.redhat.rpm,linux.redhat.devel
Date: Tue, 06 Apr 2004 10:46:46 -0600
ok, today's updates for FC2 test2 break just about everything.

i noticed during the yum update that there were a bunch of selinux policy things being automagically applied this morning.

then just about everything stopped working. i log out ... to log in as root ... which possibly was a mistake ... and x-windows is inoperable. I finally get in as root ... but window managers don't work; i get a bare-bones xterm.

i reboot; it doesn't help. there are a huge number of error messages about one thing or another being broken. the window manager can't execute. the only thing i can remotely log in as is root ... and there is no window manager ... there is just a really bare-bones xterm.

any suggestions about how to quickly get back to an operational system? presumably i can run yum update again from the xterm window ... maybe somebody would kindly put out a new selinux policy update that is a little more kind??

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ok, today's updates for FC2 test2 breaks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ok, today's updates for FC2 test2 breaks
Newsgroups: linux.redhat.install,linux.redhat.rpm,linux.redhat.devel
Date: Tue, 06 Apr 2004 14:30:10 -0600
Alexander Dalloz writes:
You better subscribe to the fedora-test-list@redhat.com if running a test release.

yes, well, hum ... i really had some use for the 2.6 kernel on Fedora ... so it was either build a 2.6 kernel on FC1 or move to FC2. I put up FC2 on a couple victim machines for two weeks before migrating it to additional machines. everything breaking after applying maint. this morning was something of a shock.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ok, today's updates for FC2 test2 breaks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ok, today's updates for FC2 test2 breaks
Newsgroups: linux.redhat.install,linux.redhat.rpm,linux.redhat.devel
Date: Tue, 06 Apr 2004 15:49:40 -0600
and of course the workaround is to rename the selinux directory created in /etc/security by the service update this morning (and then reboot). for some reason it is ignoring both the "0" in the /selinux/enforce file and the enforcing=0 parameter on the boot command line.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Has the Redhat ntp time server gone off-line?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Has the Redhat ntp time server gone off-line?
Newsgroups: alt.linux.redhat,alt.os.linux.redhat
Date: Tue, 06 Apr 2004 16:39:57 -0600
"Dave" writes:
Technically, you should try to find an NTP stratum 2 time server close to you geographically. Supposedly the time is more accurate when you reduce the number of hops in the route.

minor point, but geography might not have anything at all to do with it; the number of hops tends to depend on what network you are connected to and what network the time server is connected to.

using traceroute ... I've seen short latencies to places effectively next door that actually involved hops from one coast to the other and back ... it was only a few hops because both nodes were fairly high up in the network infrastructure hierarchy. there have been other "close" locations that involved lots and lots of hops and long latencies ... because they involved nodes fairly low in the network infrastructure hierarchy ... using different ISPs from different major service providers.

the problem is latency ... all other things being equal, geography (& distance) might be expected to dominate. However, in numerous regions, the politics of ISP interconnectivity can dominate (and have relatively little to do with geography).
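fwiw, a minimal python sketch of measuring the thing that actually matters ... round-trip delay of a single (s)ntp query to each candidate; the hostnames are hypothetical, and real server selection (ala ntpd) also weighs offset and dispersion:

import socket, time

def ntp_rtt(host, timeout=2.0):
    # one SNTP query: 48-byte packet, first byte = LI 0 / version 3 /
    # mode 3 (client), udp port 123; return the round-trip seconds
    packet = b"\x1b" + 47 * b"\x00"
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        t0 = time.monotonic()
        s.sendto(packet, (host, 123))
        s.recvfrom(512)
        return time.monotonic() - t0
    finally:
        s.close()

candidates = ["ntp1.example.net", "ntp2.example.org"]
print(min(candidates, key=ntp_rtt))   # lowest delay, wherever it sits geographically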

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ok, today's updates for FC2 test2 breaks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ok, today's updates for FC2 test2 breaks
Newsgroups: linux.redhat.install,linux.redhat.rpm,linux.redhat.devel
Date: Tue, 06 Apr 2004 16:56:27 -0600
Alexander Dalloz writes:
Well, if you run daily automatic yum updates you obviously have no FC2 test 2 system any more, but you run the development branch:

1) you have no test system you could contribute to the community testing phase with valuable feedback

2) development is unstable, even bloody, sometimes even so bloody that things are happily broken


in general i've been quite pleased with FC2/test2 ... rh9 is coming to end of life and i had FC1 (not a test version) on some victim machines before switching to FC2/test2 (and the 2.6 kernel). I've had an outstanding bug report in for FC1 having to do with the SMP kernel being unable to boot on an older two-processor machine (although the single processor kernel boots fine). Somebody else did eventually find a work-around involving twiddling some BIOS settings (although RH9 didn't have the problem and it doesn't appear that FC2 has it either).

today's issue seems to be something else that is broken. this particular (yum) update created selinux policy files ... but who would have expected that a kernel that had been running fine would ignore enforce/0. no kernel change and there doesn't appear to be any code change ... just the creation of the policy files.

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

ok, today's updates for FC2 test2 breaks

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ok, today's updates for FC2 test2 breaks
Newsgroups: linux.redhat.install,linux.redhat.rpm,linux.redhat.devel
Date: Tue, 06 Apr 2004 22:25:26 -0600
Mark Bauer writes:
edit /etc/sysconfig/selinux and change the value to "disabled" and then reboot

there is no selinux in /etc/sysconfig

i did check that enforce in /selinux was zero ... and even rebooted with enforcing=0 on the boot line. i finally just renamed the new selinux directory (with the policies) that was put into /etc/security.

is there a description of what the format of /etc/sysconfig/selinux might look like ... or is just the word *disabled* sufficient?
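(presumably something shell-variable-style like the following ... just a guess at the shape, extrapolated from the quoted advice, and worth verifying against the release documentation:)

SELINUX=disabled        # or: enforcing | permissive
SELINUXTYPE=targeted    # which policy set to load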

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

Was it ever thus?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Was it ever thus?
Newsgroups: alt.folklore.computers
Date: Wed, 07 Apr 2004 08:21:13 -0600
"Hank Oredson" writes:
Personally Developed Software IBM Personal Computers Personal Editor II (C) Copyright IBM Corp. 1982,1985 Written by Jim Wyllie

wyllie sat sort of opposite side of bldg. 28 from my office.

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

COMPUTER RELATED WORLD'S RECORDS?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: COMPUTER RELATED WORLD'S RECORDS?
Newsgroups: alt.folklore.computers
Date: Wed, 07 Apr 2004 13:37:25 -0600
"Stimpy" writes:
info.cern.ch was the first WWW site. Created by Tim Berners-Lee in (IIRC) November 1990

another site was SLAC
https://ahro.slac.stanford.edu/wwwslac-exhibit

and a line from the above page
... 1994 ... use of WWW explodes to the world beyond physics ...

the up and coming GRID stuff has a somewhat similar flavor ... although it isn't likely to ever be as evident as the WWW

cern & slac were sister sites and both heavy vm installations sharing a lot of common applications and other activities.

some of the original SLAC pages:
https://ahro.slac.stanford.edu/wwwslac-exhibit/early-web-chronology-and-documents-1991-1994

the slac www wizards:
http://www.slac.stanford.edu/history/earlyweb/wizards.shtml
other historical stuff
http://www.slac.stanford.edu/welcome/slac-pub-7636.html

and the history description from w3:
http://www.w3.org/History.html
the above notes that cern no longer has its original web pages online (and/or possibly available?)

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

[OT] Computer Proof of the Kepler Conjecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT] Computer Proof of the Kepler Conjecture
Newsgroups: comp.arch
Date: Wed, 07 Apr 2004 17:08:28 -0600
Robert Myers writes:
Greetings,

well then from today's inbasket:
http://www.improbable.com/airchives/paperair/volume10/v10i2/v10i2.html
<> "PROOFREADERS' UPDATE 2004," by Joe Slavsky. The latest (and 24th) annual progress report from the large group of mathematicians who are laboring to prove -- by hand -- Haken and Appel's famous computer-aided proof of the Four-Color Map Theorem. [BACKGROUND: Haken and Appel's gargantuan, reputed-to-be-too-big- to-be-completely-read-and-checked-by-human-beings proof has irritated many people. Thus this prove-it-by-hand project.]

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

If there had been no MS-DOS

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If there had been no MS-DOS
Newsgroups: alt.folklore.computers
Date: Wed, 07 Apr 2004 23:49:16 -0600
Brian Inglis writes:
IBM sysadmins were called sysprogs and normally had nothing to do with your jobs, except maybe bouncing them from the fast to the slow queue when the fast queue turnaround time wasn't so fast ... heh, heh!

I/Os were billed because that was the base function of the mainframe: to do lots of I/Os fast. Most jobs spent most of their time in I/O wait, as that was usually the throughput bottleneck, so charges tended to be high for I/Os, allowing the processor charges to be lower. When the load increased to where the bottleneck was becoming the CPU, processor charges got increased, to try to put off the need for an expensive upgrade.


the other issue at many places was they were internal datacenters and the charges were totally funny money ... and the datacenters were actually treated as cost centers.

while i was at the university, the datacenter got the state legislature to change the datacenter from a cost center to a profit (or at least break-even) center. the datacenter had a responsibility to provide services to the university but could also sell services on the open market. the legislature had to restructure the university budget so that departments got enuf (real?) dollar allocation to pay for datacenter charges. prior to that there were administrative bills and charges ... but no money actually moved from one set of books to another (the datacenter was really funded by appropriation from the state).

it was the sign of the times ... within a year or so, boeing formed BCS ... effectively in part to move the datacenters from funny-money cost centers to profit centers. they could sell services to the rest of the boeing company ... but they could also sell services on the open market. a couple months after it was formed, ibm conned me into giving up spring vacation and teaching a 40-hr computing class to BCS technical staff (I was still an undergraduate).

In the 70s, i remember one of the science center people who had been involved with cms\apl had gone to BCS. One of their contracts was with the USPS, doing the financial model justifying the increase to the 4(?) cent stamp.

the difference between being a cost center and a profit (or at least a non-cost) center was a lot more freedom in planning for new hardware. As a cost center, the university datacenter was constantly begging the state legislature for the appropriations to buy new hardware (and if the appropriation didn't pass, they put off any new hardware). As at least a pseudo-profit center, the datacenter had its own earnings and could purchase equipment from the money it was earning.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

If there had been no MS-DOS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If there had been no MS-DOS
Newsgroups: alt.folklore.computers
Date: Thu, 08 Apr 2004 06:56:54 -0600
Brian Inglis writes:
It was a lot easier just charging for usage than attempting to control usage based on datacentre budget and departmental allocations. Departments had to justify their budgets to the business, and we were no longer responsible for attempting to justify our budget based on departments' consistent underestimates of their projected workload, then deal with the company politics when the 10% estimated usage department consistently used more than 50% of the machine.

in the changeover to exchanging real money rather than funny money ... rather than the datacenter having to beg the legislature for budget based on department projections ... the departments needed to have real budget allocated (including money for student jobs) ... and if the money ran out ... so did the computer usage. this eliminated a lot of scenarios where a department projected ten percent usage and used fifty percent ... based on some kind of political pull of one form or another.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

If you're going to bullshit, eschew moderation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If you're going to bullshit, eschew moderation.
Newsgroups: alt.folklore.computers
Date: Thu, 08 Apr 2004 17:01:10 -0600
Brian Boutel writes:
1942? There was a lot going on in 1942. WW2 started in 1939. As a small child in London I was subject to nightly, and often daily, aerial bombing for 2 years before the US deigned to get involved. Why didn't the US enter the war earlier? Largely because FDR lacked the support at home. First there were the isolationists, who didn't want the US to get its sticky fingers into other people's affairs, and then there were the racists, who were sympathetic to Hitler's ideas.

i was sitting in the doctor's office today ... and picked up one of their reading publications. it was a long article on how the BBC refused to give churchill air time all during the 30s to warn about the growing problem in germany, marshal public support, and argue that something needed to be done. the implication was that the bbc (and much of britain's population) had their heads in the sand until forced otherwise. the thread was basically that the british and american publics were nearly identical ... and neither changed until it was forced on them. the difference (if any) was possibly because of geographical circumstances; britain was forced to face it earlier.

nothing in your statement seems to contradict that.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How secure is 2048 bit RSA?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How secure is 2048 bit RSA?
Newsgroups: sci.crypt
Date: Thu, 08 Apr 2004 20:05:09 -0600
"No One" <no-one-no-spam@home.com> writes:
How secure is a 2048 bit RSA key? DOes it make sense to pretect a 256 bit AES session key with a 2048 bit RSA public/private key pair?

from here two months ago
https://www.garlic.com/~lynn/2004b.html#11 is 3DES more secure than 384 bit RSA?

references the ietf draft "Determining Strengths For Public Keys Used For Exchanging Symmetric Keys" .... there is a table


System
requirement  Symmetric  RSA or DH     DSA subgroup
for attack   key size   modulus size  size
resistance   (bits)     (bits)        (bits)
(bits)

 70           70          947          129
 80           80         1228          148
 90           90         1553          167
100          100         1926          186
150          150         4575          284
200          200         8719          383
250          250        14596          482

....

which implies that (to match a 256-bit AES key) an RSA key somewhat larger than 14,596 bits would be needed.
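
as a rough illustration of reading the table (a minimal sketch; the (symmetric, RSA) pairs are from the draft's table above, but the linear interpolation/extrapolation is my own crude approximation, not anything from the draft):

import bisect

# (symmetric key bits, RSA/DH modulus bits) pairs from the draft table
TABLE = [(70, 947), (80, 1228), (90, 1553), (100, 1926),
         (150, 4575), (200, 8719), (250, 14596)]

def rsa_bits_for_symmetric(sym_bits):
    """linearly interpolate between table rows; extrapolates the last
    interval for values beyond 250 bits (crude approximation)"""
    keys = [s for s, _ in TABLE]
    i = bisect.bisect_left(keys, sym_bits)
    if i == 0:
        return TABLE[0][1]
    if i == len(TABLE):
        i -= 1                    # extrapolate using the last interval
    (s0, r0), (s1, r1) = TABLE[i - 1], TABLE[i]
    return r0 + (r1 - r0) * (sym_bits - s0) / (s1 - s0)

for sym in (128, 256):
    print(sym, "bit symmetric ->", round(rsa_bits_for_symmetric(sym)),
          "bit RSA/DH modulus")   # 128 -> ~3409, 256 -> ~15301

i.e. per this table, a 2048-bit RSA key sits down around the 100-bit symmetric row ... nowhere near a match for 256-bit AES.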

--
Anne & Lynn Wheeler - https://www.garlic.com/~lynn/

Happy Birthday Mainframe

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Happy Birthday Mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 09 Apr 2004 08:19:11 -0600
IBM-MAIN@ibm-main.lst (Phil Payne) writes:
In our shop, JOB cards were orange and the rest white. Any card with "temporary" content - such as a date card or file override - was blue. Assembler source was on cards with a cerise (I kid you not - that was the name of the colour) stripe at the top.

when i started, all cards were blank (manilla folder) stock, except the $job card for ibsys, which had a red stripe across the top including the print area ... a university with mostly student fortran jobs. in the conversion to 360, JCL became red stripe.

part of the issue was that student jobs tended to be 10-30 cards, and the input window just accepted them and batched them in a tray (that held about 3000 cards). every so often the tray was taken to the 2540 reader and the 1401 read all the cards and wrote them to tape for processing by the 709. output from the 709 went to tape and was printed by the 1401.

the operators took the stack of paper, burst the first job, pulled the first card deck from the tray (red cards made it easy to determine where each job started), wrapped the print around the card deck and a rubber band around the print. when things were slow, people would practice shooting rubber bands.

for a larger deck of cards (say 100 or more), you would use a magic marker to draw a diagonal stripe across the top of the deck (from one back corner to the opposite front corner). for decks that hadn't been sequenced ... the diagonal stripe aided in putting a dropped deck back in order.

the name of the program might also be written with a marker on the top of the deck. if several program decks were stored in the same card tray ... it made it easier to pick out the specific deck you wanted. there used to be these filing cabinets ... that were double-wide card drawers.

if you wanted to distinguish other cards in a card deck ... you used a set of different colored magic markers to draw a line across the top edge of the card.

it was rare for the university to have cards other than the least expensive manilla stock ... with the exception of the small amount of manilla stock that had the red stripe across the top edge.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How secure is 2048 bit RSA?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How secure is 2048 bit RSA?
Newsgroups: sci.crypt
Date: Fri, 09 Apr 2004 08:24:20 -0600
Ckwop@hotmail.com (Simon Johnson) writes:
I think 128-bit is a bit crazy. I mean, find me a system that has a security level of 128-bits? The cost of breaking a 128-bit key is still considerably more than the global GDP yet the cost of exploiting the latest flaw in the OS your using to obtain that key is probably less than $10.

I take the argument that security failures in the cryptographic layer tend to be more catastrophic than elsewhere in the cryptosystem but it's still pretty ridiculous. Achieving an overall security level of just 64-bits would be truly impressive feat.


the issue tends to be whether you are expected to protect the material for 30-50 years. possibly somewhat surprisingly, one of the most aggressive about the number of years has been the motion picture industry; they've been asking for a longer protection period than is mandated for some gov. agencies.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

If you're going to bullshit, eschew moderation

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If you're going to bullshit, eschew moderation.
Newsgroups: alt.folklore.computers
Date: Fri, 09 Apr 2004 09:56:14 -0600
jmfbahciv writes:
It's a deja vu. Since I'm a spoiled American who only recently started learning _real_ history, I don't want to wait until the mess happens and then have to clean it up. From what I understood about Boyd's work, he just went around the big piles of mess to the center, then started working outward. I also have no idea how to do that.

i consider boyd one of the greatest strategists i ever met. he had this story about an 18 page newsweek article that was 18 months in the planning about industrial/military spending issues. because of certain points that needed to be presented ... it apparently was necessary to have congressional testimony ... the hearings were done late friday afternoon ... and the 18 page article on the testimony managed to get written and make it into the issue on monday. the process somewhat absolved the sources of information from repercussions of any secrecy act. Although there was the story that the pentagon then invented a new red stamp for non-classified, but terribly embarrassing material, "NO-SPIN" (and it is probably not what you think, if you can find it, check out the cover photo of the newsweek issue).

maybe not to the center ... but possibly to the heart. one of the biographies that talk about desert storm was the big battle over not doing the traditional mechanized warfare ... slugging it out up the center until nobody is left standing.

and of course, previous boyd postings:
https://www.garlic.com/~lynn/subboyd.html#boyd

one of the surprising things that you may find is the magnitude of far east history that is frequently skipped over ... compared to how much european history is covered in the US (regardless of any factual issues). some of this may reflect the predominant european background.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

microsoft antitrust

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: microsoft antitrust
Newsgroups: alt.folklore.computers
Date: Fri, 09 Apr 2004 13:52:59 -0600
Brian Inglis writes:
IBM had already taken various actions (like that mentioned above) to make it irrelevant. They also operated each country as a separate unit, to avoid problems with legal differences between countries, like that regarding communications, data protection, and privacy.

in the mid '70s i got roped into helping with the microcode for virgil/tulley (i.e. 138/148) ... misc.
https://www.garlic.com/~lynn/submain.html#mcode

and then dragged into helping on the business end off & on for a year. basically you create a product ... take it to the business planning and forecasting people for a market segment or region ... and thrash out a pricing and market forecast. the basic process was to come up with high, medium and low prices for the product and get forecasts ... then iterate, since the choice of prices was somewhat a mystical art.

part of the issue was that there were heavy up-front development costs that would be amortized over relatively small production volume (tens or hundreds of thousands). you had to price to recover your costs. however, if the market was price sensitive, you might be able to get a larger volume with a lower price. the larger volume resulted in amortizing the heavy up-front costs across a larger number of units ... justifying the lower price. however, some things weren't necessarily price sensitive, and forecasting a lower price wouldn't increase the volume prediction. For instance, some datacenter costs were just starting to be dominated by people costs, not hardware prices. Lowering hardware prices didn't necessarily increase demand because costs were starting to be dominated by other factors. There were even some cases where there was no price point where the forecast would recover the costs, and the product would be canceled ... sometimes late in the effort.
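
a minimal sketch of that iteration (my own illustration; the numbers are made up for the example, not actual product figures):

# hypothetical numbers purely for illustration -- not actual figures
FIXED_COST = 50_000_000        # up-front development cost to amortize
UNIT_COST = 20_000             # marginal cost to build one box

def breakeven_price(volume):
    """price needed to recover fixed + marginal cost at a given volume"""
    return FIXED_COST / volume + UNIT_COST

# if the market is price sensitive, a lower price may raise the forecast
# volume enough to amortize the fixed cost over more units:
for volume in (2_000, 5_000, 10_000):
    print(f"{volume:6,d} units -> breakeven price ${breakeven_price(volume):,.0f}")

# if forecast volume doesn't respond to price (costs dominated by people,
# not hardware), there may be no price point that recovers the fixed
# cost -- the case where the product gets canceled.

the iteration was then: pick high/medium/low prices, get a volume forecast at each, and check whether any (price, volume) pair clears the breakeven line.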

In any case, we ran around to 1133 (domestic sales & marketing hdqtrs), emea & afe hdqtrs ... recent emea/afe reference:
https://www.garlic.com/~lynn/2004d.html#25

... and major domestic regions, and some of the larger countries ... repeating this process. Part of the problem I discovered for the mid-range domestically was that domestic forecasters weren't really held accountable for their numbers. This process was also a little weird since I was providing/supporting a lot of the technology on the backend (apl & hone)
https://www.garlic.com/~lynn/subtopic.html#hone

that the planners & forecasters were using for their models. And with virgil/tulley, i was dealing with them on the frontend for the application of those same models.

Finally, to the point related to your posting: world trade forecasts effectively resulted in an intra-company purchase order to the plant. If domestic forecasts were off by a factor of 10, the plant was responsible; but in world trade countries, the plant shipped the forecasted boxes to the country and it became their responsibility. People might even be fired for gross mis-forecasts.

As a result, you would place less weight on domestic forecasts, and frequently the plant would duplicate a lot of the (domestic) marketing forecast stuff to try and establish the real numbers. Domestic forecasters, because they really weren't held accountable for mis-forecasts, tended to align their forecasts more with major corporate strategic statements (and it did seem sometimes that with enuf strategic statements and other efforts, the results could be dictated).

My involvement with virgil/tully somewhat overlapped the period that i was also working on VAMPS/smp
https://www.garlic.com/~lynn/subtopic.html#smp
and the resource manager.
https://www.garlic.com/~lynn/subtopic.html#fairshare

All the business toil & bubble with virgil/tully somewhat helped with the resource manager. It was going to be the first SCP "priced" feature, and there were all sorts of business issues to be worked out. Up until that time, application software had been priced, but operating system (kernel) software had been "free" under the theory that it was required to support the machine. The resource manager got to break new ground as a priced kernel feature. A little bubble with the resource manager was that it shipped before SMP support ... and the business deal for kernel features was that anything that was really bare-bones hardware support ... something like SMP support ... still needed to be free. The problem was that about 80 percent of the resource manager code was needed for SMP support. The solution was that when SMP support shipped, the 80 percent of the resource manager code needed for SMP was transferred into the free "kernel component" ... and the remaining 20 percent of the resource manager continued to be charged for at the same price.

So many years later, when we had invented 3-tier architecture
https://www.garlic.com/~lynn/subnetwork.html#3tier

and were taking a lot of grief from the SAA people (who, if not actually trying to stuff the client/server genie back into the bottle, were at least attempting to put it in a straitjacket). the guy now running SAA was the person I had spent all that time running around with doing virgil/tully. We would periodically drop in on him; he now had a big corner office in somers (pyramid power) that could almost see to endicott.

as to the heavy upfront and infrastructure costs ... some of this was commented about in the fs references
https://www.garlic.com/~lynn/submain.html#futuresys
....
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System

and with respect to the comment in the above about never hearing of anybody refusing to go along with FS ... i didn't.

and to take a long rambling post even further afield ... some recent (unrelated) comments about privacy:
https://www.garlic.com/~lynn/aadsm17.htm#21 Identity (was PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#23 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#24 Privacy, personally identifiabile information, identity theft

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360 40 years old today

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40 years old today
Newsgroups: alt.folklore.computers
Date: Fri, 09 Apr 2004 18:38:54 -0600
haynes@alumni.uark.edu (Jim Haynes) writes:
kind of records of blocks of records you wanted up to the limit of the number of bytes per track. This assures the very most efficient use of disk space, but at the cost that when disk technology changes you had to reformat everything. Most everybody else used a fixed block size on disk, which meant you wasted a little space in the last block of a file but everything else was so much simpler. Seems like many years later IBM decided that a fixed block architecture might be a good idea after all.

fixed-block actually tended to have two characteristics ... or let's say ... CKD had two characteristics that differed from fixed-block.

in typical fixed-block ...

1) everything was allocated based on the block size

2) you found stuff by having some sort of index structure that knew which block things were in

CKD was an extreme constrained-resource trade-off.

1) disk records could be formatted exactly the size the application required

2) lots of finding stuff relied on searching for a matching pattern rather than keeping an index

an index tended to imply real storage for the index to occupy (at least temporarily). the CKD stuff effectively traded not keeping indexes in real storage for the I/O bandwidth involved in outboard searching for matching patterns.

Both the volume VTOC (the disk directory of all files) and a PDS (partitioned data set ... a file that has a directory of members within the file) used multi-track search to find matching information. By at least the mid-70s, the trade-off was no longer valid .... real memory was becoming much more available and disk I/O was becoming the constrained resource. On a 3330 disk with 19 tracks per cylinder, running at 3600 rpm, a multi-track search of all 19 tracks took almost a 1/3rd of a second (i.e. 19/60ths of a second) elapsed time. Caching a high-use index in memory was becoming a much more sensible trade-off than expensive multi-track search.
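
the arithmetic behind those numbers (a trivial sketch of the figures above):

# 3600 rpm -> 60 revolutions/sec -> 1/60 sec per revolution. a
# multi-track search reads each track for up to a full revolution,
# and channel, controller and device are all busy the entire time.
RPM = 3600
TRACKS_PER_CYL = 19

rev_time = 60.0 / RPM                      # ~0.0167 sec per revolution
full_search = TRACKS_PER_CYL * rev_time    # ~0.317 sec, i.e. 19/60ths
print(f"full-cylinder search: {full_search:.3f} sec")
print(f"max such searches:    {1 / full_search:.1f}/sec")

# ~3.2 full-cylinder searches/sec ... which is how a drive showing a
# mere 6-7 i/os per second (see the story below) can still be the
# saturated bottleneck for a whole datacenter.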

Once, I was brought in to shoot an extremely severe pathological performance problem at a large national retailer that was running a datacenter with several VS2/SVS 370/168 systems (corresponding to national regions) sharing the same collection of disks.

I was brought into a classroom with a dozen or so class tables (6-8 ft long, 2 ft wide) covered in one-foot-high stacks of performance monitoring data from all the systems and given the opportunity to find the problem. After an hour or so ... i wasn't finding any disk i/o rates that corresponded to pathological, extreme performance degradation. The only pattern that I started to notice that consistently related to the performance problem periods was one specific drive whose I/O rate was consistently around 6-7 i/os per second.

recognizing the pattern was slightly aggravated by the fact that there were only per-system-complex statistics; you had to keep a running sum for all disks in the datacenter in your head, adding up the i/o activity against each disk by each processor complex for each time period. there were dozens of disks, and time periods were 10-15 minute intervals ... so there was an entry for each disk for every time period in each of the different print-outs from each of the processor complexes sharing the same pool of disks. it was further complicated by the traditional wisdom that heavy loads on 3330 disks were in the 40-60/sec range ... so everybody was looking for disks with peak activity over 40/sec

so there is this line about going to the doctor and saying it hurts when i do this; and the doctor says, well stop doing it.

in any case, FBA devices were supported by the systems that had facilities for indexing to find where things were located, as opposed to being tightly wedded to CKD searching technology.

today, technology has reached the point where all the disks are fixed block (in effect mass production manufacturing techniques are cranking out fixed block disks) and CKD operations are simulated in the disk controller. Part of the reason is economics using commodity fixed block disks and part of the reason was the capability was developed anyway.

There were a number of projects by various vendors to develop advanced disk management subsystems. These disk controllers implemented virtual disks, data compression, only storing what was actually allocated, etc. As a result, there was no longer a 1:1 relationship between how the operating system thought data was arranged on disk and how the controller was actually managing all the information in the virtual storage subsystem. Data could be in controller cache or some place the controller had decided to put it. They had to at least simulate CKD disk search operations against data that might be in the controller cache.
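
a minimal sketch of the idea (my own illustration of CKD search emulation over fixed blocks/cache; the record layout and names are hypothetical, not any actual controller's implementation):

from typing import Optional

# each emulated track is kept as (key, data) records packed into fixed
# blocks / controller cache; the controller satisfies a CKD "search key
# equal" by scanning its own copy rather than real CKD hardware.
class EmulatedCKDTrack:
    def __init__(self):
        self.records = []            # [(key: bytes, data: bytes), ...]

    def write(self, key: bytes, data: bytes):
        self.records.append((key, data))

    def search_key_equal(self, key: bytes) -> Optional[bytes]:
        # what SEARCH KEY EQUAL / TIC / READ DATA did on real CKD,
        # now just a loop over records the controller holds
        for k, d in self.records:
            if k == key:
                return d
        return None

track = EmulatedCKDTrack()
track.write(b"MEMBER01", b"...pds member data...")
print(track.search_key_equal(b"MEMBER01"))

the host still issues the old channel program; only the controller knows the "track" is really scattered across fixed blocks and cache.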

It used to be that every time new disk technology was shipped to customers ... there were large operating system changes to add the new device driver support. Now they can have a whole slew of different model (virtual) 3380s and 3390s defined, and the operating system doesn't even know that whole generations of real hard disks get swapped in & out underneath.

some past discussions of the issue:
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/2001.html#12 Small IBM shops
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2002.html#6 index searching
https://www.garlic.com/~lynn/2002b.html#1 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002g.html#32 Secure Device Drivers
https://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360 40 years old today

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40 years old today
Newsgroups: alt.folklore.computers
Date: Sat, 10 Apr 2004 10:44:03 -0600
Joe Morris writes:
...and then there were the SORT implementations which optimized their use of intermediate files based on knowledge of the physical disk architecture, the idea being that in a single revolution of a disk drive they could do head switching to read records from several different tracks.

vm/370 used similar logic on 3330s and 3380s for paging head switch.

the problem on the 3330 was that the electronic latency to do a head switch operation was longer than the rotational latency between the end of one record and the start of the next record. the solution was to use a format that placed small, dummy, non-data records between the 4k page records. the problem was that on a 3330 track, after getting three 4k records ... there was only room enuf on the track to intersperse a maximum of 110-byte records ... and the nominal spec for head-switch latency was more than the rotational delay introduced by a 110-byte record.

so the nominal vm format for each track was:

1) 4k page
2) 50-byte dummy
3) 4k page
4) 50-byte dummy
5) 4k page

... and the logic knew that there wasn't enuf latency to do a switch between heads for immediately sequentially occurring records (attempting it would result in a complete rotation).
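
a rough sketch of the timing window involved (my own arithmetic; the 13,030-byte nominal 3330 track capacity is an assumed figure, everything else is from the numbers above):

# 3330: 3600 rpm, nominal 13,030-byte track (assumed figure)
REV_TIME_MS = 60.0 / 3600 * 1000     # ~16.67 ms per revolution
TRACK_BYTES = 13030

def rotational_delay_ms(nbytes):
    """time for nbytes of track to pass under the heads (ignores
    inter-record gap overhead, so this is an underestimate)"""
    return nbytes / TRACK_BYTES * REV_TIME_MS

for dummy in (50, 101, 110):
    print(f"{dummy:3d}-byte dummy record: ~{rotational_delay_ms(dummy):.3f} ms window")

# the head-switch electronics have to finish inside that sub-millisecond
# window, or the channel program eats a full extra revolution (~16.7 ms).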

so i did a little study ... there were various combinations of processors, channels, and controllers (ibm and non-ibm) ... all with slightly different latency characteristics ... which in aggregate contributed to the head-switch latency.

so if you redid the format with 110-byte records and ran tests on a large set of variations ... what might be the results?

the first thing i did was write a simple program that would read/write a full track ... redoing the format from 50-byte dummy records to 110-byte dummy records while the system was running, "on the fly" (it would reformat a disk that was actively being used). then the kernel "set-sector" table had to be patched in-core (i.e. the known rotational starting sector position of each record).

so, in general, there were a number of non-IBM controllers that could do the head-switch with 110-byte dummy records (i.e. they were slightly faster than the ibm 3830 controller).

also, 168s, 148s, and 4341s could do the head-switch with IBM 3830 disk controllers.

158s were the big problem ... their integrated channel had higher latency, resulting in the head switch not completing within the rotational latency of the 110-byte dummy record.

it turned out all of the 303x machines weren't able to do the head switch in the 110-byte gap, which isn't surprising since the "channel director" used by all of the 303x machines was really a stripped-down 158 running just the integrated channel code.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360 40 years old today

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40 years old today
Newsgroups: alt.folklore.computers
Date: Sat, 10 Apr 2004 12:43:49 -0600
Anne & Lynn Wheeler writes:
page records. the problem was that on a 3330 track, after getting three 4k records ... there was only room enuf on the track to intersperse a maximum of 110-byte records ... and the nominal spec for head-switch latency was more than the rotational delay introduced by a 110-byte record.

oops, talk about knocking off a post too quickly. the spec called for a 110-byte dummy record for doing the head-switch. the 3330 only had room for a max 101-byte dummy record (between the 4k data records). however, based on lots of timing tests ... the major problem was the 158 integrated channels ... and their descendent, the 303x channel director. there was a non-ibm pcm disk controller that reliably performed the head-switching with only 50-byte dummy records.

the problem got worse with the 3880 disk controller. while the 3880 added support for 3mbyte/sec transfers & data streaming ... its command latency got worse. data streaming moved some of the handshaking from every byte to groups of 8 bytes. this helped get the 3mbyte/sec transfer rate ... and also allowed doubling the maximum cable length from 200ft to 400ft.

the problem was that in going from the 3830 to the 3880 controller ... it went from a relatively fast horizontal microcode engine to a relatively slow jib-prime vertical microcode engine. The 3880 had special hardware circuits for the dataflow ... leaving the jib-prime to handle command operations.

for bldg. 14/15 (the engineering & product test labs), i had rewritten the io supervisor for their use. they had prototype devices in "testcells" for various kinds of testing. typically, running mvs on a host processor with an attached testcell ... resulted in a system MTBF of 15 minutes. As a result, the environment was limited to a number of mainframe processors dedicated to "stand-alone" testing with custom programs operating one testcell at a time. The rewritten io supervisor allowed them to run production operations on the machines while handling a half-dozen testcells concurrently. misc. past posts about the disk engineering lab:
https://www.garlic.com/~lynn/subtopic.html#disk

one of the things i did as part of the io supervisor rewrite (besides clean-up and elimination of all failure modes) was to rewrite alternate pathing. Disk controllers could have connections to four different channels ... the channels could be connected to the same processor complex (in which case a processor had multiple parallel i/o paths to, say, a pool of 32 to 64 disk drives) or to different processor complexes. If different processor complexes were involved, then you had a loosely-coupled configuration with multiple processing complexes sharing the same pool of disks.

in the case of multiple channel paths connected to the same processor, there was an opportunity to perform traffic load-balancing across all channel paths. so i did this sophisticated channel load balancing implementation ... which wasn't so bad with 3830 disk controllers ... but it fell flat on its face with 3880 disk controllers. It turns out that jib-prime processing for multiple-path operation was agonizingly slow. If two successive operations came in on different channel paths ... the 3880 almost looked like it went into hibernation performing the channel switching. It was so severe that it was always better to treat multiple channel paths to the same controller as a primary with alternate(s) ... as opposed to load-balancing peers.
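
a minimal sketch of the two policies (my own illustration; all names are hypothetical):

import itertools

class RoundRobinPaths:
    """load-balancing peers -- fine for 3830-class controllers"""
    def __init__(self, paths):
        self._cycle = itertools.cycle(paths)
    def pick(self, busy=frozenset()):
        return next(self._cycle)

class PrimaryAlternatePaths:
    """primary-with-alternates -- what the slow 3880 path switch forced:
    stay on one path unless it is busy, so successive operations arrive
    on the same channel and the controller rarely has to switch"""
    def __init__(self, paths):
        self.paths = list(paths)
    def pick(self, busy=frozenset()):
        for p in self.paths:
            if p not in busy:
                return p
        return self.paths[0]        # all busy: queue on the primary

chans = ["CH1", "CH2", "CH3"]
rr, pa = RoundRobinPaths(chans), PrimaryAlternatePaths(chans)
print([rr.pick() for _ in range(4)])       # CH1 CH2 CH3 CH1 -- spreads load
print(pa.pick(), pa.pick(busy={"CH1"}))    # CH1, alternates only when busy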

some drift back to the original post. the mvs disk device driver group was in stl. i offered to give them MVS support for FBA drives. the problem was that they claimed it would cost $26m just to ship FBA drive support (not to develop or test it ... just the mechanics of getting the product out the door). customers were already buying as many CKD drives as possible. FBA support wouldn't increase the number of drives sold ... at best, they'd switch from buying CKD drives to buying FBA drives. As a result, you couldn't show any ROI on the $26m cost. This ignored the long-term advantages of simplified drivers and of transitions from one technology generation to the next.

The FBA drives were the 3310 and 3370 ... the 3370 being the larger-capacity drive. What they did do ... was create the 3375 ... which was an FBA 3370 with CKD emulation built on top.

some past posts mentioning 3375
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overflow?)
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

System/360 40 years old today

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: System/360 40 years old today
Newsgroups: alt.folklore.computers
Date: Sat, 10 Apr 2004 13:36:29 -0600
Anne & Lynn Wheeler writes:
so in general, there were a number of non-IBM controlers that could do the head-switch with 110-byte dummy records (i.e. they were slightly faster than the ibm 3830 bcontrollerb).

also, 168s, 148s, and 4341s could do the head-switch with IBM 3830 disk controllers.


again, the tests were run with 101-byte dummy records ... the maximum you could get on a 3330 track ... the spec called for 110-byte dummy records.

however, this just affected vm paging ... except for installations running my page-mapped filesystem for CMS.

as an undergraduate, i had done a special x"ff" ccw op-code for cms that emulated a seek/search/tic/read/write operation as a command immediate (cc=1, csw-stored) ... which drastically reduced the processing overhead of cms disk/file activity. however, bob adair was adamant that cp/67 faithfully follow the 360 principles of operation. ALL cp/67 virtual machine extensions had to be thru the diagnose instruction .... since the principles of operation defines the diagnose instruction as model dependent ... and one could claim a paradigm based on a virtual machine being a particular kind of 360 machine model. in any case, similar function eventually shipped in the product, but implemented using the diagnose instruction.

the diagnose instruction drastically reduced the processing overhead of emulating cms disk/file i/o .... but did nothing to affect the other operational characteristics of emulating a real disk I/O paradigm in a virtual memory system (copying to shadow CCWs, fixing/pinning virtual pages, translating addresses, scheduling the operation as an integral unit, etc).

the page-mapped implementation created a much better abstraction for a virtual memory infrastructure. every filesystem i/o operation was done as either a pull or push of a page. this had the benefit of picking up the processor performance advantages of the standard CMS disk diagnose i/o operations ... plus eliminating some of the additional overhead of having to emulate real I/O architecture, as well as giving a much better match between file transfers and the underlying virtual memory infrastructure.

disks now shared the same physical format as the cp paging system and utilized the same software. physical disk characteristics were removed from the cms domain. the underlying kernel could break up multiple page transfers into whatever units seemed optimal and/or combine them with any other set of pending physical page transfers. it could adapt the physical execution of file requests based on disk, system, memory, contention and/or a number of other considerations. the api had advisory information regarding performing the transfers synchronously or asynchronously with respect to the virtual machine execution .... but the underlying kernel implementation could adapt the actual execution based on the combination of the advisory flags from the api, the configuration, and dynamic real-time circumstances.

misc. past on page-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap

this differs from the previous discussion about tss/360's one-level store paradigm. the one-level store paradigm basically did full-file mapping and then relied on page faults.

the page-mapped filesystem allowed preserving all the standard filesystem operation semantics: overlapped buffering, large block operations, windowing across files/executables, indicating when a region was no longer needed by fetching other pages into the same virtual address range, etc. It was possible to use the API to emulate one-level store operations ... but it was also possible to use the API to preserve a lot of the throughput hints that had been built in as part of supporting physical disk i/o.
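
a minimal sketch of the shape of such an api (entirely hypothetical names, using unix mmap as a stand-in; illustrating the advisory sync/async idea, not the actual cms interface):

import mmap
import os

SYNC, ASYNC = "sync", "async"    # advisory hints: the caller states
                                 # intent, the kernel remains free to
                                 # batch/reorder the page transfers

class PageMappedFile:
    PAGE = 4096

    def __init__(self, path):
        self.fd = os.open(path, os.O_RDWR)
        self.map = mmap.mmap(self.fd, os.fstat(self.fd).st_size)

    def pull(self, page_no, hint=SYNC):
        """fetch one page of the file; the hint is advisory only"""
        off = page_no * self.PAGE
        return self.map[off:off + self.PAGE]

    def push(self, page_no, data, hint=ASYNC):
        """write a page back; an ASYNC hint lets the kernel combine it
        with other pending page transfers"""
        off = page_no * self.PAGE
        self.map[off:off + self.PAGE] = data.ljust(self.PAGE, b"\0")
        if hint == SYNC:
            self.map.flush(off, self.PAGE)

the point of the shape: file i/o and paging share one mechanism, and the hints preserve the old throughput tricks without dictating the physical transfer order.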

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Happy Birthday Mainframe

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Happy Birthday Mainframe
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 11 Apr 2004 10:03:12 -0600
Joe Morris writes:
Don't forget the edge-coated cards which were more expensive than the normal ones. Usually these were used for console decks (card decks stored on the operator's console, or on the card reader, which were frequently used), or other decks that would repeatedly be run through the reader. (On diskless systems, this would include the compiler decks.)

a box of cards held 2000 cards; a case held 6(?) boxes. a card tray held about 3000 cards (about a box and a half). the card trays were made for filing cabinets that seemed close to the same dimensions as large legal five-drawer filing cabinets (somewhat wider ... they were two card drawers wide).

at the university there were lots & lots of cases of the least expensive manilla stock ... mostly with no red stripe ... maybe 20 cases of plain manilla for every case of red stripe.

somewhere in a back room the university had a dozen or so cases of heavier stock in solid colors ... a case or two each of some color.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

bits, bytes, half-duplex, dual-simplex, etc

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: bits, bytes, half-duplex, dual-simplex, etc
Newsgroups: bit.listserv.vmesa-l
Date: Sun, 18 Apr 2004 07:57:20 -0600
At 11:01 PM 4/16/2004, you wrote:
1. Is CP defined to better perform with CP Owned Volumes on ECKD or will FCP boost performance 2. FICON is rated at 100MB, but the new FICON express cards are rated at 2GB. What's the difference

the original escon was about 17mbyte/sec aggregate. escon had been lying around POK since the late '70s; one of the 6000 engineers took the escon definition and tweaked it, coming up with SLA (which was about ten percent faster than escon) for the 6000. he then started looking at doing an 800mbit/sec version of SLA.

at that time in the IEEE bodies there were a number of standardization activities:

1) HiPPI .... effectively started by LANL to do a standard version of the cray channel; 800mbit/sec, half-duplex, parallel copper. there were also various activities defining serial-HiPPI, the hippi protocol running over fiber. hippi standards reference page at cern:
http://hsi.web.cern.ch/HSI/hippi/

2) FCS ... effectively started by LLNL to do a fiber version of a serial copper product they had installed. The (6000) SLA architect was convinced to join the FCS standards group and became editor of the FCS standards document. At a meeting in 1988, somebody made a reference to hoping for a $1k/drop price for FCS by 1992 (i.e. a star-hub architecture to each office, similar to the enet installations at the time; the $1k/drop included the prorated hub costs). The primary difference between the hub-star enet of the time and FCS was that FCS would run over a pair of fiber cables instead of twisted-pair copper ... and instead of 10mbit/sec half-duplex ... FCS is a full-duplex protocol, capable of simultaneously transmitting and receiving at 1gbit/sec (2gbit/sec aggregate). hi-end cards capable of full-media thruput have to be able to handle sustained, simultaneous transmitting and receiving at 1gbit/sec each ... or 2gbit/sec aggregate. there are two fiber cables, one dedicated to transmission in each direction; each cable has a single transmitter on one end and a single receiver on the other end. a pair of cables has the transmit/receive ends reversed, so that one cable is doing simplex transmission in one direction and the other cable simplex transmission in the opposite direction; the pair of simplex transmissions is used to emulate full-duplex. fcs standards reference page at cern
http://hsi.web.cern.ch/HSI/fcs/fcs.html
fiber channel industry web page:
http://www.fibrechannel.org/

3) SCI ... driven by SLAC as a generic mechanism for low-latency, asynchronous, full-duplex fiber interconnect .... there were definitions of SCI for memory interconnect as well as for various kinds of inter-processor and device I/O. At least sequent, data general, and convex built processors based on the SCI (scalable coherent interface) memory-interconnect definition. sci standards website:
http://www.scizzl.com/

all three of these standards activities were effectively going on concurrently in the late '80s and early '90s.

minor FCS reference from 1992 with reference to SSA serial copper and FCS:
https://www.garlic.com/~lynn/95.html#13 ssa

misc other references to fiber channel standard (FCS ... as opposed to First Customer Ship):
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/94.html#17 Dual-ported disks?
https://www.garlic.com/~lynn/96.html#15 tcp/ip
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000f.html#31 OT?
https://www.garlic.com/~lynn/2001.html#46 Small IBM shops
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002g.html#33 ESCON Distance Limitations - Why ?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002j.html#15 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect speeds )
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2003p.html#1 An entirely new proprietary hardware strategy

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/ Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

A POX on you, Dennis Ritchie!!!

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A POX on you, Dennis Ritchie!!!
Newsgroups: alt.folklore.computers
Date: Wed, 21 Apr 2004 21:39:07 -0600
Peter Flass writes:
Some stuff comes as source: Some exits and tables, and (AKAIK) all of JES2.

one of the problems that JES2 had early on ... was that it was all being done using the vm/cms source maintenance infrastructure (on cms) .... but then the files had to be converted (by some automated stuff) for SMP & mvs distribution.

https://www.garlic.com/~lynn/2000b.html#80 write rings
https://www.garlic.com/~lynn/2001e.html#57 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2002h.html#67 history of CMS
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003.html#58 Card Columns
https://www.garlic.com/~lynn/2003e.html#38 editors/termcap
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003k.html#47 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003l.html#17 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

If you're going to bullshit, eschew moderation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: If you're going to bullshit, eschew moderation.
Newsgroups: alt.folklore.computers
Date: Wed, 21 Apr 2004 21:49:54 -0600
jmfbahciv writes:
The reason it's disturbing is because it wasn't in anybody's cirricula. I think most of the books I've been picking up at the dump were texts and publications used in college. People are cleaning out their attics (from the smell, quite a few basements) and dumping these books.

my wife (re)read 7 pillars of wisdom two years ago ... and has been recommending it. there was a thoughtful article on the subject in the sunday ny times .... although it was in the entertainment section.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What terminology reflects the "first" computer language ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What terminology reflects the "first" computer language ?
Newsgroups: alt.folklore.computers
Date: Wed, 21 Apr 2004 22:04:16 -0600

> "Soon" is one of those funny words with extremely flexible meaning.
> As in, "Someday soon", I'm going to get around to make an XPL compiler
> using the LR(1), (or, in the case of Bison, LALR(1) ) algorithm.
> Given that my copy of McKeeman, Horning & Wortman is c/r 1970, and
> that I purchased it circa 71, the word "soon" can stretch into some
> decades. "Soon" there will be peace in the middle east.


sitting around someplace, i ran across ...


#***********************************************************#
#                                                           #
#                       TM                                  #
#               MetaWare   TWS User's Manual                #
#                                                           #
#               A Translator Writing System                 #
#                      based  on  the                       #
#                   LR Parsing Technique                    #
#                                                           #
#               (C) Copyright  1979, 80, 81                 #
#                   Franklin L.  DeRemer                    #
#                   Thomas  J.  Pennello                    #
#                   Santa Cruz, CA 95060                    #
#                                                           #
#                                                           #
#***********************************************************#

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

ibm mainframe or unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ibm mainframe or unix
Newsgroups: bit.listserv.ibm-main
Date: Wed, 21 Apr 2004 22:40:42 -0600
bblack@ibm-main.lst (Bruce Black) writes:
Vincent,

Don't forget that today it is not a choice between Unix and the Mainframe, because the Mainframe can run Unix (either Linux as a "stand-alone" system or Unix programs running under z/OS).


don't forget that Au was available thru most of the 80s and 90s ... originally code-named gold. lots of places ran it.

aix/370 was available in the late '80s.

aixv2 was a derivative of (AT&T) system 5 ... and aixv3 was a derivative of aixv2.

aix/370 (and its companion aix/ps2) was created from a ucla locus base.

the palo alto group had originally started on a BSD port to 370 ... but that effort got retargeted to the PC/RT ... providing "AOS" (the bsd port) on the PC/RT as an alternative to AIX/V2. The group then came back to doing a 370 offering ... but this time using UCLA's Locus as a base (rather than UCB BSD).

misc past mainframe unix posts:
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000d.html#68 "all-out" vs less aggressive designs
https://www.garlic.com/~lynn/2000d.html#69 "all-out" vs less aggressive designs
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2001.html#44 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001.html#49 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#19 mainframe question
https://www.garlic.com/~lynn/2001l.html#50 What makes a mainframe?
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2002b.html#29 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#39 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002h.html#65 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#54 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002i.html#81 McKinley Cometh
https://www.garlic.com/~lynn/2002j.html#36 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002m.html#21 Original K & R C Compilers
https://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002o.html#11 Home mainframes
https://www.garlic.com/~lynn/2002o.html#40 I found the Olsen Quote
https://www.garlic.com/~lynn/2002p.html#45 Linux paging
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003h.html#45 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003o.html#49 Any experience with "The Last One"?
https://www.garlic.com/~lynn/2004c.html#9 TSS/370 binary distribution now available
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DASD Architecture of the future

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DASD Architecture of the future
Newsgroups: bit.listserv.ibm-main
Date: Thu, 22 Apr 2004 08:24:33 -0600
IBM-MAIN@ibm-main.lst (Phil Payne) writes:
One point is that it was always difficult to write future-proof code using the supposed "device indepence" features of the operating system. And devices always seemed to have one or two peculiar characteristics to be taken into account. Anyone else remember the "fixed head feature" available for 3350s - I think it gave you two cylinders with zero seek times. Useful (if expensive) but another characteristic to be taken into account when actually using the device. Not perhaps a device independency problem - but something else again.

one of the problems with the fixed-head feature for the 3350 was that the 3350 only had a single exposure .... which resulted in the fixed-head portion of the 3350 being unavailable during any arm-motion operations to the rest of the device. i tried pushing thru a hardware change to allow multiple exposures on the 3350 (analogous to the 2305) so that you could do data transfers off the fixed-head portion in parallel with arm movement.

there are actually two separate issues ... preferential allocation to the fixed-head area, and being able to overlap transfers and arm motion.

in any case, about that time there was a proposal for a new kind of dedicated paging device called vulcan. the vulcan people appeared to feel that an enhancement to the 3350 fixed-head feature was a competitive issue and helped make sure that the proposal didn't go anywhere. then things changed, and vulcan never got announced.

later ... there was the 1655, which had some of the characteristics of vulcan.

for a little drift, recent posting on FBA devices & CKD:
https://www.garlic.com/~lynn/2004d.html#63 System/360 40 years old today
above includes several references to past posts on the subject,

misc. past vulcan &/or 1655 posts:
https://www.garlic.com/~lynn/99.html#8 IBM S/360
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000d.html#53 IBM 650 (was: Re: IBM--old computer manuals)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001n.html#65 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002.html#31 index searching
https://www.garlic.com/~lynn/2002i.html#17 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002l.html#40 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2003b.html#15 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#17 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#55 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003m.html#39 S/360 undocumented instructions?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DASD Architecture of the future

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DASD Architecture of the future
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 22 Apr 2004 08:41:31 -0600
IBM-MAIN@ibm-main.lst (Phil Payne) writes:
There was a product called "Extend" that was marketed by, among others, ITEL. It supported /370 DASD on /360 - we attached IBM 3330s and ITEL (ISS - shudder!) 3350s to a /67 using it in around 1978/9. It implemented the /370 channel's CCW re-presentation in software, and a few other things. The /67 ran CP/67 with OS/360 21.8F on top - I don't remember how the DASD support was done in MFT, but I _do_ remember using VS1's IEBCOPY under MFT with a dummy SIO appendage.

one of the major first uses of the internal network was a joint development project between endicott and cambridge on the cp/67 "h" and "i" systems.

The "h" modifications to cp/67 was to emulate full 370 architecture in a kernel running on a real 360/67. The "i" modifications was to change change the kernel to run on a 370 architecture rather than 360/67 architecture (i.e. there were differences between virtual memory tables and control registers on 360/67 and 370).

A cp/67i system ran in a 370 virtual machine under a cp/67h kernel on a real 360/67 for a year before the first engineering model of the 370 was available (a 370/145 in endicott). Actually, at cambridge, because there were so many outsiders using the system (including MIT & BU students), there was a perception of a security issue running a cp/67h system on the bare hardware (some student might get access to virtual 370 features and discover all the unannounced goodies). As a result, the cp/67h system was run in a 360/67 virtual machine on the standard cambridge time-sharing service; aka
cp/67l kernel ran on the bare hardware
cp/67h kernel ran in a virtual 360/67 under cp/67l
cp/67i kernel ran in a virtual 370 under cp/67h
and for testing, cms ran in a virtual 370 under cp/67i

I've told the story before, but when endicott first got a 370/145 engineering machine running with virtual memory support, cp/67i was brought in to boot/ipl (ipl on the engineering machine was a knife-switch). the boot failed ... and after a little diagnosing, it turned out that the engineers had reversed the implementation of two "b2" opcodes; the kernel was quickly patched to correspond to the (incorrect) hardware implementation, and the rest of the testing ran successfully.
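for flavor only, a minimal hypothetical C sketch of the kind of patch involved (the actual cp/67 kernel was 360 assembler, and neither the source here nor the sketch names the specific opcode pair; the opcode values below are invented for illustration): if a kernel builds its "b2xx" extended instructions from constants, matching hardware that wired two of them in reverse is just a matter of swapping the constants:

    /* hypothetical sketch: a kernel that assembles 370 "B2xx" extended
     * opcodes from constants.  opcode values are illustrative, not the
     * actual pair involved; the real cp/67 patch was 360 assembler.   */
    #include <stdint.h>
    #include <stdio.h>

    /* architecturally correct second bytes of two B2xx opcodes ...    */
    /* #define OPC_ALPHA 0x10                                          */
    /* #define OPC_BETA  0x13                                          */

    /* ... swapped, so the kernel matches the engineering machine that
     * implemented the pair in reverse:                                */
    #define OPC_ALPHA 0x13
    #define OPC_BETA  0x10

    static void emit_b2(uint8_t second)
    {
        /* stand-in for issuing the two-byte instruction */
        printf("issuing instruction B2%02X\n", second);
    }

    int main(void)
    {
        emit_b2(OPC_ALPHA);
        emit_b2(OPC_BETA);
        return 0;
    }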

Eventually there were a large number of 370/145s running internally around the corporation with virtual memory capability (even tho it hadn't been announced yet). A couple of people in san jose added modifications to the cp/67i kernel to support 3330s and 2305s ... which was referred to as cp/67sj. This ran on a large number of internal machines before the redone vm/370 kernel was available.

minor past refs to cp/67sj system:
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse

this wasn't exactly support for 3330s & 2305s on a real 360/67 ... but it was support in the cp/67 kernel for 3330 & 2305 devices.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

DASD Architecture of the future

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DASD Architecture of the future
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 22 Apr 2004 09:02:58 -0600
IBM-MAIN@ibm-main.lst (Phil Payne) writes:
So I'm not sorry that device geometry changes have been taken out of the equation. _EVERYTHING_ is virtualised these days, so it doesn't affect vendors' ability to adopt new technologies. Frankly, I see no chance at all of the implementation of any new DASD architecture in the classic sense within z/OS - there are too few resources available at IBM and a raft of other things to do.

But - as I keep on saying - I _would_ like to see a SAN File System client as a z/OS subsystem.


probably the original SAN system was developed at NCAR in the early '80s. It was an MVS system on a 4341 (4381?). Basically there was a DASD farm with HYPERchannel A515 remote device adapters (boxes that emulated an ibm channel, which ibm control units could interface to). Various cray supercomputers and other processors with HYPERchannel adapters could access the DASD farm thru the A515 adapter boxes.

The MVS processor had "real" channel attachments to the DASD farm (in part an FE maintenance & serviceability requirement). The MVS machine also had an A22x adapter .... i.e. a HYPERchannel adapter box that looks like a control unit and attaches to a real ibm channel. The other processors communicated with the MVS machine via the A22x adapter as the "control" mechanism ... requesting access to data. The MVS system would prep an A515 box and return information to the requestor. The requestor would then invoke the DASD channel program that had been preloaded into the memory of the A515 box .... resulting in direct data transfer between the DASD and the various processors in the SAN environment.
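to make the split between control path and data path concrete, here is a minimal hypothetical C sketch (all names, types, and values are invented for illustration; the real implementation was MVS code driving HYPERchannel A22x and A515 boxes): the client asks the MVS control server for access, the server stages a channel program into an A515 and hands back a token, and the client then fires that staged program, so the data moves directly between the DASD and the client without passing through the MVS machine:

    /* hypothetical sketch of the NCAR-style control/data split.
     * names are invented for illustration only.                      */
    #include <stdio.h>

    typedef struct {            /* returned by the control server     */
        int a515_unit;          /* which A515 holds the staged program */
        int program_slot;       /* where in its memory it was loaded   */
    } xfer_token;

    /* control path: client -> A22x -> MVS.  MVS validates the request,
     * preloads a DASD channel program into an A515, returns a token.  */
    static xfer_token request_access(const char *dataset)
    {
        printf("MVS: staging channel program for %s\n", dataset);
        xfer_token t = { .a515_unit = 1, .program_slot = 7 };
        return t;
    }

    /* data path: client tells the A515 to run the staged channel
     * program; data moves directly DASD <-> client memory, never
     * through the MVS machine.                                        */
    static void run_staged_program(xfer_token t, void *buf, long len)
    {
        (void)buf;
        printf("A515 %d: executing slot %d, %ld bytes direct to client\n",
               t.a515_unit, t.program_slot, len);
    }

    int main(void)
    {
        char buffer[4096];
        xfer_token t = request_access("CRAY.RESULTS.DATA");
        run_staged_program(t, buffer, (long)sizeof buffer);
        return 0;
    }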

LANL also did something analogous using an MVS system and HYPERchannel. The LANL system was picked up and marketed by General Atomics as Datatree. In the early 90s, NCAR had an effort porting the MVS code to AIX, planning on marketing it as mesa archival (they moved into a small office complex just off 50 as you entered boulder).

In the late '80s, one of the things done in the ANSI standardization effort for HiPPI was something called 3rd party transfers, supported by HiPPI switches. The facility was that a SAN control processor could enable/disable paths between processors and (IPI) disks based on various requirements (as a follow-on to the MVS-based HYPERchannel implementation).
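a similarly hedged sketch of the switch-based follow-on (again, every name and structure here is invented for illustration, not taken from the HiPPI documents): the control processor, rather than the hosts, owns the connectivity matrix, so a host can move data to or from an IPI disk through the switch only while the controller has that path enabled:

    /* hypothetical sketch: a SAN control processor owning the
     * path-permission matrix of a HiPPI switch.                    */
    #include <stdbool.h>
    #include <stdio.h>

    #define HOSTS 4
    #define DISKS 8

    static bool path_enabled[HOSTS][DISKS];   /* the switch's view  */

    static void enable_path(int h, int d)  { path_enabled[h][d] = true; }
    static void disable_path(int h, int d) { path_enabled[h][d] = false; }

    /* a 3rd-party transfer only proceeds if the control processor
     * has previously enabled the host<->disk path                  */
    static int transfer(int host, int disk, long bytes)
    {
        if (!path_enabled[host][disk]) {
            printf("host %d -> disk %d: rejected (path disabled)\n",
                   host, disk);
            return -1;
        }
        printf("host %d -> disk %d: %ld bytes direct through switch\n",
               host, disk, bytes);
        return 0;
    }

    int main(void)
    {
        enable_path(2, 5);
        transfer(2, 5, 1L << 20);   /* succeeds */
        disable_path(2, 5);
        transfer(2, 5, 1L << 20);   /* rejected */
        return 0;
    }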

misc. past references:
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#60 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002k.html#31 general networking is: DEC eNet: was Vnet : Unbelievable
https://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003i.html#53 A Dark Day

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

