List of Archived Posts

2004 Newsgroup Postings (05/27 - 06/26)

Usenet invented 30 years ago by a Swede?
before execution does it require whole program 2 b loaded in
Text Adventures (which computer was first?)
Need to understand difference between EBCDIC and EDCDIC
Infiniband - practicalities for small clusters
Adding Certificates
Adding Certificates
Text Adventures (which computer was first?)
network history
Need to understand difference between EBCDIC and EDCDIC
IBM's Electronic Data Interchange Support
Infiniband - practicalities for small clusters
network history
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
HERCULES
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
Infiniband - practicalities for small clusters
|d|i|g|i|t|a|l| questions
|d|i|g|i|t|a|l| questions
Infiniband - practicalities for small clusters
network history
Infiniband - practicalities for small clusters
Most dangerous product the mainframe has ever seen
[IBM-MAIN] HERCULES
network history
network history
network history
network history
network history
network history (repeat, google may have gotten confused?)
network history
network history
Infiniband - practicalities for small clusters
spool
IBM 7094 Emulator - An historic moment?
[URL] (about) passwords
command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
Sequence Numbbers in Location 73-80
Sequence Numbbers in Location 73-80
command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
PL/? History
PL/? History
Hercules
Adventure game (was:PL/? History (was Hercules))
Chained I/O's
Channel busy without less I/O
Chained I/O's
Chained I/O's
effeciently resetting a block of memory
The WIZ Processor
War
Adventure game (was:PL/? History (was Hercules))

Usenet invented 30 years ago by a Swede?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Usenet invented 30 years ago by a Swede?
Newsgroups: sci.archaeology,soc.culture.nordic,soc.history.medieval,soc.history.science,comp.protocols.tcp-ip
Date: Thu, 27 May 2004 17:38:15 -0600
Philip Deitiker writes:
The first personal computer I personally saw was an 8Kb Z80 based kodak in which you had to supply a cassette tape recorder to feed in the programs and store them. A TV set with the lousiest resolution you ever saw, the device had a keypad on it, but it was not a keyboard. Way back in the early days people actually used cassette recorders instead of disk drives. In 1982 when I learned how to program it was done using punch cards encoded with Fortran V written on terminals run by something called Music, and the cards were then processed, usually overnight. Next day you find out how many misspelling errors you made. It took about a week to debug a program you would not even have created errors in with Visual Basic (option explicit turned on) today.

the first personal computer i saw was a 64kb 360/30 that the university would let me have from 8am sat until 8am monday (actually they nominally shut down the computing center over the weekend and i could check in 8am sat morning and pull a 48hr shift and then have to possibly worry about staying awake another 8-10 hrs for monday classes).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

before execution does it require whole program 2 b loaded in

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: before execution does it require whole program 2 b loaded in
Newsgroups: bit.listserv.ibm-main
Date: Thu, 27 May 2004 17:53:10 -0600
gilmap writes:
At one time, AL3() was used pervasively, perhaps by both coloro che sanno and mandolinisti. I suspect its vestiges are yet prevalent.

... note as an aside ... a lot of the os system services macros forced alignment of mixed data & instructions with appropriately generated cnops (program origins were assumed to be at least doubleword aligned ... so the system services macros could calculate alignment at the doubleword level from the displacement from the program origin, taking it on faith that the program really was at a minimum doubleword aligned).

cms system call macros skipped the cnops and just allowed fullword constants (even relocatable adcons) to be half-word aligned following the svc instruction.
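
a small illustration of that alignment arithmetic (a minimal sketch in python, not any actual macro expansion; the only assumption carried over from above is the doubleword-aligned program origin):

def cnop_padding(displacement, boundary=8):
    # bytes of padding a macro would need to generate so that data placed
    # after the current location lands on "boundary" alignment, relying on
    # the program origin itself being at least doubleword (8-byte) aligned
    return (-displacement) % boundary

# e.g. inline parameter data currently at displacement 0x3A from the origin
print(cnop_padding(0x3A))        # 6 bytes of padding to reach a doubleword
print(cnop_padding(0x3A, 4))     # 2 bytes to reach the next fullword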

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Text Adventures (which computer was first?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Text Adventures (which computer was first?)
Newsgroups: alt.folklore.computers
Date: Thu, 27 May 2004 18:20:25 -0600
Peter Flass writes:
No, you're not. Take a look at Hercules, an open-source IBM mainframe emulator for Windoze/Linux, etc. Legal versions of MVS 3.8J and VM/370 (rel 8?) are available with the software you'll need bundled. These aren't the most recent OS's, but, particularly from a programmer's POV are pretty close to everything but the newest 64-bit stuff.

the last "release" of vm/370 was 6. this was the transition period when ibm was charging for application software (as of the unbundling announcement 6/23/69) but not operating system software ... and starting to charge for all software.

after unbundling ... the theory was that operating system software could be shipped for free because it was necessary to make the machine run (and therefore there was an excuse for "bundling" the kernel with the machine).

with mainframe clones appearing and customers getting operating systems for "free" ... there was a transition to figuring that kernel software could be charged for ... totally independent of any past "bundling" questions.

I got to be the guinea pig with the resource manager, the first charged-for kernel software (for the privilege, I got to spend more time than I thought I would ever need learning about business and pricing models). The initial logic was that the feature wasn't actually necessary to make the hardware operate ... it just made it operate better. random resource manager refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare

this was for vm/370 release three.

then vm/370 release four was going to ship ... the base system again for free. the problem was that a lot of release four was support for multiprocessors ... which had been architected/designed based on a lot of features in the resource manager (which was charged for). random multiprocessor refs:
https://www.garlic.com/~lynn/subtopic.html#smp

the resolution was to repackage about 80 percent of the resource manager code as part of the base (free) release 4 ... and continue to charge the same price for the abbreviated, slimmed down "release 4" resource manager.

releases five and six had more and more kernel features packaged as part of charged-for add-ons.

Finally, come the release seven time-frame, the decision was made to have a single kernel package with a flat-rate price for the whole thing. No more somewhat artificial distinction between what was free and what was charged for ... everything was charged for. This appeared to be somewhat motivated by the proliferation of 370 clones ... in effect getting a free operating system at IBM's expense.

In any case, the name was changed to VM/SP (release 1; instead of vm/370 release 7 with a slew of optional extra priced features).

So what is likely to be out there is the base release 6 ... the last "free", non-charged-for release (w/o any of the priced add-ons).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Need to understand difference between EBCDIC and EDCDIC

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need to understand difference between EBCDIC and EDCDIC
 charterset.
Newsgroups: bit.listserv.ibm-main
Date: Thu, 27 May 2004 22:00:02 -0600
"John S. Giltner, Jr." writes:
I would say that the "EDCDIC" is a typo, especially since the file name on one of the links is EBCDIC and the table names on the other links are also EBCDIC. Only in the column header of the second table does it change to EDCDIC. Some other references I found for "EDCDIC" stated that it is the code used on IBM mainframes, which do not use EDCDIC but do use EBCDIC; in fact in one paper it used the terms EBCDIC, EDCDIC, and EDCIC to describe the coding used on an IBM mainframe.

if you have your trusty green card ... quite a bit of it is seven(/eight) columns with:

decimal
hexadecimal
mnemonic (aka if there is instruction opcode with that value)
graphic & control symbols BCDIC (aka pre-360)
graphic & control symbols EBCDIC
7-track tape BCDIC
punched card code
system/360 8-bit code
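
a rough modern-day way to poke at those code points (a minimal sketch in python; it assumes the built-in "cp037" codec as a stand-in for an ebcdic code page, and throws in ascii for comparison even though ascii isn't a green card column):

# print the ASCII and EBCDIC (cp037) code points for a few characters,
# showing two different encodings of the same graphics
for ch in ("A", "a", "0", " "):
    ascii_val = ord(ch)
    ebcdic_val = ch.encode("cp037")[0]
    print(f"{ch!r}  ascii=0x{ascii_val:02X}  ebcdic=0x{ebcdic_val:02X}")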

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Fri, 28 May 2004 06:42:09 -0600
glen herrmannsfeldt writes:
What about CALL/OS, or possibly slightly different names for similar systems?

I did use it some years ago, and it seemed to work pretty well, though it was somewhat more limited than I thought it needed to be.


call/os was a subsystem monitor that was (at least) loaded by standard os/360 and ran on any of the real-memory 360s. the 360/67 was the only official product with hardware translation & virtual memory (although cambridge had done custom relocation hardware on a 360/40 for the original virtual machine system, cp/40).

apl\360 was similar ... as was cps (conversational programming system), etc; there were some number of terminal-oriented multitasking monitors developed for real-memory 360s. most of them operated with limited-size workspaces that were swapped (say 8k to maybe 32kbytes) ... some analogy to ctss ... except there was a double-layered operating system: the base real-memory 360 operating system, with an interactive monitor layered over it that handled the terminals and swapping.

slightly more primitive (less powerful) were the conversational remote job entry systems ... basically a terminal editor with job submission and retrieval. TSO was sort of in that genre. I had done something similar at the university, taking the CMS editor syntax and re-implementing it from scratch, putting 2741 & TTY support and the re-implemented CMS editor into HASP. the original wylbur was done at stanford:
http://datacenter.cit.nih.gov/interface/interface206/if206-01.htm
http://portal.acm.org/citation.cfm?id=362234&dl=ACM&coll=portal
http://portal.acm.org/citation.cfm?id=801770&dl=ACM&coll=portal

tss/360 was the official virtual memory operating system for the 360/67. the cp/67 virtual machine (and virtual memory) operating system was developed by the science center (although it was somewhat a port of cp/40 from the custom-modified 360/40):
https://www.garlic.com/~lynn/subtopic.html#545tech

another virtual memory operating system developed for the 360/67 was MTS (michigan terminal system), from UofMich.

tss/360 sort of continued to limp along after it was decommissioned as an official product. a tss/370 port was done. in the late '70s, a custom unix port was done, interfacing to the low-level tss/370 kernel APIs, which was used inside AT&T.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Adding Certificates

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adding Certificates
Newsgroups: comp.security.firewalls
Date: Fri, 28 May 2004 08:24:58 -0600
"Beoweolf" writes:
The question is...what is a certificate's main function?

The answer is...to uniquely identify the bearer of the certificate

The problem is...I would like to add certificates to a firewall, but I don't want to configure the firewall

The answer is...No, Your identity (online) is tied to your IP address, if you change your IP address, you will need to renew your certificates. If you do not provide an IP address, you have not satisfied the requirement, having an IP address, which is needed to certify the source, your identity.


basically a certificate is to bind some information to a public key, typically for use in an offline environment.

typically somebody generates a digital signature, appends a certificate and transmits it.

the receiver eventually gets the transmission, uses (public key in) the certificate to verify the digital signature and then has some comfort that the transmission is somehow related to the information bound in the certificate. the type of information bound in the certificate might be identity or even permission related.

the typical business process has the receiver looking up some available account record for the information instead of with a certificate.
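
a small sketch of that business process (python; it assumes the third-party "cryptography" package, and uses a hypothetical in-memory account table standing in for the relying party's real records):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

account_keys = {}     # account number -> public key already on file

def verify_transmission(account_number, message, signature):
    # look the public key up in the account record rather than taking it
    # from an appended certificate; raises InvalidSignature on failure
    public_key = account_keys[account_number]
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    return True

# registration happens once, out of band; afterwards no certificate is needed
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
account_keys["12345"] = private_key.public_key()
msg = b"transaction: account 12345, amount 19.95"
sig = private_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
print(verify_transmission("12345", msg, sig))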

certificates had a design point from the early '80s for offline email ... the environment where somebody dialed up their "post office", exchanged email, hung up, and processed the email offline. the certificate was designed to serve an introductory function when the recipient had never previously had any communication with the sender.

the function of the certificate was to serve as a substitute, in an offline electronic environment of the early 80s, for access to the real information. In a firewall scenario ... the analogous situation would be the lack of a RADIUS-like function able to access the real information.

the x.509 identity certificate standards (somewhat from the early 90s) started to run into problems by the mid-90s because of the serious privacy issues related to having potentially enormous amounts of privacy-related information flying around in such certificates.

Somewhat in an attempt to preserve the certificate paradigm and demonstrate some marginal utility for certificates ... there was some effort to retrench the x.509 identity certificate to something called a relying-party-only certificate ... which basically contained only an account number and a public key. Some financial operations demonstrated such an implementation in conjunction with various payment related operations.

There were some serious issues with this mode of operation.

1) it was trivial to show that in such a relying-party-only scenario, the certificate was redundant and superfluous. by definition, if you have to access the account record with all the real information, then the certificate doesn't actually contain any useful information, making the existence of the certificate redundant and superfluous. it also violates the basic assumption of the certificate design point, a substitute in an offline environment for access to the real information.

2) even relying-party-only certificates could get fairly large, even into the 4kbyte to 12kbyte range. in the payment attachment scenario, the typical payment message is 60-80 bytes, containing the account number and the amount of the transaction. In the relying-party-only scenario a 128byte digital signature would be attached to the payment message, followed by the digital certificate. This is sent to the relying-party financial institution that has all the information in an account record (including a copy of the public key). The destination financial institution pulls the account number from the payment transaction, reads the account record, retrieves the public key and verifies the digital signature. At no time did it need to resort to referencing the certificate. So, in an attempt to demonstrate some marginal utility for a relying-party-only certificate, a certificate is appended that need never be used but that bloats the payload of a typical payment message by approximately one hundred times.
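
the bloat claim as back-of-the-envelope arithmetic (python; the sizes are assumed, representative values, with the 6kbyte certificate just a midpoint of the 4k-12k range above):

payment_msg = 70         # typical 60-80 byte payment message
signature = 128          # appended digital signature
certificate = 6 * 1024   # assumed relying-party-only certificate size

with_cert = payment_msg + signature + certificate
without_cert = payment_msg + signature

print(with_cert / payment_msg)     # ~90x the original payment message
print(with_cert / without_cert)    # ~32x what the relying party actually uses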

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Adding Certificates

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Adding Certificates
Newsgroups: comp.security.firewalls
Date: Fri, 28 May 2004 09:48:36 -0600
Mailman writes:
I don't know where you got that idea, but it just doesn't work that way.

A certificate is a third party confirmation of your NAME, not your address. Thus a server certificate (which is probably what the OP was asking about) simply confirms that the server www.serv.com really is www.serv.com. What the IP address is has nothing to do with this: that is the job of the DNS.

Look up the definition of the certificate fields for X509.

As to "adding certificates to the firewall" - the question is meaningless. A firewall has no certificates, nor does it use them. A certificate is almost always connected to an application (Web server, EMail server, proxy, browser, EMail client, VPN, etc), so you need to look for the documentation for those. -- Mailman


domain name server certificates exist because of trust issues with the domain name system ... am I really talking to the server I think I'm talking to?

basically a 3rd party certification authority is asked to certify that i'm really the owner of the domain name and to issue a certificate to that effect.

the client types or clicks on a URL ... the browser goes off and contacts the server, the server sends back a certificate ... the browser checks to see if it is a valid certificate (using a table of certification authority public keys that have somehow been loaded into the browser at some time) and then checks to see if the domain name in the certificate matches the typed or clicked on value. so one of the exploits is to get the user to click on a field that actually points to a domain name that matches a certificate that a bad guy might happen to have.
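
a minimal sketch of that browser-side check (python standard library; "www.example.com" is just a placeholder host):

import socket, ssl

host = "www.example.com"               # the name the user typed/clicked on
ctx = ssl.create_default_context()     # table of trusted CA keys on this machine
with socket.create_connection((host, 443)) as sock:
    # the handshake verifies the CA signature over the server certificate and,
    # since check_hostname defaults on, that the certificate matches "host"
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print(tls.getpeercert()["subject"])   # the identity the CA certified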

now, the 3rd party certification authorities ... in order to certify that somebody is really the domain name owner ... have to contact the authoritative agency for domain name ownership, which turns out to be the domain name infrastructure ... the same entity that people supposedly have trust issues with and therefore believe they need certificates.

now, 3rd party certificate authorities have some trust issues and process issues with the domain name infrastructure also ... there happen to be a number of proposals to improve the integrity of the domain name infrastructure ... some of them being backed by the 3rd party CAs (if people can't trust the source of the certified information, how can they trust the certificate).

one of these proposals, somewhat from the 3rd party CA industry, involves a domain name owner registering a public key with the domain name infrastructure at the same time they register their domain name. Future interaction between the domain name owner and the domain name infrastructure can involve digital signatures that can be validated with the registered public key ... minimizing things like domain name hijacking and various other exploits. So this improves the reliance and trust that the 3rd party CA industry places in the domain name infrastructure.

the other issue is that the 3rd party certification process is a fairly expensive identification process. the current paradigm has the domain name owner registering a bunch of identity information with the domain name infrastructure. when the domain name owner applies for their server certificate, they also provide a lot of identification information. The 3rd party CA then has an expensive and error-prone process of matching the presented identification information with the identification information on file with the domain name infrastructure. With a public key on file with the domain name infrastructure, the domain name owner can just digitally sign the certificate request. the 3rd party CA then just has to retrieve the public key on file and verify the digital signature. This changes an expensive and error-prone identification process into a much simpler, less error-prone, and less expensive authentication process.

So there is something of a catch-22 for the 3rd party certification industry.

1) If the integrity of the domain name infrastructure is improved, then the lack of trust supposedly will be lowered, which likely also results in lower demand for certificates.

2) if there is a public key on file with the domain name infrastructure (for the domain name owner) that the 3rd party certification process can retrieve for authentication ... then presumably other entities, like end-users and clients, might also be able to retrieve the public key for performing authentication ... eliminating the requirement for digital certificates issued by 3rd party certification authorities.

misc. past posts on ssl server certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

note that general case of using the domain name infrastructure for a public key server (w/o needing certificates) shows up in some of the anti-spam proposals for authenticating email senders.

part of the issue is that the domain name infrastructure is actually a distributed, near-real-time, information distribution mechanism. it is currently primarily used to distribute the ip-address related to a specific domain or host name. however, the domain name infrastructure has also been used for serving up other kinds of real-time data. There is nothing that prevents the domain name infrastructure from also serving up real-time public keys that are bound to a domain or host name.
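
a small sketch of that idea (python; it assumes the third-party "dnspython" package and a hypothetical convention of publishing the key in a TXT record, in the same spirit as the anti-spam proposals above):

import dns.resolver    # third-party "dnspython" package (assumed available)

def fetch_published_keys(name):
    # query the domain name infrastructure in real time for TXT records and
    # return any that follow a hypothetical "key=..." publishing convention
    keys = []
    for rr in dns.resolver.resolve(name, "TXT"):
        txt = b"".join(rr.strings).decode()
        if txt.startswith("key="):
            keys.append(txt[4:])
    return keys

print(fetch_published_keys("example.com"))   # [] unless such a record exists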

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Text Adventures (which computer was first?)

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Text Adventures (which computer was first?)
Newsgroups: alt.folklore.computers
Date: Fri, 28 May 2004 10:12:26 -0600
"Jim Nugent" <njim2k-nntp@yahoo.com> writes:
A side note: This machine was hooked up to a BBN "IMP" (Interface Message Processor) which was essentially a NIC as big as a refrigerator and a gateway to the ARPANet (not sure it was called that at the time) aka the baby Internet. I know Stanford had one (we could log in to their '10 and they into ours), don't know who else... WPI?.

the arpanet project defined a packet-switched network implemented with IMPs ... which were effectively network front-end processors (FEPs) connected to backend hosts. there were a variety of backend hosts that IMPs connected to ... including some number of IBM mainframes.

it predated the internet and didn't support IP, the internetworking protocol. the great switch-over to internetworking and IP was 1/1/83.

one of the reasons that the internal network was larger than the arpanet/internet up until about mid-85 was that there was effectively gateway function in the internal network nodes, allowing support for heterogeneous networking. the internet didn't get heterogeneous networking and ip/internetworking until the 1/1/83 switch-over. At the time of the switch-over, arpanet had approximately 250 nodes and the internal network was nearing a thousand nodes (which it hit that summer). Besides the introduction of internetworking, heterogeneous networking, and gateways ... the other factor was that with IP, there started to be a number of workstation and PC nodes ... while the internal network pretty much remained a host/mainframe based infrastructure. The internal network technology was also used for the bitnet & earn university/research networks, which in some large part were also directly or indirectly funded by the corporation.

misc. past internet post:
https://www.garlic.com/~lynn/internet.htm

some bitnet/earn posts:
https://www.garlic.com/~lynn/subnetwork.html#bitnet

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

network history

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: network history
Newsgroups: alt.folklore.computers
Date: Fri, 28 May 2004 15:49:14 -0600
cstacy@news.dtpq.com (Christopher C. Stacy) writes:
I'm confused by a couple of these statements. First, I thought that the IBM network was composed of IBM systems, not a heterogeneous environment. Second, the ARPANET was all about heterogeneous networking: connecting all different kinds of hardware and software systems together. What is it that you mean by "heterogeneous networking"?

double check ... all of the network was homogeneous IMPs ... there may have been heterogeneous systems ... but the heterogeneous nature of the systems was hidden behind the homogeneous IMPs and homogeneous networking infrastructure.

the big innovation with the 1/1/83 switchover was the introduction of internetworking and gateways .... so that different networks ... even different heterogeneous networks ... could be interconnected. that was somewhat why the term "internetworking" was chosen ... to be able to interconnect different networks.

a possible semantic cue/clue to the difference before and after the great 1/1/83 switch-over ... was that the "after" networking was the "internetworking protocol" ... giving rise to the abbreviated term "internet"

the internal network may have been homogeneous machines ... as opposed to heterogeneous hosts ... but there were, in fact, a number of different operating systems for those machines, along with different networking implementations.

the mainstay for the internal network was the networking support developed at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
the same place that developed virtual machine operating systems, cp/67, the origins of vm/370, cms, gml (which begat sgml, html, xml, etc) and a number of interactive offerings.

a lot of people are more familiar with the mainframe batch system networking of NJI in JES2 (somewhat derived from the TUCC networking modifications to HASP). HASP had a one-byte table for pseudo unit record devices ... and the TUCC/NJI networking support mapped networking nodes into the remaining entries in the 255-entry pseudo device table.

this network support was more akin to the arpanet in that it had an extremely strong homogeneous implementation ... in addition to not being able to define more than 255 networking nodes (at most; a typical JES2 installation might have 60 defined pseudo unit record devices, so the maximum number of defined networking nodes was more like 200 or less). JES2 also had the downside that if it encountered traffic where either the origin node or the destination node wasn't in its network table ... the traffic was trashed.

The homogeneous nature of the JES2 implementation was possibly even stronger than the arpanet/imp implementation ... with the added downside that the JES2 networking implementation was in the batch operating system kernel ... and if JES2 crashed it was likely to crash the whole system. There was the famous scenario where a system in San Jose upgraded its JES2 networking and started injecting files into the network ... and some that eventually arrived in Hursley caused the hursley systems to crash.

The VM networking implementation was used for the internal network backbone ... since it didn't have network node limitations AND didn't trash traffic where it didn't recognize the origin (it somewhat had to know the destination in order to route the traffic). The VM network implementation allowed much more freedom and ease in network growth than the JES2 implementation.

Also, the gateway function was used to interface arbitrary protocols to a vm network node .... and because implementations like JES2 were so fragile ... there eventually evolved all sorts of special VM network gateway functions. Special drivers were built for VM gateways that would be used when connecting to specific JES2 systems. It became the responsibility of the VM systems to see that the JES2 system didn't crash. The VM drivers would have special code that recognized JES2 transport headers that were at a different version and/or release level than the immediate system it was communicating with ... and would convert the headers to an acceptable format to avoid crashing locally connected JES2 systems. So besides not having the homogeneous nature of the JES2 systems, the VM implementation had basically embedded gateway capability to deal with many different interfaces, was pretty much free from network node limitations, and wouldn't trash traffic from origins that it didn't recognize (possibly recently connected machines somewhere in the network).
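
a purely hypothetical sketch of that kind of gateway fixup (python; the header fields, node names, and version numbers are invented for illustration and are not the real NJE/NJI formats):

def downlevel_header(header, peer_version):
    # vm gateway driver-style fixup: rewrite anything the locally attached
    # JES2 system wouldn't understand so the traffic can't crash it
    fixed = dict(header)
    if fixed.get("version", 0) > peer_version:
        fixed["version"] = peer_version
        fixed.pop("new_options", None)   # drop fields unknown at the older level
    return fixed

incoming = {"origin": "SJSVM1", "dest": "HURSLEY", "version": 4, "new_options": "x"}
print(downlevel_header(incoming, peer_version=3))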

The arpanet didn't really get these capabilities until the great switchover on 1/1/83 to the internetworking protocol ... which then basically begat the term "internet".

some misc. internet archeological references:
https://www.garlic.com/~lynn/rfcietf.htm#history

note: if you instead select
https://www.garlic.com/~lynn/rfcietf.htm
and then select "misc historical references", then when the referenced historical RFC numbers are selected, the corresponding RFC summary will be brought up in the lower frame. And as always ... clicking on the ".txt=nnnn" field in the RFC summary retrieves the actual RFC.

random past post mentioning heterogeneous/homogeneous network issues
https://www.garlic.com/~lynn/2004g.html#7 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/99.html#206 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2000.html#74 Difference between NCP and TCP/IP protocols
https://www.garlic.com/~lynn/2000d.html#67 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#13 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#30 Is Tim Berners-Lee the inventor of the web?
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2003g.html#44 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003h.html#7 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#9 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#16 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003l.html#0 One big box vs. many little boxes
https://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003n.html#44 IEN 45 and TCP checksum offload
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
https://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004f.html#35 Questions of IP

random past post mentioning the great 1/1/83 switchover
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#6 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002q.html#4 Vector display systems
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
https://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2003g.html#44 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#51 vnet 1000th node anniversary 6/10
https://www.garlic.com/~lynn/2003h.html#7 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#16 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#17 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003i.html#32 A Dark Day
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003l.html#0 One big box vs. many little boxes
https://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003n.html#44 IEN 45 and TCP checksum offload
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
https://www.garlic.com/~lynn/2004d.html#13 JSX 328x printing (portrait)
https://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004e.html#30 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
https://www.garlic.com/~lynn/2004f.html#35 Questions of IP
https://www.garlic.com/~lynn/2004g.html#7 Text Adventures (which computer was first?)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Need to understand difference between EBCDIC and EDCDIC

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need to understand difference between EBCDIC and EDCDIC
 charterset.
Newsgroups: bit.listserv.ibm-main
Date: Fri, 28 May 2004 16:08:02 -0600
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
I don't say any such thing. ASCII is American Standard Code for Information Interchange; it is a standard, and anything not matching the standard is by definition not ASCII. Any binary computer can use ASCII, and any binary computer can use other character sets; "ASCII computer" is as meaningless as "multiple linear regression computer" or "payroll computer".

for those that don't remember, the 360 PSW bit definition 12 was "USASCII mode" ... which was dropped in 370 (again from my trusty green card).

there actually used to be another subtle difference ... i had put TTY support into CP/67 at the university ... and wrote a bunch of code to drive the 2702 to extend the existing cp/67 2741/1052 automatic terminal detect to also do automagic 2741/1052/TTY terminal detect. I had pretty well tested it out when the IBM CE informed me that it wouldn't work reliably ... that while the 2702 allowed you to reassign line-scanners to any line with the SAD command ... they had taken a shortcut in the implementation and hardwired the oscillator (determining the baud rate) to each line.

somewhat as a result, a project was started to build a clone controller ... which is where we've gotten blamed for originating the pcm controller business
https://www.garlic.com/~lynn/submain.html#360pcm

so i had built the ascii->ebcdic and ebcdic->ascii translate tables (somewhat borrowed from btam) and was using them with cp/67 tty support on the 2702.

the 2702 clone was originally built out of an interdata/3 minicomputer and a channel interface board built from reverse engineering the ibm channel interface. the first bytes we got into storage ... we thought were cause for celebration ... but it turned out they looked like garbage. we had overlooked the fact that the ibm linescanner convention places the leading bit in the low-order bit position. communication ascii data arriving in 360 memory had every byte bit-reversed. The translation tables took care of the fact that they were dealing with bit-reversed ascii bytes. in order to fully emulate the 2702 ... our clone also had to do bit-reversal on every ascii byte.
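
a small sketch of that bit-reversal (python; 0x41 is just ascii "A" used as an example):

def reverse_bits(byte):
    # mirror the 8 bits: the line scanner stored the first bit received
    # (the ascii low-order bit) in the 360 byte's high-order position
    out = 0
    for i in range(8):
        out = (out << 1) | ((byte >> i) & 1)
    return out

a = 0x41                                      # ascii 'A' on the wire
print(hex(reverse_bits(a)))                   # 0x82 -- what landed in 360 storage
print(hex(reverse_bits(reverse_bits(a))))     # 0x41 -- reversing twice restores it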

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM's Electronic Data Interchange Support

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM's Electronic Data Interchange Support
Newsgroups: bit.listserv.ibm-main
Date: Fri, 28 May 2004 18:15:05 -0600
poitras writes:
EDI is much more than that. From wikipedia:

"EDI is still the engine behind 95% of all electronic commerce transactions in the world"

https://en.wikipedia.org/wiki/Electronic_Data_Interchange

It wouldn't surprise me if dozens of IBM products incorporated some EDI elements. It's been around for over 30 years. Websphere is the current spot where IBM seems to be concentrating their EDI efforts.

Although I suppose IBM might have some product called "EDI", if I had gotten this call, it would have sounded to me something like, "How long will my release of LU 6.2 be supported?"


LU6.2 may be way too modern ... i was doing a search for some stuff about the protocol between a controller and a 327x terminal ... and happened to stumble across a history reference to EDI being closely tied to bisynch.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Fri, 28 May 2004 21:41:11 -0600
"Stephen Fuld" writes:
The driving requirement in the early 70s (pre virtual memory) was the TSO environment. You don't want to have every user, when his program was swapped out due to quantum expiring, waiting for user input, etc., to have to be reloaded back at the same memory location from which it was swapped out.

tso had well acknowledged, extremely poor human factors .... in the mid to late 70s (after virtual memory w/mvs) there were big performance and human factors issues with the 3274/3278, making it difficult to deliver subsecond response. something of a stock answer was that the majority of people using 3278s aren't expecting (or can't expect) subsecond response anyway. the 3272/3277 added a pretty consistent tenth of a second hardware latency (regardless of factors) .... the 3274/3278 generation increased the hardware latency to over a half second minimum ... and it could vary to quite a bit more based on how much data was being transferred or other concurrent factors. I was providing an infrastructure that under heavy load had .11 second system response for the 90th percentile of trivial interactive commands. Coupled with the 3272/3277 hardware latency ... that made it a little less than a quarter of a second. Most TSO sysadmins talked glowingly about being able to provide any sort of avg. system response that came even within spitting distance of a second.

i've repeatedly commented that tso was little more than a crje implementation ... and one of the poorer ones at that. it was never supposed to be a human factors interactive system ... the batch applications didn't need base/bound ... and it wouldn't actually have been possible to cost justify base/bound based on TSO ... because that wouldn't have been sufficient to fix TSO thruput.

we had an interesting situation with one datacenter having a heavily loaded 168 MVS system in the late 70s sharing a disk farm with a heavily loaded 168 VM/CMS system. There was an edict that controller strings of disks were nominally dedicated exclusively to one system or the other (primarily to eliminate the performance pollution that would happen if an MVS disk was being accessed thru a controller that was nominally reserved exclusively for VM).

One day, the MVS system administrator inadvertently mounted an MVS disk on a drive nominally designated as VM exclusive. Within a couple of minutes the VM/CMS users started making irate phone calls to the datacenter.

It turns out that the OS/360 genre of operating systems has a nasty habit of using multi-track searches on nearly all of their disks ... which results in horrible busy conditions on channels, controllers, and drives ... which, in turn, leads to horrible response latency characteristics. TSO users aren't aware of it because their response is always horrible. However, a single MVS disk on a nominal string of 16 VM drives sharing the same controller was enough to send all the affected vm/cms users into severe apoplexy within a few minutes. If a single MVS disk sharing a controller with what are nominally 15 VM/CMS disks ... sends vm/cms users into severe apoplexy ... can you imagine what 16 MVS disks on an MVS dedicated controller string does to the latency on an MVS system? ... it is several times worse ... however the TSO users are so beaten down that they don't even realize it.

in any case, back to the story ... the vm staff asked the mvs staff to move the disk back to one of their "own" controller strings. The MVS staff declined because it might interrupt some application in process using the disk. SO .... we had this highly souped up VS1 system tailored for running under VM/370 ... which also uses the os genre of multi-track search conventions. We got one of the VS1 packs mounted on an MVS string with the MVS system disks ... and cranked up an application under VS1 (under VM on an otherwise heavily loaded 370/158) ... and brought the 370/168 MVS system nearly to its knees. At that point the MVS staff decided that they would swap drives ... and mvs disks would be kept on the mvs side and vm disks would be kept on the vm side.

the point of the story is that there are a huge number of factors in os/360 contributing to horrible tso response ... even after the move to the mvs virtual memory infrastructure ... which in theory is a far superior solution to base/bound swapping ... and it still couldn't fix TSO response.

these are some of the reasons that most of the internal software development programming was done on internal vm/cms systems ... regardless of what software was being developed and/or for what platform.

I also provided support and infrastructure for the internal HONE system, which was the online infrastructure for the worldwide field, branch, marketing and sales people. The US HONE complex had close to 40,000 users defined. This was all VM/CMS based also ... and was one of the largest online time-sharing services at the time (although purely internal again). random hone refs:
https://www.garlic.com/~lynn/subtopic.html#hone

as to one of your other comments ... there was at least one paper published in the 70s claiming corporate credit for virtual memory. The author of one of the letters to the editor, going into gory detail about why it wasn't true, showed me the letter before he sent it off.

so another mvs CKD story ... in the late '70s there was a large national retailer that had their datacenter in the area. they had multiple systems sharing a very large disk farm ... somewhat with specific complexes dedicated to specific regions of the nation. Sporadically during the day their retail applications suffered extremely bad performance degradation ... and nobody could explain why. I was eventually called in and brought into this classroom with a dozen or so student tables ... each about six feet long ... about half of them completely covered with one-foot-high stacks of fan-fold printer output. The output was detailed performance numbers of processor utilization and device activity at 10-15 minute snapshots ... for each processor in the complex, covering days & days of activity.

so i got to examining thousands and thousands of pages of this output. after an hour or so of looking at reams and reams of performance numbers and asking what periods had the bad performance and what periods didn't ... i was starting to see a slight pattern in the activity count for one specific disk among the very large number. Part of the problem was that the outputs were processor specific ... so to integrate the activity for a specific disk ... I had to run totals in my head across the different system-specific counts for each disk drive (each system would report the number of I/Os it did to each specific disk ... but to understand the aggregate number of I/Os for a device, you had to sum the counts across all the individual system reports and keep the running tally from snapshot to snapshot).

So nominal load for these disks runs 40-50 i/os per second ... heavy load might peak at 60-70 i/os per second (with a lot of heavily optimized seek ordering ... or short seek distances) ... this is aggregate for a device across the whole complex. I was starting to see an anomaly for one disk: it was consistently running at 6-7 i/os per second aggregate during the periods described as bad performance. It wasn't high or heavy load ... it just started to be the only relatively consistent observation across the whole complex.

It turns out that because it was such a low number ... nobody else had given it much thought. So I started probing what was on the disk. Turns out it was the application program library for all retail applications ... a large PDS dataset which, it turned out, had a large number of members, and the PDS directory had been extended to three cylinders. Now these were 3330 disk drives that had 19 tracks per cylinder and rotated at 3600rpm or 60rps. Program loading consisted of finding the member in the PDS directory using multi-track search. The search operation would start on the first track of the directory and search until it found the member name or the end of the cylinder. During the search operation, the device, the (shared) controller (which meant all other disks on that controller) and the channel were busy. It turned out that the avg. search was a cylinder and a half (i.e. searches of the three cylinder directory would on the avg. find the member after searching half the entries). So a multi-track search of a full cylinder takes 19/60 seconds ... or a little less than 1/3rd second elapsed time (during which time, a significant amount of the disk i/o across the whole machine room was at a standstill). A search of a half cylinder is a little less than 1/6th of a second (and a second I/O). The loading of the member takes about 1/30th of a second (and a third I/O). So each program member load involves a half second of elapsed time ... and three I/Os. That means the disk was doing about two program loads a second and six i/os a second ... which is very close to what I was seeing (the avg. across the whole complex tended to run somewhere between 6 and 7 i/os per second ... some of the program members required two disk i/os to load). It was operating fully saturated at 6-7 I/Os per second as well as contributing to severe complex-wide system degradation. Of course requests to load application program members out of that specific library were experiencing severe queueing delays ... since all machines and all business regions shared the same application program library. But the constant program loading from that library was also resulting in severe performance impact throughout the infrastructure.
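
the arithmetic above, worked as a quick sanity check (python; the numbers are the 3330 figures as given, and the resulting i/o rates are approximate):

rps = 60.0                # 3330: 3600 rpm = 60 revolutions per second
tracks_per_cyl = 19

full_cyl_search = tracks_per_cyl / rps    # ~0.32 sec, channel/controller/device busy
half_cyl_search = full_cyl_search / 2     # ~0.16 sec, avg hit halfway through
member_load = 1 / 30.0                    # ~0.03 sec to read the member itself

per_load = full_cyl_search + half_cyl_search + member_load
print(per_load)          # ~0.51 sec elapsed per program load (three i/os)
print(1 / per_load)      # ~2 program loads per second
print(3 / per_load)      # ~6 i/os per second -- the "anomalously low" number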

random past postings about the horrible effects of multi-track search on latency, response, thruput, performance:
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#19 OT?
https://www.garlic.com/~lynn/2000f.html#42 IBM 3340 help
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#60 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#6 index searching
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002d.html#22 DASD response times
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
https://www.garlic.com/~lynn/2002l.html#49 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002n.html#50 EXCP
https://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
https://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize
https://www.garlic.com/~lynn/2003f.html#51 inter-block gaps on DASD tracks
https://www.garlic.com/~lynn/2003k.html#28 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003k.html#37 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004d.html#63 System/360 40 years old today
https://www.garlic.com/~lynn/2004e.html#42 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

network history

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: network history
Newsgroups: alt.folklore.computers
Date: Sat, 29 May 2004 09:11:28 -0600
cstacy@news.dtpq.com (Christopher C. Stacy) writes:
I started to write a long reply to this, but I decided that explaining it all would be a waste of time and would just piss you off. We're just going to have to disagree on the semantics. Readers of the literature will note that the IBM interpretation of "heterogeneous networking" is exactly opposite of what everyone else believes.

IBM has some technology called SNA ... system network architecture; I've repeatedly commented that it is not a system, not a network, and not an architecture. For the most part, SNA is primarily a communication (not networking) infrastructure and a terminal control system for very large numbers of terminals. A modest size customer configuration might involve 65,000 connected/supported terminals.

Note that the VM-based networking technology for the internal network was not in any way related to SNA.

SNA doesn't even actually have a network layer (in the osi model) ... there is my separate rant that ISO actually prohibited work on protocol standards that violated the OSI model ... and internetworking/ip doesn't exist in the OSI model and is therefore in violation ... and can't be worked on as an ISO protocol standard. I was involved in some stuff that tried to get HSP (high speed protocol) considered in (ISO chartered) ANSI X3S3.3. It would go directly from transport to MAC. It violated the OSI model in two ways: a) it bypassed the layer 3/4 interface and b) it went directly to the LAN MAC interface, which sits in the middle of layer 3, which also violates the OSI model:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

In any case, SNA doesn't even have a network layer. The closest thing that genre has to something with a network layer is APPN. When APPN was about to be announced ... the SNA/raleigh crowd nonconcurred with the announcement. It was escalated, and after 8-10 weeks APPN managed to get out an announcement letter ... but it was very carefully crafted so as to not state any connection between APPN and SNA.

Some of the SNA heritage is claimed to go back to the PCM controller clones ... which I've gotten blamed for helping originate. As an undergraduate, I worked on a project that reverse engineered the ibm channel interface, built an ibm channel interface board for an Interdata/3 and programmed it to simulate an ibm controller:
https://www.garlic.com/~lynn/submain.html#360pcm

The appearance of PCM controllers is supposedly a prime factor in kicking off the future system effort ... some specific quotes on this:
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
random general posts about failed/canceled FS project:
https://www.garlic.com/~lynn/submain.html#futuresys

The death of FS ... supposedly kicked off some number of projects that tried other methods of tightly integrating host and controllers ... the SNA pu4/pu5 (ncp/vtam) interface might be considered one of the most baroque.

note that the acronym "ncp" for network control program that ran in the 3705 FEPs should not be confused with the "network control protocol" supported by arpanet IMP FEPs.

Several years later, I tried to ship a product that would encapsulate and carry SNA RUs over a real networking infrastructure ... simulating a PU4/PU5 boundary interface to the host PU5/VTAM. About the time the battle over getting APPN announced was going on ... I gave a presentation on the effort in raleigh to the SNA TRB. Afterwards the guy that ran the TRB wanted to know who had invited me (to make sure I was never invited back). some related postings:
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#69 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

It was part of some general activity that my wife and I had called high-speed data transport (HSDT)
https://www.garlic.com/~lynn/subnetwork.html#hsdt

part of HSDT was operating a high-speed backbone for the internal network. One of the things that flowed over the backbone was chip designs. Chip design work went on in several places around the company ... however there was a unique, one-of-a-kind hardware logic simulator out on the west coast ... that performed 50,000 times faster than other methods at the time.

We also got to be effectively the red team for the NSFNET1 and NSFNET2 bids (there was one meeting for the NSFNET2 bid that involved my wife and me as the red team, while the blue team was something like 30 people from seven different labs around the world) ... we weren't actually allowed to bid. My wife did talk the director of NSF into asking for an audit of what we were doing. The conclusion was a letter that said what we had operating was at least five years ahead of all NSFNET bid submissions to build something new (actually we were doing some stuff that internet2 might get around to deploying).

the other thing we had come up with in HSDT that got us in lots of trouble was 3-tier architecture and the middle layer (the genesis of middleware) ... this was at a time when the corporation was running SAA and trying to at least put the client/server genie into a straitjacket (if not back into the bottle):
https://www.garlic.com/~lynn/subnetwork.html#3tier

There is this story (in the middle of some HSDT activity and before the NSFNET1 bid): on the friday before I was to leave on an HSDT business trip to japan, somebody from the sna/raleigh communication group sent out an announcement for a new online discussion group on "high-speed networking" with the following definitions:


low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

the following monday, the definitions seen on a wall in a conference room in tokyo were:

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

in any case ... my assertion has been that the ease of doing gateways was one of the critical reasons that the internal network had almost a thousand nodes at a time when the arpanet had 250 nodes ... and that the 1/1/83 switchover ... effectively creating the internet (and deploying internetworking protocol) with gateways ... was a prime factor in allowing the internet to explode in size and pass the internal network in number of nodes by the middle of 1985.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sat, 29 May 2004 09:51:17 -0600
"Stephen Fuld" writes:
Interesting. I didn't know that. On a similar vein, I recently read an article by an IBMer on the history of disk drives (for the 50th anniversary). It comes very close to claiming (and certainly intimates) that IBM invented the caching disk controller, which is equally false. That one hurts me personally. :-(

the first caching disk (DASD) controllers that I was aware of were the 3880-11/ironwood and the 3880-13/sheriff. Ironwood was an 8mbyte, 4kbyte-record cache architecture and sheriff was an 8mbyte full-track cache architecture. The earliest reference I have to ironwood is 1981.

random past ironwood/sheriff posts:
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2002o.html#52 ''Detrimental'' Disk Allocation
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?

When caching controllers might have been invented, I have no idea. I know that it surprised me ... and some other people ... that IBM had a patent on RAID controllers something like ten years before the term RAID was coined.

I know that about 1980 I was involved in a large amount of disk access tracing which was used to feed a cache model. One of the things that came out of the work was a methodology for doing efficient real-time tracing on production systems that could be used for re-organizing physical locality. The other was that (except for a head/drive full-track buffer compensating for rotational delay) the ROI for cache benefit/byte is greatest at the largest common point; a single 20mbyte system file cache has more benefit than two partitioned ten-mbyte controller caches.
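
As a side illustration (not the 1980 model or data, just a toy): the "largest common point" argument can be seen with a small LRU simulation in python, comparing one shared cache against the same total memory split across two controllers, using made-up sizes and a skewed synthetic reference trace:

# toy comparison: one shared LRU cache vs the same memory partitioned
# between two controllers; sizes and trace are invented for illustration
from collections import OrderedDict
import random

class LRU:
    def __init__(self, slots):
        self.slots, self.d = slots, OrderedDict()
    def ref(self, key):
        hit = key in self.d
        if hit:
            self.d.move_to_end(key)
        else:
            self.d[key] = True
            if len(self.d) > self.slots:
                self.d.popitem(last=False)    # evict least recently used
        return hit

random.seed(1)
# skewed workload: 80% of references go to controller 0's disks
trace = [(0 if random.random() < 0.8 else 1, random.randint(0, 4999))
         for _ in range(200000)]

unified = LRU(2000)                 # one 2000-slot cache at the system level
split = [LRU(1000), LRU(1000)]      # two 1000-slot controller caches

u_hits = sum(unified.ref((c, b)) for c, b in trace)
s_hits = sum(split[c].ref(b) for c, b in trace)
print("unified hit rate:    ", u_hits / len(trace))
print("partitioned hit rate:", s_hits / len(trace))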

Of course this is consistent with my work as an undergraduate on page replacement algorithms ... that global LRU outperformed local LRU replacement strategies (except in some pathological cases) ... misc posts on the global LRU work as an undergraduate in the '60s:
https://www.garlic.com/~lynn/subtopic.html#wsclock

Over ten years after my undergraduate work on global LRU in the late 60s ... there was some argument regarding a stanford phd (clock) thesis on global LRU replacement strategies. To help resolve it ... I was able to cite an early '70s ACM paper by the grenoble science center implementing working set dispatcher (which included a faithful implementation of local LRU) modifications for CP/67 running on a 360/67. Grenoble had a one mbyte 360/67 with 155 pageable pages after fixed kernel requirements.

With similar workload and 35 users, grenoble got about the same performance as the cambridge system did with 75 users on a 768k 360/67 (104 pageable pages after fixed kernel requirements) ... which was running nearly the same version of cp/67 but with global LRU replacement strategy. minor past ref:
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
in the above ... scroll down to the bottom of the post ... past all my "random refs" urls ... to the specific literature citations.

I used to have original copies of onion skin plotting paper with performance curves ... and other pieces from early drafts of the grenoble paper ... but they seem to have disappeared with the passing of years.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sat, 29 May 2004 11:23:05 -0600
"Stephen Fuld" writes:
I never heard that it wasn't supposed to be a "true" interactive time sharing system. There were certainly other such systems available at the time and TSO seemed to be marketed as IBM's equivalent capability. Certainly the sysprogs at the site I was at thought it was that. :-(

fundamentally it was a glorified conversational job entry system ... there was nothing in the infrastructure to really support time-sharing. The "official" time-sharing system was TSS/360 ... aka Time Sharing System ... which was designed for the 360/67 with virtual memory.

there was possibly a lot of confusion between online systems and real time-sharing systems. os/360 was pretty good at online systems ... basically single execution contexts with huge numbers of terminals ... where the majority of each terminal-specific context was some amount of data ... not actually unique programs. Things like the airline res systems, systems that run atm machines, etc.

in some ways TSO was term inflation ... somewhat analogous to SNA ... which was not a system, not a network, and not an architecture ... but basically a communication and terminal control system. somewhat more about that in a post this morning in a.f.c:
https://www.garlic.com/~lynn/2004g.html#12 network history

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sat, 29 May 2004 11:50:09 -0600
"Stephen Fuld" writes:
Interesting. That seems to imply that it was actually considered by the hardware guys and rejected as not worth the cost. Is that true? BTW, one advantage it would have had with the batch workload would be to eliminate the need for the extra work that you had to do to make pieces of program executable at any physical memory location.

i think it means that it may have been considered by the business guys and possibly rejected because there wasn't sufficient ROI ... i.e. how many more machines would be sold against the cost of doing it (at least at the beginning of the 370 time-frame). the hardware issues were probably the least of the considerations ... considering that at least some number of the machines had it anyway (it just wasn't part of the 360/370 architecture). the next step was nominally going to be the extremely radical departure called future system (where absolutely everything changed):
https://www.garlic.com/~lynn/submain.html#futuresys
and then the fallback was somewhat the virtual memory announcement for 370.

i got into a little of something similar with STL over supporting FBA in MVS ... and being able to get out from under the horrible performance impact of multi-track search conventions. They told me that if I gave them fully developed, integrated and tested code ... it would still cost $26m to ship it in a product and I couldn't show any increase in disk drive sales, i.e. at the time they were already selling as many disk drives as they could make. also there were the 3375/florance drives, which were basically FBA drives with CKD emulation overlaid on top (any sales of 3370 FBA drives could be handled by the 3375/florance flavor w/o system software impacts).

note ... i don't know what the reasoning was at the beginning of 360. there were the real memory systems with base+displacement for batch and online systems ... you didn't have to move things in and out a lot (with the exception of loader overlays ... which i don't believe were used much). online systems tended to have a single execution context and may have moved the individual terminal context in & out ... but that was all data and wouldn't need to have any absolute address pointers.

there was a virtual memory add-on for 360 in the 360/67 with the official time sharing system (tss/360). tss/360 had defined and implemented a position-independent paradigm. people that were never exposed to real interactive systems might equate TSO with time-sharing just because TSO was called the time-sharing option. that doesn't mean that TSO was any more related to time-sharing than SNA is related to networking (and in fact, SNA is totally devoid of a network layer).

some number of machines had base&bound in the hardware ... but it was not part of the 360 architecture (from what i understand it was used by various emulators).

the change from 360 to 370 (w/o virtual memory) was primarily some new hardware technology ... but very little architecture change. part of this was to avoid changes to operating systems, application code, software and/or programming paradigm. i believe some number of the machines may have still implemented base&bound in the hardware ... but it wasn't part of the 360/370 architecture and was apparently primarily used by various emulators.

I have a 2nd-hand story of an ibm systems engineer (on the boeing account) doing a highly modified version of cp/67 on real-memory 360/370 using the base&bound hardware.

of course ... all of the mainframe machines have base&bound in the hardware now since it is the basis for implementing the microcode-level ersatz virtual machines called LPARs or logical partitions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sat, 29 May 2004 23:48:05 -0600
"Stephen Fuld" writes:
I was a user/victim of TSS/360 when I was a student at CMU from 68-72. When I got my first programming job in 1972 they were running lots of (13 IIRC) 360/65s. At that time, I believe they were all batch, but soon after that they started experimenting with TSO/360 on a 4 plex running MVT and ASP. I thought TSS was dead before TSO became a reality, but I may be wrong.

tss/360 was decommitted before tso became a reality ... one could conjecture that marketing may have then felt compelled to do the name inflation and call it the time-sharing option.

even tho tss/360 was decommitted ... it continued to limp along (with some loyal customer base) and there was a port done for tss/370. it then saw something of a revival inside AT&T with a unix port down to the low-level tss/370 APIs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sun, 30 May 2004 00:46:40 -0600
the cache model i worked on in 1980 was with dick mattson. there was a lot of system stuff that could do i/o tracing all thru the 70s; this was a technique that could effectively do i/o activity capture with nearly no overhead and therefore could (in theory) be left on all the time. one of the things we looked at was something like hsm/sms doing physical reorg based on load-balancing and arm distance ... as part of normal production operation.

the model had arm, drive, controller, channel, channel director, and main memory. since 370/303x machines were already max'ing out their memory configurations ... we looked at aggregating cache memory in something like the 303x channel director (which, as i've posted before, was really a 370/158 w/o the 370 microcode ... just the integrated channel microcode).

ironwood and sheriff 3880 cache stuff were done in tucson. both were 8mbytes ... so neither was really sufficient to make a lot of cache difference.

using the 3880-11/ironwood as a paging cache ... I was able to show that with the standard strategy, the majority of pages were duplicated in both processor memory and the 3880-11 cache. to make the 3880-11 cache effective ... i claimed that the system needed to be changed to always do "destructive" reads to the 3880-11 controller ... so as to avoid the problem of total duplication of pages in the controller cache and main memory. this is just another variation on the dup/no-dup (aka duplicate/no-duplicate) paging strategy i've periodically posted on. with the no-dup strategy ... you could start treating the 3880-11 memory as an overflow adjunct to main memory pages as opposed to it normally containing duplicates of main memory pages most of the time. pages would get written out to the 3880-11 and they would either get "reclaimed" later with a destructive read (eliminating duplicates in the 3880-11 and main memory) or would age out to real disk. On a real cache miss with a real read to disk ... you always wanted it to bypass the cache since, if it didn't, the result would be a duplicate in cache and real memory ... which was a waste of cache. The only way you wanted the 3880-11 cache populated was on flushing a page out of real memory to the 3880-11.
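
a rough python sketch (invented function and variable names; nothing that ever ran against a real 3880-11) of the difference between the standard caching behavior and the no-dup/destructive-read policy described above:

# standard policy: pages end up duplicated in the controller cache and in
# real memory; no-dup policy: the cache only ever holds pages that real
# memory has given up
def page_out(page, memory, controller_cache):
    # the only time the controller cache should be populated
    memory.discard(page)
    controller_cache.add(page)

def page_in_no_dup(page, memory, controller_cache, disk_read):
    if page in controller_cache:
        controller_cache.discard(page)   # "destructive" read: reclaim, leave no duplicate
    else:
        disk_read(page)                  # real miss: bypass the cache entirely
    memory.add(page)

def page_in_standard(page, memory, controller_cache, disk_read):
    # straight caching: the page ends up in BOTH places, so most of an
    # 8mbyte cache just mirrors what real memory already holds
    if page not in controller_cache:
        disk_read(page)
        controller_cache.add(page)
    memory.add(page)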

the 3880-13/sheriff was also only an 8mbyte cache. they generated marketing material claiming that environments normally got a 90 percent hit rate. I brought up the issue that a normal use of the 3880-13 was sequential reading; a frequent configuration might be a 3380 drive with 10 4kbyte records per track ... read sequentially. With the 3880-13, the first record read on a 3380 track is a miss, bringing in the whole track ... which then makes the next nine sequential reads "hits" ... or a 90 percent hit rate (as per the marketing material). I suggested that if the customer were to tweak their JCL to do 10-record buffering ... encouraging full-track reads ... then the 90 percent hit rate would drop to zero. The 3880-13 would have given nearly the same effect if it had been simple track buffering as opposed to track caching, i.e. with only 8mbytes, there was little re-use probability over and above the sequential read scenario ... and you could have gotten nearly the same effect with a simpler track buffer as opposed to the more complex track caching stuff.
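
the arithmetic behind that 90 percent claim, spelled out (a back-of-the-envelope calculation, nothing more):

# 10 4kbyte records per 3380 track, read sequentially one record at a time:
records_per_track = 10
# first record misses and pulls in the whole track; the next nine "hit"
hit_rate_record_at_a_time = (records_per_track - 1) / records_per_track   # 0.90
# if the host does its own full-track buffering, every track fetch is a
# fresh miss and the controller cache never gets a re-use hit
hit_rate_host_fulltrack = 0.0
print(hit_rate_record_at_a_time, hit_rate_host_fulltrack)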

I also pointed out that putting a track buffer in the drive ... instead of a cache in the controller ... would pick up the marginal improvement of out-of-order transfer ... i.e. begin reading as soon as the head had settled ... regardless of where the I/O operation says to start.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sun, 30 May 2004 08:09:52 -0600
Anne & Lynn Wheeler writes:
using the 3880-11/ironwood as a paging cache ... I was able to show that with the standard strategy, the majority of pages were duplicated in both processor memory and the 3880-11 cache. to make the 3880-11 cache effective ... i claimed that the system needed to be changed to always do "destructive" reads to the 3880-11 controller ... so as to avoid the problem of total duplication of pages in the controller cache and main memory. this is just another variation on the dup/no-dup (aka duplicate/no-duplicate) paging strategy i've periodically posted on. with the no-dup strategy ... you could start treating the 3880-11

i had somewhat come up with some of this for the resource manager. if you have a variety of devices for paging, 2305 fixed head disks (12mbytes) and big moveable arm 3330s (200 mbytes) ... you would try and keep the highest used overflow from real memory on the 2305 fixed heads. however, it could be that some stuff pushed out to the 2305 just wasn't getting asked for again ... so you would periodically run LRU over the pages on the 2305 and move inactive pages from the 2305 to lower used areas on the 3330s. I called this "page migration" and put it into the resource manager (along with all the stuff for real storage page replacement, global LRU, dispatching & scheduling ... some amount of which I had actually done earlier as an undergraduate in the 60s for cp/67 and had been dropped in the cp/67 to vm/370 transition):
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
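
a minimal sketch (invented names, not the resource manager code) of the page-migration idea just described: periodically sweep the fast fixed-head device and push its least recently referenced pages down to the larger, slower moveable-arm disks:

# fast_dev/slow_dev are sets of page ids, last_ref maps page id to the time
# of its most recent reference; pages idle past the limit are migrated down
def migrate_inactive(fast_dev, slow_dev, last_ref, now, idle_limit):
    for page in list(fast_dev):
        if now - last_ref[page] > idle_limit:   # pushed out but never asked for again
            fast_dev.discard(page)
            slow_dev.add(page)                  # keep the fixed-head space for hot overflow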

so, instead of using the 3880-11 as a real disk controller 4k (page) record cache ... attempt to manipulate the infrastructure so that it effectively operated as an 8mbyte managed electronic paging device; always using cache-bypass/destructive reads on incoming pages (to avoid having duplicates in the controller cache and real memory). the advantage it had over a real managed 8mbyte electronic paging device was that when you pushed things into it ... and they possibly aged out ... you didn't have to worry about needing to run page migration on it; the internal controller LRU algorithm would handle the aging out of low-referenced data.

At this point the system needed to periodically query the controller on cache hit statistics. Even with all the managing ... you could still be pushing pages into the 3880-11 at such a rate ... that they were aged out before they were ever pulled back again. If the cache hit rate was dropping too low ... then the system needed to start managing the pages that were being pushed into the cache ... and start using mechanisms that did writes to disk that avoided the cache also.
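
a sketch of that monitoring loop in python (the statistics query, write-mode switch and threshold are all hypothetical; the real controller interface was nothing this tidy):

# periodically sample controller hit statistics; if pages are aging out of
# the 8mbyte cache before they are ever reclaimed, stop populating it
def adjust_paging_policy(query_cache_stats, set_write_mode, low_water=0.20):
    stats = query_cache_stats()              # e.g. {'reads': ..., 'hits': ...}
    hit_rate = stats['hits'] / max(stats['reads'], 1)
    if hit_rate < low_water:
        set_write_mode('bypass-cache')       # writes go straight to disk
    else:
        set_write_mode('write-to-cache')     # keep using it as overflow memory
    return hit_rate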

random past posts on dup/no-dup strategies for managing device/controllers associated with paging and replacement algorithms:
https://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001i.html#42 Question re: Size of Swap File
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002f.html#26 Blade architectures
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

HERCULES

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: HERCULES
Newsgroups: bit.listserv.ibm-main
Date: Sun, 30 May 2004 08:17:13 -0600
IBM-MAIN@ibm-main.lst (Phil Payne) writes:
I believe IBM has a MAJOR lead in storage virtualisation with what has so far been delivered of Storage Tank - the IBM SAN File System. On the 25th they announced V2.1 and created the previously unseen "SFS" acronym. It's on IBM's site - check it out. I believe this system embodies architectures that could eventually deliver what I call the "data dictionary dream" across multiple platforms - yet there is no mainframe client.

slight topic drift ... i used SFS extensively in the early to mid-80s for a filesystem rewrite that I had done using vs/pascal ... moving a bunch of stuff previously implemented in assembler out of the kernel. there are even some number of usenet posts, mostly from the 90s, on the work. random trivia: at the time, I also had an office in almaden.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sun, 30 May 2004 10:51:33 -0600
"Stephen Fuld" writes:
I have snipped a lot of stuff about the problems with the early IBM 3880 caching controllers. Yes, when we saw what IBM had done, we were "puzzled" in that we couldn't figure out how they had gotten things so bollixed up. There were many problems in the design, many of which you have identified. But let me just say that we were commonly getting 80% hit rates on typical user workloads. Note that we didn't need as high a hit rate as the -13 did to achieve good performance since we were also caching writes. In any event, customers were almost always very pleased with the performance. Of course, we "benefited" from the rather limited memory size of systems in that era.

they may have been bollixed up ... but that was dwarfed by the caches being too small for the market they were sold into (any secondary issues regarding the quality of the cache implementation were totally dwarfed by the lack of quantity). at the time of the 3880-11 & -13 controller caches ... typical configurations were a 4341 with 16mbytes of real storage and 3033 & 3081s with 32mbytes of real storage.

a 4341 w/16mbytes typically had a single 3880 controller with 16 to 32 640mbyte drives ... to which you could add an 8mbyte cache option. the 4341 was "caching" more data in real memory than was available in the cache of the controller. in the page/4k record case it was relatively trivial to show that the 4341 real storage tended to hold a superset of what was in the 3880-11 8mbyte cache ... if you operated it as a straight cache. i had used the dup/no-dup paradigm in the 70s ... and it also became applicable to the 3880-11 ... figuring out how to manipulate the interface to bypass its caching function and attempt to use it directly as simply an 8mbyte adjunct to real memory.

The analogy with CPU caches ... would be having an L1 cache larger than any L2 cache ... where you could reasonably show that the L2 cache would be superfluous. For an extreme example, lets say that instead of having a 32kbyte L1 cache and a 4mbyte L2 cache ... you had a 4mbyte L1 cache and a 32kbyte L2 cache ... would the L2 cache provide much benefit?
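
to make the extreme example concrete, a quick toy simulation (uniform random references, made-up sizes counted in "slots" rather than bytes) showing that a lower-level cache smaller than the level above it sees almost no hits:

from collections import OrderedDict
import random

def lru_ref(cache, slots, key):
    hit = key in cache
    if hit:
        cache.move_to_end(key)
    else:
        cache[key] = True
        if len(cache) > slots:
            cache.popitem(last=False)        # evict least recently used
    return hit

random.seed(2)
upper, lower = OrderedDict(), OrderedDict()
upper_slots, lower_slots = 4096, 32          # the "4mbyte L1, 32kbyte L2" case
lower_refs = lower_hits = 0
for _ in range(100000):
    key = random.randint(0, 8191)
    if lru_ref(upper, upper_slots, key):
        continue                             # satisfied above; the lower cache never sees it
    lower_refs += 1
    lower_hits += lru_ref(lower, lower_slots, key)
print("lower-level hit rate:", lower_hits / max(lower_refs, 1))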

This is the scenario I also presented for the 3880-13 full track cache. Given that the next higher level cache was larger ... having a lower level cache that contained less information made it redundant and superfluous as a cache. It could somewhat be used as an elastic buffer for attempting to compensate for some end-to-end synchronicity effect ... but it tended to not be very useful as a cache.

a 3033 or 3081 with 32mbytes might have four 3880 controllers, each with 16-32 drives (for a maximum of 64-128 drives). You could add an 8mbyte cache option to each 3880 controller ... either as a 3880-11 or as a 3880-13. Again in the 3880-11 case, it could be shown that the majority of the pages brought thru the 3880-11 caches would exist as duplicates of the same pages in real memory (operating it as a cache, and therefore having a very low probability of being needed).

In the case of the 3880-13 full track cache ... other than its use as a buffer for sequential reading (in effect a read-ahead buffer as opposed to a caching re-use buffer), the re-use probability in general circumstances was extremely low. If it wasn't sequential reading ... of data it hadn't seen before ... the other common application was random access to databases that were tens of gigabytes in size. The probability of a 3880-13 8mbyte cache having something from a ten-gigabyte random access database ... that wasn't already in the real storage "cache" ... was extremely low ... and by the time anything might have aged out of what's in real memory ... it surely would have aged out of the smaller 3880 controller cache.

Given the real-life configurations ... hit rates for actual cache re-use were near zero ... given that the real storage memories were as large or larger than the available controller cache sizes. The majority of the hit rates in real life would effectively be coming from the buffer read-ahead effects (not from caching re-use effects). The processor should only be resorting to asking for data that wasn't already in real storage ... and given the relative sizes of real storage and the caches ... if it wasn't in real storage it also wasn't likely to be in the caches.

it would be possible to fabricate a configuration ... where there weren't any special software changes required. Say your normal configuration was a 32mbyte 3081 with four 3880 controllers and 128 drives ... you could buy four additional 3880 controllers with the controller cache option, and use them with a single dedicated 3380 apiece. On each of the dedicated 3380s you restrict allocation to at most say 16mbytes of data. The problem here is that there would still be fairly high duplication between what was in the cache and what was in real memory. The other problem is that both 3880 controllers and 3380 drives were relatively expensive ... so it wouldn't be likely that one would dedicate them to a function supporting such a small amount of data.

One way of thinking about this is that the effective size of the cache is reduced by the data that is in both real storage and the cache (since the processor isn't likely to be asking for data it already has). An 8mbyte cache with a high proportion of duplicates because of large real storage ... might effectively only have 512kbytes of data that wasn't also in real memory. That 512kbytes is the only portion of the cache that there is any possibility of the processor asking for ... since it isn't a very smart processor that asks for stuff it already has. So when you are talking about the probability of cache hit rates ... it would be the probability that the processor would ask for something in that 512kbytes (since it wouldn't be asking for the stuff that was duplicated and it already had).

Lets turn this around ... lets say an 8mbyte cache with no duplicates had an 80 percent hit rate (purely cache re-use ... no read-ahead buffer effects). Then, with only 512kbytes of non-duplicates, the cache hit rate would drop to 1/16th of that, or five percent (for the probability of hit-rate re-use) ... assuming purely linear effects. However there are some non-linear threshold effects, so it could actually drop to less than five percent.
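
the same dilution arithmetic, written out (assuming the purely linear scaling mentioned above):

cache_bytes = 8 * 1024 * 1024
non_duplicate_bytes = 512 * 1024     # the only data the processor could ever ask for
base_hit_rate = 0.80                 # assumed rate if there were no duplicates at all
effective_hit_rate = base_hit_rate * non_duplicate_bytes / cache_bytes
print(effective_hit_rate)            # 0.05 ... five percent, before any threshold effects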

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Sun, 30 May 2004 14:38:16 -0600
"Stephen Fuld" writes:
OK, at the risk of seeming self serving here, I will relate the story. First, I discount the Memorex box, which had a write through cache in the A units (head of string) and got to a few beta sites before being withdrawn and never heard from again. The first cache disk controller was developed by a company named Amperif and installed on a customer Sperry mainframe in the second half of 1980. This was a "full cache" design, incorporating what IBM would later call DASD Fast Write, (something IBM didn't have until the 3880-23 some time later) including a UPS and a small disk for backing up the cache in the event of power failures (The 3880-13 was not write through but was only usable for paging data, and the 3880-13 wasn't even write-through, but IIRC "write-by" - it did writes directly to the disk and invalidated the cache entry.

i never wrote any of the microcode for the 3880 ... but while the original controller development was still up in san jose (and before the original 3880 controller was even announced) i got suckered into helping get it working.

i used to wander around looking for interesting things and periodically got roped into working on them. there was a joke at one point that i was working first shift in bldg. 28/research, 2nd shift in bldg. 14/15, 3rd shift in stl, and 4th shift/weekends at HONE.

bldg. 14 had this machine room with some number of mainframes where the original 3880 development was going on. these machines were in individual "test cells" ... i.e. heavy steel mesh cages with 4-5 number combination locks, inside a secure machine room, inside a secure bldg. on a plant site with fences & gates.

running mvs on one of the mainframes with a single test cell being tested resulted in something like a 15-minute MTBF for MVS. As a result ... the process eventually settled on special stand-alone test time rotating between the various test cells ... running special purpose monitors on the mainframe for exercising the controllers.

I thot it would be fun to rewrite the operating system i/o supervisor so that it never failed ... and be able to concurrently do testing with a half dozen to a dozen test cells at a time. it was something of a challenge, but I got pretty much there, so eventually the machines in bldg. 14 were running the modified system ... as well as the machines over at the product test lab in bldg. 15 ... and then when some amount of the controller activity was moved to tucson ... the custom operating system was propagated to tucson also. however, i didn't have a lot of interaction with what went on in tucson.

part of the downside was that when there appeared to be a problem ... the engineers wanted to blame me ... and I could get dragged into helping scope whatever problem they happened to have. I also got suckered into sitting in on conference calls between the controller engineers in san jose and the channel engineers for the processors in POK. something like the current tv ad ... i'm not really a disk engineer ... i just play one on tv.

lots of random past posts on the fun in bldgs 14 & 15 (engineering and product test labs):
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Mon, 31 May 2004 07:21:19 -0600
"Stephen Fuld" writes:
Yes, the small size was a problem, but the lack of caching writes was a major contributor, as writes tend to have a higher hit rate than reads (in a controller that caches writes). So even with a modest sized cache, you get hits on the writes of the typical database operation of read some record, update it and rewrite it. In addition, you get hits on things like the logs, that are sequential but tend to want to be written quickly, not waiting for lots of records to accumulate, in order to reduce transaction latency. The 3880-13 would have looked much better performance wise, even with its small cache if it could have taken advantage of these kinds of operations. I know that because our system did exactly that. Now of course, as with any cache, bigger is better and IBM eventually learned, with the -23, to make the caches larger.

there is no cache hit for writing a record that has just been read; the write completely replaces anything that has been read.

you can have (using some processor cache related terms):

1) write thru ... in which case the data has to be immediately written before signaling completion

2) write into ... with "write behind" or "lazy writes" to the actual disk later. especially for database operations, write-behind/lazy-write strategies need various kinds of countermeasures for volatile cache memories losing power (before the write has been performed)

you can keep the written record around in cache on the off-chance there is a subsequent read ... which would be a cache hit.

the full-track strategy related to caching for database writes then becomes somewhat similar to RAID-5. you are only replacing part of the actual data ... in which case you have to possibly read the whole stripe, update the parity record and write back the individual record and the parity record. if you are doing full-track caching ... here is where there is something like a cache hit. if you have recently read the record ... and brought in the whole track ... when you go to write the (updated) record back, having the full track already in a buffer means that you can update the portion of the track represented by the latest record write w/o having to (re)read the track ... before writing it back out.
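
a small python sketch (hypothetical read_track/write_track interfaces) of that point: with the full track already buffered from a recent read, a record update can be applied in the buffer and written back without re-reading the track first:

def update_record(track_buffers, track_id, rec_no, data, read_track, write_track):
    buf = track_buffers.get(track_id)
    if buf is None:
        buf = read_track(track_id)      # no buffered copy: forced read-modify-write
        track_buffers[track_id] = buf
    buf[rec_no] = data                  # update just the changed record in the buffer
    write_track(track_id, buf)          # write the whole track back out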

the issue with a cache isn't by itself that bigger is better ... in part it is that a lower-level cache needs to be (potentially significantly) larger than the next higher level in any memory hierarchy ... otherwise the lower-level cache tends to become totally polluted with duplicates (and the lower-level cache never sees any hits because they are all being satisfied by data resident in some higher level of the memory hierarchy).

another of the places where such caching tends to crash & burn is with shadowing implementations (like illustra was ... and some other database systems, or log-structured filesystems) ... where you effectively never write back in place ... you always write to newly allocated locations. for these, buffering helps with things like lazy-writes ... but the possibility of cache re-use gets to be very low.

when i asked the engineers why the -11/-13 caches were only 8mbytes ... they said something about business reasons ... which i fail to remember at the moment. I do remember that for other products of about that era ... processors had priority access to the memory inventory and some products never got announced. say they forecast that half of all 3880s might order or retrofit the cache ... then they had to get a commitment from internal memory inventory for that number of boxes times the mbytes/box. if instead they had forecast only a few tens of thousands, they might have gotten an allocation for more mbytes/box.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

|d|i|g|i|t|a|l| questions

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: |d|i|g|i|t|a|l| questions
Newsgroups: alt.folklore.computers
Date: Mon, 31 May 2004 06:26:55 -0600
Brian Inglis writes:
Ran same workload on similarly configured low end air cooled IBM MF: application performance and thruput was much higher at same price point as the VAX, with an order of magnitude expansion capability available, and another order of magnitude available if we went water cooled.

that was somewhat this post
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

some related posts about the rapid explosion in the mid-range market segment during the late 70s & early 80s ... which was subsequently subsumed by workstations and large PCs:
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

and some topic drift:
https://www.garlic.com/~lynn/2002f.html#1 Blade architecture
https://www.garlic.com/~lynn/2002f.html#5 Blade architecture

and even more drift related to a thread running over on comp.arch:
https://www.garlic.com/~lynn/2002f.html#26 Blade architecture
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

|d|i|g|i|t|a|l| questions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: |d|i|g|i|t|a|l| questions
Newsgroups: alt.folklore.computers
Date: Mon, 31 May 2004 06:50:58 -0600
Rupert Pigott writes:
If you take the long view that doesn't coincide with what we've seen with IBM's product lines... IBM thinking appears to be capable of accommodating multiple lines of radically different hardware and OSes (lots of fiefdoms). What I'm wondering is if DEC got saddled with a bunch of guys who fell out of the IBM tree when the FS project got shit-canned. What I read of FS is that it was intended to replace all the various lines and do everything better somehow. Does that sound like VAX/VMS or what ? :)

there may be two different issues here ... in the early 70s, FS was sort of to have been another 360; a radical new generation that was even more different from 360 than 360 had been from the earlier machines ... lots of posts on FS:
https://www.garlic.com/~lynn/submain.html#futuresys
recent discussion of the sequence in network history thread here
https://www.garlic.com/~lynn/2004g.html#12 network history

one interpretation is that the extreme reaction to the cancellation of the horribly complex FS resulted in some inventing the opposite extreme, KISS risc. lots of misc. posts on risc/801/romp/rios/etc:
https://www.garlic.com/~lynn/subtopic.html#801

in the early '80s there was a project, fort knox, to use 801 to replace the large proliferation of microprocessors that had spawned all over the corporation. low & mid-range 370s were microcoded machines (as had been 360 before them) ... typically each having its own unique microprocessor engine. there were loads of other microprocessors in controllers, office products, the instrument division, etc (s/32, s/34, s/38, displaywriter, series/1, 8100, 1800, 3274, 3880, 3705, 4341, 4331, etc). Nominally, the follow-on to the 4341, the 4381, would have used an 801 microprocessor engine. I contributed to a report that said that technology had moved on & that it was cost effective to implement 370 directly in hardware for the midrange ... which was instrumental in killing fort knox for the 4381.

801/romp with cp.r was to have been the follow-on to the office products division displaywriter. when that project got killed, the people involved decided to retarget the platform to the unix workstation market ... resulting in the pc/rt. they hired the company that had done the at&t unix port for pc/ix to do one for the pc/rt, resulting in aix "2.0".

One influx of IBM'ers into DEC was from the vm370/cms development group out in burlington mall during 76 & early 77. the decision had been made to kill the vm370 product and move all the people from the burlington mall group to POK to work on the vm/tool ... this was an internal virtual machine development project (only) supporting the development of mvs/xa for "811" ... or what became 3081 and 31-bit addressing. some number of the people left ibm, stayed in the boston area and got jobs at dec & prime.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

From: lynn@garlic.com
Subject: Re: Infiniband - practicalities for small clusters
Newsgroups: comp.arch
Date: Tue, 1 Jun 2004 20:22:18 -0700
"Stephen Fuld" wrote in message news:<beKuc.103011$hH.1815333@bgtnsc04-news.ops.worldnet.att.net>...
Typical sort of "large company" problem. :-(

large company might be a symptom rather than a cause. two issues that somewhat affect the situation:

1) the problem of a large customer base ... i.e. the customers for the product. a large customer base can be the cause ... given a large customer base, you might have a large company as a symptom.

2) the other aspect is having a memory hierarchy. using the cpu cache analogy ... given a situation where you effectively have the same technology for both L1 and L2 ... ask a customer whether they want to use 8mbytes for L1 or L2 ... where the benefit/byte to the customer is five to fifty times greater when used as L1 (compared to using the same amount for L2) ... aka the customer has a choice between adding the same 8mbytes as main memory or as controller cache. This also somewhat goes back to one of my original comments implying that the higher up in the hierarchy you have more memory ... the better off you are ... and by implication, any cache memory that you have lower down in the memory hierarchy has to be much larger than the amount higher up in the hierarchy ... if nothing else to compensate for duplicate pollution.

some amount of controller buffered memory is useful ... as mentioned before ... not as a re-use cache ... but to handle various functions outboard asynchronously (avoiding the need for end-to-end synchronicity). the amount of buffered memory needed to improve performance by avoiding end-to-end synchronicity ... is different from needing a possibly 5-10 times larger cache lower in the memory hierarchy as a means of compensating for duplicate pollution (as well as various other uses that fail to conform to a least-recently-used paradigm ... like large sequential reads).

network history

Refed: **, - **, - **, - **
From: lynn@garlic.com
Date: Thu, 3 Jun 2004 10:48:33 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
eugene@cse.ucsc.edu (Eugene Miya) wrote in message news:<40beb413$1@darkstar>...
This is completely misleading. IMPs weren't seen at the application level. IMP were just that INTERFACE PROCESSORS (MESSAGE). You never saw a mesage. They were just interfaces like connectors. No different than differing backplanes.
> I have to agree with a subsequent posting by Chris.
> What's lacking is any reference to the protocols which preceded TCP.
> It's was pretty stupid to FTP Fortran files and get inconsistent character sets, and their interpretation of a dumb one (EBCDIC). That is heterogeneous.
> And this was all before email.

ok, the original assertion was that the internal network was larger than the arpanet because the internal network effectively had heterogeneous networking and gateway support from just about the start ... which the arpanet/internet didn't get until the 1/1/83 switch-over.

there was nothing about whether there were other technologies at the transport and application layers ... and/or even that there weren't other technologies out there. the assertion was that a big limitation on the size of the arpanet vis-a-vis the internal network was the difference between having gateways and lacking gateways at the network (and/or internetworking) layer. furthermore, a big part of the explosion in the number of nodes on the internet after the 1/1/83 switch-over was because it got gateway and heterogeneous support as part of the 1/1/83 switch-over ... greatly contributing to it passing the number of nodes in the internal network by mid-85.

so as to why the arpanet had fewer nodes than the internal network prior to 1/1/83 ... is it because it had FTP at the application & transport layer? Is the fact that the IMPs weren't seen at the application layer a reason that there were fewer nodes on the arpanet than on the internal network? Is the fact that the IMPs were a network layer implementation ... and therefore not seen at the application layer ... a reason that the arpanet had fewer nodes than the internal network?

Or is it that the IMPs were a networking layer implementation, were a homogeneous networking layer implementation, and didn't have internetworking and/or gateways ... which contributed to the arpanet having fewer nodes than the internal network ... and that the introduction of internetworking and gateways as part of the 1/1/83 switch-over allowed a big explosion in the number of heterogeneous networking nodes ... aided by gateways being part of the internetworking architecture.

misc. past archeological posts on this subject:
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler https://www.garlic.com/~lynn/

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Thu, 3 Jun 2004 10:48:33 -0700
Newsgroups: comp.arch
Subject: Re: Infiniband - practicalities for small clusters
nmm1@cus.cam.ac.uk (Nick Maclaren) wrote in message news:<c9mmsg$2ur$1@pegasus.csx.cam.ac.uk>...
Actually, that is how GUTS worked (Gothenburg). Phoenix did less modification, and used the MVT batch mechanism for jobs and TSO for interactive work. It was essentially a front-end and service provider.

It did, however, have to do really quite a lot to make TSO usable. I was probably the only person who ever wrote CLISTs under Phoenix, and it took me a hell of a lot of politicking to get permission.

Regards, Nick Maclaren.


one of the people that did vs/pascal left about '80 and got funding to do a 3274 controller clone that outboarded a lot of the TSO interactive interface in the controller ... because TSO performance was really so bad compared to other infrastructures. one of the explanations for why the company didn't do well was that the majority of the people that used TSO didn't understand how really bad it was and/or appreciate that things could be significantly better.

how truly bad tso was is also why much of the internal development work went on under vm/cms around the company worldwide ... regardless of the targeted platform.

Most dangerous product the mainframe has ever seen

From: lynn@garlic.com
Date: Thu, 3 Jun 2004 17:58:30 -0700
Newsgroups: bit.listserv.ibm-main
Subject: Re: Most dangerous product the mainframe has ever seen
scomstock@aol.com (S Comstock) wrote in message news:<20040603112905.16349.00000371@mb-m29.aol.com>...
In yesterday's Wall Street Journal (back page of the Marketplace section) there was a story about Sun and Fujitsu teaming up to jointly develop future computer systems, called the APL (Advanced Product Line). The last paragraph quotes Scott McNealy (Sun's CEO): "I think the APL will be ultimately the most dangerous product the IBM mainframe has ever seen". Well, competition is good. But what I found interesting is the implicit recognition of the IBM mainframe as a major player even today. Most competitors, at least publicly, denigrate the mainframe as not being a factor in the industry any more. Now an open admission that mainframes are still a significant part of the market. Somehow I find that encouraging.

remember in the past that it was fujitsu that funded and did the manufacturing for Amdahl. also fujitsu funded HaL (some number of former beemers) before taking it over completely ... which did the first 64bit sparc chip ... random references:
http://sunsite.uakom.sk/sunworldonline/swol-10-1995/swol-10-hal.html
http://www.hoise.com/primeur/99/articles/monthly/AE-PR-10-99-53.html
http://www.hoise.com/articles/SW-PR-12-98-28.html

[IBM-MAIN] HERCULES

Refed: **, - **, - **, - **
From: lynn@garlic.com
Date: Thu, 3 Jun 2004 18:13:21 -0700
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: [IBM-MAIN] HERCULES
jmaynard@ibm-main.lst (Jay Maynard) wrote in message news:<20040601162325.GA2980@thebrain.conmicro.cx>...
Do they? I'm not so sure. It seems to me that the powers that be at IBM want to tap the Linux geek base for the future, as their answer to the graying of the z/OS (and, to a lesser extent, z/VM - although there they seem to want to train the people to use VM, instead of supplant it) workforce.

x-posting to alt.folklore.computers

original cp/67 in the '60s and then vm/370 in the '70s shipped with all the source ... both to internal accounts within the corporation as well as customers outside the corporation. both internal and external customers frequently rebuilt the system completely from the source as a common practice.

there was a study done in the late 70s of the vm share tape and, effectively, of something very equivalent, the internal corporate common tape. the vm share tape and the internal common tape had about the same number of lines of source changes, updates and/or add-ons to the system ... which in aggregate (for both) was about four times larger than the base product source (note however that in both the share and common tape cases, there was some large amount of duplicate feature/function implemented at different installations).

i posted somewhat of a time-line of the transition of the vm/370 kernel to charged for licensed product (note that the time-line was similar for MVS ... although slightly later, using vm/370 as guinea pig):
https://www.garlic.com/~lynn/2004g.html#2 Text Adventures (which computer was first?)

network history

From: lynn@garlic.com
Date: Thu, 3 Jun 2004 18:47:40 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
jmfbahciv@aol.com wrote in message news:<40bf1bfd$0$2940$61fed72c@news.rcn.com>...
The difference is Lynn's view from inside out and our view from outside in.

note that the original assertion wasn't about inside out or outside in ... the original assertion pertained to why the internal network was larger (had more nodes) than the arpanet for just about the complete life of the arpanet ... up until sometime mid-85, well after the 1/1/83 switchover.

my assertion was that the mainstay of the internal network effectively had gateway-like function in every node ... allowing the connection of heterogeneous environments ... something that the arpanet didn't get until internetworking and the 1/1/83 switch-over.

there were comments about SNA ... which was extremely homogeneous AND didn't even have a network layer. it was designed primarily for large terminal communication infrastructures.

then given the original assertion ... all the comments about how much better the arpanet was than the internal network all during the '70s ... become important contributing factors for why the arpanet was so much smaller than the internal network ... because it actually had all these really much better features.

so is this intended to imply that all such really great technical features were actually a disaster from a deployment standpoint (contributing to the fact that the arpanet was so much smaller than the internal network)? Or is it an observation that the arpanet was so much smaller than the internal network (until it started to explode in size after the 1/1/83 switchover) in spite of having these really, really great features ... sort of implying that there must be some other really enormous technical(?) inhibitor offsetting such really great features?

network history

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Fri, 4 Jun 2004 11:30:23 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
eugene@cse.ucsc.edu (Eugene Miya) wrote in message news:<40bf69e5$1@darkstar>...
> You and Chris and I will just have to agree to disagree.
>
> I recall gateways for email before 1983.


they were application gateways, not network gateways
> NCP existed from the era before layered models.
> In that period for demo or die had lots of discussion on transparency.
> TCP was that transition.


wrong ... in fact it is an example of a layered model/implementation
> I think you are making some revisionist interpretations of the situation
> at the time. Most of the other gatewayed networks didn't have the full
> functionality of the ARPAnet and other up and coming work done by
> others. And I think that some of that can be seen reflected in the
> earliest versions of laying (descriptions not prescriptions) in Tanenbaum.


i never made any comments about whether the arpanet had more or less function. I was specifically citing that an internetworking layer and network gateways (which are part of the internetworking layer) didn't exist in the arpanet, that this probably contributed to inhibiting its growth ... and that the switch-over on 1/1/83 removed that.

> >misc. past archeological posts on this subject:
> >https://www.garlic.com/~lynn/internet.htm
>
> Read it before Lynn. It's your view.


actually the above reference contains some detailed information about CSNET & its email gateway into the arpanet. note that this was an application gateway ... not a network gateway. they are different.

the point of the original assertion was that the internal network co-existed in the same period as the arpanet (supposedly before layered architectures) .... and even so, the internal network effectively had (network) gateway function in every node.

The assertion is that one of the reasons that the arpanet was smaller than the internal network all thru the 70s ... was because the arpanet lacked internetworking/networking-gateways until 1/1/83 switch-over ... while the internal network effectively had the equivalent of networking gateway function in every node.

it wasn't that arpanet NCP and host protocols weren't layered. In fact they were exceptionally layered. The issue was that they were layered much more in the traditional OSI model, which lacked internetworking and network-layer gateways ... and the arpanet didn't deviate from the traditional OSI layering until the 1/1/83 switch-over, when it gained the internetworking layer and gateway function. Furthermore, ISO has had rules that ISO & ISO-chartered organizations can't work on protocols that violate the OSI layered model. Since the internet's internetworking layer and networking gateways violate that model (the networking gateways are part of the internetworking layer, which doesn't exist in the OSI model), they couldn't be considered in ISO.

it wasn't that there weren't layered models during the period that the arpanet existed ... there were layered models ... they just didn't include (and in some cases specifically mandated the exclusion of) internetworking and networking gateways (i.e. gateways between networks).

the switch-over to internetworking and gateways for the arpanet on 1/1/83 gave it somewhat the functionality that the internal network had from the start ... and therefore the assertion is specifically about a reason why the internal network was larger than the arpanet thru-out most of its early lifetime ... until sometime mid-85.

network history

Refed: **, - **, - **
From: lynn@garlic.com
Date: Fri, 4 Jun 2004 12:07:34 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
eugene@cse.ucsc.edu (Eugene Miya) wrote in message news:<40bf69e5$1@darkstar>...
I think you are making some revisionist interpretations of the situation at the time. Most of the other gatewayed networks didn't have the full functionality of the ARPAnet and other up and coming work done by others. And I think that some of that can be seen reflected in the earliest versions of laying (descriptions not prescriptions) in Tanenbaum.

lets see what is revisionist interpretation

1) arpanet didn't have internetworking (and networking gateways that are part of the internetworking layer) until the 1/1/83 switchover

2) the internal network was larger (had more nodes) than the arpanet for nearly the whole period until about mid-85

3) the mainstay of the internal network had effectively (network) gateway function in every node from the start

so which of the three claims are revisionist?

an assertion that a possible/contributing reason that the internal network was larger than the arpanet for that period was the (networking) gateway functionality. furthermore, a possible reason for the explosive growth in the internet after the 1/1/83 switchover was the availability of internetworking and the network gateways that come with the internetworking layer.

note that the period of the 70s was not w/o layered networking implementations. an issue was that the osi model dominated ... and at least up thru the early '90s, ISO had rules that ISO and ISO-chartered organizations couldn't standardize protocols that violated the OSI model. Internetworking and the related network gateways that are part of the internetworking layer don't exist in the OSI model ... and therefore couldn't be considered for standardization. It wasn't that layering didn't exist in the period ... but specifically internetworking wasn't part of the traditional layered model.

part of the reason for making the assertion that internetworking and the network gateways that are part of the internet ... contributed significantly to the explosive growth in the number of internet nodes after the 1/1/83 switch-over ... is having had (in the past) to deal with networking deployments that lacked internetworking capability ... and the frequent synchronization issues that come with a homogeneous infrastructure. One could claim that the 1/1/83 switch-over was in itself an example of the synchronization efforts needed when dealing with a homogeneous infrastructure.

network history

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Fri, 4 Jun 2004 14:23:52 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
eugene@cse.ucsc.edu (Eugene Miya) wrote in message news:<40c080f9$1@darkstar>...
The political difficulty well documented from Bob Taylor to today is justifying communication hardware and software which is seen as an impact to cost which is not easily directly measured. IMPs weren't cheap. LH/DH-11s had to be paid by some one. Creative "solutions" were found, and that's when I learned the term "computer funny money" came about.

so what is cause and what is effect?

so to have a little fun .... I'll assert that the political difficulties of connecting to a research, non-classified network were a side-effect of it being homogeneous and w/o internetworking gateways. and that with internetworking ... and internetworking gateways ... you would minimize the need for global authoritative approval of individual node attachment ... something you started to see after the 1/1/83 switch-over ... aka was the ease of adding network nodes after the 1/1/83 switch-over a characteristic of no longer requiring global authoritative approval ... or was it a side-effect of having independent, interconnected networks belonging to different domains ... where there isn't a single authoritative agency responsible for allowing or disallowing each individual network node?

so to have a little more fun ... i'll see your political difficulties with getting an authoritative agency to approve an additional individual network node connection to a homogeneous, research, non-classified network ... with a network that required all links leaving corporate premises to be vetted and encrypted ... sometime in the mid-80s I was told that over half of all link encryptors in the world were installed on the internal network. so let's weigh the political difficulties of getting authoritative agency approval for adding an individual node to a homogeneous, research, non-classified network against getting french and german PTT sign-off (in the late 70s and well thru the 80s) for a fully encrypted link between a site in germany and a site in france (actually the difficulty wasn't just restricted to the PTTs & governments of germany and france).

little issues with the MIB guys showing up saying you can't turn on encryption (and corporate guys saying w/o encryption you can't turn on the link).

in any case, I'll assert that internetworking and internetworking gateways also address homogeneous authoritative administrative network issues as well as internetworking issues between different deployed networks.

network history

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Fri, 4 Jun 2004 14:42:48 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
eugene@cse.ucsc.edu (Eugene Miya) wrote in message news:<40c01abd$1@darkstar>...
I think it wouldn't violate disclosure to note that during the 70s, a fair portion of IBM use still involved punch cards. It was certainly possible to use either net for card images. Security was a little loose with passwords during that period still being stored in clear text, much less passing them across a network, and public key only coming into existence in the 76-78 period in the open lit.

Certainly the world before domain naming was interesting.


you could probably consider whether it would violate disclosures to state that the majority of business and university computing during the '70s still involved punch cards ... not just internal ibm. also that the majority of the world didn't use computers at all, and/or that the majority of the automobiles ran on gasoline.

the first time i actually ran into a public key implementation was in the mid-80s ... although as mentioned, there were some fairly hefty encryption requirements earlier ... and there was a claim that over half of all link encryptors in the world were on the internal network.

the internal remote access program for business travel did a detailed vulnerability study in the early '80s. one of the most vulnerable exposures was hotel PBXs ... as a result a requirement was created that all remote dial-in connections had to be encrypted. a custom modem was built with some heavy duty physical security and some protocol that involved at least the exchange of session keys ... and frequently also dial-back. this was required for remote access dial-in.
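the posting doesn't describe the actual modem protocol ... but as a rough sketch of the two mechanisms mentioned (per-call session keys plus dial-back), here is a toy illustration in python; all the names, the registry entries, and the key-derivation choice are hypothetical, not the real product design:

# minimal sketch -- NOT the actual product protocol; shows dial-back to a
# registered number plus derivation of a fresh per-call session key so the
# long-term secret never crosses the phone line.
import os, hmac, hashlib

REGISTERED = {"traveler1": {"callback": "+1-555-0100",       # hypothetical entry
                            "master": b"per-user-master-secret"}}

def derive_session_key(master: bytes, challenge: bytes) -> bytes:
    # fresh key per call, derived from the shared master secret and a challenge
    return hmac.new(master, challenge, hashlib.sha256).digest()

def handle_dial_in(user, hang_up, dial_out, send):
    # hang_up/dial_out/send are hypothetical callbacks standing in for the modem
    entry = REGISTERED.get(user)
    if entry is None:
        hang_up()
        return None
    hang_up()                                  # dial-back: drop the inbound call
    line = dial_out(entry["callback"])         # ... and call the registered number
    challenge = os.urandom(16)                 # session-key exchange via challenge
    send(line, challenge)
    return derive_session_key(entry["master"], challenge)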

network history (repeat, google may have gotten confused?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Fri, 4 Jun 2004 21:32:48 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history (repeat, google may have gotten confused?)
Brian Inglis wrote in message news:<tucvb0ps4dn2ndhv6lhsd81mh2oc2sjksc@4ax.com>...
[snip ARPA and VNET details]

ISTM the point that Lynn was trying to make was that it was the internetworking gateway protocols that allowed global internetworks to be created.

ISTM that both ARPA and VNET were homogeneous *networks* that allowed heterogeneous *hosts* to be connected as leaf nodes. In the ARPA network, the "NIC"s were IMPs that had to have custom drivers written for each host. In the VNET network, the "NIC"s were 360/370 hosts running CP/VM and RSCS that included drivers adapted to the behaviour of 360/370 [H]ASP/JES hosts, and also non-JES hosts that emulated the behaviour of various (dumb and "smart") devices. So VNET hosts either ran VM or JES.

Were POWER (DOS/VSE) hosts or other IBM OSes supported on VNET? I never encountered any non-VM DOS/VSE systems, or any DOS/VSE applications that supported NJE or network application protocols, just reader input and punch and print output.


so by the time of bitnet ... they had pretty much stopped shipping vnet native mode drivers with the vm/vnet infrastructure to customers ... by the early 80s ... about the only vnet drivers customers saw were the (JES2) NJE-compatible drivers.

part of this probably could be considered part of the ongoing rounds of declaring that the current release was going to be the last release ever shipped ... and that the big batch processing operating system was going to be the only one in existence. a trivial example of this continuing and ongoing saga was the declared decision in the mid-70s to shut down the burlington mall development group and transfer everybody to POK to work on the internal-only tool supporting mvs/xa development (and there was never, again, yet, still going to be another vm release). that particular iteration of shutting down the burlington mall development group in the mid-70s saw a lot of people going to dec & prime.

all along during the 70s ... the NJE drivers supported a maximum of 255 nodes (less the pseudo local devices, which typically ran 60-80) in a single network. during all that time, there were more than 255 nodes in the internal network. NJE was not enhanced with 999 node support until after the internal network exceeded 1000 nodes (which meant that it was still not possible to use NJE as part of the mainstream internal network ... other than at leaf nodes).

this whole vm thing was a constant thorn in the corporation's side; the interactive stuff was what the majority of the corporation used for development, regardless of the target platform, and the networking support was what grew into keeping the whole corporation connected and to some extent running. The HONE system was also VM based and that was what provided the online business and operations infrastructure for the marketing, sales, and branch people worldwide ... random past HONE references:
https://www.garlic.com/~lynn/subtopic.html#hone

however, ... back to the vnet issue at hand ... the native-mode drivers continued to be available inside ibm ... for one thing, they were a lot more efficient (besides handling the network size).

there was also a neat hack one of the people in rochester did for the internal network. ibm mainframe i/o interfaces are all half-duplex ... so when you mapped a full-duplex link directly into mainframe support ... it was operated as half-duplex. the guy in rochester built a y-cable for a full-duplex link ... which was then connected to a pair of mainframe i/o interfaces. then he hacked the vnet native mode driver for dual-simplex operation ... one i/o interface dedicated to transmit and one i/o interface dedicated to receive (achieving full media thruput in each direction).

i don't remember seeing a power driver ... but that doesn't mean they didn't exist ... since they wouldn't have been any more or less "native" than the NJE drivers.

network history

From: lynn@garlic.com
Date: Sat, 5 Jun 2004 09:21:42 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
eugene@cse.ucsc.edu (Eugene Miya) wrote in message news:<40c080f9$1@darkstar>...
Packet switching from 1968 to the mid 80s was still new and complex. An established circuit switching base from AT&T (and others) was a constant block. To this day there are still opponents to packet switching. No one had any idea when designing protocols what numbers to put in: packet sizes, timers, everything.

so ... is this supporting your comment about my interpretation of the arpanet being revisionist ... or, excuse me, supporting my interpretation of the arpanet?

network history

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sat, 5 Jun 2004 09:44:29 -0700
Newsgroups: alt.folklore.computers
Subject: Re: network history
Brian Inglis wrote in message news:<c8b2c0d282sfd2s4l5i36vbdb9o9v4j4j7@4ax.com>...
We replaced a lot of 1200 baud long-distance dialup modem lines to field offices with (Canadian) DataPac links (effectively leased 9600bps lines but not charged as such) and saved a bunch of money. Usage charge was per kpacket and distance independent.

Ironic part was: monthly circuit charges were to closest geographical DataPac exchange, which was about 50km from one office, but in another province and telco territory, so the office telco decided they would instead route through their own exchange 1200km away instead, but only charge us for regulated 50km. Great deal!

Hardest part was getting the links brought up, as many telco offices in the boonies had never dealt with data lines before, and I had to talk the telco installers out in the field and/or exchange thru making adjustments to the circuit until I could see the remote PADs.


we had an interesting argument with the communication division circa 1986 about T1 lines. We claimed a big growth in T1 lines. They did a study and predicted that by 1992 there would be at most 200 T1 lines installed by customers. We did a study and trivially found 200 T1 lines already installed at customers.

so what was the difference in the studies?

well, we went out and looked for T1 telco links.

they went out and studied telco links connected to their product. It turns out that their product only supported multiple 56kbit links and didn't actually support T1 links. Their product did have something called "fat pipes" that ganged multiple lines together to simulate a single line. They plotted the number of 56kbit "fat pipes" between the same two locations; the total number of single 56kbit links, the number of dual 56kbit links, the number of triple 56kbit links, the number of quatro 56kbit links, etc. addenda: they also weren't officially planning on having T1 support in their product line until 1992.

So they did find a few quatro 56kbit links ... but none higher than four. Based on that, they concluded that there wasn't currently a demand and projected, based on the trend of single, double, triple, etc. 56kbit links ... that there might possibly be 200 customers with T1 links by 1992.

so what accounted for the difference?

what the communication group didn't appear to realize was that, at the time, the tariff structure was such that there was a cross-over where five to six 56kbit links cost about the same as a single T1 link. Since their product didn't support T1, the customers that installed T1 tended to connect them with gear from other vendors (there were only a few 20+ year old 2701s w/T1 support still around and only a few gov. accounts had the S/1 T1 zirpel cards).
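for a rough sense of the capacity side of that cross-over (the actual tariff dollar figures aren't given here, so none appear below), a quick back-of-the-envelope in python, assuming the standard 1.544 mbit/sec T1 rate:

# ganged 56kbit "fat pipes" vs a single T1 (1.544 mbit/sec is the standard rate)
T1_KBIT = 1544
LINK_KBIT = 56

for n in range(1, 7):
    agg = n * LINK_KBIT
    print(f"{n} x 56kbit = {agg:4d} kbit/sec "
          f"({agg / T1_KBIT:5.1%} of a T1's raw capacity)")

# even at the five-to-six link price cross-over, the fat pipe delivers well
# under a quarter of a T1's bandwidth ... part of why customers at that price
# point bought other vendors' T1 gear rather than adding more 56kbit links.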

even HSDT for some internal backbone was using HYPERchannel for T1 and higher link-speed operation
https://www.garlic.com/~lynn/subnetwork.html#hsdt

i had done the rfc 1044 support for the first mainframe tcp/ip product eventually shipped by the company.

Infiniband - practicalities for small clusters

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sat, 5 Jun 2004 10:34:12 -0700
Newsgroups: comp.arch
Subject: Re: Infiniband - practicalities for small clusters
Greg Pfister wrote in message news:<40c0a1fc_1@news1.prserv.net>...
Yes, strange things happened as a result of various corporate heavies considering anything but os/360 "nonstrategic" and "the enemy."

All the time there was a reasonably good system, VM, that was forever getting its budget cut, getting moved to boondocks locations, etc. It survived a long time, all the way to now, due to the dedication of its developers and the ultimate realization by the powers that be that it made bringing up a new os/360 or MVS system much easier.

I knew one of the developers, back in around 1975. He said they were routinely being scheduled out of allocated time on new systems to bring up VM on new gear. After they reached the point of being able to host the new gear, he "fixed" that problem by regularly coming in late at night and physically disabling the main console. MVS wouldn't come up without it. VM would. So they were forced to run VM to get any time, at which point he would host their test sessions and simultaneously be able to run his own.


another example was when they shut down the development group in burlington mall ... the people were initially told that they were all being transferred to POK, that there would be no more vm/370 releases, and that they would work on the internal-only "vmtool" in support of mvs/xa development. that saw a number of people staying in the boston area and going to work for places like dec and prime.

similar thread in a.f.c (which also mentions rochester):
https://www.garlic.com/~lynn/2004g.html#35 network history

spool

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sun, 6 Jun 2004 18:29 -0700
Newsgroups: alt.folklore.computers
Subject: Re: spool
Rich Alderson wrote in message news:<mddekotwnn3.fsf@panix5.panix.com>...
It's a retronym. People spoke of spooling printer output to tape, for example, on 709x-class machines.

my first programming job was re-implementing 1401 MPIO on a 360/30. The university ran student jobs on a 709 tape-to-tape .... the unit-record<->tape was handled by a 1401 front-end. I was given a copy of the 1401 MPIO "binary" bootable program ... which could be run on the 360/30 when switched to 1401 hardware emulation mode.

my job was to implement equivalent function in 360 assembler running on the 360/30 (in 360 mode). I got to invent my own interrupt handling, device drivers, memory management, multi-tasking etc.

Later on the university installed HASP to handle student jobs on a larger 360. This was the houston automatic spooling (i.e. you didn't need an operator moving tapes between machines). HASP eventually morphed into JES2 (this was a single processor infrastructure ... until later when JES2 got multi-machine shared-spool).

There was also ASP ... which was a two-processor spooler ... somewhat akin to the 709/1401 lash-up ... except it was two 360s ... moving data back and forth using shared disk. ASP morphed into JES3 (my wife did a stint in the g'burg JES group and was one of the "catchers" for ASP).

lots of ancient hasp/jes refs:
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/95.html#7 Who built the Internet? (was: Linux/AXP.. Reliable?)
https://www.garlic.com/~lynn/96.html#9 cics
https://www.garlic.com/~lynn/96.html#12 IBM song
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#58 When did IBM go object only
https://www.garlic.com/~lynn/99.html#76 Mainframes at Universities
https://www.garlic.com/~lynn/99.html#77 Are mainframes relevant ??
https://www.garlic.com/~lynn/99.html#92 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#94 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#110 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#113 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#117 OS390 bundling and version numbers -Reply
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/99.html#212 GEOPLEX
https://www.garlic.com/~lynn/2000.html#13 Computer of the century
https://www.garlic.com/~lynn/2000.html#55 OS/360 JCL: The DD statement and DCBs
https://www.garlic.com/~lynn/2000.html#76 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#10 IBM 1460
https://www.garlic.com/~lynn/2000c.html#18 IBM 1460
https://www.garlic.com/~lynn/2000c.html#29 The first "internet" companies?
https://www.garlic.com/~lynn/2000d.html#36 Assembly language formatting on IBM systems
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#15 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#58 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#71 HASP vs. "Straight OS," not vs. ASP
https://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001e.html#6 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#7 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#69 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001e.html#71 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001f.html#2 Mysterious Prefixes
https://www.garlic.com/~lynn/2001f.html#26 Price of core memory
https://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#48 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001h.html#24 "Hollerith" card code to EBCDIC conversion
https://www.garlic.com/~lynn/2001h.html#60 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#33 Waterloo Interpreters (was Re: RAX (was RE: IBM OS Timeline?))
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#62 ASR33/35 Controls
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#12 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#37 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#60 CMS FILEDEF DISK and CONCAT
https://www.garlic.com/~lynn/2002.html#31 index searching
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
https://www.garlic.com/~lynn/2002c.html#50 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#71 Blade architectures
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#37 Playing Cards was Re: looking for information on the IBM 7090
https://www.garlic.com/~lynn/2002f.html#38 Playing Cards was Re: looking for information on the IBM
https://www.garlic.com/~lynn/2002f.html#53 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
https://www.garlic.com/~lynn/2002h.html#2 DISK PL/I Program
https://www.garlic.com/~lynn/2002h.html#14 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#22 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#73 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#23 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002k.html#48 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002l.html#15 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#53 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002o.html#24 IBM Selectric as printer
https://www.garlic.com/~lynn/2002o.html#68 META: Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#16 myths about Multics
https://www.garlic.com/~lynn/2002p.html#62 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#23 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002q.html#32 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2002q.html#36 HASP:
https://www.garlic.com/~lynn/2002q.html#39 HASP:
https://www.garlic.com/~lynn/2002q.html#49 myths about Multics
https://www.garlic.com/~lynn/2003.html#68 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#55 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#51 vnet 1000th node anniversary 6/10
https://www.garlic.com/~lynn/2003h.html#19 Why did TCP become popular ?
https://www.garlic.com/~lynn/2003h.html#31 OT What movies have taught us about Computers
https://www.garlic.com/~lynn/2003h.html#60 The figures of merit that make mainframes worth the price
https://www.garlic.com/~lynn/2003i.html#12 Which monitor for Fujitsu Micro 16s?
https://www.garlic.com/~lynn/2003i.html#18 MVS 3.8
https://www.garlic.com/~lynn/2003i.html#32 A Dark Day
https://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003j.html#19 tcp time out for idle sessions
https://www.garlic.com/~lynn/2003k.html#13 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003l.html#11 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
https://www.garlic.com/~lynn/2003m.html#39 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003m.html#56 model 91/CRJE and IKJLEW
https://www.garlic.com/~lynn/2003n.html#22 foundations of relational theory? - some references for the
https://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#40 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004b.html#55 Oldest running code
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#9 TSS/370 binary distribution now available
https://www.garlic.com/~lynn/2004c.html#10 XDS Sigma vs IBM 370 was Re: I/O Selectric on eBay: How to use?
https://www.garlic.com/~lynn/2004c.html#26 Moribund TSO/E
https://www.garlic.com/~lynn/2004c.html#27 Moribund TSO/E
https://www.garlic.com/~lynn/2004c.html#28 Moribund TSO/E
https://www.garlic.com/~lynn/2004c.html#32 Moribund TSO/E
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#39 System/360 40th Anniversary
https://www.garlic.com/~lynn/2004d.html#69 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future
https://www.garlic.com/~lynn/2004e.html#51 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#29 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004g.html#8 network history
https://www.garlic.com/~lynn/2004g.html#35 network history (repeat, google may have gotten confused)

IBM 7094 Emulator - An historic moment?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Mon, 7 Jun 2004 13:00:42 -0700
Newsgroups: alt.folklore.computers
Subject: Re: IBM 7094 Emulator - An historic moment?
Brian Inglis wrote in message news:<h5d6c0dfm8rsk4v4k771l7s5d1e7e8v2u1@4ax.com>...
Batching PUT tapes until the patches-to-patches settled down (after the summer students had left or been hired), or some giant "feature" patch was coming out, worked for me. Occasionally a third party product would get burned because IBM didn't disclose third party product APARs to customers in the PUT bucket. Falling too far behind meant that you might have to do a lot of catch up in order to be able to apply a critical patch.

VM/370 put out monthly PLC accumulated source & binary maint. tapes.

in the mid-80s i was on a business trip to madrid ... the madrid science center was involved in digitizing a bunch of documents ... getting ready for a cdrom to be issued in conjunction with an anniversary in 1992.

one evening i went to a theater in downtown madrid ... which included a short film produced at the univ. i don't remember the plot or subject now ... but part of the set was a room with a wall of TVs ... all scrolling the same text at possibly 1200 baud. The weird thing was that not only did I recognize it to be a vm/370 load map ... but i also recognized the PLC from the apar listing that was part of the load map comments.

when I did the resource manager ... before the initial release, there was extensive benchmarking and regression testing that involved over 2000 benchmarks and three months elapsed time.
https://www.garlic.com/~lynn/submain.html#bench
they originally wanted me to release an updated resource manager PLC tape on the same schedule as the base product. I argued that the regression test process for the resource manager would involve at least 100 benchmarks ... that took a minimum of 48hrs ... so I was unable to put out an updated PLC tape more often than once every three months (i did the architecture, design, development, test, release, pubs, documentation, and classes, as well as 1st, 2nd, and 3rd level field support).

misc.
https://www.garlic.com/~lynn/subtopic.html#fairshare

[URL] (about) passwords

From: lynn@garlic.com
Date: Mon, 7 Jun 2004 15:48:49 -0700
Newsgroups: sci.crypt
Subject: Re: [URL] (about) passwords
Mok-Kong Shen <mok-kong.shen@t-online.de> wrote in message news:<c9qmv4$ts2$06$1@news.t-online.com>...
Title: Passwords can sit on hard disks for years
http://www.newscientist.com/news/news.jsp?id=ns99995064


recent posting in similar thread:
https://www.garlic.com/~lynn/aadsm17.htm#42

there are various kinds of shared-secrets that can sit around for years ... not just the obvious pin/passwords ... shared-secrets that may lend themselves to identity theft and/or various kinds of account hijacking.

command line switches [Re: [REALLY OT!] Overuse of symbolic constants]

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Mon, 7 Jun 2004 17:47:23 -0700
Newsgroups: alt.folklore.computers
Subject: Re: command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
jmfbahciv@aol.com wrote in message news:<40c30d31$0$2944$61fed72c@news.rcn.com>...
This is another poser for me. Why did there have to be more than one sharable segment in the user's address space? Was it a common practice to co-routine between the COBOL and FORTRAN libraries?

Or am I stumbling because my thinking style isn't pure data processing (which is what IBM was an expert at)?


a big production example was co-routining between APL and FORTRAN. One of the largest time-sharing services was the internal HONE system
https://www.garlic.com/~lynn/subtopic.html#hone
that supported world-wide marketing, sales, branch and field people. The US HONE datacenter had something approaching 40,000 defined (US) users (late 70s; it was a large disk farm, with eight SMP 370 processor complexes ... disk controller infrastructure at the time allowed up to eight different processor complexes to connect to the same set of disks). The HONE system had a front-end that handled various load-balancing, routing, fall-over, etc. between the user population and the data center resources.

the HONE-delivered applications were almost exclusively APL-based. However, some number of extremely numerically intensive operations were implemented in Fortran. So there was some amount of, effectively, APL co-routining to a Fortran application/library.

I deployed a major production page-mapped filesystem
https://www.garlic.com/~lynn/submain.html#mmap
and position independent shared-code,
https://www.garlic.com/~lynn/submain.html#adcon
for HONE services (as well as for some number of other internal data processing centers) on a release 2 VM/370 base. A subset of the feature/function was made available in the VM/370 release 3 product shipped to customers.

An example of HONE applications was the configurators. Starting with the 370/115 & 370/125, it was no longer possible for a salesman to fill out a computer order manually ... it had to be done thru a HONE configurator (aka for a salesman to submit any mainframe computer order ... it had to be created with a HONE configurator).

i've mentioned before that this APL infrastructure offered a lot of features that are done in various spreadsheet implementations today.

Another application was a performance predictor ... given some representative trace data from a customer's operations &/or some other generic profile of what went on in the customer's datacenter ... the salesman could run various what-if scenarios regarding adding more hardware, upgrading the processor, adding more disk, etc.

The segment stuff was also used for various kinds of CMS things somewhat akin to DOS TSR functionality ... adding a specific REXX command processor to the address space to be utilized by an editor command macro processor (aka the editor provides a sort of generalized command macro facility ... that can involve a variety of different actual macro languages ... one specific such macro language being REXX).

This could be considered a difference between the CMS model and, say, the UNIX model ... the CMS model tended to be co-routine with pointer passing ... as opposed to the bi-directional pipes that you might implement using different address spaces in, say, Unix. In the CMS model, the editor might read each individual statement from the macro command file ... and do a call to the appropriate command language processor (with the editor and the command processor running in the same address space).
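purely as a toy illustration of that contrast (this isn't CMS or Unix code; all the names below are made up): in the co-routine model the macro processor runs in the editor's own address space and can be handed a reference to the editor's state, while in the pipe model it is a separate process that gets the text shipped through a pipe and back:

# toy contrast between the two models described above (python; names made up)
import subprocess, sys

def rexx_like_processor(stmt: str, shared_state: dict) -> None:
    # co-routine style: runs in the editor's own address space and can be
    # handed a reference to the editor's state directly -- no copying
    shared_state["last"] = stmt.upper()

def editor_coroutine_model(macro_lines):
    state = {}
    for stmt in macro_lines:                 # editor reads each macro statement ...
        rexx_like_processor(stmt, state)     # ... and calls the processor in-process
    return state

def editor_pipe_model(macro_lines):
    # unix-ish style: the macro processor is a separate address space; the
    # editor ships the text through a pipe and reads the transformed text back
    proc = subprocess.run([sys.executable, "-c",
                           "import sys; sys.stdout.write(sys.stdin.read().upper())"],
                          input="\n".join(macro_lines), capture_output=True, text=True)
    return proc.stdout

if __name__ == "__main__":
    print(editor_coroutine_model(["set case upper"]))
    print(editor_pipe_model(["set case upper"]))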

some of the recent posts in this &/or related threads:
https://www.garlic.com/~lynn/2004f.html#11 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#14 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#23 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#24 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#26 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#43 can a program be run withour main memory ?
https://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#55 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#59 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#60 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#62 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004g.html#1 before execution does it require whole program 2 b loaded in

Sequence Numbbers in Location 73-80

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Tue, 8 Jun 2004 16:27:04 -0700
Newsgroups: bit.listserv.ibm-main
Subject: Re: Sequence Numbbers in Location 73-80
gilmap@ibm-main.lst wrote in message news:<200406081633.i58GXhg04691@sanitas>...
Of course, this scheme relies on use of a certain editor which may be distasteful to some of your programmers. I'll not take sides; everyone should be supported in using the editor he finds most comfortable. ISPF under TSO supports only one editor; under CMS, two. Both numbers are too small; the proper number should be "any".

-- gil


the original CMS in the mid 60s had an update command, with a default increment of 1000 in the sequence number. the editor could resequence and you could specify the increment. with the update command, you had to manually put in both the control commands and the respective sequence numbers in the inserted/replaced records.

I was making so many changes ... that I wrote a preprocessor that recognized "$" on the ./ control commands, which instructed the preprocessor to insert sequence numbers in the inserted/replaced records as appropriate. this was the late 60s and it still used a single "update" file.

in the early '70s, an exec process was defined for handling multi-level updates, with a control file that specified the various update files to be included before generating the working source copy (whole sequences of update files could be applied sequentially ... eventually resulting in the working source file).

in the transition to vm/370 (and the cambridge monitor system becoming the conversational monitor system ... still CMS), many of the editors were enhanced to support the multi-level update process; specify the file to be edited and the control file ... and the edit process would apply all of the necessary update files before presenting the resulting working source file for editing. Then any changes that were made during the edit session were in turn saved in update file format ... as opposed to a completely changed working file.
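as a rough sketch of how such a multi-level update scheme hangs together (the "./" control-card syntax below is simplified, not the exact CMS UPDATE format): each update file names records by their col. 73-80 sequence numbers and says whether to delete, replace, or insert after them, and the control file's updates are applied in order to produce the working source:

# simplified sketch (python) of applying a stack of sequence-numbered update
# files to a base source; "./ D", "./ R", "./ I" here are a simplification,
# not the exact CMS UPDATE control-card syntax.

def apply_update(source, update_cards):
    """source: list of (seqno, text); update_cards: control cards + new text."""
    out, i = list(source), 0
    while i < len(update_cards):
        card = update_cards[i]; i += 1
        if not card.startswith("./ "):
            continue
        op, *args = card[3:].split()
        lo, hi = int(args[0]), int(args[-1])
        if op == "D":                          # ./ D first last -> delete range
            out = [p for p in out if not (lo <= p[0] <= hi)]
            continue
        new = []                               # gather the text lines for R or I
        while i < len(update_cards) and not update_cards[i].startswith("./ "):
            new.append(update_cards[i]); i += 1
        if op == "R":                          # replace the range lo..hi
            pos = next((k for k, (s, _) in enumerate(out) if s >= lo), len(out))
            out = [p for p in out if not (lo <= p[0] <= hi)]
        else:                                  # "I": insert after record lo
            pos = next((k for k, (s, _) in enumerate(out) if s > lo), len(out))
        # provisional fractional seqnos; a real system assigns/resequences later
        out[pos:pos] = [(lo + (k + 1) / 100.0, t) for k, t in enumerate(new)]
    return out

def build_working_source(base, updates_in_order):
    work = base                                # apply the whole stack in order
    for upd in updates_in_order:
        work = apply_update(work, upd)
    return work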

the vm370/cms community had a convention of resequencing all source (by default in increments of 1000) every new release. This created something of a problem for installations that had significant source updates to the ibm products. A couple of utilities were created:

• delta) basically, take the most recent previous IBM release of unresequenced source and apply all ibm updates to create the working file that is nominally compiled/assembled to produce the executable binary. Then run delta against this set of working files and the newly released IBM resequenced source. Delta would produce an incremental update file that can be applied to the unresequenced source to exactly reproduce the source in the new release (but using the previous release's unresequenced source).

• reseq) given two files that were otherwise exactly the same except for the sequence number fields (i.e. the first could be the working file from a previous release with the "delta" file applied), take all incremental updates that might be applied to the first working file and convert their sequence numbers to apply to the second file.

so, modulo actual functional conflicts introduced by a new release, the combination of delta & reseq could be used to convert an extensive set of local &/or non-ibm incremental updates from the previous release's sequence numbers to the newly released, resequenced numbers.
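and a minimal sketch of the reseq idea, under the same simplified record format as above (again just an illustration, not the actual utility): walk the two otherwise-identical files in parallel to build an old-to-new sequence-number map, then rewrite the numbers on the local updates' control cards:

# minimal sketch of "reseq": the two inputs differ only in their sequence
# numbers, so zip them to get an old->new map, then re-aim local updates.

def build_seq_map(old_file, new_file):
    """both args are lists of (seqno, text) whose texts match line for line."""
    assert [t for _, t in old_file] == [t for _, t in new_file]
    return {old: new for (old, _), (new, _) in zip(old_file, new_file)}

def reseq_update(update_cards, seq_map):
    """rewrite the sequence numbers on './ ' control cards using the map."""
    fixed = []
    for card in update_cards:
        if card.startswith("./ "):
            op, *args = card[3:].split()
            args = [str(seq_map.get(int(a), int(a))) for a in args]
            card = "./ " + " ".join([op] + args)
        fixed.append(card)
    return fixed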

misc. past posts
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2003.html#62 Card Columns
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003j.html#14 A Dark Day
https://www.garlic.com/~lynn/2003j.html#36 CC vs. NIST/TCSEC - Which do you prefer?
https://www.garlic.com/~lynn/2003j.html#45 Hand cranking telephones
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!

Sequence Numbbers in Location 73-80

Refed: **, - **, - **
From: lynn@garlic.com
Date: Tue, 8 Jun 2004 20:21:01 -0700
Newsgroups: bit.listserv.ibm-main
Subject: Re: Sequence Numbbers in Location 73-80
the cms cascading/incremental updates also adopted the convention of putting a 1-8 char id starting at col. 71 and working down. for ptf/apars this would frequently be the apar number ... but could be other feature code identification. with the convention of merging and resequencing every major release ... the base source code could accumulate quite a few identification codes in cols 64-71.

example of some source code convention from past posting
https://www.garlic.com/~lynn/2003b.html#42 VMFPLC2 tape format

command line switches [Re: [REALLY OT!] Overuse of symbolic constants]

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Wed, 9 Jun 2004 11:47:08 -0700
Newsgroups: alt.folklore.computers
Subject: Re: command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
Brian Inglis wrote in message news:<kp96c098f51fhgcq9371ch23lp28t02f9v@4ax.com>...
The CP kernel had to be built at absolute zero because 360/370 interrupt vectors were hardwired to low addresses. The kernel symbol table DMKSYM was built into each kernel so you could easily patch the system on the fly. Most common use for this was DST time change, as IBM systems ran on local time zones, but time could be offset for DST.

when i originally did the pageable (CP) kernel stuff on cp/67 ... i had to fiddle with the loader because the changes broke some stuff up into 4k blocks (the paging stuff runs real ... so it sort of operates a little bit like the os/360 transient area ... but can use any available page) which created more entry points and exceeded the 255 limit of the BPS loader used by CP/67.

while finagling, i also noted that the BPS loader, on exit to the loaded program, passed the address of the start of the symbol table and the number of entries. So I did this hack ... copying the symbol table to the end of the pageable kernel. This was better than DMKSYM, since DMKSYM had to be maintained for every defined symbol. Copying the actual symbol table meant that you always had all symbols. Appending the BPS loader table to the end of the pageable kernel got dropped in the morph from cp/67 to vm/370.

i leveraged DMKSYM for a hack during FS
https://www.garlic.com/~lynn/submain.html#futuresys

all the super-classified FS documents were online on a super-secured VM system ... one of the FS-associated people (who i had been repeatedly panning ... as per the references to the movie in central sq) ... started claiming that they had so locked down the system that even if I (aka LHW) was physically in the machine room, I couldn't get to them. My response was it would take less than five minutes. So I demonstrated after work one weekend ... the five minutes involved disabling the machine from all external access ... using locate on DMKSYM to find the authentication checking module entry point and then flipping a bit in the branch instruction after the check for correct or incorrect authentication ... aka everything entered was then treated as correctly authenticated.

this is not unlike the recent spate of counterfeit emv cards referred to in the EU-press as yes cards
https://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
https://www.garlic.com/~lynn/aadsm17.htm#13 A combined EMV and ID card
https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
https://www.garlic.com/~lynn/2003o.html#37 Security of Oyster Cards

PL/? History

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PL/? History
Newsgroups: bit.listserv.ibm-main
Date: Fri, 11 Jun 2004 17:41:45 -0600
ptduffy042@ibm-main.lst (Peter Duffy) writes:
When did SL/I fit into this? "Student Language One" was the first piece of software I opened a PMR on. This was in high school in 1974 and some of the more advanced output editing formats didn't work right. An output field with commas, decimals with leading dollar signs and trailing minuses I think was the one I called on.

how 'bout PL/M ... there is a cp/67, cp/m, and pl/m tie-in
https://www.garlic.com/~lynn/2004e.html#38 [REALLY OT!] Overuse of symbolic constant
http://computing-dictionary.thefreedictionary.com/PL/M

note also that multics was written in pl/1
http://web.mit.edu/Saltzer/www/publications/f7y/f7y.html
http://www.multicians.org/multics-source.html

and then there was pl360
http://portal.acm.org/citation.cfm?id=321442&dl=ACM&coll=portal

and pl/c from cornell
http://www.fact-index.com/p/pl/pl_c_programming_language.html
http://home.nycap.rr.com/pflass/PLI/PLC/v5628003.htm

languages related to PL/I
http://home.nycap.rr.com/pflass/relate.htm

another CP/67 tie-in ... BRUIN was also shipped early on with cp/67

also see SL/1 in the above reference, which mentions it running on the 1130

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PL/? History

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PL/? History
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers,bit.listserv.vmesa-l
Date: Sat, 12 Jun 2004 07:19:07 -0600
in addition to
https://www.garlic.com/~lynn/2004g.html#46 PL/? History

the boston programming center had cps ... conversational programming system ... that supported a conversational PL/1 and a basic implementation.

multics (mentioned in previous post) was on the 5th floor, cambridge science center (did cp/67, cms, gml, interactive stuff, internal network, performance modeling, some of the origins of capacity planning, etc) was on the 4th floor, and the boston programming center was on the 3rd floor.

with the expansion of the development group and the morphing of cp/67 to vm/370, the development group split off from the science center and absorbed much of the boston programming center. in that same morph, cms was also altered from the cambridge monitor system to the conversational monitor system.

some specific cps references:
https://www.garlic.com/~lynn/2004.html#32 BASIC Language History?

other random past posts on cps, boston programming center:
https://www.garlic.com/~lynn/2000d.html#37 S/360 development burnout?
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#19 ITF on IBM 360
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003h.html#34 chad... the unknown story
https://www.garlic.com/~lynn/2003k.html#0 VSPC
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2004.html#20 BASIC Language History?
https://www.garlic.com/~lynn/2004d.html#42 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004e.html#37 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hercules

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sun, 13 Jun 2004 08:22:03 -0700
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Hercules
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) wrote in message news:<200406130235.i5D2Z6lN028086@jefferson.patriot.net>...
What year? Different levels of the language had different names, starting with BSL and including the ones you list. I'm not sure whether PL/8 was ever in the mix or whether that was strictly 801.

pl.8 ... a subset of pl/1 ... for the 801. cp/r (somewhat in the cp/40 & cp/67 tradition of naming control programs) was written in pl.8 ... it was the original operating system for the displaywriter follow-on ... but that got canceled and the platform was retargeted to a unix workstation. random pl.8, cp/r refs:
https://www.garlic.com/~lynn/subtopic.html#801

slightly related threads
https://www.garlic.com/~lynn/2004g.html#46 PL/? History
https://www.garlic.com/~lynn/2004g.html#47 PL/? History

Adventure game (was:PL/? History (was Hercules))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Tue, 15 Jun 2004 17:18:37 -0700
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Adventure game (was:PL/? History (was Hercules))
john.mckown@ibm-main.lst (McKown, John) wrote in message news:<3718408C4D654A4D89223E69DC010CC601B29F91@uicnrhechp3.uicnrh.dom>...
I got a version that I put on VM/370 (yes, that long ago). One of the ACP (now TPF) programmers was so bored that he wrote an EXEC2 (or maybe REXX - I forget) program which use the CMS stack to stack all the Adventure commands to successfully run the entire game from start to successful conclusion.

what i remember is that somebody at Tymshare copied it from a DEC machine at stanford (sail?) to a tymshare machine and then ported it to their vm/370 time-sharing service. I was going to get it on tape ... but instead, via a very roundabout way ... somebody in the UK got it from tymshare to an IBM (internal) machine ... and then sent the source to me over the internal network.

I made the binary available internally and would send the source to anybody that made the 300 pts. somebody in STL made 300 early on ... and then did a port from Fortran to PLI ... adding another 150 points along the way (450 pt game instead of 300).

at one point, the claim was that business at many internal plant sites almost came to a halt with so many people playing the game. At some point STL management announced a 24 hr grace period and then no more playing the game during the normal work period. in bldg. 28 we managed to preserve a recreational area for things like games ... with the argument that if an order went out to delete things that weren't strictly business ... it would just result in people coming up with ingenious ways to disguise private copies.

I wonder if anybody at ALM has access to a backup of the old recreational area that might possibly have the early source.

random past refs:
https://www.garlic.com/~lynn/98.html#56 Earliest memories of "Adventure" & "Trek"
https://www.garlic.com/~lynn/99.html#83 "Adventure" (early '80s) who wrote it?
https://www.garlic.com/~lynn/99.html#84 "Adventure" (early '80s) who wrote it?
https://www.garlic.com/~lynn/99.html#169 Crowther (pre-Woods) "Colossal Cave"
https://www.garlic.com/~lynn/2000d.html#33 Adventure Games (Was: Navy orders supercomputer)
https://www.garlic.com/~lynn/2001m.html#14 adventure ... nearly 20 years
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001n.html#12 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002d.html#12 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2004c.html#34 Playing games in mainframe

Chained I/O's

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Tue, 15 Jun 2004 21:17:12 -0700
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Subject: Re: Chained I/O's
john.mckown@ibm-main.lst (McKown, John) wrote in message news:<3718408C4D654A4D89223E69DC010CC601B29F88@uicnrhechp3.uicnrh.dom>...
Basically, in the S/360 and later series of machines, all I/O is done by a separate computer called a Channel. This channel takes its orders from a program called, cleverly, a Channel Program. A channel program consists of a series of CCW (Channel Command Words). There is a bit in each CCW which tells the channel if another CCW exists and, if so, what the CCW contains. That is, some CCWs, say: "The next CCW is another command" (aka command chaining). Other CCWs say: "The command in the next CCW is to be ignored. Only use the address and length fields, but do what this CCW says to do." This is called "data chaining" and is used for what is often called "gather / scatter" processing.

for the majority of the 360 & 370 machines ... there tended to be a micro-engine with some programming that emulated the 360/370 instruction set (in somewhat the same way hercules simulates the 360/370 instruction set ... many of the boxes had about the same avg. ratio of micro-instructions to 360/370 instructions as found in hercules). these micro-engines frequently had a separate set of "time-shared" microprogramming that implemented the channel programming function.

this is quite strikingly seen in the 303x channel director. the 370/158 had integrated channels with the 158 engine processing both the 370 instruction set programming and the channel programming. For the 303x channel director, they took the 158 engine and stripped out the 370 instruction set programming ... leaving just the 158 integrated channel programming supporting up to six channels. The 3031 was essentially the 158 engine with only the 370 instruction set programming ... and a second 158 engine implementing the channel (director) programming. In that sense all 3031 uniprocessors ... were in actuality two-processor 158 configurations sharing the same memory (but running different micro-programs).

The 3032 was essentially a 168-3 configured to work with a 303x channel director.

A big issue with the 360 channel programming specification was that it precluded prefetching ... so when trying to support longer distances at higher speeds, the serialized, synchronous fetching of each channel command ... at the channel and control unit level ... started to become a significant latency issue.

In part, the introduction of IDALs with 370 addressed a timing problem that cp/67 encountered on the 360/67 when attempting to break up channel programs with data chaining ... i.e. when a CCW referenced sequential virtual addresses that crossed a page boundary (and the backing real pages weren't contiguous). The fix was to split the CCW into two (or more) data-chained CCWs ... which under extreme circumstances could encounter timing difficulties. IDALs allowed for the prefetching of the address lists for non-contiguous, page-crossing operations.
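a rough sketch of what that translation has to do (simplified, with 4k pages assumed; this isn't real channel-program code): one virtual buffer that crosses page boundaries maps to real pages that need not be adjacent, so it has to be broken into per-page pieces ... either as extra data-chained CCWs fetched one at a time, or as an indirect address list (IDAL) that the channel can prefetch:

# rough sketch (python, not channel-program code): splitting one virtual-buffer
# transfer into per-page real (address, length) pieces, because the backing
# real page frames need not be contiguous.
PAGE = 4096

def virt_to_real(vaddr, page_table):
    # page_table maps virtual page number -> real page frame number
    return page_table[vaddr // PAGE] * PAGE + (vaddr % PAGE)

def split_buffer(vaddr, length, page_table):
    pieces = []
    while length > 0:
        chunk = min(length, PAGE - (vaddr % PAGE))    # stop at the page boundary
        pieces.append((virt_to_real(vaddr, page_table), chunk))
        vaddr += chunk
        length -= chunk
    return pieces

# a 6000-byte buffer at virtual 0x1800 spans two virtual pages; with those pages
# in non-adjacent real frames, a single CCW can't describe the transfer:
print(split_buffer(0x1800, 6000, {1: 40, 2: 7}))
# data chaining: each piece becomes its own CCW, fetched serially;
# IDAL: one CCW plus this list of real addresses, which can be prefetched.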

The original VS2 prototype implementation started out as MVT with 360/67 virtual memory support stitched in ... along with the CCWTRANS module from cp/67; aka the module that copied the virtual address space CCWs to "real" CCWs, did the fixing of virtual pages, and handled the virtual-to-real page translation and the necessary fixup for contiguous virtual addresses involving non-contiguous real storage addresses.

So data-chaining was sort of the original scatter/gather support ... but it was somewhat subsumed by IDALs ... especially for handling contiguous virtual addresses mapped to non-contiguous real addresses ... in part to circumvent the timing problems associated with only being able to serially handle a single CCW at a time (the no-prefetching rule didn't apply to IDAL address lists).

Channel busy without less I/O

Refed: **, - **, - **
From: lynn@garlic.com
Date: Wed, 16 Jun 2004 15:57:12 -0700
Newsgroups: bit.listserv.ibm-main
Subject: Re: Channel busy without less I/O
bblack@ibm-main.lst (Bruce Black) wrote in message news:<40D078A1.7000702@fdrinnovation.com>...
Not quite. DSS (ADRDSSU) reads up to 1 cylinder per I/O (OPTIMIZE(4)). FDR reads either 1/2 or 1 cylinder per I/O. It is possible to use appendages or exits to kick off the next I/O during processing of the previous I/O termination (actually QSAM/BSAM does this) but these are counted as separate I/Os.

oldtime multi-track search could busy out a channel, controller, and device for a full cylinder's worth also ... and it wasn't necessarily predictable. most recent repeat of an archeological story in comp.arch as part of a thread on how really bad tso response was:
https://www.garlic.com/~lynn/2004g.html#11

Chained I/O's

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Chained I/O's
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 20 Jun 2004 19:00:20 -0600
"Kelli Halliburton" writes:
On the contrary, I'm complaining about their absence, and not being sarcastic about it either. I have been looking at how quite an expert system on IBM lore could be constructed using these related articles. Lynn already has most of the connections indexed.

i was on the road ... and have been having trouble with my standard usenet server, so have been resorting to google ... and was making some slightly abbreviated postings.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Chained I/O's

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Chained I/O's
Newsgroups: alt.folklore.computers
Date: Mon, 21 Jun 2004 09:04:02 -0600
jmfbahciv writes:
Warning: thread drift... Speaking of travel... Do not try to come to Boston or surrounding areas during the Democrat convention. The Dems are succeeding in pissing off every conceivable union and schmuck who works for a living. The latest target was the taxi cab drivers. About the only ones they haven't POed ...yet... are the fishermen, but I'm sure the Dems will think of something.

possibly a more interesting place ... I was at a workshop at NIST several weeks ago and got invited to give a talk at the GGF conference
http://www.ggf.org/webevents/publish/webevent.cgi?cmd=opencal&cal=cal44,cal40,cal41,cal42,cal43,cal52,cal53,cal49,cal50,cal51,cal45,cal46,cal47,cal48&d=06&m=06&y=2004
NOTE above is now 404, main site:
http://www.ggf.org/ggf_events_past_11.htm
pointer to talk is on this page
http://forge.ggf.org/sf/docman/do/listDocuments/projects.ogsa-wg/docman.root.meeting_materials_and_minutes.ggf_meetings.ggf11

titled GGF-11-design-security-nakedkey

slightly related
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

effeciently resetting a block of memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: effeciently resetting a block of memory
Newsgroups: comp.programming,alt.folklore.computers
Date: Mon, 21 Jun 2004 09:12:28 -0600
jdallen2000@yahoo.com (James Dow Allen) writes:
Many processors have special opcodes to facilitate this; Intel, e.g., has REPZ STOSW. IBM 370 has MVCL. IBM's opcode (as well as Intel's, IIRC) is interruptible: an exception can arise after which the registers are updated to allow the memory set/move to continue.

I'm crossposting to a.f.c so I can tell a little anecdote.

The first IBM 370/165's did not use the MVCL opcode but were soon retrofitted for it. Those machines with AMS add-on memory then needed their power supplies replaced with higher-amperage units. Otherwise those supplies would "crowbar" off whenever software did a long MVCL!

James


the 360 ops would check the starting and ending locations for the various kinds of access permission ... all had to be available for the instruction to execute. the (new) 370 "long" instructions (in theory) operated a byte at a time ... were interruptable ... and didn't require both the start and the end to be accessible before beginning to operate.

vm/370 modified its boot procedure to use the MVCL clearing option across all of storage ... and relied on the program check (and the registers having been incrementally updated) to indicate end-of-memory. the 115/125 had a m'code "bug" in the long instructions where they still used the 360 rules ... requiring start & end to be accessible before starting the instruction. as a result vm/370 would only calculate a couple K-bytes of storage ... because none of the register values had changed.
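
for illustration, a small C simulation (not real 370 code) of the difference: with the 370 long-instruction semantics the interrupted operation leaves the (simulated) register pointing at the first invalid address, which is exactly what the sizing logic depends on; with the 360-style pre-check the register never moves. the 512K "installed" size and the structure are assumptions for the sketch.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define INSTALLED (512u * 1024u)          /* pretend 512K installed   */
static uint8_t storage[INSTALLED];        /* simulated real storage   */

static int accessible(uint32_t addr) { return addr < INSTALLED; }

/* 370 "long instruction" semantics: operate a byte at a time, keep the
   register updated, and take the addressing exception only when the
   invalid byte is actually reached; returns the final register value,
   i.e. the first invalid address */
static uint32_t mvcl_370(uint32_t start, uint32_t len)
{
    uint32_t r = start;
    while (len-- && accessible(r))
        storage[r++] = 0;                 /* clear and advance        */
    return r;                             /* updated register value   */
}

/* the 115/125 microcode "bug" as described: apply the old 360 rule and
   require both the first and last locations to be accessible before
   doing anything, so on an exception the register never advances */
static uint32_t mvcl_buggy(uint32_t start, uint32_t len)
{
    if (!accessible(start) || !accessible(start + len - 1))
        return start;                     /* registers unchanged      */
    memset(storage + start, 0, len);
    return start + len;
}

int main(void)
{
    uint32_t huge = 16u * 1024u * 1024u;  /* ask to clear "everything" */
    printf("370 semantics -> sized memory at %u bytes\n",
           (unsigned)mvcl_370(0, huge));  /* finds the real 512K      */
    printf("115/125 bug   -> sized memory at %u bytes\n",
           (unsigned)mvcl_buggy(0, huge));/* learns essentially nothing */
    return 0;
}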

minor past refs:
https://www.garlic.com/~lynn/2000c.html#49 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000g.html#8 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001f.html#69 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2002i.html#2 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2003.html#13 FlexEs and IVSK instruction
https://www.garlic.com/~lynn/2003j.html#27 A Dark Day
https://www.garlic.com/~lynn/2004b.html#26 determining memory size
https://www.garlic.com/~lynn/2004c.html#33 separate MMU chips

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The WIZ Processor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The WIZ Processor
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 21 Jun 2004 11:47:08 -0600
"David W. Schroth" writes:
I'm not clear what Stephen means when he refers to traditional "virtual memory", much less what he sees as the advantages and big disadvantages. The current architecture is both segmented and paged. The only feature of the architecture that I regard as novel is that the virtual address space is divided into address tree levels that are used to separate addresses by their level of "sharing" - the OS lives at the most shared level of the address tree (level 0), shared code and data live at the next most shared level (level 2), process code and data live at the next most shared level (level 4), and thread code and data live at the least shared level (level 6).

is this shared ... as a somewhat obscure synonym for something like privilege ... or shared as somewhat related to frequency of reference (as in LRU replacement)?

i had an argument in the early '70s with the people doing the original OS/VS2 implementation (the original name for what is referred to today as MVS). They wanted to select "non-changed" pages for replacement before selecting "changed" pages. Their implementation at the time kept a "home" location for a page after reading it into real storage .... if the page hadn't been changed during its most recent stay ... then when it was replaced they could avoid a page write (because of the existing copy on disk).

It wasn't until the very late 70s (somewhere in the MVS 3.8 sequence) ... that it finally dawned on them that they were replacing highly shared executables (from "linklib", which were effectively never changed) before replacing lower-usage private data storage areas (which tended to have some probability of having been changed when referenced).
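
a toy C illustration of the effect (the frame contents and ages are made up): with a "take any unchanged page first" rule, the hot, shared, never-changed linklib page is the one that gets stolen, while a plain oldest-reference choice takes the cold private page.

#include <stdio.h>

/* toy page-frame table: reference age (higher = colder) and changed bit */
struct frame { const char *what; int age; int changed; };

static struct frame frames[] = {
    { "shared linklib code, referenced constantly",   1,  0 },
    { "private data page, referenced once, long ago", 90, 1 },
    { "private data page, moderately active",         20, 1 },
};
enum { NFRAMES = sizeof frames / sizeof frames[0] };

/* the OS/VS2 preference described above: take any unchanged page first
   (it needs no page-out write), falling back to the coldest page */
static int prefer_unchanged(void)
{
    int pick = -1;
    for (int i = 0; i < NFRAMES; i++)
        if (!frames[i].changed && (pick < 0 || frames[i].age > frames[pick].age))
            pick = i;
    if (pick >= 0) return pick;
    for (int i = 0; i < NFRAMES; i++)
        if (pick < 0 || frames[i].age > frames[pick].age)
            pick = i;
    return pick;
}

/* plain LRU-style choice: just take the coldest frame */
static int coldest(void)
{
    int pick = 0;
    for (int i = 1; i < NFRAMES; i++)
        if (frames[i].age > frames[pick].age) pick = i;
    return pick;
}

int main(void)
{
    printf("prefer-unchanged steals: %s\n", frames[prefer_unchanged()].what);
    printf("LRU-style choice steals: %s\n", frames[coldest()].what);
    return 0;
}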

degree of sharing ... as in frequency of reference ... may be a totally independent and dynamic pattern, unrelated to any preconceived operating system organization ... except for the degenerate case of no sharing at all.

when I shipped the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
I introduced the concept of active, concurrent sharing .... i.e. effectively a count of the different address spaces actively/recently (as in an LRU paradigm) making use of each shared region.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

War

From: lynn@garlic.com
Date: Sat, 26 Jun 2004 07:28 -0600
Newsgroups: alt.folklore.computers
Subject: Re: War
"Helmut P. Einfalt" wrote in message news:<40dc536c$0$13468$91cee783@newsreader01.highway.telekom.at>...
Are you referring to THE MALLEUS MALEFICARUM (The Witch Hammer) of Heinrich Kramer and James Sprenger?

one of my wife's relatives received some letter from the salem chamber of commerce soliciting money (from salem area descendants) to commemorate some anniversary of the salem witch trials. he wrote back something to the effect that his family had already contributed the entertainment for the original witch trials and that should be sufficient.

Adventure game (was:PL/? History (was Hercules))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Date: Sat, 26 Jun 2004 05:20:09 -0700
Newsgroups: alt.folklore.computers
Subject: Re: Adventure game (was:PL/? History (was Hercules))
jmfbahciv@aol.com wrote in message news:<40d81d40$0$3005$61fed72c@news.rcn.com>...
I don't recall Lynn talking about this aspect of OS development. How many of the fixes were memory management?

dangling pointers and/or storage cancers are way up on the bug hit list .... various kinds of serialization problems ... where memory is released while there are still outstanding operations holding pointers to the location ... or operations never releasing the memory (possibly because they are worried about still-active operations). most dangling pointers are typically considered to be some form of serialization problem. Serialization problems also exhibit themselves as zombie processes (at the operating system level) &/or storage cancers. the benchmarking
https://www.garlic.com/~lynn/submain.html#bench
getting the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

ready for release had some stress tests that were guaranteed to crash the system. This was so repeatable that I eventually totally rewrote the kernel serialization infrastructure as part of preparing the resource manager ... which for a long time eliminated all cases of zombie/hung processes.

some languages are touted as better because they obfuscate the whole allocate/deallocate memory paradigm ... apl, lisp, java.

then there are various kernel performance issues. after many cp/67 kernel optimizations (fastpath, balr linkages, etc.) .... kernel memory allocation/deallocation was approaching 30% of total kernel CPU time. This was brought down to a couple percent with the introduction of "subpools". Prior to that there had been all sorts of work on storage fragmentation algorithms, best-fit list scanning, buddy allocation, etc. the 360/67 had a special RPQ "list" instruction that was used by memory management for scanning the list of available storage looking for a good fit that satisfied the request. This reduced the loop time for scanning several hundred to several thousand entries to just about the storage fetch times. The subpool logic went to a push/pop structure for the most heavily used kernel storage requests. Instead of several thousand storage references (per request) ... a nominal subpool storage allocation took 14 instructions.
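
for illustration, a minimal sketch of the push/pop subpool idea in C (with malloc standing in for the general free-storage chain; the sizes and structure are assumptions, not the actual cp/67 code) -- the common case becomes a constant-time pop or push instead of a scan over a free list:

#include <stddef.h>
#include <stdlib.h>

struct block { struct block *next; };

struct subpool {
    size_t        size;   /* fixed block size served by this pool */
    struct block *top;    /* LIFO stack of free blocks            */
};

static void *subpool_get(struct subpool *sp)
{
    struct block *b = sp->top;            /* pop if anything is cached */
    if (b) { sp->top = b->next; return b; }
    return malloc(sp->size);              /* fall back to general pool */
}

static void subpool_put(struct subpool *sp, void *p)
{
    struct block *b = p;                  /* push: constant time, no scan */
    b->next = sp->top;
    sp->top = b;
}

int main(void)
{
    struct subpool cb_pool = { 128, NULL };   /* e.g. a control-block size */
    void *a = subpool_get(&cb_pool);          /* comes from the general pool */
    subpool_put(&cb_pool, a);                 /* cached on the LIFO          */
    void *b = subpool_get(&cb_pool);          /* popped: same storage, no scan */
    (void)b;
    return 0;
}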

misc. references to cp/67 storage management subpool change
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002h.html#87 Atomic operations redux

in the 3081 time-frame .... this was enhanced to make sure all kernel storage allocations were cache-line aligned. The problem was having fragments of two different storage areas occupying the same cache line, with the different storage areas concurrently being accessed by different processors. going to cache-line aligned storage gave something like a 5-6% performance improvement.
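
a short sketch of the idea in C (the 64-byte line size and the helper name are assumptions for the sketch; the 3081's actual line size may well have been different): round every request up to a whole number of cache lines and align it on a line boundary, so two unrelated areas can never end up sharing -- and ping-ponging -- the same line between processors.

#include <stdlib.h>

#define CACHE_LINE 64u            /* assumed line size for the sketch */

/* round the request up to whole cache lines and align the block on a
   line boundary, so no two separate allocations share a line */
static void *alloc_line_aligned(size_t len)
{
    size_t rounded = (len + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
    return aligned_alloc(CACHE_LINE, rounded);   /* C11 */
}

int main(void)
{
    void *a = alloc_line_aligned(24);   /* takes a full 64-byte line   */
    void *b = alloc_line_aligned(24);   /* never shares a line with a  */
    free(a);  free(b);
    return 0;
}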
