List of Archived Posts

2004 Newsgroup Postings (11/02 - 11/20)

CKD Disks?
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
CKD Disks?
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Multi-processor timing issue
Integer types for 128-bit addressing
some recent archeological threads in ibm-main, comp.arch, & alt.folklore.computers ... fyi
Integer types for 128-bit addressing
Multi-processor timing issue
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
Integer types for 128-bit addressing
Integer types for 128-bit addressing
RISCs too close to hardware?
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Demo: Things in Hierarchies (w/o RM/SQL)
Demo: Things in Hierarchies (w/o RM/SQL)
CKD Disks?
pop secure connection
Integer types for 128-bit addressing
pop secure connection
Integer types for 128-bit addressing
z/OS UNIX
NEC drives
What system Release do you use... OS390? z/os? I'm a Vendor S
Integer types for 128-bit addressing
The Sling and the Stone & Certain to Win
Scanning old manuals
Integer types for 128-bit addressing
pop secure connection
Facilities "owned" by MVS
Facilities "owned" by MVS
Facilities "owned" by MVS
osi bits
how it works, the computer, 1971
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
evoluation (1.4) and ssl (FC2) update
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
Integer types for 128-bit addressing
JES2 NJE setup
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
360 longevity, was RISCs too close to hardware?
Integer types for 128-bit addressing
Relational vs network vs hierarchic databases

CKD Disks?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Nov 2004 09:18:52 -0700
bblack@ibm-main.lst (Bruce Black) writes:
Art, are you sure you didn't work for IBM ISAM? <gr> ISAM was famous for using complicated, branching CKD CCW chains. The ISAM logic manual had pages of such chains.

The downside of such search chains is that they tended to tie up the channel and the disk while all this searching and seeking was going on. It was fine if only the one file was on the disk and there were extra channels for other disks, but they were bad neighbors when systems got bigger and busier.


as per previous post ... even multi-track search of a pds directory could tie up channel, controller, (string) and disk for extended periods of time ... on a 3330 with 19 tracks and 60rps it was almost 1/3rd sec per operation.
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
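
for concreteness, the rotational arithmetic behind that "almost 1/3rd sec" figure can be sketched (a hypothetical back-of-envelope script, using the 3330 geometry above):

```python
# worst-case multi-track search of a full 3330 cylinder: the channel,
# controller and disk all stay busy while every track is scanned,
# one revolution per track.
RPM = 3600              # 3330 spindle speed
TRACKS_PER_CYL = 19

rev_time = 60.0 / RPM                      # seconds per revolution (60 rps)
search_time = TRACKS_PER_CYL * rev_time    # scan all 19 tracks

print(f"{search_time:.3f} sec")            # ~0.317 sec, almost 1/3rd sec
```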

this was the trade-off between a one-level index (pds directory or vtoc) being constantly scanned by the I/O subsystem and keeping the index in memory. With the real storage constraints of the mid-60s, the constant I/O scanning appeared to be a reasonable IO-resource/memory-resource trade-off. By the mid to late 70s ... the situation had exactly reversed.

It wasn't just that ISAM CKD channel programs were long and complicated ... they were effectively implementing multi-level index searches with expensive i/o scanning ... where earlier reads in the chain provided the seek&search CCHHR arguments for subsequent CCWs.

recent post about having to spend a week (back in 1970) at a customer site working on an ISAM issue
https://www.garlic.com/~lynn/2004m.html#16 computer industry scenarios before the invention of the PC?

misc. past ckd posts
https://www.garlic.com/~lynn/submain.html#dasd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch
Date: Tue, 02 Nov 2004 10:12:19 -0700
Benny Amorsen <benny+usenet@amorsen.dk> writes:
Err no, I don't. With the default 3/1GB user/kernel memory split, the limit for directly kernel addressable memory is somewhere around 900MB. You can of course still use memory above that limit, and even memory above the 4GB limit.

With 1024MB installed you end up with a 896MB zone-NORMAL, and a 128MB zone-HIGHMEM, if I recall correctly. The VM has much fun trying to find an effective way to use the 128MB zone-HIGHMEM. In that respect you're better off with 1536MB or 2048MB installed -- you still have the highmem overhead, but at least you get a decent amount of extra memory and no castrated 128MB-zone.


i'm sitting here on a 4gbyte real machine with fedora core 2 ... and it is telling me i only have 3.5gbytes.
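
the zone arithmetic in the quoted text can be sketched as follows (a hypothetical back-of-envelope script; 896MB is the conventional ZONE_NORMAL limit for the default 3G/1G split, exact sizes vary with kernel config):

```python
# 32-bit linux with the default 3G/1G user/kernel split: the kernel can
# directly map ~896MB (ZONE_NORMAL); anything above that becomes
# ZONE_HIGHMEM, which the VM has to map in and out on demand.
ZONE_NORMAL_MAX_MB = 896

def highmem_mb(installed_mb):
    """portion of installed memory (MB) that lands in ZONE_HIGHMEM"""
    return max(0, installed_mb - ZONE_NORMAL_MAX_MB)

for installed in (512, 1024, 1536, 2048):
    print(f"{installed}MB installed -> {highmem_mb(installed)}MB highmem")
```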

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 13:48:34 -0700
"mike" writes:
Both Lynn & Nick have objected that tuning is a problem because applications that use a large address space, much bigger than main memory, will slow down all other concurrent applications.

While early versions of the S/38 suffered from many of these tuning problems, they have been solved since then. OS/400 has the concept of jobs running in "subsystems". The subsystems have optional limits on the amount of main memory available to them. This permits huge memory consuming batch jobs to run concurrently with smaller interactive jobs. They will compete for cycles based on processor priority but will not compete for main memory.

The /400 is also more efficient at sequential file processing than Lynn seems to think based on these old /360-370 references. In the /400 like the /370 virtual memory pages are 4KB. However, disk storage is managed in 16MB segments and the OS can detect sequential processing and bring in whole 16MB segments when appropriate.

I never claimed that an effective, efficient and manageable 128 bit huge sparse virtual system was easy to implement for the system programmers, only that it offers many benefits to application programmers. Since application programmers out number system programmers by 100 to 1 the efficiency gains are worth the effort.


the issue is somewhat analogous to abstracting away the length in C string libraries .... it may improve the productivity of the casual programmer by some significant amount ... but it has also led to something like a two orders of magnitude increase in buffer exploits.

an issue is that totally abstracting away some concepts like locality of reference (as was done in tss/360), while resulting in some productivity increases for the casual programmer ... could lead to enormous thruput difficulties for production systems. GM claimed enormous productivity increases for their programmers developing 32bit address mode applications on tss/360 for the 360/67 ... but there wasn't much mention of things like system thruput.

the issue is trying to abstract concepts for programming productivity while at the same time not totally sacrificing system operational efficiency.

a large number of cp67/cms and vm370 installations were mixed-mode environments running significant batch operations concurrent with loads of interactive activity. i originally did the fairshare resource manager as an undergraduate (actually a generalized policy resource manager with fairshare as the default)
https://www.garlic.com/~lynn/subtopic.html#fairshare

one of the issues in the background was scheduling to the bottleneck ... aka attempting to identify resources that represented significant thruput bottlenecks and dynamically adapting strategies to deal with them. the remapping of the cms filesystem into a memory mapping paradigm ...
https://www.garlic.com/~lynn/submain.html#mmap

was not only dynamically recognizing large logical requests ... but also being able to contiguously allocate on disk ... and bring in large contiguous sections (as appropriate) if there was sufficient real memory for the operation. in situations where real memory was more constrained, either because there wasn't a lot ... or there was a large amount of contention for real memory ... the service requests were dynamically adapted to the operational environment.

note that for quite a long time ... a large amount of the work that went on in rochester actually was done on vm370 systems ... for minor reference, this internal network update from 1983 lists several rochester vm systems
https://www.garlic.com/~lynn/internet.htm#22

for total topic drift circa 1990 ... there was significant contention between rochester and austin with regard to 64bit chip design ... rochester kept insisting on having 65bit rather than 64bit.
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 14:07:23 -0700
part of the issue was that during FS, i frequently commented that I thot what I had deployed in production systems were more advanced than some of the stuff that was being specified in FS; there was this cult film that had been playing for a long time down in central sq ... and i sometimes drew analogies between what was going on in FS and the inmates being in charge of the institution. after FS was killed, it was some number of these FS'ers that went off to rochester to do s/38 ... random future system references:
https://www.garlic.com/~lynn/submain.html#futuresys

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 15:54:07 -0700
"mike" writes:
Indeed, while there may be theoretical performance advantages to Z/OS and CICS, the latest iSeries and pSeries 595 can support tens of thousands of concurrent users and that is more than enough for all but the very largest application.

slight scale-up issue when we were doing ha/cmp
https://www.garlic.com/~lynn/95.html#13

at the time, i was asked to write a section for the corporate continuous availability strategy document .... however, both pok and rochester non-concurred
https://www.garlic.com/~lynn/submain.html#available

however, file i/o can benefit a lot from both contiguous allocation and large block transfers (regardless of the file mapping).

note however (i've been told that) the 400 seems to have fairly heavy-weight file open/close overhead.

we were doing this thing called electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and got involved a couple years ago looking at doing a fairly large deployment of servers for financial transactions. there was a 400 review with a consulting house that specializes in cold fusion on 400s. they claimed that doing mandatory access control on file opens/closes increased overhead by something like ten times and would make it impractical in a 400 environment.

i had demonstrated contiguous allocation and block transfers when mapping the cms filesystem to a page mapped infrastructure in the '70s. lots of the *ixes frequently default to a relatively scattered allocation strategy (basic cms filesystem allocation tended to be much like the *ixes with scattered allocation). even tho cms (& cp) used ckd disks ... their original strategy from the mid-60s was to treat ckd disks as logical FBA having fixed records ... and tended toward a scattered record allocation strategy. when i did the remap to a page mapped infrastructure ... i also put in contiguous allocation support and (large) block transfer support. note the page mapped stuff never shipped to customers as part of the standard product ... although it was used extensively at internal sites like hone
https://www.garlic.com/~lynn/subtopic.html#hone

there has been recent thread running in another n.g. on ckd, fba, etc
https://www.garlic.com/~lynn/2004n.html#51 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#0 CKD Disks?

lots of other ckd posts
https://www.garlic.com/~lynn/submain.html#dasd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 02 Nov 2004 19:13:38 -0700
"mike" writes:
On the other hand: a) starting new jobs on the /400 is more like starting a new VM on VM/CMS or TSO session on MVS and "pre-start jobs", a skeleton job waiting to be used, are available. b) files can be opened multiple times concurrently with different usages - read, append, update, etc with several separate handles allowing many programs to open files once and leave them open for an entire session. c) one or more jobs can be used as servers to support transactions from multiple devices just like CICS on the mainframe. d) database journalling and remote database journaling can be used for high availability. e) the /400 has automatic tuning jobs that watch file and disk usage. These jobs balance the workload across disk arms, make file allocations contiguous and place "hot" objects for fastest retrieval.

i wasn't saying that they weren't available on the 400 ... but possibly they were made available 10 to sometimes 20 years after other infrastructures had done some of them (it is a lot easier if there has been an example around for 20 years to copy ... as opposed to needing to invent it from scratch). in fact, a lot of the popular lore about one level store from tss/360, future system, and s/38 was wrong ... and needed quite a bit of evolution for the 400 (although i don't know whether that happened in the 80s for the cisc 400 or not until the 90s for the risc 400 ... i do know that i was deploying some amount of the stuff in the 70s).

with respect to watching files for re-org, ... i think even windows/xp now has something similar. however, bunches of this stuff is orthogonal to whether there is a one level store paradigm or not ... just simple operational stuff that has been done (in some cases for decades) for operational systems.

the cold fusion/financial example was specifically a web server scenario where huge numbers of file open/closes were happening. i did get the impression that some amount of the file open/close overhead ... was in fact related to the way the 400 handled one level store objects. the issue (compared to some locked-down unix web servers) was that you couldn't be running a cics-like operation with light-weight threads (frequently in the same address space) with startup pre-opened files (long ago and far away, when i was an undergraduate, the university got to be beta-test for the original cics on a project for library card catalog automation with a grant from onr ... i got the privilege of shooting several of the cics bugs).

the mention of the continuous availability strategy document wasn't so much about high availability ... although we started out calling the project we were doing ha/cmp ... but along the way we coined the terms disaster survivability and geographic survivability .... in fact along the lines of what some large financial transaction systems have done with IMS hot-standby (geographically triple redundant). misc. stuff from the past ... semi-related to IMS hot-standby
https://www.garlic.com/~lynn/submain.html#shareddata

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CKD Disks?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 02 Nov 2004 21:43:30 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
The 3375 used 3370 technology under the covers. AFAIK the 3340, 3344 and 3350 were the last true CKD disk drives that IBM sold.

and while 3375 used control logic to map/emulate ckd to 3370 fba ... the 3344 used control logic to map/emulate multiple 3340s to a *real* 3350 drive.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 03 Nov 2004 09:00:30 -0700
Jan Vorbrüggen <jvorbrueggen-not@mediasec.de> writes:
As I understand it, XP watches file accesses during boot (and probably also during login) and re-orders those files for sequential access. Makes for substantially faster booting, especially if you have a slow (laptop) disk drive.

when i was an undergraduate ... i re-orged the os/360 system ... by carefully ordering the sequence in which files were built on the disks. for the typical university workload, thruput was increased by a factor of three (typical job elapsed time was decreased by 2/3rds). recent post
https://www.garlic.com/~lynn/2004n.html#23 Shipwrecks

two benefits of the windowing paradigm associated with the page-mapped filesystem ... were that it gave efficient hints to the system for both 1) loading the new stuff and 2) flushing the stuff no longer being used (even for stuff that wasn't being sequentially accessed ... where nominal read-ahead strategies aren't triggered). this is somewhat analogous to some of the cache preload instructions that give hints about storage to be used:
https://www.garlic.com/~lynn/submain.html#mmap
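
a present-day analogue of those load/flush hints (a sketch only, not anything from the original implementation) is madvise() on a memory-mapped file ... WILLNEED hints the kernel to pre-load pages, DONTNEED says the application is finished with them. the MADV_* constants are platform-dependent (and need python 3.8+), so the sketch guards for their presence:

```python
import mmap
import os
import tempfile

def window_hints(path):
    """map a file and issue load/flush hints; returns the hints issued"""
    hints = []
    fd = os.open(path, os.O_RDWR)
    try:
        m = mmap.mmap(fd, 0)
        try:
            if hasattr(mmap, "MADV_WILLNEED"):
                # hint: about to use the first page -- kernel may pre-load it
                m.madvise(mmap.MADV_WILLNEED, 0, mmap.PAGESIZE)
                hints.append("willneed")
            if hasattr(mmap, "MADV_DONTNEED"):
                # hint: finished with everything past the second page --
                # kernel may reclaim those page frames immediately
                m.madvise(mmap.MADV_DONTNEED, 2 * mmap.PAGESIZE)
                hints.append("dontneed")
        finally:
            m.close()
    finally:
        os.close(fd)
    return hints

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (4 * mmap.PAGESIZE))
    path = f.name
hints = window_hints(path)
os.unlink(path)
print(hints)
```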

of course this would be enhanced by something like vs/repack ... attempting to group program & data being used together into a more condensed memory collection. recent post referencing vs/repack from the 70s:
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing

basically vs/repack and disk re-org technologies are very similar ... analysis of access patterns and attempting to re-org to maximize system thruput ... where vs/repack applied to things like working set (the granularity of pattern analysis used by vs/repack was 32 bytes, which would also make it applicable to cache lines).

at sjr in 1979, i did a special modification to the vm/370 kernel to capture record addresses of all disk accesses (fba and ckd; it was extended in fall of 1980 to support extended-ckd, aka Calypso). this was deployed on internal installations in the san jose area capturing typical cms usage ... as well as a lot of mvs usage patterns (run under vm) from operations at stl.

the trace information was originally used in a detailed file cache model ... which investigated various combinations of disk drive, disk controller, channel, groups of channels (like the 303x channel director) and system cache strategies. one of the results of the cache study was that for any total amount of cache storage ... the most efficient use of that storage was a system-level cache ... aka 20mbytes of system-level cache was more efficient than 1mbyte caches on 20 different disks. this corresponds with the theory of global LRU replacement algorithms ... work that i did in the 60s as an undergraduate
https://www.garlic.com/~lynn/subtopic.html#wsclock

the published literature at the time (in the 60s) was very oriented towards local LRU replacement strategies ... and I showed that global LRU replacement strategies always outperformed local LRU (except in a few, fringe, pathological cases). some ten years later, this work became involved in somebody's phd at stanford.
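
the system-level-cache result can be illustrated with a toy simulation (hypothetical trace and sizes, not the 1979 data): one shared LRU cache vs the same number of slots partitioned evenly across per-disk caches ... with uneven per-disk locality, the partitioned caches strand slots on cold disks while the hot disk is starved:

```python
import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, slots):
        self.slots = slots
        self.store = OrderedDict()   # keys in LRU order, oldest first
        self.hits = 0

    def touch(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)          # now most recently used
        else:
            if len(self.store) >= self.slots:
                self.store.popitem(last=False)   # evict least recently used
            self.store[key] = True

def compare(trace, n_disks=4, total_slots=100):
    """hit rates for one shared cache vs per-disk partitions of equal total size"""
    shared = LRUCache(total_slots)
    split = [LRUCache(total_slots // n_disks) for _ in range(n_disks)]
    for disk, block in trace:
        shared.touch((disk, block))
        split[disk].touch(block)
    shared_rate = shared.hits / len(trace)
    split_rate = sum(c.hits for c in split) / len(trace)
    return shared_rate, split_rate

# skewed trace: disk 0 gets 70% of references over a small hot set;
# the other disks see scattered references over much larger sets
random.seed(1)
trace = [(0, random.randrange(60)) if random.random() < 0.7
         else (random.randrange(1, 4), random.randrange(200))
         for _ in range(20000)]

shared_rate, split_rate = compare(trace)
print(f"shared {shared_rate:.2f}, partitioned {split_rate:.2f}")
```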

The next stage of using the disk record level trace information was showing how it could be used for arm load balancing as well as clustering of information that was frequently used together. There was an issue regarding the volume of such trace data for standard production systems, ... but a methodology was developed for being able to reduce the information in real time.

Note that at the time ... the standard vm system had a "MONITOR" facility that would record all sorts of performance & thruput related data ... but it went to disk in raw format. What was developed was a much more efficient implementation targeted at being of low enuf overhead that it could potentially always be active in standard production systems ... and be able to support file load balancing and file clustering as part of normal system operation.

misc. other posts about activities done in conjunction with disk engineering and product test labs
https://www.garlic.com/~lynn/subtopic.html#disk

as distinct from posts about ckd disk
https://www.garlic.com/~lynn/submain.html#dasd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 03 Nov 2004 11:19:05 -0700
oh, and couple random past posts about the disk activity analysis, file/disk cache modeling work, etc. in 79-80:
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 03 Nov 2004 17:55:42 -0700
"David Wade" writes:
The way the system fits together has a certain beauty. CP(VM) has a pretty simple task. It makes VM's which are "machines". I don't think these are like the VM's in other OS's. They are low level VMs. A 370, with disk, tape and console. There is no file system, No shared write access to DASD/files, the two things that make a conventional multi-user system complex. It just carves the disks into STATIC chunks and it's up to the users how they access them. Yes it tweaks the IO programs, but it just has to relocate the cylinders and check for writes... (and you can have shared memory, but its a really simple model)

CMS is also pretty simple, like an early MSDOS. It's a single user OS. No virtual memory, very simple file system, talks to one terminal.. Very low overhead...

I seem to remember many years ago that in terms of OS over head, it was about 1/10 of that of MVS. So I think from a program issuing a disk read instruction it took around 100,000 instructions before the SIO got sent to the channel, compared with 1,000,000 in MVS.

Of course sharing files in this environment was "interesting"

CP is around 260,000 lines of assembler code, CMS slightly less.... I think this is a bit less than MVS/TSO...


note that some number of applications running under cms are effectively the same as their counterparts running under mvs (compilers, assemblers, etc). i used to joke (in the '70s) that the cms 64kbyte os/360 simulation ... didn't quite do everything that the mvs 8mbyte os/360 simulation did ... but it did a significant subset ... and much, much faster and more efficient.

the original cms (cambridge monitor system) under cp/67 ran as if it was on the real machine ... and in fact, it could run on a real 360.

i was doing a bunch of cp/67 pathlength optimizations as an undergraduate ... as well as developing fair share scheduling, global lru page replacement, etc ... for running os/360 in a cp/67 virtual machine, i reduced some amount of cp/67 pathlength by up to two orders of magnitude ... compared to the original version that had been installed at the university.

one of the issues then became cp/67 support of cms operations. since each cms (virtual machine) was pretty much single thread ... the faithful SIO/LPSW-wait/interrupt/resume virtual machine simulation was pretty superfluous. I originally created a special disk I/O operation (cms disk operations were extremely uniform, so their translation could be special-cased/fastpathed) that was treated as a CC=1, csw-stored operation ... aka the return to the cms virtual machine after the SIO instruction was when the operation had been totally completed. With some other operations ... this cut typical cp67 pathlength supporting cms virtual machines by 2/3rds or more.

The people controlling cp/67 architecture claimed that this exposed a violation of the 360 principles of operation ... since there were no defined disk i/o transfer commands that resulted in cc=1/csw-stored. However, they observed that the 360 principles of operation defines the hardware *diagnose* instruction as model-specific implementation. They took this as an opportunity to create the abstraction of a cp67 virtual machine as a specific 360 model ... which defined some number of *diagnose* instruction operations specific to operation in a cp67 virtual machine. the cc=1/csw-stored simulation for sio was redone as a variation of a special *diagnose* instruction.

when i was redoing the page replacement algorithm, the disk device driver (including adding ordered seek queueing to what had been FIFO, and chained requests for the 2301 fixed-head drum), interrupt handler, task switching, dispatching, etc .... i got the total round-trip pathlength for taking a page fault, executing the page replacement algorithm, scheduling the page read (and write if necessary), starting the i/o, doing a task switch, handling the subsequent page i/o interrupt, and the task switch back to the original task ... down to nearly 500 instructions total for everything. this is compared to a typical bare minimum of 50k (and frequently significantly more) instructions of pathlength for most any other operating system. note that later vm/370 versions possibly bloated this by a factor of five to six (maybe 3k instructions).

the ordered seek queueing allowed the 2314 to peak at over 30 i/o requests per second ... rather than the 20 or so with fifo queueing. the 2301 fixed-head drum originally had single-request page transfers and would max out at about 80 transfers per second .... with chained requests ... and any sort of queue ... it could easily hit 150 transfers per second and peak at nearly 300/second under worst-case scenarios.
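
the fifo-vs-ordered difference can be sketched with a toy seek model (a hypothetical request queue; real gains also depend on rotational position and arrival patterns):

```python
def fifo_seek_distance(start, requests):
    """total cylinders travelled servicing requests in arrival order"""
    pos, total = start, 0
    for cyl in requests:
        total += abs(cyl - pos)
        pos = cyl
    return total

def ordered_seek_distance(start, requests):
    """elevator order: sweep up through requests above the arm,
    then back down through the rest"""
    up = sorted(c for c in requests if c >= start)
    down = sorted((c for c in requests if c < start), reverse=True)
    return fifo_seek_distance(start, up + down)

# 8 queued requests on a 200-cylinder disk, arm at cylinder 100
queue = [183, 12, 147, 55, 190, 33, 160, 74]
print("fifo:", fifo_seek_distance(100, queue))       # 986 cylinders
print("ordered:", ordered_seek_distance(100, queue)) # 268 cylinders
```

with the arm spending less time in motion for the same set of requests, more of each second is left for transfers ... which is where the 20-to-30 requests/second improvement comes from.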

an old posting on some of this
https://www.garlic.com/~lynn/93.html#31 Big I/O or kicking the Mainframe out the Door

for a little drift ... appropriately configured vs/1 operating systems with handshaking would run faster in a vm/370 virtual machine than on the bare hardware w/o vm/370. part of this was because the handshaking interface allowed responsibility for all vs/1 page i/o operations to be handled by vm/370 (rather than by vs/1 native code which had a significantly longer pathlength).

the original cp/67 could boot and do useful work on a 256kbyte real 360/67.

the machine at the university was a 768kbyte real 360/67 ... but there was an issue of the fixed kernel starting to grow over 80kbytes (and this was all code in the cp/67 kernel, including all the console functions and operator commands). if you added the fixed data storage for each virtual machine, you could easily double the fixed storage requirements. various development was starting to add features/functions & commands, so the fixed kernel requirements were growing. to address some of this at the university, i did the original support for "paging" selected portions of the cp/67 kernel ... to help cut down on the fixed kernel requirements. This pageable kernel was never released to customers (although lots of the other stuff i had done as an undergraduate was regularly integrated into the standard cp/67 release). The pageable kernel stuff did finally ship with vm/370.

later, near the tail-end of cp/67 cycle ... about when vm/370 was ready to come out ... I did the translation of the cms filesystem support to a new feature in cp/67 supporting paged mapped disk operations ... along with a whole bunch of stuff extending the concept of shared segments.
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

I finally got around to porting all of this to early vm/370 release 2 and making it available to a large number of internal installations ... for instance it was used extensively at hone for a number of things
https://www.garlic.com/~lynn/subtopic.html#hone

a small subset of the shared segment stuff was incorporated into vm/370 release 3 and released as something called discontiguous shared segments. one of the reasons that the discontiguous shared segment stuff is such a simple and small subset of the full stuff ... was that the original code heavily leveraged the cms page mapped filesystem work as part of the shared segment implementation.

the page mapped version of the cms filesystem was never shipped in a standard vm/cms product (except for a custom version for xt/at/370). and since the page mapped cms filesystem stuff didn't ship ... none of the fancy bells & whistles that were part of the original shared segment support shipped either.

while the *diagnose* i/o significantly cut the pathlength for supporting cms disk i/o ... it retained the traditional 360 i/o semantics ... which, when mapped to a virtual address space environment ... required all the virtual pages in the operation to be fixed/pinned in real storage before the operation was initiated ... and then subsequently released ... which still represents some amount of pathlength overhead.

going to paged mapped semantics for cms filesystem in virtual address space environment allows a much more efficient implementation (in part because the paradigms are synergistic as opposed to being in conflict).

a few recent descriptions of the overhead involved in mapping the 360 channel i/o semantics into a virtual memory environment
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#0 IBM 360 memory
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
https://www.garlic.com/~lynn/2004m.html#16 computer industry scenario before the invention of the PC?
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?

random past posts mentioning pageable kernel work:
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#64 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004b.html#26 determining memory size
https://www.garlic.com/~lynn/2004f.html#46 Finites State Machine (OT?)
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 04 Nov 2004 06:27:37 -0700
jmfbahciv writes:
A lot of times, doing a tweak of software would help (raising the core limit) or there were times that the hardware needed configuration. In addition, learning about how your system worked during the day over a week gave the sysadmin enough information to plan future hardware upgrades.

the system performance model done in (originally cms\apl) apl at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
evolved into driving the benchmarks for the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
calibration and validation
https://www.garlic.com/~lynn/submain.html#bench
evolved into the performance predictor tool on the hone systems
https://www.garlic.com/~lynn/subtopic.html#hone
for world-wide sales and marketing support (being able to answer questions about the effects of some hardware change on the customer's workload) and significantly helped drive the evolution of performance tuning into capacity planning.

one of the jokes leading up to releasing the resource manager ... was that an enormous amount of work went into dynamic adaptive system operation ... and somebody in the corporation ruled that the "state of the art" for resource managers was lots of performance tuning knobs for use by the system tuning witch doctors ... and that the resource manager couldn't be released w/o any performance tuning knobs.

So i added some number of resource tuning knobs, documented the formulas for how the tuning knobs worked and gave classes on them ... and all the source was shipped with the product. With all of that, as far as I know, nobody caught the joke. The issue was that the base implementation operated with a lot of dynamic feedback stuff as part of its dynamic adaptive operation. The hint is from traditional operations research .... compare the degrees of freedom given the base implementation with the degrees of freedom given the add-on performance tuning knobs (and whether the native base implementation could dynamically compensate for any change in any tuning knob).

part of the driving factor (leading up to the joke) was that the big batch TLA system had hundreds of parameters and there were tons of studies reported at share about effectively random walks through parameter changes ... attempting to discover some magic combination of tuning parameter changes that represented something meaningful.

part of the issue is that over the course of a day or a week or a month ... there can be a large variation in workload and thruput characteristics ... and the common, "static", tuning parameter methodology (from the period) wasn't adaptable to things like natural workload variation over time. Specific, fixed parameters might be better during some parts of the day and worse at other parts of the day ... and the reverse might be true of other specific, fixed parameters .... so there might be a large number of different combinatorial parameter settings all with similar, avg. overall results.
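the operations-research point behind the joke can be shown with a toy model (not the shipped code; names and formula are invented for illustration): if the base scheduler runs on dynamic feedback that normalizes measured consumption, a multiplicative tuning knob contributes no independent degree of freedom ... the feedback renormalizes it away, so every knob setting yields the same relative scheduling order:

```python
# Toy sketch: priorities computed from measured consumption relative to
# each user's share, with a "tuning knob" scaling the raw values.
# Because the dynamic feedback step normalizes over all users, the knob
# cancels out -- changing it can't change the resulting ordering.

def priorities(consumption, shares, knob=1.0):
    raw = {u: knob * consumption[u] / shares[u] for u in consumption}
    total = sum(raw.values())          # dynamic feedback: renormalize
    return {u: r / total for u, r in raw.items()}
```

in this toy, any "random walk" through knob settings would produce the same scheduling behavior ... which is the joke about the add-on knobs having fewer effective degrees of freedom than the dynamically adaptive base.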

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 04 Nov 2004 08:57:02 -0700
"David Wade" writes:
and its up to the users how they access them Yes it tweaks the IO programs, but it just has to relocate the cylinders and check for writes... (and you can have shared memory, but its a really simple model)

the base vm/370 (and cp/67) shared memory model was primarily focused on sharing read-only pages for (page) performance (reduced paging, reduced real-storage requirements).

the basic vm/370 structure had something called DMKSNT that had tables of named systems ... and pointed to reserved page slots on disk. A privileged user could issue a "savesys" command referencing a DMKSNT named system entry. The named system entry specified a set of virtual memory page addresses ... in the savesys command, the current contents of those specified virtual pages (belonging to the entity issuing the command) would be written to the specified reserved disk slots.

the "ipl" command simulated the front panel hardware IPL (initial program load, aka boot) button. a form of the ipl command could specify a named system ... in which case the issuer's virtual memory would be cleared and initialized with pointers to the corresponding reserved disk locations. The named system could also specify spans of virtual page addresses (segments) that were to be read-only shared across all users of that named system.
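the savesys/ipl flow described above can be sketched as a toy data structure (all names and shapes invented; the real DMKSNT was an assembled table inside the cp kernel):

```python
# Hypothetical sketch of the DMKSNT-style flow: a named-system table maps
# a name to a page range and reserved disk slots; SAVESYS writes the
# issuer's current page contents to those slots; IPL <name> resets the
# virtual machine and maps the saved pages back in, reporting which
# segments are to be read-only shared.

DMKSNT = {}  # name -> {"pages": range, "slots": dict, "shared": set}

def define_named_system(name, pages, shared_segments):
    """System-generation step: reserve slots for a named system."""
    DMKSNT[name] = {"pages": pages, "slots": {}, "shared": shared_segments}

def savesys(name, vm_memory):
    """Privileged SAVESYS: copy the named pages to the reserved slots."""
    ent = DMKSNT[name]
    for p in ent["pages"]:
        ent["slots"][p] = vm_memory.get(p)

def ipl_named_system(name, vm):
    """IPL <name>: full virtual-machine reset, then map saved pages in."""
    ent = DMKSNT[name]
    vm.clear()                        # the disruptive part: everything resets
    for p in ent["pages"]:
        vm[p] = ent["slots"][p]
    return set(ent["shared"])         # these segments are read-only shared
```

note the `vm.clear()` step ... it models the complete virtual machine reset that made the IPL paradigm unsuitable for application code, which is the problem the following paragraph describes.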

the problem was that this was primarily useful for sharing (virtual) kernel type software (like much of the cms kernel) but didn't work well with application code ... since the IPL command also completely reset the user's virtual machine.

as part of redoing the cms filesystem to a paged mapped infrastructure, the result effectively subsumed the DMKSNT named system function ... and could be applied to any set of virtual pages ... and could be defined by all users (not just the limited set of things in the DMKSNT table under privileged control). The page mapped semantics included the ability to specify segments as read-only shared ... as part of the page mapping operation.

one of the most extensive early adopters of this was HONE
https://www.garlic.com/~lynn/subtopic.html#hone

their environment was primarily an extremely large APL application that provided a very constrained environment for the sales and marketing people world-wide. It provided almost all of the interactive environment characteristics ... and many users weren't even aware that CMS (and/or vm/370) existed.

The issue was that they wanted 1) "cms" named system with shared segments, 2) custom "apl" application named system that consisted of a custom APL interpreter that included much of the APL code for the customized environment integrated with the interpreter ... and almost all of it defined as shared segments, 3) over time it was realized that some number of the applications that had been written in APL ... could benefit from 10:1 to 100:1 performance improvement if it was recoded in fortran, and 4) they then needed to be able to gracefully transition back and forth between APL environment and the Fortran environment ... completely transparent to the end-user.

In the IPL paradigm supporting shared-segments ... there was no way of gracefully transitioning back & forth between the apl environment (with lots of shared segments) and the fortran environment.

Also, because of the ease with which shared-segments could be defined in the page-mapped filesystem environment ... it was easy to adopt a number of additional cms facilities to the read-only shared segment paradigm.

For vm/370 release 3, it was decided to release the enhanced, non-disruptive (non-IPL command) implementation of mapping memory sections. A subset of the CP code was picked up (but w/o the cms filesystem page mapped capability); the non-disruptive mapping had to be kludged into the DMKSNT named system paradigm. Some number of additional "name tables" were added to DMKSNT ... and a LOADSYS function was introduced ... which performed the memory mapping function of the IPL command w/o the disruptive reset of the virtual machine. Some amount of the CMS application code that had been reworked for the read-only, shared-segment environment was also picked up (and mapped into the DMKSNT named system paradigm). This extremely small subset function was released as Discontiguous Shared Segments in vm/370 release 3.

there were still all the restrictions of having a single set of system-wide named systems that required special administrative privileges to manage ... and it only applied to mapping new (and, for shared segments, only read-only) pages into the virtual address space.

in the original filesystem paged mapped implementation ... any image in the filesystem was available for mapping into the virtual address space ... and the access semantics were provided by the filesystem infrastructure (aka trivial things like if you didn't have access to the specific filesystem components ... then you obviously couldn't map them into your address space).

misc. past posts on the subject:
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

some recent archeological threads in ibm-main, comp.arch, & alt.folklore.computers ... fyi

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: some recent archeological threads in ibm-main, comp.arch, & alt.folklore.computers ... fyi
Newsgroups: bit.listserv.vmesa-l
Date: Thu, 04 Nov 2004 09:09:23 -0700

https://www.garlic.com/~lynn/2004k.html#46 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004k.html#47 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004k.html#49 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004k.html#51 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#0 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#2 IBM 3090 : Was (and fek that) : Re: new computer kits
https://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#9 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#18 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004l.html#20 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#22 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#24 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004l.html#61 Shipwrecks
https://www.garlic.com/~lynn/2004l.html#70 computer industry scenairo before the invention of the PC?
https://www.garlic.com/~lynn/2004l.html#72 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#73 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#74 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#3 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004m.html#7 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#10 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#11 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#18 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#20 Whatever happened to IBM's VM PC software?
https://www.garlic.com/~lynn/2004m.html#25 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#26 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#30 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#36 Multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#45 Multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#47 IBM Open Sources Object Rexx
https://www.garlic.com/~lynn/2004m.html#48 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
https://www.garlic.com/~lynn/2004m.html#54 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#58 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#63 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#0 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#3 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#6 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#7 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#13 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#17 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#27 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#34 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#36 Shipwrecks (dynamic linking)
https://www.garlic.com/~lynn/2004n.html#37 passing of iverson
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#51 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#0 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#2 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#4 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#5 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#6 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#10 Multi-processor timing issue
https://www.garlic.com/~lynn/2004o.html#11 Integer types for 128-bit addressing

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 04 Nov 2004 17:43:14 -0700
"David Wade" writes:
I think IBM did it the other way round. CP would simulate a 16Meg virtual machine regardless of the amount of real memory in the machine. And CMS did not have any understanding of Virtual Memory or paging. As CMS only had to support a single user this was O.K.

In fact you could/can IPL an OS that needs Virtual Memory in a Virtual Machine but when you did so CP had to simulate the paging hardware and performance takes a dive. Later machines had (I think) "assists" (microcode) that help in this case.


360 principles of operation & architecture defined a relatively clean privileged instruction model ... anything that affected the state of the machine was a privileged instruction. CP/67 ran virtual machines in virtual memory mode and problem state. Any time a virtual machine attempted to execute a supervisor/privileged instruction, it would program check into the cp/67 kernel ... and the cp kernel would simulate the operation according to various virtual machine rules. In some sense, the cp/67 kernel became the "microcode" of the virtual machine since it was "interpreting" all supervisor state instructions.
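the trap-and-simulate scheme above can be sketched in miniature (opcodes and state layout invented for illustration; the real cp/67 did this in 360 assembler off the program-check new PSW):

```python
# Hypothetical trap-and-emulate sketch: the guest runs in problem state;
# any privileged opcode "program checks" into the host, which simulates
# the instruction against the *virtual* machine's state rather than
# letting it touch the real machine.

PRIVILEGED = {"LPSW", "SSK", "SIO"}       # a few sample 360 privops

def run(guest, program):
    """Dispatch loop standing in for hardware execution + traps."""
    for op, arg in program:
        if op in PRIVILEGED:
            simulate_privop(guest, op, arg)   # program check -> cp kernel
        else:
            guest["regs"][arg[0]] = arg[1]    # problem-state work runs native

def simulate_privop(guest, op, arg):
    """The cp-kernel side: interpret the privop per virtual-machine rules."""
    if op == "LPSW":
        guest["psw"] = arg                    # load the *virtual* PSW
    elif op == "SSK":
        guest["keys"][arg[0]] = arg[1]        # set a *virtual* storage key
    elif op == "SIO":
        guest["pending_io"].append(arg)       # queue simulated channel I/O
```

the overhead ratio described in the next paragraph falls directly out of this structure: every privileged instruction costs a trap plus simulation, while problem-state instructions cost nothing extra.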

the ratio of cp/67 kernel "overhead" execution to virtual machine "problem state" execution tended to be associated with the ratio of supervisor state instructions to problem state instructions in the code running in the virtual machine. for whatever reason, the ratio of supervisor state instructions to problem state instructions significantly increased in the transition from os/360 MFT to os/360 MVT.

The transition of os/360 MVT to VS2/SVS was even more dramatic, in part because the cp kernel was now faced with emulating the hardware TLB with a lot of software (since the SVS kernel was now using virtual address space architecture ... while MVT had been running as if it was in real address space).

the 158 & 168 first got (limited) microcode virtual machine "assists" (vma); the vm/370 kernel would set a (privilege) control register ... and when various privilege instructions were encountered by the "hardware", ... rather than interrupting into the cp kernel, the native machine microcode would execute the instruction according to virtual machine rules (rather than "real machine" rules).

The amount and completeness of the virtual machine assists increased over time ... until you come to PR/SM and LPARs ... where a logical partition can be defined and the microcode completely handles everything according to virtual machine rules (w/o having to resort to a cp kernel at all).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multi-processor timing issue

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-processor timing issue
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 07:05:58 -0700
jmfbahciv writes:
We would have worried about offending the customer who did get the joke. OTOH, the people who did today's equivalent of sysadmin at our customer sites appreciated the hooks for a few knobs when they had PHBs who wanted to help run the system.

i would have taken most of the heat ... it was possibly the only corporate product that customers commonly referred to using the author's name (as opposed to the provided corporate name).

since all the code shipped ... they could actually do anything they wanted to it. there were actually knobs ... but they governed administrative resource policy issues ... as opposed to performance tuning issues. note that even experienced operators weren't able to change performance tuning knobs in real-time as workload and requirements changed over the course of the day.

as always
https://www.garlic.com/~lynn/subtopic.html#fairshare

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 07:51:22 -0700
pg_nh@0409.exp.sabi.co.UK (Peter Grandi) writes:
Perhaps at your IBM office people tolerated you for trying to do your job well, but usually that kind of attitude gets you labeled as a non-team-player, a know-it-all busybody. :-)

In other countries, or perhaps on the opposite coast, in the offices of another big monopoly, people would consider it as a challenge to write an i/o subsystem that would crash and hang the system as often as possible, and get away with it. :-)


it was at the internal disk engineering and product test labs (bldg 14 & 15 on the san jose plant site)
https://www.garlic.com/~lynn/subtopic.html#disk

they had been doing all their testing using stand-alone machine time ... that had to be serialized/scheduled among all the testers and different testcells. they had tried running concurrently under MVS but the MTBF for the operating system was on the order of 15 minutes.

the objective was a bulletproof i/o subsystem so that all disk engineering activities could go on simultaneously & concurrently sharing the same machine.

of course it had other side effects ... since the machines under heavy testcell load ran at possibly 1 percent cpu utilization ... which meant that we could siphon off a lot of extraneous & otherwise unaccounted-for cpu. the engineering & product test labs tended to be the 2nd to get the newest processors out of POK (typically something like serial 003, still engineering models ... but the processor engineers had the first two ... and then disk engineering and product test got the next one).

at one point, one of the projects that was needing lots of cpu and having trouble getting it allocated from the normal computing center machines was the air bearing simulation work ... for designing the flying disk (3380) heads. dropped it on a brand new 3033 engineering model in bldg. 15 ... and let it rip for all the time it needed.

a recent posting about dealing with a 3880 issue when it was first deployed in bldg. 15 ... for a standard string of 16 3330 drives as part of interactive use by the engineers ... fortunately it was still six months prior to first customer ship ... so there was time to improve some of the issues:
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware

there was a problem raised from a somewhat unexpected source. there was an internal-only corporate report and the MVS RAS manager in POK strenuously objected to the mention of the MVS 15 minute MTBF. There was some rumor that the objection was so strenuous that it quashed any possibility of an award for significantly contributing to the productivity of the disk engineering and product test labs.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch
Date: Fri, 05 Nov 2004 07:59:19 -0700
torbenm@diku.dk (Torben Ægidius Mogensen) writes:
Reminds me of another Japan/US comment I once heard: "The reason the Japanese economy works better than the US economy is that Japan has ten engineers for each lawyer, while the US has ten lawyers for each engineer".

several weeks ago, in another forum, i posted a quote from some UK financial institution about improving economic conditions in the 2nd largest economy in the world, japan. i got called to task for posting the quote ... because japan is not the 2nd largest economy in the world ... the EU is (and i should know better ... or maybe UK financial institutions should know that?)

more recently there was some issue that as the EU consolidation proceeds, shouldn't various international bodies replace the individual EU member country memberships with a single EU membership.

my first trip to japan was to do the HONE
https://www.garlic.com/~lynn/subtopic.html#hone

installation for IBM Japan in Tokyo in the early 70s. At that time, the yen exchange was greater than 300/dollar. what is it now?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 09:54:43 -0700
oh, and one reason that the disk engineering and product test labs tolerated me ... was that i wasn't in the gpd organizations. i had a day job in research (bldg 28) ... the stuff in bldg. 14&15 was just for the fun of it. something similar for the hone complex further up the peninsula or for stl/bldg-90 or for the vlsi group out in lsg/bldg.29. i would just show up and fix problems and go away. one of the harder problems was keeping track of all the security authorizations for the different data centers. I didn't exist in any of those organizations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 13:10:29 -0700
"David Wade" writes:
Ah the joys of VM under VM. When I went on the VM systems programming courses of course we ran VM under VM (at St Johns Wood. 4361 I think)..

And didn't it crawl when the whole class did it. I think we only had eight or nine on the course but it did slug response....


one of the issues is that shadow table maintenance follows the hardware TLB maintenance rules. for instance, how big a pool of shadow tables do you keep ... for when the virtual machine switches address space register(s). typical virtual memory operating systems tend to have a large number of virtual address spaces (modulo the single address space versions like vs1; even SVS had at least "two" virtual address spaces ... although they are almost the same).

The simple case is to have a single set of shadow tables and every time the virtual machine switches virtual address space, wipe it clean and start all over. This was the original implementation up until release 5/HPO, which would support keeping around a few shadow tables (per virtual machine) ... and hopefully when the virtual machine switched virtual address spaces, it was to one that had been cached.
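the release-5/HPO improvement above can be sketched as a small cache (structure and names invented; the real shadow tables were hardware-format segment/page tables maintained by the cp kernel):

```python
# Hypothetical sketch: keep a small LRU pool of shadow tables keyed by
# the guest's page-table origin (its control-register value), so that
# switching back to a recently used guest address space reuses a cached
# shadow table instead of wiping one clean and refaulting everything.

from collections import OrderedDict

class ShadowPool:
    def __init__(self, size=3):
        self.size = size
        self.tables = OrderedDict()   # guest page-table origin -> shadow table

    def switch(self, guest_pto):
        """Guest switched address spaces; return (shadow_table, cache_hit)."""
        if guest_pto in self.tables:              # hit: reuse as-is
            self.tables.move_to_end(guest_pto)
            return self.tables[guest_pto], True
        if len(self.tables) >= self.size:         # scavenge least-recent
            self.tables.popitem(last=False)
        self.tables[guest_pto] = {}               # start empty; fill on faults
        return self.tables[guest_pto], False
```

with `size=1` this degrades to the original single-shadow-table behavior (every switch is a miss), which is exactly the contrast the paragraph above draws.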

not published, but dual-address space introduced on the 3033 had a somewhat similar effect on the real hardware TLB. while dual-address introduction brought common segment relief (so you didn't totally run out of room in the address space for application execution) ... it frequently increased the number of concurrent distinct address spaces needed past the 3033 TLB limit ... in which case it needed to scavenge a cached address space and all the associated TLB entries (which resulted in a measurable performance decrease compared to non-dual-space operation on the same hardware). recent posting discussing some dual-address space issues:
https://www.garlic.com/~lynn/2004n.html#26 PCIs as chip-to-chip interconnect

one of the first production uses of virtual machines supporting virtual address spaces was the 370 hardware emulation project (it was also something of a stress case for the relatively new multi-level source management support). the 370 hardware architecture book was out but there was no actual hardware. cambridge was operating something of an open time-sharing service on their cp/67 system
https://www.garlic.com/~lynn/subtopic.html#545tech
https://www.garlic.com/~lynn/submain.html#timeshare

with some number of mit, bu, harvard, etc students as well as others.

the issue was that we couldn't just start modifying the production cp/67 system providing 370 virtual machines ... because there was some possibility that some non-employee would trip over the new capabilities.

so the base production update level was called the "l" system ... lots of standard operational and performance enhancements to a normal cp/67 system.

a new level of kernel source updates was started ... referred to as the "h" updates; they were the changes such that, when the option was specified, the kernel provided a virtual 370 machine (with virtual 370 relocate architecture) ... rather than a 360/67 virtual machine.

an "h" level kernel would be run in a virtual 360/67 machine on the production system (but isolated from all the other normal time-sharing users).

a third level of kernel source updates was then created, the "i" level ... which were the modifications to the kernel to operate on real 370 hardware ... rather than on real 360/67 hardware. the "i" level kernel was in regular operation a year before the first real 370 hardware with virtual memory support existed (in fact it was a 370/145 engineering machine in endicott that used a knife switch as an ipl-button ... and the "i" system kernel was booted to validate the real hardware).

in any case, the operation in cambridge then could be:
real 360/67 running an "l" level cp/67 kernel
  virtual 360/67 machine running an "h" level cp/67 kernel
    virtual 370 machine running an "i" level cp/67 kernel
      virtual 370 machine running cms

this was something of a joint project with engineers in endicott and one of the first uses of the "internal" network for distributed source development (link between cambridge and endicott)
https://www.garlic.com/~lynn/subnetwork.html#internalnet

later there was a "q" level of source updates (or maybe the project was called "q" and the update level was "g", somewhat lost in the fog of time) ... that included some 195 people. the project was to provide cp/67 support for a virtual 4-way 370 smp. this was slightly related to the later effort to add dual i-stream hardware to an otherwise standard 195 (something akin to the current multi-threading hardware stuff). for most intents the kernel programming was as if it was a normal 2-way smp ... but it was a standard 195 pipeline with one-bit flags tagging which i-stream an instruction belonged to.

a few random past l, h, & i kernel posts:
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2001k.html#29 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002j.html#0 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2003g.html#14 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#74 DASD Architecture of the future
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#45 A quote from Crypto-Gram

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 05 Nov 2004 16:03:19 -0700
Jon Forrest writes:
This is slightly off-topic but the thing I remember disliking about CMS (and other IBM OSs) is that you couldn't write a program that prompted for the name of a file, and then opened the file. Instead, you had to predefine a handle-like name outside of the program, and refer to the handle when opening the file. This is basically what the redoubted JCL 'DD' card also used to do.

that was the emulated os/360 services semantics (from an earlier comment that cms had a 64kbyte os/360 services simulator subset that did almost as much as the MVS 8mbyte os/360 services simulator)

it was trivial to do using cms filesystem semantics

also if you knew the os/360 services magic words ... you could fake it out ... that was how the various compilers and assemblers that were brought over from os/360 worked ... they had some interface routine glue that let you specify the filename to the command ... and then inside the cms glue routine it made the magic os/360 services incantations.
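the DD-style indirection described above can be sketched roughly like this (hypothetical names and table, nothing from the actual os/360 or cms interfaces) ... the program only ever opens a fixed handle, and something outside the program binds the handle to a real file:

```python
# sketch of DD-style indirection: the program opens a fixed handle
# ("SYSIN"), and an externally supplied table (standing in for JCL DD
# cards or a CMS FILEDEF) maps that handle to an actual file.
# hypothetical names, not the real os/360 interfaces.
dd_table = {}          # filled in "outside" the program, like JCL

def filedef(ddname, filename):
    """like the CMS FILEDEF command: bind a DD name to a real file."""
    dd_table[ddname] = filename

def dd_open(ddname, mode="r"):
    """the program side: open by handle, never by filename."""
    return open(dd_table[ddname], mode)

# "outside the program" -- the JCL / FILEDEF step
filedef("SYSIN", "input.txt")

with open("input.txt", "w") as f:      # set up some input to read
    f.write("hello\n")

# the program itself only ever sees the handle
print(dd_open("SYSIN").read().strip())   # -> hello
```

the cms glue routines mentioned above effectively did the filedef step for you from the command-line filename.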

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 06 Nov 2004 08:25:15 -0700
Gene Wirchenko writes:
I remember docs that the standard MTS system ran under UMMPS (University of Michigan Multi-Program Supervisor). I also heard it could run under JESS/2 (not sure of spelling). I never heard of it running under MVS.

the folklore is that michigan adopted LLMPS (lincoln labs multi programming supervisor) for UMMPS.

random past posts mentioning llmps ... (i have hardcopy of the old share contribution library document for llmps):
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001e.html#13 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2001h.html#24 "Hollerith" card code to EBCDIC conversion
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001k.html#27 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#9 mainframe question
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2001n.html#89 TSS/360
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002d.html#49 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002f.html#47 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002f.html#54 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002m.html#28 simple architecture machine instruction set
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
https://www.garlic.com/~lynn/2003i.html#8 A Dark Day
https://www.garlic.com/~lynn/2003m.html#32 SR 15,15 was: IEFBR14 Problems
https://www.garlic.com/~lynn/2004b.html#31 determining memory size
https://www.garlic.com/~lynn/2004d.html#31 someone looking to donate IBM magazines and stuff
https://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004l.html#16 Xah Lee's Unixism

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 06 Nov 2004 08:44:52 -0700
"Stephen Fuld" writes:
But there were other, non-IBM implementations that allowed that. The "batch" orientation of OS/360 required that all resources needed by the job be identified in the JCL so the job wouldn't "stall" waiting for resources. This was needed since the program, once loaded, used real addresses and essentially couldn't be swapped out (unless it was swapped back in to the same physical memory space). Thus "dynamic" allocation of files was a no-no. The result was that things like compilers, that needed scratch files, couldn't allocate them internally and you had to have them expressly allocated externally in the catalogued procedures (essentially JCL macros). It was definitely icky! :-)

somewhat the issue in the batch paradigm is that the resources are known before the job starts ... because nominally the human responsible for the batch program is not present. the other characteristic was that extents tended to be pre-allocated and pre-reserved before the program started (to somewhat limit the issue that the program gets part way in and can't continue ... and again the person responsible for the batch program was not present).

the interactive paradigms tend to do much more allocation on the fly ... frequently even a record at a time as needed. they tend to have filesystems with a per-record allocation bit-map, and files have indexes listing all the records for the file, which are updated as additional records are allocated. cms started this way with the original filesystem built starting in '65.
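a rough sketch of that kind of record-at-a-time bookkeeping (hypothetical structure, not the actual cms filesystem layout):

```python
# sketch of the cms-style record-at-a-time filesystem described above:
# a global allocation bit-map (one entry per disk record) plus a
# per-file index listing the records that belong to the file.
# hypothetical structure, just to illustrate the bookkeeping.
class RecordFS:
    def __init__(self, nrecords):
        self.bitmap = [False] * nrecords   # True = record in use
        self.files = {}                    # name -> list of record numbers

    def append_record(self, name):
        # allocate the first free record -- note it may land anywhere
        # on disk, which is why straight record-at-a-time allocation
        # tends to scatter a file
        r = self.bitmap.index(False)
        self.bitmap[r] = True
        self.files.setdefault(name, []).append(r)
        return r

fs = RecordFS(16)
for _ in range(3):
    fs.append_record("profile exec")
fs.append_record("other file")
print(fs.files["profile exec"])    # -> [0, 1, 2]
print(fs.files["other file"])      # -> [3]
```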

i did a morphing of this filesystem to a page map paradigm where the records were pages ... and did some infrastructure smarts to promote contiguous record allocation ... rather than the straight-forward record at a time ... which could be quite scattered. previous page map refs:
https://www.garlic.com/~lynn/submain.html#mmap

cms also imported some number of applications from the os/360 world (as did MTS) where the code did open/closes on DCBs ... and the traditional DCBs were expecting the DD-oriented file specification. The basics for this were provided by a bit of os/360 simulation code ... that was about 64kbytes in size. This included a filedef command that would do a straight forward simulation of the DD specification. However, for some number of the imported os/360 applications (commonly used compilers and assemblers) there was sometimes magic glue code written that handled the mapping of cms filesystem conventions to os/360 filesystem conventions at a much lower and more granular level ... which tended to hide much more of the os/360 gorp in a cms interactive oriented environment.

the os/360/batch paradigm has evolved over the years ... providing more and more sophisticated facilities in support of running applications and programs where the default assumption is that the responsible party for the application is not present (as opposed to the interactive paradigm which assumes that the person responsible for the execution of the application, is in fact present).

the batch paradigm has somewhat found a resurgence ... even in the online and internet environment ... where there are deployed server applications that may have hundreds of thousands of clients ... but there is some expectation that the server has much more the characteristics of the batch paradigm ... the person responsible for the server operation isn't necessarily present when it is running.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 06 Nov 2004 09:14:16 -0700
the other characteristic of the os/360 extent (rather than record) allocation was the trade-off between i/o capacity and real storage requirements in the filesystem design around multi-track search.

basically os/360 had a very simple, one-level index filesystem orientation. the file information, with the pointer to the start of the file, was located in the vtoc (which contained a list of all the files). to open/find a file, the system would do a multi-track search on the vtoc to find (and read) the record with the specific file information. this traded off the seemingly significant real-storage requirements needed for record-level bit-maps of various kinds against a simple disk-based structure that could be searched with i/o commands.
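the open/find can be sketched as a plain linear search of disk-resident records (hypothetical names; the real multi-track search was done by the device itself, driven by ccws, rather than by software):

```python
# sketch of the vtoc open/find described above: the "index" is just
# records on disk, and finding a file is a linear (multi-track) search
# comparing one key per record -- no memory-resident structure at all.
# hypothetical dataset names and track numbers.
vtoc = [("PAYROLL.DATA", 100), ("SYS1.LINKLIB", 2500), ("MY.SOURCE", 4000)]

def open_file(name):
    # the channel program's search-equal: compare each key in turn
    for key, start_track in vtoc:
        if key == name:
            return start_track        # pointer to the start of the file
    raise FileNotFoundError(name)

print(open_file("MY.SOURCE"))         # -> 4000
```

the point of the trade-off: the search consumes i/o capacity (the device grinds over every record) but costs essentially no real storage.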

"library" files could have members ... and for these kind of files, called partition directory datasets (or PDS) ... the PDS directory was simple and could also be searched&read with i/o command (rather than having index/directory storage that could tie-up real memory). members in directories were contiguous allocation ... and PDS were contiguous ... and deleting/replacing members just left gaps in the PDS database. At some point, PDS datasets had to be "compressed" to recover the gaps.

as i've pointed out previously sometime in the 70s, the io/memory trade-off for disk-based structures had shifted ... and it became more efficient to have memory based index structures and attempt to conserve/optimize i/o infrastructure.

minor ckd threads
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/2004n.html#51 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#0 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#6 CKD Disks?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Demo: Things in Hierarchies (w/o RM/SQL)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Demo: Things in Hierarchies (w/o RM/SQL)
Newsgroups: comp.databases,comp.databases.object,comp.databases.theory
Date: Sat, 06 Nov 2004 11:08:41 -0700
"Laconic2" writes:
An airline reservation system covering about 45 airlines, about 100,000 flights per day, about 10,000,000 passengers, credit cards, airplanes, seating capacity in several classes, and about 10,000 fare changes a day. 200 simultaneous users on the web.

say 100 passengers/flight? ... then it is 10m PNR records ... so you might say that there are 5m new PNR records created each day (a PNR covers a round-trip); 10m PNR records are updated as flights are taken. The 5m new PNR records are typically created at some time before a future flight and will exist for 90 days after the flight is taken, and then 5m old PNR records are deleted each day as they expire. Say an avg. PNR record is created ten days before the flight; that gives a PNR record a lifetime of something like 100 days ... or a total PNR database possibly approaching 500m records (multiply by 8 if you want to keep them for two years instead of 90-some days).
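a back-of-envelope check of the arithmetic above (all inputs are the estimates from the text, not real airline numbers):

```python
# back-of-envelope PNR sizing, using the estimates from the text
passengers_per_day = 10_000_000
new_pnrs_per_day = passengers_per_day // 2   # one PNR covers a round trip
created_before_flight = 10                    # days, on average
retained_after_flight = 90                    # days
lifetime = created_before_flight + retained_after_flight   # ~100 days

total_pnrs = new_pnrs_per_day * lifetime
print(total_pnrs)        # -> 500000000, i.e. ~500m records
print(total_pnrs * 8)    # retained two years instead of ~90 days
```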

there are actually (at least) four different databases (for res)

PNR database
flight segment/seat database
routes database (i once got to rewrite routes from scratch)
fares database

random past posts re: working on routes and/or amadeus
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#153 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000.html#61 64 bit X86 ugliness (Re: Williamette trace cache (Re: First view of Willamette))
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001d.html#74 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2001k.html#26 microsoft going poof [was: HP Compaq merger, here we go again.]
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002i.html#38 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#40 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
https://www.garlic.com/~lynn/2002l.html#39 Moore law
https://www.garlic.com/~lynn/2003b.html#12 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/2003c.html#52 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003o.html#17 Rationale for Supercomputers
https://www.garlic.com/~lynn/2003o.html#38 When nerds were nerds
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004m.html#27 Shipwrecks

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Demo: Things in Hierarchies (w/o RM/SQL)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Demo: Things in Hierarchies (w/o RM/SQL)
Newsgroups: comp.databases,comp.databases.object,comp.databases.theory
Date: Sat, 06 Nov 2004 14:55:30 -0700
a few years ago, oag for all commercial flight segments in the world had a little over 4000 airports (with commercial scheduled flights) and about half a million "flight segments" (i.e. take-offs/landings). the number of flight segments didn't exactly correspond to flights/day ... since some flight segments didn't fly every day. there was also an issue that each individual flight segment was a flight ... plus combinations of flight segments were also flights (i.e. say a flight from the west coast to the east coast with two stops could represent 3+2+1 different "flights"). the longest such flight that i found had 15 flight segments (it wasn't in the US) ... taking off first thing in the morning and eventually arriving back at the same airport that night ... after making the rounds of a lot of intervening airports.
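the combinatorics above reduce to a little formula ... a multi-stop flight with n segments offers every contiguous run of segments as a bookable "flight", i.e. n+(n-1)+...+1 = n*(n+1)/2:

```python
# bookable "flights" offered by one multi-stop flight with n segments:
# every contiguous run of segments counts, so n + (n-1) + ... + 1
def bookable_flights(n_segments):
    return n_segments * (n_segments + 1) // 2

print(bookable_flights(3))    # two stops, as in the text -> 6
print(bookable_flights(15))   # the 15-segment round-robin -> 120
```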

there is also some practice of the same flight segment having multiple different flight numbers. the first instance of this (that i know of) was in the very early 70s ... the first twa flight out of sjc in the morning ... flew both to seatac and kennedy. it turns out that the people going to kennedy had a change of equipment at sfo (not a connection).

this is an airline "gimmick" ... traditionally the non-stops and directs are listed before the flights with connections. if you had a dual flight number ... with change of equipment ... the flights would show up in the first "direct" section ... not in the following "connections" section.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CKD Disks?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CKD Disks?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 06 Nov 2004 15:22:38 -0700
tedmacneil@bell.blackberry.net (Ted MacNEIL) writes:
.. IBM was forced into the technological leap to thin film heads, but they just weren't ready in time. ..

I learned a valuable lesson at an HDS presentation when the arguments were bounding about thin heads and sputtering of 'particulate matter' on media.

There were as many management types as techies at this presentation and the presenter started with: 'Let me again talk about the benefits of metal heads vs thin film...'. Before he could continue, one of the management types yelled: 'Who cares?' She wanted to know the business value, rather than the technical value.

As a performance/capacity analyst, I took this to heart. I realized that the people holding the purse strings are not going to pay for technology for technology's sake. This is a lesson many techies still have to learn.

I have spent the last 20 years working on the skill that allows me to explain technology in business terms. No matter how much one may despise management, if you can't convince them why the money has to be spent, it won't be spent.


recent post discussing some of this period
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?

other posts about the period
https://www.garlic.com/~lynn/subtopic.html#disk

and mention of the air bearing simulations doing the calculations for (physical) design of the floating/thin-film heads.

so the heads fly closer ... enabling significant increases in density and transfer rate ... resulting in the manufacturing cost per bit being reduced dramatically ... i try and stay out of pricing issues ... although i got dragged into it for the resource manager ... being the guinea pig for the first priced kernel software
https://www.garlic.com/~lynn/subtopic.html#fairshare

the other part of the thin-film head design was a chip that had both digital and analog circuits. a lot of this work was done over in the los gatos vlsi lab (bldg. 29, which has since been torn down; it was originally built in the 60s as advanced systems development division hdqtrs). the group also did jib-prime ... which was the microprocessor in the 3880 controller.

in any case, they built the high performance logic simulation engine, LSM (originally named the los gatos simulation machine, but renamed logic simulation machine for external publication) ... and it included support for timing. it ran chip logic simulation about 50,000 times faster than could be done with conventional programming on a 3033. the subsequent logic simulators may have improved on that thruput ... but they dropped support for timing ... effectively assuming synchronous-clock chip designs. One of the things that the LSM gave them with timing support was the ability to run a chip design that had non-synchronous timing and/or a chip design that had a combination of digital (potentially synchronous or asynchronous) circuits and analog circuits (like the design for the thin-film heads).

random past references to lsm & other logic simulators:
https://www.garlic.com/~lynn/2002d.html#3 Chip Emulators - was How does a chip get designed?
https://www.garlic.com/~lynn/2002g.html#55 Multics hardware (was Re: "Soul of a New Machine" Computer?)
https://www.garlic.com/~lynn/2002g.html#77 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#82 Future architecture
https://www.garlic.com/~lynn/2002j.html#26 LSM, YSE, & EVE
https://www.garlic.com/~lynn/2003.html#31 asynchronous CPUs
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#14 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2004j.html#16 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

pop secure connection

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: pop secure connection
Newsgroups: netscape.public.mozilla.general
Date: Sun, 07 Nov 2004 18:41:44 -0700
i've been trying to get mozilla mail &/or thunderbird to read pop mail on port 110 using secure connection.

i've got all unnecessary incoming/outgoing ports blocked using iptables.

evolution reads mail fine using SSL on port 110

i set mozilla mail (& thunderbird) to ssl on port 110 and try to read mail ... and it just hangs and eventually times out.

i can see it has established a connection to port 110 ... but iptables is also showing it is blocking attempts to port 995.

this is with older thunderbirds, older mozillas, brand new thunderbird, brand new mozilla, and brand new mozilla nightly build.

mozilla & thunderbird both handle outgoing/sent mail using SSL just fine.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 08 Nov 2004 12:58:49 -0700
"Stephen Fuld" writes:
OK, but the organization of the file system is an orthogonal issue to whether dynamic allocations are allowed.

it was much easier to mask os/360 applications (compilers, assemblers, etc) with batch-oriented filesystem expectations running on cms (with incremental record allocation, a total-space record-based allocation map, a per-file allocated record map, etc) than the other way around.

the two scenarios mentioned for the paradigm were

1) original batch orientation (rather than interactive, which could assume the responsible human was present)

2) original io/real-storage tradeoff ... where real storage was conserved by having a relatively trivial filesystem structure that was totally resident on disk and could be searched/used by relatively trivial ckd multi-track search commands.

some cross-over from recent ckd thread
https://www.garlic.com/~lynn/2004n.html#51 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#0 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#6 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#21 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?

low-end target for os/360 was (at least) the 32kbyte to 64kbyte 360/30; I did assemblies on 360/30 with something like os/360 pcp release 5 or 6(?).

later releases added some additional incremental, dynamic capability ... but the os/360 genre was never really targeted at the interactive computing environment ... and so didn't have a lot of (market?) motivation to support features for the interactive environment ... aka could they have done better? very probably; did they think that better interactive facilities would provide sufficient ROI to justify it ... i don't think they believed so.

Furthermore they continued to be saddled with the original filesystem design with its io/real-storage trade-off ... aka effectively all disk storage allocation was disk resident, no memory based structures, simple enuf structure that it could be managed with ckd disk i/o commands. Dynamic allocation in the abstract is possible ... but the amount of work to perform any allocation at all (static or dynamic) is extremely heavy weight ... and requires a lot of disk i/o activity.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

pop secure connection

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pop secure connection
Newsgroups: netscape.public.mozilla.general
Date: Mon, 08 Nov 2004 13:07:57 -0700
Jay Garcia writes:
The port in the account settings MUST be set to 995 and not 110. That's the way it works here.

configuration says that 995 is the ssl (secure connection) default ... but allows it to be changed.

my isp supports ssl on port 110 (not 995).

evolution does it fine. even windows/laptop w/eudora does it fine. eudora even has a window that shows the last certificate & session ... which also shows it at port 110.

All versions of mozilla/thunderbird that i've tried with ssl/110 have failed to work (even when overriding the default 995 in the configuration menu).

one issue is that i have locked down all ports that are not absolutely necessary (both incoming and outgoing) with iptables.

when i try mozilla (&/or thunderbird) with ssl/110 ... i can see that a session has been established on port 110 ... but i also see in the iptables log that there are attempts at port 995 being discarded.

it almost appears that both mozilla and thunderbird, even when 110 is specified for secure connection (ssl) operation, have vestiges of code that still try port 995 (even tho there is at least some code that honors the 110 configuration specification).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 08 Nov 2004 15:36:12 -0700
now (vm, cms, apl, etc based) US HONE
https://www.garlic.com/~lynn/subtopic.html#hone

in the late '70s put together something different for large-scale cluster operation (eight large mainframe SMPs all sharing a large disk farm and the workload for all the branch/field service, sales and marketing people in the US). it was the largest single-system-image cluster that i know of at the time (had something approaching 40,000 defined userids)

they used a different on-disk structure ... not for allocation ... but for locking disk space use across all processors in the complex. there was a lock/in-use map on each disk (read-only, write-exclusive, etc, by processor). a processor that wanted to enable access to an area on disk would read the disk's lock/in-use map ... check to see if the area was available, update the lock-type for that area, and use a ckd ccw sequence that emulated smp compare&swap semantics (aka search equal ... and then rewrite/update only if the search still matched).
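a rough in-memory stand-in for that search-equal/rewrite sequence (hypothetical; the real thing was a ccw chain run against the on-disk lock map):

```python
# sketch of the search-equal/rewrite sequence described above: the
# update only succeeds if the lock map still matches what the
# processor originally read, emulating compare&swap against the disk.
# hypothetical in-memory stand-in for the on-disk lock/in-use map.
def try_lock(disk_map, expected, updated):
    """rewrite only if the map is unchanged (the 'search equal' step)."""
    if disk_map[0] == expected:      # search equal against current map
        disk_map[0] = updated        # the conditional rewrite
        return True
    return False                     # someone else got there first: retry

lock_map = [{"area1": "free"}]
snapshot = dict(lock_map[0])                      # read the map
want = dict(snapshot, area1="write-exclusive")    # proposed update
print(try_lock(lock_map, snapshot, want))         # -> True
# a second processor using the same stale snapshot now fails
print(try_lock(lock_map, snapshot, dict(snapshot, area1="read-only")))  # -> False
```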

misc. ckd dasd posts
https://www.garlic.com/~lynn/submain.html#dasd

it used standard disk controllers for the operation ... not the "airline control program" (acp, later renamed tpf) disk-locking RPQ. the acp controller rpq made a little memory in the disk controller available and allowed symbolic locks that were used for various kinds of cluster (aka loosely-coupled) disk access serialization & coordination ... w/o resorting to whole-device reserve/release locking, or a disk-based structure (like hone used). I know the rpq was available in the early 70s on the 3830 disk controller. 3330/3830 page:
http://www-1.ibm.com/ibm/history/exhibits/storage/storage_3330.html

total aside ... possible typo in the above:

Features • 30 milliseconds was average access time; minimum was 55 milliseconds.

========

and a ACP/TPF history page
http://www.blackbeard.com/tpf/tpfhist.htm

some drift from the above ... one of the airline res efforts in the 80s was amadeus and my wife served for a brief time as chief architect for amadeus

random posts mentioning amadeus
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
https://www.garlic.com/~lynn/2003d.html#67 unix
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
https://www.garlic.com/~lynn/2004o.html#24 Demo: Things in Hierarchies (w/o RM/SQL)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

z/OS UNIX

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: z/OS UNIX
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 08 Nov 2004 16:06:03 -0700
WGShirey@ibm-main.lst (Greg Shirey) writes:
The I originally stood for Iceberg, Storagetek's name for the RVA before they licensed it to IBM. I suppose IBM could have renamed the product rather than just substitute "IBM" for "Iceberg", but they were just working with what they inherited.

in the middle of all this was c-star (or was it seastar? must have been seastar, since the software was called seahorse) ... a brand-new controller being worked on by adstar, started sometime around 91.

old reference that has buried references to both iceberg and seastar (from 2/96):
http://www.informationweek.com/565/65mtrob.htm
gone 404
https://web.archive.org/web/20080608164743/http://www.informationweek.com/565/65mtrob.htm
and another here
http://www.stkhi.com/nearline.htm
gone 404
https://web.archive.org/web/20060328034324/http://www.stkhi.com/nearline.htm

so if it was seastar ... might there be some topic drift to seascape and then netscape. ... so for some real topic drift and trivia question ... when mosaic was told that they couldn't use the name and had to change their corporate name ... who owned the term netscape and donated it to mosaic?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

NEC drives

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: NEC drives
Newsgroups: alt.folklore.computers
Date: Tue, 09 Nov 2004 14:01:17 -0700
Morten Reistad writes:
But I admit freely to have used e-mail regularly since November 17th, 1978 (with a small hiatus in 1984) and emacs since January 17th 1979. Being an old fart is also a state of mind. (Lame excuse).

i admit freely to trying to read email back in the states while on a business trip to paris in the early 70s ...
https://www.garlic.com/~lynn/subnetwork.html#internalnet

around the time EMEA hdqtrs moved from the states to Paris (new bldgs., La Defense, on the outskirts of paris) and I got to help clone a copy of HONE for them
https://www.garlic.com/~lynn/subtopic.html#hone

i lost some archives from the early & mid '70s when there was a datacenter glitch in the mid-80s that managed to clobber all three tape copies.

does having a dial-up home terminal since march 1970 count?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

What system Release do you use... OS390? z/os? I'm a Vendor S

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: What system Release do you use... OS390? z/os? I'm a Vendor S
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 09 Nov 2004 21:23:20 -0700
rhalpern@ibm-main.lst (Bob Halpern) writes:
Do Interpretive Loop (DIL) instruction with the emulator hardware. The 360/65 could do the 704/709/7040/7090/7044/7094 series of machines. JES3 was originally ASP (Attached Support Processor) to batch up 70xx programs to run on a 360/65. The first support processor was a 360/40. Later, a 65 could drive a bunch of 360/50s.

in the early 70s (before virtual memory was announced for 370s), an IBM SE on the boeing account did a hack on the cp/67 kernel at the (ibm) seattle datacenter to run on a 360/50, using DIL to provide base&bound contiguous storage address relocation (as opposed to virtual memory and paging).

basically a 30+ year old software version of LPARs.
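base&bound relocation is simple enough to sketch in a few lines ... every guest address is offset by a base and checked against a bound (hypothetical values, just to illustrate the mechanism):

```python
# sketch of base&bound contiguous-storage relocation (as opposed to
# paging): a guest's address space is one contiguous chunk of real
# storage, so translation is an add plus a limit check.
# hypothetical base/bound values.
def translate(addr, base, bound):
    if addr >= bound:
        raise MemoryError("guest address exceeds bound")
    return base + addr

# guest at real 0x40000, allowed 0x20000 bytes
print(hex(translate(0x100, base=0x40000, bound=0x20000)))   # -> 0x40100
```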

totally unrelated ... my wife was a catcher in the gburg JES group when ASP was transferred to gburg for JES3 ... one of her tasks was reading the ASP listings and writing JES3 documents.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 10 Nov 2004 06:51:13 -0700
Jan Vorbrüggen <jvorbrueggen-not@mediasec.de> writes:
No, those in the know understood, and the others don't care 8-|.

Case in point: When VMS went SMP, the architects defined a hierarchy of spinlocks to achieve deadlock avoidance. The implementors then created three versions of the spinlock code, the version to be loaded being determined at boot time. One version was the NOP version for uniprocessors; one version implemented the spinlocks _and_ checked that the acquisition hierarchy was respected, with a bugcheck (aka suicide crash) happening if not - this was the debugging and diagnosis aid for the SMP code; and a third version that left out the checking for performance reasons.


charlie first did smp support for cp/67 after Lincoln Labs discontinued one of its 360/67 processors and shipped it back to the plant (and cambridge called the moving company and had it redirected to 545tech sq ... and cambridge got to upgrade their machine to a two-processor smp). it used a lot of spin-locks with the test&set instruction available on the 360/67 smp. this support never shipped to customers, but out of the work, charlie invented the compare&swap instruction which showed up in 370s.
https://www.garlic.com/~lynn/subtopic.html#smp

later, i got to do a microcoded smp project called VAMPS
https://www.garlic.com/~lynn/submain.html#bounce

where i migrated lots of the vm/370 kernel to microcode ... sort of an enhanced version of the various vm microcode performance assists (i was working on ecps for the 138/148 about the same time)
https://www.garlic.com/~lynn/submain.html#mcode

what remained of the vm kernel was essentially single-threaded but it would use compare&swap semantics to place work on the dispatch queue. the microcode dispatcher pulled stuff off the dispatch queue for different processors. if a processor had something for the kernel and no other processor was currently executing in the kernel (global kernel lock), it would enter the kernel. however, if another processor was executing in the kernel ... there was a "kernel" interrupt queued against the kernel ... and the microcoded dispatcher would go off and look for other work (bounce lock, rather than spinning on a global kernel lock that was common at the time). when the microcode smp project was killed, there was an activity to adapt the design to a software only implementation. The equivalent kernel software (that had been migrated to microcode) was modified to support fine-grain locking and super lightweight thread queuing mechanism (as opposed to the hardware kernel interrupt).

this was shipped to customers as standard vm/370 product ... where the customers now had two different kernels ... one with the inline fine-grain locking and one w/o. however, all source was shipped and it was common for a large number of the customers to completely rebuild from source. the fine-grain locking was a combination of inline logic with conditional assembly and "lock" macro which also had conditional assembly statements. part of the issue was that there was actual inline code in the dispatcher and other places for smp queue/dequeue operations (instead of the possibly more straight-forward kernel spin-lock logic).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Sling and the Stone & Certain to Win

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: The Sling and the Stone & Certain to Win
Newsgroups: alt.folklore.computers
Date: Wed, 10 Nov 2004 07:04:19 -0700
some Boyd and OODA-loop drift ....

The Sling And The Stone ... off an infosec mailing list yesterday
http://www.washingtondispatch.com/article_10508.shtml

and related article, 4th Generation Warfare & the Changing Face of War
http://d-n-i.net/fcs/comments/c528.htm

Certain to Win, The Strategy of John Boyd, Applied to Business
http://d-n-i.net/richards/ctw.htm

Advance Reviews of Certain to Win
http://d-n-i.net/richards/advance_reviews.htm

and of course, lots of my other Boyd references
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Scanning old manuals

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Scanning old manuals
Newsgroups: alt.folklore.computers
Date: Wed, 10 Nov 2004 15:19:59 -0700
"Charlie Gibbs" writes:
Like many of us, I'm sure, I have a collection of old computer manuals that's taking up a lot of shelf space. In the name of home renovations (and marital bliss) I would be willing to let go of some of the newer (i.e. later than 1980) ones if I could scan them. Like most such manuals, they're 95% text (in a couple of typefaces and sizes) and line drawings. I presume the ideal solution is to use some sort of scanning/OCR software to turn them into PDF files. Is there readily available software (preferably for Linux) to do this? Have any of you embarked on such a project, and do you have any words of wisdom to share?

i was just looking at asking the same question ... all sorts of odds and ends stuff from the 60s and 70s.

however we just unearthed a bunch of old handwritten letters from the 40s ... that i would also be interested in scanning(?).

when i looked at some of this stuff nearly 10 years ago ... it all seemed to be scaffolded off fax scanning, tiff format and ocr of tiff/fax softcopy (current scanners appear to have much higher resolution as well as color capability compared to the older fax oriented stuff).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 10 Nov 2004 15:53:08 -0700
Anne & Lynn Wheeler writes:
this was shipped to customers as standard vm/370 product ... where the customers now had two different kernels ... one with the inline fine-grain locking and one w/o. however, all source was shipped and it was common for a large number of the customers to completely rebuild from source. the fine-grain locking was a combination of inline logic with conditional assembly and "lock" macro which also had conditional assembly statements. part of the issue was that there was actual inline code in the dispatcher and other places for smp queue/dequeue operations (instead of the possibly more straight-forward kernel spin-lock logic).

for some source update drift ... there was some task (which has somehow been lost in the fog of time) which required going thru all of the kernel source, changing something(?) to something else(?), and adding something(?). people were complaining that it was going to take possibly person-weeks of time.

in a couple hrs we had written a RED edit exec (edimac; RED was one of the fullscreen editors that somewhat predated xedit ... but never got past the internal-use-only stage ... it had its own exec language with a lot of the characteristics of rexx, but red-editor specific, while rexx can run in either the xedit environment or most any other environment) and some glue execs.

the cms source update process used updates (as opposed to the "down dates" that you find with things like RCS); each change was a separate cms file ... and the source update procedure would sequentially apply all applicable updates to the base source file and then re-assemble the result (which was viewed as a temporary file).

the RED edit exec made whatever change was necessary and generated a unique source update file for the change. the process then reran the build process for all kernel modules, applying all the source updates (including the brand new generated file), re-assembled each module and rebuilt the executable kernel.

instead of several person-weeks ... it was a couple hrs to write all the necessary procedural code ... and then turn it loose; the total rebuild process (including finding and making the necessary source code changes, generating the new source update files, re-assembling the resulting temporary files, etc) took something like 22 minutes elapsed time (i have no idea why i remember how long it took to run ... and don't remember what the change was).

at one time there was an analysis ... that the total number of lines of source code modifications (to the kernel) on the waterloo/share tape was greater than the total lines of source code in the base product.

random past descriptions of the cms source update process:
https://www.garlic.com/~lynn/2000b.html#80 write rings
https://www.garlic.com/~lynn/2001e.html#57 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2002g.html#67 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#67 history of CMS
https://www.garlic.com/~lynn/2002n.html#39 CMS update
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003.html#58 Card Columns
https://www.garlic.com/~lynn/2003e.html#38 editors/termcap
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003k.html#47 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003l.html#17 how long does (or did) it take to boot a timesharing system?
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004d.html#69 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2004g.html#44 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004m.html#30 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#21 Integer types for 128-bit addressing

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

pop secure connection

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pop secure connection
Newsgroups: netscape.public.mozilla.general
Date: Wed, 10 Nov 2004 18:48:04 -0700
Nelson B writes:
so, I conclude that the server known as mail.garlic.com and also as pop.garlic.com does NOT support StartTLS.

evolution does it fine. even windows/laptop w/eudora does it fine. eudora even has a window that shows the last certificate & session ... which also shows it at port 110.

I gather that either (a) these clients were not actually using StartTLS or (b) you were using a different server.

Hope this helps clear it up. I know this was pedantic for you, but I wrote it for the benefit of others who may read it also.


i've kept garlic.com even after a couple moves ... so many people know the email address as well as the urls for things like the ietf rfc index (the rfc-editor's page lists this as one of the places for looking up rfcs):
https://www.garlic.com/~lynn/rfcietff.htm

the glossaries
https://www.garlic.com/~lynn/index.html#glosnotes

etc. and so i use .forward, etc.

as i stated originally ... the isp (that i currently use) ... works using SSL to port 110 ... and that is how both eudora and evolution are working.

i have iptables on the (internet) interface machine with all ports locked down ... other than what is absolutely necessary ... aka outgoing 25 & 110 are enabled; outgoing 465 and 995 are not. both eudora and evolution are set up for SSL on port 110. eudora has a feature that shows the last session, port number and server certificate used for the ssl session. it shows port 110 used with the server certificate for the ssl session.

even if i go outside the iptables boundary and try port 995 with this specific isp ... it doesn't work. it only works with port 110.

as i mentioned in the previous post, inside iptables boundary ... with outgoing port 110 enabled and outgoing port 995 disabled ... eudora and evolution both work with SSL specified using port 110 (and you can query eudora for the last pop session and it will show the ssl certificate sent by the server and the port used ... aka port 110).

also, as per previous posts,
https://www.garlic.com/~lynn/2004o.html#26 pop secure connection
https://www.garlic.com/~lynn/2004o.html#28 pop secure connection

inside iptables "boundary", using numerous different versions of mozilla and thunderbird ... i set things up for ssl pop on port 110 and they all hang and then eventually time-out w/o transferring any email. i can see that there is a session initiated for port 110 ... but i also see in the iptables log that mozilla/thunderbird while having initiated a session on port 110 are also trying to do something on port 995 ... which is being thrown away by the iptables rules.

again, both evolution (fc2 & evolution 1.4 and fc3 & evolution 2.0) and eudora (6.0) work with ssl on port 110 going to the internet thru the same iptables rules.

sending mail with ssl thru port 25 works for all ... evolution, eudora, mozilla, and thunderbird. pop receiving mail with ssl thru port 110 works for evolution and eudora but not with mozilla and thunderbird.

based on seeing port 110 session being active when trying to use mozilla and thunderbird ... but also seeing port 995 packets being discarded (by iptables), i suspect that there is some common code someplace that is ignoring the port 110 specification.

for some topic drift ... misc. other refs at garlic.com:

posts about domain name ssl certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

and a little about electronic commerce (from the early days of ssl, even before the group moved to mountain view and changed their name):
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

summary for rfc 2595
https://www.garlic.com/~lynn/rfcidx8.htm#2595

in the summary fields, clicking on the ".txt=nnn" field retrieves the actual rfc. clicking on the rfc number brings up the term classification display for that rfc. clicking on any term classification brings up a list of all RFCs that have been classified with that term. clicking on any of the updated, obsoleted, refs, refed by, etc RFC numbers switches to the summary for that RFC.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Facilities "owned" by MVS

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Facilities "owned" by MVS
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 11 Nov 2004 11:25:01 -0700
wdriscoll@ibm-main.lst (Wayne Driscoll) writes:
<Changing subject line to one used by Greg Price> Gil, While I agree with Greg that IBM does have a need to limit usage of certain instructions, to say that no-one outside of IBM OS development should use the instructions is insane. Without ISV's (or even sysprogs at customer sites) having the ability to use authorized and/or privileged operations, a lot of the advances in z/OS would never have occurred. For example, it was work done by a handful of people in the security arena (outside of IBM btw) that prodded IBM to develop and enhance RACF. Also, when DB2 was released, IBM provided few tools, outside of SPUFI and the utilities, to manage the product. Today IBM markets a large number of DB2 tools, but 1 - The original drive for DB2 tools was done by companies like BMC, Platinum, Candle, to name only a few and 2 - Many of the DB2 tools that IBM markets today are not written by IBM employees, but by ISV's with marketing agreements with IBM.

there used to be the joke that almost no ibm products were developed by ibm development groups ... for a long time, they originated mostly at customer and/or internal datacenters ... and then a development group was formed with responsibility to support and maintain them (i.e. some name inflation to call them development as opposed to support groups).

for instance there was technology transfer from sjr of system/r to endicott for sql/ds ... and which person in the meeting mentioned here claims primary responsibility for the technology transfer of sql/ds from endicott back to stl to become db2?
https://www.garlic.com/~lynn/95.html#13

note that stl and sjr were only about 10 miles apart but the technology needed to make a coast-to-coast round-trip to go from system/r to db2.

misc. system/r
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Facilities "owned" by MVS

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Facilities "owned" by MVS
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 11 Nov 2004 16:08:50 -0700
"Jim Mehl" writes:
Well that doesn't sound quite right. SQL/DS from Endicott was an almost pure code transfer. The main thing they did was translate the top level RDS (SQL) portion from PL/I to PLS. DB/2 at STL was pretty much a re-write led by Franco Putzolu, Jim Gray, Bob Yost, and others. The definitive web site is of course

that he claimed it? ... or that he did it?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Facilities "owned" by MVS

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Facilities "owned" by MVS
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 11 Nov 2004 16:58:30 -0700
... from long ago and far away (and even more drift) ....

Date: 03/29/80 10:24:48
From: wheeler
To: Gray

There is an 8 line MEMO ORACLE file which just went onto VMSHARE: Has anybody heard about a new data base system ORACLE. It was rumoured to be a version of IBMs famed System R.


... snip ... top of post, old email index

a quick search of the vmshare archive:
http://vm.marist.edu/~vmshare/browse.cgi?fn=ORACLE&ft=MEMO

.... a short extract from above:
Created on 03/25/80 17:13:20 by MUC

ORACLE Data Base System

Anyone have any info about a data base system called ORACLE? It was rumoured to be a version of IBMs famed System R.

Alan

*** CREATED 03/25/80 17:13:20 BY MUC ***

Appended on 07/24/80 03:05:17 by WMM

Oracle is available from Relational Systems Inc in Menlo Park for PDP systems (VAX-11/780 and others). It is supposed to be ready for IBM systems (only under VM/CMS) around DEC 80. About $100,000 for the whole system which is based on IBM's SEQUEL language.

*** APPENDED 07/24/80 03:05:17 BY WMM ***

Appended on 12/15/82 17:37:26 by FFA

Oracle Relational Database for VM I was wondering if anyone had actually installed the Oracle product. We were considering it and, based on the discussions at the most recent Oracle users group, we decided to postpone it until we could find users who would say that it was a relativly trouble-free, well running product. I would appreciate any comments about the product from people who either installed it or evaluated it. Our major concerns when we decided not to install were inter-user security and reliability. - Nick Simicich - FFA

*** APPENDED 12/15/82 17:37:26 BY FFA ***


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

osi bits

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: osi bits
Newsgroups: alt.folklore.computers
Date: Thu, 11 Nov 2004 17:22:12 -0700
ok, i just couldn't resist. the "Purchase" venue was the brand new Nestle bldg, which Nestle sold in the 80s (it was later resold to Mastercard ... and is now Mastercard hdqtrs).

a slight problem that i've frequently pointed out was that OSI didn't support heterogeneous networks (aka it lacked any sort of internetworking support). ISO compounded the problem with a directive that only networking protocols that met the OSI model could be considered for standardization by ISO and ISO-chartered standards bodies. Note this includes protocols that would talk to LAN MAC interfaces ... since the LAN MAC interface sits in the middle of OSI networking layer 3 (and therefore violates OSI).

minor related posts regarding Interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop

and misc posts referencing some of ISO/OSI ... especially with respect to introducing "high speed protocol" to iso/ansi ... unfortunately hsp was defined as directly interfacing to the lan/mac interface:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

.....
January 11, 1988

SUBJECT: OSI FORUM - Purchase, NY, March 15-18, 1988

Communications Products Division is sponsoring an OSI Forum in Purchase, NY on March 15-18, 1988.

As you know, OSI is the universally accepted model by which interconnectivity and interoperability for heterogeneous systems will be accomplished. A number of standards bodies, user groups, governments and special interest groups are focusing on OSI and vigorously advocating the adoption of the emerging standards. The pressure to support OSI is increasing and the stimuli influencing OSI acceptance by users and vendors are numerous.

The purpose of this forum is to exchange information related to OSI so that you will better understand the OSI market requirements, the Corporate strategy for OSI and the OSI product directions. At the same time, this forum will be used as a vehicle to collect additional requirements and product observations, identify potential problem areas, and assure synergism between the many OSI activities across organizations.


--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

how it works, the computer, 1971

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: how it works, the computer, 1971
Newsgroups: alt.folklore.computers
Date: Fri, 12 Nov 2004 06:44:44 -0700
How it works, the computer, 1971
http://davidguy.brinkster.net/computer/009.html

/.'ed late yesterday
http://slashdot.org/articles/04/11/12/131204.shtml?tid=133&tid=1

tab cards from the 60s and earlier .... of course, (at least) the area around MIT was starting to change in the 60s, with CTSS and then multics and cp67 in 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

with keyboards and various online & interactive computing.

lincoln labs got a version of cp67 in 1967 and cp67 was installed at the university i was at in jan. 1968.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 12 Nov 2004 07:37:33 -0700
"Del Cecchi" writes:
You have 600 Million bits spread over several tens of square inches of area. You have to hit each of them with a laser beam focused down in order to read it. How would you propose to do that without moving parts? Expensive optical scanners? This was a consumer product, designed to be inexpensive. And a fine job of engineering it was.

in the mid-80s i was railing about the high cost of computer-specific gear ... especially telecom ... and was also doing some stuff with a company (that at the time was called cyclotomics) on trying to apply some reed-solomon ecc to some more conventional computer telecom. part of the issue was the quantities weren't particularly large and so a lot of the costs were heavily front-end loaded.

after some business trip to japan ... i came back with a statement that i could get better technology out of a $300 cdrom player than some $20k (maybe only a little exaggeration) fiber-optic computer telecom gear ... and i would have a couple fine servo-motors left over.

random past mentions of cyclotomics
https://www.garlic.com/~lynn/2001.html#1 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2002p.html#53 Free Desktop Cyber emulation on PC before Christmas
https://www.garlic.com/~lynn/2003e.html#27 shirts
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: 360 longevity, was RISCs too close to hardware?
Date: Fri, 12 Nov 2004 16:05:49 -0800
Anne & Lynn Wheeler wrote:
after some business trip to japan ... i came back with a statement that i could get better technology out of a $300 cdrom player than some $20k (maybe only a little exaggeration) fiber-optic computer telecom gear ... and i would have a couple fine servo-motors left over.

recent, slightly related post regarding one of those business trips to japan in the mid-80s time-frame
https://www.garlic.com/~lynn/2004g.html#12 network history

and yes, the communication products division referenced in the above post is the same one referenced in this post from thurs:
https://www.garlic.com/~lynn/2004o.html#41 osi bits

another contrast ... the initial mainframe tcp/ip product got about 43kbytes/sec while consuming nearly a full (100 percent) 3090 engine. i added rfc1044 support to the base product and in tuning tests at cray research between a cray and a 4341-clone was getting 1mbyte/sec sustained (nearly 25 times more thruput, the hardware limit of the 4341-attachment box) using a very modest amount of 4341 engine (and they actually shipped the code not too long after the rfc was published). random past 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

and from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm

the rfc1044 summary entry
https://www.garlic.com/~lynn/rfcidx3.htm#1044

as always, clicking on the ".txt=nnn" field retrieves the actual rfc

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Integer types for 128-bit addressing
Date: Sat, 13 Nov 2004 13:35:55 -0800
Morten Reistad wrote:
The term "Zeitgeist" (of which I find no good English term; it refers to the general set of ideas at a time) comes to mind.

The things that fascinate groups of like-minded people have been remarkably similar throughout the world at any given time the last 500 years or so.

Some of them need not get through to the world at large to have an impact. The telephone took 20+ years to make an impact on the main media in the coal-and-steam age; but a sufficient number of people saw the potential and implemented it.

Ideas are picked up rapidly. Just look at how virtual memory went through the motions from 1964 to 1994, from being a novelty, to various implementations, to a new "do-it-right" phase, and then became standard technology throughout the world. What is remarkable is how everyone was more or less at the same stage of this process at the same time.

What is interesting here is that the vocabulary is very different if you look at IBM, DEC, AT&T/unix or at Multics. The same ideas come out with only a few years difference, but the wording is very different. Notice how I have had to translate Barb's statements at some times, and Lynn makes a large effort to be a "bivocabularian".

Since I have observed most of the history of computers from an "Outside-looking-in" position it is very valuable to get glimpses of what actually went on inside some large organizations that made the state-of-the-art technology. Dennis Ritchie has done a great job of documenting what they thought when they made Unix, and we get glimpses from IBM and DEC through Lynn and Barb here, although there is a problem with understanding corporate vocabulary. You need to know the system to understand the history.

Some important systems are very difficult to penetrate in terms of language and semantics. Multics is one. It has had a tremendous impact on later systems, but even with 10 years experience with a "wannabee" (Primos), and extensive reading, I still have problems with understanding the rationales for lots of the designs, and even struggle with the vocabulary.

This makes it very difficult to see who affected whom here, or if these were parallel, independent races. The vocabulary indicates the latter, but the closeness of the results the former.


remember there was ctss at mit ... and some of the ctss people went to the science center on the 4th floor of 545 tech sq and worked on cp67/cms, and others went to the 5th floor and worked on multics. not only was there some common history, but there was also physical proximity. some of my random posts about 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

another view of that period can be gotten from melinda's paper
http://www.leeandmelindavarian.com/Melinda#VMHist

there are even some 360/67 references at the multics "site"
http://www.multicians.org/thvv/360-67.html

one of the comments that i've periodically made is that cp67/cms and then vm/370 had a significant number of installed customers ... more than many other (non-ibm) timesharing systems that might come to mind. however, when people think about ibm, there is an almost kneejerk reaction to think about the batch systems .... since those numbers tended to dwarf the other systems (significantly dwarfed the number of vm/370 systems ... which in a number of cases dwarfed non-ibm timesharing systems). some random past comments about time-sharing
https://www.garlic.com/~lynn/submain.html#timeshare

over the years, one of my periodic hobbies was to build, ship, and support customized systems (independent of the development of product features that shipped via the official product group in the standard product) ... i was very active in building and supporting the hone system for going on 15 years (purely as a hobby)
https://www.garlic.com/~lynn/subtopic.html#hone

in addition, at one point, I believe i may have been building and shipping production customized operating systems to more "internal only" datacenters than there were total multics customers in the whole life of the multics product (again it was purely a hobby that i did for the fun of it).

one of the reasons that i developed an advanced (for the time) problem determination tool
https://www.garlic.com/~lynn/submain.html#dumprx

was to help cut down on the time i spent directly supporting datacenters that were participating in my hobby. another thing that i originally did and deployed at hone was a backup/archive system that eventually evolved into adsm and now tsm
https://www.garlic.com/~lynn/submain.html#backup

while i also did a lot of somewhat more research-like stuff
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon
https://www.garlic.com/~lynn/subtopic.html#smp
https://www.garlic.com/~lynn/submain.html#bounce
https://www.garlic.com/~lynn/subnetwork.html#3tier

i also have enjoyed deploying real-live production systems ... like the work to make a bullet proof system for the disk engineering and product test labs
https://www.garlic.com/~lynn/subtopic.html#disk

and along the way getting to play in disk engineering.

Integer types for 128-bit addressing

From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Integer types for 128-bit addressing
Date: Sat, 13 Nov 2004 14:05:09 -0800
Morten Reistad wrote in message news:<sj35nc.8i32.ln@via.reistad.priv.no>...
Since I have observed most of the history of computers from an "Outside-looking-in" position it is very valuable to get glimpses of what actually went on inside some large organizations that made the state-of-the-art technology. Dennis Ritchie has done a great job of documenting what they thought when they made Unix, and we get glimpses from IBM and DEC through Lynn and Barb here, although there is a problem with understanding corporate vocabulary. You need to know the system to understand the history.

one of my hobbies is also a merged taxonomy and glossary (although most of the work is merging the definitions ... and quite a bit is building the taxonomy)
https://www.garlic.com/~lynn/index.html#glosnote

for an ibm glossary/jargon ... try ...
http://www.212.net/business/jargon.htm

Integer types for 128-bit addressing

Refed: **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Integer types for 128-bit addressing
Date: Sun, 14 Nov 2004 09:57:34 -0800
"del cecchi" wrote:
Actually it is due to the insularity of IBM through much of the time. Often IBM was first or tied for first, but since during that era there was little external communication, IBM's terminology didn't make it out to the society at large. IBM had its own internal conferences, its own internal journals, its own internal newsgroups. Talking to the outside was so painful for the non Yorktown folks we never did it.

i was fortunate that I did a lot of invention and deployment while still an undergraduate ... and got to give presentations at user group meetings ... which had a spectrum of commercial, technical, gov., and academic organizations (it was sort of the heyday ... for its time ... of free sharing and distribution of software).

after joining the science center ... i was still called on to go to user group meetings, call on customers, and do misc. and sundry other stuff. it wasn't so much that i was at the science center ...
https://www.garlic.com/~lynn/subtopic.html#545tech

it was because i had been doing that sort of stuff for a number of years before getting hired.

i've claimed that having extensive contact with the rubber meeting the road ... was one of the reasons that kept me from being susceptible to the lure of the future system project (and the source of some number of my jaundiced comments about it) ....
https://www.garlic.com/~lynn/submain.html#futuresys

i had possibly way too much perspective of why it couldn't be done ... which people wouldn't have if they were several levels removed from real live operation.

of course there is some thread that one of the projects i worked on as an undergraduate ... a 360 controller clone
https://www.garlic.com/~lynn/submain.html#360pcm

was (at least one of the) factors in creating the clone controller business .... which in turn is claimed to be (possibly *the*) motivating business factor spawning the future system project.

when i moved to sjr ... i was allowed to continue to interact with real live customers (including, but not limited to academic oriented events) on a regular basis.

minor side-note, sjr put up the original corporate gateway to csnet
https://www.garlic.com/~lynn/internet.htm#0

and one of the sjr people registered the corporate class-a net in the interop 88 time-frame
https://www.garlic.com/~lynn/subnetwork.html#interop88

... which i haven't paid any attention to recently as to whether it is still being used.

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Integer types for 128-bit addressing
Date: Mon, 15 Nov 2004 15:15:31 -0800
nmm1@cus.cam.ac.uk (Nick Maclaren) wrote:
Also, in the 1970s and 1980s, I quite often got involved in telling one part of IBM what another part was doing. The insularity was in some ways worse between parts of IBM than between IBM and its customers.

more than once (say when i was at sjr on the west coast), I acted as an information conduit between somebody in one corridor in a bldg. (on the opposite coast) and a nearby corridor in that same bldg (in large part because i was very active using the internal network and email). that is independent of acting as an information conduit between the outside and the inside .... for instance, circa 1980, i started managing a r/o shadow of vmshare computer conferencing (hosted by tymshare) on HONE and several other internal machines. vmshare archive:
http://vm.marist.edu/~vmshare/

toolsrun was developed and deployed in the 80s ... in response to the growing use of the internal network for information sharing. it basically had a dual personality ... being able to act in a maillist-like mode and in a usenet-like mode (local shared repository and conferencing-like mode implemented for various editors). i was somewhat more prolific in my younger days and there were sometimes jokes about groups being wheeler'ized (where I posted well over half of all bits). some subset flavor of toolsrun was communicated to bitnet/earn (w/o the usenet-like characteristics)
https://www.garlic.com/~lynn/subnetwork.html#bitnet
which eventually morphed into listserv (and a clone, majordomo); a listserv "history"
http://www.lsoft.com/products/listserv-history.asp

in the 84/85 timeframe, it also led to a researcher being assigned to me for something like 9 months; they sat in the back of my office and took notes on how i communicated, got copies of all my posts and incoming and outgoing email ... and logs of all my instant messages. the research and analysis also resulted in a stanford phd thesis (joint between language and computer ai) and some follow-on books. random related posts
https://www.garlic.com/~lynn/subnetwork.html#cmc

Integer types for 128-bit addressing

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Integer types for 128-bit addressing
Date: Mon, 15 Nov 2004 16:15:49 -0800
ref:
https://www.garlic.com/~lynn/2004o.html#48

from long ago and far away ... userids munged to protect the guilty. this required getting legal sign-off making it available on internal machines. somebody was also starting to look at making machine-readable material available to customers for a fee ... and there was some investigation regarding the complications of including the (internal) r/o shadow of the vmshare conferencing material as part of that machine-readable distribution (with disclaimers that any fee would not be applicable to any included vmshare material).

note that the consolidated US HONE datacenter, Tymshare and SJR were all within 20 miles of each other.
https://www.garlic.com/~lynn/submain.html#timeshare



Date: 3/18/80  20:00:40
From: wheeler
CC: xxxxx at AMSVM1, xxxxx at BTVLAB, xxxxx at CAMBRIDG, xxxxx at
DEMOWEST, xxxxx at FRKVM1, xxxxx at GBURGDP2, xxxxx at GDLPD, xxxxx at
  GDLPD, xxxxx at GDLS7, xxxxx at GDLS7, xxxxx at GDLS7, xxxxx at GDLSX,
xxxxx at GFORD1, xxxxx at HONE1, xxxxx at HONE2, xxxxx at LOSANGEL,
xxxxx at NEPCA, xxxxx at OWGVM1, xxxxx at PALOALTO, xxxxx at PARIS,
xxxxx at PAVMS, xxxxx at PAVMS, xxxxx at PLKSA, xxxxx at PLPSB, xxxxx
  at PLPSB, xxxxx at RCHVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx
at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1, xxxxx at SJRLVM1,
  xxxxx at SJRLVM1, xxxxx at SNJCF2, xxxxx at SNJCF2, xxxxx at SNJTL1,
xxxxx at SNJTL1, xxxxx at STFFE1, xxxxx at STLVM2, xxxxx at STLVM2,
xxxxx at TDCSYS3, xxxxx at TOROVM, xxxxx at TUCVM2, xxxxx at UITHON1,
xxxxx at VANVM1, xxxxx at WINH5, xxxxx at YKTVMV, xxxxx at YKTVMV,
  xxxxx at YKTVMV, xxxxx at YKTVMV, xxxxx at YKTVMV
re: VMSHARE;

initial VMSHARE tape has been cut and is in the mail. Will be a couple
of days before it is up in research. Will take several more before
it is up at HONE.

... snip ... top of post, old email index, HONE email

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch
Date: Wed, 17 Nov 2004 07:25:54 -0700
old_systems_guy@yahoo.com (John Mashey) writes:
For anybody, that's an itneresting book, and Chapter 2 is especially useful in articulating the sorts of processes that go on in the evolution of ISAs and implementations thereof in real companies. I especially liked the short of the almost-done 65-bit CPU...

how 'bout the haggling that went on between rochester and austin about adding the 65th bit

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 17 Nov 2004 09:58:30 -0700
"Ken Hagan" writes:
I have no idea where you get that impression from. Perhaps you are confusing necessary with sufficient conditions.

It is unlikely that projects will be successful without large amounts of perspiration. Mr Edison estimates a 1%/99% ratio. That being the case, successful projects will, necessarily, be seen with hindsight to have passed through a small number of turning points, at which inspiration turned a previously unsolved problem into mere grunt-work.

That doesn't imply that one bright idea followed by some grunting is sufficient to achieve anything.


a lot of development is incremental change to existing technology.

original development is frequently inspiration and may represent some sort of disruptive technology. in addition to all the (enormous amounts of) grunt work to go from idea to business-quality solution ... there may also be large amounts of grunt work countering existing, entrenched interests.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 06:31:16 -0700
jmfbahciv writes:
Are circular files black now? Mine was gray.

random past references to administrative overhead and red-tape becoming so massive that nothing can escape
https://www.garlic.com/~lynn/99.html#162 What is "Firmware"
https://www.garlic.com/~lynn/2001l.html#56 hammer
https://www.garlic.com/~lynn/2004b.html#29 The SOB that helped IT jobs move to India is dead!

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 06:59:43 -0700
for some (more) drift, recently in a presentation there was some comment along the lines of "see, the current internet infrastructure resiliency is the result of the original arpanet design for highly available networks".

my observation is that it is more the result of the long(er) history of telco provisioning than anything to do with the original arpanet design. I have some recollection circa 1979 of somebody joking that the IMPs were able to nearly saturate the 56kbit links with inter-IMP administrative chatter about packet routing (bits & pieces about the packet traffic, as opposed to the actual packet traffic) ... aka administrative overhead turning into a black hole; in effect, the design didn't scale ...
https://www.garlic.com/~lynn/2004o.html#52 360 longevity, was RISCs too close to hardware?

the original arpanet was a homogeneous networking infrastructure using IMPs, and any resiliency was provided by these front-end IMPs and a lot of background (& out-of-band) administrative chatter. In the conversion to heterogeneous internetworking .... various patches were made to the infrastructure to try and get around the scaling issues.

in the mid-90s, about the time e-commerce was being deployed ... the internet infrastructure finally officially "switched" to hierarchical routing (beginning to look more & more like telco infrastructure), in part because of the severe scaling issues with (effectively) anarchic, random routing. this was the period when I got to have a lot of multiple A-record discussions with various people (several of whom made statements that the basic, simple TCP protocol totally handled reliability, resiliency, availability, etc ... and that nothing more was needed ... including multiple A-record support).

At this point, i can insert the comment about: in theory, there is no difference between theory and practice, but in practice there is.

However, it is much more applicable to comment about some residual entry-level texts equating some of the original arpanet implementation with existing implementations .... and there being some dearth of real live experience with production, business-strength, deployable systems. some amount of this we investigated in detail when we were doing the ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

one might even claim that the lingering myths about the arpanet implementation and the internet implementation being the same are similar to some of the lingering stuff about OSI being related in any way to internetworking
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

random past mentions of multiple a-record:
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#158 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#159 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#164 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/aepay4.htm#comcert17 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aadsm13.htm#37 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm15.htm#13 Resolving an identifier into a meaning
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#34 Buffer overflow
https://www.garlic.com/~lynn/2003.html#30 Round robin IS NOT load balancing (?)
https://www.garlic.com/~lynn/2003.html#33 Round robin IS NOT load balancing (?)
https://www.garlic.com/~lynn/2003c.html#8 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#12 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#24 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#25 Network separation using host w/multiple network interfaces
https://www.garlic.com/~lynn/2003c.html#57 Easiest possible PASV experiment
https://www.garlic.com/~lynn/2004k.html#32 Frontiernet insists on being my firewall

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

evoluation (1.4) and ssl (FC2) update

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: evoluation (1.4) and ssl (FC2) update
Newsgroups: alt.os.linux.redhat
Date: Thu, 18 Nov 2004 07:09:37 -0700
i've got FC2 and evolution 1.4 running on a machine. evolution receives (pop) and sends (smtp) mail using SSL ... and everything has worked since evolution went up on the machine with FC1. yesterday, yum applied the latest ssl changes for FC2. ever since then, evolution aborts every time it attempts to read mail (which makes me suspicious that there is some tie-in with the ssl changes). i've even wiped all my existing email and evolution definitions and recreated from scratch ... and evolution still aborts with any attempt to read mail.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 08:59:25 -0700
jmfbahciv writes:
Nope. I'm not confusing anything since I was a part of a lot of "new stuff" getting made. The other people who are trying to correct your assumptions actually did the work for decades. At least Lynn did; I can't recall how long Del has been in the biz.

i got a summer job programming as an undergraduate in the mid-60s ... then the university hired me to be responsible for the production systems. normally the university datacenter shut down over the weekend ... they gave me a key ... and i frequently had all the computers in the datacenter from 8am sat. until 8am monday ... there were sometimes issues about getting to monday classes after already having been up for 48hrs straight.

spring of 69 ... shortly after boeing formed bcs ... ibm con'ed me into skipping spring break and teaching a one-week computer class to the bcs technical staff. bcs then hired me as a full-time employee ... while i was still a student (i was even given a supervisor parking permit at boeing field). there was this weird semester where i was a full-time student, a full-time bcs employee (on educational leave of absence), and doing time-slip work for IBM (mostly supporting the cics beta-test that was part of an onr-funded university library project).

while i did a lot of research and shipped products over the years ... for much of the time ... i was also involved in directly building and supporting systems for day-to-day production datacenter use; it frequently wasn't even in my job description ... more like a hobby ...

example i've frequently used is hone
https://www.garlic.com/~lynn/subtopic.html#hone

which world-wide field, sales, and marketing all ran on. i was never part of the HONE structure or business operation.

another example was disk engineering and product test in bldgs. 14 & 15
https://www.garlic.com/~lynn/subtopic.html#disk

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 09:18:28 -0700
for even more topic drift ... i had worked construction in high school, and the first summer in college ... got a job as foreman on a construction job. about a year after i joined ibm ... they gave me this song and dance about ibm rapidly growing and THE career path being manager, having lots of people reporting to you, etc. i asked to read the manager's manual (about 3in thick, 3-ring binder). I then told them that how i learned (in construction) to deal with difficult workers was quite incompatible with what was presented in the manager's manual. i was never asked to be a manager again.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 13:12:06 -0700
Edward Wolfgram writes:
Perhaps I read the article incorrectly, but I seem to have missed any explanation of how performance problems are present and how 64 bit addressing solved these performance problems.

Clearly, because the Linux kernel desires (again, not *needs*) to address all of physical memory, that there are addressing limits due to large virtual memory related data structures. It doesn't have to be that way, and again, this has nothing to do with performance problems.

If you wish to confine your OS discussion to shared memory VM models, then you may get nearer to showing a requirement for gt-32-bit addressability. But I don't see one, even there.


some (possibly extraneous) examples.

370s had 24-bit addressing ... i've posted before how systems (cpus) were getting faster at a much higher rate than disks (resulting in a decline in disk relative system performance). as a result, real storage was being relied on more & more to compensate for the declining relative disk performance.

3033 had a 4.5 mip processor, 16 channels, and 24-bit addressing (both real and virtual) so was limited to 16mbytes real storage ... and was about the same price as a cluster of six 4341s, each of which was about one mip, had six channels, and 16mbytes real storage (6mips aggregate, 36 channels aggregate and 96mbytes real storage aggregate). as a result, 3033 was suffering from the 16mbyte real storage limit .... and a two-processor 3033 smp suffered even worse since the configuration was still limited to 16mbytes real storage (you were now trying to cram 8+mips worth of work thru a 16mbyte real-storage-limited system ... one that was frequently real storage limited with a 4.5mip single processor).

they came up with a gimmick to allow up to 64mbytes of real storage in a configuration ... even tho the architecture was still limited to 24bit addressing. the page table entry was 16bits; 12bit (4k) page number, 2 flag bits, and two unused bits. what they did was allow concatenating the two unused bits to the 12bit page number ... allowing addressing of up to 16384 real pages. instructions were still limited to 24bit addressing (real and virtual) ... but the TLB could resolve to a 26bit (real) address.
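
the two-bit concatenation trick can be sketched in code. this is a toy illustration; the field positions shown are assumptions for illustration (the description above only gives the field widths, not the actual S/370 PTE bit layout):

```python
PAGE_SHIFT = 12          # 4k pages -> 12-bit byte offset within a page

def real_address(pte: int, offset: int) -> int:
    """Form a 26-bit real address from a 16-bit 3033-style PTE.

    Assumed (illustrative) layout: low 12 bits = original page frame
    number, next 2 bits = flag bits, top 2 bits = the formerly unused
    bits concatenated on as the high-order bits of the frame number.
    """
    pfn12  = pte & 0xFFF             # original 12-bit frame number
    extra2 = (pte >> 14) & 0x3       # the two previously unused bits
    pfn14  = (extra2 << 12) | pfn12  # 14-bit frame number -> 16384 frames
    return (pfn14 << PAGE_SHIFT) | (offset & 0xFFF)

# 16384 frames * 4k = 64mbytes of addressable real storage,
# i.e. the TLB can resolve a 26-bit real address
assert real_address(0xFFFF, 0xFFF) == (1 << 26) - 1
```

instructions still generate only 24-bit addresses; the extra two bits exist only on the real side of the translation, which is why the limit lifted was real storage size, not virtual address space.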

the issue wasn't specific to smp; it was that the system thruput balance could use 8-10 mbytes per mip ... and a single-processor 3033 was frequently under-configured with real storage (and 3033 smp further aggravated the mbytes/mip starvation). Note that the 4341 clustering solution got around the real storage addressing constraint by allowing an additional 16mbytes of real storage for every mip added.

MVS in the time-frame had a different sort of addressing problem. the original platform was a real-storage operating system that was completely based on a pointer-passing paradigm. the initial migration of mvt from real storage to virtual memory ... did just enuf so that the kernel (still pretty much mvt) appeared to be running in 16mbytes of real storage .... using a single 16mbyte virtual address space ... which still contained all the kernel and applications (as found in the mvt real-storage model) ... and continued to rely on the pointer-passing paradigm.

the migration to mvs created a single 16mbyte address space per application ... but with the mvs kernel occupying 8mbytes of every virtual address space. also, since some number of non-kernel system functions were now in their own virtual address spaces ... a one-mbyte "common area" was created. applications needing to call a subsystem function would place stuff in the common area, set up a pointer, and make a call. the call passed thru the kernel to switch address spaces, and the subsystem used the passed pointer to access the data in the common area. this left up to 7mbytes for applications. the problem was that as various installations added subsystem functions, they had to grow the common area. in the 3033 time-frame, it wasn't unusual to find installations with 4mbyte common areas ... leaving a maximum of 4mbytes (in each virtual address space) for application execution.

note that the TLB hardware was address-space associative ... and for various reasons ... there were two address spaces defined for each application ... with most of the stuff identical (a kernel-mode address space and an application-mode address space). furthermore, the kernel stuff and the "common area" were pretty much common across all address spaces ... however, the implementation tended to eat up TLB entries.

to "address" the looming virtual address space constraint .... "dual-address space" was introduced on the 3033. subsystems now had a primary address space (the normal kind) and a secondary address space. The secondary address space was actually the address space of the calling application that had passed a pointer. Subsystems now had instructions that could fetch/store data out of secondary address spaces (the calling applications') using passed pointers (instead of creating a message-passing paradigm).

a little digression on the MVT->SVS->MVS evolution. In the initial morphing of MVT->SVS, POK relied on cambridge and cp67 technology (for instance the initial SVS prototype was built by crafting CP67's "CCWTRANS" onto the side of an MVT infrastructure .... to do the mapping of virtual channel programs to real channel programs). There was also quite a bit of discussion about page replacement algorithms. The POK group was adamant that they were going to do an LRU approximation algorithm ... but modify it so that it favored selecting non-changed pages before changed pages (so the overhead and latency of writing out the changed page wasn't required). No amount of argument on my part could convince them otherwise; it was somewhat a situation of not being able to see the forest for the trees.

So, in about the MVS/3033 time-frame ... somebody in POK finally came to the conclusion that they were biasing the page replacement algorithm towards keeping private, application-specific, changed data pages, at the expense of high-use, shared system pages (i.e. high-use, shared system instruction/execution pages would be selected for replacement before private, application-specific, modified data pages).
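
a toy sketch of the pathology (a hypothetical simplification, not the actual MVS code): a replacement policy that prefers unchanged pages will evict a heavily used shared code page (which is never modified) before an idle, modified private data page:

```python
from dataclasses import dataclass

@dataclass
class Page:
    name: str
    referenced: bool   # reference bit, as an LRU approximation would track
    changed: bool      # changed (dirty) bit

def select_victim(pages):
    """Pick a replacement victim with the changed-page bias described
    above (hypothetical simplification): prefer not-recently-referenced
    unchanged pages, then fall back to ANY unchanged page -- even a
    high-use shared one -- before paying a write-out."""
    for p in pages:                                 # pass 1: idle and clean
        if not p.referenced and not p.changed:
            return p
    for p in pages:                                 # pass 2: any clean page
        if not p.changed:
            return p
    return pages[0]                                 # pass 3: bias exhausted

pages = [
    Page("shared system code", referenced=True,  changed=False),
    Page("private app data",   referenced=False, changed=True),
]
# the bias evicts the high-use shared page rather than pay the
# write-out cost for the idle, modified private page
assert select_victim(pages).name == "shared system code"
```

an unbiased LRU approximation would have evicted "private app data" here (it is the not-recently-referenced page), taking the one-time write-out cost instead of repeatedly refaulting hot shared pages.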

come the 3090, they had a different sort of problem with real storage and physical packaging. this somewhat harked back to 360 "LCS" or local/remote memory in various SMP schemes like those supported by SCI. They wanted more real memory than could be physically packaged within the latency distances for standard memory. Frequently you would find a 3090 with the maximum amount of real storage ... and then possibly as much "expanded storage" (which was effectively identical technology to that used in the standard storage). expanded storage was at the end of a longer-distance and wider memory bus.

A 360 *LCS* configuration could have, say, 1mbyte of 750nsec storage and 8mbytes of 8mic *LCS* storage. *LCS* supported direct instruction execution (but slower than if data/instructions were in the 750nsec storage). Various software strategies attempted to trade off copying from *LCS* into *local* storage for execution ... against executing directly out of *LCS*.

In the 3090 expanded stor case, only copying was supported (standard instruction & data fetch was not supported; neither was I/O). There was a special, synchronous expanded stor copy instruction ... and expanded stor was purely software managed ... slightly analogous to an electronic paging device ... except it used synchronous instructions instead of I/O.
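
the software-managed two-tier arrangement can be sketched as follows (a toy model with invented names and sizes; real expanded storage used a synchronous page-copy instruction between the two tiers, not python dicts):

```python
PAGE_SIZE = 4096
MAIN_FRAMES = 4          # toy capacity of directly addressable storage

main_store = {}          # page_no -> data; instruction/data fetch works here
expanded_store = {}      # page_no -> data; reachable ONLY via synchronous copy

def move_to_expanded(page_no):
    """Software-managed eviction: a synchronous copy over the wide
    memory bus -- no I/O operation, no device interrupt to wait on."""
    expanded_store[page_no] = main_store.pop(page_no)

def touch(page_no):
    """Use a page: anything sitting in expanded storage must first be
    synchronously copied back into standard storage (evicting a victim
    if standard storage is full), since it can't be executed in place."""
    if page_no not in main_store:
        if len(main_store) >= MAIN_FRAMES:
            victim = next(iter(main_store))   # naive victim choice for the toy
            move_to_expanded(victim)
        main_store[page_no] = expanded_store.pop(page_no, bytes(PAGE_SIZE))
    return main_store[page_no]

for n in range(6):       # touch 6 pages; only 4 fit in standard storage
    touch(n)
assert len(main_store) == 4 and len(expanded_store) == 2
```

the contrast with *LCS* is the design point: *LCS* let software choose between copying and (slower) in-place execution, while expanded storage removed that choice and offered only the synchronous copy path.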

misc 3033, dual-address space, and/or mvs postings (this year):
https://www.garlic.com/~lynn/2004.html#0 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004.html#9 Dyadic
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004.html#17 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004.html#19 virtual-machine theory
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#35 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#49 new to mainframe asm
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction
https://www.garlic.com/~lynn/2004b.html#60 Paging
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#2 Microsoft source leak
https://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#19 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#20 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#41 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
https://www.garlic.com/~lynn/2004d.html#69 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
https://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
https://www.garlic.com/~lynn/2004e.html#6 What is the truth ?
https://www.garlic.com/~lynn/2004e.html#35 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#11 command line switches [Re: [REALLY OT!] Overuse of symbolic
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#30 vm
https://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#55 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#60 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#2 Text Adventures (which computer was first?)
https://www.garlic.com/~lynn/2004g.html#11 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#15 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
https://www.garlic.com/~lynn/2004g.html#29 [IBM-MAIN] HERCULES
https://www.garlic.com/~lynn/2004g.html#35 network history (repeat, google may have gotten confused?)
https://www.garlic.com/~lynn/2004g.html#38 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#55 The WIZ Processor
https://www.garlic.com/~lynn/2004h.html#10 Possibly stupid question for you IBM mainframers... :-)
https://www.garlic.com/~lynn/2004k.html#23 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
https://www.garlic.com/~lynn/2004l.html#22 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#23 Is the solution FBA was Re: FW: Looking for Disk Calc
https://www.garlic.com/~lynn/2004l.html#54 No visible activity
https://www.garlic.com/~lynn/2004l.html#67 Lock-free algorithms
https://www.garlic.com/~lynn/2004l.html#68 Lock-free algorithms
https://www.garlic.com/~lynn/2004m.html#17 mainframe and microprocessor
https://www.garlic.com/~lynn/2004m.html#36 Multi-processor timing issue
https://www.garlic.com/~lynn/2004m.html#42 Auditors and systems programmers
https://www.garlic.com/~lynn/2004m.html#49 EAL5
https://www.garlic.com/~lynn/2004m.html#50 EAL5
https://www.garlic.com/~lynn/2004m.html#53 4GHz is the glass ceiling?
https://www.garlic.com/~lynn/2004m.html#63 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#0 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#7 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#14 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004n.html#39 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#5 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#19 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#38 Facilities "owned" by MVS
https://www.garlic.com/~lynn/2004o.html#39 Facilities "owned" by MVS
https://www.garlic.com/~lynn/2004o.html#40 Facilities "owned" by MVS

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 13:24:41 -0700
"Tom Linden" writes:
Small world, I was there from '66 to '70, but in Renton and on CDC6600 the IBM gear was mostly at Boeing field

summer of '69, renton data center was supposedly rapidly being populated with ibm gear. i was told that there was already $200m worth of ibm gear in the renton data center ... and that machines were arriving as fast as they could be installed. there was supposedly some claim that for several months the hallways outside the machine room proper averaged all the components for three 360/65 systems awaiting installation (as fast as boxes were installed ... new boxes would be arriving).

the person assigned to head up BCS was from corporate hdqtrs (up at boeing field) and there were some possible turf issues ... since BCS was to absorb the significantly larger renton data center operation (as well as some number of other data center operations).

there was a story i was told about a certain famous salesman; boeing was generating computer orders as fast as the person could write up the orders ... whether or not the person actually knew what was being ordered. supposedly this is what prompted the switch from straight commission to the sales quota system (since the short-term straight commission off the sales to boeing was, itself, quite large). the salesman supposedly left shortly after the sales quota system was put in place and founded a large computer service company.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 13:36:02 -0700
Janne Blomqvist writes:
"On 32-bit systems, memory is now divided into "high" and "low" memory. Low memory continues to be mapped directly into the kernel's address space, and is thus always reachable via a kernel-space pointer. High memory, instead, has no direct kernel mapping. When the kernel needs to work with a page in high memory, it must explicitly set up a special page table to map it into the kernel's address space first. This operation can be expensive, and there are limits on the number of high-memory pages which can be mapped at any particular time."

design right out of 3033 >16mbyte support (25 years ago) .... there were some number of operations (including some i/o operations) that had to be in the first 16mbytes of storage.

the kernels running on 3033 ... set up a special page table pointing to the page above the 16mbyte line ... and either 1) directly accessed the data and/or 2) copied the contents of the 4k page to a 4k slot in the <16mbyte area.
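
the limited-mapping-window idea can be sketched in a few lines (python, purely illustrative -- the class and method names mimic the linux kmap/kunmap naming but none of this is the actual kernel or 3033 interface; the slot count is made up):

```python
# illustrative model of a limited mapping window for "high" memory:
# only a fixed number of high pages can be mapped at once, so a mapping
# must be set up before access and torn down afterwards to free the slot.

class MappingWindow:
    def __init__(self, nslots):
        self.nslots = nslots     # limited number of slots below the "line"
        self.slots = {}          # page -> slot number currently mapped

    def kmap(self, page):
        """map a high page into the window; fails when no slot is free."""
        if page in self.slots:
            return self.slots[page]
        if len(self.slots) >= self.nslots:
            raise RuntimeError("no free mapping slots")
        slot = next(s for s in range(self.nslots)
                    if s not in self.slots.values())
        self.slots[page] = slot
        return slot

    def kunmap(self, page):
        """tear the mapping down, freeing the slot for reuse."""
        del self.slots[page]

win = MappingWindow(nslots=2)
a = win.kmap("high-page-1")      # slot 0
b = win.kmap("high-page-2")      # slot 1
try:
    win.kmap("high-page-3")      # window exhausted
except RuntimeError as e:
    print(e)                     # no free mapping slots
win.kunmap("high-page-1")
c = win.kmap("high-page-3")      # now succeeds, reusing the freed slot
```

the point being the "limits on the number of high-memory pages which can be mapped at any particular time" in the quoted text above ... same flavor of constraint as the <16mbyte slots on 3033.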

as an aside, i have a relatively new dimension 8300 with 4gbytes real storage ... on which so far i've loaded fedora FC1 and FC2 ... and all they claim is 3.5gbytes real storage (while they indicate 2gbytes real storage on a machine with 2gbytes real storage).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

JES2 NJE setup

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: JES2 NJE setup
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 18 Nov 2004 15:51:02 -0700
edjaffe@ibm-main.lst (Edward E. Jaffe) writes:
CTCs talk to CNCs. You can use them for VTAM <-> VTAM SNA connections. Once those links are established, you can run *any* SNA traffic (including SNA/NJE) over them. It beats having a dedicated resource.

in the early 90s there was some stuff comparing vtam lu6.2 and nfs .... typical (unix) nfs ran over ip ... might have (total) pathlength of 40k-50k instructions and five buffer copies. the comparable number for vtam was something like 160k instruction pathlength and 14 buffer copies. somebody estimated that the (3090) processor cycles to do fourteen 8k buffer copies might actually be more than the processor cycles for the instructions.
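
a back-of-envelope check of that estimate (python; the cycles-per-byte figure is a pure assumption for illustration, not a measured 3090 number):

```python
# how fourteen 8k buffer copies can rival a 160k-instruction pathlength:
copies = 14                 # quoted vtam lu6.2 buffer copies
bufsize = 8 * 1024          # 8k buffers
bytes_moved = copies * bufsize

instr_path = 160_000        # quoted vtam instruction pathlength
cycles_per_byte = 1.5       # assumed cost of a storage-to-storage move

copy_cycles = bytes_moved * cycles_per_byte
print(bytes_moved, int(copy_cycles))   # 114688 172032
```

at anything much over a cycle per byte (and assuming very roughly a cycle per instruction), the ~115kbytes moved per operation costs more cycles than the instructions do ... which is the flavor of the estimate mentioned above.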

somewhat earlier .... there was an effort to deploy a STL/Hursley offload project (hursley using stl computers offshift and stl using hursley computers offshift) using a high-speed(?) double-hop satellite link. they first brought up the link with native vnet drivers (note for customers ... they eventually managed to eliminate all the native vnet drivers ... leaving customers with only nje drivers on vnet).

then the networking manager in stl insisted on switching the double-hop satellite link to sna/nje. there was never any successful connection made. the stl networking manager had the link swapped back&forth a number of times with native vnet drivers and sna/nje drivers .... with the native vnet drivers always working w/o a problem and no sna/nje driver ever making a connection.

his eventual conclusion was that there was a high error rate on the link and the native vnet driver was not sophisticated enuf to realize it. it turned out that the actual problem was that sna/nje had a hard-coded keep-alive check with a shorter interval than a complete round trip over a double-hop satellite link (stl up to geo-sync orbit over the us, down to an east coast earth station, up to geo-sync orbit over the atlantic and down to hursley ... and then the return). note however, that was an unacceptable conclusion ... it was much more politically correct that the transmission was malfunctioning.
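
the failure mode is mechanical once stated that way (python sketch; the interval values below are assumptions for illustration, not the real sna/nje timer or measured link numbers):

```python
# why a hard-coded keep-alive shorter than the link round trip can never
# see a response in time: the timer always fires before the reply arrives,
# so the link is declared dead on every connection attempt.

def connection_attempt(rtt_secs, keepalive_secs):
    """the handshake reply takes one full round trip; the keep-alive
    declares the link dead if nothing arrives within its interval."""
    return "connected" if rtt_secs <= keepalive_secs else "timed out"

single_hop_rtt = 0.5   # order of magnitude for one geo-sync hop each way
double_hop_rtt = 1.0   # two geo-sync hops each way, roughly doubled
keepalive = 0.7        # hypothetical hard-coded keep-alive interval

print(connection_attempt(single_hop_rtt, keepalive))   # connected
print(connection_attempt(double_hop_rtt, keepalive))   # timed out
```

no error rate required ... a deterministic timeout looks exactly like a link that never comes up.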

for some additional/total topic drift, got to be in the stands for
http://www.nasa.gov/mission_pages/shuttle/shuttlemissions/archives/sts-41D.html

because hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt

was going to make use of one of the payloads.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 16:31:30 -0700
glen herrmannsfeldt writes:
There are undersea fiber optic cables that are over a tenth of a second long. One is about 0.14s long.

try roundtrip on double-hop satellite link in geo-sync orbit, recent mention
https://www.garlic.com/~lynn/2004o.html#60 JES2 NJE setup

hsdt used geo-sync satellite for some links (also had fiber and microwave links).

single hop round trip is about 88k miles, double hop is twice that.
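
the arithmetic behind those figures (python; treating the path as straight up-and-down to geo-sync altitude, ignoring ground-station geometry):

```python
# rough geo-sync latency arithmetic: altitude and the speed of light
# are the only inputs.

GEO_ALTITUDE_MILES = 22_300     # geo-sync orbit altitude, roughly
C_MILES_PER_SEC = 186_282       # speed of light in vacuum

single_hop_round_trip = 4 * GEO_ALTITUDE_MILES   # up+down, both directions
double_hop_round_trip = 2 * single_hop_round_trip

print(single_hop_round_trip)                               # 89200
print(round(single_hop_round_trip / C_MILES_PER_SEC, 2))   # 0.48
print(round(double_hop_round_trip / C_MILES_PER_SEC, 2))   # 0.96
```

i.e. the "about 88k miles" above, close to half a second round trip single hop and close to a full second double hop ... before any protocol processing at all.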

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 Nov 2004 18:21:46 -0700
"del cecchi" writes:
I remember when the geniuses at IBM put the tie lines on satellite. It was awful. Half duplex with a lag.

ibm had invested heavily in the technology for sat. computer communication ... unfortunately the sna forces got involved trying to dictate the protocol that should be mandated for such use. if you thought that real-time voice wasn't designed for geo-sync sat. latency .... neither was sna (aka voice over sat. actually working better than sna over sat.). of course, i wrote the hsdt driver for dual-simplex over sat. with adaptive rate-based pacing (as opposed to window-based pacing).
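
the window-vs-rate distinction shows up directly in the throughput arithmetic (python sketch; the window size, rtt values, and link rate below are illustrative assumptions, not hsdt or sna numbers):

```python
# why window-based pacing collapses over long-latency links while
# rate-based pacing does not: with a fixed window, throughput is capped
# at window/rtt regardless of how fast the link itself is.

def window_throughput(window_bytes, rtt_secs):
    """with window-based pacing, at most one window is in flight per rtt."""
    return window_bytes / rtt_secs

link_rate = 1_500_000 / 8       # t1-class link, in bytes/sec
window = 7 * 2048               # e.g. seven 2k buffers allowed in flight

terrestrial = window_throughput(window, 0.05)   # short-haul rtt
double_hop  = window_throughput(window, 0.96)   # double-hop satellite rtt

# short haul: the window cap exceeds the link rate, so no harm done;
# double hop: the cap drops to well under 10% of the link. a rate-based
# sender pacing transmissions at link_rate is unaffected by the rtt.
print(int(terrestrial), int(double_hop))   # 286720 14933
```

which is why adaptive rate-based pacing (adjust the send rate to observed conditions, don't stop and wait for window acks) was the approach that filled a geo-sync pipe.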

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 19 Nov 2004 08:04:59 -0700
Anne & Lynn Wheeler writes:
ibm had invested heavily in the technology for sat. computer communication ...

put both money and people into the effort .... one of the issues was that (at the time) ibm had about 14 levels of management for a 485,000-person organization. for the sat. effort, guess how many levels of management were put in place for a 2000-person organization? guess what percentage of the organization had titles of director or above?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 19 Nov 2004 08:56:07 -0700
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Hmm. 8-10? And 10%?

I know of one example that, temporarily, had 5 levels for 100 people, with over 40 people having some (internal) managerial role.


it had the same number of levels ... and 50 percent

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

360 longevity, was RISCs too close to hardware?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 360 longevity, was RISCs too close to hardware?
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 19 Nov 2004 09:27:15 -0700
... the use of one of the (hsdt) links between austin and san jose is claimed to have helped bring the rios chip set in a year early; .... (large) chip designs being shipped out to run on the EVE and LSM logic simulators in san jose

random recent posts mentioning the logic simulators
https://www.garlic.com/~lynn/2004g.html#12 network history
https://www.garlic.com/~lynn/2004j.html#16 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 20 Nov 2004 07:15:17 -0700
glen herrmannsfeldt writes:
It seems that at the time they thought 24 bits would last a while. They did, but the architecture lasted longer than they expected.

in theory, future system
https://www.garlic.com/~lynn/submain.html#futuresys

was going to replace it about mid-thru 370, possibly w/o even having to announce virtual memory for 370s.

in the early 70s, Amdahl gave a talk at mit about starting his new company and the business case he used with the investors. there was some line about customers having already invested something like $200b in software for the architecture (this was only 7-8 years after the original 360 announcement) and that even if IBM were to completely walk away from it (which could be interpreted as a veiled reference to FS), there would still be customers running that software at least until 2000 (which also showed up as the big y2k remediation projects in the late 90s ... driving demand for programming skills along with all the other frenzy of the late 90s).

specific blurb/reference on fs
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System

the above references the plug-compatible and controller clone competition that was showing up ... at the time there was some write-up blaming a project that i had worked on as an undergraduate
https://www.garlic.com/~lynn/submain.html#360pcm

the folklore is that after FS was killed, some of the participants retreated to rochester and produced the s/38 (which morphed into the cisc as/400 and then into the current risc as/400).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Relational vs network vs hierarchic databases

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Relational vs network vs hierarchic databases
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Sat, 20 Nov 2004 09:15:11 -0700
"Dan" writes:
Traversing by pointers is very, very fast. There is not doubt about it. However, there is a huge downside to hardcoding a data model by using pointers and fixed data segments. This should be obvious.

I believe what Laconic2 meant by saying "all other things equal" is that if we were to race a relational system and an IMS hierarchical system with *exactly* the same resources in terms of memory, processing power, platform characteristics, I/O capabilities, secondary storage, and inter-process communication overhead (which is probably not really possible), over fixed data access paths which correspond to how the hierarchy is formed in IMS, the IMS system would be and is faster (sub-second responses).


the big argument that i remember going on between stl/ims (bldg. 90) and sjr/systemr (and bldg. 28, just a couple miles away):
https://www.garlic.com/~lynn/submain.html#systemr

was that there was a trade-off between the disk space of direct (physical) pointers in the 60s-era database implementations (ims, hierarchical, network, etc) and relational's use of indexes. the comparisons in the late '70s were that the typical systemr implementation doubled the physical disk space (to handle the indexes), which was traded off against the reduction in dba/sysadm people time managing the pointers (the trade-off of physical pointers vis-a-vis physical disk space for indexes was common across the various '60s database implementations, regardless of information organization/model ... hierarchical, network, etc).
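
the trade-off in miniature (python; toy records purely for illustration -- not ims or system r structures):

```python
# 60s-style direct pointers embed the access path in the data itself,
# while the relational approach spends extra disk space on a separately
# maintained index instead.

# direct (physical) pointers: each record carries its successor
accounts_ptr = {
    "cust1": {"balance": 100, "next": "cust2"},   # pointer chain
    "cust2": {"balance": 250, "next": None},
}

# relational: flat table plus a separately built index
table = [
    {"acct": "cust1", "balance": 100},
    {"acct": "cust2", "balance": 250},
]
index = {row["acct"]: i for i, row in enumerate(table)}  # the extra space

# lookup cost is similar either way; the difference is who maintains the
# structure -- reorganizing the pointer version is dba/sysadm people time,
# rebuilding the index is machine time and disk space.
print(accounts_ptr["cust1"]["balance"], table[index["cust2"]]["balance"])
```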

note that the target of relational at the time was a fairly straight-forward "flat", single-table, bank-account database .... with the account number as the primary index ... not really taking into consideration infrastructures involving multiple tables, joins, etc.

as an undergraduate, i had worked on an onr-funded university library project that utilized bdam (& cics) ... it turns out a similar project was going on at the same time at the NIH's NLM ... using the same technology (but at much larger scale). i had an opportunity to look at NLM's implementation in much more detail a couple years ago with some stuff associated with UMLS ... a couple recent posts mentioning UMLS:
https://www.garlic.com/~lynn/2004f.html#7 The Network Data Model, foundation for Relational Model
https://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in relational data

There had been some work on mapping UMLS into an RDBMS implementation ... the problem was that much of UMLS is non-uniform and frequently quite anomalous ... making anything but a very gross, high-level schema extremely difficult and people-intensive .... for really complex, real-world data, the saving in operational sysadm/dba time was being traded off against the heavily front-loaded people time associated with normalization .... with the difference in physical disk-space requirements (for indexes) being ignored.

In any case, at that time, NLM was still using the original BDAM implementation (from the late '60s).

random bdam &/or cics posts:
https://www.garlic.com/~lynn/submain.html#bdam

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/



previous, next, index - home