List of Archived Posts

2006 Newsgroup Postings (05/13 - 05/18)

virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
The Pankian Metaphor
virtual memory
ALternatives to EMail
Value of an old IBM PS/2 CL57 SX Laptop
The Chant of the Trolloc Hordes
The Pankian Metaphor
History of first use of all-computerized typesetting?
virtual memory
virtual memory
30 hop limit
Resolver Issue (was Revolver Issue)
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
virtual memory
Password Complexity
How to implement Lpars within Linux
virtual memory
virtual memory
Code density and performance?
How to implement Lpars within Linux
Arpa address
Code density and performance?
The Pankian Metaphor
virtual memory
The Pankian Metaphor
virtual memory
virtual memory
virtual memory
Passwords for bank sites - change or not?
virtual memory
virtual memory
Arpa address
Arpa address
where do I buy a SSL certificate?
where do I buy a SSL certificate?
Arpa address
Arpa address
Hey! Keep Your Hands Out Of My Abstraction Layer!
Passwords for bank sites - change or not?
Arpa address

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 12:29:08 -0600
CBFalconer writes:
Something I built in the late 70s/early 80s was a variant on LRU. It basically took the used bit at intervals, and shifted it right into a history byte (actually 5 bits), while resetting the used bit. The result was something that gave decreasing weight to older history, and thus could resolve decisions made over the past 5 or six intervals.

The intervals were derived from inter-segment procedure call counting, which in turn were the only events that could use the LRU scheme. Nothing further was done unless the required segment was not present, when the scheme only selected segments to displace. This was only used for code segments, not data.


this was somewhat similar to the Multics 4-bit experiments in the early 70s ... shifting the reference bit. there was a multics paper in acm in the early 70s, i believe by saltzer. they actually tried experiments with keeping different numbers of history bits

I did something similar in about the same time frame ... but using up to eight history bits:
http://www.garlic.com/~lynn/subtopic.html#wsclock

one of the things that clock and a lot of the simulation work at the science center turned up was that there were differences between past use characteristics and the ability to predict future behavior; aka there were various past behavior thresholds that wouldn't correlate with future behavior over various periods (invalidating the assumption behind a least recently used replacement strategy)
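the reference-bit aging scheme discussed in this exchange can be sketched roughly as follows (a minimal illustration only, not the actual code from any of the systems mentioned; the 8-bit register size is one of the variants discussed):

```python
# Sketch of aging page-reference bits: at each sampling interval the
# hardware "used" bit is shifted into a per-page history register and
# then reset, so older references carry geometrically less weight.
# HISTORY_BITS is an assumption; the posts mention 4, 5, and 8 bit variants.

HISTORY_BITS = 8

class Page:
    def __init__(self):
        self.referenced = 0   # hardware reference bit, set on access
        self.history = 0      # software aging register

def age_pages(pages):
    """Run at each interval: fold reference bits into history, then reset."""
    for p in pages:
        p.history = ((p.history >> 1) |
                     (p.referenced << (HISTORY_BITS - 1)))
        p.referenced = 0

def replacement_victim(pages):
    """Pick the page with the smallest history value, i.e. the page whose
    references are fewest and oldest under this approximation."""
    return min(pages, key=lambda p: p.history)
```

a page referenced every interval accumulates a history near all-ones; one untouched for HISTORY_BITS intervals decays to zero and becomes the preferred replacement victim.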

random past posts mentioning saltzer (mostly nothing to do with this subject).
http://www.garlic.com/~lynn/2000e.html#0 What good and old text formatter are there ?
http://www.garlic.com/~lynn/2002p.html#13 Multics on emulated systems?
http://www.garlic.com/~lynn/2003.html#18 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2003o.html#32 who invented the "popup" ?
http://www.garlic.com/~lynn/2004g.html#46 PL/? History
http://www.garlic.com/~lynn/2004l.html#56 project athena & compare and swap
http://www.garlic.com/~lynn/2004l.html#73 Specifying all biz rules in relational data
http://www.garlic.com/~lynn/2005b.html#29 M.I.T. SNA study team
http://www.garlic.com/~lynn/2005u.html#42 Mainframe Applications and Records Keeping?
http://www.garlic.com/~lynn/2006e.html#31 MCTS
http://www.garlic.com/~lynn/2006h.html#46 blast from the past, tcp/ip, project athena and kerberos
http://www.garlic.com/~lynn/2006h.html#55 History of first use of all-computerized typesetting?

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 17:46:33 -0600
Bill Todd writes:
Lynn's exposition is not always easy (at least for me) to follow completely, but his expansion on this topic in other posts (see, for example, http://www.garlic.com/~lynn/2001c.html#10 and also the first reference therein) seems to suggest 1) that the grenoble implementation implemented only static (fixed-size) working sets and 2) that his implementation, while global in nature, *also* involved something resembling working sets (and something he called 'dynamic adaptive thrashing control').

If either of those was the case (or if the systems differed in other significant respects - since his tweaks seem to have tended to be fairly pervasive), then his observations about the relative performance of those two implementations really aren't applicable to a discussion of contemporary (more mature) working-set implementations.


previous incarnation of this thread 7aug2004
http://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?

the referenced acm article describes the grenoble implementation
J. Rodriguez-Rosell, The design, implementation, and evaluation of a working set dispatcher, cacm16, apr73

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 20:07:38 -0600
CBFalconer writes:
Something I built in the late 70s/early 80s was a variant on LRU. It basically took the used bit at intervals, and shifted it right into a history byte (actually 5 bits), while resetting the used bit. The result was something that gave decreasing weight to older history, and thus could resolve decisions made over the past 5 or six intervals.

The intervals were derived from inter-segment procedure call counting, which in turn were the only events that could use the LRU scheme. Nothing further was done unless the required segment was not present, when the scheme only selected segments to displace. This was only used for code segments, not data.


the other thing that was starting to happen by the second half of the 70s was that things were shifting from being real memory constrained to being much more disk arm constrained.

the working set and page replacement strategies of the 60s and early 70s were focused on real storage being frequently a highly constrained resource. however, by the second half of the 70s, disk arms were much more frequently the constrained resource ... although you could have second order effects involving paging operations putting heavy contention on the disk resource, possibly creating long queues and long service times. the page thrashing with huge system page wait of the 60s could be replaced with disk arm contention with huge system page waits ... but not particularly because of excessive over commitment of real storage.

this was the late 70s shift from attempting heavy optimization of real storage use ... to attempting to trade off real storage resources in an attempt to compensate for and mitigate the disk arm bottlenecks. some exploration of this subject in a previous post in this thread
http://www.garlic.com/~lynn/2006i.html#28 virtual memory

some of the references in the above post include a comparison between a typical 360/67 cp67 configuration supporting 80 some users comfortably (on the cambridge system) and, a decade plus later, a 3081 vm370 configuration supporting 300 some users with similar workload operation. the processor mips and the available "pageable pages" had increased on the order of fifty times while the number of supported concurrent users increased around four times. as shown in the repeated comparison, the increase in number of users was relatively consistent with the increase in disk arm performance (which had contributed to my comment that relative system disk thruput had declined by a factor of 10 times during the period). one of the old references mentioned in the post referenced above:
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
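the back-of-envelope arithmetic behind that comparison (using the approximate figures quoted above, not measurements):

```python
# Approximate figures from the post: cp67 on a 360/67 vs vm370 on a 3081
# a decade-plus later, running a similar workload mix.
cpu_and_memory_growth = 50   # processor mips and "pageable pages": ~50x
user_growth = 4              # concurrent users supported: ~4x

# if the number of users only grew with disk arm performance, relative
# system disk thruput declined by roughly the ratio of the growth factors.
relative_disk_decline = cpu_and_memory_growth / user_growth
print(relative_disk_decline)   # 12.5, i.e. the "factor of 10" in the post
```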

the issue became is it possible to develop strategies that better optimized the use of disk arms ... possibly at the sacrifice of either processor and/or real storage optimization.

one such strategy was "big pages" for paging in the early 80s for both vm370 and mvs. part of this was that 3380 disk arm thruput was maybe 3-4 times that of the earlier 2314 disk arm ... however the data transfer rate was ten times as much. "big pages" went to creating "on-the-fly" full-track clusters of 4k pages. the paging area on disk was laid out possibly ten to twenty times larger than actually needed and managed by a "moving cursor" algorithm ... somewhat similar to some of the log structured filesystem work that would appear a decade later (which was done for similar objectives, to try and drastically improve disk arm use efficiency).

for page-out ... a full track's worth of pages from an address space would be collected together (whether the pages had been changed or not during their current residency in storage) and written as one operation. later, when there was a page fault for any 4k page in a "big page" ... the whole track would be read into real storage. this profligate movement of pages might increase the total real storage required by an application by 20-30 percent (with a similar increase in number of bytes moved) ... but was traded off by reducing the disk arm resource for moving each page by 90 percent. this also could play fast and loose with the accuracy of tracking which virtual pages were actually the least recently used ... but the trade-off of drastically improving the efficiency of disk arm resource for moving pages showed a net win (as something that represented the primary bottleneck in the infrastructure).

with such a big increase in the amount of real storage ... and the shifting of major system bottleneck to disk arm ... there could be significant system thruput trade-offs that might result in much less efficient use of real storage if it might drastically improve the overall system thruput situation dealing with disk arm bottleneck.
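the "big page" clustering described above can be sketched as follows (illustrative only; the names and the ten-pages-per-track figure are assumptions, not the actual vm370/mvs implementation):

```python
# Sketch of the "big page" idea: collect a full track's worth of 4K pages
# for a single page-out (changed or not), and page the whole track back in
# on a fault for any member. One arm operation then moves a track's worth
# of pages instead of a single 4K page.

PAGES_PER_TRACK = 10  # assumed 4K pages per track, for illustration

def make_big_pages(resident_pages):
    """Cluster an address space's resident pages into track-sized groups
    for page-out as single operations."""
    return [resident_pages[i:i + PAGES_PER_TRACK]
            for i in range(0, len(resident_pages), PAGES_PER_TRACK)]

def page_fault(big_page, faulting_page):
    """On a fault for any member, the whole track is read back in:
    one arm motion brings in every page of the cluster."""
    assert faulting_page in big_page
    return list(big_page)

arm_ops_per_track_single = PAGES_PER_TRACK  # one arm use per 4K page
arm_ops_per_track_big = 1                   # one arm use per full track
```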

i had also done an analogous efficiency trade-off with the page mapped filesystem i had developed
http://www.garlic.com/~lynn/submain.html#mmap

improving the probability of contiguous allocation for many filesystem objects and being able to dynamically adjust how large a block transfer could be done when accessing the object (based on configuration, load) ... again trading off the efficiency of real-storage utilization for improved disk arm efficiency.

misc. past posts discussing the "big page" strategy from a couple decades ago.
http://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
http://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
http://www.garlic.com/~lynn/2002f.html#20 Blade architectures
http://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
http://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
http://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#12 Page Table - per OS/Process
http://www.garlic.com/~lynn/2003o.html#61 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2003o.html#62 1teraflops cell processor possible?
http://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
http://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
http://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#19 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#21 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#22 Code density and performance?

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 20:22:33 -0600
Anne & Lynn Wheeler writes:
one such strategy was "big pages" for paging in the early 80s for both vm370 and mvs. part of this was that 3380 disk arm thruput was maybe 3-4 times that of the earlier 2314 disk arm ... however the data transfer rate was ten times as much. "big pages" went to creating "on-the-fly" full-track clusters of 4k pages. the paging area on disk was laid out possibly ten to twenty times larger than actually needed and managed by a "moving cursor" algorithm ... somewhat similar to some of the log structured filesystem work that would appear a decade later (which was done for similar objectives, to try and drastically improve disk arm use efficiency).

re:
http://www.garlic.com/~lynn/2006j.html#2 virtual memory

note that one of the advantages that the "big pages" moving cursor had over a log structured filesystem ... was that filesystem data on disk was typically treated as persistent, so the filesystem periodically had to execute a garbage collection and coalescing process. paging areas were treated as much more ephemeral ... whenever a "big page" had a fault and was brought back into real storage, the disk location was discarded and became available (the nature of paging operation tended to provide the garbage collection and coalescing as a side-effect, while it was an explicit periodic overhead operation in the log-structured paradigm).
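a minimal model of that moving-cursor behavior (an illustration of the idea, not the actual code): page-out claims the next free slot at the cursor, and page-in immediately frees the slot, so space is reclaimed as a side-effect rather than by a separate garbage-collection pass.

```python
# Over-allocated paging area managed by a moving cursor. The cursor only
# ever advances (wrapping around); slots freed by page-in reappear behind
# it and are picked up on the next pass, with no explicit GC phase.

class PagingArea:
    def __init__(self, slots):
        self.free = [True] * slots   # area laid out larger than needed
        self.cursor = 0

    def page_out(self):
        """Advance the cursor to the next free slot and claim it."""
        n = len(self.free)
        for _ in range(n):
            s = self.cursor
            self.cursor = (self.cursor + 1) % n
            if self.free[s]:
                self.free[s] = False
                return s
        raise RuntimeError("paging area full")

    def page_in(self, slot):
        """Reading a big page back discards its disk copy: the slot is
        immediately free again, i.e. GC happens as a side-effect."""
        self.free[slot] = True
```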

for total topic drift ... during our ha/cmp product effort
http://www.garlic.com/~lynn/subtopic.html#hacmp

there was work started on a filesystem for geographic survivability (replicated at multiple physical sites); one of the consultants brought in had done a lot of the work on a unix log structured filesystem ... cited in one of the following posts mentioning log structured filesystems:
http://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
http://www.garlic.com/~lynn/93.html#29 Log Structured filesystems -- think twice
http://www.garlic.com/~lynn/2000.html#93 Predictions and reality: the I/O Bottleneck
http://www.garlic.com/~lynn/2000c.html#24 Hard disks, one year ago today
http://www.garlic.com/~lynn/2000g.html#38 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
http://www.garlic.com/~lynn/2001c.html#28 The Foolish Dozen or so in This News Group
http://www.garlic.com/~lynn/2001f.html#59 JFSes: are they really needed?
http://www.garlic.com/~lynn/2001f.html#60 JFSes: are they really needed?
http://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
http://www.garlic.com/~lynn/2002n.html#9 Asynch I/O
http://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
http://www.garlic.com/~lynn/2003k.html#0 VSPC
http://www.garlic.com/~lynn/2004g.html#22 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
http://www.garlic.com/~lynn/2005n.html#22 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#36 Code density and performance?

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 13 May 2006 21:01:50 -0600
Anne & Lynn Wheeler writes:
for page-out ... a full track's worth of pages from an address space would be collected together (whether the pages had been changed or not during their current residency in storage) and written as one operation. later, when there was a page fault for any 4k page in a "big page" ... the whole track would be read into real storage. this profligate movement of pages might increase the total real storage required by an application by 20-30 percent (with a similar increase in number of bytes moved) ... but was traded off by reducing the disk arm resource for moving each page by 90 percent. this also could play fast and loose with the accuracy of tracking which virtual pages were actually the least recently used ... but the trade-off of drastically improving the efficiency of disk arm resource for moving pages showed a net win (as something that represented the primary bottleneck in the infrastructure).

re:
http://www.garlic.com/~lynn/2006j.html#2 virtual memory

the replacement strategies and page thrashing concerns of the 60s and early 70s were much more oriented towards having only just the right pages in real memory.

by the late 70s, with the increase in many real storage size configurations ... real storage contention because there were too many concurrent tasks (and resulting page thrashing) was much less of an issue ... the small disk arm spigot resource was becoming so (relatively) limited that any major movement of pages to/from disk could represent paging bottlenecks.

the "big page" strategy could get away with fetching as much as 70-80 percent wrong pages ... resulting in an equal amount (70-80 percent) of real storage being occupied by the "wrong" pages ... it could still come out a winner if it could cut in half the number of disk arm uses getting the right pages into real storage.

the corollary is that there were drastically fewer situations where any judgement about the aggregate, concurrent working set sizes (of active tasks) would force a scheduling reduction in the number of concurrent tasks that were allowed to simultaneously compete for real storage.
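rough arithmetic behind that claim, using the illustrative percentages from the post (assumed figures, not measurements):

```python
def arm_ops_needed(right_pages, pages_per_op, hit_fraction):
    """Disk arm operations needed to bring right_pages useful pages into
    real storage, when each operation moves pages_per_op pages of which
    hit_fraction are actually wanted."""
    return right_pages / (pages_per_op * hit_fraction)

# 4K-at-a-time paging: every fetched page is wanted, one arm use each.
single = arm_ops_needed(100, 1, 1.0)    # 100 arm uses

# full-track "big pages" (assumed ten 4K pages per track) where 75
# percent of the fetched pages turn out to be "wrong".
big = arm_ops_needed(100, 10, 0.25)     # 40 arm uses
```

even at 75 percent wrong pages, the big-page fetch uses well under half the arm operations, at the cost of the wrong pages occupying real storage.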

past incarnation of this thread:
http://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 14 May 2006 08:35:29 -0600
jmfbahciv writes:
Then a subset of those categories would have to be code and data, with the rare exception where code is data (writable code segment which god never meant happen).

I suppose there would also have to be special handling of data pages that were suddenly changed to code.

Comments on the discussion:

1. An OS did not need VM to do relocation. Example: KA10.

2. Do not confuse paging hardware with virtual memory. They are different. The reason this confusion happens is because both were usually done during the same major version OS release. If your new CPU has paging hardware, you might as well schedule your VM project at the same time. You might as well subject the customer to both pains all at the same time. It was like pulling a tooth: yank it and get it over with or tweak it and have years of long drawn annoying pain in the nethers.


i've described two instances where there was special case processing ... and both instances resulted in non-optimal implementations ...

• one was the initial morph from cp67 to vm370 where they had actual lists of pages for the scanning/reset/replacement selection and "shared" pages were treated specially ... not based on reference bits

• the other was the initial morph from mvt to os/vs2 where they would bias the replacement selection for non-changed pages before changed pages

post including description of the above two scenarios
http://www.garlic.com/~lynn/2006i.html#43 virtual memory

i had arguments with both groups over the details and they went ahead and did it anyway (in both cases it got corrected much later ... the os/vs2 correction they had to do themselves; the vm370 shared pages i was able to correct in the release of the resource manager).

i had also done a paging, non-virtual memory thing originally as an undergraduate in the 60s for cp67 ... but it was never picked up in the product until the morph of cp67 to vm370, where it was used. the issue was that the kernel code ran in real mode, w/o hardware translate turned on. all its addressing was based on real addresses. when dealing with addresses from a virtual address space ... it used the LRA (load real address) instruction that translated from virtual to real.

the issue was that the real kernel size was continuing to grow as more and more features were being added. this was starting to impact the number of pages left over for virtual address space paging. recent posts in this thread making mention of the measure of "pageable pages" (after fixed kernel requirements):
http://www.garlic.com/~lynn/2006i.html#36 virtual memory
http://www.garlic.com/~lynn/2006j.html#2 virtual memory

so i created a dummy set of tables for the "kernel" address space ... and partitioned some low-usage kernel routines (various kinds of commands, etc) into real, 4k "chunks". i positioned all these real chunks at the high end of the kernel. when there were calls to addresses above the "pageable" line ... the call processing, instead of directly transferring, would run the address thru the dummy table to see if the chunk was actually resident. if it was resident, then the call would be made to the translated address location ... running w/o virtual memory turned on. if the 4k chunk was indicated as not resident, the paging routine was called to bring it into storage before transferring to the "real" address. during the call, the page fix/lock count (the same mechanism that CCWTRANS used when performing virtual i/o) was used to prevent the page from being selected for removal from real storage; the count was decremented at the return. otherwise these "pageable" kernel pages could be selected for replacement just like any other page. some recent mentions of CCWTRANS
http://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
http://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
http://www.garlic.com/~lynn/2006i.html#33 virtual memory
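the pageable-kernel call path described above might be sketched like this (a model of the mechanism only; the boundary address, table layout, and names are assumptions, not the cp67/vm370 code):

```python
# Calls below the "pageable line" go directly to fixed kernel code; calls
# above it run through a dummy table of 4K chunks, paging a chunk in if
# needed and holding a lock count so the page can't be replaced mid-call.

PAGEABLE_LINE = 0x40000  # assumed boundary address, for illustration

class DummyEntry:
    def __init__(self):
        self.resident = False
        self.real_addr = None
        self.lock_count = 0

def kernel_call(table, vaddr, page_in, invoke):
    """Dispatch a kernel call, paging the target 4K chunk in if needed."""
    if vaddr < PAGEABLE_LINE:
        return invoke(vaddr)              # fixed kernel: direct call
    entry = table[vaddr & ~0xFFF]         # 4K-aligned chunk lookup
    if not entry.resident:
        entry.real_addr = page_in(vaddr)  # bring the chunk into storage
        entry.resident = True
    entry.lock_count += 1                 # pin while the call is running
    try:
        return invoke(entry.real_addr + (vaddr & 0xFFF))
    finally:
        entry.lock_count -= 1             # unpinned: replaceable again
```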

this feature never shipped as part of the cp67 kernel, but was picked up as part of the initial morph of cp67 to vm370.

later, for the resource manager, i also created a (2nd) small dummy set of tables for every virtual address space that was used for administrative writing of tables to disk. tables were collected into page-aligned 4k real chunks and the page I/O infrastructure was used for moving the tables to/from disk (in a manner similar to how i had done the original pageable kernel implementation). previous description of "paging" SWAPTABLEs:
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370

in the initial morph of cp67->vm370, some of cms was re-orged to take advantage of the 370 shared segment protection. however, before 370 virtual memory was announced and shipped, the feature was dropped from the product line (because the engineers doing the hardware retrofit of virtual memory to the 370/165 said that shared segment protect and a couple other features would cause an extra six month delay). a few past posts mentioning the virtual memory retrofit to the 370/165:
http://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
http://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits: what does it really mean?
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370

as a result, the shared page protection had to be redone as a hack utilizing the storage protect keys that had been carried over from 360. this required behind the scenes fiddling of the virtual machine architecture ... which prevented running cms with the virtual machine assist microcode activated (hardware directly implemented virtual machine execution of privileged instructions). later, in order to run cms virtual machines with the VMA microcode assist, protection was turned off. instead, a scan of all shared pages was substituted that occurred on every task switch. an application running in a virtual address space could modify shared pages ... but the effect would be caught and discarded before the task switch occurred (so any modification wouldn't be apparent in other address spaces). this sort of worked running single processor configurations ... but got much worse in multi-processor configurations; now you had to have a unique set of shared pages specific to each real processor. past posts mentioning the changed protection hack for cms
http://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits: what does it really mean?
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
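the task-switch scan described above amounts to something like this (an illustrative model, not the actual vm370 code):

```python
# With key protection turned off, the outgoing task may have scribbled on
# shared pages. A scan at every task switch compares each shared page
# against a pristine copy and restores any modification, so changes never
# become visible to other address spaces.

def task_switch_scan(shared_pages, pristine):
    """Restore any shared page the outgoing task modified; return how
    many pages had to be restored (the per-switch overhead)."""
    restored = 0
    for i, page in enumerate(shared_pages):
        if page != pristine[i]:
            shared_pages[i] = pristine[i]   # discard the modification
            restored += 1
    return restored
```

the cost is paid on every task switch whether anything was modified or not, which is part of why the approach scaled poorly.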

past posts mention pageable kernel work:
http://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
http://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2002p.html#64 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
http://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
http://www.garlic.com/~lynn/2004b.html#26 determining memory size
http://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
http://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2006.html#35 Charging Time
http://www.garlic.com/~lynn/2006.html#40 All Good Things

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Sun, 14 May 2006 12:17:16 -0600
jmfbahciv writes:
And then airlines got busy. Was there anything academic that had a similar effect?

for a little drift, sometime after we took the early out in 92, one of the major airline res systems was talking to us about the ten impossible things that they couldn't do.

i went away and a couple months later came back with an implementation that solved all ten impossible things. that was when they started wringing their hands ... and eventually let slip that they actually didn't want us to solve the problems ... they just wanted to be able to tell the board that we were consulting on the problems for the next decade.

the issue was that many of the impossible things were a result of there being a large staff manually performing many of the steps. if all the manual steps were automated, then much of the difficulty went away and things got much simpler. some of this had all been laid down based on the technology of the 60s and never really revisited (technology may have been looked at over the years for some straightforward, linear scaling but never from the standpoint of enabling fundamental paradigm change).

to some extent, the status of the executive in-charge was related to the size of the organization. if the size of the organization was cut by three orders of magnitude, then the executive's status was also impacted.

various past posts mentioning airline res systems (only some referring to this particular incident):
http://www.garlic.com/~lynn/99.html#17 Old Computers
http://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
http://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
http://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
http://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#26 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
http://www.garlic.com/~lynn/2001n.html#0 TSS/360
http://www.garlic.com/~lynn/2001n.html#3 News IBM loses supercomputer crown
http://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
http://www.garlic.com/~lynn/2002i.html#83 HONE
http://www.garlic.com/~lynn/2002j.html#83 Summary: Robots of Doom
http://www.garlic.com/~lynn/2002m.html#67 Tweaking old computers?
http://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
http://www.garlic.com/~lynn/2003c.html#30 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003d.html#67 unix
http://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
http://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004f.html#58 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#14 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004l.html#6 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
http://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004p.html#26 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns
http://www.garlic.com/~lynn/2005.html#22 The Soul of Barb's New Machine (was Re: creat)
http://www.garlic.com/~lynn/2005.html#41 something like a CTC on a PC
http://www.garlic.com/~lynn/2005c.html#67 intel's Vanderpool and virtualization in general
http://www.garlic.com/~lynn/2005e.html#47 Using the Cache to Change the Width of Memory
http://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005n.html#44 What was new&important in computer architecture 10 years ago ?
http://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
http://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2006d.html#5 IBM 610 workstation computer

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 14 May 2006 13:04:45 -0600
Anne & Lynn Wheeler writes:
the other thing that was starting to happen by the second half of the 70s was that systems were starting to shift from being real memory constrained to being much more disk arm constrained.

another characteristic of the real memory resource vis-a-vis disk resource trade-off was the original os/360 design using CKD DASD.

the disk technology was referred to as count-key-data ... it was possible to format disk space with specified record sizes and tag each record with various flags and attributes (not the fixed record size and sequentially numbered records that you find in much of today's much more familiar disk technology). os/360 made a decision to conserve real storage requirements by leaving the disk layout information resident on disk. the "VTOC" (volume table of contents, somewhat akin to a MFD/master file directory) had the individual records labeled. When there was a request to open some file, an I/O request was built by the system to "search" the VTOC for the required information.

a similar design trade-off was created for file libraries that had a special file format called PDS (partitioned data set) that included an on-disk directory of all the members in the file. when there was a request to retrieve a specific library member, there was an I/O request built by the system to do a ("multi-track") search of the PDS directory for the required information.

these two design trade-offs required exorbitant amounts of disk I/O resources ... however, the savings in scarce real memory resource was deemed to be a reasonable trade-off.

roll-forward to the mid-70s ... when the amount of available real memory was starting to dramatically increase ... and the improvement in disk thruput was dramatically falling behind the thruput improvement of other system components. as a result, the major system bottleneck was shifting from real storage to disk activity.

a widely deployed disk drive was the 3330 (lots of other vendors produced clones of the 3330 that made their way into a variety of systems). it had 20 surfaces and 20 heads per (cylinder) arm position ... only 19 of the heads were addressable (the 20th head was used for something called rotational position sensing). the device spun at 3600 rpm ... or 60 revolutions per second. a multi-track search of a multi-cylinder PDS directory took approx. 1/3 of a second elapsed time ... per cylinder (arm position). the avg. elapsed time to find a member in a large program library could be 1/2 second or more.
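the 1/3-second figure can be sanity-checked with a little arithmetic (a sketch; it assumes one full revolution per track searched, and borrows the "couple tens of milliseconds" member-read figure from later in the post):

```python
# Sanity check of the 3330 numbers: 3600 rpm, 20 heads per cylinder with
# 19 addressable (the 20th used for rotational position sensing), and a
# multi-track search that spends roughly one revolution per track.

RPM = 3600
REVS_PER_SEC = RPM / 60              # 60 revolutions per second
TRACKS_PER_CYL = 19

search_per_cylinder = TRACKS_PER_CYL / REVS_PER_SEC   # approx 0.317 sec

# assumed ~20 ms to then read the member itself ("couple tens of
# milliseconds" in the text)
member_load = search_per_cylinder + 0.020             # approx 0.337 sec
loads_per_second = 1 / member_load                    # approx 3 per second

print(f"search per cylinder: {search_per_cylinder * 1000:.0f} ms")
print(f"member load:         {member_load * 1000:.0f} ms")
print(f"loads per second:    {loads_per_second:.1f}")
```

which matches the "three application library programs per second" observed at the retailer described below.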

in the late 70s, a very large national retailer was starting to experience severe performance degradation of its in-store, online applications. the retailer had its stores organized into regions ... and the datacenter had a dedicated processor complex per region. however, all the processor complexes had shared access to a large common pool of disks. there was a single large application library for the online, in-store operations that was shared across all the dedicated regional processor complexes.

over a period of several months, numerous people had been brought in attempting to diagnose the problem. finally i was brought into a classroom that had a dozen or so long class tables. all the tables were nearly covered with high stacks of paper performance reports from all the operating systems.

people were expected to scan the performance reports looking for resource activity that corresponded with periods of extremely poor performance. the problem was that most people were accustomed to looking at extremely high activity counts as an indication of bottlenecked resources. all the periodic disk i/o counts failed to indicate any specific disk (in the large pool of drives) with high i/o count activity.

one problem was that the performance reports gave interval i/o activity counts by disk for a specific processor complex. to get the total i/o activity for a disk ... you had to manually sum the i/o count activity across all the processor complexes. the other problem was that there were no actual service time numbers or queuing delay numbers ... just raw activity counts.

I started quickly scanning the piles of paper and after maybe 20-30 minutes started to realize that a specific disk drive had an aggregate activity count consistently of approx. 6-7 operations per second during the periods of bad performance and thruput. it wasn't normally considered a high i/o rate ... but it started to appear to be the only value that consistently correlated with periods of poor performance. all the other values reported in the reams of paper performance data appeared to vary pretty much randomly all over the place.

zero'ing in on that particular disk ... closer examination showed that it was typically doing a full-cylinder multi-track search of the PDS directory (to find a library member to load) taking nearly 1/3rd second elapsed time ... followed by reading the library member (taking a couple tens of milliseconds). basically the whole national datacenter complex was reduced to loading an aggregate of three application library programs per second.

all the system configurations easily had sufficient real memory for caching the PDS directory of library members ... and the in-store application load would be reduced to possibly 10-15 milliseconds rather than 340 milliseconds. however, the system implementation structure had been cast in concrete since the mid-60s ... so they had to find other compensating procedures to speed up in-store application load.

furthermore, within a few years, typical system configurations would not only be able to cache the directory of members but also all the members themselves.

misc. past posts about the 60s ckd-disk/storage performance trade-off and its lingering effects long after it was no longer valid
http://www.garlic.com/~lynn/submain.html#dasd

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

ALternatives to EMail

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ALternatives to EMail
Newsgroups: alt.folklore.computers
Date: Sun, 14 May 2006 13:24:44 -0600
"Sam Alexander" writes:
Just curious ...

Though the protocols used in EMail are still very reliable, with so much spyware and so many filters, for me anyway email has started becoming less and less reliable. I was wondering what other email applications of the past you'd choose to use if modern day Internet Email was to be gone tomorrow. Would you go back to Fidonet netmail? What about telex or other such messages? Just curious.


there were a large variety of value added networks (VANs) that sprang up ... some as early as the 70s ... but lots in the 90s ... that had all sorts of proprietary formats for conveying information. the existence of some of these VANs contributed to the EDI standardization efforts.

the proliferation of the internet pretty much turned the VANs of the 70s and 80s into dinosaurs.

however, there are issues with social engineering and hostile intent when there is relatively free any-to-any connectivity across the whole world.

note that the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than the internet
http://www.garlic.com/~lynn/internet.htm
http://www.garlic.com/~lynn/subnetwork.html#internet

from just about the beginning until sometime mid-85.

there were also bitnet and earn, which were larger than the "internet" (more nodes) during a period in the early 80s. note that while bitnet/earn used similar technology to that used in the internal network ... they were considered totally separate networks (the counts of nodes in the two networks were totally different).
http://www.garlic.com/~lynn/subnetwork.html#bitnet

the internal network also evolved its own information distribution ala usenet ... a portion of that appeared in bitnet/earn as "listserv" and somewhat exists today in the usenet bit.listserv.* group hierarchy.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Value of an old IBM PS/2 CL57 SX Laptop

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Value of an old IBM PS/2 CL57 SX Laptop
Newsgroups: alt.folklore.computers
Date: Sun, 14 May 2006 13:53:23 -0600
greymaus writes:
Besides that, PayPal seems useful, but I got in a humour over my first purchase from eBay, didn't send back the confirming number, and PayPal wants me to try again under a different CC number, which, as CC's are subject to a yearly tax here (Ireland), is a bit much. For a while there, a lot of Internet commerce sites were moving to PayPal, but that process seems to have stopped.

some more drift along this topic

3 of the big 4 - all doing payment systems
https://www.financialcryptography.com/mt/archives/000715.html

and various comments along the way:
http://www.garlic.com/~lynn/aadsm23.htm#35 3 of the big 4 - all doing payment systems
http://www.garlic.com/~lynn/aadsm23.htm#36 3 of the big 4 - all doing payment systems
http://www.garlic.com/~lynn/aadsm23.htm#37 3 of the big 4 - all doing payment systems

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

The Chant of the Trolloc Hordes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Chant of the Trolloc Hordes
Newsgroups: alt.folklore.computers
Date: Sun, 14 May 2006 14:04:19 -0600
scott@slp53.sl.home (Scott Lurndal) writes:
A few megabyte database fits comfortably in memory. Sleepycat?

trivia question ... what is the connection between sleepycat and the following mention of log structured file system and geographic survivability:
http://www.garlic.com/~lynn/2006j.html#3 virtual memory

work in ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

The Pankian Metaphor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Mon, 15 May 2006 06:44:49 -0600
Charles Shannon Hendrix writes:
That's true, but you can help this a bunch by doing things like reducing or removing the cost of swapping pages out.

For example, don't mark swap pages free when bringing pages into RAM. That way if they need to be swapped back out, you can do it nearly for free and without paging I/O.

A set of patches to the Linux 2.6.x kernels does this, among other things, and it helps a lot for both interactive and server loads.


this is the subject of a recent "dup"/"no-dup" (duplicate/no-duplicate) post. retaining the previous position can save a page-write when replacing pages that haven't been changed.
http://www.garlic.com/~lynn/2006i.html#41 virtual memory

however, "no-duplicate" gives up that saved position ... which can optimize use of available space ... especially in a hierarchy of cache/swap ... references to a mix of fixed-head & non-fixed-head paging devices (2305/3330) or caching controllers (3880-11/ironwood) are mentioned in the above.

however, the aspect of saving a page write can be taken to the extreme ... as in this post that includes a description of the original os/vs2 implementation, which specifically biased the replacement strategy toward non-changed pages (saving a page write)
http://www.garlic.com/~lynn/2006i.html#43 virtual memory

note, "big pages" went to writing everything ... not for conserving space allocation ... but to have pages likely to be used together contiguously located on disk ... so later retrieval could be made in a single i/o operation. this happened with the realization that the primary system bottleneck was shifting away from being real storage constrained to being disk arm access constrained ... and it was possible to trade off optimization of real storage utilization against disk arm access optimization.
http://www.garlic.com/~lynn/2006j.html#2 virtual memory

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

History of first use of all-computerized typesetting?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of first use of all-computerized typesetting?
Newsgroups: alt.folklore.computers
Date: Mon, 15 May 2006 07:37:29 -0600
Charles Shannon Hendrix writes:
Not just that, but the phenomenon where people believe anything that is in print, and it often doesn't even matter who it came from.

It used to be that way for news over the radio, but I don't know if it is still that way now or not. I.E. "War of the Worlds!" broadcast.

At various workplaces I have noticed that talking in person or even sending email is often ignored.

But print it up as a memo, especially on company letterhead, and the story is different. They'll believe almost anything if you "make it official".

I told a manager this in one shop where I worked and he didn't believe me. "No one is that stupid", he told me.


story about password guidelines being printed on corporate letterhead and placed in corporate bulletin board
http://www.garlic.com/~lynn/99.html#52 Enter fonts
http://www.garlic.com/~lynn/2001d.html#51 OT Re: A beautiful morning in AFM.
http://www.garlic.com/~lynn/2001d.html#52 OT Re: A beautiful morning in AFM.
http://www.garlic.com/~lynn/2001d.html#53 April Fools Day

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 07:44:19 -0600
robert.thorpe writes:
Yes. My point was not that others weren't mentioned so much as they were mentioned only as bylines.

Posting a deluge of information about IBM work may give the impression to the casual reader that IBM was initially innovative in that area. When in fact, despite being the largest, they were only the third computer company to implement virtual memory. They became innovative in it AFAIK once they started doing it, fairly much straight away, but that was years after the first examples had been made.


no, mostly i was posting stuff primarily about my work and products I shipped ... as an undergraduate
http://www.garlic.com/~lynn/subtopic.html#wsclock
and then as an employee at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

i wasn't prepared to write a lot about work that other people had done. it wasn't that i was particularly trying to bias against work that other people had done ... but i wasn't as qualified to write about other peoples' work as i was to write about the work that i had done.

i was mostly including bylines about other people's work as an indication that i wasn't the only person that had done work in the area.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 08:36:03 -0600
Jan Vorbrüggen writes:
Yes, that was the philosophy behind the VMS working set design: They called it "paging against the process" vs "paging against the system" for the global approach.

The first implementation only had quotas that were also limits. This fit the initial hardware - our first VAX-11/780 had 256 _k_Byte of system memory - but did not fit systems even a few years later. There, one had the situation that one wanted to have low quota - to make sure that at 11 a.m. with the maximum number of users logged in, the system didn't bog down with paging - but allow a user at other times to use much more real memory when contention was low. This was solved by introducing limits that could be much higher. In essence, the limit was the max amount of real memory your process could get, and the quota was the min amount.

The restriction that all working set lists had to be the same, boot-time size that used up non-virtual memory was a VAX hardware feature. The Alpha with its later VM incarnation does not have this feature.


I never had that ... dating back to my original implementations in the 60s.

working set size was a mechanism for limiting over commit of real storage and the resulting page thrashing ... the pathological case was that you needed a page that wasn't in real storage, had a page fault, and waited while the page was brought in; by the time that page was ready ... other pages that you required had been stolen. the worst case was that no progress got made at all (since all tasks were constantly stealing pages from each other).

working set size was a mechanism for managing/limiting concurrent competition for available real storage.

global LRU replacement was a mechanism for making sure that the optimal set of all pages was resident and ready for use in real storage.

as real storage got more plentiful ... starting by at least the later part of the 70s ... there were other techniques used to manage the amount of paging. one strategy that had been developed in the late 70s ... was tracking per task pagewait time (i.e. the bottleneck had shifted from constrained real storage to constrained disk i/o ... in the 60s there was an inter-relationship between constrained real storage and the amount of paging involving disk i/o; however, moving into the late 70s ... requests could start queueing against disk, even with relatively little real storage contention).

part of the issue was the characteristic somewhat written up as "strong" and "weak" working sets ... i.e. the set of actual pages required was strongly consistent over a period ... versus somewhat changing. the set "size" over time could be the same in both ... but once a "strong" working set had been established, the task would run with no additional paging operations. with a weak working set, once the initial set had been established ... there would continue to be paging over time (the size of the set didn't change, but the members of the set changed over time).

to try and handle this situation ... there were implementations that modified the global LRU replacement strategy based on task progress (or its inverse, amount of pagewait); a task making little progress (high pagewait) ... would have some percentage of the pages selected from it by global LRU ... skipped. i.e. global LRU would select pages from the task, but because the task was making less progress than its target, some percentage of its selected pages would be retained ... global LRU would then skip past the selected page and search for another. the percentage of pages initially selected by the global LRU replacement strategy ... but then skipped ... was related to how much progress the task was making vis-a-vis the objective. this tended to lower the pagewait time for tasks with somewhat weak working sets that were getting behind schedule.

the issue here was that available real storage tended to be much larger than the aggregate of the concurrent working set sizes ... so using aggregate working set size to limit the number of contending tasks as a mechanism for limiting paging was ineffective. global LRU still provided that the optimal set of global pages was resident. however, tasks with strong working sets might have an advantage over tasks with weaker working sets. as a result, a mechanism was provided to reduce the rate of pages stolen from those tasks (typically tasks with weaker working sets) that were getting behind schedule in their targeted resource consumption (because of time spent in pagewait).

for the other scenario ... tasks with extremely weak working sets ... where even all available real storage provided them little or no additional benefit ... there was a strategy for "self-stealing" ... aka a form of local LRU ... but that was only for limiting the effects of extremely pathological behavior ... it wasn't used for tasks operating with normal operational characteristics.
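a minimal sketch of the progress-biased selection described above (all names, and the probabilistic skip, are illustrative assumptions rather than any actual implementation):

```python
import random

# Illustrative sketch: a global clock scan where a page whose reference bit
# is set gets a second chance, and a page owned by a task behind its
# resource-consumption target (high pagewait) is retained with a probability
# proportional to how far behind it is. A real implementation would bound
# the retained fraction so the scan always terminates.

class Task:
    def __init__(self, name, behind=0.0):
        self.name = name
        self.behind = behind          # 0.0 = on target, 1.0 = far behind

class Page:
    def __init__(self, owner):
        self.owner = owner
        self.referenced = False

def select_victim(pages, hand):
    """Cyclic scan; returns (victim page, new hand position)."""
    while True:
        page = pages[hand]
        hand = (hand + 1) % len(pages)
        if page.referenced:
            page.referenced = False   # used since last look: reset and skip
        elif random.random() < page.owner.behind:
            continue                  # retain: owner is behind schedule
        else:
            return page, hand
```

a task that is far behind schedule (high pagewait) thereby keeps more of its pages resident, lowering its pagewait without abandoning the global scan.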

something similar was found in the detailed analysis of disk record caching designs.
http://www.garlic.com/~lynn/2006i.html#41 virtual memory

given detailed tracing of all disk record access across a wide range of different live operational environments ... for a given fixed amount of total electronic memory ... and all other things being equal ... using the memory for a single global system cache provided better thruput than dividing/partitioning the memory into sub-components.

this was the case of instead of using all the fixed amount of electronic memory for a single global system cache, the available fixed amount of electronic memory would be divided up and used in either controller-level caches (as in the reference to 3880-11 and 3880-13 controller caches) and/or disk-level caches.

there were two caveats: some amount of electronic store was useful at the disk head level for staging a full track of data as rotational delay mitigation, and you needed a strategy to minimize caching of purely sequential i/o ... where caching provided no benefit; i.e. purely sequential i/o basically violates the basic assumption behind least recently used replacement strategies ... that records/pages used recently will be used in the near future. for that behavior, a "most recently used" replacement strategy would be more ideal ... which can also be expressed as "self-stealing" ... a new request replaces the previous request by the same task (trying to avoid a purely sequential i/o transfer wiping out all other existing cache entries).

and in its analogy back to paging as a form of cache and its relationship to least recently used replacement strategy ... past a certain point, extremely weak working sets ... can appear to be very little different from sequential transfer.
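the sequential caveat can be illustrated with a toy record cache (hypothetical names; plain global LRU except that a request recognized as the continuation of a sequential run replaces that stream's own previous record, i.e. "self-stealing"):

```python
from collections import OrderedDict

# Toy sketch: global LRU record cache with sequential-run detection.
# A record that directly follows the same stream's previous record is
# treated as sequential; on a miss it evicts that stream's own prior
# record rather than the global LRU entry, so a long sequential sweep
# cannot flush the rest of the cache.

class RecordCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()    # record id -> data, LRU order
        self.last_seen = {}           # stream id -> last record id

    def access(self, stream, rec):
        prev = self.last_seen.get(stream)
        sequential = prev is not None and rec == prev + 1
        self.last_seen[stream] = rec
        if rec in self.cache:
            self.cache.move_to_end(rec)        # hit: make most recent
            return True
        if sequential and prev in self.cache:
            del self.cache[prev]               # self-steal own prior record
        elif len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)     # normal global LRU eviction
        self.cache[rec] = "data"
        return False
```

with this policy a sequential scan occupies a single cache slot, leaving the rest of the cache for re-referenced records.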

posts in this thread:
http://www.garlic.com/~lynn/2006i.html#22 virtual memory
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#28 virtual memory
http://www.garlic.com/~lynn/2006i.html#30 virtual memory
http://www.garlic.com/~lynn/2006i.html#31 virtual memory
http://www.garlic.com/~lynn/2006i.html#32 virtual memory
http://www.garlic.com/~lynn/2006i.html#33 virtual memory
http://www.garlic.com/~lynn/2006i.html#36 virtual memory
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006i.html#38 virtual memory
http://www.garlic.com/~lynn/2006i.html#39 virtual memory
http://www.garlic.com/~lynn/2006i.html#40 virtual memory
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006i.html#42 virtual memory
http://www.garlic.com/~lynn/2006i.html#43 virtual memory
http://www.garlic.com/~lynn/2006j.html#0 virtual memory
http://www.garlic.com/~lynn/2006j.html#1 virtual memory
http://www.garlic.com/~lynn/2006j.html#2 virtual memory
http://www.garlic.com/~lynn/2006j.html#3 virtual memory
http://www.garlic.com/~lynn/2006j.html#4 virtual memory
http://www.garlic.com/~lynn/2006j.html#5 virtual memory
http://www.garlic.com/~lynn/2006j.html#12 virtual memory

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

30 hop limit

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 30 hop limit
Newsgroups: comp.protocols.tcp-ip
Date: Mon, 15 May 2006 09:46:13 -0600
"Albert Manfredi" writes:
So we are saying the same thing. Routing in IP can change at the drop of a hat, but that's a far cry from any implication that the routes are random. They are calculated and they are stable, unless something in the network causes them to change.

Also strayed off topic, but at least it's an interesting topic.


when we were working on the original payment gateway
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

we were looking at doing high availability configuration, in part because we had earlier done the ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

for some trivia drift ... some of the people in this ha/cmp meeting
http://www.garlic.com/~lynn/95.html#13

later showed up at a small client/server startup responsible for turning out a payment gateway. this small client/server startup also had this technology called SSL.

in any case, during the process of initial deployment ... we were looking at advertising routes for ip-addresses as a countermeasure to various kinds of route outages. however, in that period, the internet backbone decided to move to hierarchical routing. we had assumed that we could get diverse physical routing to some number of different, carefully selected ISPs. If any of the ISPs were having connectivity problems (like taking down critical components on sunday afternoon prime-time for maintenance) we could advertise routes via other paths.

the transition to hierarchical routing eliminated attention being paid to alternate advertised routes. that primarily left multiple a-records as an alternate path mechanism (i.e. the same domain/host name mapping to a list of different ip-addresses).

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Resolver Issue (was Revolver Issue)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Resolver Issue (was Revolver Issue)
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 15 May 2006 11:29:52 -0600
eric-phmining@ibm-main.lst (Eric N. Bielefeld) writes:
I used to look at the list online. The biggest problem I had is remembering where you were. If I forgot to write down the post number, it was hard to find just where I left off. Also, the only quick way to navigate from post to post was to click on the next button. If I had to do something else and I was half way through the posts overnight, it was hard to figure out where I left off. Our internet connection at work made you log back on if you didn't use it for 20 minutes or so.

i've been using various flavors of gnus/emacs usenet newsreaders for going on close to 20 years. it has always kept track of all that stuff.

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 14:50:28 -0600
"Eric P." writes:
As I understand it, one key issue is what happens after the page falls out of the working set.

From what I have read, the basic Denning working set model circa 1968 had each page make one trip through a fifo list. This was later enhanced to use a reference bit and make two or more trips if the page was referenced (due to performance analysis by Belady and others).

In the Denning model, when a page was pushed out of the working set, if it was modified it was written back to file. In any case after that the frame went onto the free list and *** note *** all connection between it and its process was lost even though its contents were still valid and even if there were lots of free pages. If the page was touched again it was brought in from disk.

This meant there was a high cost to evicting a working set page so it would be avoided by having larger working sets, yada, yada, yada.

Unlike Denning, VMS retains a link to the evicted page so that if it is touched again before the frame is assigned to someone else, that it can be pulled back into the working set quickly. (They just reset the PTE valid bit but leave the frame number valid. If the owner touches it again, it just flips the PTE to valid. If the page is grabbed for someone else, then the PFN gets zapped.)

This allows VMS to toss pages out of its working sets at low cost, which allows them to roll over working sets more often. In theory this mitigates the bad effects of the Denning fifo model.

So the question I was curious about is to what extent Grenoble was like Denning vs. like VMS.


note that was what i did in the late 60s with global LRU ... and it was retained in the grenoble local LRU implementation ... the reclaiming of pages back into the working set was like what i had done as an undergraduate in the 60s and shipped in cp67 (prior to grenoble doing their local LRU stuff ... which leveraged the already existing graceful reclaim).

i ran into something like that in the very late 70s. neither tss/360 (which had been implemented before i had done any work on cp67) nor mvs gracefully reclaimed pages that had been selected as having moved out of the working set.

the flavor of global LRU and thrashing controls that i had done for cp67 as an undergraduate in the 60s ... had very graceful reclaim of any virtual page still in real storage.

i had an argument with the initial os/vs2 implementation over 1) selecting non-changed pages before changed pages, as abnormally perverting the fundamental principles of least recently used replacement strategy ... previous discussion of the subject
http://www.garlic.com/~lynn/2006i.html#43 virtual memory
http://www.garlic.com/~lynn/2006j.html#5 virtual memory
http://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor

and 2) "graceful reclaim" (i.e. recover virtual page if it was still resident in real storage).

Well into the os/vs2 mvs release cycles in the late 70s ... they discovered that #1 was selecting high-use, shared (across multiple address spaces) executables before low-use, private (changed) data pages.

In the very late 70s, I got a call from somebody in POK; he had just gotten a big corporate award for adding graceful reclaim to mvs and was wondering if he could do something similar for vm370.

I commented that I had always done it that way, since as an undergraduate more than a decade earlier in the 60s .... and could never figure out why anybody would have ever not done graceful reclaim; aka no, he was not able to get another big corporate award for doing something similar for vm370 ... and furthermore, from my standpoint there should never have been a large corporate award for adding graceful reclaim; instead the people that had done the original implementation, which didn't include graceful reclaim, should have been severely penalized.
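what "graceful reclaim" amounts to can be sketched in a few lines (invented names; modeled on the PTE description in the quoted text — the stolen page's PTE is invalidated but keeps its frame number, so a later fault can revalidate it without any paging I/O, provided the frame hasn't been reassigned):

```python
# Sketch of graceful reclaim. Names are illustrative, not from any system.

class PTE:
    def __init__(self):
        self.valid = False
        self.frame = None             # frame reference retained after a steal

class Frame:
    def __init__(self, number):
        self.number = number
        self.owner = None             # PTE whose page currently occupies it

def assign(frame, pte):
    if frame.owner is not None:
        frame.owner.frame = None      # sever the previous PTE's claim
    frame.owner = pte
    pte.frame = frame
    pte.valid = True

def steal(pte):
    pte.valid = False                 # page leaves the working set ...
    # ... but pte.frame deliberately still points at the frame

def fault(pte):
    if pte.frame is not None:         # frame still holds this page
        pte.valid = True
        return "reclaimed"            # graceful reclaim: no disk I/O
    return "page-in"                  # frame was reassigned: read from disk
```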

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 15:05:41 -0600
David Scheidt writes:
LRU has the disadvantage of having to update the list of when a page is used at every access. FIFO avoids that. With a second-chance mechanism, it's not obvious that it wouldn't work well.

clock and global LRU use an LRU approximation ... that kind of sorts pages into two categories ... those that have had their reference bit set since the last examination/reset, and those that haven't.

we did significant simulation with full instruction traces; some of it was part of the vs/repack technology previously referenced
http://www.garlic.com/~lynn/2006i.html#37 virtual memory

where detailed simulation with full instruction traces of i-refs, d-refs, and d-stores ... compared lots of global and local lru approximations as well as "true" lru ... where the exact ordering of page references was maintained.

the clock approach to lru approximation (which i had come up with as an undergraduate in the 60s) ... is also described in Carr's phd thesis previously mentioned (and can be found online). misc. recent past posts mentioning Carr's thesis:
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006i.html#38 virtual memory
http://www.garlic.com/~lynn/2006i.html#42 virtual memory
R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873 (1981)

R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981


.... typically operated within 10-15 percent of "true" lru with extremely nominal overhead (based on the detailed simulation). there was no list manipulation at all, just the cyclic examination ... which was approx. six instructions per page examined and not taken ... and the search typically averaged six pages deep before finding a page to select (i.e. on the order of 36 instructions to find and select a page for replacement).
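the cyclic examination can be sketched in a few lines (illustrative names; the real thing was a handful of machine instructions over reference bits in storage keys): the hand sweeps forward, clearing set reference bits and selecting the first page found unreferenced since its last examination.

```python
def clock_select(pages, hand):
    """One clock replacement scan: pages is a list of dicts with a
    'ref' bit, hand is the current sweep position.  Clears reference
    bits as it passes, returns (victim_index, new_hand)."""
    n = len(pages)
    while True:
        p = pages[hand]
        if p["ref"]:
            p["ref"] = 0            # referenced: give a second chance
            hand = (hand + 1) % n
        else:
            # unreferenced since last examination: select for replacement
            return hand, (hand + 1) % n
```

the loop always terminates: after at most one full sweep every bit has been cleared, so the next page examined is selectable.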

during this period, with the work on extensive full instruction traces and analysis of a wide variety of replacement strategies ... both real implementation in live deployment and the extensive simulation and modeling work ... I discovered a coding hack that allowed a kind of clock operation to actually beat "true" LRU ... the instructions and pathlength were identical to regular clock ... but there was a coding sleight of hand that resulted in being able to beat "true" LRU (based on the detailed simulation work). post in this thread mentioning the detailed simulation work, comparing the LRU approximations with "true" LRU ... and coming up with the sleight-of-hand clock that would beat "true" LRU
http://www.garlic.com/~lynn/2006i.html#42 virtual memory

posts in this thread:
http://www.garlic.com/~lynn/2006i.html#22 virtual memory
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#28 virtual memory
http://www.garlic.com/~lynn/2006i.html#30 virtual memory
http://www.garlic.com/~lynn/2006i.html#31 virtual memory
http://www.garlic.com/~lynn/2006i.html#32 virtual memory
http://www.garlic.com/~lynn/2006i.html#33 virtual memory
http://www.garlic.com/~lynn/2006i.html#36 virtual memory
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006i.html#38 virtual memory
http://www.garlic.com/~lynn/2006i.html#39 virtual memory
http://www.garlic.com/~lynn/2006i.html#40 virtual memory
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006i.html#42 virtual memory
http://www.garlic.com/~lynn/2006i.html#43 virtual memory
http://www.garlic.com/~lynn/2006j.html#0 virtual memory
http://www.garlic.com/~lynn/2006j.html#1 virtual memory
http://www.garlic.com/~lynn/2006j.html#2 virtual memory
http://www.garlic.com/~lynn/2006j.html#3 virtual memory
http://www.garlic.com/~lynn/2006j.html#4 virtual memory
http://www.garlic.com/~lynn/2006j.html#5 virtual memory
http://www.garlic.com/~lynn/2006j.html#12 virtual memory
http://www.garlic.com/~lynn/2006j.html#13 virtual memory
http://www.garlic.com/~lynn/2006j.html#14 virtual memory
http://www.garlic.com/~lynn/2006j.html#17 virtual memory

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 15:22:40 -0600
Anne & Lynn Wheeler writes:
.... typically operated within 10-15 percent of "true" lru with extremely nominal overhead (based on the detailed simulation). there was no list manipulation at all, just the cyclic examination ... which was approx. six instructions per page examined and not taken ... and the search typically averaged six pages deep before finding a page to select (i.e. on the order of 36 instructions to find and select a page for replacement).

re:
http://www.garlic.com/~lynn/2006j.html#18 virtual memory

oops, that was six instructions on the 360/67. the change & reference bits were maintained in the storage keys ... which were for 2k blocks of storage ... and paging was done on 4k blocks. the result was that cp67 had to check two sets of change & reference bits to get the result for a single 4k page. the bits had to be retrieved and then stored back with the value reset. also, since cp67 was not only providing virtual memory paging management but also virtual machine simulation ... whatever the real hardware change & reference bits were before the reset had to be copied/shadowed to the virtual machine values.

this was slightly reduced in the move from 360/67 to 370 ... since 370 provided an instruction that would retrieve, interrogate and reset the reference bit in a single operation (although for correct virtual machine emulation, there was still a requirement to shadow the reference bit value, prior to the reset, for the virtual machine).
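the two-keys-per-page bookkeeping can be sketched as follows (hypothetical data structures, purely illustrative of the described mechanism): the reference bits of the two 2k keys covering a 4k frame are OR'd for the replacement decision, shadowed for the virtual machine as they were before the reset, and then reset.

```python
def examine_4k_page(keys, frame, shadow):
    """Combine the reference bits of the two 2K storage keys covering a
    4K frame: shadow the pre-reset values for virtual machine emulation,
    reset the hardware bits, and return the OR'd reference result."""
    k0, k1 = 2 * frame, 2 * frame + 1
    ref = keys[k0]["ref"] | keys[k1]["ref"]
    # the virtual machine must see the bits as they were before the reset
    shadow[k0] |= keys[k0]["ref"]
    shadow[k1] |= keys[k1]["ref"]
    keys[k0]["ref"] = keys[k1]["ref"] = 0
    return ref
```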

some recent posts mentioning various shadowing requirements for correct virtual machine emulation:
http://www.garlic.com/~lynn/2006c.html#36 Secure web page?
http://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#6 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#12 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#19 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
http://www.garlic.com/~lynn/2006i.html#10 Hadware Support for Protection Bits: what does it really mean?
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 15:47:43 -0600
David Scheidt writes:
LRU has the disadvantage of having to update the list of when a page is used at every access. FIFO avoids that. With a second-chance mechanism, it's not obvious that it wouldn't work well.

possibly you are thinking of simulation of LRU for things like database caches ... rather than processor paging, which has hardware support for reference and change bits ... and paging replacement strategies that approximate least recently used with approaches like clock. ref:
http://www.garlic.com/~lynn/2006j.html#18 virtual memory
http://www.garlic.com/~lynn/2006j.html#19 virtual memory

in that situation, where the database cache entries have little or no correspondence to any familiar hardware support for change/reference bits ... you might find a linked list of database cache entries. the database manager can emulate least recently used ordering of database cache entries by updating a cache line linked list whenever a transaction requests the location of a database record (and it is found in the cache) and the cache location is returned (i.e. remove the found cache line from its current location in the linked list and move it to the front of the list).
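the move-to-front linked list described above can be sketched like this (Python's OrderedDict stands in for the doubly linked list; the names are illustrative, not any particular database manager's API):

```python
from collections import OrderedDict

class RecordCache:
    """Database-style LRU cache: no hardware reference bits, so every
    lookup moves the found entry to the MRU end of the list (an
    OrderedDict plays the doubly linked list here)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # record id -> buffer

    def lookup(self, rec_id):
        if rec_id in self.entries:
            self.entries.move_to_end(rec_id)   # front of the LRU list
            return self.entries[rec_id]
        return None

    def insert(self, rec_id, buf):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)   # evict the true LRU entry
        self.entries[rec_id] = buf
```

this is "true" LRU (exact ordering maintained), bought at the cost of a list update on every hit, which is exactly the overhead that hardware reference bits plus clock avoid in the paging case.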

for some topic drift ... misc. posts mentioning working on original relational/sql implementation
http://www.garlic.com/~lynn/submain.html#systemr

and in the previous incarnation of this subject
http://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?

it strayed into doing cross-cache transfers in conjunction with the work on distributed lock manager for ha/cmp
http://www.garlic.com/~lynn/subtopic.html#hacmp

and a little more drift here
http://www.garlic.com/~lynn/95.html#13

misc. other past posts mentioning distributed lock manager work
http://www.garlic.com/~lynn/2000.html#64 distributed locking patents
http://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
http://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
http://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2002e.html#67 Blade architectures
http://www.garlic.com/~lynn/2002e.html#71 Blade architectures
http://www.garlic.com/~lynn/2002f.html#1 Blade architectures
http://www.garlic.com/~lynn/2002f.html#4 Blade architectures
http://www.garlic.com/~lynn/2002f.html#6 Blade architectures
http://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
http://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
http://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
http://www.garlic.com/~lynn/2004m.html#5 Tera
http://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
http://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
http://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
http://www.garlic.com/~lynn/2005.html#55 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
http://www.garlic.com/~lynn/2005f.html#32 the relational model of data objects *and* program objects
http://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
http://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
http://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 17:26:09 -0600
Brian Inglis writes:
Pool/subpool allocation seems advantageous in situations where, as with OS control blocks, you sometimes have high rates of de-/allocation and sometimes have high numbers, but only rarely in different subpools at the same time.

I've used a generic subpool allocator, which doubles the total number of allocated objects each time it has to ask malloc for more memory, in such situations on projects, and as a result never had noticeable time or space problems.

So I never actually had to measure the difference between that approach and regular malloc (yeah, premature optimization is the root of all evil, but I'm Scottish and cheap with other people's CPUs and bits, and dislike debugging problems when I can design to avoid them).


cp67 kernel initially implemented best fit ... an issue was storage fragmentation and long chains of "free" blocks ... production operation frequently would get to several hundred (or more) available blocks ... releasing a block required scanning the list to find the storage address to sort it into, and to see if the newly released storage block was adjacent to something already on the chain and could be merged into a single larger block ... obtaining a block required scanning the list for the best fit.

Lincoln Labs had defined the search list instruction (SLT), which was available on many 360/67 models. Lincoln did a modification of the CP67 kernel to use the SLT instruction for running the free storage block chains. it still required a couple storage references per block, even with the instruction looping overhead minimized (even with the SLT instruction, scanning several hundred blocks still took 3-4 storage references per block, or a couple thousand storage references per call).

when i was an undergraduate ... in addition to the virtual memory stuff
http://www.garlic.com/~lynn/subtopic.html#wsclock
and dynamic adaptive scheduling
http://www.garlic.com/~lynn/subtopic.html#fairshare

I had also done a lot of other work on the kernel ... including some significant pathlength reductions. as other kernel pathlengths were significantly improved, the storage allocation routine was becoming a significant fraction of total kernel pathlength ... approaching half of kernel overhead

in the early 70s, there was a lot of work on cp67 subpool strategy. this eventually resulted in an implementation that defined a set of subpools that handled something like 95 percent of typical storage requests and satisfied a typical request in something like 14 instructions (half of which were creating a trace table entry for each call). this implementation got cp67 internal kernel storage management overhead back down under ten percent.
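the subpool idea can be sketched as follows (size classes and names are made up for illustration; the point is that the common case becomes a push/pop on a per-size free list instead of a best-fit scan of a long chain):

```python
class SubpoolAllocator:
    """Sketch of the subpool idea: a handful of fixed size classes each
    keep a push-down list of freed blocks, so the common case is a
    pop/push (a few instructions) instead of a best-fit chain scan."""

    SIZES = (32, 64, 128, 256)   # illustrative size classes

    def __init__(self):
        self.pools = {s: [] for s in self.SIZES}
        self.fallback_calls = 0   # would be the slow general-storage path

    def free(self, block, size):
        for s in self.SIZES:
            if size <= s:
                self.pools[s].append(block)   # push: fast path
                return
        self.fallback_calls += 1              # oversize: general release

    def alloc(self, size):
        for s in self.SIZES:
            if size <= s:
                if self.pools[s]:
                    return self.pools[s].pop()   # pop: fast path
                break
        self.fallback_calls += 1
        return bytearray(size)                   # slow general allocation
```

requests are rounded up to the subpool size, trading some internal fragmentation for a drastically shorter pathlength on the vast majority of calls.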

reference to paper on the work:
B. Margolin, et al, Analysis of Free-Storage Algorithms, IBM System Journal 10, pgs 283-304, 1971

this subpool design was carried over into vm370. in the mid-70s, when I was working on the ECPS microcode performance assist for the 138/148 there was extensive kernel pathlength investigation. the following shows a summary of major internal kernel pathlength sorted by percent of total ... that was used for selecting portions of the vm370 kernel to drop into microcode
http://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist

there was 6k bytes of microcode space available ... and 370 instructions translated into microcode at approximately 1:1 bytes ... so 6k bytes of 370 instructions translated into approx. 6k bytes of microcode ... but ran ten times faster. The top 6k bytes of kernel pathlength instructions represented nearly 80 percent of kernel execution ... dropping it into microcode got a 10:1 performance improvement.

in any case, from the above, the typical call to return a block of storage ("FRET") represented 3.77 percent of total kernel overhead, and the typical call to obtain a block of storage ("FREE") represented 3.47 percent of total kernel overhead.

misc. past posts mentioning the cp67 subpool work
http://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
http://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
http://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
http://www.garlic.com/~lynn/2002.html#14 index searching
http://www.garlic.com/~lynn/2002h.html#87 Atomic operations redux
http://www.garlic.com/~lynn/2004g.html#57 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004h.html#0 Adventure game (was:PL/? History (was Hercules))
http://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
http://www.garlic.com/~lynn/2006e.html#40 transputers again was: The demise of Commodore

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 15 May 2006 19:09:05 -0600
Anne & Lynn Wheeler writes:
clock and global LRU use an LRU approximation ... that kind of sorts pages into two categories ... those that have had their reference bit set since the last examination/reset, and those that haven't.

we did significant simulation with full instruction traces; some of it was part of the vs/repack technology previously referenced
http://www.garlic.com/~lynn/2006i.html#37 virtual memory

where detailed simulation with full instruction traces of i-refs, d-refs, and d-stores ... compared lots of global and local lru approximations as well as "true" lru ... where the exact ordering of page references was maintained.


ref:
http://www.garlic.com/~lynn/2006j.html#18 virtual memory

besides the numerous different variations on LRU approximations and "true" LRU implemented in simulators ... there was also belady's "OPT" ... basically, with a full trace of all storage accesses, OPT chose the page to replace that resulted in the fewest future page faults (of course ... you were no longer working on past history ... like LRU or an LRU approximation ... you needed a full prediction of the future ... which you could simulate given a complete trace of all storage accesses to replay).
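the OPT replacement choice is simple to state in a simulator (a minimal sketch, assuming the simulator has the remaining trace in hand): replace the resident page whose next reference lies farthest in the future, or that is never referenced again.

```python
def opt_choose(resident, future):
    """Belady's OPT in a trace-driven simulator: given the set of
    resident pages and the list of future references, pick the page
    whose next use is farthest away (or that is never used again)."""
    best, best_dist = None, -1
    for page in resident:
        try:
            dist = future.index(page)   # distance to next reference
        except ValueError:
            return page                 # never referenced again: ideal
        if dist > best_dist:
            best, best_dist = page, dist
    return best
```

OPT is unrealizable in a live system (it needs the future), but it gives the lower bound on page faults that the LRU approximations were measured against.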

a few recent posts mentioning other of belady's references
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006i.html#38 virtual memory
http://www.garlic.com/~lynn/2006i.html#42 virtual memory
http://www.garlic.com/~lynn/2006j.html#17 virtual memory

here is a spring 2005 class assignment
http://www.cs.georgetown.edu/~clay/classes/spring2005/os/paging.html

requiring implementation & comparison of:
• FIFO: First In, First Out
• GC: Global Clock
• LFU: Least Frequently Used
• LRU: Least Recently Used
• MFU: Most Frequently Used
• OPT: Belady's Optimal Algorithm
• RAND: Random

.....

for some drift:
R. L. Mattson, J. Gecsei, D. R. Slutz, and I. L. Traiger. Evaluation techniques for storage hierarchies. IBM Systems Journal, 9(2):78-117, 1970.

with respect to the cache investigation using simulator that had full trace of all record accesses for production systems ...
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006j.html#14 virtual memory

I had implemented much of the data collection stuff in 1979 and Mattson had done most of the simulator implementation.

One of the other outcomes of Mattson's 1979 simulation work was that he developed a process that could analyze the record access activity in real-time ... looking at being able to run it as part of normal production operation and do real-time optimal disk data reorganization.

a more recent paper referencing Mattson's 1970 work.
http://csdl2.computer.org/persagen/DLAbsToc.jsp?resourcePath=/dl/trans/tc/&toc=comp/trans/tc/1998/06/t6toc.xml&DOI=10.1109/12.689650

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 16 May 2006 07:51:45 -0600
jmfbahciv writes:
I always thought that virtual memory was akin to custom-made suits. Each manufacturer had its own unique problem set, which wouldn't have applied to anybody else's.

You should also remember that IBM was based on batch folklore. We weren't; DEC never learned how to do batch computing with huge data bases and transaction processing. To IBM this stuff was as natural as breathing.


that was POK and the majority of the customer base. however, a lot of virtual machines, virtual memory, paging, interactive computing, networking, multiprocessor and other stuff was done at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

the technology for the internal network was developed at the science center. the internal network was larger than the arpanet/internet from just about the beginning until possibly mid-85.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

the technology used in the internal network was also used in bitnet & earn
http://www.garlic.com/~lynn/subnetwork.html#bitnet

gml (precursor to sgml, html, xml, etc) was invented at the science center
http://www.garlic.com/~lynn/submain.html#sgml

the compare&swap instruction was invented by charlie at the science center (CAS comes from charlie's initials ... the instruction name then had to be chosen to match those initials)
http://www.garlic.com/~lynn/subtopic.html#smp

a lot of the work on performance monitoring and workload profiling leading up to capacity planning was also done at the science center
http://www.garlic.com/~lynn/submain.html#bench

while the original relational stuff wasn't done at the science center ... i did get to work on some of it after i transferred from the science center out to research on the west coast. however, all of the original relational/sql stuff was developed on vm370 ... which was an outgrowth of the virtual machine cp67 stuff that had originated at the science center
http://www.garlic.com/~lynn/submain.html#systemr

note that while the virtual machine and interactive install base was considered relatively small compared to the big batch customer base ... there were some numbers that in the early 80s, the 4341 vm370 install base was approximately the same size as the vax install base (even tho the 4341 vm370 install base was considered almost trivially small compared to the company's batch install base).

some old vax install numbers:
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
http://www.garlic.com/~lynn/2002f.html#5 Blade architectures
http://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors

some old 4341 and/or vax postings
http://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000c.html#83 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#10 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2001m.html#15 departmental servers
http://www.garlic.com/~lynn/2002h.html#52 Bettman Archive in Trouble
http://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002k.html#1 misc. old benchmarks (4331 & 11/750)
http://www.garlic.com/~lynn/2002k.html#3 misc. old benchmarks (4331 & 11/750)
http://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
http://www.garlic.com/~lynn/2003c.html#17 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003c.html#19 diffence between itanium and alpha
http://www.garlic.com/~lynn/2003d.html#0 big buys was: Tubes in IBM 1620?
http://www.garlic.com/~lynn/2003d.html#33 Why only 24 bits on S/360?
http://www.garlic.com/~lynn/2003d.html#61 Another light on the map going out
http://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
http://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
http://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2003i.html#5 Name for this early transistor package?
http://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
http://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
http://www.garlic.com/~lynn/2004f.html#39 Who said "The Mainframe is dead"?
http://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
http://www.garlic.com/~lynn/2004j.html#57 Monster(ous) sig (was Re: Vintage computers are better
http://www.garlic.com/~lynn/2004l.html#10 Complex Instructions
http://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004m.html#63 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
http://www.garlic.com/~lynn/2005f.html#30 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005m.html#8 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005m.html#12 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005n.html#10 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#11 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#12 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#16 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#47 Anyone know whether VM/370 EDGAR is still available anywhere?
http://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
http://www.garlic.com/~lynn/2005p.html#19 address space
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005s.html#36 Filemode 7-9?
http://www.garlic.com/~lynn/2005s.html#39 Filemode 7-9?
http://www.garlic.com/~lynn/2006.html#47 "VAX" Tradename reused !
http://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 16 May 2006 08:58:02 -0600
Brian Inglis writes:
As with malloc and friends, there must be some bad VM implementations out there for them to build their own, unless it's a holdover from old 68K Macs without MMU.

science center
http://www.garlic.com/~lynn/subtopic.html#545tech

had ported apl\360 to cms\apl. apl\360 had basically its own subsystem monitor, terminal handler, swapper, storage manager, etc ... running "real memory" under os/360. typical apl\360 workspaces were 16kbytes to 32kbytes.

in the move to cms\apl ... got rid of the subsystem monitor, terminal handler, swapper, etc ... as part of the apl implementation (allowing it to rely on the underlying infrastructure provided by cp67).

one of the things in the move to the cms\apl environment was the transition from 16kbyte ("real") workspaces to possibly multi-megabyte virtual memory workspaces. this created a big problem for the apl storage manager. it basically started with all unused space as available, then on every assignment, it allocated the next available unused space (discarding any old location containing a value). when it had exhausted unused space, it did garbage collection, compacting all "used" storage ... creating a large contiguous area of unused space ... and then started all over again.

in a small, swapped, real-storage implementation, the side-effects of this strategy weren't particularly bad. however, with very large virtual memory workspaces ... this strategy quickly touched all available virtual pages, then garbage collected/compacted, and started all over.
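the problem can be seen in a toy model (all names hypothetical; this is the bump-and-compact behavior described above, not the actual apl code): every assignment bumps a pointer through the workspace, so the sweep cycles through every page of a large virtual workspace even when the live data is tiny.

```python
class BumpWorkspace:
    """Toy model of the apl\\360 storage manager behavior: every
    assignment allocates the next unused space, abandoning the old
    value; exhaustion triggers compaction back to the bottom.  Tracks
    distinct 4K pages touched, the virtual memory problem."""

    PAGE = 4096

    def __init__(self, size):
        self.size = size
        self.next = 0
        self.live = {}       # name -> (offset, length)
        self.touched = set() # distinct pages referenced

    def assign(self, name, length):
        if self.next + length > self.size:
            self.compact()
        off = self.next
        self.next += length             # old value simply abandoned
        self.live[name] = (off, length)
        first, last = off // self.PAGE, (off + length - 1) // self.PAGE
        self.touched.update(range(first, last + 1))

    def compact(self):
        # garbage collect: slide live values down, freeing the top
        off = 0
        for name, (_, length) in self.live.items():
            self.live[name] = (off, length)
            off += length
        self.next = off
```

even though only one 1k value is ever live, repeated assignment sweeps through every page of the workspace before compacting; a virtual-memory-friendly collector would reuse recently touched space instead.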

the early vs/repack technology:
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971

http://www.garlic.com/~lynn/2006i.html#37 virtual memory

was used to analyze this behavior and perfect a modified garbage collection implementation that was significantly more virtual memory friendly.

misc. past posts mentioning apl &/or hone
http://www.garlic.com/~lynn/subtopic.html#hone

hone started out as a joint project between the science center and the marketing division to provide online, interactive support for all the sales, marketing, and field people ... initially with a cp67 base and later moved to a vm370 base. a majority of the applications for the sales and marketing people were implemented in apl.

in the mid-70s, the various hone us datacenters were consolidated in northern cal. by the late 70s, the number of defined US hone users was approaching 40k. in addition, there were numerous clones of the US hone datacenter at various locations around the world. in the very early 80s, the high-availability, fall-over hone implementation in northern cal. was replicated, first with a new datacenter in dallas and then a 3rd datacenter in boulder (providing load balancing and fall-over across the datacenters in case of a natural disaster in cal.).

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 16 May 2006 14:00:54 -0600
"Eric P." writes:
Ok, so it sounds like Grenoble did have a second chance mechanism. Rats, I was kinda hoping it didn't so I could explain away your results. :-)

Why would your global (approx) LRU need to reclaim? I suppose if you trim the global valid list to refill a low free list, then the previous owner touches one of the trimmed pages before it is reassigned, you want to just retrieve it from the free list. Correct?


so, I've told the story of how MVS decided to bias the LRU replacement algorithm in favor of non-changed pages ... because there was lower overhead and less latency ... compared to when a changed page had to first be written out.
http://www.garlic.com/~lynn/2006i.html#43 virtual memory
http://www.garlic.com/~lynn/2006j.html#5 virtual memory

however, if you are just doing straight page replacement selection, somewhat synchronized with the page fault ... then the latency for servicing a page fault when a changed page has been selected is possibly twice the elapsed time of servicing one when a non-changed page has been selected (since replacing a page that has been changed first requires writing the current contents out before the replacement contents can be read in).

to cut down on this latency, you run the selection of pages for replacement slightly out of sync with, and ahead of, servicing page faults. you have a small (fifo) pool of pages selected for replacement. a page fault results in selecting a replacement page from the pool ... probably dropping the pool below threshold, which then requires invoking the replacement algorithm to replenish the pool. changed pages that are selected for replacement are written out as they are added to the replacement pool. by the time a page fault requires the page, any changed page has completed its write and the real storage location is immediately available to bring in the replacement page ... cutting down on the latency to service a page fault.
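the pre-cleaned replacement pool can be sketched like this (hypothetical names; a toy model of the mechanism described above, not any particular kernel's code):

```python
from collections import deque

class ReplacementPool:
    """Sketch of a pre-cleaned replacement pool: page selection runs
    slightly ahead of page faults, and changed pages are written out as
    they enter the pool, so a fault never waits for a page-out before
    starting its page-in."""

    def __init__(self, threshold, select_victim):
        self.pool = deque()
        self.threshold = threshold
        self.select_victim = select_victim   # the replacement algorithm
        self.writes_at_fault = 0             # page-outs delaying a fault

    def replenish(self):
        while len(self.pool) < self.threshold:
            frame = self.select_victim()
            if frame.get("changed"):
                frame["changed"] = False     # page-out happens here, early
            self.pool.append(frame)

    def take_for_fault(self):
        if not self.pool:                    # fell behind: synchronous path
            frame = self.select_victim()
            if frame.get("changed"):
                self.writes_at_fault += 1    # the fault waits for the write
                frame["changed"] = False
            return frame
        frame = self.pool.popleft()
        self.replenish()                     # refill ahead of the next fault
        return frame
```

as long as replenishment keeps up, every fault gets an already-clean frame, removing the doubled latency of the changed-page case without biasing the replacement choice itself.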

you might also check carr's global clock thesis ... i was able to find it online.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 17 May 2006 07:43:20 -0600
robert.thorpe writes:
I see. I wish I'd done something so interesting when I was an undergraduate.

i got to do the support and maint. for the univ. datacenter production systems. i would frequently get the machine room at 8am on sat. and have it all to myself until 8am on monday. monday classes were sometimes a little difficult having already gone 48hrs w/o sleep.

misc. past posts pulling 48hr shift before monday classes.
http://www.garlic.com/~lynn/2001b.html#26 HELP
http://www.garlic.com/~lynn/2001f.html#63 First Workstation
http://www.garlic.com/~lynn/2002k.html#63 OT (sort-of) - Does it take math skills to do data processing ?
http://www.garlic.com/~lynn/2002o.html#1 Home mainframes
http://www.garlic.com/~lynn/2002o.html#2 Home mainframes
http://www.garlic.com/~lynn/2002p.html#38 20th anniversary of the internet (fwd)
http://www.garlic.com/~lynn/2003c.html#51 HASP assembly: What the heck is an MVT ABEND 422?
http://www.garlic.com/~lynn/2003c.html#57 Easiest possible PASV experiment
http://www.garlic.com/~lynn/2004c.html#19 IT jobs move to India
http://www.garlic.com/~lynn/2004e.html#45 going w/o sleep

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 17 May 2006 08:39:16 -0600
Brian Inglis writes:
In virtual machine environments, software versions of hardware approaches tend to be used instead of "creating beautiful new impediments to understanding" (Henry Spencer -- from #8 in "The Ten Commandments for C Programmers").

old post discussing how hardware address translation worked on trout 1.5 (3090), including email from fall of '83
http://www.garlic.com/~lynn/2003j.html#42 Flash 10208

there is a reference to Birnbaum starting in early '75 on 801 project (including split caches), i.e. 801 turned into romp, rios, power, power/pc, etc. misc. past posts mentioning 801
http://www.garlic.com/~lynn/subtopic.html#801

and old discussion about SIE (virtual machine assist) from long ago and far away (couple weeks short of 25years ago):

Date: 6/30/81 15:33:04
To: wheeler

I would like to add a bit more to the discussion of SIE. I seem to have hit a sensitive area with my original comments. I would have preferred to contain this to an offline discussion, but I feel that some of the things that have appeared in the memos require a reply.

First, let me say that all of the comments I have made have been accurate to the best of my knowledge. The performance data I quoted came directly from a presentation I attended given by the VM/811 group. The purpose of the presentation was to justify extensions to the SIE architecture. Since my last writing, I have been told by XXXXX that the numbers quoted were for MVS running under VMTOOL on the 3081. XXXXXX mentioned that VMTOOL has some significant software problems which are partially responsible for the poor performance. Presumably, VM/811 would not have these problems. This was not pointed out at the meeting.

For many years the software and hardware groups have misunderstood each other. Engineers who knew nothing about software could not understand why it was necessary to make their hardware do certain things. Likewise, programmers who knew nothing about hardware could not understand why the engineers could not make the hardware do the things they wanted. Traditionally, microcode has been done by engineers because a thorough understanding of the hardware is necessary in order to write microcode. In recent years, this has become a problem as more complex software functions have been placed into microcode. In my department, we have tried to remedy this problem by hiring people with programming experience as microprogrammers.

The statement that millions of dollars have been spent writing microcode in order to avoid putting a few ten cent latches into the hardware is completely false. The truth is that changes have often been made to the microcode to AVOID SPENDING MILLIONS OF DOLLARS by putting a change in hardware. In the world of LSI and VLSI, there is no longer such a thing as a "ten cent latch." Once a chip has been designed, it is very expensive to make even the smallest change to it.

Microcode offers a high degree of flexibility in an environment that is subject to change, especially if one has a writable control store. When a change is necessary, it can often be had for "free" by making a change to the microcode and sending out a new floppy disk, whereas it might cost millions of dollars to make an equivalent change to the hardware. While the performance of the microcode may not be as good as the hardware implementation, the overall cost/performance has dictated that it is the method of choice.

As I pointed out in a previous writing, what works well or does not work well on one machine says absolutely nothing about the performance of that item on another machine. XXXXX seems to have completely missed this critical point, since he expects a 158-like performance boost from SIE on machines which are nothing like the 158 in their design.

SIE is a poor performer on the 3081 for several reasons. One reason is that the 3081 pages its microcode. Each time it is necessary to enter or exit SIE, a large piece of microcode must be paged in to carry out this function. This is rather costly in terms of performance. A performance gain could be realized if the number of exit/entry trips could be reduced. One way of doing this would be to emulate more instructions on the assumption that it takes less to emulate them than it does to exit and re-enter emulation mode. This thinking is completely valid for the 3081, but is not necessarily relevant when it comes to other machines, such as TROUT.

TROUT does not page its microcode, and therefore the cost of exiting and re-entering emulation mode is less. The thinking behind the changes to the SIE architecture should be re-examined when it comes to TROUT because the data upon which the changes were based is not necessarily valid. This is why I have asked that the extensions to SIE be made optional. This would allow machines that do have performance problems to implement the extensions, while machines that do not have problems could leave them out and use their control store for more valuable functions.

The extensions that are being proposed are not at all trivial. It may seem like a simple matter to emulate an I/O instruction, but such is not the case. To appreciate what is involved, one must have a detailed understanding of just how the CPU, SCE, and Channels work.

Other machines do indeed have an easier time when it comes to implementing some of these assists. That is because they are rather simple machines internally, not because their designers had more foresight when they designed the machines. The cycle time of TROUT is only slightly faster than the 3081, yet TROUT is much faster in terms of MIPS. This performance comes from the highly overlapped design of the processor. This makes for a much more complex design. Sometimes you pay dearly for this, like when it comes to putting in complex microcode functions.

TROUT has never treated SIE as "just another assist." SIE has been a basic part of our machine's design since the beginning. In fact, we have chosen to put many functions into hardware instead of microcode to pick up significant performance gains. For example, the 3081 takes a significant amount of time to do certain types of guest-to-host address translation because it does them in microcode, while TROUT does them completely in hardware.


... snip ... top of post, old email index

nomenclature in the above with "guest" refers to an operating system running in a virtual machine.

...

with regard to the above comment about virtual machines and I/O instructions ... part of the issue is translating the I/O channel program and fixing the related virtual pages in real memory ... since the real channels run using the real addresses in the channel programs, while channel programs built in a virtual address space have all been created using virtual addresses. this wasn't just an issue for virtual machine emulation ... OS/VS2 also had the issue with channel programs created by applications running in a virtual address space.
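the idea behind the translation can be sketched as follows. this is a hedged, minimal illustration (the class and function names, and the flat page-table representation, are assumptions); the real CCWTRANS also had to handle CCW chaining, data chaining, and data areas that crossed page boundaries:

```python
PAGE_SIZE = 4096   # 370 page size

class AddressSpace:
    def __init__(self, page_table):
        self.page_table = page_table   # virtual page number -> real frame
        self.pinned = set()            # pages fixed for the I/O duration

    def translate_and_pin(self, vaddr):
        # resolve a virtual data address to a real one and pin the page
        # so the paging system can't steal it while the channel is
        # transferring into/out of it
        vpage, offset = divmod(vaddr, PAGE_SIZE)
        frame = self.page_table[vpage]   # (would fault the page in first)
        self.pinned.add(vpage)
        return frame * PAGE_SIZE + offset

def build_shadow(ccws, aspace):
    # copy each (opcode, virtual_data_addr, length) CCW into a "shadow"
    # channel program whose data addresses are real; the real channel
    # is then started on the shadow copy, never on the guest's CCWs
    return [(op, aspace.translate_and_pin(va), ln) for op, va, ln in ccws]
```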

...

3090 responded to Amdahl's hypervisor support with PR/SM, misc. past posts mentioning PR/SM (and LPARs)
http://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2002o.html#15 Home mainframes
http://www.garlic.com/~lynn/2002o.html#18 Everything you wanted to know about z900 from IBM
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2002p.html#46 Linux paging
http://www.garlic.com/~lynn/2002p.html#48 Linux paging
http://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
http://www.garlic.com/~lynn/2003n.html#13 CPUs with microcode ?
http://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
http://www.garlic.com/~lynn/2004c.html#4 OS Partitioning and security
http://www.garlic.com/~lynn/2004c.html#5 PSW Sampling
http://www.garlic.com/~lynn/2004m.html#41 EAL5
http://www.garlic.com/~lynn/2004m.html#49 EAL5
http://www.garlic.com/~lynn/2004n.html#10 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004o.html#13 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004p.html#37 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation
http://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
http://www.garlic.com/~lynn/2005.html#6 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005b.html#5 Relocating application architecture and compiler support
http://www.garlic.com/~lynn/2005c.html#56 intel's Vanderpool and virtualization in general
http://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
http://www.garlic.com/~lynn/2005d.html#74 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005h.html#19 Blowing My Own Horn
http://www.garlic.com/~lynn/2005k.html#43 Determining processor status without IPIs
http://www.garlic.com/~lynn/2005m.html#16 CPU time and system load
http://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
http://www.garlic.com/~lynn/2006e.html#15 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006h.html#30 The Pankian Metaphor

...

misc. past posts mentioning CCWTRANS (the cp/67 routine that created "shadow" channel program copies of what was in the virtual address space, replacing all virtual addresses with "real" addresses; for example, the initial prototype of OS/VS2 was built by crafting hardware translation into mvt and hacking a copy of CP67's CCWTRANS into mvt):
http://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
http://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
http://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
http://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
http://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
http://www.garlic.com/~lynn/2001l.html#36 History
http://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
http://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
http://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
http://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
http://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
http://www.garlic.com/~lynn/2002n.html#62 PLX
http://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
http://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
http://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
http://www.garlic.com/~lynn/2004d.html#0 IBM 360 memory
http://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
http://www.garlic.com/~lynn/2004m.html#16 computer industry scenairo before the invention of the PC?
http://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
http://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
http://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005b.html#23 360 DIAGNOSE
http://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005b.html#50 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005p.html#18 address space
http://www.garlic.com/~lynn/2005q.html#41 Instruction Set Enhancement Idea
http://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
http://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
http://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
http://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
http://www.garlic.com/~lynn/2006i.html#33 virtual memory
http://www.garlic.com/~lynn/2006j.html#5 virtual memory

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Password Complexity

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Password Complexity
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 17 May 2006 09:37:01 -0600
gilmap@ibm-main.lst wrote:
I read somewhere that the motivation for support of mixed case passwords in z/OS v1r7 is an external requirement that the password space have cardinality at least 10^13. Does any reader of this list know the source of this requirement? Sarbanes-Oxley (chapter and verse)? Other (specify)?

lots of old password rules were essentially based on there being one and only one password ... theirs ... and no others existed. a password as a shared-secret requires a unique shared-secret for every unique security domain ... as a countermeasure against somebody in one security domain attacking another security domain (say local garage ISP and home banking, or local garage ISP and employer's telecommuting activity). lots of postings about shared-secret infrastructure issues
http://www.garlic.com/~lynn/subintegrity.html#secret

the problem with designing totally in a vacuum was that people ended up with scores of impossible-to-remember passwords (a side-effect of making them hard to guess) that changed monthly. all the password rules done in a vacuum, w/o regard to the overall infrastructure, introduced serious vulnerabilities because people were forced to record/archive these impossible-to-remember and frequently changing values (ignoring fundamental human factors)
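the 10^13 cardinality figure quoted at the top of this post is easy to sanity-check. assuming 8-character passwords (the password length and alphabet sizes here are illustrative assumptions, not from any standard), a monocase alphanumeric alphabet falls short of the floor while a mixed-case one clears it:

```python
# password-space cardinality for 8-character passwords
monocase = 36 ** 8    # A-Z plus 0-9, case folded
mixedcase = 62 ** 8   # a-z, A-Z, 0-9

REQUIRED = 10 ** 13   # the cardinality floor quoted above

print(f"monocase:  {monocase:.3e}")    # 2.821e+12 -- below 10^13
print(f"mixedcase: {mixedcase:.3e}")   # 2.183e+14 -- above 10^13
assert monocase < REQUIRED < mixedcase
```

which would explain why supporting mixed case (rather than, say, lengthening passwords) is one way to meet such a requirement without changing password length.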

story about password rules that were published on 1/4/84 on corporate letterhead and placed on corporate bulletin boards ... long ago and far away
http://www.garlic.com/~lynn/99.html#52 Enter fonts
http://www.garlic.com/~lynn/2001d.html#51 OT Re: A beautiful morning in AFM.

copy of the corporate directive
http://www.garlic.com/~lynn/2001d.html#52 OT Re: A beautiful morning in AFM.
http://www.garlic.com/~lynn/2001d.html#53 April Fools Day

How to implement Lpars within Linux

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How to implement Lpars within Linux
Newsgroups: linux.redhat,alt.folklore.computers
Date: Wed, 17 May 2006 10:05:14 -0600
Greg Hackney writes:
I believe that Lpars is an IBM-hardware-only thing.

There's a commercial product called Virtuozzo (http://www.virtuozzo.com). It's a new efficient method of creating virtual servers with dedicated assigned resources, all the while sharing common items such as the kernel and system patches.

The opensource version is called OpenVZ.


recent post discussing SIE
http://www.garlic.com/~lynn/2006j.html#27 virtual memory

SIE was the "hardware" virtual machine assist that came out with the 3081.

Amdahl then introduced "hypervisor", which was a means of partitioning a machine into two parts w/o requiring a virtual machine operating system (i.e. it was basically configuration settings to the hardware ... somewhat akin to bios configuration).

the response was PR/SM for the 3090 ... which eventually evolved into lpars ... i.e. multiple Logical PARtitionS ... much more than two ... again configuration in the hardware w/o requiring a virtual machine operating system.

for some configurations talking about 40,000 linux virtual machines ... the real hardware has been partitioned into a few LPARs with a virtual machine operating system running in one of the LPARs ... which, in turn, supports the 40,000 linux virtual machines

a couple old posts referencing the creation of 40k-some linux virtual machines in a single LPAR:
http://www.garlic.com/~lynn/2001h.html#33 D
http://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
http://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?

LPAR has been the capability in the hardware to create partitions (basically a small subset of a virtual machine operating system moved into the hardware) w/o requiring a software virtual machine operating system to support the capability.

the original virtual machine operating system was cp67 built in the 60s for the 360/67 (there was an earlier prototype, cp40, that ran on a hardware-modified 360/40). this was back when most all software was open source and free.

various litigation by the federal gov. and others forced the unbundling announcement on 23/6/69 ... and the start of charging for software ... which then evolved over the next decade until "object-code-only" in the early 80s (i.e. not only charging for software, but no longer shipping source as part of the products). misc. past postings about unbundling
http://www.garlic.com/~lynn/submain.html#unbundle

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 17 May 2006 12:47:50 -0600
Anne & Lynn Wheeler writes:
i got to do the support and maint. for the univ. datacenter production systems. i would frequently get the machine room at 8am on sat. and have it all to myself until 8am on monday. monday classes were sometimes a little difficult having already gone 48hrs w/o sleep.

ref:
http://www.garlic.com/~lynn/2006j.html#26 virtual memory

i.e. my real undergraduate job was supporting and maintaining the univ. datacenter production (real memory) os/360 batch system.

the stuff about inventing, implementing, and shipping production code for things like replacement dynamic adaptive scheduling
http://www.garlic.com/~lynn/subtopic.html#fairshare
and replacement/paging algorithms
http://www.garlic.com/~lynn/subtopic.html#wsclock

as well as misc. other things like the hardware clone controller
http://www.garlic.com/~lynn/subtopic.html#360pcm

was much more of a hobby for entertainment.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 17 May 2006 13:18:38 -0600
Anne & Lynn Wheeler writes:
TROUT has never treated SIE as "just another assist." SIE has been a basic part of our machine's design since the beginning. In fact, we have chosen to put many functions into hardware instead of microcode to pick up significant performance gains. For example, the 3081 takes a significant amount of time to do certain types of guest-to-host address translation because it does them in microcode, while TROUT does them completely in hardware.

re:
http://www.garlic.com/~lynn/2006j.html#27 virtual memory

"811" (named after 11/78 publication date on the architecture documents) or 3081 was considered somewhat of a 155/158 follow-on machine ... being much more of a m'coded machine.

"TROUT" or 3090 was considered somewhat of a 165/168 follow-on machine ... being much more of a hardwired machine.

these were the days of processors getting bigger and bigger, with much more effort being put into adding processors in SMP configurations.

they had created two positions, one in charge of "tightly-coupled" architecture (SMP) and one in charge of "loosely-coupled" architecture (clusters). my wife got con'ed into taking the job in pok in charge of loosely-coupled architecture.

she didn't last long ... while there, she did do peer-coupled shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

which didn't see much uptake until sysplex ... except for the ims group doing ims hot-standby.

part of the problem was she was fighting frequently with the communications group, who wanted SNA/VTAM to be in charge of any signals leaving a processor complex (even those going directly to another processor).

one example was trotter/3088 ... she fought hard for hardware enhancements for full-duplex operation. there had been previous "channel-to-channel" hardware which was half-duplex direct channel/bus communication between two processor complexes. the 3088 enhanced this to provide connectivity to up to eight different processor complexes.

sna was essentially a dumb terminal controller infrastructure. their reference to it as a "network" required other people in the organization to migrate to using the term "peer-to-peer network" to differentiate from the sna variety.

of course, earlier, in the time-frame that sna was just starting out ... she had also co-authored a peer-to-peer networking architecture with Burt Moldow ... which was somewhat viewed as threatening to sna ... misc. past posts mentioning AWP39:
http://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe

anyway, in the trotter/3088 time-frame ... san jose had done a prototype vm/cluster implementation using a modified trotter/3088 with full-duplex protocols. however, before it was allowed to ship, they had to convert it to sna operation. one of the cluster examples was to fully "resynch" cluster operation of all the processors ... which took under a second using full-duplex protocols on the 3088 ... but the same operation took on the order of a minute using sna protocols and a half-duplex paradigm.

we ran afoul again later with 3-tier architecture
http://www.garlic.com/~lynn/subnetwork.html#3tier

this was in the time-frame that the communications group was out pushing SAA ... a lot of which was an attempt to revert back to the terminal emulation paradigm
http://www.garlic.com/~lynn/subnetwork.html#emulation

from that of client/server. we had come up with 3-tier architecture and were out pitching it to customer executives ... at the same time they were trying to revert 2-tier architecture back to dumb terminal emulation.

then we did ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

minor reference
http://www.garlic.com/~lynn/95.html#13

which didn't make a lot of them happy either.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 17 May 2006 13:42:33 -0600
Eric P. wrote:
Nah. PALcode is just normal Alpha instructions that are executed with PAL mode enabled. PAL mode is a third mode: user, supervisor, PAL. It disables interrupts, enables access to special control registers, and can also enable instructions to be fetched from a different RAM (such as an on chip RAM).

However the instruction formats are unchanged.

The net effect is to create privileged subroutines that can be loaded at boot time and behave similarly to vertical microcode but are implemented in the same instruction set as the rest of the processor.

Oh, and it also embeds a patented concept into the ISA that would serve to prevent architecture cloning without permission. I guess this was more than just a lucky side effect because everything that PALcode does could have been accomplished without a patented third mode through traditional design techniques (such as simply picking off the highest 64K addresses, forcing a super mode check, and routing the request to an on chip RAM).


Amdahl had done something similar in the early 80s on their mainframe ... except they called it "macrocode" ... being part-way between normal instructions and "microcode" (it sort of operated as microcode but was programmed using a subset of the normal 370 instruction set). One of the first major projects they used it for was the "hypervisor" ... basically partitioning the machine using a subset of virtual machine functions w/o requiring a virtual machine operating system.

misc. past posts mentioning Amdahl's "macrocode"
http://www.garlic.com/~lynn/2002p.html#44 Linux paging
http://www.garlic.com/~lynn/2002p.html#48 Linux paging
http://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
http://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
http://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
http://www.garlic.com/~lynn/2005d.html#60 Misuse of word "microcode"
http://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
http://www.garlic.com/~lynn/2005p.html#14 Multicores
http://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
http://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
http://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
http://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away

How to implement Lpars within Linux

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How to implement Lpars within Linux
Newsgroups: linux.redhat,alt.folklore.computers
Date: Wed, 17 May 2006 14:09:41 -0600
Greg Hackney writes:
1969? Hokey smokes Professor Peabody. I think I just stepped into the "Way Back" machine.

ref:
http://www.garlic.com/~lynn/2006j.html#29 How to implement Lpars within Linux

yes, well, originally they claimed that only application code needed to be unbundled and charged for ... and that the operating system code needed to still be free (23jun69).

i had done a bunch of stuff as an undergraduate that got shipped in the cp67 virtual machine operating system. in the morph of cp67 to vm370, a lot of that code was dropped from the product.

i was then given an opportunity to re-introduce a lot of the code back into the vm370 virtual machine operating system calling it the "resource manager" (which included a whole bunch of stuff besides resource management)

however, they decided to use it as a guinea pig for starting to charge for operating system software ... and i got to spend time with the business people working out the policy for charging for operating system software. the resource manager was released 11may76.

then over a few years, more and more of operating system software was changed from free to priced/charged ... although it continued to ship with full source ... until the OCO (object-code-only) wars in the early 80s.

a few recent posts mentioning 11may76
http://www.garlic.com/~lynn/2006h.html#25 The Pankian Metaphor
http://www.garlic.com/~lynn/2006i.html#24 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006i.html#26 11may76, 30 years, (re-)release of resource manager
http://www.garlic.com/~lynn/2006i.html#28 virtual memory
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006i.html#43 virtual memory

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Arpa address

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Arpa address
Newsgroups: alt.computer.security,alt.folklore.computers
Date: Wed, 17 May 2006 14:32:11 -0600
ibuprofin@painkiller.example.tld (Moe Trin) writes:
Depends on where in the hostname the string "arpa" occurs. There is a top level domain "arpa" which originally meant "Advanced Projects Research Agency" - the organization that funded the development of what became the Internet. This domain is used when looking up an IP address to determine the host name. You would look for the name associated with 192.0.2.21 by sending a DNS query for "21.2.0.192.in-addr.arpa". The answer might come back as "21.2.0.192.in-addr.arpa is foo.example.com". The use of the "in-addr.arpa" domain is a trick to simplify DNS lookups (is 192.0.2.21 a hostname, or an address?).
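the in-addr.arpa construction described in the quote above can be sketched in a few lines (a minimal illustration; real resolvers send the resulting name as a PTR query):

```python
def reverse_lookup_name(ip: str) -> str:
    """Build the in-addr.arpa name used for a reverse DNS (PTR) query:
    reverse the dotted octets and append the in-addr.arpa suffix."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_lookup_name("192.0.2.21"))  # 21.2.0.192.in-addr.arpa
```

reversing the octets puts the most-significant part of the address rightmost, matching DNS's most-significant-label-last convention, which is why delegation of reverse zones along network boundaries works.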

slight nit ... arpa had funded arpanet and the development of packet-switching protocol. this was a relatively homogeneous environment requiring fairly expensive interface "IMPs" to be part of the network.

the infrastructure cut-over to internetworking protocol was on 1/1/83 ... with gateways, internetworking ... etc.

I've frequently asserted that the internal network was larger than the whole arpanet/internet from just about the beginning until approx. summer 85 ... because the internal network had a kind of gatewaying capability from nearly the beginning (i.e. the 1/1/83 cut-over to internetworking had approx. 250 nodes while the internal network was nearing 1000 nodes).
http://www.garlic.com/~lynn/subnetwork.html#internalnet

NSF and other organizations started to fund educational networking connectivity ... first csnet ... minor reference
http://www.garlic.com/~lynn/internet.htm#0

then NSF released the NSFNET backbone RFP ... which should be considered the "operational" precursor to the internet (as opposed to the technology/protocol). as i've mentioned before ... my wife and I had an operational high-speed backbone at the time
http://www.garlic.com/~lynn/subnetwork.html#hsdt

but we were told we weren't allowed to bid on the NSFNET backbone RFP ... although we eventually got a technical audit by NSF ... which said that what we had operational was at least five years ahead of all the NSFNET bid submissions. minor ref:
http://www.garlic.com/~lynn/internet.htm#nsfnet

it was also starting in this time-frame that arpa and various other parts of the federal gov. were starting to mandate that internetworking protocol, internet, etc. was to be eliminated and everything converted to ISO OSI protocol ... along with the various GOSIP documents ... recent reference about the OSI mandate and GOSIP
http://www.garlic.com/~lynn/2006i.html#19 blast from the past on reliable communication

the NSFNET backbone had acceptable use guidelines about not being used for commercial purposes. the transition to today's internet somewhat involved providing provisions for commercial traffic (much of the backbone involved commercial companies loosely under NSFNET contract to provide networking for educational and non-commercial purposes).

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Code density and performance?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Code density and performance?
Newsgroups: comp.arch,alt.folklore.computers,bit.listserv.vmesa-l
Date: Wed, 17 May 2006 19:19:05 -0600
Peter Flass writes:
Don't know about HP, but it's also IBM:
http://www.research.ibm.com/journal/rd/483/heller.html


re:
http://www.garlic.com/~lynn/2006j.html#32 Code density and performance?

the above journal article reference mentions 1997 for millicode. it mentions millicode being needed for (at least) sie and pr/sm. pr/sm was done about a decade earlier on 3090 (and was significantly more difficult in native microcode).

this is a recent posting including email from jun81 mentioning sie effort for 3090
http://www.garlic.com/~lynn/2006j.html#27 virtual memory

this mention of macrocode
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

has a piece of email from mar81 that mentions the (then) announced amdahl 5880 with macrocode (used for the hypervisor implementation).

i had first started running into references regarding amdahl's work on macrocode the previous year (mar80).

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Wed, 17 May 2006 22:16:08 -0600
"Rostyslaw J. Lewyckyj" writes:
I dimly remember that one of IBM's OSes for the 360/370 ... systems did it that way. Allocate for the requested region size. Load whole program. Then page out to the paging devices.

the os heritage was loading application with real i/o into real storage. with the advent of virtual memory ... the application was being brought into virtual memory with real i/o ... and then would get paged.

this was also true of cms ... which did simulated real i/o, reading the complete program image into virtual memory and then having it paged. on smaller real memories, some application program images were nearly as large as, or even larger than, available real storage.

in the early 70s, i did a paged mapped filesystem for cp67/cms
http://www.garlic.com/~lynn/submain.html#mmap

which addressed most of those issues. after the product morph from cp67 to vm370 ... i moved the cms memory mapped filesystem support to vm370/cms. a trivially small subset of this, dealing with application images in virtual memory, was released as something called DCSS (discontiguous shared segments) ... recent posting mentioning the subject:
http://www.garlic.com/~lynn/2006i.html#22 virtual memory

a flavor of the paged mapped support was finally released as part of washington ... xt/370. this was a personal-computer based 370 with the very limited virtual 370 support necessary to run cms, and had extremely limited real storage for paging ... simple program loading of typical 370 applications represented a significant operation.

misc. past posts mentioning washington and/or xt/at/370:
http://www.garlic.com/~lynn/94.html#42 bloat
http://www.garlic.com/~lynn/96.html#23 Old IBM's
http://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
http://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
http://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
http://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
http://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
http://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
http://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
http://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
http://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
http://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
http://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
http://www.garlic.com/~lynn/2003h.html#40 IBM system 370
http://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
http://www.garlic.com/~lynn/2004m.html#7 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2004m.html#10 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2004m.html#11 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2004m.html#13 Whatever happened to IBM's VM PC software?
http://www.garlic.com/~lynn/2005f.html#6 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2006.html#10 How to restore VMFPLC dumped files on z/VM V5.1
http://www.garlic.com/~lynn/2006f.html#2 using 3390 mod-9s

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 May 2006 07:31:37 -0600
jmfbahciv writes:
Right. At that time DEC was trying to do what IBM did very, very well. And IBM was trying to do what DEC used to do very, very well. It was one of the ironies of the biz. It reminded me of the horse who always wanted the grass on the other side of the fence no matter which side he was on. But I suppose that's what competition is all about.

Clarification: I'm talking about each company's production line work, not its special case work.


it wasn't that the science center didn't know how to do the type of thing that DEC did very well, and/or didn't have as many customers as DEC in this market segment ... it was that the batch market segment was so much larger that when people thought of the company ... they automatically thought of the much larger batch market segment ... which seemed to totally obscure, in their minds, the fact that the science center had been doing as well as DEC in this totally different market segment.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

The Pankian Metaphor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 18 May 2006 07:59:54 -0600
Anne & Lynn Wheeler writes:
the os heritage was loading application with real i/o into real storage. with the advent of virtual memory ... the application was being brought into virtual memory with real i/o ... and then would get paged.

ref:
http://www.garlic.com/~lynn/2006j.html#36 The Pankian Metaphor

the other part of the os real memory heritage was that applications and program data were assumed to load at any arbitrary address in real memory ... along with the fact that code generation conventions arbitrarily intermixed instructions and data, and that there was extensive use of absolute address constants for all sorts of addressing operations.

for the program image on disk, these address constants (which could be found liberally throughout the program image) were stored in relative displacement form ... with a "relocatable address constant" directory appended to the file.

after the loader had read the program completely into memory (in later years, virtual memory), the loader then had to read the relocatable address constant directory ... which gave the location in the program image of each one of the address constants ... and access each one, changing it from its relative location value to its absolute location value.

after that the loader could transfer control to the starting application address.

cp67/cms eventually borrowed many applications from the os/360 software base (assemblers, compilers, and various other applications). cms, having started from a virtual memory base where potentially every application could be loaded at the same (virtual) address ... added a new program image type called "MODULE". after the program loader had pulled everything into memory and done its address constant swizzling bit ... it was possible to write the application program image to disk as a file ... which could subsequently be reloaded ... w/o the loader having to run through the image fixing up all the address constants.
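the load-time fixup described above can be sketched in miniature (a toy illustration; a real os/360 relocation directory carries more detail per adcon, e.g. length and sign, than this single-offset form):

```python
# Toy program image: a list of 32-bit words. The relocation directory
# lists the offsets (in words) of address constants that were stored
# in relative form and must be converted to absolute at load time.
def load(image, reloc_dir, load_base):
    loaded = list(image)                      # "read the program into memory"
    for off in reloc_dir:                     # walk the relocation directory
        loaded[off] = loaded[off] + load_base # relative -> absolute adcon
    return loaded

image = [0x100, 0x58, 0x200, 0x9]   # words; offsets 0 and 2 are adcons
loaded = load(image, reloc_dir=[0, 2], load_base=0x20000)
print([hex(w) for w in loaded])     # adcons rebased; other words untouched
```

the "MODULE" trick then amounts to writing `loaded` back to disk: if every reload lands at the same virtual `load_base`, the swizzle never has to be repeated.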

i was somewhat able to take advantage of this when i did paged mapped filesystem for cp67/cms
http://www.garlic.com/~lynn/submain.html#mmap

however, i also wanted to support library program images that could be simultaneously (read-only) shared in multiple different virtual address spaces. the issue with library images is that they are loaded at arbitrarily different addresses (in a virtual address space) in conjunction with arbitrarily different applications. one possibility is to assign each possible library program a fixed, unique virtual address. however, with only 16mbytes of virtual address space to work with, there quickly were more possible library programs than there were available unique virtual addresses. no specific application, run at any specific moment, would require more than 16mbytes of library programs .... but across all possible applications and all possible library programs ... there were more than could fit in 16mbytes. this is somewhat akin to the mvs "common segment" scenario mentioned here (even tho mvs had multiple different virtual address spaces)
http://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
http://www.garlic.com/~lynn/2006i.html#33 virtual memory

and the single virtual address space of s/38 going with 48bit virtual address (which later morphed into as/400)
http://www.garlic.com/~lynn/2006i.html#33 virtual memory
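the address-budget arithmetic is easy to see (a back-of-envelope sketch; the 64KB segment size is the 370 segment option, while the library count is a made-up illustration):

```python
# 370 virtual addressing: 24-bit addresses = 16MB of virtual address space.
# Carving it into 64KB segments leaves only 256 segment slots that could be
# handed out as permanently fixed, unique library addresses.
ADDR_SPACE = 16 * 1024 * 1024
SEGMENT    = 64 * 1024
slots = ADDR_SPACE // SEGMENT
print(slots)  # 256

# Any one application only needs a handful of library segments at a time,
# but a fixed-address scheme must budget for the TOTAL library population,
# which can easily exceed the slot count (hypothetical figure below).
total_libraries = 400
print(total_libraries > slots)  # True: fixed unique addresses no longer fit
```

which is why location-independent (relocatable-when-shared) images were the interesting problem: they let the same read-only segment appear at different addresses in different address spaces instead of consuming a global slot.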

various other systems designed from the ground up for virtual memory (like tss/360, multics) defined program image formats where location-dependent things like address constants were managed separately from the basic program image. while cms had been defined for virtual memory operation ... it borrowed a lot of applications, assemblers, compilers, coding conventions, etc ... from (real memory) os/360.

in doing the memory mapped filesystem work for cms ... attempting to create some number of (address) location-independent program images that could be directly memory mapped as (read-only) shared objects at arbitrarily different addresses in different virtual address spaces ... i spent a lot of time fiddling with address constants in various applications. numerous past postings describe the address constant fiddling activity
http://www.garlic.com/~lynn/submain.html#adcon

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 May 2006 08:17:06 -0600
Brian Boutel writes:
FIFO, like all these algorithms, tries to predict future behaviour from the past. LRU guesses that pages that have not been used for a while have moved out of the working set, LFU prefers pages that have not been used very often, and FIFO thinks that the working set typically moves through the page set, so old pages are more likely to have moved out of it.

The standard objection to FIFO is that it has the undesirable property of being able to perform worse given more real memory. LRU and friends have the property that if you repeat a run with more real memory, then, at any time t, the set of pages in memory will be a superset of the pages in memory at time t in the previous run, i.e the most important pages are in memory, and adding more memory will not cause any of them to be not in memory at the same point of the run.

FIFO cannot guarantee such good behaviour. It is quite easy to set up an execution trace that demonstrates this.


actually LRU replacement strategy has the property that it can degenerate to FIFO replacement strategy ... i.e. program behavior that cycles through memory location. An example is the apl\360 memory management within a workspace ... referenced here
http://www.garlic.com/~lynn/2006j.html#24 virtual memory

it worked ok in small real-memory workspaces ... but porting apl\360 to cms\apl with large virtual memory was disastrous. the issue is that some number of applications can have transient execution patterns that resemble such cyclic behavior.

the problem with LRU degenerating to FIFO is that even if the set of virtual pages being cycled is only one page larger than the available real memory ... every virtual page in the cycle will be faulted on every pass thru the cycle (aka the first/FIFO page in such a cycle is also the least recently used). Say there are N available real pages (for the cycle) and there are N+1 pages in the cycle. A page fault for page N+1 will replace page 1 (since it is both LRU and FIFO) ... but the next page to be accessed is page 1 ... the page just replaced. In the N+1 scenario, this replacing of the very next page that is going to be needed continues throughout the cycle.

global clock is a very good and efficient approximation to global LRU replacement strategy ... including degenerating to FIFO under various conditions. as i mentioned before
http://www.garlic.com/~lynn/2006i.html#42 virtual memory

in the early 70s, i came up with a coding sleight of hand that resulted in global clock (LRU approximation) degenerating to RANDOM rather than FIFO. in the N+1 case, where FIFO replacement was constantly selecting the very next page that was about to be used ... a RANDOM page would be selected for replacement instead. this significantly reduced the overall fault rate ... since FIFO had been constantly replacing the very page that was about to be used.
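the N+1 cyclic pattern is easy to simulate (a sketch only; "random" here is plain uniform victim selection, not the actual vm370 clock variant described above):

```python
import random

def faults(trace, frames, pick_victim):
    """Count page faults for a reference trace with a given victim picker."""
    mem, last_use, n_faults = [], {}, 0
    for t, page in enumerate(trace):
        last_use[page] = t              # record the reference (for LRU)
        if page in mem:
            continue                    # hit: no fault
        n_faults += 1
        if len(mem) < frames:
            mem.append(page)            # free frame available
        else:
            victim = pick_victim(mem, last_use)
            mem[mem.index(victim)] = page
    return n_faults

lru = lambda mem, last: min(mem, key=last.__getitem__)  # oldest last-use
rnd = lambda mem, last: random.choice(mem)              # uniform random

random.seed(1)
trace = list(range(5)) * 40        # cycle thru N+1=5 pages, N=4 frames
print(faults(trace, 4, lru))       # 200: every single reference faults
print(faults(trace, 4, rnd))       # well under 200 with random victims
```

with LRU the victim is always exactly the next page in the cycle, so every one of the 200 references faults; a random victim only collides with the next reference some of the time, which is the effect being described.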

misc. past posts on page replacement algorithms, virtual memory, etc
http://www.garlic.com/~lynn/subtopic.html#wsclock

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 May 2006 09:26:06 -0600
"Eric P." writes:
I had to look up the term. "pink scheduling" = OS thrashing loop that caused a mode light on the PDP-10 console to glow pink.

Yes, so you just don't make those OS data structures pagable, in the sense that they never page fault when touched. But resources can still dynamically grow and shrink by explicitly managing how the virtual pages map to physical frames, rather than using the paging mechanism.


what i did in vm370 ... was create a separate pseudo address space for each virtual machine to allow the page I/O infrastructure to move such data structures to/from the paging area. there was some amount of dynamic adaptive stuff that precluded getting into pathological behavior moving the stuff in/out.

administrative code had explicit checking to see whether the objects were present ... as opposed to the kernel taking a page fault. part of this was based on the LRA (load real address) instruction ... which set a condition code based on whether the page &/or tables were actually present.

at least one of the vm370-based time-sharing service bureaus
http://www.garlic.com/~lynn/submain.html#timeshare

enhanced that support to provide process migration in a loosely-coupled environment ... i.e. a collection of independent processor complexes (non-shared memory) that all had access to the same physical disks (a process and all its support data structures could be paged out from one processor complex to disk ... and then paged into a different processor complex).

this was useful for some of these operations that had gone 7x24 with customers world-wide ... and the processor complexes had to be taken down periodically for standard hardware maintenance purposes.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 May 2006 09:44:53 -0600
"Eric P." writes:
Any page table entry that references the frame number which is about to be reassigned has to be invalidated. Since a page may be shared between many processes, that can involve tracking down and zapping PTE's in multiple page tables.

cp67 used to support a limited number of shared pages in this manner. however, in the morph from cp67->vm370 ... the 370 offered a "64kbyte" segment option (16 4k pages) in the two-level table structures.

a virtual address space was represented by a "segment" table. segment table entries pointed to page tables. page tables had PTE entries that indicated real addresses. segment table entries in different segment tables (i.e. virtual address spaces) could point at the same page table. such a page table (associated with a specific segment or range of virtual addresses) could then be "shared" across multiple different address spaces. however, a real, shared page would only have a single, unique PTE in a single page table ... even tho multiple different segment tables could point at the same shared page table.

as previously mentioned, the original 370 architecture allowed for read-only shared segment protection. basically a spare bit was defined in the segment table entry (which included the pointer to a potentially shared page table) that precluded instructions executing in that virtual address space from storing/altering locations in that address range. this allowed the storage protection property to be virtual-address-space specific for the same shared pages, i.e. a shared page table is "store" protected for virtual address spaces with the segment protect bit set in their segment table entry ... however the same shared page table may not be "store" protected for virtual address spaces w/o the segment protect bit set.
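the sharing-with-per-space-protection just described can be sketched with toy tables (illustration only; the dict layout and field names are made up, not actual 370 table formats):

```python
# Toy two-level translation: each address space has a segment table whose
# entries point at page tables. Two spaces can point at the SAME page
# table (shared pages) while each keeps its own segment-protect bit.

page_table = {0: "frame-17", 1: "frame-42"}   # one shared page table

seg_table_a = {0: {"pt": page_table, "protect": True}}   # read-only view
seg_table_b = {0: {"pt": page_table, "protect": False}}  # writable view

def store(seg_table, seg, page, value):
    entry = seg_table[seg]
    if entry["protect"]:                 # protection lives in the segment
        raise PermissionError("segment protection violation")
    entry["pt"][page] = value            # page table itself is shared

store(seg_table_b, 0, 1, "frame-99")     # allowed in space B
print(seg_table_a[0]["pt"][1])           # frame-99: visible via space A too
try:
    store(seg_table_a, 0, 0, "x")        # blocked in space A
except PermissionError as e:
    print(e)
```

the key point the sketch shows: because the protect bit sits in the per-space segment table entry rather than in the shared PTE, the same pages can be writable in one address space and read-only in another.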

in the morph from cp67 to vm370, cms was reorganized to take advantage of the 370 shared segment architecture and the segment protection (allowing different address spaces to share the same information but preventing one cms from affecting other cms'es).

as previously mentioned, when the retrofit of virtual memory hardware to 370/165 fell behind schedule, the segment protect feature was dropped from the architecture (as well as some number of other features in the original 370 virtual memory architecture). this forced vm370/cms to revert to the storage protection mechanism that had been used in cp67.
http://www.garlic.com/~lynn/2006i.html#4 Mainframe vs. xSeries
http://www.garlic.com/~lynn/2006i.html#9 Hadware Support for Protection Bits: what does it really mean?
http://www.garlic.com/~lynn/2006i.html#23 Virtual memory implementation in S/370
http://www.garlic.com/~lynn/2006j.html#5 virtual memory

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Passwords for bank sites - change or not?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Passwords for bank sites - change or not?
Newsgroups: alt.computer.security
Date: Thu, 18 May 2006 10:36:25 -0600
Sheik Yurbhuti writes:
Reasonable password management isn't impractical. Requiring a password change every 6 months isn't unreasonable. It's a marvelous policy, and no normal person should have any problem relearning a sufficiently strong password twice a year, or using a suitable method of storage and retrieval.

You're trying to prop up an argument that flies in the face of every shred of common sense, and the advice of every knowledgeable security professional that ever lived. I seriously doubt you're going to get very far, but if you must you must I suppose. :(


the problems with passwords now start to crop up when you have 100 or more different passwords. post in similar thread
http://www.garlic.com/~lynn/2006j.html#28 Password Complexity

a shared-secret based authentication paradigm requires a unique password for every unique security domain ... as a countermeasure to cross-domain replay/impersonation attacks. lots of past posts about shared-secret based authentication
http://www.garlic.com/~lynn/subintegrity.html#secret

references to an old april 1st, password corporate directive from 1984
http://www.garlic.com/~lynn/2001d.html#52 A beautiful morning in ARM

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 May 2006 10:40:40 -0600
jmfbahciv writes:
But what gets done and used in-house can be extremely horrible to turn into a product that can be sold, maintained and put into the company's main software production line.

as i've mentioned before, we were operating a high-speed backbone but were told we weren't allowed to bid on the initial NSFNET RFP; recent posting discussing some of the subject:
http://www.garlic.com/~lynn/2006j.html#34 Arpa address

an older post also mentioning the subject
http://www.garlic.com/~lynn/internet.htm#0

the technology used in the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

was re-released the same year the resource manager was released (1976) (a version with a subset of the function had been released earlier)

it was also deployed for bitnet ... for a period in the early 80s, the number of bitnet nodes (independent of the internal network nodes) was also as large or larger than the number of arpanet/internet nodes.
http://www.garlic.com/~lynn/subnetwork.html#bitnet

this was also then propagated to europe for earn ... old earn email by the person who got the job to head it up
http://www.garlic.com/~lynn/2001h.html#65
and also mentioned in this recent posting
http://www.garlic.com/~lynn/2006i.html#31 virtual memory

some amount of the success of tcp/ip was having peer-to-peer (in addition to gateways and internetworking) that extended down to workstations and PCs.

as i've mentioned before, there were significant internal corporate politics by the communication group ... trying to maintain the terminal emulation paradigm for workstations and PCs (in the mid-80s there were a huge number of PCs connected to internal network nodes ... but they were mostly forced to be treated as emulated terminals ... as opposed to networking nodes):
http://www.garlic.com/~lynn/subnetwork.html#emulation

a few recent postings also discussing the subject
http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
http://www.garlic.com/~lynn/2006i.html#2 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#8 ALternatives to EMail
http://www.garlic.com/~lynn/2006j.html#31 virtual memory

for a little drift ... i've periodically made somewhat facetious statements that the installed customer base of products that originated from the science center work
http://www.garlic.com/~lynn/subtopic.html#545tech

was as large or larger than some other vendors' whole product lines; however, because the corporation's mainstream batch offerings so dominated the market ... many people's impression of what products were in the marketplace was obscured.

i've joked about some rivalry between the science center on the 4th floor and the work that went on at the 5th floor at 545tech sq. however, it was difficult to compare the two efforts in terms of installed customers ... since there was so much of the corporation behind the science center's effort.

i've periodically tried to put it into perspective by pointing out that there was a really humongous installed batch customer base ... and that the installed customer base for the science center technology was much smaller (than the batch customer base) ... and the number of internal installed customers for the science center technology was much smaller than the number of external customers.

now, one of my hobbies at the science center was producing my own internal production system product and distributing and supporting it directly for a limited number of internal customers. this was a small subset of the total number of internal customers (which was in turn smaller than the number of external customers, which in turn was much smaller than the number of the batch external customers) ... however at one point, the number of internal places running the heavily customized system that I directly distributed was about the same as the total number of systems (internal, external, over its complete lifetime) that ever ran the system produced on the 5th floor.

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

virtual memory

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtual memory
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 18 May 2006 11:13:35 -0600
jmfbahciv writes:
I'm not sure we ever had much bidding wars in the early 70s with your science group's output. Didn't IBM use your nook to do the R&D to eventually seep into the main stream of their products?

around the middle of the cp67 product cycle ... they split the cp67 group off from the science center into a standard product group ... which eventually moved to the 3rd floor and absorbed the boston programming center located there (they had done cps ... conversational programming system, which swapped and ran in a real memory environment under os/360 ... and provided interactive basic and interactive pl/i; jean sammet was also in the boston programming center). the cp67 product group continued to expand and also started the work to morph cp67 into vm370. it eventually outgrew the space on the 3rd floor and moved out to burlington mall, taking over the old service bureau corporation bldg. there (and growing to a couple hundred people).

misc. past posts mentioning the evolution of the cp67 (and then vm370) product group
http://www.garlic.com/~lynn/94.html#2 Schedulers
http://www.garlic.com/~lynn/98.html#7 DOS is Stolen!
http://www.garlic.com/~lynn/99.html#179 S/360 history
http://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
http://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
http://www.garlic.com/~lynn/2001m.html#47 TSS/360
http://www.garlic.com/~lynn/2001m.html#49 TSS/360
http://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
http://www.garlic.com/~lynn/2002e.html#27 moving on
http://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction
http://www.garlic.com/~lynn/2002h.html#59 history of CMS
http://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002m.html#9 DOS history question
http://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
http://www.garlic.com/~lynn/2002p.html#14 Multics on emulated systems?
http://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
http://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2003h.html#34 chad... the unknown story
http://www.garlic.com/~lynn/2003k.html#0 VSPC
http://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
http://www.garlic.com/~lynn/2004.html#20 BASIC Language History?
http://www.garlic.com/~lynn/2004.html#32 BASIC Language History?
http://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
http://www.garlic.com/~lynn/2004d.html#42 REXX still going strong after 25 years
http://www.garlic.com/~lynn/2004e.html#37 command line switches [Re: [REALLY OT!] Overuse of symbolic
http://www.garlic.com/~lynn/2004g.html#24 |d|i|g|i|t|a|l| questions
http://www.garlic.com/~lynn/2004g.html#35 network history (repeat, google may have gotten confused?)
http://www.garlic.com/~lynn/2004g.html#38 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004k.html#23 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
http://www.garlic.com/~lynn/2004m.html#6 a history question
http://www.garlic.com/~lynn/2004m.html#54 Shipwrecks
http://www.garlic.com/~lynn/2004n.html#7 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
http://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005h.html#37 Software for IBM 360/30
http://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd
http://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2005p.html#0 Article: The True Value of Mainframe Security
http://www.garlic.com/~lynn/2005q.html#12 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005q.html#14 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005s.html#35 Filemode 7-9?
http://www.garlic.com/~lynn/2005s.html#36 Filemode 7-9?
http://www.garlic.com/~lynn/2006b.html#18 {SPAM?} Re: Expanded Storage

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Arpa address

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Arpa address
Newsgroups: alt.folklore.computers
Date: Thu, 18 May 2006 12:57:43 -0600
eugene@cse.ucsc.edu (Eugene Miya) writes:
Different OSes isn't enough to cut it Lynn. We are just going to have to continue to disagree.

actually there are two somewhat separate heterogeneous capabilities provided by gateways ... one is technology/protocol heterogeneity ... and the other is administrative heterogeneity.

while the internal network appeared to be a single corporation ... the individual plant sites and individual countries had their own profit responsibilities, cost centers and administrative control (i.e. the internal network didn't evolve with any sort of corporate hdqtrs cooperation and/or funding).
http://www.garlic.com/~lynn/subnetwork.html#internalnet

in fact, several times internal network connectivity went in in spite of corporate hdqtrs ... many times you get into corporate hdqtrs and encounter people who believe that they must be in control of something. there was a different inhibitor to the growth of the internal network: there was a strong corporate and business requirement that all links leaving company premises had to be encrypted. at various times, there was the comment that the internal network had over half of all link encryptors installed in the world. talking different countries into allowing encrypted links between two different company facilities in two different countries ... where the links crossed country boundaries ... could be a real pain.

the gateway capability greatly simplified being able to deploy in environments where there were different networking protocols (even if they tended to be platformed many times on a variety of different OSes that happened to operate on the same hardware architecture) ... as well as totally different administrative and responsibility boundaries.

my claim has been that the cut-over of the arpanet/internet on 1/1/83 ... removed a significant inhibitor to interconnecting a large number of different nodes across various protocol, processor, as well as administrative domains
http://www.garlic.com/~lynn/subnetwork.html#internet

... something that had been found in the 70s with growing the internal network.

in the late 70s, i was told a number of times that the reason that the inter-IMP links had to be 56kbits ... was because of the significant inter-IMP protocol chatter trying to maintain a real-time globally consistent network-wide view of all detailed IMP operation (i.e. somewhat implying globally consistent administrative and policy operation).

the acceptable use policy (AUP, non-commercial) stuff with the NSF backbone did introduce some inhibitor to growth ... until enough of the commercial ISPs deployed their independent stuff. actually much of the NSF backbone was done by corporate entities under the umbrella of the NSF contract.

however there is a lot of folklore that the commercial entities providing resources under the NSF backbone contract umbrella ... actually contributed resources many times what was actually paid for by NSF. the scenario at the time was that there was significant unused bandwidth in the form of deployed dark fiber, and nobody had been able to come up with a transition strategy; i.e. the extensive existing fixed-cost infrastructure was being paid for mostly by the existing voice tariffs. if you cut the tariffs to encourage the next generation of bandwidth-hungry applications ... there would be a long stretch before the tariff*bandwidth product was again able to cover the fixed costs (bandwidth use had to increase enough to offset the cut in the tariff). the transition strategy then required an isolated incubator environment where the new generation of bandwidth-hungry applications could evolve ... w/o significant impact on the existing income infrastructure.
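the tariff*bandwidth arithmetic above can be sketched with made-up numbers (the 50% cut and 40%/yr demand growth below are purely illustrative assumptions, not figures from the period):

```python
def years_to_recover(tariff_cut: float, annual_growth: float) -> int:
    """Years until tariff * bandwidth revenue returns to its pre-cut
    level, assuming bandwidth demand compounds at annual_growth
    after the tariff is cut to tariff_cut of its old value."""
    t = 0
    # revenue factor after t years: tariff_cut * (1 + annual_growth) ** t
    while tariff_cut * (1 + annual_growth) ** t < 1.0:
        t += 1
    return t

# halve the tariff, bandwidth demand grows 40%/year
print(years_to_recover(0.5, 0.40))  # -> 3
```

i.e. even with aggressive demand growth, the existing income stream takes years to recover ... which is the "long stretch" the incubator environment was meant to avoid.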

one of the old AUPs:
NYSERNet, Inc. ACCEPTABLE USE POLICY

(Revised 12/14/89)

NYSERNet, Inc. recognizes as acceptable all forms of data communications across its network, except where federal subsidy of connections may require limitations. In such cases use of the network should adhere to the general principle of advancing research and education through interexchange of information among research and educational institutions in New York State.

In cases where data communications are addressed to recipients outside of the NYSERNet regional network and are carried across other regional networks or the Internet, NYSERNet users are advised that acceptable use policies of those other networks apply and may, in fact, limit use.

The President of NYSERNet, Inc. and his designees may at any time make determinations that particular uses are or are not consistent with the purposes of NYSERNet, Inc. which determinations will be binding on NYSERNet users.

NYSERNET - ACCEPTABLE USE POLICY (Adopted July 16, 1987)

This statement represents a guide to the acceptable use of NYSERNet facilities.

1. All use must be consistent with the purposes of NYSERNet.

2. The intent of the use policy is to make clear certain cases which are consistent with the purposes of NYSERNet, not to exhaustively enumerate all such possible uses.

3. The President of NYSERNet Inc. and his designees, may at any time make determinations that particular uses are or are not consistent with the purposes of NYSERNet. Such determinations will be reported in writing to the Board of NYSERNet Inc. for consideration and possible revision at the next meeting of the board.

4. If a use is consistent with the purposes of NYSERNet, then activities necessary to that use will be considered consistent with the purposes of NYSERNet. For example, administrative communications which are part of the support infrastructure needed for research and instruction are acceptable.

5. Use for scientific research or instruction at not-for-profit institutions of research or instruction in New York State is acceptable.

6. Use for a project which is part of or supports a scientific research instruction activity for the benefit of a not-for-profit institution of research or instruction in New York State is acceptable, even if any or all parties to the use are located or employed elsewhere. For example, communications directly between industrial affiliates engaged in support of a project for such an institution is acceptable.

7. Use for scientific research or instruction at for-profit institutions may or may not be consistent with the purposes of NYSERNet, and will be reviewed by the President or his designees on a case-by-case basis.


.. snip ...

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Arpa address

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Arpa address
Newsgroups: alt.folklore.computers
Date: Thu, 18 May 2006 13:12:34 -0600
ref:
http://www.garlic.com/~lynn/2006j.html#34 Arpa address
http://www.garlic.com/~lynn/2006j.html#43 virtual memory
http://www.garlic.com/~lynn/2006j.html#45 Arpa address

what the heck ... here is another aup
Interim 3 July 1990 NSFNET Acceptable Use Policy

The purpose of NSFNET is to support research and education in and among academic institutions in the U.S. by providing access to unique resources and the opportunity for collaborative work.

This statement represents a guide to the acceptable use of the NSFNET backbone. It is only intended to address the issue of use of the backbone. It is expected that the various middle level networks will formulate their own use policies for traffic that will not traverse the backbone.

(1) All use must be consistent with the purposes of NSFNET.

(2) The intent of the use policy is to make clear certain cases which are consistent with the purposes of NSFNET, not to exhaustively enumerate all such possible uses.

(3) The NSF NSFNET Project Office may at any time make determinations that particular uses are or are not consistent with the purposes of NSFNET. Such determinations will be reported to the NSFNET Policy Advisory Committee and to the user community.

(4) If a use is consistent with the purposes of NSFNET, then activities in direct support of that use will be considered consistent with the purposes of NSFNET. For example, administrative communications for the support infrastructure needed for research and instruction are acceptable.

(5) Use in support of research or instruction at not-for-profit institutions of research or instruction in the United States is acceptable.

(6) Use for a project which is part of or supports a research or instruction activity for a not-for-profit institution of research or instruction in the United States is acceptable, even if any or all parties to the use are located or employed elsewhere. For example, communications directly between industrial affiliates engaged in support of a project for such an institution is acceptable.

(7) Use for commercial activities by for-profit institutions is generally not acceptable unless it can be justified under (4) above. These should be reviewed on a case-by-case basis by the NSF Project Office.

(8) Use for research or instruction at for-profit institutions may or may not be consistent with the purposes of NSFNET, and will be reviewed by the NSF Project Office on a case-by-case basis.


... snip ...

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

where do I buy a SSL certificate?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: where do I buy a SSL certificate?
Newsgroups: alt.security,alt.computer.security
Date: Thu, 18 May 2006 13:31:04 -0600
Ask Josephsen <ask_rem@rem_minreklame.dk> writes:
I've got at website that needs to run https and I understand you need to purchase an SSL certificate to do so. But what is a fair price and does it matter from who I buy and what I buy?

it is also possible to generate your own self-signed SSL certificate and have clients that need to access your site ... validate the certificate via some out-of-band process.

i regularly access some number of https sites with self-signed certificates ... where my browser initially complains it is signed by an unknown certification authority (itself) and gives me an opportunity to view it, accept it for the current session, and/or load it for long term acceptance (basically into the same repository that contains the certification authority self-signed digital certificates that were loaded as part of building the browsers).
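one minimal sketch of such an out-of-band check, in python (the certificate bytes below are a stand-in; in practice you'd read the DER file the site operator published): compute the certificate's sha-256 fingerprint locally and compare it against a fingerprint obtained over some separate trusted channel (phone, printed letter, etc.).

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, formatted
    the way browsers usually display it (AA:BB:CC:...)."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# stand-in bytes; a real check would use the actual certificate file
fp = cert_fingerprint(b"\x30\x82\x01\x0a")
print(len(fp))  # 32 bytes -> 64 hex chars + 31 colons = 95
```

if the locally computed fingerprint matches the one delivered out-of-band, accepting the self-signed certificate for long-term use carries essentially the same assurance as the out-of-band channel itself.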

if you really want to buy one ... go to the security menu in the browsers (that will typically be used by your clients) and list the currently loaded (self-signed certification authority) digital certificates ... this will give you an indication of which certification authorities the browsers are currently configured to automatically accept.

numerous collected past postings mentioning ssl and ssl digital certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

where do I buy a SSL certificate?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: where do I buy a SSL certificate?
Newsgroups: alt.security,alt.computer.security
Date: Thu, 18 May 2006 13:38:02 -0600
Anne & Lynn Wheeler writes:
i regularly access some number of https sites with self-signed certificates ... where my browser initially complains it is signed by an unknown certification authority (itself) and gives me an opportunity to view it, accept it for the current session, and/or load it for long term acceptance (basically into the same repository that contains the certification authority self-signed digital certificates that were loaded as part of building the browsers).

the real major difference between a self-signed digital certificate that you generate ... and a self-signed digital certificate generated by some certification authority ... is that the certification authorities have convinced the browser vendors (typically by paying them) to preload their digital certificates into the browser's digital certificate repository when the browser is built.

however, it is a straightforward operation for clients to do post-install administrative operations on their browser's digital certificate repository (adding and/or deleting digital certificates).

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Arpa address

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Arpa address
Newsgroups: alt.computer.security,alt.folklore.computers
Date: Thu, 18 May 2006 14:39:43 -0600
ibuprofin@painkiller.example.tld (Moe Trin) writes:
"the internal network"... meaning?

404 compliant.

0836 Who talks TCP?. D. Smallberg. January 1983. (Format: TXT=43643 bytes) (Obsoletes RFC0835) (Obsoleted by RFC0837) (Status: UNKNOWN)


re:
http://www.garlic.com/~lynn/2006j.html#34 Arpa address

finger slip
http://www.garlic.com/~lynn/subnetwork.html#internalnet

the internal network passed 1000 nodes on 10jun83 (and was already rapidly approaching that on 1jan83)
http://www.garlic.com/~lynn/internet.htm#22
http://www.garlic.com/~lynn/99.html#112

this has note regarding CSNET from 22oct82
http://www.garlic.com/~lynn/internet.htm#0

here is status note from CSNET dated 30dec82 getting prepared for switch-over
http://www.garlic.com/~lynn/2000e.html#18

short extract from note included above:
The transition requires a major change in each of the more than 250 hosts on the ARPANET; as might be expected, not all hosts will be ready on 1 January 1983. For CSNET, this means that disruption of mail communication will likely result between Phonenet users and some ARPANET users. Mail to/from some ARPANET hosts may be delayed; some host mail service may be unreliable; some hosts may be completely unreachable. Furthermore, for some ARPANET hosts this disruption may last a long time, until their TCP-IP implementations are up and working smoothly. While we cannot control the actions of ARPANET hosts, please let us know if we can assist with problems, particularly by clearing up any confusion. As always, we are <cic@csnet-sh> or (617)497-2777.

... snip ...

slight reference from my rfc index
http://www.garlic.com/~lynn/rfcietff.htm

various archeological references, including pointers to 801, 832, 833, 834, 835, 836, 837, 838, 839, 842, 843, 845, 846, 847, 848, 876
http://www.garlic.com/~lynn/rfcietf.htm#history

in the normal frame version of the rfc index, the rfc summaries are brought up in the bottom frame. in the RFC summary field, clicking on the author name brings up all RFCs by that author. clicking on the RFC number for the summary brings up all keywords associated with that RFC. clicking on the ".txt=nnn" field retrieves the actual RFC. clicking on other RFC numbers will bring up the respective summary for that RFC.

as an aside ... this information from my rfc index used to be included as section 6.10 in older STD1s
http://www.garlic.com/~lynn/rfcietf.htm#obsol

sample of rfc summaries from
http://www.garlic.com/~lynn/rfcidx2.htm
846 - Who talks TCP? - survey of 22 February 1983, Smallberg D., 1983/02/23 (14pp) (.txt=45597) (Obsoleted by 847) (Obsoletes 845)
845 - Who talks TCP? - survey of 15 February 1983, Smallberg D., 1983/02/17 (14pp) (.txt=45983) (Obsoleted by 846) (Obsoletes 843)
844 - Who talks ICMP, too? - Survey of 18 February 1983, Clements R., 1983/02/18 (5pp) (.txt=9078) (Updates 843)
843 - Who talks TCP? - survey of 8 February 83, Smallberg D., 1983/02/09 (14pp) (.txt=46193) (Obsoleted by 845) (Updated by 844) (Obsoletes 842)
842 - Who talks TCP? - survey of 1 February 83, Smallberg D., 1983/02/03 (14pp) (.txt=45962) (Obsoleted by 843) (Obsoletes 839)
839 - Who talks TCP?, Smallberg D., 1983/01/26 (14pp) (.txt=45175) (Obsoleted by 842) (Obsoletes 838)
838 - Who talks TCP?, Smallberg D., 1983/01/20 (14pp) (.txt=45033) (Obsoleted by 839) (Obsoletes 837)
837 - Who talks TCP?, Smallberg D., 1983/01/12 (14pp) (.txt=44864) (Obsoleted by 838) (Obsoletes 836)
836 - Who talks TCP?, Smallberg D., 1983/01/05 (13pp) (.txt=43643) (Obsoleted by 837) (Obsoletes 835)
835 - Who talks TCP?, Smallberg D., 1982/12/29 (13pp) (.txt=42959) (Obsoleted by 836) (Obsoletes 834)
834 - Who talks TCP?, Smallberg D., 1982/12/22 (13pp) (.txt=42764) (Obsoleted by 835) (Obsoletes 833)
833 - Who talks TCP?, Smallberg D., 1982/12/14 (13pp) (.txt=42973) (Obsoleted by 834) (Obsoletes 832)
832 - Who talks TCP?, Smallberg D., 1982/12/07 (13pp) (.txt=42751) (Obsoleted by 833)

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Arpa address

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Arpa address
Newsgroups: alt.computer.security,alt.folklore.computers
Date: Thu, 18 May 2006 14:54:09 -0600
ibuprofin@painkiller.example.tld (Moe Trin) writes:
I imagine you recognize several of those by sight. These are only the "current" assignment/allocation dates - network 10 was ARPA, and is now RFC1918, network 3, 4, and 8 were BBN, now GE and Level3, and so on.

re:
http://www.garlic.com/~lynn/2006j.html#34 Arpa address
http://www.garlic.com/~lynn/2006j.html#43 virtual memory
http://www.garlic.com/~lynn/2006j.html#45 Arpa address
http://www.garlic.com/~lynn/2006j.html#46 Arpa address
http://www.garlic.com/~lynn/2006j.html#49 Arpa address

trivia ... guess who asked for and obtained class-A subnet address 9??

hint, the location is referenced in the email dated 22oct82 email included in this post
http://www.garlic.com/~lynn/internet.htm#0
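for readers who have only ever seen CIDR: "class A" here refers to the pre-CIDR classful scheme, where the leading bits of the first octet alone determined the network class. a quick sketch (the function name is made up for illustration):

```python
def ipv4_class(first_octet: int) -> str:
    """Classify an IPv4 address by its first octet under the
    pre-CIDR classful scheme (RFC 791 era)."""
    if 0 <= first_octet <= 127:
        return "A"    # leading bit 0: 8-bit network, 24-bit host part
    if 128 <= first_octet <= 191:
        return "B"    # leading bits 10: 16-bit network, 16-bit host part
    if 192 <= first_octet <= 223:
        return "C"    # leading bits 110: 24-bit network, 8-bit host part
    return "D/E"      # multicast / reserved

print(ipv4_class(9))    # -> A  (net 9)
print(ipv4_class(10))   # -> A  (net 10, ARPA, now RFC 1918)
```

a single class A thus carried a 24-bit host part ... roughly 16.7 million addresses, which is why requests for one were scrutinized against projected host counts.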

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Hey! Keep Your Hands Out Of My Abstraction Layer!

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hey! Keep Your Hands Out Of My Abstraction Layer!
Newsgroups: sci.electronics.design,comp.dcom.lans.ethernet,comp.protocols.tcp-ip,alt.cellular.bluetooth,alt.internet.wireless,alt.folklore.computers
Date: Thu, 18 May 2006 15:07:55 -0600
Keith writes:
This is only half correct. IBM, at the time, was experimenting with skunk-works types of projects to try to get around some of the bureaucratic stodginess they'd built up. The PC was one of these semi-autonomous projects. No, it wasn't expected to turn the world on its ear, and likely would have been killed if anyone thought it really would. Of course it wasn't done with IBM's "proven project methods". That was the point of these independent projects (every development lab had them).

they were supposed to be independent business units ... and they were funded to be lean and mean. however, they frequently conserved costs by being co-located at an existing corporate facility ... and had to deal with various bureaucratic issues at those locations.

the frequent response to claiming that you weren't supposed to be subject to some bureaucratic process ... was that those rules only applied to OTHER bureaucratic processes ... they DID NOT apply to THEIR bureaucratic processes. when nearly all of the bureaucrats made such assertions ... you found that you weren't funded and/or staffed to handle such bureaucratic processes.

on a smaller scale in the 70s, most labs were supposed to set aside some portion of their budget for advanced technology projects ... and you found various labs sponsoring "adtech" conferences. however, going into the late 70s, you found some number of sites heavily using their "adtech" resources for firefighting in normal day-to-day product operation. as a result there was a dearth of internal adtech conferences during the late 70s and early 80s.

i managed to run one in mar82 out of SJR ... but it had been the first in a number of years. minor post mentioning the event and listing the CFP
http://www.garlic.com/~lynn/96.html#4a

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Passwords for bank sites - change or not?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Passwords for bank sites - change or not?
Newsgroups: alt.computer.security
Date: Thu, 18 May 2006 15:29:53 -0600
Borked Pseudo Mailed writes:
If you find using pseudo-random passwords and changing them every 6 months a "severe problem" you have absolutely no business at ALL hanging out in a security oriented newsgroup handing out advice.

This is one of the dumbest debates I've seen here. Of COURSE changing your password regularly is a good thing. Only totally clueless newbies or completely lazy slobs would say otherwise.


i know quite a few people who have on the order of 100 passwords, and effectively only use online banking once a month for bill payment. remembering a pseudo-random password that you only use once a month (and which is possibly one out of 100) is a non-trivial task. it is also somewhat difficult to convince such people that they have to change such a password after every six uses.

one of the reasons that the banking community is looking at moving to biometrics is that something like 30 percent of the population are reported to write their PIN on their debit card. the knee-jerk reaction frequently has been that biometrics like fingerprints aren't very secure.

the counter argument is ... not very secure compared to what? given a person the choice of registering one of their fingers that is least likely to handle the card ... which is more difficult for a crook:

1) to copy a pin written on a lost/stolen card and replay it

or

2) to lift a fingerprint (that isn't very likely to be there) off a lost/stolen card and replay it

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/

Arpa address

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Arpa address
Newsgroups: alt.computer.security,alt.folklore.computers
Date: Thu, 18 May 2006 16:05:02 -0600
ArarghMail605NOSPAM writes:
RegDate: 1988-12-16

ref:
http://www.garlic.com/~lynn/2006j.html#50 Arpa address

old email that had been sent to 3-4 people

Date: 16 December 1988, 16:40:09 PST
From: somebody
Subject: Class A Network number

As a welcomed gift from the Internet, my request for a Class A network number for IBM has been approved. Initially we decided to go with multiple class B numbers because it would allow us to have multiple connections to the Internet. However, as time passed, and IP envy increased, I found it necessary to re-evaluate our requirements for a Class A number. My main concern was still the issue of connectivity to the rest of the Internet and the technical constraints that a Class A address would present. At Interop 88 I discussed my concerns with Jon Postel and Len Bosak. Len indicated that although a Class A number would still restrict us to 1 entry point for all of IBM from the Internet, it would not preclude multiple exit points for packets. At that point it seemed as if Class A would be ok and I approached Jon Postel and the network number guru at SRI to see if my request would be reconsidered. It turns out that the decision to deny us in the past was due to the numbers I projected for the number of hosts on our IBM Internet in 5 years. Based on that number, they couldn't justify giving us a full Class A. Can't blame them. So after Interop, I sent in a new request and increased our projected needs above a threshold which would warrant a Class A. Although I doubt we will ever use the full address space in 20 years let alone 5, I did what was necessary to get the number. However, the application went in quite some time ago and I still hadn't received a response. Yesterday I found out that it was because I had put down an incorrect U.S. Mail address for our sponsor!!! These people are tough. Anyway, after Postel informed me about my error, I corrected it and sent in the updated application again. The result was the issuance today of a Class A network number for IBM. Being an old Beatles fan, I asked for Number 9. Cute huh? Whatever. Anyway, that's what we got. Consider it a Christmas present from the Internet.

As many of you know, I will be leaving IBM at the end of this year. Obtaining this number was the last thing I wanted to do for IBM and the IBM Internet project. The hard part lies ahead. We still have 10 class B numbers. A lot of engineering of the network remains to be done. I will leave that up to you folks. xxxxx will be assuming responsibility for the project after I leave. I wish you all the best. It's been fun working with you on this!! My only regret is that I didn't have more time for it.


... snip ... top of post, old email index

for other drift, misc. past posts mentioning interop 88
http://www.garlic.com/~lynn/subnetwork.html#interop88

-- Anne & Lynn Wheeler | http://www.garlic.com/~lynn/




previous, next, index - home