List of Archived Posts

2003 Newsgroup Postings (03/31 - 04/08)

early vnet & exploit
History of project maintenance tools -- what and when?
History of project maintenance tools -- what and when?
Alpha performance, why?
Alpha performance, why?
Alpha performance, why?
recent cp67/vm370 walks down memory lane
unix
Alpha performance, why?
Alpha performance, why?
Alpha performance, why?
Security Certifications?
Alpha performance, why?
Alpha performance, why?
Alpha performance, why?
Alpha performance, why?
Alpha performance, why?
History of project maintenance tools -- what and when?
Could somebody use SCSH, Sheme, or Lisp to create the "Lispm"
Disk prefetching
Alpha performance, why?
"Super-Cheap" Supercomputing
Could somebody use SCSH, Sheme, or Lisp to create the "Lispm"
Alpha performance, why?
New RFC 3514 addresses malicious network traffic
New RFC 3514 addresses malicious network traffic
Alpha performance, why?
Ibm's disasters in the 70's
New RFC 3514 addresses malicious network traffic
Natl. Crypto Museum was: reviving Multics -- Computer Museum
Alpha performance, why?
Alpha performance, why?
Alpha performance, why?
PDP10 and RISC
Lisp Machines
Public Encryption Key
Super Anti War Computers
unix
unix
1130 Games WAS Re: Any DEC 340 Display System Doco ?
inter-block gaps on DASD tracks
SLAC 370 Pascal compiler found
SLAC 370 Pascal compiler found
ECPS:VM DISPx instructions
unix
Timesharing TOPS-10 vs. VAX/VMS "task based timesharing"
Any DEC 340 Display System Doco ?
ECPS:VM DISPx instructions
Alpha performance, why?
unix
Alpha performance, why?
inter-block gaps on DASD tracks
ECPS:VM DISPx instructions
Alpha performance, why?
ECPS:VM DISPx instructions
Alpha performance, why?
ECPS:VM DISPx instructions

early vnet & exploit

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Mon, 31 Mar 2003 08:35:36 -0700
Newsgroups: bit.listserv.vmesa-l
Subject: early vnet & exploit
At 8:08 AM 3/31/2003 -0600, Graeme Moss wrote:
Hi Listers,

I have come across the directory statement D8ONECMD which can be used to limit the number of CP commands passed by diag08 to one.

Where would it be used and why ?

How does it plug an integrity hole ?

Cheers Graeme


probably mid-70s (nearly 30 years ago) ... before the VNET/NJE/JES networking product was announced and made available to customers .... it first appeared on the internal network. It was possible to do "msg vnet cmd remotemachine something-or-another". The "something-or-another" could be a CP command ... that was first checked by vnet. somebody tried putting a command line separator in the "something-or-another" and hiding other cp commands that wouldn't be checked. a simple case would be a msg to a user on the remote machine ... the remote vnet would use diagnose 8 to send the message to the user on that machine .... but I think as a joke, somebody tried #cp shutdown.

the internal network was larger than arpanet/internet up until sometime in '85. part of the reason was that it had native efficient line drivers and didn't have a lot of the architectural shortcomings that JES2/NJE came with. It also provided support for the logical equivalent of a gateway; both the arpanet (with IMPs, until the great 1/1/83 switchover to internet protocol) and JES2 required a homogeneous implementation. The JES2/NJE implementation was especially onerous. NJE started out with the JES2 internal 255-entry pseudo device table ... and any entries left over could be used for network node definitions (possibly 180-200 entries). At the time that JES2/NJE first shipped to customers, the internal network was well over 255 nodes. NJE also had the unfortunate characteristic that if it saw something with either the origin or the destination node not in the local internal table .... NJE trashed it (so it effectively couldn't operate as any kind of network intermediate node). The other characteristic was that NJE jumbled various different protocols together in the header. Slight header variations between releases could crash each other's MVS systems (there is a famous incident of a file originating in san jose crashing an mvs system in hursley). Eventually the MVS/NJE nodes were relegated to end-nodes behind VNET intermediate nodes. A crop of special VNET/NJE (non-native) line drivers grew up, each specific to a different release of NJE .... where it was the responsibility of the VNET/NJE line drivers to provide a canonical translation of the NJE header and convert it to the exact format required by the JES2/NJE on the other end of the line.

misc 1/1/83 discussions:
https://www.garlic.com/~lynn/internet.htm#22 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#6 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
https://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002q.html#4 Vector display systems
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN

past discussions of internal network, nje, jes2, vnet, etc
https://www.garlic.com/~lynn/95.html#7 Who built the Internet? (was: Linux/AXP.. Reliable?)
https://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002q.html#31 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003.html#68 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003d.html#59 unix

misc. discussions of size of internal network:
https://www.garlic.com/~lynn/97.html#2 IBM 1130 (was Re: IBM 7090--used for business or science?)
https://www.garlic.com/~lynn/97.html#26 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#56 Earliest memories of "Adventure" & "Trek"
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#109 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000c.html#46 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#60 Disincentives for MVS & future of MVS systems programmers
https://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
https://www.garlic.com/~lynn/2000e.html#14 internet preceeds Gore in office.
https://www.garlic.com/~lynn/2000e.html#30 Is Tim Berners-Lee the inventor of the web?
https://www.garlic.com/~lynn/2000g.html#14 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2000g.html#24 A question for you old guys -- IBM 1130 information
https://www.garlic.com/~lynn/2000g.html#39 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
https://www.garlic.com/~lynn/2001e.html#12 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001e.html#34 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001h.html#34 D
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#28 Title Inflation
https://www.garlic.com/~lynn/2001j.html#29 Title Inflation
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#56 E-mail 30 years old this autumn
https://www.garlic.com/~lynn/2001l.html#34 Processor Modes
https://www.garlic.com/~lynn/2001l.html#35 Processor Modes
https://www.garlic.com/~lynn/2001l.html#45 Processor Modes
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#9 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#11 Security Proportional to Risk (was: IBM Mainframe at home)
https://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002e.html#47 Multics_Security
https://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
https://www.garlic.com/~lynn/2002k.html#18 Unbelievable
https://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#23 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002k.html#42 MVS 3.8J and NJE via CTC
https://www.garlic.com/~lynn/2002n.html#35 VR vs. Portable Computing
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2003.html#68 3745 & NCP Withdrawl?
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
https://www.garlic.com/~lynn/2003d.html#59 unix
https://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN

History of project maintenance tools -- what and when?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of project maintenance tools -- what and when?
Newsgroups: alt.folklore.computers,alt.os.multics,alt.sys.pdp10
Date: Mon, 31 Mar 2003 15:59:23 GMT
dpeschel@eskimo.com (Derek Peschel) writes:
The earliness of the date (1965) made me wonder even more strongly why these ideas have taken so long to catch on. How widely did papers about CTSS' and Multics' programming environments travel? What kinds of people read them? (I assume access to the information isn't what holds people up, but I could be wrong.)

note that some of the CTSS people went to multics on the 5th floor of 545 tech sq. and some went to csc on the 4th floor of 545 tech sq. (and did cp/40 and cms starting 1965 ... which then morphed into cp/67 and eventually begat vm/370).
https://www.garlic.com/~lynn/subtopic.html#545tech

when i encountered cp/67 and cms, they already had update and compare commands (but nothing from compare could be used by update). after i joined csc, the multi-level update scheme was developed ... and one of the MIT students did the parallel merge support, which had some diff support (however, the parallel merge support never propagated into the vm/370 version or shipped to customers).

sometime after the mid-70s a diff command was developed internally ... and after a presentation at share ... something similar was developed and made available on the waterloo/share tape (for lots of stuff, if you couldn't get it released from internal, make a technology presentation at share ... and have some of the share community reimplement it).

A very specific motivation for the diff command was release-to-release transition. Standard product procedure for a new release was to permanently apply all accumulated service and other updates to the base source file and then freshly resequence it by 1000. Customers had loads of their own source update files that were then no longer usable. So the process was to take all of the previous, base product release source and updates and create a temporary source file (using the old sequence numbers). Then run a diff between that file (with old sequence numbers) and the source file from the new release (which could have new development that hadn't shown up in previous files distributed to customers). When that diff/update file was applied to the (old) temporary source file ... it would result in the equivalent executable to the new release .... but would have the "old" sequence numbers. Customers then (manually) reconciled any update conflicts (with their local updates and anything from the new release).

There was a companion program that I called reseq ... which, given two otherwise identical source files that differed only in the sequence field, would take any number of updates that applied to one of the base source files and convert their sequence numbers to correspond with the other source file.
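the core of a reseq-style remapping can be sketched roughly as follows. this is a hypothetical simplification (function names invented, sequence numbers assumed to sit in the last 8 columns of each card image, and the real CMS UPDATE control-card format is not modeled):

```python
def seq_map(old_file, new_file):
    """Pair lines of two otherwise-identical files and map each old
    sequence number to the new one at the same position.
    Sequence field assumed to be the last 8 columns of each card."""
    old = old_file.splitlines()
    new = new_file.splitlines()
    assert len(old) == len(new), "files must have identical card counts"
    return {o[-8:]: n[-8:] for o, n in zip(old, new)}

def reseq(update_lines, mapping):
    """Rewrite the sequence field of each update card via the mapping;
    unknown sequence numbers are left untouched."""
    return [ln[:-8] + mapping.get(ln[-8:], ln[-8:]) for ln in update_lines]
```

given the base file with old sequence numbers and the resequenced file from the new release, an update keyed to the old numbers can be converted to apply against the new ones.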

previous refs:
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#75 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#76 History of project maintenance tools -- what and when?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

History of project maintenance tools -- what and when?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of project maintenance tools -- what and when?
Newsgroups: alt.folklore.computers,alt.os.multics,alt.sys.pdp10
Date: Mon, 31 Mar 2003 16:14:55 GMT
dpeschel@eskimo.com (Derek Peschel) writes:
More or less as I asked Tom: Was this work described anywhere? How many people (not working on a /360) had access to it? How many (and what kinds) of people actually read it?

cp/40 was described in a number of places. Cp/67 was initially available at CSC and Lincoln Labs ... and then in jan. '68 they installed it at the univ. I was at. At the spring '68 share meeting in houston it was announced as an ibm product .... and the documentation could then be ordered from ibm by anybody.

it got distribution in the commercial sector and a large number of university datacenters; however, I didn't see a lot of bleed-over into the academic community. for instance a significant fraction of share membership was university datacenters .... and a lot of stuff found on the share waterloo tape was from university datacenters ... and even some amount of the vmshare computer conferencing in the mid & late '70s was by people from university datacenters.

i don't have any feeling for how many people would have read the stuff about update et al. Misc. other stuff done at CSC included the internal network and script (both with runoff-style "dot" commands as well as "markup language"). the internal network technology was announced as a product and there were other vendors that implemented interfaces to it (like the university community with respect to bitnet and earn). script spawned a number of script clones on a number of other platforms .... and of course everybody knows that gml begat sgml which begat html, et al.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch
Date: Mon, 31 Mar 2003 18:36:34 GMT
hack@watson.ibm.com (hack) writes:
Actually you can, as long as you "demand" nicely! Again, in the days of tight real memory, the VM/370 kernel was partially pageable even though it ran with address translation off. Pageable modules were page-aligned of course, the CALL mechanism checked for presence and brought in the missing page if needed. Not-recently-used modules could be paged out, and the frame re-used for other things (in this case, actual virtual-memory support). I believe modules had to be read to their assigned real address, so that a virtual page might have to be moved. One could imagine however using position-independent modules, backed up by appropriate book-keeping, to achieve the same effect without any address translation!

i first did the paged kernel trick with pieces of the cp/67 kernel. when stuff was executing .... it used the same page-fixed mechanism that was used for pages fixed for I/O transfer (which also ran real). There was a pseudo address space for the kernel .... and call/return did the appropriate fetch/lock/unlock (again using the same mechanism used for virtual I/O transfers). It didn't ship in CP/67 but was part of the standard vm/370 product. This used the LRA (load real address) instruction which referenced a set of hardware architecturally defined virtual address tables ... but there was never any transfer of control into that address space ... aka it set up all the necessary tables and used the page I/O subsystem for doing the disk transfer. Also the standard page replacement algorithm would find & select the pages based on standard replacement logic (if they weren't pinned/locked).

I later expanded that ... where each process was given a secondary pseudo address space into which various kernel control blocks could be mapped. One example was the backing-store/disk-map tables for all of the process's virtual pages. If the process was suspended .... and its virtual pages written out ... depending on load .... the tables mapping those pages on disk could also be written out. This shipped as part of the resource manager:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#35 unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics

there was also a "disk cleaner" (page migration) that would check for low-usage pages on higher-speed transfer devices (like fixed-head disks or drums) and migrate them to lower speed disks.

at least one vm/370 time-sharing service bureau expanded that support to include "all" process control blocks .... allowing a process to be checkpointed to disk ... migrate to a different processor complex with access to the same disk-pool .... or even migrate to a different processor complex with transfer over a network (aka waltham to san fran ... cases where a processor complex had to be brought down 3rd shift over the weekend for scheduled maintenance).

random past discussions of paging kernel & other pieces:
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#11 REXX
https://www.garlic.com/~lynn/2000b.html#32 20th March 2000
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#64 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#54 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 18:51:23 GMT
hack@watson.ibm.com (hack) writes:
When real memory is tight, the file-read can indeed amount to a copy from file-space to paging-space, and it may be the beginning of the program that got paged out. Ouch -- unless startup code was carefully placed at the end, just to deal with this effect.

Peter's original comment that smart loaders are needed is to the point.

When relocation is involved, the program text ends up being scanned twice -- once for relocation, then again for execution. (This is relevant only when program text can contain address constants -- on many platforms that is not the case.) On my platform, where relocation is an issue, and whose origins are in the mid-70s when memory was tight, the loader processes the program text backwards to avoid thrashing, and this partially undoes the page-out of the beginning of the program.

I never changed this because now that real memory is abundant, it does not matter anymore. But the effect of mostly-sequential file-read vs page-at-a-time mmap is still there, with a vengeance. It's not just startup cost, btw: many programs run in their entirety in response to a single request (e.g. compiler), and the difference in elapsed time is significant. (When program text is shared and re-used, mmap works fine of course, because the pages are typically already resident.)


I did several things in my original mmap'ing of the cms file system (originally on cp/67 and then migrated to vm/370). The CMS user/application environment had no knowledge of the overall system characteristics; that was also where the filesystem and loader ran. the mmap interface that i provided to cms .... allowed for cms to provide some hints. Then below the line in the kernel mmap implementation it would look at overall real storage contention and sizes and adapt some stuff.

Under severe real storage constraints it would just update the lower level tables and return .... then individual pages would be brought in via the standard virtual page fault operation. For totally unconstrained environment (at least comparing the mmap specification to the current real storage size and the concurrent paging activity) .... it would do the mmap and then start a pre-fetch for all pages before returning to the process ... either immediately or delayed by some amount. If the process touched a page before the prepage was complete .... then the standard virtual serialization would do the right thing. It could also select to prepage a subset (along with hints) and either return immediately or delayed.
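the load-adaptive decision described above might be sketched like this. all names and thresholds are invented for illustration, not the actual cp kernel implementation:

```python
LOW_PAGING_RATE = 50  # pages/sec; purely illustrative threshold

def mmap_prefetch_plan(pages, free_frames, paging_rate, hint=None):
    """Return which of the newly mapped pages to prefetch immediately."""
    pages = list(pages)
    if free_frames < len(pages):
        # severe real-storage constraint: just update the lower-level
        # tables; individual pages arrive via normal page faults
        return []
    if paging_rate < LOW_PAGING_RATE:
        # totally unconstrained: start a prefetch for everything mapped
        return pages
    # in between: prefetch a subset, guided by the caller's hint
    n = hint if hint is not None else len(pages) // 4
    return pages[:n]
```

if the process touches a page before its prefetch completes, the normal virtual-page serialization (modeled nowhere here) would do the right thing, exactly as in the text.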

random other refs:
https://www.garlic.com/~lynn/submain.html#mmap

In the vm/370 version, i added the semantics to the executable creation command (genmod) to allow it to specify "shared segments". the loader (loadmod) would use that information (saved by genmod as part of the executable control information) to provide the appropriate specification in the mmap api.

a small subset of the "sharing" code ... part of the restructure of table handling in the cp kernel ... and a lot of the cms cleanup for execute only code (like embedded work areas had to be removed) was shipped in release 3 of vm/370 under something called discontiguous shared segments:
https://www.garlic.com/~lynn/2000.html#18 Computer of the century
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002g.html#59 Amiga Rexx
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2003b.html#33 dasd full cylinder transfer (long post warning)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 21:36:17 GMT
"Glen Herrmannsfeldt" writes:
Many virtual storage OS would always allocate paging space for allocated virtual memory, even while that space was paged in. This prevents the problem of needing to page out and not having any place to put the page.

The first I knew that didn't do that was OS/2 2.0, though I am sure that there were others. It was necessary to run on small disks, so it was pretty important that it limit the page space usage. Also, like many OS now, would not allocate page space for executable files on hard disks, as it could always reload from disk. (When loading from floppy disks, it would read the whole file in.)

I believe OS/2 would lock the executable file, but I know many unixes that also use the executable file as backing store don't, and the program will die if the file is written over while it is running. Though now that many allow executing LZW-compressed files, I would think it would be harder to use the file as backing store.


CP/67 and VM/370 did lazy allocation .... i.e. disk wasn't allocated until the first time a page had to be written; however, after that the strategy maintained the "home" position for the page ... a "dup" strategy.

I started doing no-dup in the late 70s .... based on whether the high-speed backing store was heavily constrained or not. This was sort of a follow-on to the stuff that I released in the resource manager for doing page migration from high-speed/low-latency devices to lower-speed/higher-latency devices. Note that the dynamics allowed switching between "dup" & "no-dup" strategies based on resource bottlenecks (i.e. "no-dup" traded off secondary storage space for more writes).
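the dup/no-dup tradeoff can be illustrated with a toy model; everything here (names, the 10% threshold, the return strings) is invented for illustration, not a description of the real kernel code:

```python
class Page:
    def __init__(self, dirty=False, disk_slot=None):
        self.dirty = dirty          # modified since last written to disk?
        self.disk_slot = disk_slot  # backing-store "home" slot, or None

def select_strategy(slots_free, slots_total):
    """dup when backing store is plentiful, no-dup when it is tight."""
    return "no-dup" if slots_free < slots_total * 0.10 else "dup"

def page_in(page, strategy):
    if strategy == "no-dup" and page.disk_slot is not None:
        page.disk_slot = None  # release the slot: saves space, costs a write later
    return "read"

def page_out(page, strategy):
    if strategy == "dup" and page.disk_slot is not None and not page.dirty:
        return "reclaim-frame"  # disk "home" copy still valid: no I/O needed
    return "write"              # no-dup (or dirty page): must write
```

under dup, an unmodified page with a valid home copy can be replaced without any I/O; under no-dup, every page-out is a write, but only resident-or-backed pages consume slots.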

Note that this also applied to the 3880-11/ironwood control unit page-cache. It was relatively easy to overload the ironwood cache on large systems .... so a "no-dup" strategy in conjunction with a "destructive" read would extend the "no-duplication" not only to the physical disk surface ... but also to the intermediate controller cache (aka a "destructive read" indicated to the controller to remove it from cache after transfer, if it happened to be in cache). A dup strategy could have the page in processor memory, in the controller cache, as well as on some disk platter surface.

I also did a rewrite of the SYSOWN tables .... so that "high-speed" and "low-speed" could be configurable from an allocation/deallocation standpoint. The standard SYSOWN indexes were full devices ... and high/low speed was by device type ... so that both allocation and deallocation strategy was based on SYSOWN index & device type. I created a different structure for allocation that allowed finer control definition on a per-area basis. For instance this allowed differentiation between an electronic-store emulated disk ... that had the same device type as a "real" disk.

Note that the "swapper" (actually "big pages") implemented on both VM and MVS in the early 80s was a no-dup algorithm. A "big page" was a collection of 4k pages that fit on a track ... it also had some characteristics of a log-structured filesystem .... in that it always wrote to a new location closest to the current head position (its primary objective was NOT to conserve scarce disk page space ... but to try and minimize arm movement).
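the "write closest to the current head position" objective amounts to choosing free space by seek distance rather than by space reuse; a minimal sketch, with everything but cylinder numbers abstracted away (the real placement logic was of course far more involved):

```python
def pick_cylinder(free_cyls, head_cyl):
    """Place a 'big page' write on the free cylinder nearest the
    current arm position, minimizing seek distance."""
    return min(free_cyls, key=lambda c: abs(c - head_cyl))
```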

misc. syspag/migration ref:
https://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)

lots of past postings on "dup" vis-a-vis "no-dup" strategies
https://www.garlic.com/~lynn/93.html#12 managing large amounts of vm
https://www.garlic.com/~lynn/93.html#13 managing large amounts of vm
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001i.html#42 Question re: Size of Swap File
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????
https://www.garlic.com/~lynn/2002b.html#10 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#19 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002f.html#26 Blade architectures

misc. 3880-11/ironwood refs:
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#54 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
https://www.garlic.com/~lynn/2002o.html#3 PLX
https://www.garlic.com/~lynn/2002o.html#52 ''Detrimental'' Disk Allocation
https://www.garlic.com/~lynn/2003b.html#7 Disk drives as commodities. Was Re: Yamhill

misc. "big pages"
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)
https://www.garlic.com/~lynn/2002c.html#48 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#11 What are some impressive page rates?
https://www.garlic.com/~lynn/2002f.html#20 Blade architectures
https://www.garlic.com/~lynn/2002l.html#36 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002m.html#4 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

recent cp67/vm370 walks down memory lane

From: Anne & Lynn Wheeler <lynn@garlic.com>
Date: Mon, 31 Mar 2003 15:10:49 -0700
Subject: recent cp67/vm370 walks down memory lane
Newsgroups: bit.listserv.vmesa-l
from alt.folklore.computers and comp.arch newsgroups .... fyi

https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#75 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#76 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#2 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#4 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?

unix

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: unix
Newsgroups: alt.folklore.computers
Date: Mon, 31 Mar 2003 22:32:20 GMT
Charles Shannon Hendrix writes:
In-place edits on configuration is fine if the system is designed with that in mind, and most UNIX software certainly is.

Now, that doesn't stop you from doing it otherwise, but at some point you have the same problem. When you "publish" your changes, there is just as much risk then as at the moment you save your in-place edit.

Now, some UNIX configuration is done with a safe process, which is easy to do with any configuration file:

1) copy configuration file
2) edit copy
3) test copy
4) replace original if test succeeded

This is what happens with things like vipw and other configuration edit programs.

It's pretty easy to script this.

The problem with using revision control, is that most revision control systems are not equipped to handle configuration files. I personally find it better to use RCS locally, and use a safe edit script to handle configuration.

I also take snapshots daily, and I'm working on a way to do generation snapshots, just for the sake of convenience.
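The copy/edit/test/replace sequence quoted above is easy to script; a minimal python sketch (the `safe_edit` helper, its `edit`/`test` callables, and the `.new` suffix are all hypothetical, just illustrating the pattern):

```python
import os
import shutil

def safe_edit(path, edit, test):
    """copy / edit the copy / test / replace-if-ok; `edit` and `test` are
    callables supplied by the caller (hypothetical interface)."""
    work = path + ".new"
    shutil.copyfile(path, work)   # 1) copy configuration file
    edit(work)                    # 2) edit the copy
    if test(work):                # 3) test the copy
        os.replace(work, path)    # 4) atomically replace the original
        return True
    os.remove(work)               # failed test: discard, original untouched
    return False
```

The atomic rename means a crash mid-way never leaves a half-edited configuration in place.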


the cp/67 and then vm/370 source was an up-date process rather than the down-date process found with rcs/cvs .... aka the original source was rarely touched and/or had its filesystem date/time touched. the control/configuration files added the incremental updates ... and the standard product maintenance process shipped monthly (PLC) cumulative incremental update files. the control files specified auxiliary files .... i.e. the control files tended to specify major functional applications ... and the auxiliary control files then listed the actual update files for that functional operation. so testing was typically done with a "test" configuration file that had test/notest at the highest level ... and then after testing ... the update specification that corresponded to the tested feature was moved into some auxiliary control/configuration file.

Everybody saw the whole source plus update files as distinct filesystem objects. Testing could be done with the same exact filesystem objects as used in the production system ... just by having a local version of the configuration/control file ... specifying the addition of the local "test" update files.

There were some regression testing issues if somebody slipped some new production files into one of the "lower-level" (aka earlier applied) auxiliary control files (aka like product maintenance). However, since they were distinct filesystem objects .... it was also possible to temporarily make local copies of the affected auxiliary control files, commenting out application of new things (like product maintenance updates) until appropriate regression/review was completed.

one of the rules that we tried to follow was to keep the distinct filesystem objects with their original date/time and NEVER change them (as closely as practical, everything became a new incremental update).
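the control-file/auxiliary-file mechanics above can be sketched as a toy model (the file formats, comment convention, and names here are purely illustrative, not the actual cp/67 conventions):

```python
# toy model of the up-date build: the base source is never modified; a
# control file names auxiliary files, each listing incremental updates
# applied in order.

def read_list(lines):
    # one name per line; '*' lines treated as comments (assumed convention)
    return [l.strip() for l in lines if l.strip() and not l.startswith("*")]

def build(base, control, aux_files, updates):
    """base: list of source lines (left untouched); control: lines naming
    auxiliary files; aux_files: name -> lines naming updates; updates:
    name -> function mapping current source to updated source."""
    source = list(base)            # work on a copy; the base stays pristine
    for aux in read_list(control):
        for upd in read_list(aux_files[aux]):
            source = updates[upd](source)
    return source
```

local testing is then just a local control file that adds a "test" auxiliary file at the end; the shared base and shipped update files are the same filesystem objects used in production.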

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 23:28:02 GMT
"Glen Herrmannsfeldt" writes:
The first I knew that didn't do that was OS/2 2.0, though I am sure that there were others. It was necessary to run on small disks, so it was pretty important that it limit the page space usage. Also, like many OS now, would not allocate page space for executable files on hard disks, as it could always reload from disk. (When loading from floppy disks, it would read the whole file in.)

somewhat aside ... somebody from boca contacted me regarding work going into os/2 1.3 (? 1.3 released 1991, os/2 2.0 released 1992) and wanted a lot of background regarding the dynamic adaptive stuff that I had done during the 60s & 70s. they were specifically interested in the scheduling and resource manager stuff ... but I provided them with a lot of general dynamic adaptive material .... and dynamically "scheduling/managing to the bottleneck" (aka changing strategies based on what the constrained resource was).

... the mmap stuff for the cms filesystem (early '70s) ... had some adjustment that would either force to new location after initial read or "page in place" ... from its original filesystem location.

in the mmap for cms filesystem version that was released in the mid-80s as part of xt/370 .... it was always left to "page in place" .... since the cms filesystem emulated disk on a pc harddisk and the cp paging emulated disk on the pc harddisk were nominally the same (part of the mmap stuff in the xt/370 configuration ... was that the available real memory for paging on the xt/370 machine was often smaller than the cms executable being loaded ... which in the non-mmap implementation ... aka simulated real i/o ... resulted in extremely long delays ... effectively loading little bits in ... writing them out to a new location and then reading in additional little bits ... until the virtual address space had been populated).

sort of start of os/2 interaction ... from long ago and far away (just before release of os/2 1.00 in december of 1987).

Date: 11/24/87 17:35:50
To: wheeler
FROM: ????
Dept ???, Bldg ??? Phone: ????, TieLine ????
SUBJECT: VM priority boost

got your name thru ??? ??? who works with me on OS/2. I'm looking for information on the (highly recommended) VM technique of boosting priority based on the amount of interaction a given user is bringing to the system. I'm being told that our OS/2 algorithm is inferior to VM's. Can you help me find out what it is, or refer me to someone else who may know?? Thanks for your help.

Regards,
???? (????? at BCRVMPC1)


... snip ... top of post, old email index

os/2 history
http://www.os2bbs.com/os2news/OS2Warp.html
http://www.os2bbs.com/os2news/OS2History.html

random xt/at/370 posts:
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001g.html#53 S/370 PC board
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#44 Blade architectures
https://www.garlic.com/~lynn/2002f.html#49 Blade architectures
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2002f.html#52 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 31 Mar 2003 23:47:10 GMT
"Glen Herrmannsfeldt" writes:
Also, I saw something called "buffered log" DASD. I wasn't sure what that was, but your reference to log-structured file system reminded me of it. I thought it was a hardware feature, not a file system feature, though.

i don't place the reference at the moment. there were two log things:

1) earlier was a full-track log write that played a game with CKD so that the write started at the first record under the head (aka the search was not for record equal) ... read/recovery could also do a full-track read starting at the first record under the head (data inside the record allowed recovery to figure out what the original record sequence was). It could beat some of the later full-track caches .... which started cache loading as soon as the head settled ... but would actually transfer in processor sequence.

2) a disk/dasd controller store-in cache that was replicated and battery backed ... aka the processor would get early indication that a write was complete as soon as it was in the controller cache ... and the cache could then do a lazy write ... replicated storage and battery backup allowed for various kinds of failure recovery

.....

there were other discussions of big pages .... driving 3380s at close to transfer rate (optimal head & arm scheduling) ... with 10 4k pages per big page (3380 40k track). there were some numbers about systems easily, routinely hitting over 2000 4k-page transfers per second.

as in the previous big page discussion ... possibly 30-40 percent of such page transfers were unnecessary ... however real storage wasn't the real constraint, it was disk arm latency. the efficiency from doing multiple page transfers in a single disk operation more than offset the overhead of doing potentially unnecessary transfers (and the associated unnecessary occupancy of real storage). In effect, real storage and transfer rate were traded off against arm motion and rotational delay.
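plugging in the figures above (10 4k pages per 40k 3380 track, ~2000 4k-page transfers/sec, 30-40% of them possibly unnecessary) gives a feel for the trade; this is back-of-envelope arithmetic only:

```python
# back-of-envelope numbers for the big-page trade-off: transfer rate and
# some real storage are spent to save arm motion and rotational delay.

def big_page_tradeoff(transfers_per_sec=2000, pages_per_big_page=10,
                      wasted_fraction=0.35):
    # 4k pages/sec that processes actually demanded
    useful = transfers_per_sec * (1 - wasted_fraction)
    # one arm motion + rotation now moves a whole big page, so the
    # number of disk operations drops by the big-page factor
    disk_ops = transfers_per_sec / pages_per_big_page
    return useful, disk_ops
```

even with ~35% of transferred pages never referenced, 2000 pages/sec needs only ~200 disk operations/sec instead of 1300+ individual 4k-page i/os for the same useful paging.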

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 01:59:31 GMT
John Ahlstrom writes:
I thought the base registers in the 360 address calculations were supposed to allow relocation. Was it not logically possible to use them for that? Was it not practical to use them for that? Were they just not used for that?

"An early decision had dictated that all addresses had to be indexable, and that a mechanism had to be provided for making all programs easily relocatable."
Amdahl, Blaauw and Brooks
Architecture of the IBM System/360
IBM Journal of Research and Development, Vol 8, 2, 1964
page 28
Or was there some other mechanism that was supposed to make all programs easily relocatable.


360 (et al) had 16 general purpose registers ... 15 of which could be used as base/address registers under application program control. the os/360 system had a fair amount of latitude in selecting the location in the address space for placing a program .... but once control had been passed to the application ... there was no convention that allowed the operating system to move an application and update the appropriate registers (for one thing there was no enforced convention as to which were "address" registers that might need to be swizzled and which were work/numeric/temp/etc values).

initial loading of a program could select an arbitrary address location ... and then pass control to the application ... and eventually the application would get all the general purpose registers it was using for addressing appropriately initialized. however, the kernel/system had no real good idea what that might be ... and which registers it would need to swizzle.

also the os/360 standard program convention included storage objects that occupied the application address space, called adcons .... for instance the program could


             l    r15,=a(sub1)
             balr r14,r15

which would load the address pointer to sub1 into general purpose register 15 and then branch and link to the subroutine (storing the address after the BALR instruction in R14). The program image on disk had these storage objects specifically identified and the value recorded as a displacement from some value. As part of program loading, the loader would "resolve" these relocatable adcons into absolute adcons before program invocation.

this created quite a problem for me when doing floatable shared segments .... i.e. the same shared program objects present in multiple different virtual address spaces at possibly different virtual addresses. Basically I had to replace all the "relocatable" adcons with "absolute" adcons that were displacements from some reference value known to be present in some register. The displacement value would then be added to that register's contents at runtime. The sequence would then look something like:


             lr   r15,r12
             a    r15,=a(sub1-base)
             balr r14,r15

where the application knew that r12 contained the current value of "base" for that specific address space.

The problem that I was up against was that there was a 16mbyte virtual address space. Having each shared object occupy the same virtual address in each address space eventually implied that when defining the shared objects ... they had to be very carefully allocated pre-defined addresses ... since on any specific system the aggregate size of all possible shared objects was going to be larger than 16mbytes. There were also situations where some processes needed combinations of multiple different shared objects mapped into the same virtual address space. basically, address resolution for a shared storage object could be:
1) early resolution
2) medium late resolution
3) runtime resolution

Early resolution fixed the shared object address location at the time it was defined (and fixed all the adcons when the shared object was initially defined and written to disk). This is in effect the original cp/67, the original vm/370 and much of the VM stuff today.

medium late resolution would have fixed the shared object addresses at the first process/address space to load it. However, this easily leads to deadly embrace ... example: different processes need two libraries that are in shared r/o storage. the first process initially loads library1 at location N (resolving all address constants at the first loading of that shared library). the second process initially loads library2 at location N (resolving all address constants at the first loading of that shared library). The first process now tries to also load library2 .... but can't, because library2 is bound to location N (the shared images are bound on first load) and location N is already occupied by library1. This problem led to early resolution.

runtime resolution .... the adcons are never changed ... but are absolute displacements from some location that can float in different address spaces. actual use is resolved at runtime. This allowed any process to have any combination of shared objects up to a total of 16mbytes. By comparison, early resolution severely restricted the combination of different shared objects. There was severe installation management effort involved in assigning which shared objects used the same addresses ... to avoid situations where some process might need concurrent availability of such shared objects.
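a toy model of the difference between early and runtime resolution (this just treats an adcon as a named displacement from the start of the shared object; nothing here is the actual 360 adcon format):

```python
# early resolution: adcons become absolute addresses when the shared
# object is defined, so it must occupy the same address in every space.
# runtime resolution: adcons stay as displacements and each reference
# adds the per-address-space base (the r12 "base" value) at runtime.

def early_resolution(adcons, fixed_base):
    # resolved once, at definition time; fixed_base is global
    return {name: fixed_base + disp for name, disp in adcons.items()}

def runtime_resolution(adcons, base_register):
    # resolved on each use; base_register can differ per address space
    return {name: base_register + disp for name, disp in adcons.items()}
```

with runtime resolution the same shared pages produce correct addresses wherever each address space happens to map the object; with early resolution every space is stuck with the one pre-assigned address.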

misc. past discussions of floating/relocatable shared segments:
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2002f.html#41 Blade architectures
https://www.garlic.com/~lynn/2002o.html#25 Early computer games

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Security Certifications?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security Certifications?
Newsgroups: comp.security.misc
Date: Tue, 01 Apr 2003 03:32:16 GMT
Anne & Lynn Wheeler writes:
and online quizzer engine:
http://www.losclusterz.com/quiz/quiz.phb


ref:
https://www.garlic.com/~lynn/2003e.html#73 Security Certifications?
believe it or not, i managed to finger fumble both URLs
http://www.lostclusterz.com/quiz/quiz.php

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 16:27:30 GMT
Jan C. Vorbrüggen writes:
Ah yes, with cooperating code there are a lot of tricks one can pull.

Sounds like a management nightmare. Always make sure you use the right macro for the call, or you get some hard-to-find bugs...especially as the development shops tend to have better hardware (i.e., more resources) than some customers.


it was for the kernel ... people that have to write kernel code have a whole lot of procedures that they need to check on anyway. also there was only a single call macro used by the kernel. you don't have random people randomly writing random code for insertion into the kernel.

... start BALR call drift ...

I had already done a trick with the call macro. The original (cp/67) call macro always used an svc 8/12 convention (supervisor call). It would interrupt and allocate a new register save area and some other stuff and then call off to the called routine. The called routine would svc 12 ... which would deallocate the dynamic save area.

doing lots of performance measurements ... I noticed some relatively short path length subroutines that always returned, never called anything else, and were non-interruptable; they also had very high frequency call rates. they could effectively get by with a static savearea. So I modified them to use a BALR (branch and link, single instruction) call convention with a "static" savearea. I also modified the CALL macro to check a list of BALR routines ... and generate a BALR instead of an SVC 8. I had earlier re-written the cp/67 svc 8/12 implementation, cutting about 70% of its pathlength ... resulting in possibly 5-10 percent overall savings in kernel cpu utilization. I had also implemented the ability to dynamically extend the SVC save areas ... originally there were one hundred pre-allocated ... and if those ever ran out ... the system crashed and burned.

For some critical, high usage routines, the BALR change eliminated the SVC pathlength altogether for another 5-10% savings in kernel cpu utilization.

ok, where do you put a static savearea for the BALR routines ... especially if you are in a multiprocessor environment with fine-grain locking and possible parallel execution.

so slightly related drift in this thread
https://www.garlic.com/~lynn/2003f.html#10 Alpha performance, why?

360/370 had 16 general purpose registers ... 15 of which could be used as base/address registers. If you specified register zero in an address or index usage .... it didn't indicate the contents of register zero ... but no register at all. In effect, the 12-bit displacement would be with respect to the first 4k bytes of storage.

in the real hardware this is where the processor hardware interrupt and misc. other stuff goes on (maybe the first 512 bytes of page zero). low-level interrupt handlers tended to use other parts of page zero for something like a temporary register savearea. The interrupt handler is going to need some registers to do its work ... save status in a permanent area ... allocate a dynamic save area, etc. So a temporary savearea is typically reserved somewhere in page 0 for interrupt handlers ... until they've done enough work to save status wherever it might need to be permanently. I just assigned/reserved a free location in page zero for the BALR routines.

the 360/67 for multiprocessing had a single linear address space. However, it wouldn't work trying to have more than one processor tramping around in the same real page zero. As a result, each processor had a page zero prefix register. You loaded a page number into the page zero prefix register ... and that processor started using that real page for its nominal "page zero" activities (including addressing when there wasn't an address register). The kernel code had the necessary logic so that as it was initializing more than one processor ... it was allocating different real pages for use as page zero by different processors. so a real page (that was loaded into a prefix register) could be addressed by two different values: its real page number and its page zero alias.

This was changed for 370 multiprocessor. if an attempt was made to address the real page indicated by the prefix register ... it was redirected to the ... "real, real page zero". In 360, once in multiprocessor mode ... it was no longer possible to access the "real, real page zero". The double reverse translation for 370 prefix register allowed access to the real, real page zero. This was done on the assumption that multiprocessing kernel software might use the area indicated by the multiprocessor prefix register for some sort of system wide multiprocessing coordination operations.
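the 370 prefixing behavior described above can be sketched as follows (4k page granularity assumed; a simplification of the actual hardware):

```python
# sketch of 370-style prefixing: references to page zero go to that
# processor's prefix page, and references to the prefix page are
# redirected back to the real, real page zero (the "double reverse"
# that 360/67 multiprocessor mode lacked).

PAGE = 4096

def apply_prefix(real_addr, prefix_page):
    page, offset = divmod(real_addr, PAGE)
    if page == 0:
        return prefix_page * PAGE + offset   # page zero -> prefix page
    if page == prefix_page:
        return offset                        # prefix page -> real page zero
    return real_addr                         # everything else untouched
```

each processor loads a different prefix_page, so concurrent interrupt handling on multiple processors never collides in the same real page zero, while the real, real page zero stays reachable for system-wide coordination.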

... end BALR call drift ...

so the logic for pageable kernel was to establish a kernel boundary address where all kernel calls with addresses less than the boundary went straight to that address. however any kernel calls that were higher than the kernel boundary address went thru the pageable logic.
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?

so pageable kernel routines had to be crafted following certain procedures .... and they had to be appropriately placed in the sequence of the kernel build. In the svc8 (call) interrupt handler, if the called-to address was greater than the kernel boundary value, it was treated as a pageable kernel call; if it was less than the boundary value, it was treated as a non-pageable kernel call. On an svc12 (return) ... if the interrupting-from address was greater than the kernel boundary value, it was treated as a pageable kernel routine; if the interrupting-from address was less than the kernel boundary value, it was treated as a non-pageable kernel routine.
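a minimal sketch of that boundary test (the boundary value and function names here are hypothetical, just modeling the comparison described):

```python
# single boundary address splits the kernel: calls/returns at or above
# it go through the pageable-kernel logic, below it straight through.

KERNEL_BOUNDARY = 0x30000  # assumed value for illustration

def svc8_call(target_addr):
    # svc 8 (call) interrupt handler decision
    if target_addr >= KERNEL_BOUNDARY:
        return "pageable-call"      # ensure page present, pin, then branch
    return "fixed-call"             # branch straight to the routine

def svc12_return(from_addr):
    # svc 12 (return) interrupt handler decision
    if from_addr >= KERNEL_BOUNDARY:
        return "pageable-return"    # unpin the pageable routine's page(s)
    return "fixed-return"
```

the single compare keeps the common (fixed-kernel) path cheap; only calls into the region above the boundary pay the pageable overhead.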

so originally on 768k 360/67 ... with 192 4k pages ... the fixed kernel was about 30 pages leaving around 160 4k pages. then dynamic storage out of fixed real memory ... bookkeeping for each process, process virtual memory tables, etc ... could be another 30-60 pages (depending on load); say leaving 120 4k pages. The original implementation that I did on cp/67 was taking "console functions" ... and fixing them up for pageable kernel operation; originally about 5-6 4k pages .... say about five percent of real storage. this didn't ship in cp/67 ... but an updated version of it shipped in vm/370.

later for some additional real-storage constraints ... i also did pageable "control blocks" ... basically various process-specific stuff that was lying around consuming real storage when it wasn't needed. this was part of the resource manager ... again, previous description:
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
also
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#35 unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics

slightly related is a.f.c thread on source/project maintenance procedures:
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#75 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#76 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#77 unix
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#2 History of project maintenance tools -- what and when?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 16:42:28 GMT
Terje Mathisen writes:
NetWare must have been one of, if not the, first OSs to do this effectively, and at several layers as well:

They (i.e. Drew Major probably) used all available RAM for disk caching, while using all available disk space as a way to save arbitrarily many old versions of updated/deleted/rewritten files.

Yes, when (not IF) your disks ran full of 'visible' files, old stuff had to be purged, but since this happened automatically (but with manual tweaking allowed, per file/directory), it didn't really matter.


when I was first rewriting the page management stuff as an undergraduate, i had all this dynamic adaptive stuff that wouldn't do things unless absolutely warranted. As a counter-example, TSS/360 did all this deterministic stuff ... even if it wasn't needed. For instance when an interactive process became active ... all of its pages were swept from 2311 onto 2301 (fixed-head drum) ... and then the task started. if the task quiesced ... all the pages from memory and/or drum were swept back to 2311. This was even done on a relatively quiet system when neither real storage nor drum were constrained resources at the moment.

Over ten years later ... somebody from MVS called to say that they had just gotten a big corporate award for changing mvs from a deterministic sweep of pages from real storage to only doing it if real storage was constrained ... and could they do something similar for VM. I indicated to him that I had never done it any other way ... and that was the way vm/370 (& my prior cp/67 rewrite) had always done it (I even had an argument about that with some of the pok people back before the initial release of os/vs2/svs). I made some facetious comment that instead of POK giving a big award for fixing an obvious bug ... the people responsible for the bug (needing fixing) should have done the honorable thing and at least returned the past ten years' salary to the corporation ... and then rewrote their wills to forfeit all of their worldly possessions.

some topic drift regard DataHub related to certain pc company starting with the letter N:
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2000g.html#40 No more innovation? Get serious
https://www.garlic.com/~lynn/2002f.html#19 When will IBM buy Sun?
https://www.garlic.com/~lynn/2002g.html#79 Coulda, Woulda, Shoudda moments?
https://www.garlic.com/~lynn/2002o.html#33 Over-the-shoulder effect
https://www.garlic.com/~lynn/2003e.html#26 MP cost effectiveness

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 16:52:28 GMT
memorymorass@aol.com (Paul A. Clayton) writes:
Well, could one not have a 'valid' bit for each page and use segments for protection? If the pages marked invalid were only available for OS use, this might not be terribly painful. If the overhead of bit toggling on context switches was acceptable, physical pages could be mapped by multiple segments without being shared. Alternately, Permission IDs could be associated with pages.

(This is not to suggest that the above is in any sense sane, but the realm of the possible is much larger than that of the reasonable.)


the original cp/67 and then vm/370 kernels ran in "real mode" ... they didn't use segment & page tables. For the purpose of having a pageable kernel, i invented a dummy segment & page table for the kernel ... that was only used by the kernel call/return logic ... which leveraged the internal page management functions that supported "real i/o". Most of the process external transfer ... like filesystem stuff ... was done by 360/370 CCW I/O (at least until i did the mmap thing for the cms filesystem).

the real i/o system ran with real addresses ... i/o from a virtual process had to have all the references translated from virtual to copied structures with real addresses ... and the associated virtual pages pinned/fixed in real storage. then the i/o was scheduled, then all the stuff was unpinned/unfixed and the necessary status addresses translated back from real to virtual. I made use of the infrastructure for pinning/unpinning pages for real I/O operations ... for managing pages that were part of the pageable kernel.
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
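the translate/pin sequence above, as a toy sketch (the channel-program and page-table representations are invented for illustration, not the actual CCW formats):

```python
# sketch of virtual-i/o translation: copy the channel program, translate
# virtual data addresses to real, and pin those pages for the i/o.

PAGE = 4096

def translate_ccws(ccws, page_table, pin):
    """ccws: list of (op, virtual_addr); page_table: vpage -> real page;
    pin: called for each real page that must stay resident."""
    shadow = []
    for op, vaddr in ccws:
        vpage, offset = divmod(vaddr, PAGE)
        rpage = page_table[vpage]        # fault the page in if needed
        pin(rpage)                       # keep it resident until i/o ends
        shadow.append((op, rpage * PAGE + offset))
    return shadow
```

after the i/o completes, the mirror of this runs: unpin the pages and translate any status addresses back from real to virtual.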

as per above ... i later extended these pseudo virtual address tables for also paging internal control blocks associated with processes.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 17:08:18 GMT
pmxtow@merlot.uucp (Thomas Womack) writes:
I believe there is a very similar program sitting some way under the hood in current versions of Windows (possibly everything since '98); instead of re-arranging the executable, it watches the order in which pages are called in on program start-up, and hooks into the defragmenter to ensure that these pages are contiguous on disc. You can see the profiling information, somewhere deep in the recesses of c:\windows\applog; on Win98, at least, it's in human-readable format; line after line of

one of the early such tools was VS/Repack ... circa 1976 from the cambridge science center (i had written some of the data reduction for vs/repack; it was also the same year that i got to release the resource manager).

we had used the internal version for a number of things for 4-5 years before it was released as a product.

one of the things it was used on was taking (real storage) os/360 apl\360 and converting it to run on cms (released as cms\apl) and in a virtual memory environment. the traces and the reduction ... repackaged the module sequence for optimal virtual memory operation (using some fancy fortran code doing complex cluster analysis).

It could also produce storage/execution traces ... both instruction and data references. There were these six foot swaths of storage references on the backside of 1403 greenbar paper ... taped together and covering the walls of the hallways of the 4th floor, 545 tech sq. In the typical display, each horizontal line was 2000 instructions and the storage was scaled to fit the vertical, 6foot line (floor to ceiling).
https://www.garlic.com/~lynn/subtopic.html#545tech

One of the issues was that apl\360 allowed real storage workspaces that were 16kbytes or 32kbytes and were swapped in and out. all assignments went to a new storage location ... when all storage in the workspace was assigned ... it would garbage collect and compress all variables back down to low storage. this storage management technique caused a lot of problems in a virtual memory environment ... where you might have an apl application that was maybe 20-100k ... but was operating with a 2mbyte to possibly 16mbyte "workspace". It was guaranteed to cause all sorts of page thrashing. The vs/repack trace of apl ... running down the halls ... showed a very distinct saw-tooth effect ... a lot of access down at low storage and a very distinct pattern that ran from low storage to high storage (over time) ... and then a solid vertical line as garbage collection occurred.
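the sawtooth reference pattern can be mimicked with a toy allocator in python (the workspace and variable sizes below are purely illustrative, not measured apl\360 values):

```python
# Toy model of apl\360 storage management: every assignment goes to
# the next free location; when the workspace is exhausted, garbage
# collection compacts live variables back down to low storage.
WORKSPACE = 32 * 1024   # e.g. a 32kbyte workspace

def run(assignments, live_size=2048, cell=8):
    top = live_size                     # next free byte
    touched = []                        # addresses referenced, in order
    collections = 0
    for _ in range(assignments):
        if top + cell > WORKSPACE:      # workspace full: garbage collect
            collections += 1
            # compaction touches all of low storage
            touched.extend(range(0, live_size, cell))
            top = live_size             # everything compressed back down
        touched.append(top)             # new value assigned at the top
        top += cell
    return collections, touched

collections, refs = run(assignments=20000)
assert collections == 5
# plotted over time, the reference addresses climb steadily from low
# to high storage, then snap back down -- the sawtooth on the hallway plots
```

in a real-storage swapping environment this is harmless; under demand paging the allocator walks the entire multi-megabyte workspace even when the live data is tiny, which is exactly the thrashing described above.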

it was also used extensively on doing tuning of various products ... STL used it on IMS (database, transaction).

random past vs/repack refs:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 17:20:31 GMT
Peter da Silva writes:
Prefetching all pages of the segment when the first page is loaded should provide exactly the same performance characteristics as swapping the segment in, no? Swapping thus becomes a special case of paging as far as performance goes.

note that in the big page scenario ... original 10 4k pages that fit a 40k 3380 track ... the members of the big page were pages that were resident in storage together .... not necessarily contiguously numbered pages (aka they were built dynamically). This dynamic clustering would tend to have lower false fetches ... as compared to possibly straight linear virtual address fetch (that might be more characteristic of straight swapping).

in that sense it was much more like dynamic demand paging ... but the members of a (40kbyte) "big page" were effectively dynamically determined on page-out (and then on a page fault for any 4k page within a specific 10-page "big page", all ten pages were fetched).

in the past ... swapping tended to be slightly more characteristic of contiguous memory allocation infrastructures ... like the real memory apl\360 workspaces mentioned in the previous post.
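a toy sketch of the big-page mechanics in python (names and page numbers are illustrative; the real implementation was in the VM kernel):

```python
# Toy sketch of the "big page" scheme: on page-out, whichever ten
# resident 4k pages are being evicted together are written as one
# 40k 3380-track "big page"; a later fault on ANY member fetches all ten.
BIG = 10   # ten 4k pages per 40k track

class BigPagePager:
    def __init__(self):
        self.resident = set()
        self.big_page_of = {}    # 4k page -> tuple of its big-page members

    def page_out(self, victims):
        """Evict a group of resident pages as one big page."""
        group = tuple(victims[:BIG])
        for p in group:
            self.resident.discard(p)
            self.big_page_of[p] = group   # remember the dynamic cluster

    def fault(self, page):
        """A fault on any member brings the whole big page back."""
        group = self.big_page_of.get(page, (page,))
        self.resident.update(group)       # one I/O fetches all members
        return group

pager = BigPagePager()
pager.resident.update(range(100, 120))
# pages that happened to be resident together get clustered on page-out,
# whether or not their virtual page numbers are contiguous
pager.page_out([101, 107, 113, 102, 118, 104, 110, 105, 116, 108])
fetched = pager.fault(113)
assert set(fetched) == {101, 107, 113, 102, 118, 104, 110, 105, 116, 108}
```

the key point is that the cluster is formed from co-residency at eviction time, not from contiguous virtual addresses, which is why it tends to have fewer false fetches than straight linear prefetch.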

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

History of project maintenance tools -- what and when?

Refed: **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: History of project maintenance tools -- what and when?
Newsgroups: alt.folklore.computers,alt.os.multics,alt.sys.pdp10
Date: Tue, 01 Apr 2003 19:14:57 GMT
"Shmuel (Seymour J.) Metz" writes:
Or have some of the users officially ask IBM for it. Of course, no IBM employee involved in a research project would ever suggest such a course of action in a private conversation at Share.

i sat on a share committee that drafted some resolutions asking for some stuff that i had done as an undergraduate and shipped in cp/67 ... and then dropped in the morph to vm/370 ... to be (re-)introduced into vm/370. that helped with the justification for the resource manager:
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

it wasn't so much stuff that i was involved in doing research as an employee ... it was stuff that i had done as an undergraduate 8 years earlier.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Could somebody use SCSH, Sheme, or Lisp to create the "Lispm" architecture

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could somebody use SCSH, Sheme, or Lisp to create the "Lispm"  architecture.
Newsgroups: comp.lang.scheme.scsh,comp.lang.scheme,comp.lang.lisp,comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 19:22:58 GMT
Symbolics_XL1201_Sebek_Budo_Kafka@hotmail.com (Franz Kafka) writes:
The only important thing is to not tie the Lispm to a specific chip, or Machine like Symbolics, LMI, Xerox, TI, and the Scheme Chip did but to make it able to run on all hardware--so that more people could try it out.

Porting Linux into a Lisp/Scheme OS would be a great start.


slight drift regarding mit lisp machine & 801 circa 1979 ... see last ref at
https://www.garlic.com/~lynn/2003e.html#65 801 (was re: reviving Multics)
appears just before the next posting at:
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools ... what and when

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Disk prefetching

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Disk prefetching
Newsgroups: comp.arch
Date: Tue, 01 Apr 2003 19:37:10 GMT
Thomas writes:
Seagate did have a Barracuda drive with a duplicated head - two read gaps on the same head assembly, so two tracks could be read from the same platter at the same time. I don't think this was a commercial success.

long ago and far away ... the 2301 fixed-head drum read/wrote four heads in parallel (mid-60s, saw them a lot on 360/67s with cp/67). it was essentially a 2303 fixed-head drum with same total capacity, 1/4th the number of tracks, four times the track capacity, and four times the data-transfer.

later there were two models of 2305 fixed-head disk ... the two had the same number of physical heads, rotated at the same speed, had the same transfer rate; but one had half the number of tracks as well as half total data capacity ... but it also had half the rotational latency (exercise left to the student).
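a sketch of the arithmetic behind the exercise (the rotation speed here is a hypothetical round number, just to make the ratio concrete): with two heads per track offset 180 degrees apart, data can start transferring after at most half a revolution, halving the average rotational latency.

```python
# Why half the tracks/capacity can mean half the rotational latency:
# pair the heads on each track, offset 180 degrees.
RPM = 3600                      # hypothetical rotation speed
rev_ms = 60_000 / RPM           # one revolution in milliseconds

avg_latency_one_head  = rev_ms / 2      # on average, wait half a revolution
avg_latency_two_heads = rev_ms / 4      # nearest of two opposite heads

assert avg_latency_two_heads == avg_latency_one_head / 2
print(f"one head: {avg_latency_one_head:.2f}ms, "
      f"paired heads: {avg_latency_two_heads:.2f}ms")
```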

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 01 Apr 2003 23:13:41 GMT
John Ahlstrom writes:
I thought the base registers in the 360 address calculations were supposed to allow relocation. Was it not logically possible to use them for that? Was it not practical to use them for that? Were they just not used for that?

note that it was possible to write location independent code for 360s ... as the discussion of pageable kernel and relocatable/floating shared segment examples showed .... it just wasn't very common.

in the pageable kernel case, a source module had to be 4kbytes or less and not cross a page boundary. the paging system could bring the kernel storage image into an arbitrary real page ... and execution then took place with that real address. It could be paged out and paged back in at a totally different real address. This required carefully following some specific coding conventions for pageable kernel routines.
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance

the location independence for relocatable/floating (read-only) shared segment is/was similar to that of pageable kernel routines ... it was code that ran with virtual addressing on ... but the same exact storage image could appear in multiple different virtual address spaces concurrently ... possibly at different virtual addresses in each address space. As a result all address resolution had to be with respect to the address position in whatever address space it was currently operating in. Again this required relatively specific coding conventions.
https://www.garlic.com/~lynn/2003f.html#10 Alpha performance

Note however, most of the effort I put into adapting CMS code to relocation/floating (read-only) shared segments ... wasn't so much making it address location independent ... but reworking various pieces of code to also make it free of any storage modifications (aka in some vernacular, re-entrant). In addition to various CMS system routines that had to be sanitized, another example that i reworked was browse, fulist, and ios3270; specific reference:
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema

random other browse, fulist, ios3270 references:
https://www.garlic.com/~lynn/96.html#41 IBM 4361 CPU technology
https://www.garlic.com/~lynn/99.html#60 Living legends
https://www.garlic.com/~lynn/99.html#61 Living legends
https://www.garlic.com/~lynn/99.html#108 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#8 Theo Alkema
https://www.garlic.com/~lynn/2002e.html#5 What goes into a 3090?
https://www.garlic.com/~lynn/2002i.html#79 Fw: HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002p.html#40 Linux paging

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

"Super-Cheap" Supercomputing

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "Super-Cheap" Supercomputing
Newsgroups: comp.arch
Date: Wed, 02 Apr 2003 05:37:50 GMT
Greg Pfister writes:
3. I don't know, but some guys used the analogous feature on the old 360/145 (it had microcode in main memory) to do a really bang-up APL interpreter.

as per post about vs/repack ... cambridge science center had taken apl\360 and did some number of things to it, including sensitizing it for virtual memory environment (especially the storage allocation & garbage collector) with the help of vs/repack.
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why

cambridge also put in the support for making system calls ... and it was released as cms\apl. the system call stuff caused quite a bit of consternation among the apl purists ... since it violated some number of the original apl principles.

palo alto science center then took cms\apl and did some number of things to it, including revamping the system call stuff into the shared variable paradigm ... as well as doing the apl microcode assist for the 370/145. this was released as apl\cms and then apl\sv. A lot of apl applications ran as fast on a 370/145 with the apl microcode assist as they did on a 370/168 w/o the apl microcode assist (not quite ten times).

Across the back parking lot from the palo alto science center was hone, probably for a time, the largest single system cluster in the world. It had something like 40,000 userids and supported all the branch and field people in the US. In addition, HONE system was cloned and deployed in a number of other countries (in a couple cases, I hand carried it) around the world supporting branch and field people all over the world.

The major environment for the branch and field people was a large subsystem environment written in APL called sequoia (possibly one of the most used APL applications of all time) ... and within sequoia ran a lot of support tools ... like machine configurators (allowing branch office people to configure and order machines for customers). A lot of sequoia would have run as fast on a 370/145 with the apl m'code assist as on 370/168s ... but there was some amount of sequoia which wasn't addressed by the apl m'code assist.

some amount of discussions w/regard to hone & apl
https://www.garlic.com/~lynn/subtopic.html#hone

note that the person that was primarily responsible for the 145 apl microcode assist was also fundamentally responsible for FORTQ ... which became FORTHX.
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill

when we were working on ECPS ... a kernel microcode assist for the 138/148 (follow-on to the 135/145), he did a special microcode PSW/instruction-address sampler for us on the 145 ... that helped identify where the CP kernel was spending its time (there were actually two technologies ... one was the microcode psw sampler ... the other was some software kernel instrumentation):
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

and
https://www.garlic.com/~lynn/submain.html#mcode

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Could somebody use SCSH, Sheme, or Lisp to create the "Lispm"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Could somebody use SCSH, Sheme, or Lisp to create the "Lispm"
architecture.
Newsgroups: comp.lang.scheme.scsh,comp.lang.scheme,comp.lang.lisp,comp.arch,alt.folklore.computers
Date: Wed, 02 Apr 2003 14:48:25 GMT
cstacy@dtpq.com (Christopher C. Stacy) writes:
That story is pretty garbled. The early people on the Lisp Machine project were certainly aware of the 801 due to assorted connections with people at Yorktown, but they did not consider creating the Lisp Machine by using the IBM processor. (The Lisp Machine was invented more than 3 years before the time you're citing, there, by the way.)

that was just a copy of email to me ... sent on the date indicated; it didn't actually give a date as to the request to Evans. i would have expected the actual date of the request to Evans would have been at least a couple years earlier given the 8100 reference. The first 801 presentation I attended was spring of '76 (... which would correspond to your reference).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 02 Apr 2003 17:05:17 GMT
Jan C. Vorbrüggen writes:
I realize that. The problem is that if somebody missed using the macro, the code would work on resource-rich machines, and inexplicably fail on resource-starved machines, but without any pattern and only once in a while.

Only one macro? How did it distinguish between the pageable and the non- pageable entry points?

Jan


the standard call macro/procedure ... executed an svc 8 ... supervisor call (and return executed svc 12). the svc8 call routine dynamically allocated a register savearea for the called routine. The pageable kernel support wasn't in the inline macro-generated code ... it was in the supervisor call routine ... and the call macro generated a supervisor call instruction in order to perform a CALL. as per earlier reply
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?

somebody either does the svc8 or not. if they didn't do the svc8, the system would crash regardless ... since the appropriate linkage process wouldn't have been done. if they did the svc8 ... it would all automagically work. also, remember that the called routine returns via executing an svc12 ... which also requires that all the necessary processing has been performed. If you branched directly to a standard called routine (not executing an svc8), all the calling structures wouldn't have been appropriately set up ... and then there would be a problem when the called routine executed an svc12 for the return.

for pageable kernel, the support was added to the svc8 call routine (aka the svc interrupt routine that handled the svc8 instruction). All of the pageable kernel was positioned after the non-pageable kernel and there was a known boundary address separating the two. if the address of the called routine was greater than the boundary address, the svc8 call routine did the appropriate pageable kernel handling stuff. the return/svc12 handling routine ... would deallocate the dynamic register savearea ... and if the from address was greater than the boundary address, then it would perform the necessary housekeeping.

the call macro would do

l    r15,=a(kernel-routine)
svc  8

the svc interrupt handling routine would check for a code "8" ... and then go off to the call processing. the call processing would dynamically allocate a storage area for register save area ... which also contained the linkage information as to the calling routine. It would then go off to the called routine. If =a(kernel-routine) address was greater than the end of the fixed-kernel address, it would execute the appropriate pageable code support.

The non-pageable kernel started at location zero and was contiguous. All pageable called-to addresses were greater than the non-pageable kernel boundary address. The location where a pageable kernel routine was loaded was pretty indeterminate ... except, since the non-pageable kernel locations were fixed, starting at location zero and contiguous, it was also known that the return address from a pageable kernel routine always had to be greater than the boundary address (aka a pageable kernel routine could never be loaded at a location less than the highest address of the non-pageable kernel).

aka ... it wasn't the responsibility of the call or return macro to do the appropriate pageable kernel stuff .... it was the responsibility of the supervisor routine that handled call/returns.

so let's say ... somebody branched directly to any routine (modulo BALR routines) ... instead of invoking SVC8 ... and for some reason the lack of appropriate linkage stuff didn't crash immediately ... when the called routine invoked svc12 there would be a problem ... since (at least) the appropriate linkage information wouldn't have been initialized ... and there would likely be some sort of failure in the return sequence.

There is another glitch specific to the pageable kernel processing. As per:
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?

the pageable kernel mechanism made use of the real I/O support logic. For real I/O, there is a "TRANS" done with the lock option i.e. the page is checked for being in real storage, if not, it is brought in, and then it has its pinned/lock count incremented. For the period that it takes to perform the real i/o operation, the page is pinned in real storage. When the real i/o operation involving that virtual page is complete the pinned/lock count is decremented. Pages aren't eligible for replacement (removal from real storage) if there is a lock count greater than zero. There is also a system failure if a lock count ever goes negative.

So because there could be multiple kernel threads simultaneously executing in a pageable kernel module ... and potentially any of those threads might be suspended for one reason or another ... the SVC8 routine not only does a "TRANS" operation on the pageable routine ... but also specifies the lock option, incrementing the pinned/lock count. The svc12/return processing does a lock count decrement on the page. If the system survived a direct branch to a pageable module (potentially because somebody didn't do a svc8 call for some reason) ... and the called routine returned with an svc12 ... and the svc12 routine didn't fail because of the missing svc8 setup ... then at least when the svc12 routine called the page lock decrement routine ... there would be a high probability that the count would go negative (because it hadn't been appropriately incremented) and the system would fail. The pin/unpin lock increment/decrement occurs regardless of whether it is a constrained or unconstrained environment.
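the svc8/svc12 flow described above, with the boundary check and the TRANS-with-lock pinning, can be sketched as a toy model in python (the boundary value and all names are illustrative, not the actual CP kernel labels):

```python
# Toy model of the svc8 call / svc12 return flow, including the
# pageable-kernel boundary check and TRANS-with-lock pinning.
PAGE = 4096
BOUNDARY = 0x40000     # end of the fixed (non-pageable) kernel

class Kernel:
    def __init__(self):
        self.saveareas = []        # dynamically allocated register saveareas
        self.lock_counts = {}      # page number -> pin count

    def trans_lock(self, addr):
        """Page in (if needed) and pin the page holding addr."""
        page = addr // PAGE
        self.lock_counts[page] = self.lock_counts.get(page, 0) + 1

    def unlock(self, addr):
        page = addr // PAGE
        self.lock_counts[page] -= 1
        assert self.lock_counts[page] >= 0, "negative lock count: system failure"

    def svc8_call(self, target):
        self.saveareas.append({"return_to": target})   # linkage info
        if target > BOUNDARY:          # called routine is in the pageable kernel
            self.trans_lock(target)    # pin it for the life of the call
        return target                  # "branch" to the routine

    def svc12_return(self):
        savearea = self.saveareas.pop()    # deallocate the savearea
        frm = savearea["return_to"]
        if frm > BOUNDARY:                 # returning from a pageable routine
            self.unlock(frm)               # drop the pin

k = Kernel()
k.svc8_call(0x48000)       # pageable routine: paged in and pinned
assert k.lock_counts[0x48000 // PAGE] == 1
k.svc12_return()
assert k.lock_counts[0x48000 // PAGE] == 0
```

a direct branch that skipped svc8 would correspond here to calling `svc12_return` without a matching `svc8_call`: the savearea pop fails, or the unlock assertion trips, which is the negative-lock-count system failure described above.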

the call/return macros were modified to handle calls to BALR routines differently than SVC8/12 routines .... not because of the pageable/non-pageable issue .... but BALR called routines didn't require dynamically allocated storage for register save area ... and of course, BALR called routines couldn't be pageable.

earlier reply with details of calls handled by supervisor calls (in order to have dynamically allocated storage for register save area) as well as description of the BALR call changes:
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?

your concern about in-line code in the calling routine ... and/or in-line code generated by the CALL macro ... doesn't directly concern the support for the pageable kernel ... since the CP kernel requires that all calls (modulo my original changes for selective BALR calls) be performed by the svc8 interrupt routine (the CP kernel calling convention requires the svc8 mechanism to dynamically allocate storage for the register save area ... as well as other misc. housekeeping). The svc8 interrupt routine masks all the housekeeping mechanism associated with the pageable kernel.

however, there is some in-line logic/code with regard to the changes for BALR calls (instead of supervisor interrupt). The call macro contains a list of BALR call routines. So if there is a statement:


CALL DMKFREE

the call macro checks the argument against a table internal to the macro ... and will generate the code:

l    r15,=a(dmkfree)
balr r14,r15

rather than

l    r15,=a(dmkfree)
svc  8

the balr instruction branches directly to dmkfree ... and puts the return address in r14 .... instead of generating a supervisor call.
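the macro's decision can be mimicked in python (the table contents are illustrative guesses; the real macro's internal table listed the actual BALR-callable CP routines):

```python
# Toy expansion of the CALL macro's choice between a BALR and an SVC 8,
# mimicking the macro's internal table of BALR-callable routines.
BALR_ROUTINES = {"DMKFREE", "DMKFRET"}   # illustrative table contents

def expand_call(routine):
    load = f"l    r15,=a({routine.lower()})"
    if routine.upper() in BALR_ROUTINES:
        # direct branch; return address goes in r14, no dynamic savearea
        return [load, "balr r14,r15"]
    # supervisor call does the linkage and savearea allocation
    return [load, "svc  8"]

assert expand_call("DMKFREE") == ["l    r15,=a(dmkfree)", "balr r14,r15"]
assert expand_call("DMKPTRAN")[-1] == "svc  8"
```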

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

New RFC 3514 addresses malicious network traffic

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New RFC 3514 addresses malicious network traffic
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 17:38:39 GMT
Joe Morris writes:
The new RFC is a continuation of the tradition (and thus valid fodder for a.f.c) of equally serious research and developement of network standards that one can find elsewhere in the RFCs. The announcement I received was from the venerable Peter Neumann via RISKS; his message appears below, recast to a 72-character line:

somewhat related ... having missed yesterday:
https://www.garlic.com/~lynn/aepay11.htm#43 Mockapetris agrees w/Lynn on DNS security - (April Fool's day??)

note that while the RFC is out ... i normally update my index based on the corresponding rfc-editor announcements ... which isn't due for at least a couple more hours.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

New RFC 3514 addresses malicious network traffic

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New RFC 3514 addresses malicious network traffic
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 18:01:37 GMT
Anne & Lynn Wheeler writes:

https://www.garlic.com/~lynn/aepay11.htm#43 Mockapetris agrees w/Lynn on DNS security - (April Fool's day??)


the above reference is a joke by (some?) people about my long-standing diatribe in a number of discussion groups regarding domain name infrastructure integrity & SSL:
https://www.garlic.com/~lynn/subpubkey.html#sslcert

having worked on the early SSL stuff for electronic commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

I've observed that the CA infrastructure is now in something of a catch-22 situation. A major purpose for ssl server domain name certificates is to address some integrity issues with the domain name infrastructure. However, when somebody applies to a certification authority for a ssl server domain name certificate, the certification authority has to check with the authority agency for domain name ownership ... which is the domain name infrastructure.

so there are a number of proposals for improving the integrity of the domain name infrastructure ... some of them essentially from the certification authority industry ... so that they can better trust the information that they are certifying. note however, that in improving the integrity of the domain name infrastructure ... they are also reducing the justification for needing SSL server domain name certificates.

past posts observing the catch-22:
https://www.garlic.com/~lynn/aadsmore.htm#client3 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsmore.htm#pkiart2 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm4.htm#5 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm8.htm#softpki6 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#cfppki5 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm13.htm#26 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm13.htm#32 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2001l.html#22 Web of Trust
https://www.garlic.com/~lynn/2001m.html#37 CA Certificate Built Into Browser Confuse Me
https://www.garlic.com/~lynn/2002d.html#47 SSL MITM Attacks
https://www.garlic.com/~lynn/2002j.html#59 SSL integrity guarantees in abscense of client certificates
https://www.garlic.com/~lynn/2002m.html#30 Root certificate definition
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002m.html#65 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2002o.html#10 Are ssl certificates all equally secure?
https://www.garlic.com/~lynn/2002p.html#9 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003.html#66 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003d.html#29 SSL questions
https://www.garlic.com/~lynn/2003d.html#40 Authentification vs Encryption in a system to system interface

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 02 Apr 2003 22:32:58 GMT
minor pageable kernel footnote to this thread:
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?

the cp/67 kernel, vm/370 kernel, and even the cms kernel build all used a modified version of the BPS loader. Basically the appropriate assembly file outputs were batched together with the BPS loader; the BPS loader was invoked, which then read all the files, resolving all their symbols and relocatable fields; creating a memory image of the kernel. The bps loader then branched to an entry point in what was just loaded ... which had the responsibility of finding the disk boot location and writing the memory image out to disk ... with things appropriately patched so that on a disk boot, the process would be exactly reversed.

note the bps (Basic Programming System) loader is possibly the earliest of the "systems" built for the 360 ... and was supposedly targeted at just reading real 80col cards and being able to work in 8K (16K?) real storage configurations.

So when I was hacking the cp/67 kernel originally for pageable kernel ... i was splitting up all these existing routines into little small chunks. I hit a wall with the BPS loader since it had a fixed maximum of 255 symbol table entries ... and all the fiddling had pushed the number of external entry symbols over 255.

As a result I had to redo the fiddling to stay within the 255 limit. As I was doing that, I found out that when the bps loader passed control to the loaded program ... it passed a pointer to its internal symbol table and a count of valid entries in registers. The standard cp process of dealing with kernel debugging was getting character output (real or virtual printer) from the load process and working with it manually. I thought wouldn't it be handy to include the full symbol table with the kernel boot image. So I revised the code that wrote the boot image to disk, to copy the symbol table entries and fake out the system as if the symbol table had been explicitly loaded at the end of the pageable kernel.
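the trick of appending the loader's symbol table to the boot image can be sketched in python (the entry format, names, and addresses here are entirely illustrative, not the BPS loader's actual layout):

```python
# Toy sketch of the boot-image trick: serialize the loader's
# (name, address) symbol table and append it after the kernel image,
# so the symbols are available at boot time for debugging.
import struct

def build_boot_image(kernel_image: bytes, symbols: dict) -> bytes:
    # fixed-width entries, like a loader's in-storage symbol table
    table = b"".join(
        struct.pack(">8sI", name.ljust(8).encode()[:8], addr)
        for name, addr in sorted(symbols.items())
    )
    # append table + entry count so boot code can find it at the end
    return kernel_image + table + struct.pack(">I", len(symbols))

image = build_boot_image(b"\x00" * 64, {"DMKFRE": 0x8000, "DMKPTR": 0xA000})
count = struct.unpack(">I", image[-4:])[0]
assert count == 2
```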

This was never shipped in the cp/67 product. However, in the morphing to VM/370 ... the BPS symbol table size once again became a problem. Rummaging around in the attic/storeroom of 545-tech sq (top floor of the bldg), I ran across an old CSC card cabinet that had the card assembly source for the modified BPS loader being used. I was able to hack that to extend the maximum symbol table size ... in part because there was a whole lot more stuff being put into vm/370 ... besides the symbol table additions necessary to support the programming paradigm for pageable kernel. However, the feature that appended the BPS loader symbol table to the end of the pageable kernel got dropped ... somewhat akin to a lot of the stuff i had done for dynamic adaptive scheduling and resource manager getting dropped:
https://www.garlic.com/~lynn/subtopic.html#545tech
https://www.garlic.com/~lynn/2003f.html#17 History of project maintenance tools

however, later when i was doing this problem determination project for vm/370 ... i reintroduced appending all the loader symbol table entries to the end of the pageable kernel area:
https://www.garlic.com/~lynn/submain.html#dumprx Problem determination, zombies, dump readers

misc. past bps loader postings:
https://www.garlic.com/~lynn/98.html#9 Old Vintage Operating Systems
https://www.garlic.com/~lynn/99.html#135 sysprog shortage - what questions would you ask?
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#26 HELP
https://www.garlic.com/~lynn/2001b.html#27 HELP
https://www.garlic.com/~lynn/2002f.html#47 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002h.html#35 Computers in Science Fiction
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#72 bps loader, was PLX
https://www.garlic.com/~lynn/2002n.html#73 Home mainframes
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#62 cost of crossing kernel/user boundary

misc. past esd & symbol table postings:
https://www.garlic.com/~lynn/2001.html#8 finding object decks with multiple entry points
https://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)
https://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
https://www.garlic.com/~lynn/2001k.html#31 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001m.html#45 Commenting style (was: Call for folklore)
https://www.garlic.com/~lynn/2002f.html#41 Blade architectures
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
https://www.garlic.com/~lynn/2002o.html#25 Early computer games
https://www.garlic.com/~lynn/2002o.html#26 Relocation, was Re: Early computer games

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Ibm's disasters in the 70's

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Ibm's disasters in the 70's
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 22:41:01 GMT
"George R. Gonzalez" writes:
If you're into marketing, then IBM is a superb example of the triumph of marketing over technical excellence.

note that there is significant evidence that the shift from extreme technology-centric to extreme business-centric ... happened with the failure of FS ... misc. refs:
https://www.garlic.com/~lynn/submain.html#futuresys

some people have commented in the past .... that if any of the other computing companies had incurred the expense of a project the magnitude of FS ... and then simply canceled it ... they would have had to declare bankruptcy and go out of business.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

New RFC 3514 addresses malicious network traffic

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: New RFC 3514 addresses malicious network traffic
Newsgroups: alt.folklore.computers
Date: Wed, 02 Apr 2003 22:55:11 GMT
Anne & Lynn Wheeler writes:
note that while the RFC is out ... i normally update my index based on the corresponding rfc-editor announcements which isn't due for at least a couple more hours.

ok, rfc-editor announcement has come thru for 3514, and i've updated my rfc index.

go to:
https://www.garlic.com/~lynn/rfcietff.htm

either 1) click on Term (term->RFC#), then in the RFCs listed by section scroll down to "April1", or 2) in the lower frame click on "3514" and then click on "April1"

which gives you:
April1
3514 3252 3251 3093 3092 3091 2795 2551 2550 2549 2325 2324 2323 2322 2321 2100 1927 1926 1925 1924 1776 1607 1606 1605 1437 1313 1217 1149 1097 852 748


clicking on any of the RFC numbers in the above, gives you the RFC summary. clicking on the ".txt=" field (in the summary) retrieves the actual RFC.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Natl. Crypto Museum was: reviving Multics -- Computer Museum

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Natl. Crypto Museum was: reviving Multics -- Computer Museum
Newsgroups: alt.folklore.computers
Date: Thu, 03 Apr 2003 15:36:13 GMT
eugene@Durgon.Stanford.EDU (Eugene Miya) writes:
Is it really road construction or is it those concrete barracades all with "NSA" spray painted on them to act as vehicle mazes?

going north on the greenbelt, 32east was gone ... lots of stuff being dug up ... it looked like you had to take 32west ... go some distance and do something to get going 32east.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Apr 2003 16:05:48 GMT
Jan C. Vorbrüggen writes:
Thanks for the explanation. I hadn't fully realized that the kernel took an SVC for every call and return - pretty CISCy, if you ask me. OTOH, it sounds a little like Alpha's PAL code: it allows you to modify the semantics of (in this case) call and return at a single centralized point, with the users of this "instruction" being none the wiser. That's a neat feature.

it was originally primarily for transparently allocating storage for register save area and maintaining a thread linkage thru the kernel. for instance, the page fault routine could call the page replacement algorithm, which would schedule a page write, and suspend, when write finished, there would be resume, a page read, and suspend. When read finished, there would be a resume, and finally a return to the page fault handler. During that process, there could have been lots of other page faults calling the same page replacement routine. note, as stated before ... this was the operating system supervisor/kernel ... and there wasn't really random people writing application code that ran in the kernel. For instance, for the release of the resource manager,
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

an automated benchmarking process was developed and 2000 benchmarks were run over a period of three months elapsed time to calibrate and verify the operation (before release as a product):
https://www.garlic.com/~lynn/submain.html#bench
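the call/return convention described above (each kernel call transparently getting a save area linked onto the caller's chain, so a suspended request carries its whole call history with it) can be sketched as follows. this is purely illustrative python, not the actual cp/67 structures:

```python
# Sketch of call/return with dynamically allocated save areas:
# each SVC-style call allocates a save area linked back to the
# caller's, so a suspended request (e.g. waiting on a page I/O)
# carries its entire call chain and can be resumed later.

class SaveArea:
    def __init__(self, regs, caller):
        self.regs = regs          # registers saved at call time
        self.caller = caller      # back-link to caller's save area

def svc_call(chain, regs):
    """Enter a routine: allocate a save area, link it to the chain."""
    return SaveArea(regs, chain)

def svc_return(chain):
    """Leave a routine: recover saved registers, unlink save area."""
    return chain.regs, chain.caller
```

many independent chains can be in flight at once (one per faulting request), which is what made the convention act like lightweight kernel threads.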

three people from csc had brought out a copy of cp/67 to the university the last week in jan. 1968 where i was an undergraduate. between then and the fall '68 share meeting in Atlantic City (in between there was the spring '68 share meeting in houston where they publicly announced cp/67) ... i rewrote a lot of code for optimized pathlength (part of a presentation i made at the fall '68 share meeting):
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?

one of the rewrites reduced the overhead of the svc call/return by over 70 percent. some other pathlengths i improved by a factor of one hundred.

The selective use of BALR linkages ... as mentioned previously, for routines not requiring dynamic save areas ... I did the summer of '69 (boeing had just formed bcs and con'ed me into a summer job helping set up their dataprocessing facilities and teach dataprocessing to some of the technical staff; in the spring they had con'ed me into teaching a one-week dataprocessing class to the technical staff during spring break) ... along with the initial pass at fiddling the console function routines for pageable kernel operation. I also created fairshare scheduling, dynamic adaptive feedback algorithms,
https://www.garlic.com/~lynn/subtopic.html#fairshare
the clock page replacement algorithm (over ten years before the stanford phd thesis on the same), and a different way of measuring real storage size requirements for controlling page thrashing (different from the "working set" stuff that had been recently published)
https://www.garlic.com/~lynn/subtopic.html#wsclock
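for reference, the basic clock (second-chance) replacement idea mentioned above can be sketched in a few lines; this is a generic textbook rendering, not the cp/67 implementation:

```python
# Minimal sketch of clock (second-chance) page replacement.
# Frames are arranged in a circle; a "hand" sweeps around,
# clearing reference bits, and the first frame found with its
# reference bit already clear is the eviction victim.

class Clock:
    def __init__(self, nframes):
        self.pages = [None] * nframes   # page id resident in each frame
        self.ref = [False] * nframes    # simulated hardware reference bit
        self.hand = 0

    def touch(self, page):
        """Access a page; returns True on a fault (replacement done)."""
        if page in self.pages:
            self.ref[self.pages.index(page)] = True
            return False                         # hit
        while self.ref[self.hand]:               # second chance: clear and advance
            self.ref[self.hand] = False
            self.hand = (self.hand + 1) % len(self.pages)
        self.pages[self.hand] = page             # evict victim, install new page
        self.ref[self.hand] = True
        self.hand = (self.hand + 1) % len(self.pages)
        return True                              # fault
```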

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Apr 2003 16:27:15 GMT
Anne & Lynn Wheeler writes:
it was originally primarily for transparently allocating storage for register save area and maintaining a thread linkage thru the kernel. for instance, the page fault routine could call the page replacement algorithm, which would schedule a page write, and suspend, when write finished, there would be resume, a page read, and suspend. When read finished, there would be a resume, and finally a return to the page fault handler. During that process, there could have been lots of other page faults calling the same page replacement routine. note, as stated before ... this was the operating system supervisor/kernel ... and there wasn't really random people writing application code that ran in the kernel. For instance, for the release

while it might seem a little heavy weight for application space inter-routine linkage .... it was effectively supporting lightweight threads thru the operating system core supervisor/kernel. it was slightly heavyweight for calls to subroutines that weren't in danger of being suspended ... but that was mostly fixed when i did the BALR stuff for those calls the summer of '69.

re:
https://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#4 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#10 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#13 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#16 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Apr 2003 20:28:55 GMT
Anne & Lynn Wheeler writes:
Note however, most of the effort I put into adapting CMS code to relocation/floating (read-only) shared segments ... wasn't so much making it address location independent ... but reworking various pieces of code to also make it free of any storage modifications (aka in some vernacular, re-entrant). In addition to various CMS system routines that had to be sanitized, another example that i reworked was browse, fulist, and ios3270; specific reference:
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema


attached is a piece of a write-up that i did in 1978.

"PAM I/O" refers to changes to the cms filesystem to support a memory mapped paradigm.

The original CMS calling convention used supervisor call/interrupt


svc 202

or

svc 202
dc  al4(error)

this was 24bit addressing so the first byte of the al4 address constant would always be zero. On normal return, if there was no zero following the svc 202, it would return at that address. If there was a zero, it would return at that address+4, skipping over the address constant. If there was an error during processing, it would check for a zero following the svc 202 instruction. If there was a zero, it would load the (presumed) address constant and branch to that location. If there was no zero byte, it would assume that there was no application-supplied error exit, abort the process and go to system-defined error processing. For constants embedded in read-only, shared segments, the value would be identical, regardless of the address position of the loaded segment. To support relocatable/floating shared segments, the standard error-handling constant associated with the standard CMS calling process had to be fiddled.
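the zero-byte test described above can be modeled in a few lines (illustrative python; the 2-byte SVC length and big-endian AL4 layout follow the 360 architecture, the function itself is made up):

```python
# Model of the CMS SVC 202 return convention (24-bit addressing):
# the byte right after the 2-byte SVC is zero iff a 4-byte AL4
# error-exit adcon follows, since the top byte of a 24-bit address
# stored in a 4-byte constant is always zero.

def resume_address(memory, svc_addr, error):
    """memory: bytes; svc_addr: address of the SVC 202 instruction."""
    after = svc_addr + 2                    # SVC is a 2-byte instruction
    if memory[after] == 0:                  # AL4 error adcon present
        if error:                           # branch to the error exit
            return int.from_bytes(memory[after:after + 4], 'big')
        return after + 4                    # normal return skips the adcon
    if error:                               # no application error exit
        raise RuntimeError('abend: system error processing')
    return after                            # normal return, no adcon
```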

from long ago and far away ....
Relocatable Shared Segments is part of a large set of shared segment changes done in the early to middle VM/370 Release 2 time frame (Virtual Memory Management (1)). A subset of the function under the heading of Discontiguous Shared Segment was released as part of VM/370 Release 3. The development group sanitized the CP and CMS code before releasing it. Fortunately they did not eliminate the NUCON SVC$202 code. The CMS SVC 202 convention requires a 'DC AL4(address)' following the SVC for an error exit. The nonshared adcon isn't required for the Discontiguous Shared code support but it is mandatory for relocatable, shared code. The SVC$202 allows relocatable shared code to execute SVC 202s and still specify an error exit. The SVC$202 in page 0 is followed by an ERR$202 (an adcon) which is followed by a 'BR R14'. The ERR$202 field can be filled in with a relocated address and a 'BAL R14,SVC$202' executed. On return from the SVC 202 if there is no error, CMS will branch to the 'BR R14'. If there is an error, CMS will branch to the address pointed to by ERR$202. Yorktown Research has also been doing work in this area attempting to eliminate the adcon error exit requirement in conjunction with their Subcommand - Freeload support.

The full shared segment code also allows named shared segments both inside and outside of the virtual machine size. Support for shared modules inside (or outside) of the virtual machine is now running w/o using DMKSNT entries. The IBM Palo Alto HONE VM/370 systems have been using the shared module support for APL since early release 2 of VM. The prototype code was originally written (along with PAM I/O support) for CP/67 in 1972 and 1973 using SNT entries to define the shared module.


... snip ...

note in the following ... the sizes of the files are nearly the same; the number of blocks differs primarily because one is a 4k/page formatted area and the other is an 800-byte formatted area. note that normal CMS processing when loading executables will attempt to read up to 64k bytes in one physical operation (aka 65535 bytes at a time).

... continued ...
Normal FORTHX versus 'fixed' FORTHQ

The following is an excerpt from a terminal session where FORTHX is in normal format on a normal formatted CMS disk. FORTHQ (FORTHQ is an enhanced FORTHX) is in fixed page aligned format on a PAM formatted CMS disk. There is a large difference in the time to LOADMOD essentially the same sized module in the different formats.


q search
LYNN01  191  A    R/W
FORTHQ  5AA  P    R/O - PAM                  => FORTHQ disk
CMS190  190  S    R/O - PAM
CMS19E  19E  Y/S  R/O - PAM
R;
l ifeaab module  (date          => reformatted FORTHQ
FILENAME FILETYPE  FM  FORMAT    RECS BLOCKS   DATE   TIME
IFEAAB   MODULE    P2  F   128  3645   114  10/06/78 15:05
R;

loadmod ifeaab
R; T=0.01/0.14 17:18:32

Now do old formatted module

q search
LYNN01  191  A    R/W
FORTHX  4A8  P    R/O
CMS190  190  S    R/O - PAM
CMS19E  19E  Y/S  R/O - PAM
R; T=0.02/0.07 17:18:50

l ifeaab module  (date          => normal formatted FORTHX
FILENAME FILETYPE  FM  FORMAT    RECS BLOCKS   DATE   TIME
IFEAAB   MODULE    P1  V 65535     9   633   3/11/78  2:11
R; T=0.06/0.09 17:18:57

loadmod ifeaab
R; T=0.07/0.57 17:19:09

The CPU times for LOADMOD'ing the  two files are .01/.14 for
PAM, Fixed, page aligned and .07/.57 for the normal one.

... snip ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

PDP10 and RISC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PDP10 and RISC
Newsgroups: alt.folklore.computers
Date: Fri, 04 Apr 2003 15:29:14 GMT
"J. Clarke" writes:
Hyperthreading on the P4 is not prefetching. It relies on the fact that the pipeline in the P4 is quite long with some duplication of function (remember, it supports predictive branching), much of which experiences a certain amount of idle time. So hyperthreading runs two instructions through the pipeline simultaneously--if neither hits an operation for which the other has the hardware tied up then it is literally processing two instructions simultaneously.

It's not a dual CPU because there is contention for some of the micro-operations (at least that is my understanding) but there is a performance benefit--instead of a dual CPU machine, from a performance viewpoint think of it as a fractional CPU machine--1.5 CPUs say.


this type of thing was worked on 30 years ago for the 370/195 ... which never made it to market. there was a 63-instruction pipeline and any branch (not to an instruction already in the pipeline) drained the pipeline. normal code rarely kept the pipeline more than half full. basically, the implementation duplicated the registers and the psw ... and added a bit to the instructions in the pipeline indicating which i-stream the instruction was associated with.
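a toy model of the dual i-stream idea (duplicated register sets and PSWs feeding one shared execution path, with a per-instruction stream tag); purely illustrative, ignoring the actual pipeline mechanics:

```python
# Toy model of dual i-stream (hardware multithreading): a single
# shared "pipeline" executes instructions tagged with a stream id;
# each stream has its own register file and PSW (here, a counter),
# so two streams interleave without interfering with each other.

def run(streams):
    """streams: list of instruction lists; interleave round-robin."""
    regs = [dict(acc=0) for _ in streams]   # duplicated register sets
    pcs = [0] * len(streams)                # duplicated PSWs (program counters)
    trace = []                              # stream tag per executed instruction
    while any(pc < len(s) for pc, s in zip(pcs, streams)):
        for sid, stream in enumerate(streams):
            if pcs[sid] < len(stream):
                op, val = stream[pcs[sid]]
                if op == 'add':
                    regs[sid]['acc'] += val
                pcs[sid] += 1
                trace.append(sid)
    return [r['acc'] for r in regs], trace
```

the trace shows instructions from both streams flowing through the one execution path, which is the point: one stream's stalls leave slots the other can fill.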

random past refs to 195 dual-istream project:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001c.html#1 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Lisp Machines

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Lisp Machines
Newsgroups: comp.lang.scheme,comp.lang.lisp,comp.arch
Date: Fri, 04 Apr 2003 19:53:04 GMT
Marc Spitzer writes:
The reason for unix's success was that for a small fee the univ got:
1: time sharing os
2: everything you need for a compilers course
3: everything needed for an OS course
4: later a networking course(BSD/IP)

This led to, when startups like Sun and Apollo hired people to build their new desktop computers, the people they hired had much experience hacking on unix and they said "go get us a unix licence". Then ATT realized they had a product and charged accordingly.


also a factor was the appearance of the inexpensive processor industry. previous computer offerings had proprietary operating system offerings ... the expense of creating a proprietary operating system offering could be an order of magnitude more than the cost of developing the new generation of processor offerings.

being able to deploy a (portable) operating system on the processor for a fraction of the hardware development costs (rather than several times the hardware development costs) was significant. being able to pick up people that already had skills in the operating system was an additional characteristic of the emerging portable operating system.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Public Encryption Key

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Public Encryption Key
Newsgroups: sci.crypt,alt.security,comp.security,comp.security.misc
Date: Fri, 04 Apr 2003 20:28:52 GMT
"eric" writes:
Consider the following scenario:

1. A sends B (A, EUKb[M], B)
2. B acknowledges receipt by sending to A (B, EKUa[M],A).

Being a user of the network, an attacker has his own public encryption key and is able to send his own messages to A or to B and to receive theirs.

My question is, in what way can the attacker obtain a message M user A has previously sent to B?


public key technology has two different business processes defined for it: 1) privacy, only the recipient can decode the message; 2) authentication, only the sender can have sent the message

to achieve #1, encrypt the message with the recipient's public key (or more frequently generate a random secret key, encrypt the message with the random secret key, and encrypt the secret key with the recipient's public key). only the recipient with the appropriate private key can decrypt the message.

to achieve #2, encrypt the message with the sender's private key (or more frequently take some trusted/secure hash of the message, and encrypt the hash with the sender's private key). only that sender's public key can decrypt/verify the message. typically it is just the hash that is encrypted with the sender's private key and referred to as a digital signature.
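both processes can be illustrated with textbook RSA using toy primes (completely insecure by construction: no padding, 12-bit modulus; the numbers are the standard textbook example, not from any real system):

```python
# Textbook RSA with toy primes, illustrating the two processes:
# privacy (encrypt with the recipient's public key) and
# authentication (sign with the sender's private key).

p, q = 61, 53
n = p * q                            # modulus, 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse)

def encrypt(m, pub_exp):             # privacy: anyone -> key holder
    return pow(m, pub_exp, n)

def decrypt(c, priv_exp):
    return pow(c, priv_exp, n)

def sign(m, priv_exp):               # authentication: key holder -> anyone
    return pow(m, priv_exp, n)       # in practice a hash of m is signed

def verify(sig, pub_exp):
    return pow(sig, pub_exp, n)
```

the combined case described below is then sign first with the sender's private key, and encrypt message-plus-signature with the recipient's public key.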

previous discussions of two distinct business processes:
https://www.garlic.com/~lynn/aadsm10.htm#keygen Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/2002f.html#9 PKI / CA -- Public Key & Private Key
https://www.garlic.com/~lynn/2003b.html#64 Storing digital IDs on token for use with Outlook

the two can be combined by: first do a digital signature of the message and then encrypt the combination of the original message and the digital signature.

so the vulnerabilities have to do with

1) the sender really having the recipient's real public key before starting the process, and 2) the recipient really having the sender's real public key

so the business process typically comes down to each (sender and recipient) having a table of public keys that traditionally have some trust information conveyed by some out-of-band process.

in the pgp web-of-trust ... the parties exchange public keys and use some additional trusted process to really validate that the keys that have been received are really for the parties.

The traditional certification authority (PKI or CADS) model defines things called certificates ... the body of the certificate has some assertion and a public key; the CA then digitally signs the certificate, certifying the validity of the assertion (ex: an email address or a person's name).

In this scenario .... the sender can create a message, digitally sign it, and then transmit to the recipient: 1) the message, 2) the digital signature and 3) the certificate. The recipient still needs to have a table of public keys (aka like the web-of-trust model) for at least certification authorities (that have been independently validated by some out-of-band trust process) ... allowing the recipient to validate the digital signature of the CA on the certificate.

This addresses the scenario where the recipient has had no prior contact or interface to the sender .... the sender can transmit a spontaneous message to just about anybody. The recipient then can be sure that the message has originated from an entity that matches the assertion in the appended certificate (assuming the recipient has the CA's public key in their trusted public key table).

However, the CA, spontaneous communication paradigm doesn't address the privacy issue. In order for the sender to encrypt the message with the recipient's public key, that recipient's public key needs to have been previously stored in some table kept at the sender. That means that the sender and recipient have had to make some previous contact and exchange information.

The AADS scenario assertion is that for all serious business process communication, the sender and recipient have established some sort of previous business relationship .... making the CA, spontaneous communication model redundant and superfluous.
https://www.garlic.com/~lynn/aadsover.htm

misc. redundant and superfluous postings:
https://www.garlic.com/~lynn/aadsm10.htm#limit Q: Where should do I put a max amount in a X.509v3 certificat e?
https://www.garlic.com/~lynn/aadsm10.htm#limit2 Q: Where should do I put a max amount in a X.509v3 certificate?
https://www.garlic.com/~lynn/aadsm11.htm#39 ALARMED ... Only Mostly Dead ... RIP PKI .. addenda
https://www.garlic.com/~lynn/aadsm11.htm#40 ALARMED ... Only Mostly Dead ... RIP PKI ... part II
https://www.garlic.com/~lynn/aadsm12.htm#22 draft-ietf-pkix-warranty-ext-01
https://www.garlic.com/~lynn/aadsm12.htm#26 I-D ACTION:draft-ietf-pkix-usergroup-01.txt
https://www.garlic.com/~lynn/aadsm12.htm#27 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#28 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#29 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#39 Identification = Payment Transaction?
https://www.garlic.com/~lynn/aadsm12.htm#41 I-D ACTION:draft-ietf-pkix-sim-00.txt
https://www.garlic.com/~lynn/aadsm12.htm#52 First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm12.htm#53 TTPs & AADS Was: First Data Unit Says It's Untangling Authentication
https://www.garlic.com/~lynn/aadsm13.htm#0 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#2 OCSP value proposition
https://www.garlic.com/~lynn/aadsm13.htm#3 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#4 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#5 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#6 OCSP and LDAP
https://www.garlic.com/~lynn/aadsm13.htm#14 A challenge (addenda)
https://www.garlic.com/~lynn/aadsm13.htm#16 A challenge
https://www.garlic.com/~lynn/aadsm13.htm#19 A challenge
https://www.garlic.com/~lynn/aadsm13.htm#20 surrogate/agent addenda (long)
https://www.garlic.com/~lynn/aadsm13.htm#25 Certificate Policies (addenda)
https://www.garlic.com/~lynn/aepay10.htm#46 x9.73 Cryptographic Message Syntax
https://www.garlic.com/~lynn/aepay10.htm#73 Invisible Ink, E-signatures slow to broadly catch on
https://www.garlic.com/~lynn/aepay10.htm#74 Invisible Ink, E-signatures slow to broadly catch on (addenda)
https://www.garlic.com/~lynn/aepay10.htm#78 ssl certs
https://www.garlic.com/~lynn/98.html#0 Account Authority Digital Signature model
https://www.garlic.com/~lynn/99.html#228 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#238 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#240 Attacks on a PKI
https://www.garlic.com/~lynn/2000.html#36 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2000b.html#92 Question regarding authentication implementation
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#47 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#15 Why trust root CAs ?
https://www.garlic.com/~lynn/2000f.html#24 Why trust root CAs ?
https://www.garlic.com/~lynn/2001.html#67 future trends in asymmetric cryptography
https://www.garlic.com/~lynn/2001c.html#8 Server authentication
https://www.garlic.com/~lynn/2001c.html#9 Server authentication
https://www.garlic.com/~lynn/2001c.html#56 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#79 Q: ANSI X9.68 certificate format standard
https://www.garlic.com/~lynn/2001d.html#3 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001d.html#7 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001e.html#35 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001f.html#77 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#65 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#68 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001h.html#3 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#16 Net banking, is it safe???
https://www.garlic.com/~lynn/2002c.html#35 TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
https://www.garlic.com/~lynn/2002d.html#39 PKI Implementation
https://www.garlic.com/~lynn/2002e.html#49 PKI and Relying Parties
https://www.garlic.com/~lynn/2002e.html#56 PKI and Relying Parties
https://www.garlic.com/~lynn/2002e.html#72 Digital certificate varification
https://www.garlic.com/~lynn/2002m.html#16 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#17 A new e-commerce security proposal
https://www.garlic.com/~lynn/2002m.html#55 Beware, Intel to embed digital certificates in Banias
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#30 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#56 Certificate Authority: Industry vs. Government
https://www.garlic.com/~lynn/2002o.html#57 Certificate Authority: Industry vs. Government

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Super Anti War Computers

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Super Anti War Computers
Newsgroups: alt.folklore.computers
Date: Fri, 04 Apr 2003 22:03:02 GMT
Morten Reistad writes:
There are currently 13 DNS root servers operative ([A-M].root-servers.net) of which 10 are answering timely to rightopondia requests right now. Another three are planned for this summer. There are likewise 13 servers for .com and .net, 11 for .mil, and nine for .gov, .edu and .org. Major countries have 5-8 servers for their top domains (but only three for .us and .ca). Even Afghanistan has 4.

Redundancy requirements fall the further from the top you go. For first-level domains there are at least three different instances.

For normal second-level servers a primary and a secondary on separate utilities for power and IP are required, with a third recommended.

This requirement is often violated, leading to glorious single points of failure.


when my wife and i were working on the original payment gateway for what was going to become called electronic commerce:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we had done separate utilities, emergency power, separate internet feeds into different places in the internet backbone ... with different ISPs .... co-lo in different major central exchanges (48v, battery-backed, emergency power, etc). turns out the different telco paths both routed under approximately the same railroad area ... which had some construction one weekend that put both telco paths out of action for that weekend.

we had a little previous background in no-single-point-of-failure having done ha/cmp project/product
https://www.garlic.com/~lynn/subtopic.html#hacmp

and had also coined the terms disaster survivability and geographic survivability ... random past disaster/geographic survivability posts:
https://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/99.html#145 Q: S/390 on PowerPC?
https://www.garlic.com/~lynn/99.html#184 Clustering systems
https://www.garlic.com/~lynn/aadsm2.htm#availability A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aadsm9.htm#pkcs12 A PKI Question: PKCS11-> PKCS12
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/2000g.html#27 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#41 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001i.html#41 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#43 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#13 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001n.html#47 Sysplex Info
https://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#68 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002i.html#24 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002l.html#15 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002m.html#5 Dumb Question - Hardend Site ?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2003.html#38 Calculating expected reliability for designed system

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

unix

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: unix
Newsgroups: alt.folklore.computers
Date: Sat, 05 Apr 2003 17:07:35 GMT
"Jonadab the Unsightly One" writes:
He doesn't mean the same thing by "security" that the security crowd means. When security people say "security", they mean prevention of unauthorised control. Sometimes they also mean prevention of unauthorised access to information, though I prefer to keep that separate as "privacy". But the security that obscurity creates is prevention of the creation of a competing and compatible product. That's a totally different concept, use of the same word notwithstanding.

one of the definitions of security is PAIN ... privacy, authentication, integrity/identification, & non-repudiation. misc. PAIN refs:
https://www.garlic.com/~lynn/aadsm10.htm#cfppki15 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#cfppki17 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#cfppki18 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#paiin PAIIN security glossary & taxonomy
https://www.garlic.com/~lynn/aadsm11.htm#11 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#12 Meaning of Non-repudiation

misc. ref. to merged security glossary & taxonomy:
https://www.garlic.com/~lynn/2003e.html#73 Security Certifications?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

unix

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: unix
Newsgroups: alt.folklore.computers
Date: Sat, 05 Apr 2003 17:12:13 GMT
Steve O'Hara-Smith writes:
(1) Why do people always assume DBMS == RDBMS ? Most of the time if I want a database what I really want is a network database (1:1, 1:n, n:1 and n:m relationships as primitives) preferably with changeable structure.

what i use for maintaining and generating the IETF index
https://www.garlic.com/~lynn/rfcietff.htm
and the various merged glossary/taxonomy has those attributes:
https://www.garlic.com/~lynn/index.html#glossary
https://www.garlic.com/~lynn/index.html#glosnote

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

1130 Games WAS Re: Any DEC 340 Display System Doco ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1130 Games WAS Re: Any DEC 340 Display System Doco ?
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 05 Apr 2003 17:24:06 GMT
"David Wade" writes:
I don 't ever remember playing games on an 1130, but I do recall the one in Newcastle University was often idle.. I think we did have some small games for the older (and therefore I guess more expensive) 1620 though

... 2250-4; 2250 with 1130 as a controller .... somebody at csc had done port of (pdp-1) spacewars; past 2250-4 refs:
https://www.garlic.com/~lynn/2000b.html#67 oddly portable machines
https://www.garlic.com/~lynn/2001b.html#71 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#13 5-player Spacewar?
https://www.garlic.com/~lynn/2002i.html#20 6600 Console was Re: CDC6600 - just how powerful a machine was
https://www.garlic.com/~lynn/2002j.html#22 Computer Terminal Design Over the Years
https://www.garlic.com/~lynn/2002o.html#17 PLX
https://www.garlic.com/~lynn/2002p.html#29 Vector display systems
https://www.garlic.com/~lynn/2003d.html#38 The PDP-1 - games machine?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

inter-block gaps on DASD tracks

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: inter-block gaps on DASD tracks
Newsgroups: bit.listserv.ibm-main,alt.folklore.computer
Date: Sat, 05 Apr 2003 17:54:40 GMT
IBM-MAIN@ISHAM-RESEARCH.COM (Phil Payne) writes:
Controller "upper" and "lower" protocol times were the great hidden secrets. Some controllers used to wander off with the fairies for 10 or more milliseconds at a time.

one of the problems with the transition from 3830 to 3880 controllers was that they went from a (fast) horizontal microcode engine to a relatively slow vertical microcode engine (although it had a separate datapath to handle 3mbyte transfers).

early on ... in order to meet the performance requirement of +/- 10 percent of 3830 ... they did some tricks in the 3880 ... like signalling ce/de to the channel early and doing some of the clean-up after the end of the interrupt. The official "acceptance" test was done with a 2-pack VS1 system.

now since I had the systems in bldg. 14 (engineering) and bldg 15 (product test):
https://www.garlic.com/~lynn/subtopic.html#disk

I got the blame when one monday morning the thruput on the bldg. 15 internal machine went into the crapper. They swore up and down there were absolutely no changes. Well, it turned out that over the weekend they had replaced the 3830 on a string of 16 3330s with an engineering 3880. The problem was that with some modest amount of concurrent and asynchronous activity ... there would be pending requests for the controller. When ce/de came in, the system would immediately redrive the controller with a pending request (the 2-pack VS1 "test" didn't have concurrent activity & pending requests). So just about every SIO was getting CC=1, SM+BUSY, and then have to be redriven again when CUE came in (in effect, every SIO had to be done twice).
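the every-SIO-done-twice effect can be modeled with a toy sketch (python; the names and cost model are my assumptions, not anything from the actual microcode):

```python
# toy model (not IBM code) of the engineering-3880 redrive problem:
# the controller signals CE/DE early and finishes cleanup afterwards,
# so a redrive with a pending request finds it busy (CC=1, SM+BUSY)
# and has to be reissued when CUE finally arrives

def issue_ios(n_ios, pending_at_cede):
    """Count Start I/Os needed to complete n_ios requests.

    pending_at_cede=False models the 2-pack VS1 acceptance test
    (controller always free, one SIO per I/O); True models the
    bldg. 15 workload with queued requests at every CE/DE.
    """
    sios = 0
    for _ in range(n_ios):
        sios += 1                  # initial SIO starts the I/O
        if pending_at_cede:
            sios += 1              # rejected busy, reissued on CUE
    return sios
```

under the concurrent case every I/O costs two SIOs ... the doubling seen that monday morning.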

Fortunately, this was six months before FCS (first-customer-ship) and there was time to do some stuff before it hit customer shops.

The other problem was that I had rewritten multiple channel pathing ... and claimed that the 370 code could almost match the dedicated processors that were handling multi-pathing in the later machines. The problem was a 3880 could have four channel paths .... but if it got hit on a channel path that was different than the path for the most recent I/O .... it went off into la-la land for on the order of a millisecond ... defeating a lot of the dynamics of multiple path load balancing (you were better off with a primary/alternate strategy ... than a dynamic load balancing strategy). Of course, you were up the creek, if it was a shared-disk environment since you didn't have a lot of control of different processors hitting the same controller.
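the primary/alternate vs. dynamic trade-off can be sketched with an equally toy cost model (python; the service time is an assumed value, the ~1ms penalty is the figure from the post):

```python
# toy cost model: a 3880-like controller with four channel paths,
# where taking a different path than the previous I/O costs roughly
# an extra millisecond (the la-la-land reconnect); SERVICE_MS is an
# assumed nominal per-I/O controller time
SWITCH_PENALTY_MS = 1.0
SERVICE_MS = 0.5

def total_time_ms(path_sequence):
    """Total controller time for I/Os issued on the given paths."""
    total, last = 0.0, None
    for path in path_sequence:
        total += SERVICE_MS
        if last is not None and path != last:
            total += SWITCH_PENALTY_MS   # path changed since last I/O
        last = path
    return total

# dynamic load balancing round-robins across all four paths;
# primary/alternate sticks to one path while it works
round_robin = [i % 4 for i in range(100)]
primary = [0] * 100
```

with these numbers the round-robin sequence costs roughly three times the primary/alternate one ... the penalty swamps whatever the balancing gains.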

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

SLAC 370 Pascal compiler found

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SLAC 370 Pascal compiler found
Newsgroups: alt.folklore.computers
Date: Sat, 05 Apr 2003 19:55:53 GMT
Peter Flass writes:
I still can't believe SHARE pulled the plug without doing something to keep their collection safe. The people at SHARE headquarters now don't even know what you're talking about if you mention the SHARE Program Library. Seems to me they were somewhat careless, to be charitable.

I still have hard copy for LLMPS:
https://www.garlic.com/~lynn/2000g.html#0

360d-5.1-004, share contributed program library, submitted 5/26/66

strong rumor was that it was also used as the core which Univ. Of Mich built MTS on ... although LLMPS was just straight vanilla 360 with no support for virtual memory (however lincoln did have two-processor, SMP 360/67 and was the first installation of CP/67 outside of cambridge).

systems built for 360/67 virtual memory .... official tss/360 product, cp/67 from cambridge science center, and michigan terminal system. Boeing (with some? participation from somebody at Brown U? ... vague recollection) ... also modified a version of release 13 MVT to use 67 virtual memory hardware, but not for paging. They had long-running interactive jobs under MVT driving 2250s. MVT had a design issue that storage allocation had to be contiguous ... and there were severe storage fragmentation problems with long-running applications. The MVT13 hack used the '67 virtual memory hardware to provide the appearance of contiguous storage.
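the appearance-of-contiguity trick amounts to building a page table that maps a contiguous virtual range onto whatever scattered real frames happen to be free; a minimal sketch (python stand-in, frame numbers hypothetical):

```python
def build_page_table(free_frames, n_pages):
    """Map a contiguous virtual region of n_pages onto scattered
    real frames; the application sees one unbroken address range
    even though real storage is fragmented."""
    assert len(free_frames) >= n_pages, "not enough free frames"
    return {vpage: free_frames[vpage] for vpage in range(n_pages)}

# fragmented real storage: only scattered frames are free
free = [3, 7, 8, 12, 20]
table = build_page_table(free, 4)
# virtual pages 0..3 are contiguous even though the frames aren't
```

the relocation hardware does the same remapping on every reference, so a long-running 2250 application never notices the fragmentation underneath.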

random other ref:
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#89 TSS/360
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#64 PLX

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

SLAC 370 Pascal compiler found

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SLAC 370 Pascal compiler found
Newsgroups: alt.folklore.computers
Date: Sat, 05 Apr 2003 23:59:26 GMT
Peter Flass writes:
I still can't believe SHARE pulled the plug without doing something to keep their collection safe. The people at SHARE headquarters now don't even know what you're talking about if you mention the SHARE Program Library. Seems to me they were somewhat careless, to be charitable.

as an aside (terminating share program library):
http://www.share.org/share/website/shareweb.nsf/1d99cd65f2badf1e86256b78006ccb15/5c0fbdc229858d7586256bf40051f846?OpenDocument
http://www.share.org/info/POLICY.NSF/85255db800470aa485255d8b004e349a/58754fe6237fd13a852562130010e807?OpenDocument

and some drift:
https://www.share.org/share/website/shareweb.nsf/1d99cd65f2badf1e86256b78006ccb15/925f02a327e482ad86256b84006e3518?OpenDocument

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

ECPS:VM DISPx instructions

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ECPS:VM DISPx instructions..
Newsgroups: alt.folklore.computers
Date: Sun, 06 Apr 2003 03:45:33 GMT
Ivan writes:
Hmmm.. Maybe someone knows here..

Anyone has any precise idea of how the DISPx (E607, E60D, E611) work ? DISP1 & DISP2 really have me baffled, especially because they use the same data & exit lists..


i don't have the documentation anymore .... and exact details have faded with time (work was done almost 30 years ago, and last time i looked at it was over 20 years ago) .... however they represent the exact sequence of the assembler instructions that they are in front of.

it should be something like ... if the initial entry can fast restart the same virtual machine w/o doing anything else, it does; otherwise drops out to do something like call the scheduler and/or handle pending cpexblok. from vague recollection then you would have something like pick a different virtual machine (since wasn't able to restart the previous one) and dispatch it. they can have the same data and exit lists .... even tho one function wouldn't have used all of the same exits. remember .... each of the functions were the migration of specific sequences of 370 code directly into m'code ... and it was possible that the initial m'code function actually would flow (internally) into a subsequent m'code function (and whether it was directly invoked from a new E6, or flowed into from a previous E6, it would expect the same passed structure).
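the fast-restart-or-fall-out flow described above might be sketched like this (python pseudocode; all names are hypothetical stand-ins — the real DISP1/DISP2 semantics are exactly the faded details mentioned above):

```python
class VM:
    """Toy virtual-machine control block (hypothetical stand-in)."""
    def __init__(self, name, runnable):
        self.name, self.runnable = name, runnable

def dispatch(current, run_queue, cpexblok_pending):
    """Sketch of the dispatch entry: fast restart if possible,
    otherwise fall out to the slower path."""
    # fast path: nothing else to do and the same VM can be restarted
    if current.runnable and not cpexblok_pending:
        return current
    # slow path: (sketch) deferred work -- scheduler call, pending
    # cpexbloks -- would be handled here, then pick another VM
    for vm in run_queue:
        if vm is not current and vm.runnable:
            return vm
    return None                      # nothing runnable: wait state
```

the point of the shared data/exit lists is that both the fast and slow paths take the same passed structure, whether entered fresh from an E6 opcode or flowed into from a previous one.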

at boot, dmkcpi would have had an adcon list of all easy-sixers and did a preliminary test to see if the assist was correctly installed ... if not, it would change all the easy-sixers into no-ops.
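that boot-time patching could be sketched roughly as follows (python stand-in; the adcon-list walk is from the description above, the choice of no-op encoding and all names are my assumptions):

```python
NOP = 0x0700   # BCR 0,0 -- a 370 no-op (stand-in; actual patch details assumed)

def patch_assists(memory, adcon_list, assist_works):
    """DMKCPI-style check: if the assist isn't correctly installed,
    overwrite every E6 instruction site from the adcon list with
    no-ops so the plain-370 fallback code executes instead."""
    if assist_works:
        return 0                 # leave the E6 instructions in place
    for addr in adcon_list:
        memory[addr] = NOP       # easy-sixer site becomes a no-op
    return len(adcon_list)
```

with the E6 sites no-op'd, execution simply falls through into the 370 code sequences the assists were migrated from.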

in general for instructions dropped into m'code there was about a 10:1 performance improvement over straight 370 (aka the native m'code engine tended to execute an avg. of ten instructions to emulate each 370 instruction, ecps got almost a one-for-one translation from 370 to native)
https://www.garlic.com/~lynn/94.html#21

the above table has:


dsp+4 to dsp+c84             15105   374     2.18
  asysvm entry until enter prob state
dsp+4 to dsp+214             84674   110     3.61
  main entry to start of 'unstio'
dsp+214 to dsp+8d2           70058    45     1.21
  'unstio' with no calls
dsp+8d2 to dsp+c84           67488   374     9.75
  from 'unstio' end to enter problem state
dsp+93a to dsp+c84           11170   374     1.62
  sch call to entry problem mode

the following table wasn't a measure of the effect of direct translation ... but the overall change for some function with and w/o assist; where the function would have only had a portion actually executing in m'code.

                             -----------------------------------
                            |            Benchmarks             |
                            |-----------------------------------|
                            |                 | DOS/VS and CMS  |
                            |  VS1 Under VM   |   under VM      |
                  |---------|-----------------|-----------------|
                  |Number of|      % Supervisor State Time      |
Function Areas    |Functions|Unassist.|Assist.|Unassist.|Assist.|
------------------|---------|---------|-------|---------|-------|
CCW / CSW Trans.  |    4    |  12.6   |  3.5  |   9.5   |  2.6  |
Dispatching       |    3    |  13.9   |  4.1  |  19.2   |  5.6  |
Free Storage Mgmt |    2    |   4.4   |  1.0  |   5.3   |  1.2  |
Call / Return     |    2    |   4.7   |   .9  |   5.2   |  1.0  |
Virtual Storage   |    4    |   6.8   |  2.2  |   5.3   |  1.7  |
  Mgmt and Locking|         |         |       |         |       |
Control Block     |    2    |   2.2   |   .7  |   2.6   |   .8  |
  Scanning        |         |         |       |         |       |
Trans. Table Mgmt |    2    |   1.3   |   .2  |   2.7   |   .5  |
------------------|---------|---------|-------|---------|-------|
Total             |   19    |  45.9   | 12.6  |  49.8   | 13.4  |
                   ---------------------------------------------

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

unix

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: unix
Newsgroups: alt.folklore.computers
Date: Sun, 06 Apr 2003 04:32:19 GMT
CBFalconer writes:
I haven't heard of any commercial (or free) network databases for a very long time. In these days of memory mapped files and monstrous virtual memories one would think they would be in vogue.

one of the big battles (issues?) that system/r had with the prior generation ... was as much the use of physical pointers vis-a-vis indexes ... as the organization of data. lots of the databases (including network) from the '60s & '70s used physical pointers. one of the arguments against system/r was that replacing the physical pointers with indexes typically doubled the physical space required to house the database.

the issue of network vis-a-vis relational is somewhat orthogonal to network dbms using physical pointers (requiring database manager maintenance) vis-a-vis relational with indexes which tended to hide some amount of the physical management infrastructure.

the issue of memory mapped files and large virtual memories wasn't a whole lot of help for production systems since cache hit management and non-blocking operation is critical (not so much so for demo & academic exercises). Over ten-plus years ago there were pseudo storage-resident databases that did pointer swizzling. If the elements were in real storage ... the pointers could be addresses .... otherwise the pointers were things that caused the dbms to move things in/out of storage.
http://citeseer.nj.nec.com/moss92working.html
http://citeseer.nj.nec.com/white92performance.html
http://www.informatik.uni-trier.de/~ley/db/journals/vldb/KemperK95.html
http://redbook.cs.berkeley.edu/lec26.html
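the swizzling idea — plain addresses when resident, faulting object ids otherwise — can be sketched like this (python; Ref/Store are hypothetical names, not any particular system's API):

```python
class Store:
    """Stand-in for a backing store keyed by object id (OID)."""
    def __init__(self, data):
        self.data, self.loads = data, 0
    def load(self, oid):
        self.loads += 1              # count faults to backing store
        return self.data[oid]

class Ref:
    """A swizzlable reference: starts as an OID, becomes a direct
    in-memory pointer the first time it is dereferenced."""
    def __init__(self, oid, store):
        self.oid, self.store, self.obj = oid, store, None
    def deref(self):
        if self.obj is None:         # unswizzled: fault the object in
            self.obj = self.store.load(self.oid)
        return self.obj              # swizzled: plain pointer from here on
```

after the first deref the reference behaves like a physical pointer; before it, the dbms retains control over what is actually resident.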

recently i ran across a comparison of a major production DBMS configured so that the whole database was physically resident in the DBMS cache vis-a-vis a design for being physically resident .... showing something like 10:1 performance improvement. I can't find it at the moment but ... misc. other refs via search engine:
http://portal.acm.org/citation.cfm?id=266925&dl=ACM&coll=portal
http://citeseer.nj.nec.com/cha95objectoriented.html
http://citeseer.nj.nec.com/447405.html

misc past
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Timesharing TOPS-10 vs. VAX/VMS "task based timesharing"

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Timesharing TOPS-10 vs. VAX/VMS "task based timesharing"
Newsgroups: alt.folklore.computers
Date: Sun, 06 Apr 2003 14:10:33 GMT
Steve O'Hara-Smith writes:
few years ago (ouch make that nearly 10) a typical conversation between a small business owner (or large business department head) and an ISP might go "I've read your brochures and I can see how email might be handy and maybe we could advertise on these discussion groups ?", "Well no actually that's rather frowned upon and will get you a worldwide bad reputation", "Oh, so how do we advertise then", "You could set up a web site, you can do pretty much anything on there", "Sounds good, how do we make people see it ?".

it also enabled flat-rate. they found that 99 percent would just connect maybe once a day (or a couple times a week) and do the upload/download email exchange and then hang up. that activity, in effect, subsidized most of the other activity that was going on at the time. web took another couple years ... and even then it was awhile before it amounted to a significant percentage of the activity.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Any DEC 340 Display System Doco ?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Any DEC 340 Display System Doco ?
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sun, 06 Apr 2003 13:50:26 GMT
als@usenet.thangorodrim.de (Alexander Schreiber) writes:
One of the standard uses for Microsofts Flightsimulator in the days of (MS/PC/DR)-DOS seems to have been as a standardized hardware compatibility test.

after some finagling ... i got a copy of source for early copy of adventure that ran on cms; took some tracking down ... a couple weeks and actually got it via somebody hand carrying it someplace in the UK and then transmitting (this was early '79).

I made the executable available on the internal network and people that made 300 could ask for the source. This was before the 100-move limitation during 1st shift was put in (and even later versions wouldn't move at all during 1st shift)

there were a couple of internal go-arounds .... since some labs found 1/3 to 1/2 their processing time going to playing adventure.

1) claim was that adventure was good example of configurable interactive software

2) total eradication on the system would drive it underground with people having private copies under various pseudonyms.

some labs did have to declare amnesty ... everybody had 24hrs ... and then they had to get back to spending the majority of their time actually working.

past refs:
https://www.garlic.com/~lynn/98.html#56 Earliest memories of "Adventure" & "Trek"
https://www.garlic.com/~lynn/99.html#52 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#83 "Adventure" (early '80s) who wrote it?
https://www.garlic.com/~lynn/99.html#84 "Adventure" (early '80s) who wrote it?
https://www.garlic.com/~lynn/99.html#169 Crowther (pre-Woods) "Colossal Cave"
https://www.garlic.com/~lynn/2000b.html#72 Microsoft boss warns breakup could worsen virus problem
https://www.garlic.com/~lynn/2000d.html#33 Adventure Games (Was: Navy orders supercomputer)
https://www.garlic.com/~lynn/2001m.html#14 adventure ... nearly 20 years
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2001m.html#17 3270 protocol
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2002d.html#12 Mainframers: Take back the light (spotlight, that is)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

ECPS:VM DISPx instructions

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ECPS:VM DISPx instructions..
Newsgroups: alt.folklore.computers
Date: Sun, 06 Apr 2003 14:01:14 GMT
"Glen Herrmannsfeldt" writes:
Is there a manual that describes these? Is it still available?

there was never a customer-available manual .... i used to have a dozen or so copies of the internal m'code detailed spec (at various version levels) .... most had been printed on fanfold paper (and they all had some sort of restricted availability labeling). I can't seem to find any softcopy that I might have accidentally forgotten to erase.

the guy from PASC and I had done the original measurements ... and then I had worked with the manager of endicott assist microprogramming and his two microcode engineers. then the manager and I spent a period of a year or so, off & on, running around the world doing the product dog & pony show to various product managers and market forecasters.

random refs:
https://www.garlic.com/~lynn/submain.html#mcode

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 06 Apr 2003 23:16:08 GMT
jfc@mit.edu (John F. Carr) writes:
I recall reading about ten years ago that the early VM software for (Unix on?) the VAX 11/780 was deliberately simple because the cost of a page fault and pagein was not very high, as measured by instruction count. Simple was faster.

there are several measures of simple.

i had rewritten and optimized the cp/67 code so the pathlength to take a fault, select a replacement page and do the pagein ... was around 500 instructions ... at least 1/4th to 1/5th the pathlength of the next best implementation that i knew of (this included page fault, page replacement algorithm, prorated portion of performing page write on fraction of pages selected for replacement that needed writing, schedule page read, task switch, page read complete, task switch). The next best implementation quoted numbers for an I/O trick with the fixed head drum that kept a continuous i/o operation going ... and the page supervisor just needed to update pointers in the continuous i/o operation ... as opposed to actually scheduling an independent asynchronous i/o operation.

There was also the sleight-of-hand thing with the page replacement algorithm that was a variation of clock (although it preceded the clock phd thesis by 10 years or so) ... where full instruction simulation showed it to be better than true LRU.
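for reference, the basic clock (second-chance) scan — not the author's exact variant, which isn't described here — looks like this (python sketch):

```python
def clock_replace(frames, ref_bits, hand):
    """One pass of the clock algorithm: advance the hand, clearing
    reference bits, until a frame with ref bit 0 is found; that
    frame is the replacement victim.  Returns (victim, new hand)."""
    n = len(frames)
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0       # referenced: give a second chance
            hand = (hand + 1) % n
        else:
            victim = hand            # not referenced since last pass
            hand = (hand + 1) % n
            return victim, hand
```

the attraction is that it approximates LRU with one bit per frame and no list maintenance on every reference ... which is what keeps the fault-path short.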

Turning 150 pages/sec ... 100 page-reads/sec plus 50 page-writes/sec ... was 50,000 instructions ... about 15 percent of a 1/3rd mip processor ('60s). 4341 was about same mip as the 11/780 in the '80 time-frame
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

with a processor three times faster means that the 15 percent could drop to 5 percent of processor to turn 150 page i/o sec (it was actually slightly higher than that because the morph of cp/67 to vm/370 introduced some inefficiencies).
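the arithmetic in the last two paragraphs can be checked directly (python; 1/3 MIP taken as ~333k instructions/sec, the rest are the post's round numbers):

```python
# back-of-envelope check of the paging-overhead numbers in the post
cpu_instr_per_sec = 1_000_000 / 3          # a 1/3 MIP processor
page_ios_per_sec = 150                     # 100 reads + 50 writes
total_instr = 50_000                       # paging instructions per second

# amortized pathlength per page I/O falls in the quoted 300-500 range
per_io = total_instr / page_ios_per_sec    # ~333 instructions

# fraction of the processor consumed at 150 page i/os/sec
fraction = total_instr / cpu_instr_per_sec           # 0.15 -> 15 percent
faster = total_instr / (3 * cpu_instr_per_sec)       # 0.05 -> 5 percent
```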

most of the dynamic adaptive stuff .... had to do with determining when memory was overcommitted and do you need to suspend somebody .... or is memory undercommitted and can more tasks be run concurrently.

the other thing .... was that tasks could be suspended for things other than memory overcommitment. some of the brain dead implementations would do a global sweep of a task's pages at moment of suspension ... even if it wasn't a memory overcommitment. the dynamic adaptive work left the pages around ... if there seemed to be a high probability that the task would resume .... before somebody else might need the real memory locations (this was the reference to both tss/360 and mvs having a deterministic sweep of all pages on task suspend ... it was always done .... even in lots of situations where it wasn't necessary).

before the big pages implementation .... there used to be a joke about how could you tell MVS was heavily paging? ... it was cpu bound.

there is the straight-line stuff ... and then there is the more global stuff ... as an aside ... the amortized overhead of the more global stuff was included in that 300-500 instruction avg.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

unix

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: unix
Newsgroups: comp.os.vms,alt.folklore.computers
Date: Mon, 07 Apr 2003 14:02:18 GMT
Sami S. Sihvonen writes:
IBM has gone Free Software (GNU General Public License) in a big way. Yes, they support the one and only original GNU Project that has been almost 20 years creating Free Software with hacker ethics.

that was the way pretty much everything was in the '60s and much of the '70s. ... the non-charged/charged for software threads:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2001b.html#74 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2002.html#50 Microcode?
https://www.garlic.com/~lynn/2002n.html#3 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002q.html#36 HASP:
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003e.html#18 unix
https://www.garlic.com/~lynn/2003e.html#20 unix
https://www.garlic.com/~lynn/2003e.html#35 unix
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 07 Apr 2003 22:43:41 GMT
Anne & Lynn Wheeler writes:
Turning 150 pages/sec ... 100 page-reads/sec plus 50 page-writes/sec ... was 50,000 instructions ... about 15 percent of a 1/3rd mip processor ('60s). 4341 was about same mip as the 11/780 in the '80 time-frame
https://www.garlic.com/~lynn/2001m.html#15 departmental servers


slightly related is the comparison of disk performance that i started in the late '70s. i highlighted it with comparison of a nominal 360/67 system circa 1968 with CMS workload against a 3081k system circa 1983 with similar CMS workload (15 years later). The basic claim was that number of users, number of page I/Os per second, and number of user/filesystem disk i/os per second should have increased by a factor of 50 (based on cpu and memory resources). instead of going from 80 users to 320 users with subsecond response ... it should have gone to 4,000 users, along with going from 150 page i/os per second to 7,500 page i/os per second and from 100 to 5,000 user i/os per second.

i tweaked the noses of the disk product people by claiming that the relative system performance of disk technology had declined by an order of magnitude during the 15 year period. they assigned their performance organization to prove me wrong .... however after a bit of study, they came back and basically said that i had somewhat understated the situation.
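the claim restated numerically (python sketch using the figures quoted in the previous post):

```python
# figures from the post: 360/67 circa 1968 vs 3081k circa 1983
resource_scale = 50                  # growth in cpu + memory resources
base_users, actual_users = 80, 320   # observed growth was only 4x

expected_users = base_users * resource_scale       # balanced scaling: 4,000

# the gap between expected and observed is better than an order of magnitude
shortfall = expected_users / actual_users          # 12.5x

# same scaling applied to the i/o rates quoted
expected_page_ios = 150 * resource_scale           # 7,500/sec
expected_user_ios = 100 * resource_scale           # 5,000/sec
```

in other words: cpu and memory grew 50x while the disk-bound user count grew 4x — hence the order-of-magnitude relative decline.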

as an aside ... when i started on cp/67 ... it would peak out/saturate at around 80 page i/os per second consuming on the order of 40 percent of the processor related to virtual page stuff. As previously mentioned .... i significantly optimized the pathlength (more like 150 page i/os/sec avg, 15 percent processor) ... but also restructured various pieces to peak out at 300 page i/os/sec. this allowed it to achieve something like 80 concurrent users, with mixed-mode workload with subsecond interactive response.

some of the people at grenoble science center had taken system with lots of my pathlength changes in it and implemented a "traditional" working set dispatcher. with something like 50 percent more non-fixed real storage (154 available 4k pages vis-a-vis 104 available 4k pages) they got about the same thruput and interactive response with 35 users as I was getting with 75-80 users:
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France

lots of repeat of the 3081k discussion:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2003.html#21 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

inter-block gaps on DASD tracks

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: inter-block gaps on DASD tracks
Newsgroups: bit.listserv.ibm-main
Date: Mon, 07 Apr 2003 22:22:27 GMT
pa3efu@YAHOO.COM (Jan Jaeger) writes:
iirc there was no disconnect/reconnect on searches, a search on key for example used to re-fetch the key on every record that was read by the control unit.

to a large extent, set sector was specifically designed so that the channel/controller could disconnect until just before the search needed to be done (with known formats).

vtoc & pds multi-track searches still had to be done. the interesting thing was that they were so horrendous ... that few people really realized it unless running in an environment where they nominally didn't occur.
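
a rough sketch of why the multi-track searches were so horrendous: the channel, controller, and device all stay busy for the duration of the search. the 3330 geometry used here (3600 rpm, 19 tracks per cylinder) is an assumption from public 3330 specs, not from the text:

```python
# multi-track search cost, assuming standard 3330 geometry (3600 rpm,
# 19 tracks/cylinder) -- the channel is locked out for the whole search
RPM = 3600
TRACKS_PER_CYLINDER = 19

ms_per_revolution = 60_000 / RPM                       # one disk revolution
full_cylinder_search_ms = ms_per_revolution * TRACKS_PER_CYLINDER
print(round(ms_per_revolution, 2))                     # 16.67
print(round(full_cylinder_search_ms, 1))               # 316.7 ms channel-busy
```

roughly a third of a second of channel busy per full-cylinder search goes a long way toward explaining the irate phone calls below.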

i've related the tale at san jose research where mvs was on a 168 and vm was on a 158. although the dasd farm was fully interconnected ... there was a strict edict that no MVS (3330) pack could ever be mounted on a VM drive. The few times that it accidentally happened ... the operators would immediately get irate phone calls from the cms users about what had happened to make cms interactive performance go into the crapper (one indication of how inured TSO users are to really terrible performance .... was that you never heard them complaining when there were MVS packs mounted on MVS systems with TSO running ... presumably they just believed that was the normal state of affairs, that you couldn't run TSO w/o MVS ... and you couldn't run MVS w/o MVS packs).

the one incident where the MVS operators refused to react to all the complaints from CMS users about the mis-mounted MVS pack .... we brought up a VS1 (heavily optimized with VM handshaking) and put one of its packs on an MVS string ... and started doing some multi-track searches. Even VS1 on an extremely heavily loaded VM/158 system .... could cause enuf pain to the MVS/168 system that the MVS operators reconsidered their decision about not moving the MVS pack.

past telling of the multi-track search tales:
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#19 OT?
https://www.garlic.com/~lynn/2000f.html#42 IBM 3340 help
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#60 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#6 index searching
https://www.garlic.com/~lynn/2002.html#10 index searching
https://www.garlic.com/~lynn/2002d.html#22 DASD response times
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002g.html#13 Secure Device Drivers
https://www.garlic.com/~lynn/2002l.html#49 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002n.html#50 EXCP
https://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
https://www.garlic.com/~lynn/2003c.html#48 "average" DASD Blocksize

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

ECPS:VM DISPx instructions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ECPS:VM DISPx instructions..
Newsgroups: alt.folklore.computers
Date: Tue, 08 Apr 2003 16:11:44 GMT
ararghNOSPAM writes:
I just looked at a 370/145 listing, and I didn't see anything that resembles a restrictive legend. (It was a copy that I 'rescued' from a trash can, floppys and all)

I wasn't referring to microcode listings ... it was, effectively the architecture (red-book) equivalent for the ECPS instructions, ref:
https://www.garlic.com/~lynn/2003f.html#43 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#47 ECPS:VM DISPx instructions

the ECPS instructions were never even in the principles of operation, aka the esa/390 POP:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CONTENTS?SHELF=
or the z/Architecture POP:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/DZ9ZR000/CONTENTS?DT=20010102160855&SHELF=

the architecture redbook for 360&370 was a cms script file ... which could either be printed as the whole thing (and it would be the architecture redbook) or, with a conditional set, as just the subset that was the 360 (& then 370) principles of operation (w/o all the architecture notes, engineering issues, justifications, trade-offs, unannounced instructions, etc).

redbook comes from the fact that it was distributed in a dark red three-ring binder ... no relationship to the redbooks for customers:
http://www.redbooks.ibm.com/

random past refs to architecture red-book:
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/2000.html#2 Computer of the century
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2000f.html#35 Why IBM use 31 bit addressing not 32 bit?
https://www.garlic.com/~lynn/2001b.html#55 IBM 705 computer manual
https://www.garlic.com/~lynn/2001m.html#39 serialization from the 370 architecture "red-book"
https://www.garlic.com/~lynn/2001n.html#43 IBM 1800
https://www.garlic.com/~lynn/2002b.html#48 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002g.html#52 Spotting BAH Claims to Fame
https://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
https://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe
https://www.garlic.com/~lynn/2002h.html#69 history of CMS
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#69 The problem with installable operating systems
https://www.garlic.com/~lynn/2002m.html#2 Handling variable page sizes?
https://www.garlic.com/~lynn/2003b.html#59 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003d.html#76 reviving Multics
https://www.garlic.com/~lynn/2003f.html#44 unix

misc. past ecps post/refs:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/2000.html#12 I'm overwhelmed
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000g.html#7 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002i.html#80 HONE
https://www.garlic.com/~lynn/2002j.html#5 HONE, xxx#, misc
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#62 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2003.html#4 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#7 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#16 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#61 MIDAS
https://www.garlic.com/~lynn/2003d.html#21 PDP10 and RISC
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 08 Apr 2003 16:42:07 GMT
"Don Chiasson" writes:
The DEC-20 had a timer for last page access (that is the KL processor, first shipped 1975; I don't know about other versions of the CPU) so it was not a patent preventing the implementation in the VAX. Good example of a company not communicating internally: had the VAX folks talked with the 10/20 people, it might have been done.

i think one of the people that worked on CP/67 at brown .... had an ACM article about 1973(?) on a page ref timer .... something like 8 bits for each page that would (hardware) tick/decrement for every page on a regular interval (instead of a ref. bit that was hardware set & software reset). This was somewhat akin to the 1, 2, 3, & 4 bit article published out of the project mac/multics group.

the issue with clock ... was that it was doubly, naturally, dynamically adaptive .... the interval between resets was related to how fast the hand cycled all pages, and how fast it cycled was related to how fast pages were being used and the demand for pages. It cycled faster/slower based on demand for pages. It also cycled faster/slower based on how many pages were being used in a cycle. Therefore it was naturally adjusting feedback within a broad range of normal operating environments. The two extremes were .... demand for pages so low that all bits are set ... and page thrashing that needed additional help.
https://www.garlic.com/~lynn/subtopic.html#wsclock
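
a minimal sketch of the global clock idea (illustrative only -- not the actual cp/67 or vm/370 code): the hand cycles over all real-storage frames; a set reference bit buys the page another interval, a clear bit makes it the victim:

```python
# minimal sketch of a global clock sweep (illustration, not CP/67 code)
class Frame:
    def __init__(self, page):
        self.page = page
        self.referenced = False   # hardware sets it, the sweep resets it

def clock_select(frames, hand):
    """return (victim index, new hand position), resetting bits as it sweeps"""
    while frames[hand].referenced:
        frames[hand].referenced = False        # reset; page gets another chance
        hand = (hand + 1) % len(frames)
    return hand, (hand + 1) % len(frames)

frames = [Frame(p) for p in "abcd"]
frames[0].referenced = frames[1].referenced = True
victim, hand = clock_select(frames, 0)
print(frames[victim].page)   # 'c' -- first frame past the hand with a clear bit
```

the history information per page is exactly the interval between the hand's reset and its next visit, which is what makes the sweep rate the adaptive knob.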

As an aside, standard clock (&LRU) pathologically degenerates to FIFO. My sleight-of-hand two-bit variant ... looked exactly like a normal clock algorithm (the instructions looked exactly the same and the pathlength effectively was exactly the same) ... but had the interesting side-effect that it degenerated to random rather than FIFO (which a straight LRU does) ... w/o actually having any explicit pathlength or instructions that invoked any randomization.

in the 71/72 time-frame when CSC was capturing page traces ... and, for something like vs/repack, full instruction traces .... and feeding them into a replacement algorithm simulator .... the clock-hack that degenerated to random .... would always outperform straight clock, and with slight tweaks ... also always outperform simulated, true LRU.

This is somewhat the claim from my youth where I liked to do extreme pathlength optimizations .... and the most extreme was to have something done in zero instructions .... typically via some peculiar side-effect. It had the downside that it was frequently totally opaque to anybody else that might still be maintaining the code 10-15 years later.

as an aside ... some number of the vm/370 group migrated to dec/vax development ... after the burlington mall site was shutdown (as opposed to moving to POK) ... but that didn't happen until late '76.

some past discussions of making LRU clock degenerate to random rather than fifo:
https://www.garlic.com/~lynn/2000f.html#9 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#32 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2001f.html#55 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2002j.html#31 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002j.html#32 Latency benchmark (was HP Itanium2 benchmarks)
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#10 lru, clock, random & dynamic adaptive
https://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/98.html#17 S/360 operating systems geneaology

vs/repack topic drift:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing

and a whole lot of drift with burlington mall:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/98.html#7 DOS is Stolen!
https://www.garlic.com/~lynn/99.html#179 S/360 history
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000b.html#55 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002e.html#27 moving on
https://www.garlic.com/~lynn/2002h.html#34 Computers in Science Fiction
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002m.html#9 DOS history question
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#14 Multics on emulated systems?
https://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

ECPS:VM DISPx instructions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ECPS:VM DISPx instructions..
Newsgroups: alt.folklore.computers
Date: Tue, 08 Apr 2003 17:03:36 GMT
"Glen Herrmannsfeldt" writes:
There are some manuals describing things that aren't in the POP, though. SIE has its own manual, and there is another for the OS assists. I don't happen to remember the number for either, though. So it wouldn't seem too unusual for a manual for the ECPS instructions.

i worked with endicott on it ... and every one that i saw on ecps was labeled with restrictions. SIE survived into another time & place ... while ECPS pretty much died out.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

Alpha performance, why?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Alpha performance, why?
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 08 Apr 2003 19:44:50 GMT
"Don Chiasson" writes:
As I remember it, the KL clock was single speed, and you could crash TOPS20 by making a working set larger than physical memory then touching every page faster than a clock tick. The operating system could not find a page to swap out and it hung till the front end processor detected a hung back end and forced a reboot.

the clock that i did in the 60s .... and the stanford phd thesis on clock (over ten years later) .... didn't tick a hardware clock. The clock reference comes from the fact that the RRB operation swept around the pages in real storage ... somewhat analogous to the hands on an analog clock. The amount of history information represented by a single reference bit was the interval between the "hand" resetting the bit to zero and the time the page was examined again. If the page had an opportunity to have been used, then it would have its reference bit set when it was next examined. The hand sped up based on either/both 1) the frequency of page faults and/or 2) most of the examined pages having their bits set. The hand slowed down if the frequency of page faults slowed down and/or lots of pages didn't have their bits set.

In some sense, within a broad operating range, it was a naturally self-correcting system. If page faults were happening too fast, and the hand started to sweep too fast, then the interval between sweeps would decrease, pages would have less chance of being referenced, the hand would find a higher percentage of pages to replace, and it would therefore slow down.
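
a toy simulation of that self-correcting behavior (all parameters are made up): the hand only advances when a replacement is needed, so a higher page fault rate produces proportionally more full sweeps of real storage:

```python
# toy simulation: hand sweep rate tracks the demand for pages
import random
random.seed(1)

def full_sweeps(n_frames, fault_prob, steps):
    """count full hand revolutions over `steps` simulated references"""
    ref = [False] * n_frames
    hand = sweeps = 0

    def advance():
        nonlocal hand, sweeps
        hand = (hand + 1) % n_frames
        if hand == 0:
            sweeps += 1

    for _ in range(steps):
        ref[random.randrange(n_frames)] = True     # some page gets touched
        if random.random() < fault_prob:           # page fault: run the hand
            while ref[hand]:                       # reset bits until a victim
                ref[hand] = False
                advance()
            advance()                              # move past the victim
    return sweeps

light = full_sweeps(64, 0.05, 5000)   # low demand for pages
heavy = full_sweeps(64, 0.50, 5000)   # high demand for pages
print(light, heavy)                   # the hand sweeps faster under load
```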

page-thrashing was a characteristic of spending the majority of the time waiting for page i/os to complete ... not directly a characteristic of the hand sweep interval. if there were an infinitely fast page i/o operation, and a zero overhead page replacement infrastructure ... then there might not be any system page thrashing .... regardless of the interval of the hand sweep.

as an aside .... these clock algorithms were global LRU replacement ... i.e. swept all real pages.

the traditional working set paper, first published at the same time I was doing the original clock .... used local LRU and a fixed wall-clock timer ... and was not a natively, dynamically self-correcting system.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm

ECPS:VM DISPx instructions

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ECPS:VM DISPx instructions..
Newsgroups: alt.folklore.computers
Date: Tue, 08 Apr 2003 19:31:07 GMT
"Glen Herrmannsfeldt" writes:
Some things, I think ECPS related, are in the VM/PC manual. I think different than the ones discussed here, though.

vm/pc (washington) was 7-9 years after vigil/tulley (& ecps).

remember the low-end machines were vertical m'code and were doing something like ten native micro-engine instructions per 370 instruction (something akin to the current generation of 370 simulators running on intel platforms). the high-end machines were horizontal m'coded machines ... and typically measured in avg. machine cycles per 370 instruction; ... aka the 370/165 averaged 2.1 machine cycles per 370 instruction, the 370/168 got it down to 1.6 machine cycles per 370 instruction ... and the 3033 (which started out simply as the 168 wiring diagram remapped to faster chip technology) got it down to 1 machine cycle per 370 instruction.
https://www.garlic.com/~lynn/submain.html#mcode
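
the per-370-instruction costs quoted above, normalized to the 3033, with the simplifying assumption that one vertical micro-instruction costs roughly one machine cycle:

```python
# relative cost of executing one 370 instruction; ratios from the text,
# the micro-instruction ~= machine-cycle equivalence is an assumption
cost = {
    "low-end vertical m'code": 10.0,   # ~10 micro-instructions per 370 instr
    "370/165": 2.1,                    # machine cycles per 370 instruction
    "370/168": 1.6,
    "3033": 1.0,
}
for name, c in cost.items():
    print(f"{name}: {c / cost['3033']:.1f}x the 3033 cost per instruction")
```

which is the quantitative reason moving kernel paths into microcode paid off on the low end, and stopped paying off once instructions were in silicon.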

the fort knox stuff was targeted at replacing the low & mid range microengines (370, controllers, rochester products, etc), with 801. part of the stuff that aborted fort knox was that the mid-range was starting to implement 370 directly in silicon. the 4341 follow-on (4381) had just about all of 370 instructions directly in silicon.
https://www.garlic.com/~lynn/subtopic.html#801

... digression warning ... for VM (virtual machine simulator) there were two types of "overhead"

1) traditional kernel overhead for managing machine resources, and 2) simulation of privileged instructions where the virtual machine definition was slightly modified from the real machine definition.

Most of ECPS was of type #1 ... although there were some additional 370 privileged instructions that were modified to be virtual machine sensitive ... i.e. the native hardware implementation of the 370 instruction was different in real machine mode than in virtual machine mode. In any case, as later generations of mainframe hardware had instructions directly implemented in hardware, there was much less benefit to moving 370 instructions into microcode (since there was little or no difference in execution time).

The first instance of type #2 was implemented on the 370/158 as microcode changes for certain privileged instructions (and predates ECPS). Basically VM would load a control block pointer into CR6. If real CR6 was zero, the hardware microcode would execute real machine operations as defined by the POP. If real CR6 contained a VM control block pointer, then the microcode would implement various real machine operations as per the virtual machine restrictions. This continued to be of significant benefit, even on later generations of machines .... since it was a slight bump on the native m'code implementation (compared to interrupting into the kernel, saving state, simulating the instruction from scratch, and restoring state).
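
a toy model of the CR6 convention described above (illustration only; the names and structure are hypothetical, not the real 370/158 microcode):

```python
# toy model of the CR6 dispatch: privileged-instruction microcode checks a
# control register -- zero means real-machine semantics per the POP, nonzero
# points at a virtual-machine control block and selects VM semantics
class Machine:
    def __init__(self):
        self.cr6 = 0   # zero => real-machine mode

    def privileged_op(self, name):
        if self.cr6 == 0:
            return f"{name}: real-machine execution per POP"
        return f"{name}: virtual-machine execution via control block {self.cr6:#x}"

m = Machine()
print(m.privileged_op("LPSW"))   # real-machine path
m.cr6 = 0x1000                   # VM loads a control block pointer into CR6
print(m.privileged_op("LPSW"))   # virtual-machine path, no kernel interrupt
```

the point of the convention is the second path: the instruction completes in microcode instead of trapping to the kernel for simulation.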

The ultimate of this was the SIE instruction ... which sort-of swapped the situation ... rather than specific privileged instructions checking to see if real CR6 had a value in it, the SIE instruction basically put the real hardware into virtual machine mode (and had a pointer to a virtual machine control block with all the information necessary to operate in virtual machine mode). Basically, SIE was a hardware architected feature, somewhat analogous to the way virtual memory and virtual memory control blocks are hardware architected features. Furthermore, in theory, SIE can be used by any operating system (in the same way different operating systems can utilize the same virtual memory hardware). Whereas the easy-six ECPS instructions were very kernel specific.

This was further extended with LPARs (logical partitions) .... which effectively put a restrictive subset of the vm kernel function into the native microcode of the machine ... and the normal machine operation became pseudo virtual machine mode (or logical partitions).

... warning about different digression ...
the follow-on to xt/at/370 (vm/pc) was a74. and for some reason, i did find a set of my updates for a74 ... including misc. changes to my page-mapped filesystem/mmap api for running on a74 ... from some unknown time warp, long ago and far away:


 405  dmkcfi.updta74
 405  dmkdsd.updta74
 405  dmkium.updta74
1134  dmkmov.updta74
 405  dmkser.updta74
2025  dmkpam.updta74
1377  dmkmch.updta74
3564  dmkcpi.updta74
1215  dmkpgv.updta74
 162  dmkpgr.updta74
4374  dmkpgt.updtdbg
4455  dmkpgt.savea74

past pc/xt/at/370, washington, & a74 posts.
https://www.garlic.com/~lynn/94.html#42 bloat
https://www.garlic.com/~lynn/96.html#23 Old IBM's
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000.html#29 Operating systems, guest and actual
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000e.html#52 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#55 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001b.html#69 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#89 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001f.html#28 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001i.html#19 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#51 DARPA was: Short Watson Biography
https://www.garlic.com/~lynn/2001k.html#24 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#44 Blade architectures
https://www.garlic.com/~lynn/2002f.html#49 Blade architectures
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2002f.html#52 Mainframes and "mini-computers"
https://www.garlic.com/~lynn/2002h.html#50 crossreferenced program code listings
https://www.garlic.com/~lynn/2002i.html#76 HONE was .. Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002l.html#27 End of Moore's law and how it can influence job market
https://www.garlic.com/~lynn/2003f.html#8 Alpha performance, why?

past lpar, sie postings:
https://www.garlic.com/~lynn/94.html#37 SIE instruction (S/390)
https://www.garlic.com/~lynn/98.html#45 Why can't more CPUs virtualize themselves?
https://www.garlic.com/~lynn/98.html#57 Reliability and SMPs
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#62 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000c.html#50 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#3 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2001b.html#72 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001d.html#67 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#61 Estimate JCL overhead
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#33 D
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001m.html#53 TSS/360
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#32 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#53 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002e.html#25 Crazy idea: has it been done?
https://www.garlic.com/~lynn/2002e.html#75 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002n.html#6 Tweaking old computers?
https://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#28 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#0 Home mainframes
https://www.garlic.com/~lynn/2002o.html#15 Home mainframes
https://www.garlic.com/~lynn/2002o.html#16 Home mainframes
https://www.garlic.com/~lynn/2002o.html#18 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#4 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002p.html#40 Linux paging
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2002p.html#55 Running z/VM 4.3 in LPAR & guest v-r or v=f
https://www.garlic.com/~lynn/2002q.html#26 LISTSERV Discussion List For USS Questions?
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#7 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#14 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#15 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#41 How much overhead is "running another MVS LPAR" ?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
Internet trivia 20th anv https://www.garlic.com/~lynn/rfcietff.htm
