List of Archived Posts

2006 Newsgroup Postings (02/01 - 02/12)

IBM 610 workstation computer
IBM 610 workstation computer
Mount a tape
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
Mount a tape
Free to good home: IBM RT UNIX
Is there a workaround for Thunderbird in a corporate environment?
IBM 3090/VM Humor
IBM 610 workstation computer
IBM 610 workstation computer
Change in computers as a hobbiest
Expanded Storage
{SPAM?} Re: Expanded Storage
{SPAM?} Re: Expanded Storage
{SPAM?} Re: Expanded Storage
{SPAM?} Re: Expanded Storage
IBM 3090/VM Humor
Seeking Info on XDS Sigma 7 APL
IBM 3090/VM Humor
Would multi-core replace SMPs?
Seeking Info on XDS Sigma 7 APL
Seeking Info on XDS Sigma 7 APL
Multiple address spaces
Multiple address spaces
IBM 610 workstation computer
Multiple address spaces
IBM 610 workstation computer
Empires and Imperialism
Seeking Info on XDS Sigma 7 APL
Multiple address spaces
IBM 610 workstation computer
Multiple address spaces
Seeking Info on XDS Sigma 7 APL
Multiple address spaces
X.509 and ssh
blast from the past ... macrocode
another blast from the past
another blast from the past ... VAMPS

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Wed, 01 Feb 2006 12:33:01 -0700
greymaus writes:
The vision is of holding a tiger by the tail. There was a thing on TV about the Taiwanese electronics industry; a guy was interviewed who worked at making moulds (forms) for plastic parts. He had been working at one project for over two days nonstop to get it finished on time. What can you say?... Also your message explains how so much stuff is sold as remainders of lines.

when i was an undergraduate, i would regularly get the machine room to myself from 8am sat. until 8am monday ... and would work 48hrs non-stop and then go off to class (sometimes making for nearly a 60hr day).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Thu, 02 Feb 2006 09:34:43 -0700
greymaus writes:
And then you grew up! What machine?

misc. past posts:
https://www.garlic.com/~lynn/93.html#15 unit record & other controllers
https://www.garlic.com/~lynn/93.html#17 unit record & other controllers
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/95.html#4 1401 overlap instructions
https://www.garlic.com/~lynn/97.html#21 IBM 1401's claim to fame
https://www.garlic.com/~lynn/98.html#9 ** Old Vintage Operating Systems **
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#59 Living legends
https://www.garlic.com/~lynn/99.html#130 early hardware
https://www.garlic.com/~lynn/2000.html#79 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#11 IBM 1460
https://www.garlic.com/~lynn/2000d.html#34 Assembly language formatting on IBM systems
https://www.garlic.com/~lynn/2001.html#11 IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001b.html#22 HELP
https://www.garlic.com/~lynn/2001b.html#27 HELP
https://www.garlic.com/~lynn/2001k.html#31 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2002b.html#13 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#15 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002f.html#47 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002f.html#48 How Long have you worked with MF's ? (poll)
https://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
https://www.garlic.com/~lynn/2002o.html#19 The Hitchhiker's Guide to the Mainframe
https://www.garlic.com/~lynn/2002q.html#29 Collating on the S/360-2540 card reader?
https://www.garlic.com/~lynn/2003h.html#30 Hardware support of "new" instructions
https://www.garlic.com/~lynn/2003i.html#8 A Dark Day
https://www.garlic.com/~lynn/2003i.html#12 Which monitor for Fujitsu Micro 16s?
https://www.garlic.com/~lynn/2003i.html#51 Oldest running software
https://www.garlic.com/~lynn/2003n.html#41 When nerds were nerds
https://www.garlic.com/~lynn/2004d.html#10 IBM 360 memory
https://www.garlic.com/~lynn/2004f.html#49 can a program be run withour main memory?
https://www.garlic.com/~lynn/2004g.html#39 spool
https://www.garlic.com/~lynn/2004k.html#40 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004q.html#66 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2005c.html#54 12-2-9 REP & 47F0
https://www.garlic.com/~lynn/2005g.html#52 Software for IBM 360/30
https://www.garlic.com/~lynn/2005l.html#34 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005n.html#3 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mount a tape

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mount a tape
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 02 Feb 2006 15:06:19 -0700
swiegand@ibm-main.lst (Stephen M. Wiegand) writes:
I was trying to stay out of this thread because I thought it was a homework or some other such question but having seen some of the responses, I have an urgent need to add my thoughts. When I first saw the question, I was thinking the person wanted to know how to physically mount a tape. After all he didn't ask about JCL or a program or a console command. That got me to thinking about how I learned back in the barely light years. (They didn't have computers in the dark ages). In my very first programming job we had an IBM 370-?? CPU with DOS and Power operating systems. We didn't use tape much in our shop but we had to get a drive because IBM in those days sent everything out on tape. So we had some unit (don't remember the numerals) that had two spindles and a sliding glass door. It definitely had a vacuum load because you could hear it sucking when you readied a tape. Having never been exposed to this hardware in college and having to come in on weekends to load operating system and other IBM software without the presence of an operator, I had to learn how to "mount a tape". I remember you had to press a button to open the doors, mount the tape reel on a spindle or hub (whatever you call that thingy in the middle), thread the tape across the heads and through another path. Press another button and the doors closed, the vacuum sucked the tape up the rest of the way and onto the take-up reel and readied the tape at the first readable block, which might be the label if it was a labeled tape.

my first programming job was to implement a 360/30 version of 1401 mpio (rather than running the 360/30 in 1401 emulation mode). the 1401 was used as unit record<->tape frontend for the 709; it loaded cards onto tape, the tape was physically moved from the 1401 drive to a 709 drive, the 709 ran, outputting to a new tape, and that tape went from the 709 drive back to a 1401 tape drive to produce whatever print & punch output.

tapes came in canisters which had to be opened and the reel of tape removed. tape drives had full-sized swing-open doors that you manually opened; you had to mount the reel and then manually feed the tape to the take-up reel (somewhat similar to the old open-reel audio tape).

later they had those straps that just wrapped around the reel of tape, instead of canisters that completely enclosed the reel.

later still you got straps that didn't have to be removed; there were new drives that would open the strap a smidgen and feed the tape from the reel.

for the 360/30 mpio version i got to design and implement my own monitor, interrupt handlers, device drivers, error recovery, storage allocation, multitasking, etc. i could handle card to one tape ... while concurrently processing another tape to printer.
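
purely as illustration (the original was 360/30 assembler, long gone) ... a toy sketch in c of the kind of round-robin monitor that interleaves two transfer streams so neither device sits idle. the file names and the one-record "time slice" are made up for the example, and the devices are just stdio streams rather than real interrupt-driven hardware:

  /* toy sketch: a round-robin monitor multiplexing two copy streams,
     roughly the way the mpio rewrite overlapped card->tape with
     tape->printer.  devices simulated with ordinary stdio streams;
     the real thing was interrupt driven. */
  #include <stdio.h>

  #define RECSZ 80                   /* one card image / print line */

  struct task {
      FILE *in, *out;                /* simulated input/output devices */
      char  buf[RECSZ + 2];
      int   done;
  };

  /* one "time slice": move a single record, then yield */
  static void step(struct task *t)
  {
      if (t->done) return;
      if (!t->in || !t->out || !fgets(t->buf, sizeof t->buf, t->in))
          t->done = 1;               /* end of input (or open failed) */
      else
          fputs(t->buf, t->out);     /* write the record to output    */
  }

  int main(void)
  {
      /* stream 0: "card reader" -> "tape"; stream 1: "tape" -> "printer" */
      struct task tasks[2] = {
          { .in = fopen("cards.txt", "r"), .out = fopen("tape1.out", "w") },
          { .in = fopen("tape2.txt", "r"), .out = stdout },
      };
      int active = 2;

      while (active > 0)             /* round-robin dispatch loop */
          for (int i = 0; i < 2; i++)
              if (!tasks[i].done) {
                  step(&tasks[i]);
                  if (tasks[i].done) active--;
              }
      return 0;
  }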

recent posting with lots of references to MPIO activity:
https://www.garlic.com/~lynn/2006b.html#0
https://www.garlic.com/~lynn/2006b.html#1

much later i had implemented a backup/archive system that i deployed on a number of internal systems (originally mostly using 6250bpi tape). eventually it made it out the door as workstation datasave, which subsequently morphed into adsm and is now called tsm.
https://www.garlic.com/~lynn/submain.html#backup

the 360 green card had tape ccws. several years ago, an ios3270 version of the green card was done (ios3270 was a full-screen menu app on cms from the 70s; lots of people saw it as the service processor menus on the 3090 ... the 3090 service processor was a pair of 4361s running a highly customized version of vm370 release 6 with all the menu screens done in ios3270). the 360/67 "blue" card also had sense bit definitions for a number of devices. i had added some of that sense information (including tape) to the ios3270 gcard. recently i did a rough cut at translating the ios3270 gcard to html
https://www.garlic.com/~lynn/gcard.html
mag tape ccws:
https://www.garlic.com/~lynn/gcard.html#25
sense data
https://www.garlic.com/~lynn/gcard.html#17

recent gcard ref:
https://www.garlic.com/~lynn/2006.html#0 EREP, sense ... manual
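
for anybody that never handled the physical card ... a ccw is 8 bytes: a command code, a 24-bit data address, flag bits, and a 16-bit count. a small c sketch of the layout and a two-ccw tape channel program (only a few of the 2400 tape command codes are shown, the data address and count are made up for the example, and nothing here actually drives a channel):

  /* sketch of the s/360 ccw layout documented on the green card:
     command code, 24-bit data address, flags, 16-bit byte count.
     illustration only -- it just prints the program it builds. */
  #include <stdint.h>
  #include <stdio.h>

  struct ccw {
      uint8_t  cmd;        /* command code                       */
      uint8_t  addr[3];    /* 24-bit data address (big-endian)   */
      uint8_t  flags;      /* CD=0x80, CC=0x40, SLI=0x20, ...    */
      uint8_t  unused;
      uint16_t count;      /* byte count                         */
  };

  enum {                   /* a few 2400 tape command codes      */
      TAPE_WRITE  = 0x01,
      TAPE_READ   = 0x02,
      TAPE_SENSE  = 0x04,
      TAPE_REWIND = 0x07,
  };
  enum { CC_CHAIN = 0x40 };   /* command chaining to the next ccw */

  int main(void)
  {
      /* channel program: rewind, then read one 80-byte record */
      struct ccw prog[2] = {
          { TAPE_REWIND, {0, 0, 0},          CC_CHAIN, 0, 0  },
          { TAPE_READ,   {0x00, 0x20, 0x00}, 0,        0, 80 },
      };
      for (int i = 0; i < 2; i++)
          printf("ccw %d: cmd=%02x flags=%02x count=%u\n",
                 i, prog[i].cmd, prog[i].flags, prog[i].count);
      return 0;
  }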

picture of 2311 7mbyte disk drives in the foreground and a couple of 2400 tape drives in the left middle (the picture also shows a drum in the upper middle, behind the tape drives):
http://ftp.columbia.edu/acis/history/2311.html

the whole front of the tape drive was a door that opened. the tape reel was mounted on the hub and fed thru the heads and onto the take-up reel. the reels had a small indented finger depression ... you would wind the tape around the take-up reel once (until it had overlapped and friction would keep it from slipping) and then you would spin the take-up reel several times (index finger in the finger depression) ... getting the tape positioned past the small strip of reflective foil that marked the start of tape. you closed the door and hit rewind/ready. that would spin tape off the take-up reel until the heads sensed the reflective foil (if you hadn't manually spun the tape past the reflective foil, all the tape would come off the take-up reel and you would have to feed it again from the start).

the hub in the middle of the tape reel had a handle that pulled out to release and/or lock the tape reel on the hub.

earlier 701 tape drive
http://ftp.columbia.edu/acis/history/701-tape.html

later 3420 tape drive
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3420.html

the mounted tape is on the right with the (white) auto-strap still around the reel. this had a small clasp that the tape drive opened, pulling the tape thru the opening and feeding it under the heads and onto the take-up reel.

another picture of 3420 tape drive (on ebay, picture isn't likely to be around for long)
http://cgi.ebay.com/Vintage-LIKE-NEW-IBM-3420-8-MAGNETIC-TAPE-DRIVE-Rare_W0QQitemZ5217468698QQcategoryZ74946QQcmdZViewItem

here is a closeup of the white strap around a tape reel, with the clasp on the left that the drive could automatically open to feed the tape.
http://ftp.columbia.edu/acis/history/media.html

picture of 360/30 with tape drives on the left and 2314 disks on the right.
http://ed-thelen.org/comp-hist/vs-ibm-360-30.jpg

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Thu, 02 Feb 2006 15:37:55 -0700
hancock4 writes:
I think he said when he was an undergraduate, so it was in college, not in high school, a bit older than a teenager.

when i started my first undergraduate student programming job (got to design and implement my own monitor, interrupt handlers, device drivers, pull 48hr weekend shifts, have the whole machine room to myself, and then go to class, etc) ... a little more detail recently x-posted from ibm-main n.g.
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape

i was still a teenager.

i had worked construction in high school, and the previous summer had been foreman on a construction crew ... recent posting making reference (among a number of things):
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs

the particular construction project mentioned in passing in the above posting got behind schedule because of weather problems, and for the last 6-8 weeks we pulled 85hr weeks (time and a half for 41-60hrs and double time for over 60). the programming job was somewhat more interesting (designing and implementing my own system, etc) and a single 48hr shift was different from an 85hr work week ... but student programming didn't pay anywhere near as well.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Thu, 02 Feb 2006 16:07:07 -0700
hancock4 writes:
Some computers had Fort Knox level security, but definitely not all; some were rather lax. I merely walked into my summer employer's computer room (I wasn't in computers then) and asked to use the phone, and I was directed to the one right at the CPU console. I could've come in off the street; I think I was only 17 at the time.

re:
https://www.garlic.com/~lynn/2006b.html#3 IBM 610 workstation computer

i started out taking the intro to fortran 2hr credit class; half-way thru the semester i was trying to write my own fortran program to calculate orbital positions. also about that time they got a 360/30 to replace the 1401 ... which then ran mostly in 1401 emulation mode ... ref
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape

they got used to seeing me around ... and were trying to figure out what this new 360 stuff was ... so eventually they turned a lot of it over to me to figure out. I got my own key to the machine room (which was kept locked at all times). the univ. normally turned everything off by 8am sat. and nothing officially was scheduled until 8am monday. I could come in a little before 8am sat., before the friday night 3rd shift left ... and then have the whole machine room to myself for the weekend. since i was doing an application that did a lot of tape and unit record stuff ... i also had to learn how to be my own operator ... not only mounting tapes ... but doing the regular 8hr-shift cleaning of tape drives, card punches/readers, printers, etc.

the 360/30 was supposedly an interim step getting ready for a 360/67 (which was going to replace the 709/1401 setup) running tss/360. tss/360 was never successfully deployed (at the univ), although some of the ibm'ers would interrupt parts of my weekends for tss/360 testing (after the 360/67 came in). however, the ibmers never worked more than a single 8hr shift (or rarely two shifts) on the weekends ... so i would still have most of the 48hr period to myself.

lots of postings about getting to play w/computers as undergraduate:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/97.html#7 Did 1401 have time?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#33 ... cics ... from posting from another list
https://www.garlic.com/~lynn/99.html#44 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#95 Early interupts on mainframes
https://www.garlic.com/~lynn/2000.html#30 Computer of the century
https://www.garlic.com/~lynn/2000c.html#42 Domainatrix - the final word
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001c.html#36 How Commercial-Off-The-Shelf Systems make society vulnerable
https://www.garlic.com/~lynn/2001d.html#23 why the machine word size is in radix 8??
https://www.garlic.com/~lynn/2001d.html#48 VTOC position
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#48 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#78 HMC . . . does anyone out there like it ?
https://www.garlic.com/~lynn/2001g.html#29 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001h.html#12 checking some myths.
https://www.garlic.com/~lynn/2001h.html#60 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#7 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#34 Processor Modes
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002b.html#13 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#15 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#24 Infiniband's impact was Re: Intel's 64-bit strategy
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002f.html#57 IBM competes with Sun w/new Chips
https://www.garlic.com/~lynn/2002i.html#42 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002n.html#32 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002p.html#62 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#5 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003k.html#8 z VM 4.3
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2003n.html#50 Call-gate-like mechanism
https://www.garlic.com/~lynn/2003p.html#9 virtual-machine theory
https://www.garlic.com/~lynn/2003p.html#23 1960s images of IBM 360 mainframes
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004b.html#47 new to mainframe asm
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004d.html#10 IBM 360 memory
https://www.garlic.com/~lynn/2004f.html#6 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#43 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004l.html#29 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004m.html#26 Shipwrecks
https://www.garlic.com/~lynn/2004m.html#36 Multi-processor timing issue
https://www.garlic.com/~lynn/2004n.html#3 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#23 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#2 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#56 1401-S, 1470 "last gasp" computers?
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?
https://www.garlic.com/~lynn/2005j.html#28 NASA Discovers Space Spies From the 60's
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005k.html#8 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005k.html#14 virtual 360/67 support in cp67
https://www.garlic.com/~lynn/2005k.html#38 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005k.html#42 wheeler scheduler and hpo
https://www.garlic.com/~lynn/2005k.html#50 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005m.html#16 CPU time and system load
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
https://www.garlic.com/~lynn/2005n.html#24 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005o.html#12 30 Years and still counting
https://www.garlic.com/~lynn/2005o.html#25 auto reIPL
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2006.html#2 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006.html#7 EREP , sense ... manual
https://www.garlic.com/~lynn/2006.html#15 S/360
https://www.garlic.com/~lynn/2006.html#17 {SPAM?} DCSS as SWAP disk for z/Linux
https://www.garlic.com/~lynn/2006.html#40 All Good Things

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers,misc.transport.road
Date: Thu, 02 Feb 2006 17:43:51 -0700
hancock4 writes:
I don't know the percentage of programming done on the early machines (ie 709x series) in assembler vs. Fortran, but I suspect assembler was the main language for the reasons above. After S/360, with its more sophisticated operating system, disk drives, and more memory, Fortran was more widely used. Also, there were more college courses in Fortran by that time.

on the university's 709, the vast majority of programming was fortran and cobol. classes were all fortran, and most of the departments did fortran ... math, business statistics, biometric statistics, soc & psyc stats, etc. admin had cobol jobs (there may have been some univ. assembler 709 programs, but i wasn't aware of any). the 709 ran ibsys and almost everything was tape-to-tape ... using the 1401 as unit record front end, i.e. tapes were manually moved between 709 drives and 1401 drives, as mentioned in this tape-related x-posting from ibm-main n.g.
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape

later on the 360/67 (mostly running 360/65, os/360), most of the student stuff had moved to watfor fortran ... while departments were mostly fortg, (fortran) scientific subroutine lib., etc. admin had converted their 709 cobol to 360 cobol.

one day i was wandering thru the machine room and everything had come to a halt, with people standing around waiting. apparently an important daily admin cobol job had been run and produced some different ending results. i had never paid any attention before, but i was told that it was a 407 plug-board job that had been converted to 709 cobol (emulating the 407 operation), which printed an emulation of the 407 sense switch settings at the end of the run. this had then been ported from 709 cobol to 360 cobol. the responsible admin person didn't know what to make of the unanticipated and different results. after about an hour, it was decided to repeat the run; if the (printed) 407 switch settings came out the same, they would assume everything was ok.

past postings mentioning ibsys and/or 407s:
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/98.html#9 ** Old Vintage Operating Systems **
https://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
https://www.garlic.com/~lynn/99.html#137 Mainframe emulation
https://www.garlic.com/~lynn/2000.html#19 Computer of the century
https://www.garlic.com/~lynn/2000.html#20 Computer of the century
https://www.garlic.com/~lynn/2000.html#55 OS/360 JCL: The DD statement and DCBs
https://www.garlic.com/~lynn/2000f.html#58 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#67 Building so big it generates own weather?
https://www.garlic.com/~lynn/2001b.html#27 HELP
https://www.garlic.com/~lynn/2001f.html#5 Emulation (was Re: Object code (was: Source code - couldn't resist compiling it :-))
https://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
https://www.garlic.com/~lynn/2001i.html#33 Waterloo Interpreters (was Re: RAX (was RE: IBM OS Timeline?))
https://www.garlic.com/~lynn/2001m.html#52 Author seeks help - net in 1981
https://www.garlic.com/~lynn/2002.html#22 index searching
https://www.garlic.com/~lynn/2002.html#49 OT Friday reminiscences
https://www.garlic.com/~lynn/2002d.html#21 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002d.html#53 Mainframers: Take back the light (spotlight, that is)
https://www.garlic.com/~lynn/2002f.html#53 WATFOR's Silver Anniversary
https://www.garlic.com/~lynn/2002q.html#39 HASP:
https://www.garlic.com/~lynn/2003j.html#23 A Dark Day
https://www.garlic.com/~lynn/2003n.html#41 When nerds were nerds
https://www.garlic.com/~lynn/2003n.html#42 When nerds were nerds
https://www.garlic.com/~lynn/2004.html#48 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004b.html#53 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004d.html#44 who were the original fortran installations?
https://www.garlic.com/~lynn/2004d.html#59 Happy Birthday Mainframe
https://www.garlic.com/~lynn/2004f.html#49 can a program be run withour main memory?
https://www.garlic.com/~lynn/2005e.html#29 Using the Cache to Change the Width of Memory
https://www.garlic.com/~lynn/2005g.html#56 Software for IBM 360/30
https://www.garlic.com/~lynn/2005n.html#3 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005q.html#7 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers,misc.transport.road
Date: Thu, 02 Feb 2006 18:03:58 -0700
Anne & Lynn Wheeler writes:
on the university's 709, the vast majority of programming was fortran and cobol. classes were all fortran, and most of the departments did fortran ... math, business statistics, biometric statistics, soc & psyc stats, etc. admin had cobol jobs (there may have been some univ. assembler 709 programs, but i wasn't aware of any). the 709 ran ibsys and almost everything was tape-to-tape ... using the 1401 as unit record front end, i.e. tapes were manually moved between 709 drives and 1401 drives, as mentioned in this tape-related x-posting from ibm-main n.g.
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape


some years later, i was at the san jose research center ... backus' office was several doors away ... although he rarely came in; he worked from home a lot.

a few old posts that started out trying to track down the original fortran distribution:
https://www.garlic.com/~lynn/2004d.html#24 who were the original fortran installations?
https://www.garlic.com/~lynn/2004d.html#27 who were the original fortran installations?
https://www.garlic.com/~lynn/2004d.html#44 who were the original fortran installations?
https://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?

since boeing was one of the original installations, i had tried to track down some of the people that i had known at boeing; minor recent ref. to bcs:
https://www.garlic.com/~lynn/2006.html#40 All Good Things

One of the people that I had worked with at that time I actually was able to track down; he had retired, still lived in Seattle, and had some amount of old/early Fortran stuff in boxes.

for some drift, codd's office was a floor above and not too far away. this was during the days of the original relational/sql work and system/r ... minor refs:
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mount a tape

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mount a tape
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 03 Feb 2006 08:46:20 -0700
Louis Krupp wrote:
Don't forget the write ring. Leave it off when you intended to write to the tape, and you had to unload and reload the tape all over again. Leave it on when you didn't mean to, and the tape might get purged by mistake.

i had some stuff from the late 60s and early 70s replicated on three different tapes (over the years copied from 800bpi to 1600bpi to 6250bpi tape) ... but all in the same datacenter library in the mid-80s. it included some stuff on periodically monitoring system activity and using the information for dynamic adaptive feedback control. i had created a dynamic adaptive feedback scheduler in the 60s as an undergraduate, with fairshare as a default policy ... that was shipped in cp67. much of the code was dropped in the morph from cp67 to vm370. however, i was given a chance to re-introduce it with the resource manager (shipped spring 1976).
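
the shipped scheduler was considerably more elaborate, but the core fairshare idea can be sketched in a few lines of c ... sample recent cpu consumption with exponential decay (the feedback part), compare it against each user's entitled share, and dispatch whoever is furthest below their share. the struct, decay factor, and shares below are all made up for the example ... this is not the shipped algorithm:

  /* minimal fairshare sketch: dispatch by consumption-to-share ratio */
  #include <stdio.h>

  struct user {
      const char *name;
      double share;      /* entitled fraction of the machine   */
      double recent;     /* decayed average of recent cpu use  */
  };

  /* feedback part: decay old usage so the scheduler responds to
     what users are doing now, not last week */
  static void account(struct user *u, double used, double decay)
  {
      u->recent = u->recent * decay + used * (1.0 - decay);
  }

  /* pick the user furthest below their entitled share */
  static struct user *dispatch(struct user *u, int n)
  {
      struct user *best = &u[0];
      for (int i = 1; i < n; i++)
          if (u[i].recent / u[i].share < best->recent / best->share)
              best = &u[i];
      return best;
  }

  int main(void)
  {
      struct user users[3] = {
          { "alice", 0.50, 0.0 },
          { "bob",   0.25, 0.0 },
          { "carol", 0.25, 0.0 },
      };
      for (int slice = 0; slice < 8; slice++) {
          struct user *next = dispatch(users, 3);
          printf("slice %d -> %s\n", slice, next->name);
          for (int i = 0; i < 3; i++)    /* only the winner used cpu */
              account(&users[i], (&users[i] == next) ? 1.0 : 0.0, 0.8);
      }
      return 0;
  }

over the eight slices, alice (50% share) gets dispatched roughly twice as often as bob or carol (25% each) ... the decayed average is what lets the policy adapt when a user goes idle.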

for a little more drift, the resource manager was selected to be the guinea pig for priced kernel software. with the unbundling announcement on 6/23/69, application software started being charged for ... but kernel software was still shipped free (under the justification that it was needed to run the hardware ... aka bundled). by the mid-70s, various factors (like 370 clones) were contributing to pressure to price kernel software. i got to spend six months or so, on and off, with business and pricing people on policies for pricing kernel software. misc. past bundling/unbundling posts
https://www.garlic.com/~lynn/submain.html#unbundle

anyway, in the mid-80s, some people from corporate came looking for early examples of periodic monitoring of system performance activity ... having to do with some litigation or patent issue. i went to retrieve the code and found that all three tapes had been written over. apparently there had been some problem in the datacenter with operators randomly selecting tapes to be mounted as scratch (i.e. the write ring was inserted for a request to mount a scratch tape for writing).

recent post reproducing part of the resource manager "blue" announcement letter (actually beige by that time, but still commonly referred to as "blue" ... i no longer have a paper copy, but i was presented an engraved plaque of the announcement letter that has survived):
https://www.garlic.com/~lynn/2006.html#19 DCSS as SWAP disk for z/Linux

Free to good home: IBM RT UNIX

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Free to good home: IBM RT UNIX
Newsgroups: comp.sys.ibm.pc.rt,comp.unix.aix,alt.folklore.computers
Date: Sat, 04 Feb 2006 08:40:07 -0700
Tux Wonder-Dog writes:
I have a _dream_! (Okay, to bring it down to earth, a wish that IBM would consider old operating systems and tools, etc, including their source code, as both fodder for future students and as an inexpensive means to bring people up to speed on IBM equipment, which are no longer the first-in-line for people to use or experience in training. And AOS features in this as an example of how IBM took from the Unix community an OS with apparently few of the IBM-specific features that might conceivably prevent IBM from releasing it plus source under the original BSD license.)

???

note that old operating systems and source used to be freely available. both cp67 and vm370 were free and shipped with source and source maintenance. some of this may be picked up with respect to the hercules activity.

the big issue was the gov. (and other) litigation that resulted in the 6/23/69 unbundling announcement, when application software started being priced/charged for. kernel software was still bundled (and free) under the policy that it was required to operate the hardware.

i had made extensive cp67 source modifications as an undergraduate, and a lot of it was picked up and shipped in the product (dynamic adaptive scheduling policies, fairshare policy, working-set-like operation, page replacement stuff, lots of pathlength optimization, etc). some amount of that was dropped in the morph to vm370. some amount of customer advocacy (in the share user group organization and other places) resulted in being able to package and (re-)release some of it as the resource manager (in 1976). however, the resource manager was tagged as the guinea pig for pricing kernel software (somewhat motivated by clone 370 processors being able to pick up the free kernel). misc. past unbundling related posts
https://www.garlic.com/~lynn/submain.html#unbundle

the advent of clone processors continued the pressure on software, and in the early 80s you started to see the "OCO" wars (object code only), where there was a push not only to charge for (unbundled) software but also to stop shipping source.

also in the early 80s, ibm formed the (academic) ACIS organization ... initially provided with $300m to give away to educational institutions for computer-related stuff. the CMU activity got $50m (mach, camelot, andrew, etc). MIT (project athena) got $25m (project athena also got a matching $25m from dec; project athena saw things like X-windows, kerberos, etc). CSNET and later the NSFNET backbone got some amount. independent of CSNET (& NSFNET), a lot was poured into BITNET (and EARN in europe). misc. bitnet & earn posts
https://www.garlic.com/~lynn/subnetwork.html#bitnet

minor specific csnet, nsfnet refs:
https://www.garlic.com/~lynn/rfcietf.htm#history
https://www.garlic.com/~lynn/internet.htm#nsfnet
https://www.garlic.com/~lynn/internet.htm#0

the ACIS group was also responsible for doing the BSD-based AOS for the PC/RT. ACIS also took LOCUS (another unix-alike, from UCLA; i don't know how much of the initial $300m acis may have provided to ucla or berkeley) and it was turned into the AIX/370 (later aix/esa) and AIX/PS2 products.

a lot of the various IBM funded activities later fed into OSF (open software foundation) for stuff like DCE (distributed computing environment), which drew on the mach, andrew, locus, etc. work. a couple random osf (open software foundation) refs:
http://www.auditmypc.com/acronym/OSF.asp
https://en.wikipedia.org/wiki/Open_Software_Foundation
http://www.opengroup.org/dce/

note that cmu's mach also shows up in other places, like being the basis for the (current) apple operating system. in the middle of our doing ha/cmp
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/subtopic.html#hacmp
the executive we reported to moved over to head up somerset ... the apple, motorola, ibm, etc ... effort to do a single-chip version of rios/801/power ... somerset turned out the power/pc chips ... used by apple and for some number of other things.

i've joked in the past about ibm paying three times for transarc work ... the initial $50m grant to cmu, the initial investment when transarc was spun off from cmu, and then when they bought transarc outright.

in the late 70s there had been a big push to consolidate a wide variety of internal microprocessors onto 801 risc. by the early 80s, the primary survivor was the office products effort with ROMP for the displaywriter replacement, using pl.8 and cpr. when the displaywriter follow-on was killed, somebody observed that a lot of hardware vendors were turning out unix workstation systems with much reduced effort (doing a unix port). it was decided to retarget romp to the unix workstation market, and the company that had taken at&t unix and turned out pc/ix (for the ibm/pc) was contracted to do a similar port for romp. this became the pc/rt (and the aix unix for the pc/rt ... as opposed to the various other AIXs like aix/370 and aix/ps2 that came from other origins). misc. romp, rios, 801, power posts
https://www.garlic.com/~lynn/subtopic.html#801

the pc/rt was also the target for the bsd-based AOS effort. it had initially started out as a port to the 370; that got side-tracked to the pc/rt. however, essentially the same group also did the ucla locus stuff for aix/370 and aix/ps2. i had been doing some stuff for getting a C-language front end onto the 370 pascal compiler; in the middle of this, the person doing the work left and joined metaware in santa cruz. when the aos-for-370 group started up, i talked them into working with metaware for a c compiler for the port. when aos got retargeted to the pc/rt, they retained the metaware compiler for the effort.

random past posts mentioning metaware:
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2002n.html#66 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002q.html#19 Beyond 8+3
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?
https://www.garlic.com/~lynn/2004f.html#42 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#61 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005b.html#14 something like a CTC on a PC
https://www.garlic.com/~lynn/2005e.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005s.html#33 Power5 and Cell, new issue of IBM Journal of R&D

misc. past posts mentioning somerset
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2001g.html#23 IA64 Rocks My World
https://www.garlic.com/~lynn/2001i.html#28 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001j.html#37 Proper ISA lifespan?
https://www.garlic.com/~lynn/2002g.html#12 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002g.html#14 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002i.html#81 McKinley Cometh
https://www.garlic.com/~lynn/2002l.html#37 Computer Architectures
https://www.garlic.com/~lynn/2003d.html#45 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003j.html#22 why doesn't processor reordering instructions affect most
https://www.garlic.com/~lynn/2004.html#28 Two subjects: 64-bit OS2/eCs, Innotek Products
https://www.garlic.com/~lynn/2004d.html#1 IBM 360 memory
https://www.garlic.com/~lynn/2004k.html#39 August 23, 1957
https://www.garlic.com/~lynn/2004q.html#36 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#40 Tru64 and the DECSYSTEM 20
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005e.html#7 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005m.html#12 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#13 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005o.html#37 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2005q.html#40 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#11 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#34 logical block addressing

random past posts mentioning acis, locus, etc
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#64 Old naked woman ASCII art
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000d.html#68 "all-out" vs less aggressive designs
https://www.garlic.com/~lynn/2000d.html#69 "all-out" vs less aggressive designs
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2001.html#44 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001.html#49 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001f.html#1 Anybody remember the wonderful PC/IX operating system?
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
https://www.garlic.com/~lynn/2002h.html#65 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002i.html#54 Unisys A11 worth keeping?
https://www.garlic.com/~lynn/2002i.html#81 McKinley Cometh
https://www.garlic.com/~lynn/2002j.html#36 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
https://www.garlic.com/~lynn/2002o.html#40 I found the Olsen Quote
https://www.garlic.com/~lynn/2002p.html#45 Linux paging
https://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
https://www.garlic.com/~lynn/2003h.html#45 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003h.html#52 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003h.html#53 Question about Unix "heritage"
https://www.garlic.com/~lynn/2003o.html#49 Any experience with "The Last One"?
https://www.garlic.com/~lynn/2004d.html#57 If you're going to bullshit, eschew moderation
https://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
https://www.garlic.com/~lynn/2004h.html#41 Interesting read about upcoming K9 processors
https://www.garlic.com/~lynn/2004h.html#42 Interesting read about upcoming K9 processors
https://www.garlic.com/~lynn/2004n.html#12 XML: The good, the bad, and the ugly
https://www.garlic.com/~lynn/2004n.html#19 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#30 First single chip 32-bit microprocessor
https://www.garlic.com/~lynn/2004p.html#53 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2004q.html#38 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#39 CAS and LL/SC
https://www.garlic.com/~lynn/2004q.html#61 will there every be another commerically signficant new ISA?
https://www.garlic.com/~lynn/2005b.html#22 The Mac is like a modern day Betamax
https://www.garlic.com/~lynn/2005f.html#28 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005h.html#5 Single System Image questions
https://www.garlic.com/~lynn/2005j.html#26 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005q.html#14 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#26 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#49 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005s.html#33 Power5 and Cell, new issue of IBM Journal of R&D
https://www.garlic.com/~lynn/2005s.html#34 Power5 and Cell, new issue of IBM Journal of R&D
https://www.garlic.com/~lynn/2005t.html#19 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005u.html#61 DMV systems?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Is there a workaround for Thunderbird in a corporate environment?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Is there a workaround for Thunderbird in a corporate environment?
Newsgroups: netscape.public.mozilla.general,netscape.public.mozilla.mail-news,netscape.public.mozilla.wishlist
Date: Sat, 04 Feb 2006 09:46:17 -0700
Andrew DeFaria writes:
Hmmm... "We have 5000 logins and we find it much easier to assign each a nickname and have to remember an additional 5000 nicknames". To use your terms "AND YOU CALL THAT PROGRESS?!?". How 'bout cha refer to them by their given name? If you have LDAP, which you do in Outlook, then use their proper names. If you want nicknames then petition the LDAP admin to add the nickname as part of the name (e.g. "Newton Kenneth Rogers (dog)").

in the late 70s, i started a small corporate nickname file ... initially with 15,000 entries that i collected from various places ... it grew to over 25,000 before i abandoned the effort (in the early 80s). by that time, the work on online corporate telephone directories was well along ... which added the ability to have email addresses listed. part of the problem was having to constantly come up with unique nicknames across 25,000-plus people (at the time, well under ten percent of all employees).

of course, the internal network was larger than the whole arpanet/internet from just about the beginning until sometime in mid-85.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

misc old postings on doing the online corporate telephone directories (started somewhat in parallel with doing the nickname file):
https://www.garlic.com/~lynn/2000g.html#14 IBM's mess (was: Re: What the hell is an MSX?)
https://www.garlic.com/~lynn/2003b.html#45 hyperblock drift, was filesystem structure (long warning)
https://www.garlic.com/~lynn/2004c.html#0 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
https://www.garlic.com/~lynn/2005c.html#38 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#43 History of performance counters
https://www.garlic.com/~lynn/2005t.html#44 FULIST

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 3090/VM Humor

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 3090/VM Humor
Newsgroups: bit.listserv.vmesa-l
Date: Sat, 04 Feb 2006 09:56:12 -0700
Paul Hanrahan writes:
VM was open source before it was cool ! - Paul Hanahan

recent posting on the subject ...
https://www.garlic.com/~lynn/2006b.html#8

various collected postings mentioning the unbundling announcement of 6/23/69 (because of gov. and other litigation) and the resulting transition to charging for software:
https://www.garlic.com/~lynn/submain.html#unbundle

unbundling initially resulted in charging for application software, but kernel software was still free (aka bundled, on the theory that it was required to operate the hardware). the vm370 resource manager (1976) was chosen as the guinea pig for priced kernel software.

... vm was not only open source ... it also shipped source maintenance. the original multi-level source update process was done as part of a joint endicott/cambridge project to emulate 370 virtual machines under cp/67 (running on a 360/67) ... recent post mentioning the effort
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory
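
a rough approximation of how a sequenced update deck gets applied ... each level's updates are applied in sequence to the output of the previous level; the sketch below does a single level in c. the real cms UPDATE command, with multi-level aux/control files, is far more involved ... the file names and the simplified "./" card layout here are hypothetical:

  /* toy sketch of applying a sequenced update deck.  source records
     carry a leading sequence number; "./ D lo [hi]" deletes records
     lo..hi and "./ I lo" inserts the text that follows, after
     record lo.  only insert/delete are handled. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  #define MAXLINE 256

  static long seqnum(const char *line) { return strtol(line, NULL, 10); }

  int main(void)
  {
      FILE *src = fopen("source.seq", "r");    /* "nnnnn text" records */
      FILE *upd = fopen("update.deck", "r");   /* control cards + text */
      if (!src || !upd) { perror("open"); return 1; }

      char s[MAXLINE], u[MAXLINE];
      int have_src = (fgets(s, sizeof s, src) != NULL);
      int have_upd = (fgets(u, sizeof u, upd) != NULL);

      while (have_upd) {
          char op = 0; long lo = 0, hi = 0;
          int n = sscanf(u, "./ %c %ld %ld", &op, &lo, &hi);
          if (n < 2) {                         /* not a control card   */
              have_upd = (fgets(u, sizeof u, upd) != NULL);
              continue;
          }
          if (n == 2) hi = lo;

          /* copy source records preceding the target sequence number */
          while (have_src && seqnum(s) < lo) {
              fputs(s, stdout);
              have_src = (fgets(s, sizeof s, src) != NULL);
          }
          if (op == 'D') {                     /* delete records lo..hi */
              while (have_src && seqnum(s) <= hi)
                  have_src = (fgets(s, sizeof s, src) != NULL);
              have_upd = (fgets(u, sizeof u, upd) != NULL);
          } else {                             /* 'I': insert after lo  */
              if (have_src && seqnum(s) == lo) {
                  fputs(s, stdout);
                  have_src = (fgets(s, sizeof s, src) != NULL);
              }
              while ((have_upd = (fgets(u, sizeof u, upd) != NULL))
                     && strncmp(u, "./ ", 3) != 0)
                  fputs(u, stdout);            /* the inserted text     */
          }
      }
      while (have_src) {                       /* copy rest of source   */
          fputs(s, stdout);
          have_src = (fgets(s, sizeof s, src) != NULL);
      }
      return 0;
  }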

misc. old posts mentioning the source update procedure:
https://www.garlic.com/~lynn/99.html#9 IBM S/360
https://www.garlic.com/~lynn/2001e.html#57 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)
https://www.garlic.com/~lynn/2002h.html#35 Computers in Science Fiction
https://www.garlic.com/~lynn/2002p.html#2 IBM OS source code
https://www.garlic.com/~lynn/2003.html#27 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#58 Card Columns
https://www.garlic.com/~lynn/2003.html#62 Card Columns
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003e.html#77 unix
https://www.garlic.com/~lynn/2003f.html#1 History of project maintenance tools -- what and when?
https://www.garlic.com/~lynn/2003f.html#7 unix
https://www.garlic.com/~lynn/2003k.html#46 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2003k.html#47 Slashdot: O'Reilly On The Importance Of The Mainframe Heritage
https://www.garlic.com/~lynn/2004b.html#59 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004g.html#29 [IBM-MAIN] HERCULES
https://www.garlic.com/~lynn/2004g.html#40 IBM 7094 Emulator - An historic moment?
https://www.garlic.com/~lynn/2004g.html#43 Sequence Numbbers in Location 73-80
https://www.garlic.com/~lynn/2004m.html#30 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#36 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004p.html#20 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#15 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005i.html#30 Status of Software Reuse?
https://www.garlic.com/~lynn/2005i.html#39 Behavior in undefined areas?
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005r.html#5 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005r.html#6 What ever happened to Tandem and NonStop OS ?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers,misc.transport.road
Date: Sat, 04 Feb 2006 14:16:04 -0700
blmblm writes:
Recommendation heartily seconded, especially for this group, where people are more likely than the average techie to understand most of the humor. TECO is mentioned!

But, um, isn't the actual title of the essay "Real Programers Don't Use Pascal"?


real programmers and variations/take-offs, previously posted:
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:
https://www.garlic.com/~lynn/2002e.html#39 Why Use *-* ?
https://www.garlic.com/~lynn/2002o.html#69 So I tried this //vm.marist.edu stuff on a slow Sat. night,
https://www.garlic.com/~lynn/2002o.html#72 So I tried this //vm.marist.edu stuff on a slow Sat. night,
https://www.garlic.com/~lynn/2003b.html#58 When/why did "programming" become "software development?"
https://www.garlic.com/~lynn/2003j.html#43 An a.f.c bibliography?
https://www.garlic.com/~lynn/2004b.html#35 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004p.html#24 Systems software versus applications software definitions

... i.e. *real programmers don't eat quiche* and *real software engineers don't read dumps*

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers,misc.transport.road
Date: Sat, 04 Feb 2006 14:40:50 -0700
oh, and for some other random stuff ... an old version of the (ibm) jargon dictionary leaked onto the internet and can be found here:
http://www.212.net/business/jargon.htm

for instance, an entry in the Bs
http://www.212.net/business/jargonb.htm
[backbone] n. The central nodes of an electronic communication network. The backbone of IBM's VNET network is managed directly by a corporate organisation, and in the mid 1980s ran a much-enhanced version of the RSCS product, known as IPORSCS. The nodes of the backbone, for example HURBB, are identified by three characters of the location name, followed by BB (for BackBone). At a time of a major software upgrade numerous problems occurred, which led to the suggestion that in fact the BB stands for bit-bucket (q.v.).

...

the internal network was larger than the arpanet/internet from just about the beginning until around summer of '85. the network was well along before corporate was even aware of its existence.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

in fact, there was an incident when a networking academic type from corporate hdqtrs was making the rounds ... and giving talks. somebody explained to him the internal network and how it worked in a fully distributed manner w/o need for centralized (and/or corporate) control.

the individual replied that it couldn't exist ... that it had been proved that such an implementation would require a specific minimum amount of massive resources to implement ... which would have had to show up as a significant line-item cost at the corporate level. he personally knew that no such cost line-item had ever shown up in the corporate financials ... and therefore it couldn't exist. it was after corporate started getting involved that you started seeing incidents like the one mentioned in the above reference.

at the time, arpanet, with approx. 250 nodes, converted to the internetworking protocol on 1/1/83 ... minor reference
https://www.garlic.com/~lynn/internet.htm#0

the internal network was nearing 1000 nodes, which it passed later that year. minor reference
https://www.garlic.com/~lynn/internet.htm#22

... the internal network ... another item brought to you by the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

externally, bitnet and earn were a university implementation using the same technology (although the count of bitnet and earn nodes isn't included in the count of internal network nodes) ...
https://www.garlic.com/~lynn/subnetwork.html#bitnet

minor past reference to the formation of earn (european academic research network) ... some email from spring of 1984
https://www.garlic.com/~lynn/2001h.html#65

for even more drift, gml was also invented at the science center, precursor to sgml, html, xml, etc ... misc. collected postings:
https://www.garlic.com/~lynn/submain.html#sgml

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Change in computers as a hobbiest

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Change in computers as a hobbiest...
Newsgroups: alt.folklore.computers
Date: Sun, 05 Feb 2006 09:23:20 -0700
"David Wade" writes:
3. The Windows/2000 resource kit (not sure about the XP one) include implementations of Perl and REXX

past postings about doing applications in rexx ... before it was released to customers ... and when it was still called rex (there was something about a conflict with existing software(?) that necessitated the renaming).
https://www.garlic.com/~lynn/submain.html#dumprx

from a little later; part of assembly listing from dmsrex source from 1983
https://www.garlic.com/~lynn/2004d.html#26 REXX still going strong after 25 years

there was some recent posting complaining about object rexx not being available on all platforms ... i think the mainframe, where rexx had originated:
http://www-306.ibm.com/software/awdtools/obj-rexx/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Expanded Storage

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Expanded Storage
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Date: Tue, 07 Feb 2006 00:21:01 -0700
Rob van der Heij writes:
Yes, in your configuration you should define expanded storage. It's for providing a hierarchy in storage management as well as a circumvention to reduce the impact of contention under the bar. Especially when the total active memory of your Linux server is getting close to 2G (and unless you do things, eventually the entire Linux virtual machine main memory will appear active to VM). 25% has been suggested as a starting point, but measurements should help you determine the right value. The right value depends a lot on what Linux is doing. And make sure to disable MDC into expanded storage as I suggested yesterday.

note if you have 16gbytes of expanded store and 16gbytes of regular storage ... then only stuff in the 16gbytes of regular store can be used/executed. stuff in expanded store has to be brought into regular store to be accessed (and something in regular store pushed out ... possibly exchanging places with stuff in expanded store).

if you have 32gbytes of regular store ... then everything in regular store can be directly used/executed ... w/o being moved around.

expanded store was introduced on the 3090 because of a physical memory packaging problem. a lot more electronic memory could be attached cost-effectively to a 3090 than could be physically packaged within the normal processor execution/access latency requirements.

rather than going to something like numa & sci ... it was packaged on a special wide bus (with longer latency) and special software was introduced to manage pages. it might be considered akin to electronic paging drums or controller caches ... but the movement was significantly more efficient, being done with a high-performance synchronous instruction rather than very expensive asynchronous i/o handling. as an aside, when 800mbit hippi i/o was attached to 3090 ... it was crafted into the expanded storage bus using peek/poke semantics, since that was the only interface on the 3090 capable of handling the data rate.

part of the issue was some cache and i/o studies performed by SJC in the late 70s and early 80s. a special vm system was built that efficiently captured all record references (much more efficiently than monitor) and this was deployed on a number of systems in the san jose area (standard product vm/cms ... but also some number of production mvs systems running virtually).

weeks of detailed i/o trace data were captured on different systems. various cache, record, and paging models were built and run with the actual trace data. for a given, fixed amount of electronic store, the most efficient use of that store was a single global system cache ... dividing the same amount of store into (partitioned) channel, controller, and/or drive caches was always less efficient than having a single large global system cache.

this also supports the issue i raised as an undergraduate in the 60s with the enhancements i had done to cp/67. the original cp/67 had very inefficient thrashing controls and a very inefficient replacement algorithm. about that time, there was some literature published about working set for controlling thrashing and "local" LRU replacement algorithms. for cp/67, i implemented a highly efficient global LRU replacement algorithm and my own variation on working set for thrashing controls.

However, in much the same way that my global LRU replacement algorithm was much more efficient than a local LRU replacement algorithm ... the i/o cache simulation studies showed that a single global cache was more efficient than any partitioned cache implementation (given the same fixed amount of electronic storage).
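a toy trace-driven version of that global-vs-partitioned comparison (python, purely illustrative ... nothing to do with the actual models, which ran against the captured record traces):

    from collections import OrderedDict

    class LRUCache:
        # fixed number of slots, least-recently-used eviction
        def __init__(self, slots):
            self.slots, self.d = slots, OrderedDict()
        def ref(self, key):
            hit = key in self.d
            if hit:
                self.d.move_to_end(key)
            else:
                if len(self.d) >= self.slots:
                    self.d.popitem(last=False)   # evict LRU entry
                self.d[key] = True
            return hit

    # trace: sequence of (drive, record) references
    def hits_global(trace, slots):
        cache = LRUCache(slots)
        return sum(cache.ref((drive, rec)) for drive, rec in trace)

    def hits_partitioned(trace, slots, ndrives):
        parts = [LRUCache(slots // ndrives) for _ in range(ndrives)]
        return sum(parts[drive].ref(rec) for drive, rec in trace)

with the same fixed total number of slots, the global cache can shift capacity toward whichever drives happen to be hot at the moment ... while the partitioned version wastes slots on idle drives.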

Somewhat in the same time frame as the electronic cache studies (better than ten years after i had done the original global LRU work), there was a big uproar over a draft stanford phd thesis that involved global LRU replacement strategies. There was significant pushback on granting the stanford phd, on the grounds that the global LRU strategies were in conflict with the local LRU stuff that had been published in the literature in the late 60s. After much conflict, the stanford phd thesis was finally approved and the person was awarded their phd.

in any case, back to the original example. if you have 16gbytes of normal storage and 16gbytes of expanded storage, then there can be a total of 32gbytes of virtual pages resident in electronic storage, but only 16gbytes of those virtual pages can be used at any one time. any access to a virtual page in expanded storage (at best) requires moving a page from expanded to normal and a page from normal to expanded (exchanging pages).

however, if you configure 32gbytes of normal storage and no expanded storage ... then you can also have 32gbytes of virtual pages resident in electronic storage ... but all 32gbytes of virtual pages are usable directly (no fiddling moving pages back & forth between expanded storage and normal storage).
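a minimal sketch of the difference (hypothetical unit costs ... the point is just the exchange overhead, not actual 3090 timings):

    DIRECT = 1      # touch a page already in regular storage
    EXCHANGE = 50   # synchronous expanded<->regular page exchange

    def cost(accesses, regular_pages, expanded_pages):
        # pages 0..regular_pages-1 start in regular storage; the next
        # expanded_pages pages start in expanded storage
        in_regular = set(range(regular_pages))
        total = 0
        for p in accesses:
            if p in in_regular:
                total += DIRECT
            else:
                victim = next(iter(in_regular))   # naive victim choice
                in_regular.remove(victim)         # victim goes out to expanded
                in_regular.add(p)                 # accessed page comes in
                total += EXCHANGE
        return total

    # cost(trace, 32, 0)  ... flat configuration, every access is direct
    # cost(trace, 16, 16) ... split configuration, any access outside the
    #                         resident 16 pays the exchange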

the possible exception is if the paging supervisor has some deficiencies in identifying virtual pages with a variety of activity levels ... and there is going to be access to more total virtual pages than real pages (resulting in some real page i/o). reducing real storage by allocating some of it to expanded storage can fix some page management problems by forcing the kernel to more frequently "trim" what it considers the number of active pages in an address space. the downside of this trimming is mitigated by pages being moved back & forth to expanded storage. with more frequent trimming, the code might do a better job of deciding which pages go to disk and which stay in electronic storage someplace. the hope is that bad decisions about what is on disk and what is in memory are reduced, and that the better decisions offset both the more frequent trimming and the overhead of the brownian motion of pages going to & fro between expanded storage and normal storage.

of course, the ideal situation is to not have expanded storage at all (eliminating the unnecessary overhead of moving pages back & forth) ... and simply do a much more sophisticated job of managing all the pages in a single global storage.

for some additional topic drift, a side effect of the i/o record trace work was that it was noticed that there were daily, weekly and monthly cycles ... where collections of data that weren't normally in constant use would have clustered, bursty use. some of this later showed up in places like adsm (now tsm), having to do with migration of clusters of data (that were used together) as part of a "container". past collected postings on having done the internal backup system that eventually morphed into the workstation datasave product and then into adsm (and has since been renamed tsm).
https://www.garlic.com/~lynn/submain.html#backup

past mention of the detailed i/o cache work:
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/99.html#105 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2003g.html#55 Advantages of multiple cores on single chip
https://www.garlic.com/~lynn/2003n.html#33 Cray to commercialize Red Storm
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005.html#2 Athlon cache question
https://www.garlic.com/~lynn/2005m.html#12 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#13 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?

past mention of the stanford phd global LRU work:
https://www.garlic.com/~lynn/98.html#2 CP-67 (was IBM 360 DOS (was Is Win95 without DOS...))
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002k.html#63 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#55 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#0 Alpha performance, why?
https://www.garlic.com/~lynn/2003k.html#8 z VM 4.3
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005n.html#23 Code density and performance?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

{SPAM?} Re: Expanded Storage

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: {SPAM?} Re: Expanded Storage
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Date: Tue, 07 Feb 2006 10:30:36 -0700
Barton Robinson writes:
The real reason for ensuring a hierarchy of storage in z/VM is to meet the objective of paging less to DASD.

The page stealing algorithm used to take pages from main storage is not as efficient as the LRU algorithm for moving pages from Expanded Storage.

Memory constrained systems found that their external paging rate dropped when they converted some real storage to expanded. The stealing algorithm steals a lot of the wrong pages, often taking needed pages and moving them to dasd. bad.

Sure moving pages back and forth between expanded and real cost CPU - but paging to disk is orders of magnitude worse.


that was one of the issues that happened in the initial morph from cp67 to vm370. i had introduced global LRU into cp67 as an undergraduate.

in the morph from cp67 to vm370, they severely perverted the global LRU implementation (besides changing the dispatching algorithm and other things). the morph to vm370 introduced a threaded list of all real pages (as opposed to the real storage index table). in theory the threaded list was supposed to approximate the real storage index table ... however, at queue transitions ... all pages for a specific address space were collected and put on a flush list. if the virtual machine re-entered queue ... any pages on the flush list were collected and put back on the "in-q" list. the result was that the order in which pages were examined for stealing tended to be fifo, with most pages for the same virtual machine clustered together.

the original global LRU implementation was based on having a relatively uniform time between examinations of a page; aka a page was examined and had its page reference bit reset, then all the other pages in real storage were examined before that page was examined again. this was uniformly true for all pages in real storage. the only change was that if the demand for real storage increased, the time it took to cycle around all real pages decreased ... but it decreased relatively uniformly for all pages.
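a toy version of that one-bit clock (python, purely illustrative ... not the cp67 code). the page list is in fixed real-storage order and is never reordered, which is what preserves the uniform examination interval:

    class Page:
        def __init__(self):
            self.referenced = False   # set by "hardware" on any access

    def select_victim(pages, hand):
        # sweep from the clock hand, resetting reference bits as we pass;
        # a page is only stolen if it hasn't been touched in the full
        # cycle since its bit was last reset
        while True:
            page = pages[hand]
            if page.referenced:
                page.referenced = False              # one more cycle to prove itself
                hand = (hand + 1) % len(pages)
            else:
                return hand, (hand + 1) % len(pages)  # victim, new hand position

heavier demand for real storage just makes the hand sweep faster ... uniformly for all pages.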

in any case, the list approach introduced in the morph from cp67 to vm370 would drastically and constantly reorder how pages were examined. there is an implicit requirement of LRU algorithms (local or global) that the examination process be uniform for all pages. the list manipulation totally invalidated that implicit requirement ... and while the code appeared to still examine and reset reference bits ... it was no longer even approximately true LRU (either global or local) ... and the number of "bad" decisions went way up.

this was "fixed" when the resource manager was shipped ... and the code put back like I had originally done in cp67 ... and restored having true LRU. now the resource manager was still a straight clock (as defined later in the stanford PHD thesis). basically the way i had implemented clock had a bunch of implicit charactierstics that had a drastically reduced the pathlength implementation ... however that made a lot of things about the implementation "implicit" ... there not being necessarily an obvious correlation between the careful way that pages were examined and how it preserved faithful implementation of LRU.

i had a somewhat similar argument with the people putting virtual memory support into mvt for vs2 (first svs and then mvs). they observed that if they "stole" non-changed pages before "stealing" changed pages (while still cycling around, examining and resetting reference bits corresponding to some supposedly LRU paradigm) ... they could skip the page-out write and be more efficient. no matter how hard i argued against doing it (that it violated fundamental principles of LRU theory) ... they still insisted. so well into mvs (3.8 or later) ... somebody finally realized that they were stealing high-use linkpack (shared executable) instruction/non-changed pages before stealing much lower-use, private data pages. another example: if you are going to violate fundamental principles of the implementation ... you no longer really have an LRU implementation.

there was a side issue. shortly after joining the science center,
https://www.garlic.com/~lynn/subtopic.html#545tech
i discovered another variation (although this was not deployed in the resource manager). basically it involved two observations

1) the usefulness of the history information degrades over time. implicit in LRU is that if a page has been referenced ... it is more likely to be used in the future than a page that hasn't been referenced. since there is only a single bit, all you can determine is that the page was referenced at some point since the bit was last reset. if it is taking a long time between examinations ... some bits may carry a lot more meaning than others ... but it isn't determinable. in this time frame, the guys on the 5th floor also published an article about having multiple reference bits ... where instead of a straight reset operation there was a one-bit shift operation (with zero being shifted into the nearest bit). their article examined the performance effects of using one, two, three, four, etc. bits (there is a sketch of this shift-register idea after these two observations).

2) LRU assumes that application reference patterns actually behave in an LRU fashion ... that if a page has been recently used ... it is more likely to be used in the near future than pages that haven't been recently used. however, there are (sometimes pathological) cases where that isn't true. one case that can crop up is when you have an LRU implementation running under another LRU implementation (the 2nd level can be a virtual machine doing its own LRU page approximation, or a database system managing a cache of records with an LRU-like implementation). so i invented this sleight-of-hand implementation ... it looked and tasted almost exactly like my standard global LRU implementation, except that in situations where LRU would nominally perform well, it approximated LRU ... but in situations where LRU was not a good solution, it magically was doing random replacement selection. it was hard to understand ... because the code still cycled around resetting bits ... and it was a true sleight of hand that it would select based on LRU or random (you had to really understand some very intricate implicit relationships between the code implementation and the way each instruction related to a true LRU implementation).
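a sketch of the multi-bit/shift-register idea from observation #1 (made-up structure ... not the published implementation):

    HISTORY_BITS = 4

    class Page:
        def __init__(self):
            self.referenced = 0   # set to 1 by "hardware" on any access
            self.history = 0      # recent sweeps, newest in the high bit

    def sweep(page):
        # age the history: shift right, the current reference bit enters
        # the high bit; a zero is shifted in when the page wasn't touched
        # since the last sweep
        page.history = (page.history >> 1) | (page.referenced << (HISTORY_BITS - 1))
        page.referenced = 0

    def steal_order(pages):
        # pages with smaller history values were referenced less
        # recently/often and get stolen first
        return sorted(pages, key=lambda p: p.history)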

the science center was doing a lot of performance work, including lots of detailed traces and modeling ... both event-based and analytical models. this included a lot of stuff that today is taken for granted ... including the early foundation for capacity planning. some past collected posts on performance work
https://www.garlic.com/~lynn/submain.html#bench

this included what was eventually made available on HONE as the performance predictor ... an APL analytical model ... SEs and salesmen could get on HONE ... feed in customer performance, configuration and workload information and ask "what-if" questions about changing the configuration and/or workload.
https://www.garlic.com/~lynn/subtopic.html#hone

in any case, there was a detailed virtual memory and page replacement model. we got exact page reference traces and fed them into the model, simulating lots of different page replacement algorithms. for one, the model had a true, exact LRU implementation, as well as various operating system global LRU approximations, local LRU implementations, etc. the modeling also showed that global LRU was better than local LRU ... and that "true" LRU tended to be 5-15 percent better than the global LRU approximation. however, the magic sleight-of-hand implementation tended to be 5-10 percent better than true LRU. it turned out that the sleight-of-hand implementation was magically changing from LRU approximation replacement to random replacement in situations where LRU didn't apply (i.e. the assumption that the least recently used pages were the least likely to be used next wasn't holding true). so in the domain where LRU tended to hold true, the code tended to approximate LRU (but not quite as well as "exact" LRU ... where all pages in real memory are exactly ordered by when they were most recently referenced). however, in execution periods when the LRU assumptions weren't applicable ... the implementation started randomly selecting pages for replacement. it was in these situations that LRU-based decisions started going bad ... and random tended to be better than LRU-based decisions.
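a tiny self-contained illustration of the kind of situation where random beats LRU ... a cyclic scan just one page bigger than memory (the classic pathological pattern; LRU always evicts exactly the page that will be needed next):

    from collections import OrderedDict
    import random

    def lru_hits(trace, slots):
        d, hits = OrderedDict(), 0
        for p in trace:
            if p in d:
                hits += 1
                d.move_to_end(p)
            else:
                if len(d) >= slots:
                    d.popitem(last=False)     # evict least recently used
                d[p] = True
        return hits

    def random_hits(trace, slots, seed=0):
        rng, resident, hits = random.Random(seed), set(), 0
        for p in trace:
            if p in resident:
                hits += 1
            else:
                if len(resident) >= slots:
                    resident.discard(rng.choice(sorted(resident)))
                resident.add(p)
        return hits

    trace = list(range(9)) * 100   # cyclic scan of 9 pages, 8 page frames
    print(lru_hits(trace, 8))      # 0 hits ... LRU is exactly wrong here
    print(random_hits(trace, 8))   # several hundred hits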

In any case, there are a lot of assumptions about execution patterns built into LRU replacement algorithms. Furthermore, there are several implementation pitfalls ... where you may think you have an LRU implementation when it is, in fact, exhibiting radically different replacement selections. An issue is to know that you are really doing LRU replacement when LRU replacement is appropriate ... and to really know you are doing something else ... when something else is more applicable (particularly when the assumptions about execution patterns and the applicability of LRU replacement don't hold).

So there are a lot of pitfalls having to do with stealing pages ... both because 1) the implementation can have significant problems correctly implementing any specific algorithm and 2) the assumptions behind a specific algorithm may not apply to the specific conditions of the moment.

Either of these deficiencies may appear to be randomly and/or inexplicably affected by changes in configuration. trading off real executable memory for expanded storage can easily be shown to cause more overhead and lower thruput (i.e. pages start exhibiting brownian motion ... moving back and forth between real storage and expanded storage). however, the configuration change may have secondary effects on a poorly implemented page steal/replacement implementation, resulting in fewer wrong pages being shipped off to disk. the inexplicable effect on the poorly implemented page steal/replacement algorithm (fewer bad choices being sent to disk) may more than offset the brownian motion of pages moving back and forth between normal storage and expanded storage.

the original purpose of expanded store was to add more electronic memory than could be attached as straight processor execution memory (used for paging in lieu of doing real i/o). in the current situation you are trading off real executable memory for a memory construct that has fundamentally more overhead. however, this trade-off has secondary effects on a steal/replacement implementation that is otherwise making bad choices (that it shouldn't be making).

various collected posts about clock, local LRU, global LRU, magically switching between LRU and random, etc (wsclock was the stanford phd on global LRU ... some ten-plus years after i had done it as an undergraduate)
https://www.garlic.com/~lynn/subtopic.html#wsclock

for even more drift ... one of the other things done with the detailed tracing was a product that eventually came out of the science center (announced and shipped two months before i announced and shipped the resource manager) called vs/repack. it basically took detailed program storage traces and did semi-automated program re-organization to improve page working set characteristics. i'm not sure how much customer use the product got, but it was used extensively internally by lots of development groups ... especially the big production stuff that was migrating from os/360 real storage to virtual storage operation (a big user that comes to mind was the ims development group). the traces also turned out to be useful for "hot-spot" identification (the particular parts of applications responsible for the majority of execution).

misc. past vs/repack posts
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#5 Code density and performance?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

{SPAM?} Re: Expanded Storage

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: {SPAM?} Re: Expanded Storage
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Date: Tue, 07 Feb 2006 10:59:18 -0700
ref:
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Expanded Storage

minor addenda to what went wrong in the initial morph from cp67 to vm370.

as i mentioned, LRU is based on the reference history being a correct predictor of future references.

a one-bit clock basically orders all pages in real memory and then cycles around them, testing and resetting the reference bits. the time to cycle through all the other pages in storage establishes a uniform interval between resetting a page's bit and testing it again.

the initial vm370 implementation went wrong by both reordering all pages at queue transition and resetting the reference bits there.

for small storage sizes ... the time it took to cycle thru all pages in memory was less than the nominal queue stay ... so we are looking at a reference bit that represents an elapsed period less than a queue stay. as real storage sizes got larger ... the time to cycle through all pages became longer than the avg. queue stay. that required that the period represented by the reference bit be longer than the queue stay. however, at queue transition ... the pages were both being reordered and having their reference bits reset. as a result, the implementation only had memory about the most recent queue stay ... even tho pages had real storage lifetimes that were becoming much longer than the most recent queue stay. as a result of both the queue transition reset and the constant reordering ... the testing and resetting implementation bore little actual resemblance to any algorithm with a theoretical foundation (even tho the testing and resetting code looked the same).

on the other hand, the same could be said of my sleight-of-hand change to the testing and resetting code. however, i could demonstrate that my change actually corresponded to provable theoretical principles and had describable and predictable behavior under all workloads and configurations.

again, collected postings related to wsclock, global LRU, local LRU, etc.
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

{SPAM?} Re: Expanded Storage

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: {SPAM?} Re: Expanded Storage
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Date: Tue, 07 Feb 2006 11:27:07 -0700
Anne & Lynn Wheeler writes:
on the other hand, the same could be said of my slight-of-hand change to the testing and resetting code. however, I could demonstrate that my change actually corresponded to well provable theoritical principles and had well describable and predictable behavior under all workloads and configurations.

re:
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Expanded Storage
https://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Expanded Storage

ok, and for even more drift.

one of the things done for the resource manager was the development of an automated benchmarking process.
https://www.garlic.com/~lynn/submain.html#bench

over the years, there had been lots of work done on workload and configuration profiling (leading into the evolution of things like capacity planning). one of these that saw a lot of exposure was the performance predictor, the analytical model available to SEs and salesmen on HONE
https://www.garlic.com/~lynn/subtopic.html#hone

based on lots of customer and internal datacenter activity, in some cases spanning nearly a decade ... an initial set of 1000 benchmarks was defined for calibrating the resource manager ... selecting a wide variety of workload profiles and configuration profiles. these were specified and run by the automated benchmarking process.

in parallel, a highly modified version of the performance predictor was developed. the modified performance predictor would take all the workload, configuration and benchmark results done to date. the model would then select a new workload/configuration combination, predict the benchmark results, and dynamically specify the workload/configuration profile to the automated benchmark process. after the benchmark was run, the results would be fed back into the model and checked against the predictions. then it would select another workload/configuration and repeat the process. this was done for an additional 1000 benchmarks ... each time validating that the actual operation (cpu usage, paging rate, distribution of cpu across different tasks, etc) corresponded to the predicted.

the full 2000 automated benchmarks took three months elapsed time to run. however, at the end, we were relatively confident that the resource manager (cpu, dispatching, scheduling, paging, i/o, etc) operated consistently and predictably, with respect to both theory and the developed analytical models, across an extremely wide range of workloads and configurations.
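the shape of that loop, as a sketch (the pick_next/predict/run_benchmark interfaces here are hypothetical stand-ins for the modified APL performance predictor and the automated benchmarker):

    # hedged sketch of the predict/run/validate calibration loop
    def calibrate(model, run_benchmark, n=1000, tolerance=0.10):
        history = []
        for _ in range(n):
            profile = model.pick_next(history)   # new workload/config combo
            predicted = model.predict(profile)   # e.g. {'cpu': ..., 'paging': ...}
            measured = run_benchmark(profile)    # automated benchmark run
            # flag any measurement diverging from prediction
            diverged = {k for k in predicted
                        if abs(measured[k] - predicted[k]) > tolerance * abs(predicted[k])}
            history.append((profile, predicted, measured, diverged))
        return history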

as a side issue, some of the things we started with ... before the final 2000 benchmarks were run ... were some extremely pathological and extreme benchmarks (i.e. numbers of users, total virtual pages, etc. that were ten to twenty times more than anybody had ever run before). this put extreme stress on the operating system and initially resulted in lots of system failures. as a result, before starting the final resource manager phase ... i redesigned and rewrote the internal serialization mechanism ... and then went thru the whole kernel fixing up all sorts of things to use the new synchronization and serialization process. when i was done, all cases of zombie/hung users had been eliminated, as well as all cases of system failures because of synchronization/serialization bugs. this code was then incorporated into (and shipped as part of) the resource manager.

unfortunately, over the years, various rewrites and fixes corrupted the purity of this serialization/synchronization rework ... and you started to again see hung/zombie users as well as some serialization/synchronization failures.

misc. collected past posts on debugging, zombie/hung users, etc
https://www.garlic.com/~lynn/submain.html#dumprx

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

{SPAM?} Re: Expanded Storage

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: {SPAM?} Re: Expanded Storage
Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers
Date: Tue, 07 Feb 2006 13:03:26 -0700
Anne & Lynn Wheeler writes:
based on lots of customer and internal datacenter activity, in some cases spanning nearly a decade ... an initial set of 1000 benchmarks were defined for calibrating the resource manager ... selecting a wide variety of workload profiles and configuration profiles. these were specified and run by the automated benchmarking process.

re:
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Expanded Storage
https://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Expanded Storage
https://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Expanded Storage

and a minor addendum on the actual (implementation) benchmarking results corresponding to theory/model/prediction ...

the modified predictor not only specified the workload profile (things like batch, interactive, mixed-mode, etc) and configuration ... but also scheduling priority. so not only did the actual (implementation) overall system benchmarking results have to correspond to theory/model/prediction ... but each individual virtual machine's measured benchmark resource use (cpu, paging, i/o, etc) also had to correspond to the theory/model/prediction for that virtual machine ... including any variations introduced by changing the individual virtual machine's scheduling priority.

a side issue was that when i released the resource manager ... they wanted me to do an updated release on the same schedule as the monthly PLC releases for the base product. my problem was that i was responsible for doing all the documentation, classes, support, changes, benchmarking, maintenance, and (initially) answering all trouble calls ... basically as a sideline hobby ... independent of the other stuff i was supposed to be doing at the science center (aka i was not part of the development organization ... at the time occupying the old SBC building in burlington mall). i argued for and won ... only having to put out a new release every three months instead of along with every monthly PLC.

part of this was that it was just a sideline hobby ... the other part was that i insisted on repeating at least 100-200 benchmarks before each new minor (3 month) release to validate that nothing had affected the overall infrastructure (and major changes to the underlying system might require repeating several hundred or even thousands of benchmarks).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 3090/VM Humor

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 3090/VM Humor
Newsgroups: bit.listserv.vmesa-l
Date: Tue, 07 Feb 2006 19:32:54 -0700
Tony Harminc writes:
You're not thinking of PL/360 by any chance? I believe PL/S and the others are descendants of BSL (Basic Systems Language, which much of e.g. TSO for MVT Release 20 was written in), and are True Blue inventions.

PL/360 has essentially Algol syntax, while PL/S has that of PL/I.


an old pl/? thread
https://www.garlic.com/~lynn/2004g.html#46 PL/? History
https://www.garlic.com/~lynn/2004g.html#47 PL/? History

there was also pl.8, developed in the 70s for 801/risc (one story is that it was "80 percent of pl/1"). cp.r was written in pl.8, as was a lot of code for various 801 processors of the period.

there was a large project in endicott to base the follow-on to the 4341 on an 801 chip. this was part of an effort in the period to converge the vast variety of internal microprocessors onto 801. the low-end and mid-range 370s were various microprocessors that had 370 implemented in native machine microcode. there was a position paper written opposing the 801 strategy for the 4341 follow-on ... based on the fact that silicon chips were getting to the point where 370 could be implemented directly in silicon ... rather than as a microcode layer on top of some chip silicon. i contributed to that paper. misc. 360/370 mcode posts, including a description of the original ecps
https://www.garlic.com/~lynn/submain.html#mcode

a few past posts mentioning pl/s:
https://www.garlic.com/~lynn/2002h.html#35 Computers in Science Fiction
https://www.garlic.com/~lynn/2002j.html#20 MVS on Power (was Re: McKinley Cometh...)
https://www.garlic.com/~lynn/2004m.html#6 a history question
https://www.garlic.com/~lynn/2005e.html#1 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Seeking Info on XDS Sigma 7 APL

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Seeking Info on XDS Sigma 7 APL
Newsgroups: alt.folklore.computers,comp.lang.apl
Date: Wed, 08 Feb 2006 12:44:47 -0700
Al Balmer writes:
I'm not sure what you mean by "wet copier." IBM made a xerographic dry copier at the time, I'm pretty sure. Kodak got into the game, too, at one point producing a copier that was technically superior to the Xerox. I remember a conversation with a Xerox rep where he "proved" that a xerographic copier couldn't produce large black areas (charge cancellation problem) just before I showed him a Kodak in-house newsletter clearly demonstrating otherwise. He insisted it must have been printed, not copied :-)

there was a joke about the tv ads for the IBM Copier III ... the copier III had problems with paper jamming ... so they put out tv ads touting how easy it was to clear paper jams in the copier III. it backfired ... people didn't like being reminded of the paper jams (i.e. featuring a bug/problem can backfire).

nothing to do with sigma ... but lots of apl related posts
https://www.garlic.com/~lynn/subtopic.html#hone

not to be totally off-topic ... a few past posts mentioning sigma
https://www.garlic.com/~lynn/2002h.html#53 Bettman Archive in Trouble
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#0 Newsgroup cliques?
https://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004c.html#10 XDS Sigma vs IBM 370 was Re: I/O Selectric on eBay: How to use?
https://www.garlic.com/~lynn/2004m.html#15 computer industry scenairo before the invention of the PC?
https://www.garlic.com/~lynn/2005r.html#44 What ever happened to Tandem and NonStop OS ?

the copier III was the basis for the 6670 ... a computer-connected printer. misc. past posts mentioning the 6670
https://www.garlic.com/~lynn/99.html#42 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#43 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/99.html#52 Enter fonts (was Re: Unix case-sensitivity: how did it originate?
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000d.html#81 Coloured IBM DASD
https://www.garlic.com/~lynn/2000e.html#1 What good and old text formatter are there ?
https://www.garlic.com/~lynn/2001b.html#50 IBM 705 computer manual
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001n.html#31 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002h.html#7 disk write caching (was: ibm icecube -- return of
https://www.garlic.com/~lynn/2002m.html#52 Microsoft's innovations [was:the rtf format]
https://www.garlic.com/~lynn/2002o.html#24 IBM Selectric as printer
https://www.garlic.com/~lynn/2002o.html#29 6670
https://www.garlic.com/~lynn/2003c.html#43 Early attempts at console humor?
https://www.garlic.com/~lynn/2004c.html#1 Oldest running code
https://www.garlic.com/~lynn/2004d.html#13 JSX 328x printing (portrait)
https://www.garlic.com/~lynn/2004k.html#48 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004l.html#61 Shipwrecks
https://www.garlic.com/~lynn/2005f.html#34 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#48 1403 printers
https://www.garlic.com/~lynn/2005f.html#51 1403 printers
https://www.garlic.com/~lynn/2005f.html#54 1403 printers
https://www.garlic.com/~lynn/2005r.html#29 Job seperators

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 3090/VM Humor

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 3090/VM Humor
Newsgroups: bit.listserv.vmesa-l
Date: Wed, 08 Feb 2006 13:23:27 -0700
Jack Woehr writes:
7. But the best feature of VM on the 3x0 architecture is that the same jokes stay fresh forever, since that platform changes at roughly the same rate as the continents drift !

a somewhat more subtle joke from the resource manager ... the initial (re)release was 1976.

it introduced a new module ... dmkstp ... which was a take-off on an old tv commercial ... something about "the racer's edge".

the resource manager had some policy-setting parameters and all this dynamic adaptive feedback stuff. however, leading up to the release of the product ... somebody from corporate insisted that all the "modern" performance management implementations had enormous numbers of performance tuning knobs. the major operating system release of the period had a system resource manager ... with a humongous matrix of performance tuning options. there used to be frequent share presentations about enormous numbers of benchmarks where the numerous performance options were somewhat randomly changed ... attempting to discover static combinations of tuning knob settings that showed (on the avg.) better thruput for specific kinds of workloads.

somehow it was felt that all the dynamic adaptive feedback features weren't sufficiently modern ... and that static performance tuning knobs that could be tweaked this way and that were required.

so before release, some number of static tuning knobs were introduced and fully documented. the joke had to do with the nature of dynamic adaptive feedback algorithms and something sometimes referred to as "degrees of freedom" (what had the greater degrees of freedom, the static tuning knobs or the dynamic adaptive feedback controls ... aka could the dynamic feedback controls compensate for all possible tuning knob changes).

misc. collected scheduling & resource manager posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

there is a story that a couple of years before the resource manager was released, an early version leaked out to AT&T longlines. longlines migrated this kernel to some number of machines ... including bringing it up on newer generations of machines as they were installed. coming up to about the 3090 timeframe, i was contacted by the national account rep for at&t ... who was facing a problem. this early, leaked kernel predated smp support, and it wouldn't be possible to sell SMP processors to longlines unless they could be migrated off this kernel. however, the dynamic adaptive stuff in this leaked kernel had managed to survive nearly ten years and operate on a range of processors that spanned two orders of magnitude in computing power (i.e. an increase of one hundred times between the earliest, entry machine and the latest, highest-end machine). misc. past posts mentioning at&t longlines
https://www.garlic.com/~lynn/95.html#14 characters
https://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001f.html#3 Oldest program you've written, and still in use?
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq
https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002i.html#32 IBM was: CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2002p.html#23 Cost of computing in 1958?
https://www.garlic.com/~lynn/2003.html#17 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003d.html#46 unix
https://www.garlic.com/~lynn/2003k.html#4 1950s AT&T/IBM lack of collaboration?
https://www.garlic.com/~lynn/2004e.html#32 The attack of the killer mainframes
https://www.garlic.com/~lynn/2004m.html#58 Shipwrecks
https://www.garlic.com/~lynn/2005p.html#31 z/VM performance

a 3090-specific story has to do with erep data. i had done the drivers for hyperchannel as part of migrating something like 300 people from the ims group at the santa teresa lab to an off-site building. lots of collected hyperchannel & hsdt (high speed data transport) project posts:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

they had considered remote 3270s, but discarded that as intolerable. hyperchannel supported mainframe channel extension over telco links ... so that local 3270s could be used at the remote location. the 3274 local channel controllers operated at something like 640kbytes/sec and the telco channel extenders ran over T1 links (aka around 150kbytes/sec). instead of response slightly declining, it improved, because of some secondary issues with local channel-attached 3274s and overall system thruput.

in any case, i adopted the convention of simulating a channel check error in situations where there was an unrecoverable telco error and i needed to bump error retry/recovery up a level.

after the 3090 had been in customer shops for a year, somebody from POK contacted me about a problem they were seeing in the industry-reported error statistics for the 3090. the 3090 channels had been designed to have something like 3-5 total channel check errors per year across all customers (not 3-5 errors per 3090 per year ... but an aggregate of 3-5 errors per year across all 3090s). reports had shown a total of something like 15-20 rather than 3-5 (for the first year). they had tracked it down to some customers with hyperchannel installed (aka the extras were these simulated errors). i looked into it and determined that reflecting IFCC (interface control check) instead of CC would kick off essentially identical error retry operations.

misc. past mention of CC/IFCC 3090 issue:
https://www.garlic.com/~lynn/94.html#24 CP spooling & programming technology
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/2004j.html#19 Wars against bad things
https://www.garlic.com/~lynn/2004q.html#51 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#28 Adversarial Testing, was Re: Thou shalt have no
https://www.garlic.com/~lynn/2005e.html#13 Device and channel
https://www.garlic.com/~lynn/2005u.html#22 Channel Distances

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Would multi-core replace SMPs?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Would multi-core replace SMPs?
Newsgroups: comp.arch
Date: Wed, 08 Feb 2006 19:01:20 -0700
"John Mashey" writes:
There is nothing magically different about having multiple CPUs/die, just the usual effect of getting more transistors in smaller spaces at lower cost. The same old architecture problems and bottlenecks still happen, but cheaper.

as mentioned, not just limited to CPUs ... but functional units in general. i have vague memories of some 3090 processor engineer complaining about offering the vector processing option on the 3090; claiming that they had already optimized scalar fp to the point that it would saturate the memory bus ... and the vector processing option wouldn't be able to get any additional thruput out of the memory bus (however, the scientific community apparently considered that it couldn't be a serious numerically intensive processor if it didn't have a vector processing unit).

misc. posts in same/related thread:
https://www.garlic.com/~lynn/2006.html#14 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006.html#32 UMA vs SMP? Clarification of terminology
https://www.garlic.com/~lynn/2006.html#34 UMA vs SMP? Clarification of terminology

there are other gimmicks possible at specific points in time. normal (mainframe) 370 slowed the cache cycle down for two-processor SMP (allowing for cross-cache chatter), so that a 2-way smp raw hardware mip rate was only 1.8 times a single processor's mip rate (and only 1.5 times effective thruput when smp kernel processing overhead was thrown in). when i was first writing some kernel smp support ... i did some sleight-of-hand with regard to processor affinity and asynchronous interrupt processing. i had one example where a uniprocessor ran at about 1mip thruput ... and the two-processor smp had one processor running about .6mips and the other processor running about 1.5mips (i.e. 2.1mips aggregate). the finagling with processor affinity improved the cache hit ratio (on one of the processors) enough to more than compensate for the slower smp cache cycle time. of course, the sleight-of-hand smp support ... not only resulted in improved cache hit ratios from the effective processor affinity processing ... but also close to negligible additional smp-related kernel overhead (compared to a uniprocessor kernel). misc. collected smp related postings
https://www.garlic.com/~lynn/subtopic.html#smp
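spelling out the mip arithmetic from the affinity example (numbers from the paragraph above):

    uni = 1.0                 # uniprocessor thruput, ~1 mip
    raw_2way = 1.8 * uni      # slower cache cycle: 2 processors only 1.8x raw
    eff_2way = 1.5 * uni      # after typical smp kernel overhead

    affinity_2way = 0.6 + 1.5 # measured per-processor rates with affinity
    print(affinity_2way)      # 2.1 mips aggregate ... better than even the
                              # 1.8 raw rate, from the improved cache hit ratios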

i've frequently asserted that some amount of the uniprocessor-only, simple architecture of the original risc/801 design philosophy was a reaction to

1) excessive hardware complexity in the future system project (which was canceled before it was ever announced); misc. past FS postings
https://www.garlic.com/~lynn/submain.html#futuresys

2) the significant smp cache-consistency overhead of supporting the mainframe's (extremely) strong memory consistency model.

lots of collected 801, risc, romp, rios, power, etc posts
https://www.garlic.com/~lynn/subtopic.html#801

for some topic drift ... recent 3090 related posts.
https://www.garlic.com/~lynn/2006b.html#10 IBM 3090/VM Humor
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#19 IBM 3090/VM Humor
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Seeking Info on XDS Sigma 7 APL

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Seeking Info on XDS Sigma 7 APL
Newsgroups: alt.folklore.computers,comp.lang.apl,bit.listserv.vmesa-l
Date: Thu, 09 Feb 2006 09:52:40 -0700
phil chastney writes:
I'm happy to say I never worked on Sigma 7's APL -- the people I worked alongside would frequently get messages along the lines of "System Crash, backups lost, all files restored as at last Monday" -- the sites I worked at found it more cost-effective to use external time-sharing: expensive but reliable

I don't have any assembler, but somewhere I have a copy of an informal write-up of the internals of the Sigma's interpreter -- its APL was pretty much in line with APL/360 -- native files only, no quad-functions for files or formatting, IIRC -- and no shared variables, I believe

the documentation was interesting -- more informal but, at a certain level, more informative that IBM's Logic Manual


iverson and falkoff were at the philly science center, and philly was supporting apl\360. cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

took apl\360 and ported it to cp67/cms and virtual memory for cms\apl. apl\360 installations typically provided 16kbyte to 32kbyte workspaces. part of cms\apl was moving the apl\360 interpreter to a (large) virtual memory environment. this initially ran into a big problem with apl\360 storage allocation ... every assignment would allocate a new storage location ... and when memory was exhausted, it would perform garbage collection and compact storage. this wasn't bad in the 16k-32k real-storage workspace paradigm, where the whole workspace was always swapped in total. however, for a possibly couple-megabyte workspace in paged virtual memory ... this was guaranteed to quickly touch every virtual page (if it ran long enuf with enuf assignment operations) ... regardless of the aggregate size of the variables. this would quickly take on a page-thrashing appearance (touching every virtual page) in the configurations of the period.

one of the things used was the early precursor to vs/repack (before it was released as a product ... also done by cambridge), which monitored all data fetches and stores and all instruction fetches (and which was also used for hotspot execution analysis). i've commented before that we had these floor-to-ceiling "plot" printouts that ran down the office corridor; time was along the horizontal (running down the corridor), and storage address was vertical (giving storage location fetch/store over time). apl had this very sawtooth appearance ... a sloped black band that went from the bottom of storage to the top very quickly, and then a solid vertical band where garbage collection was performed. the whole thing had to be rewritten for the virtual memory environment.
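a toy model of that allocate-on-assignment/garbage-collect behavior (made-up structure ... not the actual interpreter), showing why every virtual page gets touched:

    PAGE = 4096

    class Workspace:
        def __init__(self, size):
            self.size, self.top = size, 0
            self.live = {}           # name -> (addr, len)
            self.touched = set()     # pages referenced (the "plot")

        def assign(self, name, length):
            # every assignment takes fresh storage at the top; storage is
            # never reused until the workspace is exhausted
            if self.top + length > self.size:
                self._garbage_collect()   # assumes live data then fits
            addr = self.top
            self.top += length
            self.live[name] = (addr, length)
            self._touch(addr, length)

        def _garbage_collect(self):
            # compact all live values back to the bottom of the workspace
            new_top = 0
            for name, (addr, length) in self.live.items():
                self._touch(addr, length)      # read the old copy
                self._touch(new_top, length)   # write the compacted copy
                self.live[name] = (new_top, length)
                new_top += length
            self.top = new_top

        def _touch(self, addr, length):
            for page in range(addr // PAGE, (addr + length) // PAGE + 1):
                self.touched.add(page)

even a few small variables reassigned in a loop march the allocation point through the whole workspace; the compaction sweep then touches everything again ... the sawtooth in the plots.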

the other thing done for cms\apl was to allow it to directly invoke system calls. this caused quite a bit of heartburn in philly, since it violated apl purity. however, it (along with large virtual address spaces) allowed some real applications to be done. eventually we had the business people in armonk loading all the (extremely sensitive) customer sales & install information and using the cambridge apl facilities to perform business analysis and planning (aka apl was being used for a lot of stuff that spreadsheets are commonly used for today).

this also created something of a significant security issue, since the data was the most sensitive/valuable the company had ... and the cambridge system also had a lot of students from the various universities in the area (mit, harvard, bu, etc).

this also opened the way for the HONE APL applications that eventually were the basis for worldwide sales and marketing support (the US hone vm370 datacenter, consolidated in northern cal. in the late 70s, had nearly 40k user definitions, and there were clones of the system all over the world) ... misc. past HONE and/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

the system call abomination (violation of apl purity) was eventually resolved with the introduction of the shared variable paradigm ... and apl\sv.

before that, the palo alto science center had taken cms\apl and done a number of enhancements in the vm370 time-frame, producing apl\cms. they also did the 370/145 apl microcode performance enhancement (lots of apl\cms applications on a 370/145 with the microcode assist ran at the thruput of a 370/168 ... aka nearly a factor of 10 improvement).

and a repeat from a previous post:

here is falkoff's "The IBM family of APL systems"
http://www.research.ibm.com/journal/sj/304/ibmsj3004C.pdf

for some drift ... the vs/repack product, in addition to doing storage fetch/store capture and plots ... also provided semi-automated application re-organization ... optimizing for a virtual memory, paged environment. misc. past vs/repack posts:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2005.html#4 Athlon cache question
https://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005d.html#48 Secure design
https://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
https://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
https://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Seeking Info on XDS Sigma 7 APL

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Seeking Info on XDS Sigma 7 APL
Newsgroups: alt.folklore.computers
Date: Fri, 10 Feb 2006 07:42:09 -0700
Michael Widerkrantz writes:
UTM? Wasn't Amdahl's Unix offering called UTS? What was UTM?

Google hints at "Universal Transaction Monitor". Was that a transaction solution on top of UTS?

Does anyone know what happened with UTS? Is it still available? Is anyone running it? Presumably, you could run it on IBM hardware and as a VM guest as well.


and before it was released as UTS ... it was called? ....

one of the issues of running UTS (and aix/370) under VM ... was that VM would then provide erep, retry, lots of RAS, etc. Adding normal mainframe RAS to a unix system was a significantly larger undertaking than the straightforward port of unix to 370. the field service division even had official positions about servicing a machine that didn't have reasonable EREP and RAS.

and an old post where somebody mentions (some?) UTS for Xerox Sigma systems
https://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway?

misc past posts mentioning Amdahl's uts:
https://www.garlic.com/~lynn/99.html#2 IBM S/360
https://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again
https://www.garlic.com/~lynn/99.html#191 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001l.html#18 mainframe question
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002j.html#75 30th b'day
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
https://www.garlic.com/~lynn/2005m.html#4 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005m.html#7 [newbie] Ancient version of Unix under vm/370
https://www.garlic.com/~lynn/2005p.html#44 hasp, jes, rasp, aspen, gold
https://www.garlic.com/~lynn/2005q.html#26 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005s.html#34 Power5 and Cell, new issue of IBM Journal of R&D

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multiple address spaces

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: alt.folklore.computers
Date: Fri, 10 Feb 2006 10:07:59 -0700
Johnny Luo wrote:
MVS (Multiple Virtual Storage) is the basic concept for z/OS. However, after entering the mainframe world for eight months I still cannot understand it thoroughly, especially 'multiple address spaces'.

in the initial translation from the "real-storage" MVT operating system to VS2-SVS ... single virtual storage ... a single 16mbyte virtual address space was created, some paging code was hacked onto the side of MVT ... and the ccw translation routine from CP67 (CCWTRANS) was glued into MVT. In effect, for most of MVT, it was as if it was running on a 16mbyte real machine (and there was little awareness that it was running in a virtual memory environment). The MVT kernel continued to occupy the same address space as all applications.

The real machine might have 4mbytes of real storage, but there was a total of 16mbytes of virtual storage defined. The virtual memory paging infrastructure would define to the hardware which virtual pages were currently resident in real storage (and at what location). The rest of the virtual pages (not currently resident in real storage) would be located out on (disk) secondary storage. If there was access to a virtual page that wasn't currently in real storage, there would be an interrupt into the (kernel) paging code, which would fetch the required page into real storage (from disk). This mechanism of virtual memory being larger than real storage, with pages moving between real storage and disk, is similar in all operating systems with virtual memory support.
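
a schematic of that fault-in path in C (invented names and stand-in policies, not any particular system's code):

  /* schematic demand-paging fault handler -- invented names */
  #include <stdio.h>

  typedef struct {
      unsigned frame;      /* real page frame number (if resident)   */
      unsigned valid;      /* page currently in real storage?        */
      unsigned ondisk;     /* copy exists on the paging device       */
      unsigned slot;       /* paging-device slot number              */
  } pte_t;

  static pte_t page_table[4096];      /* one 16mbyte address space     */
  static unsigned next_victim;

  static unsigned steal_frame(void)   /* stand-in replacement policy   */
  {
      return next_victim++ % 1024;    /* pretend 4mbytes real storage  */
  }

  static void page_read(unsigned slot, unsigned frame)
  {
      /* stand-in for the disk read that brings the page into storage */
      printf("read slot %u into frame %u\n", slot, frame);
  }

  /* fault-in path: find/steal a frame, read the page, mark it valid;
     then the faulting task gets redispatched */
  void page_fault(unsigned vpage)
  {
      pte_t *pte = &page_table[vpage];
      unsigned frame = steal_frame(); /* may first page out a victim   */
      if (pte->ondisk)
          page_read(pte->slot, frame);
      pte->frame = frame;
      pte->valid = 1;
  }

  int main(void)
  {
      page_table[42].ondisk = 1; page_table[42].slot = 7;
      page_fault(42);
      return 0;
  }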

For the transition from VS2-SVS to VS2-MVS ... the MVT kernel and address space were re-organized. A single virtual address space was created for every application ... with an image of the kernel code occupying 8mbytes of every defined address space. Compared to some systems that grew up in a virtual memory environment and used message passing between address spaces ... the real-storage heritage of MVT (with everything in the same, real, address space) made heavy use of a pointer-passing paradigm. As a result, there are all sorts of implicit infrastructures that require application, kernel, and services to all occupy the same address space when executing.

An issue in the transition from SVS to MVS was a number of sub-system services ... that weren't directly part of the kernel (and therefore not present in the 8mbyte kernel image that shows up in every address space) ... but did provide essential services for applications and were dependent on the pointer-passing paradigm. In the transition from SVS to MVS, where everything in the system no longer occupied the same, single address space ... these subsystem services got their own address space ... different from each application address space. This created a complication when the application would pass a pointer to some set of parameters that a subsystem service in a different virtual address space needed to access.

To address the pointer-passing paradigm between application address space and subsystem services address space ... the "common segment" was defined. In much the same way that the same 8mbyte kernel image occupied every virtual address space, the "common segment" also occupied every address space. Applications could stick parameters in the common segment and make a call to some subsystem service (which popped into the kernel; the kernel figured out which address space was being called and transferred control to that address space ... using the pointers to parameters located in the common segment, which was the same in all address spaces).
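
a small C sketch (invented names) of why pointer-passing forces a common area ... the parameter storage has to appear at the same virtual address in both address spaces for the pointer to mean anything; here a single shared structure stands in for the common segment:

  /* sketch of pointer passing via a "common segment" -- invented names;
     the common area appears at the same virtual address in every address
     space, so a pointer into it is meaningful to both the application and
     the subsystem ... a pointer into the application's private area would
     not be */
  #include <stdio.h>
  #include <string.h>

  typedef struct {
      int  function;            /* requested subsystem service          */
      char data[64];            /* caller's parameters                  */
  } parmlist;

  static parmlist common_segment;   /* "mapped" into every address space */

  /* application side: stage parameters in the common segment, then
     call the subsystem with a pointer that is valid there too */
  static parmlist *build_parms(int func, const char *text)
  {
      parmlist *p = &common_segment;
      p->function = func;
      strncpy(p->data, text, sizeof p->data - 1);
      return p;
  }

  /* subsystem side, notionally in its own address space */
  static void subsystem(parmlist *p)
  {
      printf("service %d: %s\n", p->function, p->data);
  }

  int main(void)
  {
      subsystem(build_parms(1, "open dataset PAYROLL"));
      return 0;
  }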

This was back in the days when only 24bit/16mbyte addressing was available. For large installations, with lots of subsystems and applications ... it wasn't unusual to find common segments being defined as 4mbytes-5mbytes. This was causing problems for some applications ... given you started with a 16mbyte virtual address space for an application; 8mbytes of that was taken for the kernel image (in every address space) and potentially 5mbytes was taken for the common segment image (in every address space). As a result, some installations were left with only a maximum of 3mbytes (out of the 16mbytes) for application use (instructions and data).

Introduced with the 3033 was something called dual-address space. These were special provisions that could be set up so that instructions in one address space could access data in a different address space. This somewhat alleviated the pressure on the "common segment" size (potentially growing to infinity for large installations with lots of applications and services). An application could call a subsystem service (in a different address space), passing a pointer to some parameters. Rather than the parameters having to be squirreled away in the common segment ... the parameters could continue to reside in the private application address space area ... and the subsystem service (in its own address space) could use the dual-address space support to "reach" into the application address space to retrieve (or set) parameters.

3081 and 370-xa introduced 31-bit (virtual) addressing and also generalized the dual-address space support with "access registers" and "program call". There was a special set of kernel hardware tables through which an application could make a "program call" to a subsystem in a different address space. Rather than the whole process having to run thru kernel code to switch address spaces ... the whole process was implemented in the hardware "program call" support (in theory you could have all sorts of library code that, instead of residing in the application address space ... could now reside in separate address spaces).
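
a very loose conceptual rendering in C (invented names; see the principles-of-operation links below for the real access-register/program-call semantics) ... the point being that the service reaches directly into the caller's address space, so nothing has to be staged in the common segment:

  /* very loose conceptual rendering of dual-address space access --
     invented names, simulated storage; not the actual hardware semantics */
  #include <stdio.h>

  #define NSPACES 4
  #define SPACE_SIZE 4096

  typedef int asid_t;                          /* address-space id      */
  static char spaces[NSPACES][SPACE_SIZE];     /* simulated storage     */

  /* stand-in for hardware "secondary space" access */
  static char fetch_secondary(asid_t as, unsigned offset)
  {
      return spaces[as][offset];
  }

  /* subsystem in its own address space retrieves a caller parameter
     directly from the caller's address space */
  static char get_caller_byte(asid_t caller, unsigned offset)
  {
      return fetch_secondary(caller, offset);
  }

  int main(void)
  {
      spaces[2][100] = 'X';                    /* caller's private parm */
      printf("%c\n", get_caller_byte(2, 100));
      return 0;
  }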

access-register introduction ... from esa/390 principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/5.7?SHELF=EZ2HW125&DT=19970613131822

program call instruction description ... from esa/390 principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.26?SHELF=EZ2HW125&DT=19970613131822

with regard to the question about maximum virtual memory for an application and exactly how many virtual pages might exist at any moment ... there was a recent discussion of "zero" pages in some comp.arch thread. most virtual memory systems (mvs, vm370, windows, unix, linux, apple, etc) usually don't actually create a virtual page until it has been accessed for the first time. on first access, the system allocates a page in real storage and initializes it to all zeros. system utilities typically also provide a process that allows individual pages to be "discarded" when no longer needed. If an application attempts to access a virtual page that has been previously discarded, the system will dynamically create a new "zeros" page.

i've mentioned the early zero-page implementation in cp67, which actually had a special page on disk that was all zeros. virtual memory pages were initialized to point to the (same) zeros page on disk. this would be read into storage on first access ... and then a new, unique location allocated after first access. i modified it to instead recognize that the virtual page didn't yet exist ... and create one dynamically on the fly by storing zeros in a newly allocated real storage page location.
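
a little C contrasting the two schemes (invented names ... the real cp67 page-read was channel i/o, not a memset):

  /* sketch of the two zero-page schemes -- invented names: the original
     cp67 scheme read a dedicated all-zeros disk page on first touch;
     the modified scheme just clears a fresh frame in storage */
  #include <stdio.h>
  #include <string.h>

  #define PAGE 4096
  static char real_storage[16][PAGE];   /* a few real page frames       */
  static unsigned next_frame;

  static void page_read_zeros(char *frame)
  {
      /* stand-in for an actual disk read of the all-zeros page */
      memset(frame, 0, PAGE);
      printf("i/o performed just to obtain zeros\n");
  }

  static char *first_touch(int original_scheme)
  {
      char *frame = real_storage[next_frame++];
      if (original_scheme)
          page_read_zeros(frame);      /* disk i/o for a page of zeros */
      else
          memset(frame, 0, PAGE);      /* create the zeros on the fly  */
      return frame;
  }

  int main(void)
  {
      first_touch(1);                  /* original cp67 behavior        */
      first_touch(0);                  /* modified: no i/o needed       */
      return 0;
  }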

a couple past posts mentioning zeros page:
https://www.garlic.com/~lynn/2004h.html#19 fast check for binary zeroes in memory
https://www.garlic.com/~lynn/2005c.html#16 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#24 [Lit.] Buffer overruns

a couple comp.arch threads from google groups that mention zero page
http://groups.google.com/group/comp.arch/browse_thread/thread/db9e349754c2c0bd/ed3c64f4160ecf41?lnk=st&q=zero+page+group%3Acomp.arch&rnum=4&hl=en#ed3c64f4160ecf41
http://groups.google.com/group/comp.arch/browse_thread/thread/ae7e455f75d9ccc5/c6e5905ac0ffdb4f?lnk=st&q=zero+page+group%3Acomp.arch&rnum=5&hl=en#c6e5905ac0ffdb4f

Multiple address spaces

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Fri, 10 Feb 2006 10:38:42 -0700
re:
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces

as an aside ... some systems implement virtual memory but don't support paging. in these situations, the amount of virtual memory can be no larger than the amount of real storage.

one such system was a hacked version of os/mvt release 13 done by boeing huntsville on a duplex (two-processor) 360/67. it was sort of like the vs2-svs effort but w/o paging support. the problem was that boeing huntsville was supporting a number of 2250 graphic displays ... which looked like long-running jobs to mvt. mvt required contiguous storage for applications and had problems with storage fragmentation with long-running jobs. the hack done by boeing huntsville using virtual memory hardware ... allowed disjoint real storage locations to appear to be contiguous.
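
a small C sketch (invented names) of translation used purely for relocation ... total virtual equals total real and nothing is ever paged to disk:

  /* sketch of using address translation purely for relocation (no
     paging) -- invented names; disjoint free real frames are made to
     appear as one contiguous region, side-stepping mvt's storage
     fragmentation problem for long-running jobs */
  #include <stdio.h>

  #define NFRAMES 16

  static int frame_free[NFRAMES] = {0,1,0,1,1,0,1,1,1,0,1,1,1,1,0,1};
  static int page_table[NFRAMES];      /* virtual page -> real frame    */

  static int alloc_any_frame(void)     /* any free frame, wherever      */
  {
      for (int f = 0; f < NFRAMES; f++)
          if (frame_free[f]) { frame_free[f] = 0; return f; }
      return -1;
  }

  /* give a job `npages` of apparently contiguous virtual storage */
  static void alloc_contiguous_virtual(int npages)
  {
      for (int vp = 0; vp < npages; vp++) {
          page_table[vp] = alloc_any_frame();
          printf("virtual page %d -> real frame %d\n", vp, page_table[vp]);
      }
  }

  int main(void)
  {
      alloc_contiguous_virtual(5);     /* maps disjoint frames 1,3,4,6,7 */
      return 0;
  }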

misc. past posts mentioning boeing huntsville use of 360/67
https://www.garlic.com/~lynn/99.html#174 S/360 history
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001h.html#14 Installing Fortran
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2002j.html#22 Computer Terminal Design Over the Years
https://www.garlic.com/~lynn/2002q.html#47 myths about Multics
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2004.html#16 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004.html#53 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
https://www.garlic.com/~lynn/2004f.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2006.html#40 All Good Things

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers,misc.transport.road
Date: Fri, 10 Feb 2006 11:53:59 -0700
Charles Richmond writes:
I saw a NOVA on PBS a few years back. Some guy flew for a little airline in the Boston/New York area...was talking about the plane. He said that the DC-3 he flew had more miles on it than the one in the Smithsonian.

for lots of topic drift ....

the week before the aug69 share (boston) meeting, i was supposed to have a HASP discussion with somebody at cornell. I flew into la guardia and then had to go over to the marine terminal to catch a flight to Ithaca. it turned out to be a dc3.

it was hot, humid august and a thunderstorm was in the area ... so we were held on the ground for an hour ... hot, humid, no air conditioning, and a thick smell of airline fuel in the air. we finally took off, but about half-way to elmira ... hit a severe thunderstorm. i got the most airsick i've ever been in my life (severe turbulence and lightning all around ... even seeming to strike the plane). i wobbled off the plane in elmira and went in search of a motel. next morning i got a rental car and drove to Ithaca.

collected past posts mentioning hasp
https://www.garlic.com/~lynn/submain.html#hasp

couple sites mentioning army cargo version of dc3
http://www.generalaviationnews.com/editorial/articledetail.lasso?-token.key=11276&-token.src=column&-nothing
http://www.ruudleeuw.com/dc3_history.htm
http://www.ruudleeuw.com/skytrain.htm
http://www.geocities.com/b31640/dc3.html

we were recently going thru a pile of old letters that my wife's mother had written her mother from china in the 40s. one letter describes being given 3hrs evacuation notice in nanking. they were bundled on an army cargo plane and arrived at the tsingtao airfield after dark ... a bunch of trucks and cars had been rounded up to light the airfield.

there is another letter by my wife's father from tsingtao to both sets of parents apologizing for not telling them that his wife (their daughter) was pregnant and announcing that she had just given birth to a girl in tsingtao harbor (envelope postmarked "USS Repose").

some "uss repose" web pages:
http://troopships.pier90.org/ships/r/repose/default.htm
http://www.ibiblio.org/hyperwar/USN/ships/dafs/AH/ah16.html
http://www.historycentral.com/navy/hospital/repose.html
http://ussrepose.org/
http://members.aol.com/ussrepose/shipmate2.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multiple address spaces

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: alt.folklore.computers,bit.listserv.ibm-main
Date: Fri, 10 Feb 2006 14:03:55 -0700
re:
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces

somebody just reminded me that access registers didn't ship until the 3090 (they weren't on the 3081).

i have some recollection of various architecture discussions about access registers that i remember being in the 811 candy-stripe document time-frame, i.e. the 3081/370-xa architecture documents. i had a drawer full and was once approached by some vendor about their possible availability. even tho i denied knowing what they were talking about, the fact that they approached me turned up in some later litigation and earned me a debriefing with the FBI.

3081 brought some relief for the mvs addressing constraint that resulted from the combination of the mvs kernel (8mbytes) and the common segment (large installations having it grow to 5mbytes) ... with the dual-address space stuff (introduced on the 3033) as well as 370-xa introducing 31-bit virtual addressing (although it took some time for environments to transition from 24-bit addressing and take advantage of the additional addressing above the 16mbyte line).

the person responsible for dual-address space architecture was also heavily involved in the microprocessor convergence to 801 and endicott's planning on using an 801 microprocessor for the 4341 follow-on. lots of past 801, romp, rios, power, etc posts
https://www.garlic.com/~lynn/subtopic.html#801

recent post mentioning thread between (3033) dual-address space and itanium-2
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers,misc.transport.road
Date: Fri, 10 Feb 2006 14:22:02 -0700
"Charlie Gibbs" writes:
And the cost of this in the aerospace industry makes the computer industry look like child's play.

actually many things that are "human rated" have a lot of design/testing expense that doesn't show up in a lot of consumer electronics. you will see it in computers related to medical or transportation use. the modern digital flight control systems in airplanes have a lot of extra expense; also the FAA air traffic control systems.

an old reference somewhat related to Y2K ... but involving changing how time was calculated in the shuttle MTU (master timing unit) ... the re-certification cost was considered so high that the suggestion was dropped:
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)
https://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...

misc. past posts mentioning human-rated systems:
https://www.garlic.com/~lynn/2003p.html#21 Sun researchers: Computers do bad math ;)
https://www.garlic.com/~lynn/2004q.html#45 C v. Ada
https://www.garlic.com/~lynn/2004q.html#46 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#75 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns

i also have some recollection of an article about the 757/767/777 having a much more sophisticated level of computer design and simulation, which resulted in significantly reduced testing for FAA flight certification (compared to what was needed for the 747 in 1969).

random past posts mentioning 747 test flights in the skies of seattle in the summer of '69
https://www.garlic.com/~lynn/99.html#32 Roads as Runways Was: Re: BA Solve
https://www.garlic.com/~lynn/99.html#130 early hardware
https://www.garlic.com/~lynn/2003.html#51 Top Gun
https://www.garlic.com/~lynn/2004.html#53 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2005s.html#47 Gartner: Stop Outsourcing Now
https://www.garlic.com/~lynn/2006.html#40 All Good Things

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Empires and Imperialism

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Empires and Imperialism
Newsgroups: alt.folklore.computers
Date: Fri, 10 Feb 2006 14:33:05 -0700
Brian Inglis writes:
Scots rejected absolute monarchy, feudalism, and episcopalianism when English influenced kings tried to introduce them, and the law of the land is, unlike English Common Law, based on principles of law instead of case law.


https://en.wikipedia.org/wiki/Declaration_of_Arbroath
https://en.wikipedia.org/wiki/Scots_law
"De Jure Regni Apud Scotos Dialogus", George Buchanan, Edinburgh, 1579 (can't find original or translation online).

Arbroath Declaration (extracts from Fergusson translation)
http://www.clanstirling.org/Main/lib/research/TheDeclarationofArbroath.html
"... But from these countless evils we have been set free, by the help of Him Who though He afflicts yet heals and restores, by our most tireless Prince, King and Lord, the Lord Robert.
...
Him, too, divine providence, his right of succession according to our laws and customs which we shall maintain to the death, and the due consent and assent of us all have made our Prince and King. To him, as to the man by whom salvation has been wrought unto our people, we are bound both by law and by his merits that our freedom may be still maintained, and by him, come what may, we mean to stand. Yet if he should give up what he has begun, and agree to make us or our kingdom subject to the King of England or the English, we should exert ourselves at once to drive him out as our enemy and a subverter of his own rights and ours, and make some other man who was well able to defend us our King; for, as long as but a hundred of us remain alive, never will we on any conditions be brought under English rule. It is in truth not for glory, nor riches, nor honours that we are fighting, but for freedom - for that alone, which no honest man gives up but with life itself.
...
Given at the monastery of Arbroath in Scotland on the sixth day of the month of April in the year of grace thirteen hundred and twenty and the fifteenth year of the reign of our King aforesaid.
..."


i was recently reading an old history book (published around 1880) that claimed it was extremely fortunate that the declaration of independence (as well as other founding efforts) was much more influenced by scottish descendants in the (state of) virginia area ... than by any english influence from the (state of) mass. area ... that the USA would have been a markedly different nation if more of the Massachusetts/English influence had prevailed (as opposed to the Virginia/Scottish influence).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Seeking Info on XDS Sigma 7 APL

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Seeking Info on XDS Sigma 7 APL
Newsgroups: alt.folklore.computers
Date: Fri, 10 Feb 2006 15:07:24 -0700
"Charlie Gibbs" writes:
I give the latter group the title "Official Worrier". I've been in the same position - forced to give up time doing useful work in order to stand around with the others and worry about a problem which I can do nothing to fix. It's a political thing, which serves no purpose other than to give PHBs the warm fuzzies.

I was once told to go sit at a customer site for six months.

I was in good standing with several people at the customer site; it was one of the largest commercial mainframe customers, with football fields full of mainframes.

The local branch manager had done something to severely irritate the customer ... and in retaliation they had ordered an Amdahl computer (up until that time Amdahl had shipped to educational and research institutions ... but hadn't actually broken into any "true" blue commercial accounts).

I was told that I needed to spend all my time at the account trying to convince them not to install the Amdahl machine. My response was that they were going to install the Amdahl machine and it had nothing at all to do with technical issues ... effectively, if I spent all my time at the account and the customer did install the "FIRST" commercial Amdahl machine ... it would appear to the rest of the world as if it was my fault (as opposed to the customer just being ticked off at the local branch manager) ... and I already knew from the customer that they were going to install the Amdahl machine regardless.

I was told that refusing would be a significant career-limiting act ... that if i didn't fall on the sword in place of the branch manager ... the branch manager would be blamed ... and he was very good friends with the CEO ... being one of the people that regularly crewed on the CEO's sail boat ... and that i would then get the blame for allowing the branch manager to be blamed.

mention of another career-limiting move
https://www.garlic.com/~lynn/2005j.html#32 IBM Plugs Big Iron to the College Crowd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multiple address spaces

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: alt.folklore.computers
Date: Fri, 10 Feb 2006 17:30:17 -0700
Rich Alderson writes:
Lynn, I think you mean VS1-SVS. That, at least, was the name on the manuals I worked with during my stint as a systems programmer in the IBM world at the University of Chicago Computation Center. One of my jobs was writing a utility to translate the SVS JCL--with some 3rd party extension whose name I don't remember after 24 years, which used /*FOO for additional control cards--to MVS JCL with its standard //*FOO control cards. (Hmm, RACF? Nah, I think that was a security package.)

The changeover from SVS to MVS accompanied the replacement of an Amdahl 470/V8 by an IBM 3031. We had a 4341 on loan to test MVS functionality while the new water pipes were installed--the air-cooled 470 sat where the old 370/168 had been.


mft -> vs1; mvt -> vs2 ... initially svs, later upgraded to mvs

MVS ... a long history
http://www.os390-mvs.freesurf.fr/mvshist2.htm

from above ...
OS/VS1 provided a single virtual storage address space system, while OS/VS2 allowed multiple virtual storage address spaces. However, the first release was restricted to a single virtual storage address space and became known as OS/VS2 SVS. The following release, made available in July 1974, contained multiple virtual storage address space support and was named OS/VS2 MVS Release 2. Both OS/VS1 and OS/VS2 SVS supported a total of 16MB of virtual storage. Because the OS/VS2 MVS release supported multiple virtual storage address spaces, each of which provided 16MB, most people assumed it would be years before additional storage would be required.

... snip ...

note: because of the pervasive use of the pointer-passing paradigm, half of each virtual address space was occupied by the 8mbyte kernel image, with the other half supposedly for applications. however, with the transition from (effectively) all applications laid out in the same address space (as was the case in MVT and SVS), all the subsystem applications occupied their own address spaces. In order to support the pointer-passing paradigm between application address spaces and subsystem address spaces ... the "common segment" was created. In larger configurations with lots of subsystems and applications, the "common segment" could occupy as much as five mbytes appearing in every virtual address space (taken out of the eight mbytes supposedly available for applications) ... leaving only a maximum of three mbytes for actual application instructions and data.

the 370/168 TLB (table lookaside buffer) was tailored for MVS. the 168 hardware TLB had 128 (cached) entries for translating virtual to real addresses. Specific bits from the virtual address were used to select a subset of TLB entries ... to see if there was already a saved entry giving the virtual->real translation. On the 168, one of those bits was the "8mbyte" bit ... which resulted in half of the entries being available for MVS kernel addresses and half of the entries being available for applications.

note however, both VS1 and CMS virtual address spaces typically started at zero and rarely went over four mbytes. As a result, both VS1 and CMS rarely had any virtual addresses with the "8mbyte" bit set ... and therefore half the TLB entries on the 370/168 would go unused (in vs1 and cms environments).
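
a rough sketch in C of that entry selection (the actual 168 hashing was more involved; this just shows the 8mbyte-bit effect):

  /* rough sketch of 370/168-style TLB entry selection -- the actual
     hardware hashing was more involved; this just shows how using the
     "8mbyte" bit as an index bit strands half the entries for address
     spaces that never go above 4mbytes */
  #include <stdio.h>

  #define TLB_ENTRIES 128

  static unsigned tlb_index(unsigned vaddr)
  {
      unsigned page = vaddr >> 12;          /* 4k virtual page number    */
      unsigned m8   = (vaddr >> 23) & 1;    /* the "8mbyte" bit          */
      return (m8 << 6) | (page & 0x3f);     /* 64 entries per half       */
  }

  int main(void)
  {
      /* cms/vs1 addresses below 4mbytes never set the 8mbyte bit, so
         indices 64..127 are never generated */
      printf("%u %u\n", tlb_index(0x00100000), tlb_index(0x00900000));
      return 0;
  }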

VS1 was the mft-to-virtual-memory upgrade and became an "Endicott" product for the mid-range 370s (DOS became dos/vs for the entry virtual memory 370s, and MVT became VS2 for the high-end virtual memory 370s).

where it was standard for VS2/SVS to have defined a single 16mbyte virtual address space ... a typical VS1 configuration was a single 4mbyte virtual address space. Endicott started making heavy investment in the mid-range product line being vm/370 oriented. It created the ECPS vm370 performance microcode assist for the 138/148 ... a couple of ECPS postings:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

The vm370/vs1 handshaking was also created. A 4mbyte virtual machine would be defined for VS1. VS1 then defined a single 4mbyte virtual address space that had its pages laid out sequentially in the 4mbyte virtual machine address space. VS1 would then rely on VM to do the paging operations (moving pages to/from disk). In addition to eliminating "double" paging (both VS1 and VM moving pages to/from disk), it turned out that VM did the movement to/from disk significantly more efficiently than VS1 (or for that matter VS2). On CP67, I had optimized the aggregate, complete round-trip pathlength (page fault, page replacement, schedule page i/o, perform page i/o, reschedule task, etc) to a few hundred instructions. It went back up significantly in the initial morph from cp67 to vm370 ... but i got it back down again with the paging code released in the resource manager.

a couple of past posts about vs2 corrupting their LRU page replacement algorithm ... and not getting it corrected until well into the MVS releases (3.8+?).
https://www.garlic.com/~lynn/2004.html#13 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage

misc. past posts mentioning Ludlow(?) working 3rd shift (pok 706? machine room) on initial vs2/svs prototype called AOS2.
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002p.html#49 Linux paging
https://www.garlic.com/~lynn/2002p.html#51 Linux paging
https://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2004e.html#40 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005p.html#45 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers,misc.transport.road
Date: Fri, 10 Feb 2006 18:20:05 -0700
Anne & Lynn Wheeler writes:
there is another letter by my wife's father from tsingtao to both sets of parents apologizing for not telling them that his wife (their daughter) was pregnant and announcing that she had just given birth to a girl in tsingtao harbor (envelope postmarked "USS Repose").

some "uss repose" web pages:
http://troopships.pier90.org/ships/r/repose/default.htm
http://www.ibiblio.org/hyperwar/USN/ships/dafs/AH/ah16.html
http://www.historycentral.com/navy/hospital/repose.html
http://ussrepose.org/
http://members.aol.com/ussrepose/shipmate2.htm


ref:
https://www.garlic.com/~lynn/2006b.html#27 IBM 610 workstation computer

modern technology miracles ... postmark scanned from the referenced letter (sent from tsingtao harbor) ... somewhere along the way, somebody took the stamp for their collection.
https://www.garlic.com/~lynn/repose.jpg

envelope postmarked USS Repose

slightly earlier letter (after arriving in tsingtao)
https://www.garlic.com/~lynn/repose2.jpg

envelope postmarked USS Repose

picture of the ship, 20 some years later
http://ussrepose.org/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multiple address spaces

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 10 Feb 2006 22:58:19 -0700
Johnny Luo wrote:
A little surprising for me. With 31-bit addressing, if you have 10G real storage, only 2G can be addressable as central storage. Then what about the use of the other 8G of real storage? Used as a substitution for paging data sets?

ref:
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#26 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces

3033 had a gimmick ... it was still 24bit virtual (and real) addressing (aka 16mbytes) ... however, it offered a 32mbyte real storage option.

the (370) PTE was 16 bits ... 12 bits used for specifying a real page number (12 bits of 4k pages ... or 4096 4096-byte pages ... 16mbytes). two more bits were used for status ... and two bits were undefined.

for the 32mbyte real storage option ... one of the undefined bits was used as an additional real page number bit (making 13 bits) ... allowing up to 8192 real 4k pages to be specified (or 32mbytes).
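
a small C illustration of the PTE trick (bit positions here are made up for the illustration, not the precise 370 layout):

  /* sketch of the 3033 32mbyte trick -- bit positions illustrative,
     not the precise 370 PTE layout: borrowing one formerly-unassigned
     bit extends the real page number from 12 to 13 bits */
  #include <stdint.h>
  #include <stdio.h>

  /* 16-bit PTE: 12-bit page frame number, 2 status bits, 2 unassigned */
  static unsigned frame12(uint16_t pte) { return pte >> 4; }

  /* 3033: one formerly-unassigned bit becomes real-address bit 13 */
  static unsigned frame13(uint16_t pte)
  {
      unsigned high = (pte >> 1) & 1;        /* the borrowed bit        */
      return (high << 12) | (pte >> 4);      /* 13-bit frame number     */
  }

  int main(void)
  {
      uint16_t pte = 0xFFF0 | 0x2;           /* max frame, borrowed bit */
      printf("12-bit max: %u pages (16mb)\n", frame12(0xFFF0) + 1);
      printf("13-bit max: %u pages (32mb)\n", frame13(pte) + 1);
      return 0;
  }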

each application had its own 16mbyte virtual address space ... however possibly 13mbytes of each virtual address space was common ... leaving effectively at most 3mbytes unique for the application.

let's say most of the mvs kernel (8mbytes) and most of a 5mbyte common segment were resident at a particular moment ... that would take up 13mbytes of real storage ... leaving possibly 19mbytes (out of 32mbytes real). you could have dozens of applications and subsystems running concurrently ... each with their own unique address space and unique pages. let's say there were 40 concurrent applications and subsystems running at any point in time ... if they had the max. 3mbytes each ... that would account for 120mbytes of total unique virtual memory (spread across the 40 different virtual address spaces). with possibly only 19mbytes of real storage left available ... only 19mbytes of the 120mbytes of possible virtual pages could be resident in real storage at any point in time.

real addressing couldn't address more than 16mbytes of real storage ... and virtual addressing couldn't address more than 16mbytes of virtual storage ... however there could be hundreds of megabytes of total virtual storage spread across scores of different virtual address spaces.

the only additional thing required was being able to generate real addresses for I/O larger than 16mbytes. That was provided with 31-bit IDAL i/o addressing.

In any case, even if the virtual (and/or real) instruction addressing can't directly address all of real storage ... it is still possible to fully utilize real storage much larger than the instruction addressing capability (using hardware that can resolve/translate virtual memory addresses to a much larger real address ... along with the ability to have multiple concurrent virtual address spaces).

So if you have 31-bit virtual (instruction) addressing and much more than 2gbytes of real storage ... all you need is for the virtual->real hardware addressing mechanism to handle mapping to real page numbers that are greater than 2gbytes. each 2gbyte virtual address space can't address/occupy more than 2gbytes of real storage ... however, with scores or hundreds of different 2gbyte virtual address spaces ... it is easily possible to have aggregate virtual pages well in excess of 10gbytes (not all of which could be real-storage resident ... even with 10gbytes of real storage).

note that the 3090 had a different issue with the expanded storage feature. as noted in the related posts below ... the physical packaging limited the amount of memory that could be within the 3090 instruction fetch latency specification. they went to a two-level software-managed hierarchy called expanded storage. this had a longer-latency, wider memory bus than was used for standard executable memory (allowing it to be placed at a greater distance). kernel software instructions were used to move 4k pages back and forth between regular memory and expanded storage (in that sense it was sort of treated as an electronic paging device ... but using synchronous instruction moves instead of asynchronous i/o).
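
a little C sketch of the software-managed two-level store (invented names; the real 3090 used special instructions for the synchronous 4k moves):

  /* sketch of a software-managed two-level store -- invented names:
     synchronous 4k page copies between regular and expanded storage
     replace asynchronous paging i/o (no interrupt, no task switch
     while waiting) */
  #include <stdio.h>
  #include <string.h>

  #define PAGE 4096
  static char real_storage[8][PAGE];   /* regular (executable) memory  */
  static char xstore[64][PAGE];        /* expanded storage             */

  static void page_out_to_xstore(unsigned frame, unsigned block)
  {
      memcpy(xstore[block], real_storage[frame], PAGE);  /* synchronous */
  }

  static void page_in_from_xstore(unsigned frame, unsigned block)
  {
      memcpy(real_storage[frame], xstore[block], PAGE);  /* synchronous */
  }

  int main(void)
  {
      real_storage[3][0] = 42;
      page_out_to_xstore(3, 17);       /* demote a page                 */
      page_in_from_xstore(5, 17);      /* later, bring it back          */
      printf("%d\n", real_storage[5][0]);
      return 0;
  }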

recent posts in a thread talking about partitioning current storage as emulated "3090" expanded storage:
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#18 {SPAM?} Re: Expanded Storage

past posts mentioning 3033 32mbyte support
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2004g.html#20 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005.html#34 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005.html#43 increasing addressable memory via paged memory?
https://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#19 address space
https://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?

as mentioned in some of the above references ... one of the issues prompting the 32mbyte feature on the 3033 was the significant real storage requirements of the mvs kernel and subsystems (not only was there significant pressure being placed on the amount of addressable virtual storage by the mvs kernel and subsystem requirements ... but the same features were also consuming significant amounts of real storage, causing enormous paging pressure).

the other was that the endicott midrange 4341 was a real price/performance killer ... not only was it extremely competitive with other offerings (like dec vax) in its market segment ... but a cluster of six 4341s was about the same price as a 3033, had a higher aggregate mip rate than a 3033, could be configured with an aggregate of 96mbytes (16mbytes for each 4341) and an aggregate of 36 i/o channels (6 i/o channels for each 4341).

Seeking Info on XDS Sigma 7 APL

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Seeking Info on XDS Sigma 7 APL
Newsgroups: alt.folklore.computers
Date: Sat, 11 Feb 2006 09:03:30 -0700
"Charlie Gibbs" writes:
Don't tantalize us! Tell us what happened next!

ref:
https://www.garlic.com/~lynn/2006b.html#31 Seeking Info on XDS Sigma 7 APL

i shipped the resource manager (30 years ago in a couple months),
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

the customer got their Amdahl machine, and i transferred from the east coast to sjr on the west coast. I didn't bother to follow what happened at that branch office ... and tried not to think about what they had meant by the references to my career.

I got into mailing lists and computer conferencing ... and got blamed for mailing lists and computer conferencing ... there was even a datamation article blaming me. a researcher was hired to sit in the back of my office for 9 months and take notes on how i communicated (phone, face-to-face, computer, going along with me to meetings). they also had copies of all my incoming and outgoing email as well as logs of all my instant messages. this also turned into a stanford phd thesis (joint between language and computer ai) and the subject of subsequent books and papers. misc. postings related to cmc
https://www.garlic.com/~lynn/subnetwork.html#cmc

remember the internal network was larger than the arpanet/internet from just about the start until sometime mid-85.
https://www.garlic.com/~lynn/subnetwork.html#internalnet

at the time of the great arpanet switch-over to internetworking protocol on 1/1/83, it had approx. 250 nodes ... and the internal network was nearing 1000 nodes ... which it passed a few months later
https://www.garlic.com/~lynn/internet.htm#22

somebody once told me a line about the best that "you" (technical person) can expect (for having a successful project) is to not be fired and to be allowed to do it again.

slightly related story in the same vein
https://www.garlic.com/~lynn/2005j.html#32 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005s.html#16 Is a Hurricane about to hit IBM ?
https://www.garlic.com/~lynn/2006.html#21 IBM up for grabs?
https://www.garlic.com/~lynn/2006.html#22 IBM up for grabs?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Multiple address spaces

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 11 Feb 2006 10:58:04 -0700
Johnny Luo wrote:
Great example. If IBM redbooks had such detailed examples for beginners ... I know it's impossible, so I would thank you again for it. Maybe the last question I would like to raise is about the system common area

it helps to have lived thru the whole thing and worked on much of the transition.

previous posts in the thread
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#26 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Sat, 11 Feb 2006 17:20:21 -0700
"Richard E. Silverman" writes:
With a standard, distributed trust system such as X.509 PKI, this problem simply goes away. It is only necessary to distribute to clients, once, a single root certificate under which server hostkey certificates are issued. Servers may then be added, removed, or rekeyed at will, with no client updates needed. Similar improvements are realized if certificates are also used for user authentication, although that entails much more overhead and hence is less likely to be necessary or used.

GSSAPI/Kerberos solves the server authentication problem as well, using a different technology, and solves the hostname aliasing problem too (modulo security of the DNS). However, it is often not feasible to implement Kerberos due to practical limitations of today's networks, such as NAT and firewalls. X.509 is much more self-contained.


aka digital certificates are an analogy to the letters of credit/introduction from sailing-ship days, when the relying parties (clients) otherwise didn't have access to the necessary information (either their own local repository or realtime/online access to some certifying authority).

one of the issues from the early 90s with x.509 identity certificates and trusted third party PKI certification authorities ... was that the TTP/CAs were looking at making the identification useful for relying parties, helping justify the prices that they would be charging the key-owners (for the certificates). not knowing ahead of time all the possible relying parties (that might be making use of the certificates), there was a tendency to grossly overload the x.509 identity certificates with personal information (in the hopes that the majority of future relying parties might find something of use).

one of the issues going into the mid-90s was a dawning realization that x.509 identity certificates, grossly overloaded with personal information, represented significant privacy and liability issues.

the original pk-init draft for kerberos ... simply involved registering a public key (in lieu of a password) ... and then performing digital signature verification using the onfile public key (in place of password matching). I periodically get email from the person that claims to have been responsible for adding pki-based certificates to the pk-init draft, apologizing for not realizing (at the time) that they were redundant and superfluous.

misc past kerberos related posts
https://www.garlic.com/~lynn/subpubkey.html#kerberos

when working on the original payment gateway ... there were proposals that consumers could register a public key with their certification authority ... aka their financial institution ... and then digitally sign the transactions ... for verification by their financial institution using the onfile public key. one such approach is the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959

there were also PKI-oriented suggestions that adding digital signatures and attached digital certificates to financial transactions would modernize financial transactions. the counter observation was that the purpose of digital certificates was to allow offline verification (i.e. when the relying party didn't have online access to the certifying authority), which was the case prior to the infrastructure going online in the 70s (aka the certificate-oriented proposal was essentially a return to the offline days of 30 years earlier).

again, given that the relying party has their own repository of information and/or online access to some certifying authority, the digital certificates were redundant and superfluous. an adequate modernization would be to digitally sign the transactions and verify the digital signature with the onfile public keys.
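
a minimal modern sketch of that onfile public key pattern, using the OpenSSL EVP interface (illustrative only ... the x9.59 and pk-init specifics are more involved, and openssl obviously postdates the story):

  /* sketch of the "onfile public key" pattern -- the consumer's key is
     registered once with the financial institution, and each 60-80 byte
     transaction is verified against that stored key, with no certificate
     attached (OpenSSL 1.1.1+ EVP interface) */
  #include <stdio.h>
  #include <openssl/evp.h>
  #include <openssl/rsa.h>

  int main(void)
  {
      /* key pair; in practice the consumer generates this and registers
         only the public half with their financial institution */
      EVP_PKEY *key = NULL;
      EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_RSA, NULL);
      EVP_PKEY_keygen_init(kctx);
      EVP_PKEY_CTX_set_rsa_keygen_bits(kctx, 2048);
      EVP_PKEY_keygen(kctx, &key);

      const unsigned char txn[] = "PAY 42.00 ACCT 123456 SEQ 0001";

      /* consumer signs the transaction */
      unsigned char sig[512];
      size_t siglen = sizeof sig;
      EVP_MD_CTX *sctx = EVP_MD_CTX_new();
      EVP_DigestSignInit(sctx, NULL, EVP_sha256(), NULL, key);
      EVP_DigestSign(sctx, sig, &siglen, txn, sizeof txn - 1);

      /* financial institution verifies against the onfile public key --
         the transaction carries only the signature, no certificate */
      EVP_MD_CTX *vctx = EVP_MD_CTX_new();
      EVP_DigestVerifyInit(vctx, NULL, EVP_sha256(), NULL, key);
      int ok = EVP_DigestVerify(vctx, sig, siglen, txn, sizeof txn - 1);
      printf("signature %s\n", ok == 1 ? "verified" : "rejected");

      EVP_MD_CTX_free(sctx);
      EVP_MD_CTX_free(vctx);
      EVP_PKEY_free(key);
      EVP_PKEY_CTX_free(kctx);
      return ok == 1 ? 0 : 1;
  }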

there was a separate problem with appending digital certificates to such financial transactions. the typical financial transaction is on the order of 60-80 bytes ... and the typical PKI certificate-appending protocols of the mid-90s added another 4k-12k bytes to the financial transaction ... an enormous payload bloat (an increase of 100 times) for something that was redundant and superfluous.

for a time, there was a standardization effort in the X9 financial standards group looking at creating PKI digital certificates that would be on the order of 300 bytes ... eliminating the enormous payload penalty for redundant and superfluous digital certificates. One approach was itemizing the various fields in a standard digital certificate that were common across all digital certificates issued by the same financial institution, and then eliminating the redundant fields. This would only result in retaining those fields that were unique to a specific certificate. However, it was possible to show that the financial institution (as both the relying party and the certificate-issuing party) would be in possession of all certificate fields, and therefore it was possible to reduce such digital certificates to zero fields and zero bytes ... an enormous improvement in the payload bloat associated with appending redundant and superfluous digital certificates.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

blast from the past ... macrocode

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: blast from the past ... macrocode
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 12 Feb 2006 09:56:40 -0700
Date: 3/18/81 10:19:45
From: wheeler
Subject: processor speed

Amdahl has announced (of course) the 5880 and an MP version of the same. It supposedly is a 12 mip processor. It also has something called macro-code. Macro-code apparently is something very close to micro-code in speed but is a subset of the 370 instruction set. Comment from endicott is that ECPS proposed something similar at ECPS time. By restricting the 370 instruction set & certain things like self-modifying instructions you could get it to run at nearly the same speed as micro-code. Result is that you have micro-code speed but not a problem where you have only two or three people in the whole world capable of programming the box. Code development & ECPS enhancement become very simple. 5880 also has dual caches, one for instructions & one for data, something very similar to the idea that 801 architecture uses.

In the current world there is the NAS AS9000. The week of the VMITE, xxxxx was down @ Lockheed Dialog which has one. He had just finished benchmarking the box at 9 mip UP. Box is micro-coded & has VMA. xxxxx also said that Natsemi has ECPS ready to go & would like to work with people enhancing microcode features. The AS9000 box is made by Hitachi. SLAC was in Japan in Dec. to benchmark the box. They commented that the first box that Hitachi built was actually an MP with a built-in array processor in each CPU (MP wasn't an add-on technology to come out later). Would seem to be MP 15-18 mips in 370 mode and 30-40+? mips in the array processors.


... snip ... top of post, old email index

Date: 4/21/81 10:05:31
From: wheeler
Subject: AS9000

did I ever send you a copy of the message about the AS9000 at Dialog? AS9000 is a uni-processor marketed in this country by NAS. It is made by Hitachi (i think they call it the H-240M or some such thing). xxxxxx (who does some consulting work for Dialog) has clocked the AS9000 at 9 mips. There is also the story about the first AS9000 built by Hitachi. It was a full duplex machine with built-in array processors on each CPU. Also the machine is a fully micro-coded machine. It has VMA; national has full ECPS but has not yet installed it on the Dialog machine. They have also talked to xxxxxx about design help in enhancing the micro-coded assists. AS9000 is several years old & Hitachi is working on a more advanced machine.


... snip ... top of post, old email index

collected posts mentioning microcode
https://www.garlic.com/~lynn/submain.html#mcode

801 was the original risc architecture from the 70s, showed up in romp, rios, power, power/pc, etc. misc. collected postings
https://www.garlic.com/~lynn/subtopic.html#801

Amdahl did its hypervisor implementation in "macrocode" ... and ibm eventually responded with pr/sm on the 3090.

misc. past posts mentioning (Amdahl) macrocode
https://www.garlic.com/~lynn/2002p.html#44 Linux paging
https://www.garlic.com/~lynn/2002p.html#48 Linux paging
https://www.garlic.com/~lynn/2003.html#9 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005d.html#60 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#43 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?

misc. past posts mentioning ecps:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/2003f.html#43 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#47 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#54 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

another blast from the past

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: another blast from the past
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 12 Feb 2006 10:50:57 -0700
re:
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

Date: 3/10/80 08:08:43
From: wheeler

they were looking at doing things on a chip. I don't know if it is conflict or complement. Some of the people worked on VAMPS and would like to do CMS but I don't think their chips are that large (yet?). xxxxxx (who used to be at Cambridge) and yyyyyy (who was in Burlington). Burlington is still a strategic MVS shop with no VM. They may have to install at least one. Their MVS is 9 meg (leaving 7 meg.), and they have a Fortran design program exceeding that size. VM/CMS could provide him 16 meg. less about 200k or so. zzzzzzz in defense of POK said they are seriously looking at segment protect in the hardware now. Several weeks ago a TSO product admin. called me about putting the scheduler into MVS. They appear to be running scared. It seems that they would like to avoid having salesmen going into MVS accounts and admit that MVS isn't the answer to everything (and by implication the IBM salesman was wrong) and that they will now have to install VM for interactive computing. Also heard that POK management tried to browbeat DP marketing into giving equal billing to TSO (after the VM/CMS is the interactive way to go).
--
Talked to both TSS and Amdahl people about UNIX. That may appear to be a more realistic threat to VM/CMS. It would appear that not only do most IBM'ers not know what is going on outside the company but a lot don't even know what is going on outside of their own Lab.
--
Something about two-pi. NCSS has 100 3200 machines out in the field and two-pi has another 130. They are all at different EC levels and nobody appears to have records of which ECs are on which machines.
--
I'll be getting a machine readable copy of the VMSHARE data base and hope to be putting it up on the HONE system, in addition to our machine. I would like to have a VMSHARE userid (something like TYMSHARE has) so that outside users can dial up and look at the data base. Heard some PSR say that putting a problem onto VMSHARE usually takes a day or two for an answer instead of a month or better on RETAIN. I would eventually like to get some sort of restricted VNET dial-up to pick up incremental changes (currently plan on receiving monthly tapes).
--
After the 4300 experience session, somebody from POK said the user benchmarks probably drove the final nails into the 3031's coffin.
--
Oh yes, I flew back to San Jose with two people from a TSO performance group. I'm afraid that I brutalized them. Their parting shot was that CMS & TSO were in different divisions and that both divisions have to spend money on interactive computing. POK has to continue to spend money on TSO since that is the only thing they have.


... snip ... top of post, old email index, HONE email

misc. refs/terms:

scheduler ... my resource manager ... dynamic adaptive feedback policies that had a fair-share policy as the default (a small sketch of the fair-share idea follows below). originally done when i was an undergraduate for cp67. dropped in the morph of cp67 to vm370 ... but reintroduced as the resource manager. old posting reproducing the resource manager blue letter
https://www.garlic.com/~lynn/2001e.html#45
various collected postings
https://www.garlic.com/~lynn/subtopic.html#fairshare
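
purely as illustration (not the actual resource manager code, which was 370 assembler ... all the names here are hypothetical), a minimal C sketch of the fair-share idea: dispatch priority degrades as a user's measured consumption climbs above its entitled share, so light users get quick trips through the dispatcher:

#include <stddef.h>

typedef struct {
    double share;     /* entitled fraction of the machine (0..1)        */
    double consumed;  /* recently measured cpu fraction; the "dynamic   */
                      /* adaptive feedback" part would be in how this   */
                      /* is measured & decayed over time                */
} user;

/* lower value == dispatched sooner; a user over its fair share
   (ratio > 1) falls behind a user still under its fair share */
static double priority(const user *u) {
    return u->consumed / u->share;
}

/* pick the most deserving of n users (hypothetical dispatcher hook) */
size_t next_to_dispatch(const user *users, size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (priority(&users[i]) < priority(&users[best]))
            best = i;
    return best;
}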

VAMPS ... a project that i worked on about the same time i was working on ecps, resource manager, etc. it was a 5-way SMP design with lots of stuff moved into microcode. misc. past postings mentioning VAMPS
https://www.garlic.com/~lynn/submain.html#bounce

MVS had a (16mbyte) application address space constraint ... the kernel and other system stuff occupied the majority of every application address space (the email above gives the example: a 9 meg MVS leaves only 7 meg, while VM/CMS could offer 16 meg less about 200k) ... misc. recent postings
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#26 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#28 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#32 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#34 Multiple address spaces
https://www.garlic.com/~lynn/2006b.html#36 Multiple address spaces

NCSS was the first spin-off of cp/67 and the cambridge science center offering a commercial time-sharing service
https://www.garlic.com/~lynn/submain.html#timeshare

two-pi produced an entry-level 370 clone that NCSS also marketed under their own logo.

vmshare is the computer conferencing facility provided by tymshare starting in the mid-70s for discussing vm
http://vm.marist.edu/~vmshare/

hone is the internal online interactive vm370-based system providing world-wide support to sales, marketing and field people. in the late 70s all the US hone datacenters were consolidated in cal. there were clones of the system at numerous places around the world.
https://www.garlic.com/~lynn/subtopic.html#hone

there were several benchmarking activities comparing 4341 and 3031 (as well as clusters of 4341 and 3033). misc. past 4341/3031 benchmarking references:
https://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002b.html#0 Microcode?
https://www.garlic.com/~lynn/2002d.html#7 IBM Mainframe at home
https://www.garlic.com/~lynn/2002f.html#8 Is AMD doing an Intel?
https://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#22 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#37 IBM was: CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
https://www.garlic.com/~lynn/2003.html#10 Mainframe System Programmer/Administrator market demand?
https://www.garlic.com/~lynn/2005m.html#25 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005q.html#30 HASP/ASP JES/JES2/JES3
https://www.garlic.com/~lynn/2005q.html#38 Intel strikes back with a parallel x86 design

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

another blast from the past ... VAMPS

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: another blast from the past ... VAMPS
Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l
Date: Sun, 12 Feb 2006 14:31:31 -0700
Anne & Lynn Wheeler writes:
they were looking at doing things on a chip. I don't know if it is conflict or complement. Some of the people worked on VAMPS and would like to do CMS but I don't think their chips are that large (yet?). xxxxxx (who used to be at Cambridge) and yyyyyy (who was in Burlington). Burlington is still a strategic MVS shop with no VM.

ref:
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past

note, i've been redacting names to protect the guilty

Date: 9/16/82 13:57:14
From: wheeler
Subject: bounce lock

re: bounce lock; The VAMPS design had only one processor (at a time) executing CP code ... all the other processors would be executing virtual machines. The CP processor would execute a microcode dispatcher instruction to add a virtual machine to the runnable list. CP would then see if there was any more work to do & then execute the dispatch instruction. The dispatch instruction would either pull the 1st available virtual machine off the dispatch queue & execute it or enter wait state. All the other engines, when not executing a virtual machine, would sit idle waiting for something to be placed on the dispatching queue. When a CP service was required that could not be handled by the extended microcode assist, control would attempt to enter CP. If there was already a processor in CP, this resulted in a CPEXBLOK being queued for the CP processor and control transferring to execute another virtual machine on the dispatch list. The microcode queuing of this CPEXBLOK would cause a special interrupt in the CP processor to dequeue it.

VAMPS was effectively killed in Sept. of 1975. In Nov. of 1975, I began adapting the VAMPS design to a non-microcoded Lexington (168AP). The concept of a CP processor interrupt I replaced with the "single system lock" design. The innovation that I contributed to the single system lock design was that a processor requiring CP services that were not equivalent to the extended microcode function (i.e. those things that potentially required global resources) would go for the single system lock ... but rather than spinning on that lock, like other single system lock designs did ... it would "bounce" off the lock, queue a CPEXBLOK and go off to the dispatcher to find another virtual machine to execute.


... snip ... top of post, old email index
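
a minimal sketch (modern C11 atomics) of the bounce-lock design described in the email above ... illustration only; the real thing was 370 assembler & microcode, and every name here (cpexblok, run_guest, etc.) is hypothetical. note that the sketch publishes the request before trying the lock and rechecks the queue after release; the actual design instead used the microcoded queuing to drive a special interrupt in the CP processor:

#include <stdatomic.h>

typedef struct cpexblok {            /* deferred kernel-work request      */
    struct cpexblok *next;
    void (*service)(void *arg);      /* CP service to run under the lock  */
    void *arg;
} cpexblok;

static atomic_flag system_lock = ATOMIC_FLAG_INIT;  /* single system lock */
static cpexblok *_Atomic work_queue = NULL;         /* queued CPEXBLOKs   */

/* hypothetical: dispatch the next runnable virtual machine */
void run_guest(void) { }

/* push a deferred request onto a lock-free stack */
static void queue_cpexblok(cpexblok *b) {
    b->next = atomic_load(&work_queue);
    while (!atomic_compare_exchange_weak(&work_queue, &b->next, b))
        ;   /* on failure b->next is refreshed with the current head */
}

/* request a CP service: a spin-lock kernel would loop here until the
   lock freed; the bounce lock queues the work & dispatches instead */
void cp_request(cpexblok *b) {
    queue_cpexblok(b);               /* publish the request first        */
    if (atomic_flag_test_and_set(&system_lock)) {
        run_guest();                 /* "bounce": the current lock       */
        return;                      /* holder will drain our request    */
    }
    for (;;) {                       /* we hold the single system lock   */
        cpexblok *q = atomic_exchange(&work_queue, NULL);
        while (q) {
            cpexblok *next = q->next;   /* save: service may free q      */
            q->service(q->arg);
            q = next;
        }
        atomic_flag_clear(&system_lock);
        /* recheck after release: anything queued in the window is
           drained either by us (re-acquire) or by the new holder */
        if (atomic_load(&work_queue) == NULL) break;
        if (atomic_flag_test_and_set(&system_lock)) break;
    }
    run_guest();                     /* CP work done; go run guests      */
}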

Date: 9/16/82 14:07:34
From: wheeler
Subject: bounce lock

bounce lock .... My term for the design is a bounce lock, reflecting what happens in the processor executing the instruction (i.e. the processor bounces off the system lock and goes looking for something else to do). The development team eventually began to refer to it as a "defer" lock ... reflecting their orientation as process/thread (rather than processor) oriented system programmers ... i.e. the virtual machine (or process) request was queued for later execution.

I believe "defer" lock is an incorrect name since the single system lock is a processor orientated function (i.e. it prevents a processor from entering "protected" cp code). It does not directly affect the state of the virtual machine ... My orientation when I created the design was to solve a processor execution problem ... and therefor a nomenclature orientation representing what the processor has to do ... not what is happening in/to process.


... snip ... top of post, old email index

some VAMPS (& SMP) background. a lot of the stuff that i had "fastpathed" in cp67 was targeted for microcode in VAMPS (interrupt handling, dispatching, and some process-specific stuff that didn't affect global system resources).

in translating VAMPS to a software implementation, a minimal amount of kernel code was parallelized ... leaving the vast majority of kernel code "protected" by the single kernel lock. the majority of the code was not parallelized ... but the high-performance, critical parts of the kernel were parallelized somewhat, (also) drawing on experience gained doing ecps. the trade-off was to get nearly all of the thruput of a highly parallelized kernel with only slightly more work than doing a kernel spin-lock implementation (which was common in the period).

one of the emails is specifically about the semantic appropriateness of referring to the single lock as a bounce lock (my original nomenclature) or a "defer lock" (the name used later).

various collected postings mentioning VAMPS
https://www.garlic.com/~lynn/submain.html#bounce

note that charlie had done quite a bit of work at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

on fine-grain locking and highly parallelized smp. that is where the invention of the compare&swap instruction came from (mnemonic chosen because CAS are charlie's initials); a small illustration of the CAS pattern follows below.
https://www.garlic.com/~lynn/subtopic.html#smp
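
purely as illustration of the compare&swap usage pattern (in C11 atomics rather than 370 assembler ... not anybody's actual kernel code): load the old value, compute the new one, and compare-and-swap; if another processor got there first, refresh and retry ... no lock held, no processor spinning on somebody else's update:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic long counter = 0;

/* the canonical CS pattern: load old value, compute new value,
   compare-and-swap; if another processor changed the word in
   between, the CAS fails, old is refreshed, and we simply retry */
void add(long n) {
    long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + n))
        ;   /* old now holds the current value; recompute & retry */
}

int main(void) {
    add(5);
    add(37);
    printf("%ld\n", atomic_load(&counter));   /* prints 42 */
    return 0;
}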

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/



