From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: What's a mainframe? Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 17 Dec 2006 15:53:44 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
"San Jacinto" morphed into RIOS and RS/6000, old news item
Date: 25 August 1987, 14:39:42 EDT
To: wheeler
From this week's Management Information Systems Week...
IBM's Austin, Texas, manufacturing facility - where the RT was born -
is currently putting the final touches on a 10-mips Unix-based
workstation, code-named "San Jacinto," according to an industry source.
"It's a follow-on to the RT, due in the first or second quarter" said the
source. The San Jacinto will be Posix-compatible, as well.
... snip ...
as i've mentioned before, RT originally started out with ROMP (chip) and cp.r (written in pl.8) as a followon to the office product division displaywriter. when that project was killed, they decided to retarget it to the unix workstation market ... subcontracting an at&t unix port to the same company that had done the pc/ix port.
misc. past romp, rios, 801, fort knox, etc posts
http://www.garlic.com/~lynn/subtopic.html#801
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM sues maker of Intel-based Mainframe clones Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 17 Dec 2006 16:41:15 -0700
phil@ibm-main.lst (Phil Payne) writes:
the cambridge science center
http://www.garlic.com/~lynn/subtopic.html#545tech
adapted it for tracing instructions and storage references for an application that did semi-automated program reorganization ... optimizing program layout for virtual memory operation.
i had gotten involved in rewriting some of the redcap interfaces to improve the operation/performance for use in the science center application.
the science center application was used quite a bit internally by a number of product developers .... for instance the IMS group in STL made extensive use of it for analysing IMS execution.
eventually it was released as a product called VS/Repack in the spring of '76.
systems journal article describing some of the early work:
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory,
IBM Systems Journal, v10n3, 1971
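The Hatfield & Gerald idea can be illustrated with a minimal sketch, assuming a trace-driven greedy packing heuristic: routines that are referenced close together in time get placed next to each other so they tend to share pages. The routine names, sizes, and the particular heuristic are invented for illustration; this is not the VS/Repack algorithm itself.

```python
from collections import Counter

PAGE = 4096  # 4K page size

def restructure(trace, sizes):
    """Greedy trace-driven reordering: count how often pairs of
    routines are referenced adjacently, then lay routines out in an
    order that keeps high-affinity pairs together on the same page."""
    affinity = Counter()
    for a, b in zip(trace, trace[1:]):
        if a != b:
            affinity[frozenset((a, b))] += 1
    # seed with the most-referenced routine, then repeatedly append
    # the unplaced routine with the highest affinity to the last one
    order = [Counter(trace).most_common(1)[0][0]]
    remaining = set(sizes) - set(order)
    while remaining:
        last = order[-1]
        nxt = max(remaining,
                  key=lambda r: affinity[frozenset((last, r))])
        order.append(nxt)
        remaining.remove(nxt)
    # assign each routine a page number by packing in that order
    page_of, offset = {}, 0
    for r in order:
        page_of[r] = offset // PAGE
        offset += sizes[r]
    return page_of

def pages_touched(trace, page_of):
    """Number of distinct pages referenced by a trace segment."""
    return len({page_of[r] for r in trace})
```

With a naive layout that happens to split a hot pair of routines across a page boundary, the hot loop touches two pages; after restructuring, the pair lands on one page and the loop's working set shrinks accordingly.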
misc. past posts mentioning redcap, program restructuring, vs/repack
http://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/93.html#5 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
http://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
http://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
http://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
http://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
http://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
http://www.garlic.com/~lynn/2002f.html#50 Blade architectures
http://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
http://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
http://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
http://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
http://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
http://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
http://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
http://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
http://www.garlic.com/~lynn/2005.html#4 Athlon cache question
http://www.garlic.com/~lynn/2005d.html#41 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005d.html#48 Secure design
http://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005j.html#62 More on garbage collection
http://www.garlic.com/~lynn/2005k.html#17 More on garbage collection
http://www.garlic.com/~lynn/2005m.html#28 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
http://www.garlic.com/~lynn/2005o.html#5 Code density and performance?
http://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
http://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
http://www.garlic.com/~lynn/2006e.html#20 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006j.html#18 virtual memory
http://www.garlic.com/~lynn/2006j.html#22 virtual memory
http://www.garlic.com/~lynn/2006j.html#24 virtual memory
http://www.garlic.com/~lynn/2006l.html#11 virtual memory
http://www.garlic.com/~lynn/2006o.html#23 Strobe equivalents
http://www.garlic.com/~lynn/2006o.html#26 Cache-Size vs Performance
http://www.garlic.com/~lynn/2006r.html#12 Trying to design low level hard disk manipulation program
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers Date: Mon, 18 Dec 2006 10:30:54 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
for some drift, another reference to the above
http://www.garlic.com/~lynn/2006t.html#13 VM SPOOL question
has old email from '92 about a run-in with somebody who had done some
consulting work at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
in the early 70s ... and more recently had designed the Amdahl Huron database system and was working on the implementation. Part of the discussion was that, at the time, he had also been co-author of a paper on LRU replacement algorithms ... in addition to replacement algorithms as they applied to DBMS buffer cache management.
for additional topic drift ... other recent posts about database buffer
caching
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006o.html#22 Cache-Size vs Performance
http://www.garlic.com/~lynn/2006r.html#31 50th Anniversary of invention of disk drives
http://www.garlic.com/~lynn/2006w.html#27 Generalised approach to storing address details
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Why so little parallelism? Newsgroups: comp.arch,alt.folklore.computers Date: Mon, 18 Dec 2006 11:32:12 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
it turned out that there was a major product announcement later, but it was by kingston, not us, and we never did do any scale-up announcements. if you have anything in the computer museum, it didn't come from us.
previous cluster-in-a-rack/MEDUSA refs:
http://www.garlic.com/~lynn/2006w.html#13 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006w.html#14 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
http://www.garlic.com/~lynn/2006w.html#26 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#38 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#39 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?
as in several previous references, here are old references to the meeting
at oracle
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
Date: 29 January 1992, 16:28:48 PST
From: wheeler
Subject: "cluster computing"
I believe we got a charter last week to do generalized "cluster
computing" with both horizontal growth and availability. We are going
full steam ahead now with plans for major product announce directions
in 6-8 weeks.
Hester gave Larry Ellison (pres. of Oracle) some general "technology
directions".
I'm now in the middle with nailing down overall ... total computing
system environment ... for that year end time frame (database,
horizontal growth, availability, connectivity, function, fileserving,
applications, system management, enterprise wide services, etc, etc).
I wasn't able to make the LLNL meeting tues & weds. this week ... but
XXXXX and YYYYY came by this afternoon (after the meeting).
YYYYY had already put together pictures of the visionary direction
(i.e. for LLNL national storage center) titled "DATAstore" with NSC
providing a generalized fabric switch/router with lots of things
connected to it ... both directly & fully-meshed and in
price/performance hierarchy ... that had HA/6000 as the controlling
central "brains". He effectively said I can get a generalized NSC
switch/router built off combining current NSC/DX technology (including
the RISC/6000 SLA interface) and their HiPPI switch by 2nd qtr. By ye
he should have for me a generalized switch fabric (called UNIswitch)
that has variety of "port" boards
• Sonet,
• FDDI,
• ESCON,
• FCS,
• serial HiPPI,
• parallel HiPPI,
• NSC/DX
In theory, anything coming in any port ... can come out any other
port.
Also, YYYYY has built into the "switch fabric" a "security"
cross-matrix function that can limit who can talk to who
(i.e. otherwise the default fabric is fully-meshed environment,
everybody can talk to everybody). I can use this for the HA "I/O
fencing" function ... which is absolutely necessary for going
greater than two-way.
XXXXX brought up the fact that we have a larger "scope" here and that
immediately there are about a dozen large "hot Unitree" activities
going on at the moment and that (at least) we three will have to
coordinate. One of them is the current LLNL physical data repository
technical testbed ... but there are two other production environments
at LLNL that have to be addressed in parallel with this work ... and
there are another 9 or so locations that we also have to address.
In addition, both NSC and DISCOS have been having some fairly close
dealings with Cornell ... both Cornell proper and also with regard to
the bid on the NSF stuff. Also the Grummen SI stuff for
Nasa/Huntsville came up.
ZZZZZ was also in town visiting Almaden about some multi-media stuff
... and I invited him to sit in on the meeting with YYYYY and
XXXXX. That gave us the opportunity to discuss a whole other series of
opportunities (like at Cargil(sp?)). The tie-in at Discos is
interesting since General Atomics also operates the UCSD
supercomputing center ... and at least two of the papers at last fall
SOSP on multi-media filesystem requirements were from UCSD (XXXXX
knows the people doing the work).
Also in the discussions with XXXXX about Unitree development we
covered various things that WWWWW (LLNL) had brought up in the past
couple days (off line) and the Cummings Group stuff (NQS-exec, network
caching, log-structured filesystem, etc). XXXXX wants to have two
3-way meetings now ... one between WWWWW, XXXXX and me ... in addition
to the 3-way (or possibly 4-way) meeting between Cummings, XXXXX, and
me.
This is all the visionary stuff that we sort of ran thru for the total
computing environment that we would like to have put together for next
year (hardware, software, distributed, networking, system management,
commercial, technical, filesystems, information management).
Effectively YYYYY, XXXXX, and I came out of the meeting with
ground-work platform for both hardware & software to take over the
whole worlds' computing environment. Little grandiose, but we will be
chipping away at it in nice manageable business justified
"chunks/deliverables".
This is consistent with an overall theme and a series of whitepapers
that we have an outside consultant working on (was one of the founders
of Infoworld and excellent "tech writer") ... talking about the
computing vision associated with "cluster computing" (which includes
the MEDUSA stuff ... and HA/MEDUSA being base for HA/6000 scaleup).
... snip ... top of post, old email index
as mentioned ... within a few days of sending the above email, the
whole project was taken away from us and transferred to another
organization and we were told we couldn't work on anything with
more than four processors. and then within a couple weeks
2001n.html#6000clusters1
... scientific and technical only
and then a little later in the year
2001n.html#6000clusters2
... caught by surprise
other old MEDUSA related email
http://www.garlic.com/~lynn/lhwemail.html#medusa
and of course, we were producing a product, i.e., ha/cmp ... misc.
past posts mentioning
http://www.garlic.com/~lynn/subtopic.html#hacmp
the reference to enterprise wide services was part of our 3-tier
architecture ... misc. recent postings
http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
http://www.garlic.com/~lynn/2006v.html#10 What's a mainframe?
http://www.garlic.com/~lynn/2006v.html#14 In Search of Stupidity
http://www.garlic.com/~lynn/2006v.html#35 What's a mainframe?
and past collected postings mentioning 3-tier
http://www.garlic.com/~lynn/subnetwork.html#3tier
for other relational drift and scale-up
http://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
and couple other old rdbms/oracle references:
http://www.garlic.com/~lynn/2004o.html#40 Facilities "owned" by MVS
http://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
and, of course, misc. and sundry posts about system/r
http://www.garlic.com/~lynn/submain.html#systemr
part of ha/cmp scaleup was work on distributed lock manager ...
misc past posts:
http://www.garlic.com/~lynn/2000.html#64 distributed locking patents
http://www.garlic.com/~lynn/2000g.html#32 Multitasking and resource sharing
http://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
http://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
http://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
http://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
http://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
http://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
http://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
http://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
http://www.garlic.com/~lynn/2001l.html#5 mainframe question
http://www.garlic.com/~lynn/2001l.html#8 mainframe question
http://www.garlic.com/~lynn/2001l.html#17 mainframe question
http://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
http://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
http://www.garlic.com/~lynn/2002b.html#37 Poor Man's clustering idea
http://www.garlic.com/~lynn/2002d.html#31 2 questions: diag 68 and calling convention
http://www.garlic.com/~lynn/2002e.html#67 Blade architectures
http://www.garlic.com/~lynn/2002e.html#71 Blade architectures
http://www.garlic.com/~lynn/2002f.html#1 Blade architectures
http://www.garlic.com/~lynn/2002f.html#4 Blade architectures
http://www.garlic.com/~lynn/2002f.html#5 Blade architectures
http://www.garlic.com/~lynn/2002f.html#6 Blade architectures
http://www.garlic.com/~lynn/2002f.html#17 Blade architectures
http://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
http://www.garlic.com/~lynn/2002m.html#21 Original K & R C Compilers
http://www.garlic.com/~lynn/2002n.html#27 why does wait state exist?
http://www.garlic.com/~lynn/2002o.html#14 Home mainframes
http://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
http://www.garlic.com/~lynn/2003d.html#2 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003d.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003d.html#54 Filesystems
http://www.garlic.com/~lynn/2003h.html#35 UNIX on LINUX on VM/ESA or z/VM
http://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
http://www.garlic.com/~lynn/2003k.html#10 What is timesharing, anyway?
http://www.garlic.com/~lynn/2003k.html#17 Dealing with complexity
http://www.garlic.com/~lynn/2004c.html#53 defination of terms: "Application Server" vs. "Transaction Server"
http://www.garlic.com/~lynn/2004d.html#72 ibm mainframe or unix
http://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
http://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
http://www.garlic.com/~lynn/2004m.html#5 Tera
http://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy
http://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
http://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?
http://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
http://www.garlic.com/~lynn/2005.html#55 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
http://www.garlic.com/~lynn/2005f.html#32 the relational model of data objects *and* program objects
http://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
http://www.garlic.com/~lynn/2005h.html#28 Crash detection by OS
http://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
http://www.garlic.com/~lynn/2005m.html#8 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005q.html#49 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005r.html#23 OS's with loadable filesystem support?
http://www.garlic.com/~lynn/2005u.html#38 Mainframe Applications and Records Keeping?
http://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
http://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006j.html#20 virtual memory
http://www.garlic.com/~lynn/2006o.html#24 computational model of transactions
http://www.garlic.com/~lynn/2006o.html#32 When Does Folklore Begin???
http://www.garlic.com/~lynn/2006o.html#33 When Does Folklore Begin???
http://www.garlic.com/~lynn/2006o.html#62 Greatest Software, System R
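For flavor, the heart of any such distributed lock manager is a lock-mode compatibility check. Here is a minimal sketch assuming the classic six VMS/DLM-style lock modes (NL, CR, CW, PR, PW, EX); this is an illustration of the general technique, not the ha/cmp distributed lock manager code:

```python
# lock modes: null, concurrent read, concurrent write,
# protected read, protected write, exclusive
MODES = ["NL", "CR", "CW", "PR", "PW", "EX"]

# COMPATIBLE[a] is the set of modes that may be held by other
# lockers while a lock in mode a is granted (classic VMS-style
# compatibility table)
COMPATIBLE = {
    "NL": {"NL", "CR", "CW", "PR", "PW", "EX"},
    "CR": {"NL", "CR", "CW", "PR", "PW"},
    "CW": {"NL", "CR", "CW"},
    "PR": {"NL", "CR", "PR"},
    "PW": {"NL", "CR"},
    "EX": {"NL"},
}

def can_grant(requested, held_modes):
    """Grant a request only if it is compatible with every
    mode currently granted on the resource."""
    return all(requested in COMPATIBLE[h] for h in held_modes)
```

In a cluster, the interesting part is not the table but where the grant decision runs (a lock-resource master per node, queued converts, dead-holder recovery); the table is the invariant every node must agree on.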
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: S0C1 with ILC 6 Newsgroups: bit.listserv.ibm-main Date: Mon, 18 Dec 2006 12:24:57 -0700
shmuel+ibm-main@ibm-main.lst (Shmuel Metz, Seymour J.) writes:
one of the reasons that the 360/67 had an 8-entry associative array (dlat, tlb, etc) ... was that the worst case for "EXECUTE" of SS instructions required eight different page addresses.
"EX" (execute) of another instruction
2 pages - instruction start and end (crossing page boundary)
target (SS) instruction
2 pages - instruction start and end (crossing page boundary)
2 pages - operand1 start and end (crossing page boundary)
2 pages - operand2 start and end (crossing page boundary)
------
8 pages
=============
load address instruction fetch could have two page faults (when instruction crosses page boundary).
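The page arithmetic above can be checked mechanically. A small sketch, with invented addresses each placed one byte before a 4K page boundary so that every item crosses into the next page (the worst case):

```python
PAGE = 4096  # 4K page size

def pages(start, length):
    """Set of page numbers spanned by a field of the given length."""
    return set(range(start // PAGE, (start + length - 1) // PAGE + 1))

# EX is 4 bytes, an SS instruction is 6 bytes, and each operand is
# taken as 2 bytes (the minimum that can still straddle a boundary);
# the start addresses are hypothetical, chosen so every item crosses
# a page boundary
touched = set()
touched |= pages(1 * PAGE - 1, 4)   # the EX instruction itself
touched |= pages(3 * PAGE - 1, 6)   # the target SS instruction
touched |= pages(5 * PAGE - 1, 2)   # operand 1
touched |= pages(7 * PAGE - 1, 2)   # operand 2
assert len(touched) == 8            # eight distinct pages
```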
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: S0C1 with ILC 6 Newsgroups: bit.listserv.ibm-main Date: Mon, 18 Dec 2006 14:01:06 -0700
Edward Jaffe wrote:
the note in the PoP refers to instruction execution program exceptions (storage references) ... as opposed to possible instruction fetch program exception.
repeat from previous post/reference:
aka, instruction fetch can page fault ... independent of the instruction execution page fault.
one of the reasons that the 360/67 had an 8-entry associative array (dlat, tlb, etc) ... was that the worst case for "EXECUTE" of SS instructions required eight different page addresses.
"EX" (execute) of another instruction
2 pages - instruction start and end (crossing page boundary)
target (SS) instruction
2 pages - instruction start and end (crossing page boundary)
2 pages - operand1 start and end (crossing page boundary)
2 pages - operand2 start and end (crossing page boundary)
------
8 pages
=============
load address instruction fetch could have two page faults (when instruction crosses page boundary).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Multics on Vmware ? Newsgroups: alt.os.multics Date: Tue, 19 Dec 2006 07:09:24 -0700
Renaissance <mapsons.gianl@libero.uk> writes:
misc. other recent posts about service virtual machine and/or virtual
appliance
http://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
http://www.garlic.com/~lynn/2006t.html#45 To RISC or not to RISC
http://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
http://www.garlic.com/~lynn/2006v.html#22 vmshare
http://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)
http://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC
http://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: vmshare Newsgroups: alt.folklore.computers Date: Tue, 19 Dec 2006 08:17:38 -0700
ref:
a little more topic drift ... from the bureau of misinformation
Date: 03/02/87 13:42:13
To: wheeler
Re: VM and executives -It came as a surprise in my meeting with <a top
corporate executive> that Profs ran on VM. He had been led to believe
it was a VTAM application and that was why vm networking had to be
linked with VTAM.
... snip ...
the above wasn't an isolated incident, i'd heard other similar
reports. in this period, there was an enormous amount of
misinformation being pushed up to corporate executives in an attempt
to get a corporate directive to repopulate the internal network with
dumb terminal communication operation ... as well as claims that it
could be used for NSF:
http://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
the internal network had nothing to do with dumb terminal
communication operation ... misc. past posts mentioning the internal
network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
my wife ran into conflict with some of this same group when she served
her stint in pok in charge of loosely coupled architecture (mainframe
for "cluster"). there was eventually an uneasy truce where everything
that crossed glass house boundary supposedly had to be under control of
the strategic dumb terminal communication operation (even that truce
they would chip away at). misc. past posts making reference to her
stint in pok in charge of loosely coupled architecture
http://www.garlic.com/~lynn/submain.html#shareddata
we also ran into conflict a little later when we were doing 3-tier
architecture ... and taking a lot of heat from the SAA crowd
http://www.garlic.com/~lynn/subnetwork.html#3tier
and some other drift ... references to a presentation that claimed
that the same organization was going to be responsible for the demise
of the disk division (the dumb terminal communication operation was
increasingly isolating the glass house from the emerging online,
networking world)
http://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2003c.html#23 difference between itanium and alpha
http://www.garlic.com/~lynn/2004f.html#39 Who said "The Mainframe is dead"?
http://www.garlic.com/~lynn/2004m.html#59 RISCs too close to hardware?
http://www.garlic.com/~lynn/2005j.html#33 IBM Plugs Big Iron to the College Crowd
http://www.garlic.com/~lynn/2005j.html#59 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005t.html#30 AMD to leave x86 behind?
http://www.garlic.com/~lynn/2006k.html#25 Can anythink kill x86-64?
http://www.garlic.com/~lynn/2006l.html#4 Google Architecture
http://www.garlic.com/~lynn/2006l.html#38 Token-ring vs Ethernet - 10 years later
http://www.garlic.com/~lynn/2006r.html#4 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#20 50th Anniversary of invention of disk drives
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: vmshare Newsgroups: alt.folklore.computers Date: Wed, 20 Dec 2006 00:42:10 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the person who had given the presentation predicting the demise of the disk division had much earlier worked on some large customer accounts and had written about some of his experiences.
some of his writings from long ago and far away ....
The Large Conglomerate

In 1975, a large international conglomerate customer accepted the idea that it was possible to use IBM 3600 banking systems in a manufacturing shop floor environment. As a result, the IBM team and customer installed what was to be the second MVS SNA system in the world. The system (hardware and software) was installed by four people and was on-line in 10 weeks. While the effort required 300 to 400 hours overtime by each of the four people, the numerous problems that were experienced were generally regarded as unique and isolated situations. Based on post-installation experiences, the customer and the IBM team no longer hold that belief; the change in attitude gradually occurred as various situations developed.

After the above system had been installed for several months, the 3600 system was enhanced to support dial lines as well as leased lines. This announcement was particularly attractive to the customer since it had two remote 3600 systems that each required 1000 mile leased lines which were only used for 30 minutes (maximum) a day. After investigation, it was determined that the customer would have to change the level of microcode in the 3600 controller to obtain the new function. This required the customer to
• reassemble his 3600 application programs (APBs)
• reassemble his 3600 SYSGENS (CPGENs)
• install and use the new microcode
• use a new level of 3600 starter diskette.

However, the new level of microcode required a new level of Subsystem Support Services (SSS) and Program Validation Services (PVS).
• The new level of SSS required a new level of VTAM.
• The new level of VTAM required a new level of NCP
• reassembly of the customer written VTAM programs.

I do not recall if the new level of VTAM also required a new level of MVS and hence IMS, as the inquiry into this announcement had halted by this point.
The message was clear: the change from a leased line to a dial line would have required that virtually every line of IBM system code in the entire complex be reinstalled. The change was viewed as a very small configuration change by the customer but resulted in a very large system change. Since this was a corporate data center and served multiple user divisions in addition to the particular division using the 3600 system, the impact of the change was totally unacceptable. In short, the change would have challenged two data processing maxims.
• First, any given change can and often does impact service (availability) levels of seemingly unrelated components in a data processing system. The impact is generally unpredictable and usually undesirable. (For example, the customer written VTAM 3600 support program ran for two years without a problem until VSPC was added to the system. Suddenly, the customer's program randomly failed when run during the day but not when run at night. Later it was discovered that VSPC was slowing down VTAM enough to cause the application's VTAM allowable buffer count occasionally to be exceeded. Hence, VTAM destroyed the session.)
• Second, each system change should be implemented in isolation of other changes whenever possible.

... snip ...
Another example of such intertwined software interdependencies can be seen in the contrast between JES2 "networking" and VNET/RSCS "networking".
VNET/RSCS had relatively clean separation of function ... including
what could be considered something akin to gateway function in every
node. I've periodically claimed that the arpanet/internet didn't get
that capability until the great switchover to internetworking protocol
on 1/1/83 ... and one of the reasons why the internal network was
larger than the arpanet/internet from just about the beginning until
possibly mid-85. misc. past posts about the internal network.
http://www.garlic.com/~lynn/subnetwork.html#internalnet
The JES2 networking implementation somewhat reflected the intertwined dependencies of many of the communication implementations dating back to the 60s & 70s. Because of the intertwined dependencies, JES2 implementations typically had to be at the same release level to interoperate. Furthermore, it wasn't unusual for JES2 systems at different release levels, attempting to communicate, to crash one or the other of the JES2 systems ... and even take down the associated MVS systems.
In the large world-wide internal network with hundreds (and then
thousands) of different systems, it would be extremely unusual to have
all systems simultaneously at the same release level. However such a
feature was frequently required by many traditional communication
implementations of the period. There are even historical arpanet BBN
notes about regularly scheduled system-wide IMP downtime for support
and maintenance, aka periodic complete arpanet outage ... minor ref
http://www.garlic.com/~lynn/2006k.html#10 Arpa address
for slight drift, projection that there might be as many as 100
arpanet nodes by (sometime in) 1983 (from 1980 arpanet
newsletter):
http://www.garlic.com/~lynn/2006k.html#40 Arpa address
Over time, a collection of "JES" (nji/nje) line drivers evolved for VNET/RSCS ... that simulated JES protocol and allowed JES/MVS machines to participate in the internal network. There tended to be unique (VNET) JES drivers specific to each JES/MVS release ... a specific driver would be started on a communication line for whatever JES/MVS release was at the other end of the line. Furthermore, over time, VNET/RSCS evolved sort of a canonical representation of JES communication ... and provided format-conversion interoperability between different JES systems (as a countermeasure to JES systems at different release levels causing each other to crash, even bringing down the whole MVS system).
There is the relatively well known story about a San Jose plant site JES2/MVS system attempting to communicate with a JES2/MVS system in Hursley and causing the Hursley MVS system to repeatedly crash. The problem then got blamed on VNET ... for allowing MVS systems to cause each other to crash.
As a result of the enormous vagaries in their implementations ... JES/MVS systems tended to be restricted to boundary nodes ... with VNET/RSCS being the internal corporate networking platform.
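The canonical-format approach can be sketched as a tiny gateway: each release level gets its own encoder/decoder to a shared canonical record, so any release can exchange traffic with any other without understanding the other's wire format. The release names, field names, and wire formats below are invented for illustration; the actual nji/nje record formats are not shown here.

```python
# each simulated driver converts between one release's wire format
# and a shared canonical record, so release 1 and release 2 never
# need to understand each other's format directly
def decode_v1(wire):
    # hypothetical release-1 format: "origin;destination;payload"
    origin, dest, payload = wire.split(";", 2)
    return {"origin": origin, "destination": dest, "payload": payload}

def encode_v1(rec):
    return ";".join((rec["origin"], rec["destination"], rec["payload"]))

def decode_v2(wire):
    # hypothetical release-2 format: "destination|origin|payload"
    dest, origin, payload = wire.split("|", 2)
    return {"origin": origin, "destination": dest, "payload": payload}

def encode_v2(rec):
    return "|".join((rec["destination"], rec["origin"], rec["payload"]))

DRIVERS = {1: (decode_v1, encode_v1), 2: (decode_v2, encode_v2)}

def forward(wire, from_release, to_release):
    """Gateway: decode with the sender's driver, re-encode with the
    receiver's.  A malformed record is dropped here instead of being
    passed through to crash the receiving system."""
    decode, _ = DRIVERS[from_release]
    _, encode = DRIVERS[to_release]
    try:
        return encode(decode(wire))
    except ValueError:
        return None  # drop rather than propagate the damage
```

The design point is the one the post makes: with a canonical intermediate form, adding release N+1 means writing one new driver pair rather than N pairwise translators, and a bad record gets discarded at the gateway instead of taking down the far end.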
disclaimer ... my wife did a stint in the g'burg JES product group
before doing her time in POK in charge of loosely coupled architecture
http://www.garlic.com/~lynn/submain.html#shareddata
misc. past posts mentioning JES2 and/or HASP (not just networking):
http://www.garlic.com/~lynn/submain.html#hasp
for a little drift ... recent posts mentioning service virtual
machines and/or virtual appliances:
http://www.garlic.com/~lynn/2006p.html#10 What part of z/OS is the OS?
http://www.garlic.com/~lynn/2006t.html#45 To RISC or not to RISC
http://www.garlic.com/~lynn/2006t.html#46 To RISC or not to RISC
http://www.garlic.com/~lynn/2006v.html#22 vmshare
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network
http://www.garlic.com/~lynn/2006w.html#16 intersection between autolog command and cmsback (more history)
http://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC
http://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006x.html#6 Multics on Vmware ?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Plurals and language confusion Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 20 Dec 2006 09:22:17 -0700
giltjr@EARTHLINK.NET (John S. Giltner, Jr.) writes:
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: re: The Future of CPUs: What's After Multi-Core? Newsgroups: bit.listserv.vmesa-l,alt.folklore.computers Date: Wed, 20 Dec 2006 12:37:51 -0700
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
as an undergraduate in the 60s, i had done dynamic adaptive resource management ... it was sometimes referred to as fair share scheduling since the default resource management policy was fair share. this was shipped as part of cp67 for the 360/67.
in the morph from cp67 to vm370 ... much of it was dropped. charlie's cp67 multiprocessor support also didn't make it into vm370.
i had done a lot of pathlength optimization and fastpath stuff for
cp67 which was also dropped in the morph to vm370 ... i helped put a
small amount of that back into vm370 release1 plc9 ... a couple past
posts mentioning some of the cp67 pathlength stuff
http://www.garlic.com/~lynn/93.html#1 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
i then got to work on porting a bunch of stuff that i had done for
cp67 to vm370 ... some recent posts (includes old email from the early
and mid 70s)
http://www.garlic.com/~lynn/2006v.html#36 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#7 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006w.html#10 long ago and far away, vm370 from early/mid 70s
and of course mentioned in the above referenced email ... a small amount of the virtual memory management stuff showed up in vm370 release 3 as DCSS.
there was eventually a decision to release some amount of the features
as the vm370 resource manager. some collected posts on scheduling
http://www.garlic.com/~lynn/subtopic.html#fairshare
and other posts on page management
http://www.garlic.com/~lynn/subtopic.html#wsclock
and for something really different old communication (from 1982) about
work i had done as undergraduate in the 60s (also in this thread):
http://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's after Multi-Core?
in any case, some resource manager issues/features
• by continually doing real-time dynamic monitoring and adjusting of operations, I was able to operate at much higher resource utilization and still provide a decent level of service. prior to the resource manager ship, somebody from corporate stated that the current state of the art for resource managers was a large number of static tuning parameters ... and that the resource manager couldn't be considered really advanced unless it also had some number of static tuning parameters (an installation's system tuning expert would look at daily, weekly and monthly activity ... and would select some set of static tuning values that seemed suited to that installation). it did absolutely no good explaining that real-time dynamic monitoring and adapting was much more advanced than static tuning parameters. so, in order to get final corporate release approval ... i had to implement some number of static tuning parameters. I fully documented the implementation and formulas, and the source code was readily available. Nobody seemed to realize that it was a joke ... somewhat from "operations research" ... it had to do with "degrees of freedom" ... aka the static tuning parameters had far fewer degrees of freedom than the dynamic adaptive features.
i had always thought that real-time dynamic adaptive control was
preferable to static parameters ... but it took another couple decades
for a lot of the rest of the operating systems to catch up. it is now
fairly evident ... even showing up in all sorts of embedded
processors for real-time control and optimization. for some slight
boyd dynamic adaptive drift
http://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive
and collected posts mentioning boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
and various URLs from around the web mentioning boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2
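the dynamic adaptive idea can be sketched in a few lines. this is only an illustrative sketch in modern code (not the actual cp67/vm370 implementation, and the names and numbers are invented for illustration): each user's recent resource consumption is continually compared against an equal "fair" share of the sampling interval, and dispatch order comes from that measurement rather than from static tuning values:

```python
# illustrative sketch (not the actual cp67/vm370 code): dynamic adaptive
# fair-share scheduling. each user's recent cpu consumption is compared
# against an equal ("fair") share of the sampling interval; whoever is
# furthest under its share gets dispatched first.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    consumed: float   # cpu seconds used during the recent interval

def dispatch_priorities(users, interval):
    """order users best-first by how far under fair share they are."""
    fair_share = interval / len(users)        # equal slice of the interval
    # ratio < 1.0 means under fair share -> better dispatch priority
    return sorted(users, key=lambda u: u.consumed / fair_share)

users = [User("a", 0.2), User("b", 0.9), User("c", 0.4)]
print([u.name for u in dispatch_priorities(users, interval=3.0)])
# -> ['a', 'c', 'b'] : the user furthest under its share runs first
```

the point of the sketch is that the "tuning knob" is the measurement loop itself ... consumption is re-sampled every interval, so the system adapts as the load changes, with no static parameters for an installation expert to set.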
• there was a transition in the mid-70s with respect to charging
for software. the 23jun69 unbundling announcement had introduced
charging for application software (somewhat because of
gov. litigation). however, the excuse was that kernel software should
still be free since it was required for operation of the
hardware. however, with the advent of clone processors by the mid-70s,
the opinion was starting to shift, and the resource manager got chosen
to be the guinea pig for kernel software charging. as a result, i got
to spend some amount of time with business and legal people on kernel
software charging.
http://www.garlic.com/~lynn/submain.html#unbundle
• some amount of the code in the resource manager had been
originally built for multiprocessor operation .... it was then added
to a base vm370 system that didn't have support for multiprocessor
hardware operation. the next release of vm370 did introduce support
for multiprocessor hardware operation ... some amount based on the
VAMPS work ... misc. past posts mentioning VAMPS
multiprocessor work:
http://www.garlic.com/~lynn/submain.html#bounce
the problem was that the multiprocessor support was going to be part
of the (still) free, base kernel (aka hardware support) ... while
much of the multiprocessor kernel code structure was part of the
"priced" resource manager (and pricing policy said that there couldn't
be free software with a priced software prerequisite). so before
the multiprocessor support was shipped, a lot of the resource manager
code base was reorganized into the free kernel. lots of past posts
mentioning multiprocessor support (and/or charlie's invention of
compare&swap instruction)
http://www.garlic.com/~lynn/subtopic.html#smp
....
previous posts in this thread:
http://www.garlic.com/~lynn/2006t.html#27 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#31 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#32 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#34 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#42 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#43 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#49 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#50 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#0 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#6 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#7 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#8 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#9 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#10 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006v.html#21 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006v.html#43 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006x.html#2 The Future of CPUs: What's After Multi-Core?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers Date: Wed, 20 Dec 2006 13:22:27 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
... i.e. HSC ... LANL doing a standards flavor of the cray channel ... morphed into HiPPI ... 800mbit/sec.
recent past posts mentioning HSC or HiPPI
http://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
http://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
http://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006l.html#43 One or two CPUs - the pros & cons
http://www.garlic.com/~lynn/2006m.html#52 TCP/IP and connecting z to alternate platforms
http://www.garlic.com/~lynn/2006u.html#19 Why so little parallelism?
http://www.garlic.com/~lynn/2006v.html#10 What's a mainframe?
http://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
at about the same time, LLNL was pushing for fiber-optic version (fiber channel standard, FCS) of non-blocking switch technology they had installed that used serial copper (as opposed to serial fiber-optics) .... and SLAC was pushing SCI (another serial fiber-optic standard) ... which had a mapping for SCSI commands.
In that time-frame, POK was trying to get around to releasing escon ... 200mbit fiber-optic emulating the standard mainframe half-duplex parallel channel. This had been knocking around in POK since the 70s ... never quite making it out. While POK was trying to make a decision about releasing escon, one of the Austin engineers took the base escon technology and made a few changes ... upped the raw bandwidth to 220mbits/sec, enhanced it to full-duplex operation, and redid the optical drivers (at well under 1/10th the cost of the ones being used for escon). This was released as "SLA" (serial link adapter) for rs/6000.
we were doing some stuff with LLNL (on FCS), LANL (on HiPPI) and SLAC (on SCI) about the time the engineer had finished the SLA work and wanted to start on a 800mbit version. It took almost six months to talk him into abandoning SLA and going to work on FCS (where he became the secretary/owner of the FCS standards document).
later he was one of the primary people putting together much of the
details for MEDUSA (cluster-in-a-rack). misc. recent posts mentioning
MEDUSA:
http://www.garlic.com/~lynn/2006w.html#13 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006w.html#14 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
http://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?
http://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
if you go back to the 370s ... there were some 1.5mbyte/sec channels (primarily driven at that rate by fixed-head disks). 3380 disks and 3880 disk controllers increased that to 3mbyte/sec operation (along with the newer generation of 3mbyte/sec channels seen on various processors in the 80s). the earlier generation of channels did a protocol handshake on every byte transferred. the new 3mbyte/sec "data streaming" channels relaxed that requirement ... which both increased the data rate and doubled the maximum channel distance from 200ft to 400ft, i.e. you could have a machine room configuration with controllers out at a 400ft radius rather than just a 200ft radius. The 200ft limitation had gotten so severe for some installations that multi-floor configurations were starting to appear (i.e. instead of a 200ft radius circle limitation ... a 200ft radius sphere).
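the distance/rate tradeoff can be illustrated with a toy calculation. the numbers below (signal speed, per-byte overhead) are illustrative assumptions, not the actual channel specifications ... but they show why a per-byte handshake couples the achievable data rate to cable length, and why dropping the per-byte wait ("data streaming") allowed both a higher rate and a doubled distance:

```python
# back-of-envelope sketch with illustrative (assumed) numbers, not the
# actual channel specifications. with a per-byte handshake, every byte
# waits out a cable round trip, so the achievable rate falls as the
# cable gets longer; "data streaming" sends bytes without the per-byte
# wait, decoupling rate from distance.
C = 2.0e8   # assumed signal propagation speed in the cable, m/s

def handshake_rate(distance_m, per_byte_s=100e-9):
    """bytes/sec when each byte costs a round trip plus fixed overhead."""
    round_trip = 2 * distance_m / C
    return 1.0 / (round_trip + per_byte_s)

# rate drops as distance grows under per-byte handshaking
print(f"{handshake_rate(60):.2e}")    # ~200ft
print(f"{handshake_rate(120):.2e}")   # ~400ft: noticeably slower
```

with these assumed numbers the 60m figure lands in the low-mbyte/sec range ... roughly the neighborhood of the old channel rates, though that agreement is a coincidence of the chosen constants, not a derivation.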
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers,bit.listserv.vmesa-l Date: Wed, 20 Dec 2006 15:54:45 -0700
Anne & Lynn Wheeler <lynn@garlic.com> writes:
for other topic drift, i had done some work in the disk engineering
(bldg14) and disk product test (bldg15) labs. ... misc. past posts
mentioning work in bldg14 &/or bldg15
http://www.garlic.com/~lynn/subtopic.html#disk
and misc. past posts this year mentioning 3380 disks and/or 3880
disk controllers
http://www.garlic.com/~lynn/2006.html#4 Average Seek times are pretty confusing
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away
http://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
http://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
http://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
http://www.garlic.com/~lynn/2006g.html#0 IBM 3380 and 3880 maintenance docs needed
http://www.garlic.com/~lynn/2006i.html#12 Mainframe near history (IBM 3380 and 3880 docs)
http://www.garlic.com/~lynn/2006i.html#41 virtual memory
http://www.garlic.com/~lynn/2006j.html#2 virtual memory
http://www.garlic.com/~lynn/2006j.html#3 virtual memory
http://www.garlic.com/~lynn/2006j.html#11 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#14 virtual memory
http://www.garlic.com/~lynn/2006l.html#6 Google Architecture
http://www.garlic.com/~lynn/2006l.html#13 virtual memory
http://www.garlic.com/~lynn/2006l.html#18 virtual memory
http://www.garlic.com/~lynn/2006m.html#5 Track capacity?
http://www.garlic.com/~lynn/2006m.html#8 Track capacity?
http://www.garlic.com/~lynn/2006m.html#13 Track capacity?
http://www.garlic.com/~lynn/2006n.html#8 Not Your Dad's Mainframe: Little Iron
http://www.garlic.com/~lynn/2006n.html#33 CRAM, DataCell, and 3850
http://www.garlic.com/~lynn/2006n.html#35 The very first text editor
http://www.garlic.com/~lynn/2006o.html#44 When Does Folklore Begin???
http://www.garlic.com/~lynn/2006q.html#50 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006r.html#35 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006r.html#40 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006s.html#30 Why magnetic drums was/are worse than disks ?
http://www.garlic.com/~lynn/2006s.html#32 Why magnetic drums was/are worse than disks ?
http://www.garlic.com/~lynn/2006s.html#33 Why magnetic drums was/are worse than disks ?
http://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#18 Why magnetic drums was/are worse than disks ?
http://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006v.html#0 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006v.html#16 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#20 Ranking of non-IBM mainframe builders?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers Date: Wed, 20 Dec 2006 20:58:10 -0700
Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
hsc/hippi was 800mbits/sec parallel, half-duplex.
escon was 200mbits ... and although dual fiber-optics, it operated as if a half-duplex parallel channel. that put escon at 1/4 of hsc/hippi.
the mainframe genre refers to it as a 17mbyte/sec channel. the rs/6000 SLA somewhat originated from the same base technology ... but upgraded to 220mbits/sec (instead of 200mbits).
as mentioned, part of the upgrade to 3mbyte "data streaming" also allowed increasing aggregate channel distances from 200ft to 400ft. along with the 3mbyte "data streaming" channels there were also 3380 disks with 3mbyte/sec transfer.
some of the FCS standards activity had some number of representatives
from the mainframe channel organization, and there was lots of contention
about including half-duplex mainframe channel operation as part
of the FCS standard. it currently goes by the term "FICON". old
posts mentioning FICON
http://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
http://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
http://www.garlic.com/~lynn/2002n.html#50 EXCP
http://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
http://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
http://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
http://www.garlic.com/~lynn/2004h.html#29 BLKSIZE question
http://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
http://www.garlic.com/~lynn/2005e.html#13 Device and channel
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
http://www.garlic.com/~lynn/2005m.html#55 54 Processors?
http://www.garlic.com/~lynn/2005v.html#0 DMV systems?
old posts comparing cp67 360/67 thruput with vm370 3081 thruput and
pointing out that disk relative system thruput had declined by
more than an order of magnitude over a period of 10-15 yrs
... also mentioning that the disk division had taken exception and assigned
the performance modeling organization to refute my statement. a
couple weeks later they came back and commented that i had
somewhat understated the problem:
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: IBM ATM machines Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 21 Dec 2006 10:01:27 -0700
jsavard@ecn.ab.ca wrote:
ditto the san jose ibm credit union
offices were in the basement of bldg.12 ... they had to move out when bldg.12 underwent seismic retrofit.
and for the heck of it and more topic drift, other posts in the pin attack threads
http://www.garlic.com/~lynn/2006u.html#42 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006u.html#43 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006u.html#47 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006u.html#48 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#1 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#2 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#33 New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#39 On sci.crypt: New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#42 On sci.crypt: New attacks on the financial PIN processing
http://www.garlic.com/~lynn/2006v.html#46 Patent buster for a method that increases password security
http://www.garlic.com/~lynn/2006v.html#49 Patent buster for a method that increases password security
the original post in the above thread
http://www.garlic.com/~lynn/2006u.html#40 New attacks on the financial PIN processing
was oriented towards insiders being able to take advantage of their position to compromise PIN processing.
this is different than the skimming/harvesting attacks
http://www.garlic.com/~lynn/subintegrity.html#harvest
that can be done with compromised terminals (which may involve
insiders or outsiders) ... i.e. capture of static authentication data
for replay attacks. traditionally this has involved magstripe cards
(with or w/o PINs) ... but has also been used against chipcards that
rely on static authentication data
http://www.garlic.com/~lynn/2006v.html#45 On sci.crypt: New attacks on the financial PIN processing
in some cases ... some of the chipcard implementations using static authentication data have made it even more attractive for attackers doing skimming exploits.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers Date: Thu, 21 Dec 2006 11:19:16 -0700
eugene@cse.ucsc.edu (Eugene Miya) writes:
you wrote "channel i/o rates were 4x IBM's" ... the only mainframe channel that was 1/4th of 800mbits was escon (200mbits), which didn't start shipping until the time-frame of hsc/hippi ... the 90s.
i was only answering based on what you had written. if you feel that you got the time-frame reference wrong, it isn't my fault.
maybe i misunderstood what you were trying to say and you really were referring to I/O operations per second ... independent of bytes transferred? or maybe you weren't referring to the rates of a channel ... but actually were referring to the overall number of system I/O operations per second (which happened to be channel I/O operations ... again independent of the bytes transferred). Or maybe you meant something different than what you typed?
I apologize if I misunderstood you and you weren't actually referring to a cray channel byte transfer rate being four times IBM's.
if you really meant to reference the 1970s "Cray-1 (when this started)" ... then you would be referring to the mainframe bus&tag 1.5mbyte/sec channel ... around 12mbit/sec ... or do you mean to state that the 1970s Cray-1 channel was 4*12mbit/sec ... or 48mbit/sec?
part of the bus&tag limitation was doing a protocol handshake on every byte transferred (which also imposed the aggregate 200ft limit on each channel). The only devices that actually ran at that (1.5mbyte/sec) rate were fixed-head disks ... and even then many systems required significantly reduced channel distances (in order to operate at that channel transfer rate).
if the decade that you want to reference was with respect to an ibm channel that was 1/4th the 800mbits/sec ... then you are talking about escon and 90s. if you want to reference 70s when ibm channels were 1.5mbytes/sec ... then are you implying that the 1970s cray-1 channel was around 6mbytes/sec data transfer rate?
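the arithmetic being argued about is simple enough to write down ... a sketch using only the figures already in this thread:

```python
# the arithmetic from this thread: bus&tag at 1.5 mbyte/sec is about
# 12 mbit/sec, so "4x" of that would be 48 mbit/sec -- nowhere near
# hsc/hippi's 800 mbit/sec. the channel actually at 1/4th of 800 mbit
# is escon (200 mbit/sec).
bus_and_tag_mbit = 1.5 * 8
assert bus_and_tag_mbit == 12.0
assert 4 * bus_and_tag_mbit == 48.0      # 4x bus&tag, far below hippi
assert 800 / 200 == 4                    # hippi:escon is the real 4x
```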
as to raid ... a reference to a patent awarded in 1978 to somebody at the san
jose plant site ... past posts referencing the patent:
http://www.garlic.com/~lynn/2002e.html#4 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2004d.html#29 cheaper low quality drives
http://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2006p.html#47 "25th Anniversary of the Personal Computer"
http://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
wiki reference:
https://en.wikipedia.org/wiki/Redundant_array_of_independent_disks
misc. past posts mentioning work with bldg. 14 disk engineering and
bldg. 15 disk product test labs
http://www.garlic.com/~lynn/subtopic.html#disk
somewhere in the archives, I even have email with the person granted the patent.
i got involved starting in the 80s looking at various kinds of parallel transfers as a way of mitigating disk performance bottlenecks.
lots of past posts mentioning disk striping and/or raid:
http://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#4 Schedulers
http://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
http://www.garlic.com/~lynn/96.html#33 Mainframes & Unix
http://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
http://www.garlic.com/~lynn/99.html#200 Life-Advancing Work of Timothy Berners-Lee
http://www.garlic.com/~lynn/2000c.html#24 Hard disks, one year ago today
http://www.garlic.com/~lynn/2000c.html#61 TF-1
http://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2001.html#13 Review of Steve McConnell's AFTER THE GOLD RUSH
http://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
http://www.garlic.com/~lynn/2001.html#34 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#35 Where do the filesystem and RAID system belong?
http://www.garlic.com/~lynn/2001.html#36 Where do the filesystem and RAID system belong?
http://www.garlic.com/~lynn/2001.html#37 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#38 Competitors to SABRE?
http://www.garlic.com/~lynn/2001.html#41 Where do the filesystem and RAID system belong?
http://www.garlic.com/~lynn/2001.html#42 Where do the filesystem and RAID system belong?
http://www.garlic.com/~lynn/2001.html#61 Where do the filesystem and RAID system belong?
http://www.garlic.com/~lynn/2001b.html#14 IBM's announcement on RVAs
http://www.garlic.com/~lynn/2001c.html#78 Unix hard links
http://www.garlic.com/~lynn/2001c.html#80 Unix hard links
http://www.garlic.com/~lynn/2001c.html#81 Unix hard links
http://www.garlic.com/~lynn/2001d.html#2 "Bootstrap"
http://www.garlic.com/~lynn/2001d.html#17 "Bootstrap"
http://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors (Re: FA:
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001f.html#71 commodity storage servers
http://www.garlic.com/~lynn/2001g.html#15 Extended memory error recovery
http://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
http://www.garlic.com/~lynn/2001n.html#70 CM-5 Thinking Machines, Supercomputers
http://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2002d.html#14 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002e.html#4 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002l.html#47 Do any architectures use instruction count instead of timer
http://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
http://www.garlic.com/~lynn/2002n.html#9 Asynch I/O
http://www.garlic.com/~lynn/2002n.html#18 Help! Good protocol for national ID card?
http://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
http://www.garlic.com/~lynn/2003b.html#68 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003b.html#69 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003b.html#70 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003c.html#66 FBA suggestion was Re: "average" DASD Blocksize
http://www.garlic.com/~lynn/2003d.html#64 IBM was: VAX again: unix
http://www.garlic.com/~lynn/2003h.html#14 IBM system 370
http://www.garlic.com/~lynn/2003i.html#48 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2003i.html#54 Fix the shuttle or fly it unmanned
http://www.garlic.com/~lynn/2003j.html#64 Transactions for Industrial Strength Programming
http://www.garlic.com/~lynn/2003m.html#42 S/360 undocumented instructions?
http://www.garlic.com/~lynn/2003n.html#36 Cray to commercialize Red Storm
http://www.garlic.com/~lynn/2004.html#3 The BASIC Variations
http://www.garlic.com/~lynn/2004b.html#41 SSL certificates
http://www.garlic.com/~lynn/2004c.html#38 ATA drives and vibration problems in multi-drive racks
http://www.garlic.com/~lynn/2004d.html#29 cheaper low quality drives
http://www.garlic.com/~lynn/2004d.html#30 cheaper low quality drives
http://www.garlic.com/~lynn/2004e.html#7 OT Global warming
http://www.garlic.com/~lynn/2004e.html#25 Relational Model and Search Engines?
http://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
http://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#22 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004h.html#43 Hard disk architecture: are outer cylinders still faster than inner cylinders?
http://www.garlic.com/~lynn/2004k.html#28 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004p.html#38 funny article
http://www.garlic.com/~lynn/2004p.html#59 IBM 3614 and 3624 ATM's
http://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2005d.html#33 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005e.html#5 He Who Thought He Knew Something About DASD
http://www.garlic.com/~lynn/2005e.html#6 He Who Thought He Knew Something About DASD
http://www.garlic.com/~lynn/2005e.html#10 He Who Thought He Knew Something About DASD
http://www.garlic.com/~lynn/2005e.html#11 He Who Thought He Knew Something About DASD
http://www.garlic.com/~lynn/2005h.html#44 First assembly language encounters--how to get started?
http://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
http://www.garlic.com/~lynn/2005k.html#4 IBM/Watson autobiography--thoughts on?
http://www.garlic.com/~lynn/2005m.html#33 Massive i/o
http://www.garlic.com/~lynn/2005m.html#35 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005m.html#41 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
http://www.garlic.com/~lynn/2005n.html#8 big endian vs. little endian, why?
http://www.garlic.com/~lynn/2005n.html#42 Moz 1.8 performance dramatically improved
http://www.garlic.com/~lynn/2005n.html#51 IPSEC and user vs machine authentication
http://www.garlic.com/~lynn/2005r.html#18 SATA woes
http://www.garlic.com/~lynn/2005t.html#17 winscape?
http://www.garlic.com/~lynn/2006b.html#39 another blast from the past
http://www.garlic.com/~lynn/2006d.html#1 Hercules 3.04 announcement
http://www.garlic.com/~lynn/2006d.html#3 Hercules 3.04 announcement
http://www.garlic.com/~lynn/2006d.html#24 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006l.html#6 Google Architecture
http://www.garlic.com/~lynn/2006l.html#14 virtual memory
http://www.garlic.com/~lynn/2006o.html#9 Pa Tpk spends $30 million for "Duet" system; but benefits are unknown
http://www.garlic.com/~lynn/2006p.html#47 "25th Anniversary of the Personal Computer"
http://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
http://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#37 Is this true? (Were gotos really *that* bad?)
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 21 Dec 2006 11:49:07 -0700

eugene@cse.ucsc.edu (Eugene Miya) writes:
again do you have a meaning other than what you had typed?
i've repeatedly used examples comparing 360/67 operating in 360/65 mode (w/o hardware relocation) compared to 360/67 operating with dat enable.
basic 360/65 (and 360/67) double word memory cycle time was 750ns (for uniprocessor). DAT (dynamic address translation) added 150ns to that ... or 900ns (multiprocessor calculation got a lot more complex).
that is just the base hardware overhead ... doesn't include associative array miss ... i.e. the 150ns is only for when the virtual->real translation is in the eight entry associative array. it also doesn't include the software overhead of managing the tables ... or software overhead of doing page i/o operations (moving pages into/out of memory).
as to the software overhead ... part of some of my statements with regard to cp67 virtual memory support was that i rewrote most of it as an undergraduate in the 60s and reduced the overhead by better than an order of magnitude ... but still didn't make it disappear.
maybe you are confusing some of my statements about having rewritten the code and reduced the software overhead by better than an order of magnitude with my not being able to understand that the overhead is non-zero.
later the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
developed extensive metrics as to all aspects of system performance ...
some of it mentioned in collected posts on extensive benchmarking,
work load profiling and precursor to what has become capacity planning,
virtual memory operation
http://www.garlic.com/~lynn/submain.html#bench
another aspect of it was vs/repack ... which was an application/product
for doing extensive program monitoring and implemented semi-automated
program reorganization for operation in a virtual memory environment ...
recent post about vs/repack technology (dating back to a published
article from 71)
http://www.garlic.com/~lynn/2006x.html#1
in combination there was quite a bit of investigation of things like "weak" and "strong" working sets (something sometimes seen today with processor caches as locality of reference) as well as percentage of total real storage required.
misc. past posts mentioning the 750/900ns difference, associative array, etc
http://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
http://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2000.html#88 ASP (was: mainframe operating systems)
http://www.garlic.com/~lynn/2000b.html#52 VM (not VMS or Virtual Machine, the IBM sort)
http://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2000g.html#9 360/370 instruction cycle time
http://www.garlic.com/~lynn/2000g.html#21 360/370 instruction cycle time
http://www.garlic.com/~lynn/2001.html#71 what is interrupt mask register?
http://www.garlic.com/~lynn/2001c.html#7 LINUS for S/390
http://www.garlic.com/~lynn/2001c.html#84 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
http://www.garlic.com/~lynn/2002b.html#6 Microcode?
http://www.garlic.com/~lynn/2002c.html#44 cp/67 (coss-post warning)
http://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
http://www.garlic.com/~lynn/2002f.html#13 Hardware glitches, designed in and otherwise
http://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
http://www.garlic.com/~lynn/2003.html#13 FlexEs and IVSK instruction
http://www.garlic.com/~lynn/2003g.html#10a Speed of APL on 360s, was Any DEC 340 Display System Doco ?
http://www.garlic.com/~lynn/2003g.html#20 price ov IBM virtual address box??
http://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
http://www.garlic.com/~lynn/2003g.html#23 price ov IBM virtual address box??
http://www.garlic.com/~lynn/2003g.html#33 price ov IBM virtual address box??
http://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
http://www.garlic.com/~lynn/2003m.html#29 SR 15,15
http://www.garlic.com/~lynn/2003o.html#52 Virtual Machine Concept
http://www.garlic.com/~lynn/2004.html#16 Holee shit! 30 years ago!
http://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004.html#53 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
http://www.garlic.com/~lynn/2004c.html#46 IBM 360 memory
http://www.garlic.com/~lynn/2005b.html#62 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005d.html#66 Virtual Machine Hardware
http://www.garlic.com/~lynn/2005g.html#17 DOS/360: Forty years
http://www.garlic.com/~lynn/2005h.html#18 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005j.html#43 A second look at memory access alignment
http://www.garlic.com/~lynn/2005k.html#5 IBM/Watson autobiography--thoughts on?
http://www.garlic.com/~lynn/2005p.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005s.html#20 MVCIN instruction
http://www.garlic.com/~lynn/2005s.html#21 MVCIN instruction
http://www.garlic.com/~lynn/2006.html#15 S/360
http://www.garlic.com/~lynn/2006e.html#0 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006x.html#4 S0C1 with ILC 6
http://www.garlic.com/~lynn/2006x.html#5 S0C1 with ILC 6
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 21 Dec 2006 12:22:30 -0700

eugene@cse.ucsc.edu (Eugene Miya) writes:
again do you have a meaning other than what you typed?
when I was an undergraduate in the 60s, nearly all of the mainframes were real memory systems. i had responsibility for building and supporting the univ. production (real memory) system.
three people from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
came out the last week in jan68 to deliver cp67 (virtual memory and virtual machine support).
in my spare time between classes and my primary job responsibilities, I got to play a little with cp67. nearly all of the applications to be run under cp67 came over from the real memory environment ... so as a result there was constant comparison of how an application ran in the real memory environment vis-a-vis the degradation under cp67 due to 1) hardware translation, 2) software virtual memory support and 3) software virtual machine support ... which were all heavily measured and identified as sources of degradation.
these days you have very few A/B comparisons (of the same exact application running with the same exact libraries and operating system) in real memory mode compared to virtual memory mode.
However, in the little spare time that I had to play with cp67 ... i did
manage to come up with a lot of technology that I was then able to
design, implement and deploy ... recent reference to global LRU
replacement algorithm work
http://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
recent reference to dynamic adaptive scheduling work
http://www.garlic.com/~lynn/2006x.html#10 The Future of CPUs: What's After Multi-Core?
a few posts referencing part of a conference presentation that i made
later in '68 on some of my cp67 enhancements comparing real memory
operating system performance ... running on real memory processor and
running in virtual address space under cp67 (after having a few months
to play with the cp67 source code)
http://www.garlic.com/~lynn/93.html#1 360/67, was Re: IBM's Project F/S ?
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
http://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
http://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
http://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
http://www.garlic.com/~lynn/99.html#93 MVS vs HASP vs JES (was 2821)
http://www.garlic.com/~lynn/2001f.html#26 Price of core memory
http://www.garlic.com/~lynn/2001i.html#33 Waterloo Interpreters (was Re: RAX (was RE: IBM OS Timeline?))
http://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
http://www.garlic.com/~lynn/2001l.html#42 is this correct ? OS/360 became MVS and MVS >> OS/390
http://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
http://www.garlic.com/~lynn/2002f.html#38 Playing Cards was Re: looking for information on the IBM
http://www.garlic.com/~lynn/2002l.html#55 The problem with installable operating systems
http://www.garlic.com/~lynn/2002m.html#3 The problem with installable operating systems
http://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
http://www.garlic.com/~lynn/2002n.html#53 SHARE MVT Project anniversary
http://www.garlic.com/~lynn/2003d.html#72 cp/67 35th anniversary
http://www.garlic.com/~lynn/2004f.html#6 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005n.html#40 You might be a mainframer if... :-) V3.8
http://www.garlic.com/~lynn/2005t.html#8 2nd level install - duplicate volsers
http://www.garlic.com/~lynn/2006.html#41 Is VIO mandatory?
http://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
http://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
http://www.garlic.com/~lynn/2006w.html#22 Are hypervisors the new foundation for system software?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Thu, 21 Dec 2006 15:21:18 -0700

eugene@cse.ucsc.edu (Eugene Miya) writes:
there are lots of tales about ruggedized 360s & 370s all over the place.
for minor topic drift recent post referencing distributed
lock manager supporting distributed DBMS operation while meeting ACID
properties
http://www.garlic.com/~lynn/2006x.html#3 Why so little parallism?
then there is the reference to location that Boyd managed that
supposedly was a $2.5B (billion) windfall for IBM ... misc past
posts mentioning the windfall
http://www.garlic.com/~lynn/2005m.html#22 Old Computers and Moisture don't mix - fairly OT
http://www.garlic.com/~lynn/2005m.html#23 Old Computers and Moisture don't mix - fairly OT
http://www.garlic.com/~lynn/2005m.html#24 Old Computers and Moisture don't mix - fairly OT
http://www.garlic.com/~lynn/2005t.html#1 Dangerous Hardware
http://www.garlic.com/~lynn/2006q.html#37 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006q.html#38 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006u.html#49 Where can you get a Minor in Mainframe?
http://www.garlic.com/~lynn/2006u.html#50 Where can you get a Minor in Mainframe?
misc. past posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
and misc. URLs from around the web mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2
you saw a lot of distributed computing starting to happen with 370 138s & 148s ... but it really started to come into full swing with 4331/4341 customer orders in the later 70s ... much more than the vax market (from the late 70s thru mid-80s). customers were ordering them in blocks of a hundred. one large chip design shop had a single order for nearly 1000. these 4341 "mainframes" frequently didn't go into traditional glasshouse locations (centralized fixed fortifications) ... they went into a similar market segment as many of the vax machines ... except there were a lot more of them. there were stories about locations co-opt'ing a large percentage of their conference rooms for locating the machines.
and reference to air force order in late 70s for 210 4341s
http://www.garlic.com/~lynn/2001m.html#15 departmental servers
for comparison a few past posts giving vax shipments broken out by
year, model, US, world-wide, etc.
http://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
http://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2006k.html#31 PDP-1
there are business critical data processing operations which formulate criteria for their "glass house" operation ... these fixed fortifications are frequently populated with mainframes, in part because of various business critical data processing requirements ... however, many such "glass houses" may also be the location for several hundred (or thousand) corporate "distributed servers" ... again because of the corporations' business critical data processing requirements ... and pretty much totally orthogonal to any networking and/or distributed computing architecture issues.
in earlier decades there were driving factors for distributed computing being co-located at remote sites ... in large part because of constrained communication facilities. however, with advances in communication and networking technology ... there starts to be more latitude and trade-offs with regard to physical location.
some recent posts including old email about physical distributed as well
as distributed computing
http://www.garlic.com/~lynn/2006p.html#34 "25th Anniversary of the Personal Computer"
http://www.garlic.com/~lynn/2006p.html#40 "25th Anniversary of the Personal Computer"
http://www.garlic.com/~lynn/2006s.html#41 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#47 To RISC or not to RISC
http://www.garlic.com/~lynn/2006v.html#11 What's a mainframe?
http://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#19 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#23 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#25 Ranking of non-IBM mainframe builders?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Future of CPUs: What's After Multi-Core?
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 08:42:53 -0700

jmfbahciv writes:
i've joked before about the perception regarding some companies providing batch or timesharing ... as if it was an either/or scenario.
at one point there was a perception that multics provided a lot more timesharing than ibm mainframes ... presumably because there were so many ibm mainframes being used for batch business critical operations.
i've mentioned before that there were a significantly larger number of customers using ibm mainframes for batch business critical operations than were using them for strictly timesharing services ... ala cms, cp67, and vm370. and the internal corporate use of ibm mainframes for cp67, cms, and vm370 was a smaller number than the total customer use of cp67, cms, and vm370.
at one point
http://www.garlic.com/~lynn/2006w.html#8 Why these original FORTRAN quirks?
I was building highly customized vm370 systems and shipping them directly to internal corporate timesharing installations. the internal corporate timesharing installations that i was directly supporting were only a small percentage of the total number of internal corporate timesharing installations. the total number of internal corporate timesharing installations was smaller than the total number of customer timesharing installations. However, at its peak, the total number of internal corporate timesharing installations that I was building systems for was about the same as the total number of Multics systems that ever existed in the lifetime of Multics operation (it didn't seem fair to compare the total number of vm370 systems to the total number of Multics systems ... or even the total number of internal corporate vm370 systems to the total number of Multics systems ... I'm only talking about the number of systems that I directly built compared to the total number of Multics systems).
While the number of IBM mainframes used for such timesharing operations was larger than any other vendor building systems for timesharing use, the perception seems to have been that such use was almost non-existent ... possibly because the number of such timesharing systems was so dwarfed by the number of batch commercial systems (i.e. the number of dedicated timesharing systems wasn't actually that small ... it was that the number of batch commercial systems was so much larger ... which appeared to warp peoples' perception).
misc. past posts mentioning comparing the number of multics timesharing
systems
http://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
http://www.garlic.com/~lynn/2001h.html#34 D
http://www.garlic.com/~lynn/2003d.html#68 unix
http://www.garlic.com/~lynn/2003g.html#29 Lisp Machines
http://www.garlic.com/~lynn/2004c.html#47 IBM 360 memory
http://www.garlic.com/~lynn/2005d.html#39 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005q.html#39 How To Abandon Microsoft
http://www.garlic.com/~lynn/2006m.html#25 Mainframe Limericks
Also as pointed at before ... no commercial timesharing offerings ever
appeared based on Multics ... however, there were several such
commercial timesharing offerings based on vm370 (with some even going
back to cp67 days starting in the late 60s).
http://www.garlic.com/~lynn/submain.html#timeshare
some number of these commercial timesharing services even supported
commercial corporate applications that involved extremely
sensitive corporate information ... and even hosted different
competing corporate clients (with extremely sensitive operations) on
the same platform. as such they had to have fairly significant
security measures to keep their (varied) corporate clients' operations
private/confidential (from other clients and from outsiders). a small
example of that was alluded to here
http://www.garlic.com/~lynn/2006v.html#22 vmshare
and as to others with various kinds of integrity and confidentiality
requirements ... a small reference here
http://www.garlic.com/~lynn/2006w.html#0 Patent buster for a method that increases password security
and as referred to here ... some amount of the physical security &
integrity offered by a computing platform ... may reflect the applications
being used and/or the data being processed ... as opposed to being
inherently a characteristic of the hardware.
http://www.garlic.com/~lynn/2006x.html#18 The Future of CPUs: What's After Multi-Core?
To some extent, how tools are used and deployed ... will reflect what it is the tools are being used for. If tools are for extreme business critical dataprocessing ... the deployment and operation can be different than when a similar platform is used for less critical purposes.
I've referenced before that circa 1970 ... the science center started
on a joint project with endicott to add support for 370 virtual machines
with 370 virtual memory ... to the cp67 kernel (running on 360/67). this
project included using a network link between endicott and cambridge
for a form of distributed development ... referenced here
http://www.garlic.com/~lynn/2006u.html#7 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006w.html#3 IBM sues make of Intel-based Mainframe clones
... as well as an early driver for the development of cms multi-level
source management referenced here
http://www.garlic.com/~lynn/2006w.html#42 vmshare
http://www.garlic.com/~lynn/2006w.html#48 vmshare
At the time, information that virtual memory support was going to be available on 370 machines was a closely guarded corporate secret ... and required significant security measures.
Another undertaking at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
was the porting of apl\360 to cms. while the existence of apl on cms wasn't a closely guarded corporate secret ... cambridge offered online cms\apl services to internal corporate users. typical apl\360 offerings at the time gave users 16kbyte (or possibly 32kbyte) workspaces. cms\apl opened that up so that several mbyte workspaces were then possible. This enabled using cms\apl for some real, live business modeling uses (some of the stuff typically done with spreadsheets today) ... and cambridge got some corporate hdqtrs users ... who proceeded to load the most sensitive corporate business data on the cambridge system (in support of their business modeling activities).
the 370 virtual memory activity and the corporate business modeling involved some of the most sensitive and critical corporate information of the period ... being hosted on the cambridge cp67 system. At the same time, there was significant use of the cambridge cp67 system by people (students and others) at various universities in the cambridge area (mit, harvard, bu, ne, etc). This created some amount of security tension ... having to provide the very highest level of corporate security protection on the same platform that was being used by univ. students and other non-employees.
misc. past mention of cambridge cp67 system and security exposure
http://www.garlic.com/~lynn/2001i.html#44 Withdrawal Announcement 901-218 - No More 'small machines'
http://www.garlic.com/~lynn/2002h.html#60 Java, C++ (was Re: Is HTML dead?)
http://www.garlic.com/~lynn/2004b.html#31 determining memory size
http://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
http://www.garlic.com/~lynn/2004h.html#27 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005p.html#20 address space
http://www.garlic.com/~lynn/2005p.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
http://www.garlic.com/~lynn/2006b.html#23 Seeking Info on XDS Sigma 7 APL
http://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
http://www.garlic.com/~lynn/2006h.html#14 Security
http://www.garlic.com/~lynn/2006n.html#2 The System/360 Model 20 Wasn't As Bad As All That
http://www.garlic.com/~lynn/2006o.html#19 Source maintenance was Re: SEQUENCE NUMBERS
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 09:15:07 -0700

"John Coleman" <jcoleman@franciscan.edu> writes:
lets say it is implicitly part of the "C" programming style.
PLI and many system implementations have conventions where buffers have actual lengths ... both max and current. for the simple operation of copying a string from one place to another ... there is an explicit length of the origin string and an explicit max. length of the target ... and the string copy/move operations explicitly honor the lengths ... i.e. lots of buffer overflows aren't a program subscripting one element at a time ... they involve a move/copy that is apparently totally ignorant of the target location size as well as the origin location size. With explicit lengths, some large number of buffer overflows can never occur whether or not SUBSCRIPTRANGE is enabled.
misc. past post on buffer overflows
http://www.garlic.com/~lynn/subintegrity.html#overflow
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 09:34:28 -0700

"John Coleman" <jcoleman@franciscan.edu> writes:
for some straight-forward application flow ... it turned into relatively straight-forward readable code. however, there were some highly optimized kernel routines where the if/then/else/do/while/until could be less clear than the original assembler. it typically involved a larger amount of state testing with all sorts of conditional processing. I eventually implemented a nesting threshold limit ... and dropped back to GOTOs when the threshold was exceeded.
recent post about some of the Fort Knox people (effort to replace many
of the internal corporate microprocessors with 801) that were looking
at using 801 for low/mid range 370s ... and possibility of doing 370
machine language program analysis and a form of JIT.
http://www.garlic.com/~lynn/2006u.html#29 To RISC or not to RISC
http://www.garlic.com/~lynn/2006u.html#31 To RISC or not to RISC
http://www.garlic.com/~lynn/2006u.html#32 To RISC or not to RISC
and further fort knox drift
http://www.garlic.com/~lynn/2006u.html#37 To RISC or not to RISC
http://www.garlic.com/~lynn/2006u.html#38 To RISC or not to RISC
and past thread about some of the characteristics/issues with doing
such machine instruction analysis and if/else program flow construction
http://www.garlic.com/~lynn/2006e.html#32 transputers again was: The demise of Commodore
http://www.garlic.com/~lynn/2006p.html#1 Greatest Software Ever Written?
http://www.garlic.com/~lynn/2006p.html#4 Greatest Software Ever Written?
and then there is other topic drift on subject of 3-value logic
http://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
http://www.garlic.com/~lynn/2004f.html#2 Quote of the Week
http://www.garlic.com/~lynn/2004l.html#75 NULL
http://www.garlic.com/~lynn/2005.html#15 Amusing acronym
http://www.garlic.com/~lynn/2005b.html#17 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005i.html#35 The Worth of Verisign's Brand
http://www.garlic.com/~lynn/2005m.html#19 Implementation of boolean types
http://www.garlic.com/~lynn/2005r.html#15 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005t.html#20 So what's null then if it's not nothing?
http://www.garlic.com/~lynn/2005t.html#23 So what's null then if it's not nothing?
http://www.garlic.com/~lynn/2005t.html#33 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005u.html#12 3vl 2vl and NULL
http://www.garlic.com/~lynn/2006e.html#34 CJ Date on Missing Information
http://www.garlic.com/~lynn/2006q.html#22 3 value logic. Why is SQL so special?
http://www.garlic.com/~lynn/2006q.html#23 3 value logic. Why is SQL so special?
http://www.garlic.com/~lynn/2006q.html#29 3 value logic. Why is SQL so special?
http://www.garlic.com/~lynn/2006s.html#27 Why these original FORTRAN quirks?
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 'Innovation' and other crimes
Newsgroups: alt.folklore.computers
Date: Fri, 22 Dec 2006 10:30:38 -0700

"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
... all kinds of drift ...
lots of past posts about threats, exploits, vulnerabilities, and
fraud
http://www.garlic.com/~lynn/subintegrity.html#fraud
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple mappings
Newsgroups: comp.arch
Date: Fri, 22 Dec 2006 10:52:15 -0700

"Eric P." <eric_pattison@sympaticoREMOVE.ca> writes:
MVS inherited global mapping from its real-memory ancestors ... which had a paradigm that was also extensively pointer-passing (not only the kernel ... but all the subsystem services that existed outside of the kernel ... everything essentially existing in a single *real* address space).
There was a difficult period for MVS before the move from 24-bit virtual address spaces to 31-bit virtual address spaces ... where the system mapping could take 12-13 mbytes of an application's virtual 16mbytes (in some cases leaving only 3mbytes for the application).
for lots of topic drift ...
While the MVS kernel got global mapping ... there was a problem with the non-kernel, semi-privileged, subsystem services (things that might look like "demons" in other environments) ... which got their own individual "application" address spaces. The problem was that the paradigm was still extensively pointer-passing ... even between general applications and subsystem services.
The initial cut was a "COMMON" segment ... something that appeared in all address spaces ... where an application could stuff some parameters and then subsystem service could access them using passed pointer.
The next cut at addressing this was dual-address space mode ... actually having two active virtual address spaces simultaneously ... pointer was passed to subsystem service in a different address space ... and that subsystem service could use the pointer to reach directly into the application address space. The downside was that it still required a kernel transition to get to/from the subsystem service.
Dual-address space was then generalized to multiple address spaces and the "program call" function was added. program call used a hardware defined table of subsystem functions and the operations that happen on the call (like how the virtual address space pointers change) ... it allows transition directly from the application address space into the subsystem address space (and back) w/o requiring a software transition thru the kernel. the pointer-passing paradigm continues ... and there is now all sorts of cross virtual address space activity going on.
From: lynn@GARLIC.COM (Anne & Lynn Wheeler)
Subject: Re: IBM sues maker of Intel-based Mainframe clones
Newsgroups: bit.listserv.ibm-main
Date: 22 Dec 2006 10:12:18 -0800

Nigel Hadfield wrote:
minor folklore ... my dumprx was selected for use as diagnostic
(software system) support for the 3090 service processor ... in the
early 80s, i had (re-)implemented a large superset of IPCS function
totally in rexx
http://www.garlic.com/~lynn/submain.html#dumprx
... topic drift, a couple recent postings mentioning vm370 release 6
http://www.garlic.com/~lynn/2006b.html#2 Mount a tape
http://www.garlic.com/~lynn/2006t.html#24 CMSBACK
http://www.garlic.com/~lynn/2006w.html#25 To RISC or not to RISC
... for other drift ... this is crudely HTML'ed version of GCARD IOS3270
http://www.garlic.com/~lynn/gcard.html
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Executing both branches in advance ?
Newsgroups: comp.arch
Date: Fri, 22 Dec 2006 16:16:13 -0700

"Stephen Sprunk" <stephen@sprunk.org> writes:
if processor thruput is the primary system bottleneck and the processor chip is on the order of 1/10th the overall system cost ... then you may get a 10:1 ROI ... improvement in overall system thruput ... for investment in processor improvement. the question becomes whether there is anything else, for the same investment, that results in a bigger improvement in overall system thruput.
there is an analogous discussion about multiprocessor configurations
regarding the incremental system thruput improvement for each
additional processor added ... this is a thread on mainframe LSPR
ratios ... avg. configuration MIP thruput numbers for different
mainframe models as number of processors in the configuration are
increased
http://www.garlic.com/~lynn/2006l.html#41 One or two CPUs - the pros & cons
and
http://www.redbooks.ibm.com/abstracts/sg246645.html
from above ...
a 2094-701 (one processor) is rated at 608 SI MIPs and a 2094-702 (two processor) is rated at 1193 SI MIPs ... for an increase of 585 SI MIPs for twice the processors ... 100percent increase in processor hardware represents only 96percent increase in processor thuput.
a 2094-731 (31 processors) is rated at 11462 SI MIPs and a 2094-732 (32 processors) is rated at 11687 SI MIPs, an increase of 225 SI MIPs. at the 32-processor configuration, adding the equivalent of a 608 SI MIP processor represents only a 225/608 or 37% effective system thruput increase (compared to a single processor). that is way less than the (hypothetical) equivalent of adding 25% more processor circuits and getting a 24% effective processor thruput increase.
24/25 is an effective net benefit of 96% ... which is a similar net benefit to going from a 2094-701 (one processor) to a 2094-702 (two processors) configuration (a 100% increase in the number of processor circuits for a 96% improvement in thruput).
and for whatever reason, various installations continue to incrementally add processors ... even as the incremental benefit of each added processor continues to drop. part of the issue is that the overall system thruput improvement is measured against total system cost ... and can still be significantly better than the cost of each incremental processor.
lots of past multiprocessor postings
http://www.garlic.com/~lynn/subtopic.html#smp
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Multiple mappings Newsgroups: comp.arch Date: Fri, 22 Dec 2006 16:57:12 -0700"mike" <mike@mike.net> writes:
this is one of the problems with tss/360 ... on 360/67 ... vis-a-vis cp67 with cms on the same hardware. even tho cms had many of the same problems with program relocation and no paged mapped filesystem ... it would attempt to do program loading in up to 64kbyte disk transfers (instead of 4kbytes at a time on a demand page fault basis).
tss/360 also had other issues, like bloat ... excessively long pathlengths as well as fixed kernel real storage requirements that possibly represented 80% of total real storage (not only excessive demand page faulting but an excessive fixed kernel size contributing to page thrashing)
one of the things that i tried to do when i implemented page mapped
filesystem for cms in the early 70s ... was to preserve the benefits
of large block disk transfers for things like program loading ... and
avoid regressing to single 4k at a time demand page faulting.
http://www.garlic.com/~lynn/submain.html#mmap
part of the trick is to have page mapping but still preserve a paradigm supporting large block transfers (and minimize degenerating too much to 4k-at-a-time demand page faults).
for some trivia ... s/38 was supposedly built by some of the refugees
retreating to rochester after the future system effort failed (future
system including the concept of one-level store ... some of it coming
from the tss/360 effort) ... misc. future system postings
http://www.garlic.com/~lynn/submain.html#futuresys
of course even after doing the page mapped filesystem for cms
... there was still a large part of the infrastructure that was
oriented around program relocation paradigm ... various postings on
the trials and tribulations attempting to deal with those problems
http://www.garlic.com/~lynn/submain.html#adcon
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers Date: Sat, 23 Dec 2006 08:00:26 -0700jmfbahciv writes:
sjr had a 370/195 running mvt for batch jobs ... job queue backlog could be several weeks.
i've also mentioned somebody at the palo alto science center getting turn-around every 2-3 months. the palo alto science center had a 370/145 for vm timesharing. they basically configured their job for background running on vm (soaking up spare cycles mostly offshift and weekends). they found that they would get slightly more computation in a three-month period out of their 370/145 background than from the turn-around on the sjr 370/195 (the 370/195 peaked around 10mips for appropriately tuned computational jobs ... compared to around 300kips for the 370/145).
anyway ... i was playing part time in bldg14 (disk engineering) and
bldg15 (disk product test).
http://www.garlic.com/~lynn/subtopic.html#disk
they both had dedicated 370s for running various kinds of stand-alone regression testing. much of this was engineering and prototype hardware. they had done some experimenting with trying to run the MVS operating system on these machines (to be able to use the operating system support for testing more than one device concurrently) ... but found the MTBF for MVS was about 15 minutes (hang or crash) just testing a single device (i.e. the operational and failure characteristics of engineering and prototype devices were extremely hostile to normal operating system operation).
so i thot this might be an interesting problem and took on rewriting the operating system i/o subsystem to be never fail ... regardless of what the i/o devices did, the i/o subsystem would not precipitate a system crash or hang. the result was that they could now use the processors to do regression testing on multiple different devices concurrently (rather than having carefully scheduled dedicated processor time for testing each device).
the interesting scenario was that they now had all these processors ... and even when testing multiple concurrent devices ... only a couple percent of each processor was being used. that left a whole lot of spare processor cycles that could be used for other stuff.
so things got a little more interesting when bldg. 15 (product test lab) got an early engineering 3033. the 370/195 would hit 10mips for carefully tuned applications but normally ran around 5mips for most codes (mostly branch instructions stalling the pipeline). the 3033 ran about 4.5mips. so the air-bearing simulation work was maybe getting a couple hrs a month on the 370/195 ... but all of a sudden we could move it off the 370/195 in bldg. 28 over to the 3033 in bldg. 15 and provide it with almost unlimited 3033 time. now, not only were we enabling a lot of concurrent device testing, effectively on demand (which previously required scheduled dedicated time), but we were also able to provide nearly unlimited computational time for the air-bearing simulation work.
for topic drift, bldg. 14 was where the person that got the raid
patent in the 70s was located ... misc. recent posts this year
mentioning raid:
http://www.garlic.com/~lynn/2006b.html#39 another blast from the past
http://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh
http://www.garlic.com/~lynn/2006d.html#1 Hercules 3.04 announcement
http://www.garlic.com/~lynn/2006d.html#3 Hercules 3.04 announcement
http://www.garlic.com/~lynn/2006d.html#24 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#26 Caller ID "spoofing"
http://www.garlic.com/~lynn/2006h.html#34 The Pankian Metaphor
http://www.garlic.com/~lynn/2006l.html#14 virtual memory
http://www.garlic.com/~lynn/2006o.html#9 Pa Tpk spends $30 million for "Duet" system; but benefits are unknown
http://www.garlic.com/~lynn/2006p.html#9 New airline security measures in Europe
http://www.garlic.com/~lynn/2006p.html#47 "25th Anniversary of the Personal Computer"
http://www.garlic.com/~lynn/2006r.html#37 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006t.html#5 Are there more stupid people in IT than there used to be?
http://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
http://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#37 Is this true? (Were gotos really *that* bad?)
http://www.garlic.com/~lynn/2006x.html#15 The Future of CPUs: What's After Multi-Core?
...
and although shugart left before i got there, i believe shugart's
offices would have also been in bldg. 14 (recent mention of
shugart's passing) ... misc. past posts mentioning shugart
http://www.garlic.com/~lynn/2000.html#9 Computer of the century
http://www.garlic.com/~lynn/2002.html#17 index searching
http://www.garlic.com/~lynn/2002l.html#50 IBM 2311 disk drive actuator and head assembly
http://www.garlic.com/~lynn/2004.html#5 The BASIC Variations
http://www.garlic.com/~lynn/2004j.html#36 A quote from Crypto-Gram
http://www.garlic.com/~lynn/2004l.html#14 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004p.html#0 Relational vs network vs hierarchic databases
http://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
http://www.garlic.com/~lynn/2005b.html#1 Foreign key in Oracle Sql
http://www.garlic.com/~lynn/2005c.html#9 The mid-seventies SHARE survey
http://www.garlic.com/~lynn/2005h.html#37 Software for IBM 360/30
http://www.garlic.com/~lynn/2006n.html#30 CRAM, DataCell, and 3850
http://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders?
...
misc. past posts mentioning air-bearing simulation work
http://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
http://www.garlic.com/~lynn/2002j.html#30 Weird
http://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
http://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
http://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
http://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
http://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
http://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
http://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004b.html#15 harddisk in space
http://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
http://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
http://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005f.html#5 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
http://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
http://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#13 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006d.html#14 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006l.html#6 Google Architecture
http://www.garlic.com/~lynn/2006l.html#18 virtual memory
http://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#41 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#18 Why so little parallelism?
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers Date: Sat, 23 Dec 2006 08:23:45 -0700jmfbahciv writes:
for some, possibly as important as the various compute-bound stuff is a lot of the business-critical dataprocessing (things like payroll, contracts, disbursements, etc)
there was a news article in the last couple months about DFAS, after having been temporarily relocated to Philadelphia (from new orleans in the wake of the hurricane/flood), now being moved to denver.
DFAS home page
http://www.dod.mil/dfas/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: "The Elements of Programming Style" Newsgroups: alt.folklore.computers Date: Sat, 23 Dec 2006 09:05:37 -0700CBFalconer <cbfalconer@yahoo.com> writes:
the mainframe tcp/ip product had been implemented in vs/pascal ... i
had done the enhancements to support rfc 1044 ... which had
significantly higher thruput than the base system (1mbyte/sec
vis-a-vis 44kbytes/sec ... and something like a 3-4 orders of magnitude
improvement in bytes transferred per unit of processor time)
http://www.garlic.com/~lynn/subnetwork.html#1044
as far as i knew, the vs/pascal implementation never had any buffer overflow exploits ... the ingrained paradigm was that all string and buffer operations were always done with explicit lengths.
in much of the 90s, the major cause of exploits on the internet was buffer overflow associated with code implemented in the C language. the common convention has been that static and/or dynamic buffers are specified by the programmer ... these buffers carry no associated length information ... the management of buffer-related length information is forced on the programmer ... and requires frequent and constant diligence (in every single operation involving such buffers). in pascal, the buffer length information is part of the characteristics of the buffer ... and most operations directly utilize the available buffer length information w/o additional programmer assistance ... it happens automatically as part of the semantics of the operations.
in C, there are widely used programming practices where the programmer either fails to provide for explicit length-sensitive operations or makes a mistake in providing for them (in pascal, the semantics were such that it didn't require the programmer to do or not do something ... it just happened automatically ... and as a result, there were fewer operations where it was possible for a programmer to make a mistake or forget to do something, aka it was an automatic part of the infrastructure and frequently of the semantics of the operation).
as automatic scripting became more prevalent, the exploit statistics shifted to about half buffer overflows and half automatic scripting by 1999. with the rise of things like phishing, the exploit statistics then shifted to 1/3rd social engineering, 1/3rd automatic scripting, and 1/3rd buffer overflows.
i tried to do some analysis of some of the exploit & vulnerability databases. however, the descriptive information has been free form and tended to have a great deal of variability. since then there have been a number of efforts to introduce a lot more standardization in exploit/vulnerability descriptions.
posting with some analysis of CVE entries
http://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE
http://www.garlic.com/~lynn/2004q.html#74 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005k.html#3 Public disclosure of discovered vulnerabilities
postings about some published exploit reports coming up with nearly
same percentages as my analysis (for buffer overrun/overflow)
http://www.garlic.com/~lynn/2005b.html#20 Buffer overruns
http://www.garlic.com/~lynn/2005c.html#28 Buffer overruns
and past posts mentioning buffer related exploits
http://www.garlic.com/~lynn/subintegrity.html#overflow
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: "The Elements of Programming Style" Newsgroups: alt.folklore.computers Date: Sat, 23 Dec 2006 09:21:59 -0700Anne & Lynn Wheeler <lynn@garlic.com> writes:
and 3-or-4 value logic
here is previous post in a buffer overrun thread
http://www.garlic.com/~lynn/2005b.html#17 Buffer overruns
making some reference to analysing 360/370 assembler listings and attempting to produce higher-level program flow constructs.
as mentioned in the above post, one of the issues was that 360/370 instructions provided for a two-bit condition code (four states) and there were numerous code segments (especially in highly optimized kernel code) which made use of 3 and/or 4 "value" logic ... there was a condition setting and then 3 or 4 resulting code paths as the result of the condition setting.
the if/then construct provided for much simpler binary true/false type of logic operations.
in addition, some of the complex branch implementations were actually much simpler to understand than attempts to represent them with if/then binary logic nested 20-30 (or sometimes more) levels deep.
past posts mentioning buffer overruns
http://www.garlic.com/~lynn/subintegrity.html#overflow
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Future of CPUs: What's After Multi-Core? Newsgroups: alt.folklore.computers Date: Sat, 23 Dec 2006 09:46:30 -0700Anne & Lynn Wheeler <lynn@garlic.com> writes:
the product test lab also got an early engineering model of the 4341. it turned out that at the time i had better access to a 4341 for running various things than the endicott performance metrics group (responsible for doing performance profiling of the 4341). as a result i got asked to make various runs on the bldg. 15 4341 in support of various activities.
for instance the previous post in this thread
http://www.garlic.com/~lynn/2006x.html#18 The Future of CPUs: What's After Multi-Core?
mentioned customers snarfing up 4331s and 4341s in orders of multiple
hundreds at a time. the above post also references an older post/email
http://www.garlic.com/~lynn/2001m.html#15 departmental servers
where the air force data systems (that was a multics installation) ordered 210 in the late 70s.
few old posts mentioning doing benchmarks for endicott on
bldg. 15s 4341
http://www.garlic.com/~lynn/2000d.html#0 Is a VAX a mainframe?
http://www.garlic.com/~lynn/2000d.html#7 4341 was "Is a VAX a mainframe?"
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2002b.html#0 Microcode?
http://www.garlic.com/~lynn/2002i.html#7 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002i.html#19 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002k.html#4 misc. old benchmarks (4331 & 11/750)
one of the benchmarks done for endicott was rain&rain4 (from national lab) ... results from late 70s:
Following are times for floating point fortran job running same fortran:

             158          3031         4341
Rain         45.64 secs   37.03 secs   36.21 secs
Rain4        43.90 secs   36.61 secs   36.13 secs

also times approx:

             145          168-3        91
             145 secs.    9.1 secs     6.77 secs

rain/rain4 was from Lawrence Radiation lab ... and ran on cdc6600 in 35.77 secs.

... snip ...
effectively, customers were ordering hundreds (or thousands) of machines at a time (machines that were approx. the equivalent of a 6600).
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Toyota set to lift crown from GM Newsgroups: alt.folklore.computers Date: Sat, 23 Dec 2006 10:05:24 -0700Toyota set to lift crown from GM
from above:
TOKYO: Toyota Motor said Friday that it planned to sell 9.34 million vehicles next year, a figure that analysts said would be big enough to put it ahead of the troubled General Motors as the world's largest auto company.
... snip ...
sort of an update on past threads this year in a.f.c. mentioning us
automobile industry
http://www.garlic.com/~lynn/2006.html#23 auto industry
http://www.garlic.com/~lynn/2006.html#43 Sprint backs out of IBM outsourcing deal
http://www.garlic.com/~lynn/2006.html#44 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor
http://www.garlic.com/~lynn/2006m.html#49 The Pankian Metaphor (redux)
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: NSFNET (long post warning) Newsgroups: alt.folklore.computers Date: Sat, 23 Dec 2006 10:42:50 -0700while the arpanet 1jan83 switch-over to internetworking protocol was somewhat the technology basis for the internet, the operational basis for the modern internet (large scale internetworking and internetworking backbone) would be NSFNET.
references here
http://www.garlic.com/~lynn/internet.htm#nsfnet
http://www.garlic.com/~lynn/rfcietf.htm#history
the pre-1jan83 infrastructure failed to scale up very well
... especially compared to the internal corporate network
http://www.garlic.com/~lynn/subnetwork.html#internalnet
with issues like needing to perform synchronized, system-wide
network node maint
http://www.garlic.com/~lynn/2006k.html#10 Arpa address
and continuing "high-rate" of growth, expecting to reach 100 nodes by
(sometime in) 1983 (from arpanet newsletter) at a point when the
internal network was approaching 1000 nodes
http://www.garlic.com/~lynn/2006r.html#7 Was FORTRAN buggy?
recent post discussing some of the issues
http://www.garlic.com/~lynn/2006x.html#8 vmshare
some recent NSFNET related postings:
http://www.garlic.com/~lynn/2005d.html#13 Cerf and Kahn receive Turing award
http://www.garlic.com/~lynn/2006s.html#50 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#6 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET
http://www.garlic.com/~lynn/2006w.html#43 IBM sues maker of Intel-based Mainframe clones
and old NSFNET/HSDT related email (in sequence)
http://www.garlic.com/~lynn/2006t.html#email850506
http://www.garlic.com/~lynn/2006w.html#email850607
http://www.garlic.com/~lynn/2006t.html#email850930
http://www.garlic.com/~lynn/2006t.html#email860407
http://www.garlic.com/~lynn/2006s.html#email860417
and was leading up to a number of activities, including scheduling the
meeting referenced here
http://www.garlic.com/~lynn/2005d.html#email860501
then the people that would be attending the meeting were called and
told the meeting was canceled ... accompanied by some number of
efforts to push SNA/VTAM solutions on NSF ... as referenced in this
email
http://www.garlic.com/~lynn/2006w.html#email870109
there was some followup efforts by NSF to extract HSDT activity under
a number of different guises, old email reference:
http://www.garlic.com/~lynn/2006s.html#email870515
lots of past posts mentioning various HSDT (high speed data transport)
project related activities over the years
http://www.garlic.com/~lynn/subnetwork.html#hsdt
part of HSDT included having done the RFC 1044 implementation for the
mainframe tcp/ip product
http://www.garlic.com/~lynn/subnetwork.html#1044
later some of the HSDT project activity morphed into 3-tier
architecture
http://www.garlic.com/~lynn/subnetwork.html#3tier
and along the way, we had opportunity to play in high speed protocol
activities
http://www.garlic.com/~lynn/subnetwork.html#xtphsp
and another position from the dumb terminal communication operation
http://www.garlic.com/~lynn/2006i.html#17 blast from the past on reliable communication
from this email
http://www.garlic.com/~lynn/2006i.html#email890901
other topic drift regarding the dumb terminal communication operation
http://www.garlic.com/~lynn/2006x.html#7 vmshare
http://www.garlic.com/~lynn/2006x.html#8 vmshare
my wife had done a stint in POK in charge of loosely-coupled
architecture (mainframe for cluster)
http://www.garlic.com/~lynn/submain.html#shareddata
we used some amount of that along with hsdt and 3-tier when we started
the HA/CMP product project
http://www.garlic.com/~lynn/subtopic.html#hacmp
more topic drift, some recent posts about HA/CMP scaleup with MEDUSA:
http://www.garlic.com/~lynn/2006w.html#13 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006w.html#14 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006w.html#20 cluster-in-a-rack
http://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#41 Why so little parallelism?
http://www.garlic.com/~lynn/2006x.html#3 Why so little parallelism?
http://www.garlic.com/~lynn/2006x.html#11 The Future of CPUs: What's After Multi-Core?
for other drift ... two of the people that were at this HA/CMP scaleup
meeting
http://www.garlic.com/~lynn/95.html#13
http://www.garlic.com/~lynn/96.html#15
later showed up responsible for something called a commerce server at
a small client/server startup. they wanted to process payment
transactions on their server and we got called in as consultants
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3
for something that came to be called electronic commerce or e-commerce
misc. past posts mentioning NSFNET
http://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
http://www.garlic.com/~lynn/98.html#59 Ok Computer
http://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
http://www.garlic.com/~lynn/99.html#37a Internet and/or ARPANET?
http://www.garlic.com/~lynn/99.html#37b Internet and/or ARPANET?
http://www.garlic.com/~lynn/99.html#38c Internet and/or ARPANET?
http://www.garlic.com/~lynn/99.html#40 [netz] History and vision for the future of Internet - Public Question
http://www.garlic.com/~lynn/99.html#138 Dispute about Internet's origins
http://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
http://www.garlic.com/~lynn/2000.html#49 IBM RT PC (was Re: What does AT stand for ?)
http://www.garlic.com/~lynn/2000c.html#26 The first "internet" companies?
http://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#78 Free RT monitors/keyboards
http://www.garlic.com/~lynn/2000d.html#16 The author Ronda Hauben fights for our freedom.
http://www.garlic.com/~lynn/2000d.html#19 Comrade Ronda vs. the Capitalist Netmongers
http://www.garlic.com/~lynn/2000d.html#43 Al Gore: Inventing the Internet...
http://www.garlic.com/~lynn/2000d.html#56 Is Al Gore The Father of the Internet?
http://www.garlic.com/~lynn/2000d.html#58 Is Al Gore The Father of the Internet?
http://www.garlic.com/~lynn/2000d.html#59 Is Al Gore The Father of the Internet?
http://www.garlic.com/~lynn/2000d.html#63 Is Al Gore The Father of the Internet?
http://www.garlic.com/~lynn/2000d.html#70 When the Internet went private
http://www.garlic.com/~lynn/2000d.html#71 When the Internet went private
http://www.garlic.com/~lynn/2000d.html#72 When the Internet went private
http://www.garlic.com/~lynn/2000d.html#73 When the Internet went private
http://www.garlic.com/~lynn/2000d.html#74 When the Internet went private
http://www.garlic.com/~lynn/2000d.html#77 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#5 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#10 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#11 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#28 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2000e.html#29 Vint Cerf and Robert Kahn and their political opinions
http://www.garlic.com/~lynn/2000e.html#31 Cerf et.al. didn't agree with Gore's claim of initiative.
http://www.garlic.com/~lynn/2000f.html#44 Al Gore and the Internet (Part 2 of 2)
http://www.garlic.com/~lynn/2000f.html#47 Al Gore and the Internet (Part 2 of 2)
http://www.garlic.com/~lynn/2000f.html#50 Al Gore and the Internet (Part 2 of 2)
http://www.garlic.com/~lynn/2000f.html#51 Al Gore and the Internet (Part 2 of 2)
http://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
http://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
http://www.garlic.com/~lynn/2001e.html#76 Stoopidest Hardware Repair Call?
http://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
http://www.garlic.com/~lynn/2001i.html#6 YKYGOW...
http://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002g.html#45 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
http://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
http://www.garlic.com/~lynn/2002h.html#80 Al Gore and the Internet
http://www.garlic.com/~lynn/2002h.html#82 Al Gore and the Internet
http://www.garlic.com/~lynn/2002h.html#85 Al Gore and the Internet
http://www.garlic.com/~lynn/2002h.html#86 Al Gore and the Internet
http://www.garlic.com/~lynn/2002i.html#15 Al Gore and the Internet
http://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
http://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
http://www.garlic.com/~lynn/2002k.html#12 old/long NSFNET ref
http://www.garlic.com/~lynn/2002k.html#18 Unbelievable
http://www.garlic.com/~lynn/2002k.html#56 Moore law
http://www.garlic.com/~lynn/2002l.html#13 notwork
http://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
http://www.garlic.com/~lynn/2002o.html#41 META: Newsgroup cliques?
http://www.garlic.com/~lynn/2003c.html#5 Network separation using host w/multiple network interfaces
http://www.garlic.com/~lynn/2003c.html#11 Networks separation using host w/multiple network interfaces
http://www.garlic.com/~lynn/2003c.html#46 difference between itanium and alpha
http://www.garlic.com/~lynn/2003d.html#9 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003d.html#13 COMTEN- IBM networking boxes
http://www.garlic.com/~lynn/2003d.html#59 unix
http://www.garlic.com/~lynn/2003g.html#36 netscape firebird contraversy
http://www.garlic.com/~lynn/2003h.html#7 Why did TCP become popular ?
http://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
http://www.garlic.com/~lynn/2003j.html#76 1950s AT&T/IBM lack of collaboration?
http://www.garlic.com/~lynn/2003l.html#58 Thoughts on Utility Computing?
http://www.garlic.com/~lynn/2003m.html#28 SR 15,15
http://www.garlic.com/~lynn/2004b.html#46 ARPAnet guest accounts, and longtime email addresses
http://www.garlic.com/~lynn/2004g.html#12 network history
http://www.garlic.com/~lynn/2004l.html#0 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004l.html#1 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004l.html#3 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004l.html#5 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004l.html#7 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004m.html#62 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
http://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
http://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005d.html#10 Cerf and Kahn receive Turing award
http://www.garlic.com/~lynn/2005d.html#11 Cerf and Kahn receive Turing award
http://www.garlic.com/~lynn/2005e.html#46 Using the Cache to Change the Width of Memory
http://www.garlic.com/~lynn/2005j.html#30 IBM Plugs Big Iron to the College Crowd
http://www.garlic.com/~lynn/2005j.html#58 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
http://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
http://www.garlic.com/~lynn/2005n.html#28 Data communications over telegraph circuits
http://www.garlic.com/~lynn/2005n.html#30 Data communications over telegraph circuits
http://www.garlic.com/~lynn/2005p.html#10 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005p.html#16 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#3 winscape?
http://www.garlic.com/~lynn/2005q.html#6 What are the latest topic in TCP/IP
http://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
http://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
http://www.garlic.com/~lynn/2005q.html#46 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#32 How does the internet really look like ?
http://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
http://www.garlic.com/~lynn/2005t.html#3 Privacy issue - how to spoof/hide IP when accessing email / usenet servers ?
http://www.garlic.com/~lynn/2005u.html#53 OSI model and an interview
http://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
http://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#36 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#38 The Pankian Metaphor
http://www.garlic.com/~lynn/2006e.html#39 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#12 Barbaras (mini-)rant
http://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS clock in PCs?
http://www.garlic.com/~lynn/2006i.html#21 blast from the past on reliable communication
http://www.garlic.com/~lynn/2006j.html#34 Arpa address
http://www.garlic.com/~lynn/2006j.html#43 virtual memory
http://www.garlic.com/~lynn/2006j.html#46 Arpa address
http://www.garlic.com/~lynn/2006m.html#10 An Out-of-the-Main Activity
http://www.garlic.com/~lynn/2006r.html#6 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006s.html#20 real core
http://www.garlic.com/~lynn/2006s.html#51 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#55 What's a mainframe?
http://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2006v.html#30 vmshare
http://www.garlic.com/~lynn/2006w.html#26 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#29 Descriptive term for reentrant program that nonetheless is
http://www.garlic.com/~lynn/2006w.html#38 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#39 Why so little parallelism?
http://www.garlic.com/~lynn/2006w.html#52 IBM sues maker of Intel-based Mainframe clones
http://www.garlic.com/~lynn/2006x.html#7 vmshare
=====
misc. past posts mentioning the 1jan83 switchover to internetworking protocol
http://www.garlic.com/~lynn/2000e.html#18 Is Al Gore The Father of the Internet?^
http://www.garlic.com/~lynn/2001b.html#81 36-bit MIME types, PDP-10 FTP
http://www.garlic.com/~lynn/2001c.html#4 what makes a cpu fast
http://www.garlic.com/~lynn/2001e.html#16 Pre ARPAnet email?
http://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
http://www.garlic.com/~lynn/2001j.html#28 Title Inflation
http://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
http://www.garlic.com/~lynn/2001l.html#35 Processor Modes
http://www.garlic.com/~lynn/2001m.html#48 Author seeks help - net in 1981
http://www.garlic.com/~lynn/2001m.html#54 Author seeks help - net in 1981
http://www.garlic.com/~lynn/2001n.html#5 Author seeks help - net in 1981
http://www.garlic.com/~lynn/2001n.html#6 Author seeks help - net in 1981
http://www.garlic.com/~lynn/2001n.html#87 A new forum is up! Q: what means nntp
http://www.garlic.com/~lynn/2002.html#32 Buffer overflow
http://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
http://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
http://www.garlic.com/~lynn/2002g.html#19 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002g.html#30 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002g.html#35 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002g.html#40 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002g.html#71 Coulda, Woulda, Shoudda moments?
http://www.garlic.com/~lynn/2002h.html#5 Coulda, Woulda, Shoudda moments?
http://www.garlic.com/~lynn/2002h.html#11 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#79 Al Gore and the Internet
http://www.garlic.com/~lynn/2002j.html#64 vm marketing (cross post)
http://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
http://www.garlic.com/~lynn/2002l.html#48 10 choices that were critical to the Net's success
http://www.garlic.com/~lynn/2002o.html#17 PLX
http://www.garlic.com/~lynn/2002q.html#4 Vector display systems
http://www.garlic.com/~lynn/2002q.html#35 HASP:
http://www.garlic.com/~lynn/2003c.html#47 difference between itanium and alpha
http://www.garlic.com/~lynn/2003d.html#59 unix
http://www.garlic.com/~lynn/2003e.html#36 Use of SSL as a VPN
http://www.garlic.com/~lynn/2003f.html#0 early vnet & exploit
http://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual address translation
http://www.garlic.com/~lynn/2003g.html#44 Rewrite TCP/IP
http://www.garlic.com/~lynn/2003g.html#51 vnet 1000th node anniversary 6/10
http://www.garlic.com/~lynn/2003h.html#7 Why did TCP become popular ?
http://www.garlic.com/~lynn/2003h.html#16 Why did TCP become popular ?
http://www.garlic.com/~lynn/2003h.html#17 Why did TCP become popular ?
http://www.garlic.com/~lynn/2003i.html#32 A Dark Day
http://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
http://www.garlic.com/~lynn/2003l.html#0 One big box vs. many little boxes
http://www.garlic.com/~lynn/2003m.html#25 Microsoft Internet Patch
http://www.garlic.com/~lynn/2003n.html#44 IEN 45 and TCP checksum offload
http://www.garlic.com/~lynn/2003o.html#68 History of Computer Network Industry
http://www.garlic.com/~lynn/2003p.html#38 Mainframe Emulation Solutions
http://www.garlic.com/~lynn/2004.html#44 OT The First Mouse
http://www.garlic.com/~lynn/2004d.html#13 JSX 328x printing (portrait)
http://www.garlic.com/~lynn/2004e.html#12 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
http://www.garlic.com/~lynn/2004e.html#30 The attack of the killer mainframes
http://www.garlic.com/~lynn/2004f.html#30 vm
http://www.garlic.com/~lynn/2004f.html#32 Usenet invented 30 years ago by a Swede?
http://www.garlic.com/~lynn/2004f.html#35 Questions of IP
http://www.garlic.com/~lynn/2004g.html#7 Text Adventures (which computer was first?)
http://www.garlic.com/~lynn/2004g.html#8 network history
http://www.garlic.com/~lynn/2004g.html#12 network history
http://www.garlic.com/~lynn/2004g.html#26 network history
http://www.garlic.com/~lynn/2004g.html#30 network history
http://www.garlic.com/~lynn/2004g.html#31 network history
http://www.garlic.com/~lynn/2004g.html#32 network history
http://www.garlic.com/~lynn/2004g.html#33 network history
http://www.garlic.com/~lynn/2004k.html#30 Internet turns 35, still work in progress
http://www.garlic.com/~lynn/2004l.html#0 Xah Lee's Unixism
http://www.garlic.com/~lynn/2004m.html#26 Shipwrecks
http://www.garlic.com/~lynn/2004m.html#62 RISCs too close to hardware?
http://www.garlic.com/~lynn/2004n.html#42 Longest Thread Ever
http://www.garlic.com/~lynn/2004n.html#43 Internet turns 35 today
http://www.garlic.com/~lynn/2004p.html#13 Mainframe Virus ????
http://www.garlic.com/~lynn/2004q.html#44 How many layers does TCP/IP architecture really have ?
http://www.garlic.com/~lynn/2004q.html#56 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
http://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
http://www.garlic.com/~lynn/2005.html#45 OSI model and SSH, TCP, etc
http://www.garlic.com/~lynn/2005d.html#10 Cerf and Kahn receive Turing award
http://www.garlic.com/~lynn/2005d.html#44 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005d.html#63 Cranky old computers still being used
http://www.garlic.com/~lynn/2005e.html#39 xml-security vs. native security
http://www.garlic.com/~lynn/2005e.html#46 Using the Cache to Change the Width of Memory
http://www.garlic.com/~lynn/2005f.html#11 Mozilla v Firefox
http://www.garlic.com/~lynn/2005f.html#53 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005i.html#37 Secure FTP on the Mainframe
http://www.garlic.com/~lynn/2005k.html#5 IBM/Watson autobiography--thoughts on?
http://www.garlic.com/~lynn/2005l.html#16 Newsgroups (Was Another OS/390 to z/OS 1.4 migration
http://www.garlic.com/~lynn/2005n.html#16 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#36 Code density and performance?
http://www.garlic.com/~lynn/2005n.html#47 Anyone know whether VM/370 EDGAR is still available anywhere?
http://www.garlic.com/~lynn/2005n.html#52 ARP routing
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005p.html#16 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#0 HASP/ASP JES/JES2/JES3
http://www.garlic.com/~lynn/2005q.html#3 winscape?
http://www.garlic.com/~lynn/2005q.html#6 What are the latest topic in TCP/IP
http://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
http://www.garlic.com/~lynn/2005r.html#32 How does the internet really look like ?
http://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
http://www.garlic.com/~lynn/2005t.html#3 Privacy issue - how to spoof/hide IP when accessing email / usenet servers ?
http://www.garlic.com/~lynn/2005u.html#56 OSI model and an interview
http://www.garlic.com/~lynn/2006b.html#12 IBM 610 workstation computer
http://www.garlic.com/~lynn/2006b.html#35 Seeking Info on XDS Sigma 7 APL
http://www.garlic.com/~lynn/2006e.html#35 The Pankian Metaphor
http://www.garlic.com/~lynn/2006j.html#34 Arpa address
http://www.garlic.com/~lynn/2006j.html#45 Arpa address
http://www.garlic.com/~lynn/2006j.html#49 Arpa address
http://www.garlic.com/~lynn/2006k.html#1 Hey! Keep Your Hands Out Of My Abstraction Layer!
http://www.garlic.com/~lynn/2006k.html#6 Hey! Keep Your Hands Out Of My Abstraction Layer!
http://www.garlic.com/~lynn/2006k.html#10 Arpa address
http://www.garlic.com/~lynn/2006k.html#12 Arpa address
http://www.garlic.com/~lynn/2006k.html#32 PDP-1
http://www.garlic.com/~lynn/2006k.html#40 Arpa address
http://www.garlic.com/~lynn/2006k.html#42 Arpa address
http://www.garlic.com/~lynn/2006k.html#43 Arpa address
http://www.garlic.com/~lynn/2006k.html#53 Hey! Keep Your Hands Out Of My Abstraction Layer!
http://www.garlic.com/~lynn/2006k.html#56 Hey! Keep Your Hands Out Of My Abstraction Layer!
http://www.garlic.com/~lynn/2006l.html#25 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2006l.html#52 Mainframe Linux Mythbusting
http://www.garlic.com/~lynn/2006n.html#5 Not Your Dad's Mainframe: Little Iron
http://www.garlic.com/~lynn/2006p.html#31 "25th Anniversary of the Personal Computer"
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#6 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006x.html#8 vmshare
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Year-end computer bug could ground Shuttle
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 12:26:08 -0700

Brian Inglis <Brian.Inglis@SystematicSW.Invalid> writes:
minor topic drift ... merged payment taxonomy and glossary
... including terms from federal reserve website
http://www.garlic.com/~lynn/index.html#glosnote
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "The Elements of Programming Style"
Newsgroups: alt.folklore.computers
Date: Sat, 23 Dec 2006 12:40:50 -0700

Anne & Lynn Wheeler <lynn@garlic.com> writes:
one of the comparisons i draw, with regard to the tendency of programmers to make mistakes in string & buffer length management in a typical c language environment, is with the mistakes programmers make in assembler language managing register contents.
In a previous post, i referred to including register use-before-set checking in the PLI program that i wrote in the early 70s to analyze assembler listings ... this was because a frequent programmer "mistake" was in the management of register contents.
Later when I did dumprx
http://www.garlic.com/~lynn/submain.html#dumprx
recent reference:
http://www.garlic.com/~lynn/2006x.html#24 IBM sues maker of Intel-based Mainframe clones
i did work on analyzing failure scenarios and building automated failure analysis scripts ... looking for particular types of failure signatures.
A large percentage of the failures turned out to be programmer mistakes in the management of register contents.
Now, one of the benefits claimed for higher level languages is the automation of register content management ... freeing the programmer from having to do it (and also freeing the programmer from making mistakes when doing it).
My assertion is that the programmer burden (in typical C language environments) with regard to length management is analogous to the programmer burden in assembler language involved in managing register contents. Expert programmers can (usually) do a perfect job of managing the various length issues (and avoid creating buffer overflow vulnerabilities) ... just like expert programmers can (usually) do a perfect job of managing register contents in assembler language programs. However, my point is that the difficulty and burden involved in such management gives rise to similar types of programmer mistakes. The argument for eliminating the assembler programmer burden (with regard to management of register contents) applies equally to the typical C programmer burden (with regard to management of lengths).
misc. past posts mentioning c language buffer overflow issues
http://www.garlic.com/~lynn/subintegrity.html#overflow
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL security with server certificate compromised
Newsgroups: comp.security.misc
Date: Sat, 23 Dec 2006 14:23:59 -0700

"Gonzo" <gonzalo.diethelm@gmail.com> writes:
the (symmetric) session key is generated by the client, encrypted with the server's public key, and then transmitted to the server ... the server then obtains the session key value by decrypting it with the server's private key. aka, given access to the server's private key ... you can decrypt the transmitted session key for any SSL session with that server ... w/o having to resort to any brute force.
for a little topic drift, rfc 4772 announced today ... "Security
Implications of Using the Data Encryption Standard (DES)" which includes
discussion of brute-force attacks ... ref
http://www.garlic.com/~lynn/aadsm26.htm#16 Security Implications of Using the Data Encryption Standard (DES)
for various drifts ... lots of past posts mentioning SSL server digital
certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert
and lots of past posts mentioning (general) exploits, fraud,
vulnerabilities, and/or threats
http://www.garlic.com/~lynn/subintegrity.html#fraud
and lots of past posts discussing catch-22 for proposed improvements
in the SSL server digital certificate infrastructure
http://www.garlic.com/~lynn/subpubkey.html#catch22
basically implications of proposals for validation of SSL server
digital certificate applications which add digital signatures,
verifying the application digital signature by doing a real-time
retrieval of public keys onfile with the domain name infrastructure
.... aka basically a certificate-less infrastructure
http://www.garlic.com/~lynn/subpubkey.html#certless
which could then lead to everybody doing real-time retrieval of onfile
public keys ... eliminating the requirement for any digital
certificates. a certificate-less public key infrastructure proposal
from old 1981 email:
http://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network
http://www.garlic.com/~lynn/2006w.html#15 more secure communication over the network
http://www.garlic.com/~lynn/2006w.html#18 more secure communication over the network
a recent thread with discussion of some other SSL server issues/vulnerabilities
http://www.garlic.com/~lynn/2006v.html#49 Patent buster for a method that increases password security
http://www.garlic.com/~lynn/2006v.html#51 Patent buster for a method that increases password security
http://www.garlic.com/~lynn/2006w.html#0 Patent buster for a method that increases password security
http://www.garlic.com/~lynn/2006w.html#4 Patent buster for a method that increases password security
http://www.garlic.com/~lynn/2006w.html#5 Patent buster for a method that increases password security