List of Archived Posts

2006 Newsgroup Postings (02/26 - 03/07)

IBM 610 workstation computer
Hercules 3.04 announcement
IBM 610 workstation computer
Hercules 3.04 announcement
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
Hercules 3.04 announcement
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
Hercules 3.04 announcement
Statistics on program constant sizes?
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
Caller ID "spoofing"
Caller ID "spoofing"
IBM 610 workstation computer
Caller ID "spoofing"
Caller ID "spoofing"
Caller ID "spoofing"
Caller ID "spoofing"
When *not* to sign an e-mail message?
When *not* to sign an e-mail message?
When *not* to sign an e-mail message?
Fw: Tax chooses dead language - Austalia
When *not* to sign an e-mail message?
transputers again was Re: The demise of Commodore
Fw: Tax chooses dead language - Austalia
transputers again was Re: The demise of Commodore
transputers again was Re: The demise of Commodore
Caller ID "spoofing"

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 15:42:31 -0700
"Rostyslaw J. Lewyckyj" writes:
Oh sure, but unless the problem was such that you couldn't submit a second, or further independent jobs before you got back the output from the first job, then: On the 195 the process would have been like instructions in a pipeline. If you submitted jobs one week apart, the first results wouldn't be back until three months after submission. But then you would get a set of results out every week, as your jobs got to execute from the input queue. While on the 145, because the job was in the background to soak up available time, a second or subsequent jobs wouldn't run until the first job finished. So you would only get one set of results per month.

the issue on the 370/195 in bldg. 28 was that unless you had special priority ... the normal job execution backlog ... was, in fact, 3 months (even if all the pending jobs executed at peak 370/195 thruput). the 370/195 service in bldg. 28 also had operational stuff to catch individuals and/or organizations concurrently submitting multiple instances of the same work ... as an attempt to reduce their turn-around latency (frequently seen in university student environments, where countermeasures are usually developed to handle the situation).

there is a separate issue with concurrently executing jobs ... typically used in interactive environments and/or batch environments where no specific job was processor bound, the processor was the most expensive resource in the system, and the system was optimized for maximum processor utilization.

the issue with the pasc job was that it would only run for a couple hrs ... but the workload backlog on the 370/195 was, in fact, 3 months (at least for the job priority given their workload). some other workload might get higher priority and more frequent turn-around. note this was effectively three months turn-around before it even started the job (much more FIFO than any round-robin). i've posted before about the air-bearing simulation job that was getting something like one week turn-around from the bldg. 28 370/195 service (instead of 3 month turn-around) ... which was part of the effort that went into designing the thin-film floating heads for 3380 (and possibly considered higher priority to get out the thin-film floating disk head design).

now in the various concurrent workload scenarios ... like the 370/145 machine installed at pasc for providing interactive computing vm/cms service ... the scheduler started out basically being a combination of priority and round-robin. the original scheduler on cp67 that i saw (possibly adapted from ctss) was a ten-level priority scheduler. certain types of events would reset a task (or a brand new task) and place it at the best priority with a very short cpu time-slice (multiple tasks at the same level were handled fifo). once the task had executed for the very short cpu time-slice, it was bumped down to the next scheduling level (where it would get a slightly larger cpu time-slice but be scheduled for execution behind any processes at higher levels). concurrent, time-slice workload that required any amount of cpu resource quickly filtered thru all ten levels to the bottom level ... where most of the long-running, processor-intensive tasks found themselves ... and they would be executed in round-robin fashion with a cpu time-slice on the order of a second or so. if higher priority workload showed up, it could pre-empt workload running at lower scheduling priority.
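
as a rough illustration (and only that -- the actual cp67 dispatcher was assembler; the level behavior is as described above, but the quantum values and names here are invented), the ten-level scheme amounts to what would now be called a multi-level feedback queue:

    /* multi-level feedback queue sketch -- illustrative only, not the
       actual cp67 code; quantum values are invented */
    #include <stddef.h>

    #define NLEVELS 10

    struct task {
        struct task *next;              /* fifo chain within a level */
        int level;                      /* current scheduling level  */
    };

    static struct task *queue[NLEVELS]; /* queue[0] = best priority  */

    /* quantum grows with depth (values assumed, not cp67's) */
    static long quantum_msec(int level) { return 5 + 100L * level; }

    /* interactive-style event: reset to best priority, shortest quantum */
    static void promote_on_event(struct task *t) { t->level = 0; }

    /* quantum used up: drop a level; the bottom level round-robins */
    static void demote(struct task *t) {
        if (t->level < NLEVELS - 1)
            t->level++;
    }

    /* dispatch: first (fifo) task at the best-priority non-empty level */
    static struct task *pick_next(void) {
        for (int i = 0; i < NLEVELS; i++)
            if (queue[i])
                return queue[i];
        return NULL;
    }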

as the number of concurrent tasks and workload increased in normal cp67 service ... this ten-level dispatcher's maintenance overhead was starting to consume something like 10-15% of total elapsed time. it also suffered from not having any sort of page-thrashing controls. one of the people at lincoln labs then replaced the ten-level dispatcher with a much simpler two-level dispatcher that included very rudimentary page-thrashing controls. the operation between the two queues was similar to the ten-level dispatcher ... basically a high-priority, short time-slice, fifo queue, from which tasks would quickly drop into the bottom queue and effectively round-robin.

i then rewrote all this stuff and created the original fair-share dispatcher. it basically calculated advisory deadlines based on things like recent resource consumption and the size of the allocated time-slice.
https://www.garlic.com/~lynn/subtopic.html#fairshare
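
the post doesn't reproduce the actual formula; as a hedged sketch of the flavor (the names and the exact expression are assumptions), an advisory deadline can be computed so that a task consuming above its fair share gets pushed further out:

    /* advisory-deadline sketch -- not the actual resource-manager
       calculation, just the general shape */
    double advisory_deadline(double now, double timeslice,
                             double recent_use, double fair_share) {
        double ratio = recent_use / fair_share;  /* 1.0 == exactly fair */
        return now + timeslice * ratio;          /* heavy users wait longer */
    }

    /* the dispatcher then runs whichever task has the earliest deadline */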

i also rewrote the page-thrashing control to estimate the real storage requirements of each task (the rudimentary page-thrashing code put in by lincoln labs just placed an arbitrary limit on the total number of concurrent tasks in the dispatch queue w/o actually tracking their real storage requirements).
https://www.garlic.com/~lynn/subtopic.html#wsclock

note that this was in the same era that there was publication in the technical literature about working sets and working-set dispatchers. as previously mentioned, 10-15 yrs later there was a conflict when somebody was publishing a stanford phd thesis on technology very similar to the work that i had done nearly 15 years earlier (work that also contradicted the technical literature that had been published at the time i did it). there was a major effort to not allow the phd to be granted on the work (and the thesis to be published). one of the contributions that helped break the deadlock was that there was an implementation of the working-set dispatcher done on cp/67 running on identical hardware ... so you could have a direct A:B comparison between my implementation and an implementation that faithfully reproduced what was in the technical literature.

so in the morphing from cp67 to vm370, much of the code that i had done as an undergraduate got dropped. somewhat because of that, there was some lobbying via various customers (in places like SHARE and other user group organizations) to have my code put back into the product. that was finally approved and I got to ship the "resource manager" .... coming up on its 30 year anniv. in a couple months. It had pretty much the same dynamic adaptive resource management, advisory deadline scheduling, real-storage and page-thrashing controls, etc. One of the things that I was able to add was an additional, more background dispatching queue that was much more FIFO than round-robin. I even did a write-up on the effects of having things like ten identical tasks/jobs that would nominally run for several minutes. The resource manager write-up was something along the lines that if I had ten concurrent tasks each requiring six minutes and executed round-robin ... then it would take nearly an hour for all ten to complete. The avg. completion time then is one hour. However, if i ran the same workload FIFO, the first task could complete after six minutes ... and only the last would take the full hour (with the avg. completion time a little over 30 minutes instead of 60 minutes).
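
working the numbers: run round-robin, all ten six-minute tasks finish together at about the 60 minute mark, so the average completion time is about 60 minutes; run FIFO, they complete at 6, 12, 18, ... 60 minutes, and (6+12+...+60)/10 = 33 minutes average.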

in any case, PASC had a choice between their in-house 370/145 and the bldg. 28 370/195. the 370/145 was installed to supply interactive computing for science center members that primarily worked 1st shift. as a result, the machine was otherwise idle much of 2nd shift, nearly all of 3rd shift, and nearly all of the weekends. The 370/195 workload was effectively run FIFO and the PASC turn-around still was 3 months.

now the 370/195 had another issue. most normal codes, unless specifically optimized for the 195 pipeline ... ran at about 50 percent of peak instruction thruput. somewhere along the way, there was a project that I got pulled into that was working on effectively hyperthreading for the 195. it involved creating a second PSW (instruction counter) and registers and misc. other stuff ... providing dual i-stream support ... emulating a two-processor smp. The pipeline stayed the same and the number of functional units stayed the same ... but all the work in the pipeline was tagged with a one-bit instruction stream flag (indicating which instruction stream the instructions and registers were associated with). this effort never shipped any hardware to customers. the idea with the dual i-stream was that if most normal codes could only keep the pipeline half busy ... then having a pair of i-streams might be able to maintain peak aggregate thruput.

misc. past posts mentioning bldg. 28 370/195 service and air-bearing simulation job
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#5 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer

misc. past posts mentioning the stanford clock phd thesis incident:
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/96.html#0a Cache
https://www.garlic.com/~lynn/98.html#2 CP-67 (was IBM 360 DOS (was Is Win95 without DOS...))
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2000e.html#34 War, Chaos, & Business (web site), or Col John Boyd
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2002c.html#54 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002k.html#63 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
https://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#55 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#0 Alpha performance, why?
https://www.garlic.com/~lynn/2003k.html#8 z VM 4.3
https://www.garlic.com/~lynn/2003k.html#9 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004b.html#47 new to mainframe asm
https://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#35 Seeking Info on XDS Sigma 7 APL

misc. past posts mentioning dual i-stream work
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#19 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#29 IBM 610 workstation computer

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hercules 3.04 announcement

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules 3.04 announcement
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 26 Feb 2006 16:12:06 -0700
phil@isham-research.co.uk wrote:
I thought Iceberg was the MSS?

ref:
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement

mss/3850 provided simulated 3330s (icebergs) staged from tape cartridges.

originally, real 3330 drives were used for staging the simulated 3330 data. mss/3850 had two modes ... staging a full cartridge or staging six 3330 cylinders (at a time). i/o done to a simulated 3330 address could get back a unit check with "cylinder fault" in the sense data ... prompting the operating system to stage the appropriate data from tape to disk.
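
in outline (a sketch of the flow in C form, not actual operating system or channel-program code; the names and the 404-cylinder figure for a 3330-1 are just for illustration), the cylinder-fault redrive loop looked something like:

    /* mss/3850 cylinder-fault staging flow -- illustrative sketch only */
    enum io_status { IO_OK, IO_UNIT_CHECK };

    static int staged[404];                  /* which cylinders are on disk */

    static enum io_status start_io(int vdev, int cyl) {
        (void)vdev;
        return staged[cyl] ? IO_OK : IO_UNIT_CHECK;  /* fault if not staged */
    }
    static int sense_is_cylinder_fault(void) { return 1; }
    static void stage_from_cartridge(int vdev, int cyl) {
        (void)vdev;                          /* six cylinders at a time */
        for (int i = 0; i < 6 && cyl + i < 404; i++)
            staged[cyl + i] = 1;
    }

    int do_virtual_3330_io(int vdev, int cyl) {
        for (;;) {
            enum io_status st = start_io(vdev, cyl);
            if (st == IO_OK)
                return 0;
            if (st == IO_UNIT_CHECK && sense_is_cylinder_fault()) {
                stage_from_cartridge(vdev, cyl);  /* tape -> staging disk */
                continue;                         /* redrive the i/o */
            }
            return -1;                            /* some other error */
        }
    }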

later there was an mss/3850 option that would use 3350 drives for staging the simulated 3330 data.

later during the ADSTAR period in san jose ... there was the C-STAR project which was working on a new disk controller that was supposed to provide equivalent function to the STK (iceberg/9200; different project, different vendor, same name) controller that had virtualized disks (compression and various other functions) ... as well as raid support. Slightly later, IBM licensed and marketed the STK iceberg/9200 controller. the stk iceberg was some 25 years after the 3330 iceberg.

some 3850/mss pictures
http://www.columbia.edu/cu/computinghistory/mss.html
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850.html

a few references from search engine mentioning stk iceberg/9200
http://www.findarticles.com/p/articles/mi_m0EIN/is_1995_Feb_7/ai_16420403
http://calbears.findarticles.com/p/articles/mi_m0EIN/is_1995_April_18/ai_16828928
http://storageadvisers.adaptec.com/2005/12/02/the-origin-of-raid-6/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 17:00:39 -0700
jmfbahciv writes:
If I understand what you're saying ... I think this is where JMF/TW's macros were used; and I cannot recall what we called them. These guys wrote a series of MACRO-10 macros that did all the right stuff around local pieces of code. Not only did this execute the correct code but the 6-character macro name could be cross-refed and, more importantly, documented in the sources where care had to be taken. It wasn't just in comments, which always have to be assumed to be wrong.

one of the things done with macros for the vm370 kernel ... was that all activity inside the kernel was associated with a data structure called a vmblok ... and all processing was accounted for relative to a vmblok. In single processor mode, the kernel switching from one task to another would switch pointers to the active vmblok as well as the value loaded into the cpu timer.

there was a macro for this

SWTCHVM

which basically did a


            STPT   VMTTIME              STORE CPU TIMER INTO OLD VMBLOK (R11)
            SPT    VMTTIME-VMBLOK(R1)   LOAD CPU TIMER FROM NEW VMBLOK (R1)
            LR     R11,R1               MAKE NEW VMBLOK THE CURRENT ONE

which saved the current value of the cpu timer in the previous vmblok, switched the vmblok pointers, and loaded the new vmblok's value into the cpu timer. a vmblok's cptime value was initialized to a large positive doubleword. when loaded into the cpu timer, it was decremented in real time. when tasks were switched, the current cpu timer value was stored. the amount of cpu consumed by a specific task was the difference between the current value and the original value (or some intermediate checkpointed value).
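
the same accounting scheme in miniature (a sketch; the field and function names are invented, not vm370's):

    /* countdown cpu-timer accounting sketch */
    #include <stdint.h>

    #define TIMER_INITIAL INT64_MAX     /* "large positive doubleword" */

    struct vmblok_t { int64_t vmttime; };    /* remaining, counts down */

    /* cpu consumed so far = initial value minus what is left */
    int64_t cpu_consumed(const struct vmblok_t *v) {
        return TIMER_INITIAL - v->vmttime;
    }

    /* task switch: store running timer into old vmblok (STPT), load
       the new vmblok's remaining value into the timer (SPT) */
    void switchvm(struct vmblok_t *old, struct vmblok_t *nxt,
                  int64_t *cpu_timer) {
        old->vmttime = *cpu_timer;
        *cpu_timer = nxt->vmttime;
    }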

for fine-grain smp locking support, a data-structure lock was defined for each vmblok. the SWTCHVM macro then had additional code that was expanded when a smp kernel was generated. this included releasing the (vmblok data structure) lock on the previous vmblok and obtaining the (vmblok data structure) lock on the task being switched to.

relatively recent discussion from vmshare about converting from the "CHARGE" macro (which was UP-oriented and only did the cpu timer gorp) to the SWTCHVM macro ... which would expand to MP-appropriate code if the kernel was being built for SMP
http://vm.marist.edu/~vmshare/browse.cgi?fn=UP_TO_MP&ft=MEMO

this is a much longer post from vmshare ... but embedded in the thread is a more detailed discussion of the SWTCHVM macro
http://vm.marist.edu/~vmshare/browse.cgi?fn=SUSPEND&ft=MEMO

another vmshare thread/post with some discussion of the SWTCHVM macro and also mention of some installations ("like CIA") that generated all their kernels as SMP for all machines (even their non-smp processors)
http://vm.marist.edu/~vmshare/browse.cgi?fn=MP_ON_UP

there was also a lock-specific macro that had this form

LOCK OBTAIN,TYPE=VMBLOK,SPIN=NO,SAVE
LOCK RELEASE,TYPE=VMBLOK,SAVE


recent thread discussing 360 location 80 cpu timer, and the tod clock and other timers introduced with 370:
https://www.garlic.com/~lynn/2006c.html#20 Military Time?
https://www.garlic.com/~lynn/2006c.html#21 Military Time?
https://www.garlic.com/~lynn/2006c.html#22 Military Time?
https://www.garlic.com/~lynn/2006c.html#23 Military Time?

section discussing tod clock and cpu timer
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6?SHELF=EZ2HW125&DT=19970613131822

SET CPU TIMER (SPT) instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.36?SHELF=EZ2HW125&DT=19970613131822

STORE CPU TIMER (STPT) instruction
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/10.47?SHELF=EZ2HW125&DT=19970613131822

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hercules 3.04 announcement

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules 3.04 announcement
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 26 Feb 2006 17:20:14 -0700
Anne & Lynn Wheeler wrote:
later during the ADSTAR period in san jose ... there was the C-STAR project which was working on a new disk controller that was supposed to provide equivalent function to the STK (iceberg/9200; different project, different vendor, same name) controller that had virtualized disks (compression and various other functions) ... as well as raid support. Slightly later, IBM licensed and marketed the STK iceberg/9200 controller. the stk iceberg was some 25 years after the 3330 iceberg.

ref:
https://www.garlic.com/~lynn/2006d.html#1 Hercules 3.04 announcement
and the original post
https://www.garlic.com/~lynn/2006c.html#46 Hercules 3.04 announcement

i shouldn't have been so flip ... it wasn't actually called c-star ... it was called seastar. there was also a seahorse that would provide fileserver capability. some of this was the "jupiter" controller project reborn ... but some ten years later. if i remember correctly, dynamic pathing survived out of jupiter (someplace i may have some long dissertation on how to slightly modify the original dynamic pathing architecture proposal and save something like an order of magnitude or more in the number of circuits needed to implement it as you scaled up).

oh, lots of collected past posts on having fun in bldg. 14 (disk engineering lab) and bldg. 15 (product test lab).
https://www.garlic.com/~lynn/subtopic.html#disk

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 20:52:46 -0700
"Rostyslaw J. Lewyckyj" writes:

On the 195
On day 001  submit the Pasc job with data  {a,b,c,...}
On day 008  submit the Pasc job with data  {d,e,f,...}
On day 015  .... with data {g,h,i,...)
....... and so on........
On day 091 get back the results for {a,b,c,...}
On day 098 get back the results for {d,e,f,...}
On day 105 get back the results for {g,h,i....}
........ and so on......
i.e. each job has three months turn around. But independent
outputs can pop out one week at a time, or however densely
the independent jobs are permitted to be submitted.

On the 145
On day 001 submit the job (A) with data {a,b,c,...}
On day 007 submit job (B) with data {d,e,f,...}  etc.

But because job A only gets service during idle time,
which it soaks up, job B doesn't get to run till A
completes on day 030. Or else B competes with A and
delays its completion past day 030 etc.
So on the 195 the 5'th job submitted on day 1+4*7 = 029
would come back on day 91+28 = 119, while on the 145
the 5'th job would come back on day 151 (as long as it
had been submitted before 121 :) ) .

The rest of what you wrote while informative and interesting
does not explain how using the 145 could give the Pasc crew
better service after the first several jobs.  :)

PASC owns the 145 ... they've paid for the machine and it otherwise sits idle most nights and weekends.

the 370/195 doesn't have any idle time ... one of the reasons that pasc can only get one turn-around every three months. the application was such that the results from the previous run were needed before the next run.

because of the significant backlog for the 370/195 ... pasc's availability to the 370/195 was controlled by both policy and budget constraints. they would have needed to provide significant policy and budget justification to improve their ability to submit additional jobs and/or increase the amount of work.

that is one of the reasons i referenced the student scenario for such strategies ... because there was frequently no budgetary constraint on their machinations.

in your strategy ... since the next run was dependent on the output from the previous run ... and there was somewhat limited storage space ... you wouldn't have a good mechanism for passing the output from a job that hadn't been executed by the time the next job was submitted ... you could have 12 jobs all queued that would be exact duplicates and repeat the same calculations, all getting the same results ... but using 12 times the resources and 12 times the budget.

the policy restrictions, however, would have prevented PASC from getting more than one such turn-around per 3 month period ... no matter how you staggered the submissions. again, the submission strategy scenario you describe is more like what a lot of students scheme ... based on a much simpler, straight-forward scheduling strategy that only considered submission order ... and not assuming any more sophisticated resource scheduling strategies in use by the computing systems (like aggregate per-organization resource per period).

some univ. would do similar types of stuff for student classes per semester. some bright student would come up with a strategy for a large number of submissions ... and within the first couple days blow the complete semester's budget for that class (leaving the class w/o computing resources for the rest of the semester). then smarter policies would preclude any single student using more than their pro-rated share of the resources allocated for the class. if some student blew their complete resource budget within the first day or two ... that would just mean that no more of their jobs would be run ... as opposed to penalizing the whole class.

long ago i saw an aggravated case of this when a state univ. system went from owning its own computer to the state legislature contracting for univ. computing time with an independent computer service. then such student machinations were no longer a simple case of internal univ. "funny" money ... but involved real money budgeted by the state legislature.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 21:49:21 -0700
Morten Reistad writes:
There are levels of SMP in terms of scalability.

These OS'es (and other complex stuff like databases, rendering systems etc) tend to go through the same "levels" of SMP.

There seems to be a particular point at 4-6 way parallelism that a lot of coders and hardware builders struggle with.


a two-way 370 configuration adds cache overhead such that a two-way 370 is considered to be at best 1.8 times the performance of a uniprocessor. part of the issue is that the 370 caches are slowed down 10 percent to allow for cross-cache chatter in support of cache coherency. any cache-thrashing invalidates would further degrade the smp thruput compared to a uniprocessor.

using the 1.8 times hardware thruput rule-of-thumb ... actual system thruput was frequently pegged at 1.5 times ... because of additional kernel overhead managing the smp environment.

the original smp software version adapted from VAMPS ... turned out to require a minimal number of changes to a uniprocessor kernel, had almost zero lock contention in normal operation, and managed close to the 1.8 times thruput (having almost zero incremental smp software overhead). there were even a couple cases of greater than 2 times thruput (with some funny situations involving improved cache hit ratios compared to a uniprocessor).
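
putting numbers to the rules-of-thumb: two processors each running at 0.9 of uniprocessor speed gives 2 x 0.9 = 1.8 hardware thruput; landing at 1.5 at the system level means the smp kernel overhead is eating roughly (1.8 - 1.5)/1.8, or about a sixth of each processor, on top of the cache slow-down.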

this changed with a rewrite of the code for sp1. with the 3081, there was no longer going to be a uniprocessor option. one of the major operating systems was TPF (transaction processing facility, the renamed airline control program used heavily by airline reservation systems ... but also starting to see a lot of use in financial transaction networks).

the problem was that TPF didn't have smp support ... and the new generation of computers was smp only. you could take a 3081, bring up vm370 on the machine (with smp support) and run tpf in a single-processor virtual machine (under vm370). the issue was that in a dedicated TPF environment, the 2nd 3081 processor would be idle most of the time.

part of the issue was that standard virtual machine operation ... involved virtual machine execution alternating with the vm370 kernel handling various kinds of privileged operations. this processing was serialized for a single virtual machine (alternating virtual machine execution with kernel execution ... multiple-processor execution was normally achieved by having multiple-processor virtual machines and/or having lots of independent virtual machines).

so a gimmick was created to re-organize the multiprocessing implementation to enable asynchronous/concurrent execution of the TPF virtual machine with kernel code executing stuff on behalf of the TPF virtual machine. This rework increased the overall smp kernel overhead by about ten percent of each processor ... but it allowed the 2nd 3081 processor to provide about another 50 percent thruput in the single, dedicated TPF operation (with overlapped execution of kernel code and virtual machine operation ... on a processor that was otherwise idle).

the issue was that dedicated TPF operation was only a small percentage of the multiprocessor customers ... but the new kernel rework took an additional ten percent of every customer's smp processor (reducing their overall thruput ... except for the small number of dedicated TPF 3081 installations).

lots of past posts mentioning smp and compare&swap
https://www.garlic.com/~lynn/subtopic.html#smp

eventually a 3083 uniprocessor was announced for the TPF market. this was a uniprocessor version of the 3081 ... with the smp cache slow-down removed.

misc. past posts mentioning 3083:
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000d.html#9 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002m.html#67 Tweaking old computers?
https://www.garlic.com/~lynn/2002o.html#28 TPF
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
https://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#7 Dyadic
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2005.html#22 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005s.html#7 Performance of zOS guest
https://www.garlic.com/~lynn/2005s.html#38 MVCIN instruction

all sorts of past posts mentioning TPF or ACP
https://www.garlic.com/~lynn/96.html#29 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#152 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000b.html#20 How many Megaflops and when?
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000e.html#21 Competitors to SABRE? Big Iron
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#20 Competitors to SABRE?
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#45 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#47 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2002c.html#9 IBM Doesn't Make Small MP's Anymore
https://www.garlic.com/~lynn/2002g.html#2 Computers in Science Fiction
https://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
https://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2002o.html#28 TPF
https://www.garlic.com/~lynn/2002p.html#58 AMP vs SMP
https://www.garlic.com/~lynn/2003.html#48 InfiniBand Group Sharply, Evenly Divided
https://www.garlic.com/~lynn/2003g.html#30 One Processor is bad?
https://www.garlic.com/~lynn/2003g.html#32 One Processor is bad?
https://www.garlic.com/~lynn/2003g.html#37 Lisp Machines
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2003p.html#45 Saturation Design Point
https://www.garlic.com/~lynn/2004.html#24 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004.html#49 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004.html#50 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004c.html#35 Computer-oriented license plates
https://www.garlic.com/~lynn/2004e.html#44 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#5 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#22 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005b.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005h.html#22 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#17 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005n.html#4 54 Processors?
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005s.html#7 Performance of zOS guest
https://www.garlic.com/~lynn/2005s.html#38 MVCIN instruction

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 22:38:29 -0700
"Dennis Ritchie" writes:
I don't doubt this, but I seem to recall an old paper (perhaps in Computing Reviews or some other ACM pub.) about the performance of the fully-duplexed and highly symmetric 360/67 running MTS. The paper was from U. Mich.

It was reported that on some test job mix the 2 CPU system achieved a slightly greater than 2:1 speedup over the single CPU version. This was attributed to the fact that the clever 360/67 hardware sent interrupts preferentially to a momentarily idle processor, generating less state-saving, thus less interrupt overhead.


ref:
https://www.garlic.com/~lynn/2006d.html#5 IBM 610 workstation computer

there was a separate issue/paper involving TSS/360 on the 360/67 which claimed over three times the thruput on a two-processor 360/67 compared to a single processor.

my scenario was a particular workload which had heavy asynchronous i/o interrupts causing lots of cache-line replacement (high cache miss rate). some hacks with the smp support did some stuff for processor/cache affinity ... the improved cache hit rate because of two caches more than offset the degradation introduced by smp cross-cache chatter.

the 360/67 didn't have cache ... the maximum memory on a uniprocessor was one megabyte but you could double that to two megabytes in a two-processor configuration. the tss/360 kernel's fixed memory requirement was nearly 700kbytes ... on a one mbyte system, that left possibly 300kbytes ... which resulted in a lot of page thrashing (and very low cpu utilization). going to a two-processor system increased the memory for application pages by nearly a factor of four (from about 300k to about 1.3m). the evidence was that thruput was highly paging-constrained ... since the two-processor configuration strictly only doubled the processor thruput (it didn't have the cache gimmick where processor thruput is a function of both the cache hardware performance as well as the cache hit ratio). however, the tss/360 thruput was almost directly proportional to the amount of real storage available for application execution.

what i didn't say about the 370 uniprocessor to 370 two-processor comparison was that both workloads ran at 100 percent cpu utilization of all available processors. assuming identical cache hit rates, strict hardware thruput should have been only 1.8 (with an ideal smp kernel pathlength implementation) ... the additional thruput was because of some gimmicks with cache hit rate from processor/cache affinity.

the tss/360 360/67 "improvement" was because the single processor thruput was highly real-storage constrained and had very low cpu utilization. the two-processor configuration approx. doubled the cpu power (which wasn't the limit in the single processor scenario) but increased real storage for application execution by approximately a factor of four ... which was the limiting factor.

the 360/67 multiprocessor nominally had a lower mip rate ... not because of cache and cache coordination ... but because the 360/67 uniprocessor had a single-ported memory bus with 750ns cycle time for an 8-byte storage access. the 360/67 multiprocessor implementation used a multi-ported memory bus ... which slightly increased memory cycle time (and resulted in a slower mip rate).

however, in a workload that had both heavy cpu utilization and heavy i/o activity ... in the uniprocessor/simplex machine there was lots of memory contention between processor and i/o (resulting in reduced processor thruput). with the multiprocessor's multi-ported memory bus ... heavy i/o had much lower memory interference/contention with cpu use and/or memory contention between multiple cpus and i/o.

a "half-duplex" 360/67 with a single processor had lower idealized mip rate than a "simplex" 360/67 because of the fixed additional latency introduced by the multi-ported memory bus. however, in a workload that was both cpu intensive and i/o intensive, a "half-duplex" 360/67 had higher effective mip rate than a simplex 360/67 (because of reduced memory bus contention between cpu and i/o)

misc. past posts mentioning half-duplex and/or multi-ported memory:
https://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
https://www.garlic.com/~lynn/96.html#26 System/360 Model 30
https://www.garlic.com/~lynn/96.html#39 Mainframes & Unix
https://www.garlic.com/~lynn/97.html#18 Why Mainframes?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2000.html#88 ASP (was: mainframe operating systems)
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
https://www.garlic.com/~lynn/2000e.html#2 Ridiculous
https://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#46 Small IBM shops
https://www.garlic.com/~lynn/2001b.html#35 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001e.html#62 Modem "mating calls"
https://www.garlic.com/~lynn/2001j.html#14 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2002f.html#7 Blade architectures
https://www.garlic.com/~lynn/2002f.html#37 Playing Cards was Re: looking for information on the IBM 7090
https://www.garlic.com/~lynn/2002g.html#33 ESCON Distance Limitations - Why ?
https://www.garlic.com/~lynn/2002h.html#26 Future architecture
https://www.garlic.com/~lynn/2002i.html#43 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#36 Difference between Unix and Linux?
https://www.garlic.com/~lynn/2002j.html#74 Itanium2 power limited?
https://www.garlic.com/~lynn/2002j.html#78 Future interconnects
https://www.garlic.com/~lynn/2002q.html#35 HASP:
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003b.html#46 internal network drift (was filesystem structure)
https://www.garlic.com/~lynn/2003g.html#10a Speed of APL on 360s, was Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003g.html#10 Speed of APL on 360s, was Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2003j.html#2 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2003o.html#54 An entirely new proprietary hardware strategy
https://www.garlic.com/~lynn/2003o.html#64 1teraflops cell processor possible?
https://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004c.html#46 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
https://www.garlic.com/~lynn/2004g.html#35 network history (repeat, google may have gotten confused?)
https://www.garlic.com/~lynn/2004n.html#45 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005.html#11 CAS and LL/SC
https://www.garlic.com/~lynn/2005.html#38 something like a CTC on a PC
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005h.html#7 IBM 360 channel assignments
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005l.html#26 ESCON to FICON conversion
https://www.garlic.com/~lynn/2005n.html#7 54 Processors?
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005r.html#43 Numa-Q Information
https://www.garlic.com/~lynn/2005s.html#20 MVCIN instruction
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces
https://www.garlic.com/~lynn/2006c.html#7 IBM 610 workstation computer
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 23:32:20 -0700
Brian Inglis writes:
The assumption here is that you have sufficient contention on your global system lock(s): if you don't, then going to fine grained doesn't make much sense. With global system lock(s), you can get contention costing 15-20% of each CPU, and I can't imagine an OS designed such that fine grained locks, where contention is less likely, add up to the same 15-20% of each CPU. (Well, maybe there's one OS vendor where that could happen!)

this is somewhat dependent on the traditional spin-lock approach to a global system lock. this was almost totally eliminated with the VAMPS bounce-lock approach (reducing the cost of having lock contention) and moving some very selective kernel pieces outside the global system lock (reducing the probability of lock contention). misc. recent bounce-lock posts
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past
https://www.garlic.com/~lynn/2006b.html#40 another blast from the past ... VAMPS
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#47 IBM 610 workstation computer

part of the bounce-lock benefit was an extremely high-performance, light-weight deferred execution mechanism specifically designed around asynchronous serialization of the global lock use. part of the benefit, given the processors and the cache sizes of the period, was that if there was existing code executing in the kernel and there was an additional request for kernel execution ... when the current processor finished its kernel work, it would check if there was additional work for the kernel. kernel work would then tend to be resumed on the processor that was already executing in the kernel ... and you would tend to get some cache affinity and reuse of kernel cache lines (improving overall cache hit rate and peak execution rate).

the spin-lock approach not only can lose a lot of cpu cycles in the spin ... but the processor acquiring the kernel lock would then typically have a high cache miss rate as kernel lines were loaded into the cache of the newly acquiring processor.

there was a lot of smp overhead later introduced ... not directly from lock contention itself ... but the overhead of effectively trying to break what had been a single thread of work into two pieces of work that could be executed concurrently on two separate processors. a lot of the overhead was additional cpu signalling trying to get another processor to pick up the dynamically split-off new piece of work ... allowing the current processor to return to executing the virtual machine. then there was some serialization that had to occur when the two pieces had to come back into sync.
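
a minimal sketch of the bounce-lock idea (in C11 atomics; the original was 370 assembler built on compare&swap, and the details here are assumptions): a processor finding the kernel lock held queues its request and leaves rather than spinning, and the lock holder drains the queue before releasing:

    /* bounce-lock sketch -- illustrative, not the VAMPS code */
    #include <stdatomic.h>
    #include <stddef.h>

    struct work { struct work *next; void (*fn)(void *); void *arg; };

    static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;
    static _Atomic(struct work *) deferred;      /* bounced requests */

    static void defer(struct work *w) {          /* lock-free push */
        w->next = atomic_load(&deferred);
        while (!atomic_compare_exchange_weak(&deferred, &w->next, w))
            ;
    }

    void enter_kernel(struct work *w) {
        w->next = NULL;
        if (atomic_flag_test_and_set(&kernel_lock)) {
            defer(w);     /* "bounce": queue it and go do other work, */
            return;       /* the current holder will run it for us   */
        }
        do {              /* we hold the lock: run our own request,  */
            while (w) {   /* then drain whatever bounced meanwhile   */
                struct work *n = w->next;
                w->fn(w->arg);
                w = n;
            }
            w = atomic_exchange(&deferred, NULL);
        } while (w);
        /* simplification: a real version must close the race between
           this final drain and releasing the lock */
        atomic_flag_clear(&kernel_lock);
    }

note the cache side-effect described above falls out naturally: the deferred requests execute on the processor already holding the kernel lock, whose cache already contains the kernel lines.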

recent ref:
https://www.garlic.com/~lynn/2006d.html#5 IBM 610 workstation computer

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 22:11:18 -0700
Andrew Swallow writes:
Modern instruction sets containing index registers and return instructions have practically eliminated self-modifying software. The CPU does not need write access to the code areas of Linux, Windows and WORD whilst running them.

Note the buffer overflow bugs abused by virus writers would not work if code areas were read-execute only.

Placing the programs in rom would also cause them to run a lot faster, disk reads are very slow.



some amount of virus writers put instructions into a data area ... and then clobber the stack/return address to point to the attacker's instructions in some data area. in this attack scenario, the attack hasn't actually modified existing instruction areas.

a countermeasure for this exploit is a no-execute attribute placed on data areas ... i.e. preventing instructions from being fetched from areas that are otherwise considered pure data.
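
on current hardware/operating systems the same idea shows up as page permissions; a minimal sketch using the POSIX mmap interface (linux/bsd MAP_ANONYMOUS assumed):

    /* no-execute data area in miniature -- a sketch of the countermeasure */
    #include <sys/mman.h>
    #include <string.h>

    int main(void) {
        /* readable and writable, but NOT executable: instruction fetch
           from here faults on nx-capable hardware */
        unsigned char *data = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (data == MAP_FAILED)
            return 1;
        memset(data, 0x90, 4096);   /* attacker-supplied bytes land here */
        /* ((void (*)(void))data)();   jumping here would trap -- which
           is exactly the point of the no-execute attribute */
        munmap(data, 4096);
        return 0;
    }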

lots of past posts mentioning buffer overflow
https://www.garlic.com/~lynn/subintegrity.html#overflow

some specific posts mentioning no-execute attribute (in contrast to read-only/execute-only attribute)
https://www.garlic.com/~lynn/2004q.html#82 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#0 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#1 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#32 8086 memory space [was: The Soul of Barb's New Machine]
https://www.garlic.com/~lynn/2005b.html#25 360POO
https://www.garlic.com/~lynn/2005b.html#39 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005b.html#66 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#28 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#44 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#53 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#54 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#55 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#65 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005o.html#10 Virtual memory and memory protection

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Mon, 27 Feb 2006 07:54:46 -0700
jmfbahciv writes:
However, after the second CPU is on-line, each CPU added after that should be a whole unit increase. Thus the third would provide 2.8, the fourth would give 3.8, etc. We got to measure five but never got to see how the system did with a sixth.


ref:
https://www.garlic.com/~lynn/2006d.html#5 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#6 IBM 610 workstation computer

the problem was that with one cpu ... the cache ran at full speed. with two cpus, the cache ran at full speed ... but its use by the local cpu was slowed down by 10 percent to accommodate cross-cache chatter coming from the one other cache (as part of cache coherency) ... basically the overall processor machine cycle ran 10 percent slower.

with four cpus ... each cache was further slowed down to accommodate cross-cache chatter from three other cpus. by the time they got to the 3090, the cache implementation had to use technology that ran significantly faster than the processor machine cycle ... in order to mask any degradation caused by the massive amounts of cross-cache chatter.

now that was just the basic cache slow-down to accommodate cross-cache processing ... any cache slow-down involving actually invalidating cache lines from signals coming from other caches was over and above the base machine cycle.

part-way into the 3084 (4-way) processor time-frame there were significant projects to restructure most of the major kernels to carefully cache-align kernel data structures in order to minimize cache-line thrashing (one case was different data structures overlapping in the same cache line). this cache-line data structure reorganization is claimed to have resulted in a 5-6 percent overall system thruput improvement (storage alterations were still causing the cross-cache cache-line invalidates to be broadcast, but the same cache line was much less frequently the subject of concurrent use by multiple processors).
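
the same restructuring in today's terms (a sketch; the 64-byte line size and C11 alignas are assumptions about the target machine, not anything from the 3084 work):

    /* cache-line alignment to avoid two cpus fighting over one line */
    #include <stdalign.h>

    /* bad: both per-cpu counters share a cache line, so a store by one
       cpu invalidates the line in the other cpu's cache */
    struct counters_bad { long cpu0; long cpu1; };

    /* good: each per-cpu counter aligned to its own (assumed 64-byte) line */
    struct percpu_counter { alignas(64) long count; };
    struct counters_good { struct percpu_counter cpu[2]; };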

the heavy penalty paid by 370 multiprocessor cache implementations for extremely strong memory consistency ... was one of the reasons that i've claimed 801/risc went to the opposite extreme .... you didn't even find hardware cache consistency between the separate I(nstruction) and D(ata) caches on the same processor. this also manifested itself in system "program loaders" requiring a new (software) instruction to force changed cache lines from the data cache to memory (so that program instruction memory areas that the program loader had been treating as data and altering ... were forced to memory ... and the alterations would be picked up by the instruction cache when the loaded program started running). Basically the system program loader is a special case where instruction memory areas are treated as data areas and modified.
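
gcc and clang expose that data-to-instruction-cache synchronization step as a builtin; a loader-style sketch (the function and its arguments are invented for illustration):

    /* after writing/relocating instructions through the data cache, force
       them visible to the instruction cache before branching there;
       __builtin___clear_cache is the gcc/clang builtin for this */
    void finish_load(char *code, unsigned long len) {
        /* ... loader has just stored instructions into code[0..len) ... */
        __builtin___clear_cache(code, code + len);
        /* now safe to jump into the freshly loaded program */
    }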

this slightly drifts into the thread posting on buffer overflow exploits. it becomes slightly more problematical for architectures with separate I & D caches (with no automatic cache consistency). any storage alterations that are resident in the (store-into) data cache may take some time before appearing in memory and becoming visible to the instruction cache.
https://www.garlic.com/~lynn/2006d.html#8 IBM 610 workstation computer

collected past posts mention buffer overflow
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Mon, 27 Feb 2006 08:07:19 -0700
jmfbahciv writes:
Right. The biz did a similar thing when disks were [emoticon tries to think of a description] primitive. What I don't understand is why memory development has lagged so far behind. Yea, yea. People spout "faster" numbers at me but nobody can spout a number that includes the full system because it's impossible. So controls need to be made available to the human who can then tweak and, based on experiment, judge the best settings for that particular system. Load balancing in the olden days included discrimination of groups of users (kiddies got on System X and profs got on System Y). Profs might be more compute-intensive so their settings and imposed limitations will be vastly different than the limitations for the kiddies who are only learning how to use the gear.


it isn't that memory technology has lagged. there are certain physical constant issues that existing paradigms are constrained by. you may have to significantly change the paradigm to get around the limitations of various physical constants (like maximum signal propagation time).

tera (now renamed cray) attempted one such paradigm change with massive threading and eliminated (cpu) cache altogether. this is similar to the 370/195 dual i-stream threading change. normal codes were only keeping the 195 pipeline half-full (and ran at half the peak 195 instruction thruput rate). the idea was to add a second instruction stream in hopes that it would keep the (single) 195 pipeline full and hopefully maintain the aggregate peak instruction thruput rate (by having dual i-streams/threads). tera talks about things like 256 concurrent threads.

misc. past posts mentioning tera
https://www.garlic.com/~lynn/2001j.html#48 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2002l.html#6 Computer Architectures
https://www.garlic.com/~lynn/2003.html#34 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003p.html#5 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?

in the late 70s, i started noticing similar technology/system mismatches with disk. i got in trouble with the disk division by claiming that relative disk system thruput had declined by a factor of 10 over a period of 10-15 years (i.e. disk thruput got five times faster but processors got 50 times faster and the amount of electronic memory increased by 50 times). as a result, you saw a paradigm shift to significant amounts of electronic file caching ... at all levels in the system (analogous to the use of cpu caching to compensate for memory latency restrictions).

misc. past posts:
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
https://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
https://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
https://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2003i.html#33 Fix the shuttle or fly it unmanned
https://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Mon, 27 Feb 2006 08:33:28 -0700
jmfbahciv writes
And that's what happens when people have a "task-based" thinking style. Timesharing thinking styles don't wait until task 1 is done before looking at task2.

I've been playing with the hypothesis that "task-based" is a male thinking style; timesharing is a female thinking style. Before you type bits at me, ask a female who know about the computing biz.



i don't think it is necessarily gender limited. there may actually be two separate issues ... one is the different thinking styles, and the other is being able to dynamically adapt thinking styles as appropriate to the circumstance.

note that single-thread vis-a-vis multi-threaded thinking modes may also spill over into areas like doing single variable/objective optimizations vis-a-vis doing multi-objective optimizations (aka trade-offs).

one of the things that i was doing back as an undergraduate with dynamic adaptive fairshare scheduling ... was being able to dynamically adapt scheduling to the bottleneck. some strategies simply default to cpu consumption as the measure of resource utilization and dynamically adapt scheduling to the amount of (cpu) resource consumed. one of the things i was playing with was attempting to dynamically determine the different system resource bottlenecks on a moment-to-moment basis and dynamically adapt the weighting given to the different types of resources that might be consumed by a task.

for instance, at a heavily paging-constrained/bottlenecked moment, the amount of cpu consumed by a task may be nearly immaterial to the efficient operation of the overall system.
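a minimal sketch (in C, purely illustrative ... not the actual vm370 code; the resource names and the squared weighting are assumptions) of the general idea of weighting a task's consumption by how bottlenecked each resource currently is:

#define NRES 3                     /* cpu, paging, i/o */

struct task { double consumed[NRES]; };

/* utilization[] is the recent system-wide utilization of each
   resource, 0.0 .. 1.0; the more saturated a resource, the more a
   task's use of it counts, so cpu use matters little at a
   paging-bound moment */
double sched_cost(const struct task *t, const double utilization[NRES])
{
    double cost = 0.0;
    for (int r = 0; r < NRES; r++)
        cost += t->consumed[r] * utilization[r] * utilization[r];
    return cost;                   /* lower cost => dispatched sooner */
}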

one of the things that got me into trouble doing the scheduler is that a lot of kernel programmers tend to be strictly state oriented ... setting and testing particular state bits. there were places in the scheduler where i would arbitrarily switch back and forth between a highly optimized bit/state-manipulation style of programming ... and a much more fortran/apl probabilistic (say, operations research) style of programming (although done in assembler). individuals accustomed to a single programming style/mode frequently found it difficult to follow the logic flow across the arbitrarily different programming styles.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Mon, 27 Feb 2006 08:45:26 -0700
Anne & Lynn Wheeler writes
i don't think it is necessarily gender limited. there may actually be two separate issues ... one is the different thinking styles, and the other is being able to dynamically adapt thinking styles as appropriate to the circumstance.

note that single-thread vis-a-vis multi-threaded thinking modes may also spill over into areas like doing single variable/objective optimizations vis-a-vis doing multi-objective optimizations (aka trade-offs).



ref:
https://www.garlic.com/~lynn/2006d.html#11 IBM 610 workstation computer

of course, i would now be remiss if i failed to mention boyd.
https://www.garlic.com/~lynn/subboyd.html#boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2

part of his OODA-loops (repeated observe, orient, decide, act iterations) was that as you cycled thru the iterations you were also constantly looking at things from different perspectives and dealing with issues from a multi-faceted standpoint.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Mon, 27 Feb 2006 09:53:41 -0700
jmfbahciv writes
Is this 3 months' runtime or 3 months' wallclock time? I've been told stories in the recent past about 3 months of runtime.


it was 3 months' job-queue elapsed time from the time that the job was submitted until it got to run. it only ran for several hrs, but there was a big backlog for the 370/195 service in bldg. 28. it also depended some on priority and/or importance ... some of the priority/importance was also by size of job (i.e. runtime duration).

misc. posts in this thread:
https://www.garlic.com/~lynn/2006c.html#44 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#4 IBM 610 workstation computer

by comparison, i've mentioned that the air-bearing simulation work .... part of designing the first thin-film, floating disk heads ... was getting one-week turn-around (again, that was mostly job-queue time, waiting for other things in the queue to finish).
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Mon, 27 Feb 2006 10:19:28 -0700
Bernd Felsche writes
Non-stop computing is a luxury for the vast bulk of systems that can and do benefit from SMP. If there's a CPU fault, they will probably crash and the failed CPU will simply not be "found" should the system be able to reboot without further physical intervention. That's the limit of fault tolerance.


part of the issue is fault isolation. also, i think Jim Gray, after he left sjr for tandem ... published a paper in the early 80s showing that the major sources of faults were software rather than hardware (hardware reliability was significantly improving while software reliability wasn't seeing similar improvements ... and the amount of software was drastically increasing).

isolating various kinds of software faults can be a lot harder in a shared memory configuration.

i mentioned that one of the reasons that we did ha/cmp cluster scale-up was that 801/rios had no provisions for cache consistency ... and so cluster approach was about the only way left for adding processors. however, we also did quite a bit of work on fault isolation and recovery as part of ha/cmp product.
https://www.garlic.com/~lynn/subtopic.html#hacmp

recent post in this thread ... mentioning the distributed lock manager for ha/cmp and some aspects of database logs and commits as part of recovery.
https://www.garlic.com/~lynn/2006c.html#8 IBM 610 workstation computer

an earlier scenario i faced was trying to get an operating system into the bldg. 14 disk development machine room. they had tried testing development components in an operating system environment and found things like a (system software) MTBF for MVS on the order of 15 minutes with a single testcell.

I undertook to rewrite the operating system input/output supervisor to be absolutely bullet-proof ... so they could not only perform testcell testing in an operating system environment ... but could do multiple concurrent testcell testing ... concurrently with other activity. the redesign and rewrite benefited from a rich source of i/o errors ... a single testcell might generate more errors (and/or other types of failure modes) in 15 minutes than a typical datacenter with a football field full of disks would see in a year (in addition to exhibiting behavior that was precluded by standard architecture-specified operation). misc. past postings mentioning work for bldg. 14 and bldg. 15
https://www.garlic.com/~lynn/subtopic.html#disk

part of this intersects the scheduling of the air bearing simulation job mention in other posts in this thread
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer

bldg. 14 (engineering lab) and bldg. 15 (product test lab) got early/first processor models. bldg. 15 took delivery of one of the first 3033s (long before they shipped to customers) for testing with current and new disks. the 3033 had instruction thruput of about half that of peak 370/195 (but about the same for most workloads, since most workloads only kept the 195 pipeline half full). i had gotten the rewritten ios operational and most of the machines in bldg. 14 & 15 were running it. previously, machines had to be serially scheduled for stand-alone testing, a single testcell at a time; now any testcell could do its testing as needed w/o having to wait for stand-alone test time. even with multiple testcell testing going on in the operating system environment, the 3033 rarely got more than 5 percent busy. as a result, we now had a whole lot of unused cpu power to play with. while there was a one-week to 3-month backlog for scheduling work on the 370/195 in bldg. 28, just across the street in bldg. 15 we had seemingly unlimited amounts of processing power immediately available. so one of the things we did was get the air-bearing simulation application running on the 3033 in bldg. 15 ... where it got nearly immediate availability of almost all the processing it needed. not only did fixing the operating system so that it was usable in bldg. 14 & bldg. 15 significantly improve the productivity of new hardware testing, it also made available significant amounts of additional cpu power that we put to use for things like speeding up the air-bearing simulation part of designing the new thin-film floating heads.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hercules 3.04 announcement

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules 3.04 announcement
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 27 Feb 2006 11:26:11 -0700
Anne & Lynn Wheeler wrote:
i shouldn't have been so flip ... it wasn't actually called c-star ... it was called seastar. there was also a seahorse that would provide fileserver capability. some of this was "jupiter" controller project reborn ... but some ten years later. if i remember correctly, dynamic pathing survived out of jupiter (someplace i may have some long dissertation on how to slightly modify the original dynamic pathing architecture proposal and save something like an order of magnitude or more in the number of circuits to implement as you scaled up).

part of the issue with the jupiter controller work starting in the early 80s, and then some 10+ years later with seastar and seahorse ... was that the disk controller culture tended to be quite point/targeted ... much more like embedded system operation. you had a culture that was very box oriented (again, more akin to embedded systems) ... trying to move to extremely complex system design & implementation (a huge amount of complex system function being designed/built into the controller).

one might even assert this was some revival of some of the objectives from the (failed) future system project from the 70s
https://www.garlic.com/~lynn/submain.html#futuresys

with complex integration between main processor and outboard controllers (which somewhat implied that the outboard controllers needed to sport some amount of complex feature/function ... as prerequisite to having complex integration with the main processor).

https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

from above:
IBM tried to react by launching a major project called the 'Future System' (FS) in the early 1970's. The idea was to get so far ahead that the competition would never be able to keep up, and to have such a high level of integration that it would be impossible for competitors to follow a compatible niche strategy. However, the project failed because the objectives were too ambitious for the available technology. Many of the ideas that were developed were nevertheless adapted for later generations. Once IBM had acknowledged this failure, it launched its 'box strategy', which called for competitiveness with all the different types of compatible sub-systems. But this proved to be difficult because of IBM's cost structure and its R&D spending, and the strategy only resulted in a partial narrowing of the price gap between IBM and its rivals.


... &

This first quiet warning was taken seriously: 2,500 people were mobilised for the FS project. Those in charge had the right to choose people from any IBM units. I was working in Paris when I was picked out of the blue to be sent to New York. Proof of the faith people had in IBM is that I never heard of anyone refusing to move, nor regretting it. However, other quiet warnings were taken less seriously.


... snip ...

i didn't make myself popular at the time making references to the future system project having some similarities to a cult film that had been playing non-stop for more than a decade down in central sq. (some reference to the inmates being in charge of the institution, and that there was so much stuff impractical for the period that they couldn't possibly pull it off).

of course i was also somewhat opinionated that what i had already deployed and running in scheduling was more advanced than the daydreams about scheduling in the FS documents.

misc. past posts taking note of the crisis and change article
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
https://www.garlic.com/~lynn/2003p.html#25 Mainframe Training
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005s.html#16 Is a Hurricane about to hit IBM ?
https://www.garlic.com/~lynn/2006.html#7 EREP , sense ... manual

the intro/abstract for the referenced article
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm

The rise and fall of IBM

Jean-Jacques DUBY Scientific Director of UAP Former Science & Technology Director of IBM Europe

After 40 years of unrivalled success, IBM is now in serious trouble. What has happened? Jean-Jacques Duby explains how the company's values and the cogs and wheels of its internal management system doomed IBM to failure, in the light of long developments in the technical, economic and commercial environment.

But why there should have been such a sudden shock remains a mystery. Perhaps IBM's mighty power had delayed its downfall, making this all the more brutal as a result, like the earthquake which follows the sudden encounter of two continental plates.



IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Tue, 28 Feb 2006 07:55:58 -0700
jmfbahciv writes
TOPS-10 called it system idle time. The system that had the most continuous high idle time was the boring one which not many people used. Thus the system load on the "slower" machine was so much lighter that it could deliver more computing service for this application than the "faster, sexier" machine because everybody else (a.k.a. users) was using the faster machine.


the 370/195 in bldg. 28 was set up as a service for lots of organizations. pasc's 370/145 was set up just for pasc's personal computing use.

pasc had a big number crunching job that only took several hrs to run on the 370/195 ... but over time the 370/195 service acquired a bigger and bigger backlog of number crunching jobs (and turn-around time is a combination of queuing time and actual service time).

at some point, somebody realized that the idle time on the pasc machine intended for personal computing use ... could be harnessed for a compute-intensive number-crunching application. this is somewhat analogous to some of the current distributed applications that have harnessed the idle time on thousands of personal computers for tackling compute-intensive applications (although the pasc application wasn't amenable to a distributed implementation ... its use of idle time on a computer installed for personal computing has some similarities).

part of the issue is realizing that a tool could be adapted for purposes it wasn't originally intended for ... i.e. the idle time on a computer installed for interactive computing use could be adapted for a single long-running compute-intensive application.

the 370/195 service was extremely popular with the compute-intensive, number-crunching community ... but wasn't very popular with the interactive, personal computing crowd. idle time had been a characteristic of hard-to-use and/or unpopular computers ... but it also started to become a characteristic of computers installed for interactive, personal computing use ... especially as the trade-off between people-time costs and computer-time costs shifted.

during this period ... before idle time was widely accepted as overhead for optimizing personal computing use ... there were lots of studies on the effect of interactive response on personal productivity ... and on the different classes of systems that did either well or poorly at providing interactive response (could a system be created that optimized for total, 100 percent system resource utilization while still providing optimized interactive response?). this was the era of arguments about whether subsecond response was good or bad for humans, whether it might actually make a difference in human productivity, and/or whether optimizing for human productivity was actually a good or bad thing.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Tue, 28 Feb 2006 08:05:43 -0700
KR Williams writes
That is *not* SMP. Fault-tolerant computing comes closer and SMP isn't needed to tolerate faults. &Bigbank. in the UK used a four- way (2SPx2AP) 3090 as a MP (each side doubling for the other), then doubled that on-site, and doubled that again off-site to get to a level or FT they could live with. This certainly isn't SMP.


when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

we coined the terms disaster survivability and geographic survivability to distinguish from simple disaster/recovery. in that period, there had been some studies of dataprocessing operations surviving natural disasters ... which found that most disasters tended to have a localized geographic impact within a 40-mile radius (although, as noted in past threads, some could have a geography-specific distribution characteristic ... like less than a mile on each side of a flooding river ... where the length of the documented flood plain could stretch for much more than 40 miles).

also during that period, we were asked to help write the corporate continuous availability strategy document ... we put in some stuff based on work we were doing on ha/cmp geographic survivability ... which was later removed because of objections from other corporate product divisions (as being requirements they couldn't meet).

misc. past posts mentioning disaster survivability and/or geographic survivability:
https://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
https://www.garlic.com/~lynn/99.html#145 Q: S/390 on PowerPC?
https://www.garlic.com/~lynn/99.html#184 Clustering systems
https://www.garlic.com/~lynn/aadsm2.htm#availability A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/2000g.html#27 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#41 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001i.html#41 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#43 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#46 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#13 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001n.html#47 Sysplex Info
https://www.garlic.com/~lynn/2002.html#44 Calculating a Gigalapse
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#68 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002i.html#24 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002l.html#15 Large Banking is the only chance for Mainframe
https://www.garlic.com/~lynn/2002m.html#5 Dumb Question - Hardend Site ?
https://www.garlic.com/~lynn/2002p.html#54 Newbie: Two quesions about mainframes
https://www.garlic.com/~lynn/2003.html#38 Calculating expected reliability for designed system
https://www.garlic.com/~lynn/2003f.html#36 Super Anti War Computers
https://www.garlic.com/~lynn/2003h.html#31 OT What movies have taught us about Computers
https://www.garlic.com/~lynn/2003h.html#60 The figures of merit that make mainframes worth the price
https://www.garlic.com/~lynn/2003o.html#6 perfomance vs. key size
https://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
https://www.garlic.com/~lynn/2004c.html#16 Anyone Still Using Cards?
https://www.garlic.com/~lynn/2004h.html#20 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004m.html#24 IBM Spells Out Mainframe Strategy
https://www.garlic.com/~lynn/2004m.html#26 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#39 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004o.html#5 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004p.html#48 History of C
https://www.garlic.com/~lynn/2005.html#52 8086 memory space
https://www.garlic.com/~lynn/2005c.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005c.html#7 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005i.html#11 Revoking the Root
https://www.garlic.com/~lynn/2005k.html#28 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005n.html#0 Cluster computing drawbacks
https://www.garlic.com/~lynn/2005n.html#7 54 Processors?
https://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circuits
https://www.garlic.com/~lynn/2005o.html#38 SHARE reflections
https://www.garlic.com/~lynn/2005p.html#11 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005q.html#38 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005u.html#37 Mainframe Applications and Records Keeping?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Wed, 01 Mar 2006 12:14:35 -0700
Morten Reistad writes
The $TELCO procedures I mentioned could clearly be problematic in a setting like New Orleans; but they would be among the last services to go.

In addition to the separation requirement there are requirements that the sites not share a single CO for telco access, nor the same power substation. It is also a requirement that they are not in the same floodarea (i.e. a full set of dikes between them if they are lowlaying); also they are not to be served by the same main road; and some distance requirements to airports, pipelines etc.

They are not overdoing it either. Telcos don't expect to provide other than emergency services after martial law has been declared. Sending out notices about this is a routine occurrence at any large telco.



there was a single dataprocessing site that had carefully selected a building in a block that had different water mains down two different sides of the building, electrical feeds from different substations in two different directions, and telco from four different COs in four different service directions (lots of this is frequently referred to as telco provisioning and/or diverse routing). one day a transformer in the basement blew and required that the building be evacuated ... in addition to taking out the service.

in the early 80s, my wife was involved in putting a (non-AT&T) CO into the westchester county area. turns out they didn't get their 48v battery array set up correctly ... and at the first power outage, the incorrectly set up battery array took out all phone service in westchester county (lots of nasty finger pointing from multiple directions).

there is the well-known story about internet service to new england being taken out. the service had been carefully set up with 9 different lines going over 9 different trunks in 9 different directions out of the new england area. however, over the years the 9 different trunks had been slowly consolidated ... until they were all being carried by a single fiber-optic cable that went thru someplace in Connecticut ... where a backhoe was digging one day.

one of the scenarios is to consider no single point of failure ... which not only includes the provisioning of actual equipment in the datacenter, but all services required by the datacenter ... and making sure that none of them are likely to be subject to a common disaster (i.e. the rule of thumb about 40-mile separation ... tho even that might not preclude all possible kinds of common disasters ... for instance something like a complete regional power blackout).

there is also the story about a major atm transaction processing datacenter ... that had its disaster/recovery site in the WTC. after the bombing in the early 90s, the disaster/recovery site was down ... and in that period there was a heavy snowfall in the area and the roof of the prime-site datacenter collapsed. it took them something like 48hrs to get something back up and running.

US HONE evolved from a couple of simple cp67 datacenters in the early 70s into several major vm370 datacenters by the mid-70s, providing online services for all sales, marketing and field people in the US (sometime in the mid-70s, mainframe computer orders were getting so complex that they could no longer be filled out by hand and had to be run thru a HONE configurator). around 77, all the US HONE datacenters were consolidated into a single datacenter in northern cal. this had multiple large SMP processors in clusters, all interconnected with the disk farm and other services. load-balancing and fall-over was provided across all processors in the complex.

approx. 1980, the US HONE datacenter was replicated in Dallas because of concern about earthquakes as a failure mode. this provided load-balancing and fall-over within the datacenters as well as between the northern cal. datacenter and the dallas datacenter. shortly afterwards, a third redundant, load-balancing, fall-over datacenter was created in boulder. about that time, US HONE had a number of defined userids approaching 40k (sales, marketing, and field people in the US).

this is in addition to the US HONE datacenter being cloned at many places around the world ... providing world-wide support for sales, marketing, and field people.

misc. past postings mentioning HONE (and/or APL)
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hercules 3.04 announcement

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules 3.04 announcement
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 01 Mar 2006 12:17:41 -0700
"Kevin G. Rhoads" writes
Was that "King of Hearts"?


ref:
https://www.garlic.com/~lynn/2006d.html#15 Hercules 3.04 announcement

yep

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Statistics on program constant sizes?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Statistics on program constant sizes?
Newsgroups: comp.arch,comp.programming
Date: Wed, 01 Mar 2006 14:10:03 -0700
MitchAlsup writes
Address constants need to be considered in two situations, offsets and direct addresses. These constants are invented by the compiler and/or linker and not typically seen by the application programmer. There are typically more of these constants than there are arithmetic constants. Offsets are typically small, direct address constants are rarely small. Offsets are used to represent a displacement from the beginning of a structure to that element, displacements onto the stack for that element, and displacements into a frame. Only rarely are frames and stack depths such that these displacements need to be large (> 8-bits). Direct addresses are amost always left to the linker for virtual placement of the referenced data item. As such, these are almost invariably allocated a virtual address bit-size* in the instruction even if the linker succeeds in placing that data item in a location that could be addressed directly with fewer bits.


there is an issue with "absolute" address constants in virtual page-mapped operation ... avoiding the need to pin a page-mapped object to a specific virtual address because of direct-address objects embedded in it, and/or having to prefetch random pieces of the page-mapped object in order to swizzle the direct-address objects to correspond to the address at which the object has been loaded in a specific address space (this is particularly aggravating in large systems, where the same page-mapped object may appear simultaneously in multiple different virtual address spaces on the same real machine ... and possibly have to be materialized at different virtual addresses).

systems designed from the start for this type of virtual address space behavior have typically done two things:

1) used larger offset-address objects that can cover a large, complete page-mapped object

2) kept the direct-address objects in a structure separate from the program ... where they can be swizzled for mapping to a specific virtual address (within a specific address space) w/o needing to modify the (program) page-mapped object

i got into trying to craft both techniques in the early 70s when introducing a page-mapped paradigm into a system that hadn't originally been designed for virtual-address page-mapped operation. misc. past posts mentioning some of the difficulties
https://www.garlic.com/~lynn/submain.html#adcon
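a minimal sketch (in C, purely illustrative ... all names here are hypothetical, not the actual implementation) of technique 2: the shared, page-mapped image carries only self-relative offsets, and the per-address-space direct addresses are swizzled into a separate private table, so the shared image itself is never modified:

#include <stddef.h>
#include <stdint.h>

/* private, per-address-space table of materialized direct addresses;
   assumes count <= 16 for the sketch */
struct adcon_table {
    size_t    count;
    uintptr_t slot[16];
};

/* offsets[] come from the read-only shared image; base is whatever
   virtual address the image happens to be mapped at in this space */
void swizzle(struct adcon_table *t, const size_t *offsets,
             size_t count, uintptr_t base)
{
    t->count = count;
    for (size_t i = 0; i < count; i++)
        t->slot[i] = base + offsets[i];   /* materialize direct address */
}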

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Thu, 02 Mar 2006 10:21:40 -0700
jmfbahciv writes
As for sceduling, there are times when just doing the work brute force method is a huge performance win rather than spending a lot of CPU exec time trying to do the most elegant scheduling. IIRC, TTYCHK echoing fell into this category. It was more efficient to just do the echo at the interrupt rather than stack it. Note that this is based on my recall of the guys talking about it.


that was something akin to the original cp67 10-level scheduler (some of which may have even been inherited from ctss). its cpu overhead was greater than any benefit it provided (in part because it only looked at cpu and didn't have any page-thrashing controls). it also had some stuff that rescanned the whole list ... which resulted in the overhead being proportional to the size of the queue (number of users) ... not the amount of work performed.

the change done by lincoln labs to a two-level scheduler ... got a lot less sophisticated about differentiating cpu utilization ... significantly reduced the scheduling cpu overhead and threw in a very rudimentary page-thrashing control ... however, it left in some frequent list scanning ... so while the overhead of scanning the lists was reduced (compared to the earlier implementation) ... it still grew with the size of the queues ... not in proportion to the thruput.

so the fundamental approach of the dynamic adaptive feedback stuff was to attempt to achieve near-zero scheduling pathlength overhead while at the same time making the overhead proportional to the work performed ... not to the size of the queues (which can grow non-linearly, independent of the amount of resources handed out).

that was part of the various sleight-of-hand stuff ... coming up with coding solutions that didn't involve rescanning complete queues or attempting things like n*(n-1)/2 sets of state comparisons.

rule of thumb: scheduling overhead should be proportional to the thruput, not the size of the queues. for some topic drift ... we encountered a drastic example of this in the early web scale-up days. tcp implementations originally assumed relatively long-lived sessions. http came along and used tcp for reliable transaction operation ... with all the setup/teardown ... a minimum seven-packet exchange (we had earlier worked on xtp, which had a minimum three-packet exchange for reliable transactions). tcp kept a finwait list of all terminated sessions on the off chance that some dangling packets might arrive after the termination handshake (because of the out-of-order delivery feature of underlying ip). as webservers scaled up ... there was a period where several major platforms were spending 95 percent of total cpu doing a linear scan of the finwait list on packet arrival. misc. past finwait posts:
https://www.garlic.com/~lynn/99.html#1 Early tcp development?
https://www.garlic.com/~lynn/99.html#164 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/2000c.html#52 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2002.html#3 The demise of compaq
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002i.html#39 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002q.html#12 Possible to have 5,000 sockets open concurrently?
https://www.garlic.com/~lynn/2003e.html#33 A Speculative question
https://www.garlic.com/~lynn/2003h.html#50 Question about Unix "heritage"
https://www.garlic.com/~lynn/2004m.html#46 Shipwrecks
https://www.garlic.com/~lynn/2005c.html#70 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005g.html#42 TCP channel half closed
https://www.garlic.com/~lynn/2005o.html#13 RFC 2616 change proposal to increase speed
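as an aside, a minimal sketch (in C, purely illustrative ... not from any of the actual tcp stacks) of why hashing the connection 4-tuple makes the per-packet cost independent of the number of terminated sessions, instead of scanning the whole finwait list:

#include <stdint.h>

#define NBUCKETS 4096

struct conn {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
    struct conn *next;             /* chain within one bucket */
};

static struct conn *finwait[NBUCKETS];

static unsigned hash4(uint32_t sa, uint32_t da, uint16_t sp, uint16_t dp)
{
    return (sa ^ da ^ ((uint32_t)sp << 16) ^ dp) % NBUCKETS;
}

/* a dangling packet's connection is found without touching the other
   entries ... cost no longer grows with the size of the finwait list */
struct conn *finwait_lookup(uint32_t sa, uint32_t da,
                            uint16_t sp, uint16_t dp)
{
    for (struct conn *c = finwait[hash4(sa, da, sp, dp)]; c; c = c->next)
        if (c->saddr == sa && c->daddr == da &&
            c->sport == sp && c->dport == dp)
            return c;
    return 0;
}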

misc. past posts mentioning xtp (and hsp, high-speed protocol, as well as our problems with OSI in ISO standards meetings)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

another scheduling point i learned from cp67 in my undergraduate days is that there is frequently a whole bunch of scenarios where spending a lot of time making a decision has little or no overall effect. make the most frequent decisions a lot simpler and save the more sophisticated stuff for the less frequent ones. misc. past posts on scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare

one of the other places i threw in sleight-of-hand coding was in page replacement. i had highly optimized the selection to be near-zero overhead ... while still near-optimally approximating true LRU replacement. a problem is that true LRU (and the various approximations) degenerate to FIFO under various conditions. i came up with a sleight of hand that degenerated to random (instead of FIFO) under those conditions. part of the trick was that the transition to random didn't involve any real coding ... it involved a slightly different way the data structures were built/handled ... so that degenerating to random (instead of FIFO) was much more a side-effect of the data structure ... as opposed to lots of fancy coding (which would have tended to have significantly more overhead).
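for reference, a minimal sketch (in C, purely illustrative) of the standard clock/reference-bit approximation to LRU that serves as the baseline here; the FIFO degeneration shows up when every frame's reference bit is set, so the sweep clears them all and selection order collapses to arrival order (the random-fallback variant described above changed the data-structure handling and is not reproduced here):

#define NFRAMES 1024

struct frame { int referenced; };  /* bit set when the page is touched */

static struct frame frames[NFRAMES];
static int hand;                   /* the clock pointer */

int select_victim(void)
{
    for (;;) {
        if (!frames[hand].referenced) {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;         /* near-zero overhead common case */
        }
        frames[hand].referenced = 0;   /* second chance */
        hand = (hand + 1) % NFRAMES;
    }
}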

misc. past postings on page replacement, etc
https://www.garlic.com/~lynn/subtopic.html#wsclock

for even more drift ... effectively, part of dynamic adaptive resource management was keeping track of the rate at which resources were consumed. xtp also wandered into doing rate-based pacing ... as opposed to windows and things like slow-start for congestion control. in the same period that slow-start was originally published ... there was a paper in the acm sigcomm conference showing that window-based congestion control was unstable in high-latency, heterogeneous networking environments (and the original point of windows was latency compensation in high-latency environments). misc. past posts mentioning rate-based pacing
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2002p.html#28 Western Union data communications?
https://www.garlic.com/~lynn/2002p.html#31 Western Union data communications?
https://www.garlic.com/~lynn/2003.html#55 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003.html#59 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#64 UT200 (CDC RJE) Software for TOPS-10?
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003j.html#19 tcp time out for idle sessions
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003p.html#15 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004k.html#8 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#12 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#13 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#16 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#29 CDC STAR-100
https://www.garlic.com/~lynn/2004n.html#35 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#62 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#57 high speed network, cross-over from sci.crypt
https://www.garlic.com/~lynn/2005d.html#6 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005g.html#4 Successful remote AES key extraction
https://www.garlic.com/~lynn/2005q.html#22 tcp-ip concept
https://www.garlic.com/~lynn/2005q.html#28 tcp-ip concept
https://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
https://www.garlic.com/~lynn/2005t.html#38 What ever happened to Tandem and NonStop OS ?
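a minimal sketch (in C, hypothetical names; posix clock_gettime used for the clock) of the rate-based idea ... the sender's control variable is the inter-packet interval rather than a window of outstanding packets:

#include <time.h>

struct pacer {
    double interval;               /* seconds between packets (1/rate) */
    struct timespec next;          /* earliest permitted send time */
};

/* returns nonzero if a packet may be sent now; on send, schedules the
   next eligible time ... no counting of outstanding packets */
int pacer_may_send(struct pacer *p)
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    if (now.tv_sec < p->next.tv_sec ||
        (now.tv_sec == p->next.tv_sec && now.tv_nsec < p->next.tv_nsec))
        return 0;                  /* not yet */
    double t = now.tv_sec + now.tv_nsec / 1e9 + p->interval;
    p->next.tv_sec  = (time_t)t;
    p->next.tv_nsec = (long)((t - (double)(time_t)t) * 1e9);
    return 1;                      /* send, and space out the next one */
}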

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Thu, 02 Mar 2006 11:17:31 -0700
Steve O'Hara-Smith writes
No it's not. The most common way of providing delivery of non stop continuous computing services these days is massive redundancy at the machine level rather than redundancy within the machine.

The first setup I worked on with this approach used 20 Motorola 88K based UNIX boxes with a spec requirement that any single machine could fail and be replaced with a new box without interrupting the processing or losing a single bit of data. It took some careful design but that spec was achieved. These days it's considered to be an easy stunt to pull off and systems of several hundred machines in which several machines can die without anyone noticing a glitch are in use.



when we were doing ha/cmp ... we talked to the 1-800 people. they had a five-nines requirement (about 5 minutes downtime per year) ... and were using a platform that was hardware fault tolerant. however, the platform periodically required system software maintenance ... even doing it only once a year ... they still saw outages on the order of 30+ minutes (using several years' worth of downtime budget per year).

the ss7 had a pair of T1 links that it used to talk to the 1-800 lookup box ... and if it failed to get an answer from one side ... it would redrive the same request down the other link (in some sense 1-800 numbers are like domain/host names that have to be looked up to get the real network address ... err, phone number)

a pair of ha/cmp boxes ... one attached to each T1 link ... could provide the same hardware MTBF as the fault tolerant hardware box ... with the redrive from the ss7 masking any failure/fall-over.
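a minimal sketch (in C; send_request and the link names are hypothetical placeholders, not any actual ss7 api) of the redrive pattern as described:

#include <stddef.h>

enum link { LINK_A, LINK_B };

/* assumed helper: sends the lookup on one T1 link and waits for the
   answer; returns 0 on a timely answer, nonzero on timeout/failure */
int send_request(enum link l, const char *number, char *answer, size_t len);

int lookup_1800(const char *number, char *answer, size_t len)
{
    if (send_request(LINK_A, number, answer, len) == 0)
        return 0;
    /* no answer from one side ... redrive the identical request down
       the other link, masking any fail-over on the far end */
    return send_request(LINK_B, number, answer, len);
}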

then there was a proposal to use a pair of fail-over fault-tolerant hardware boxes ... in order to mask their software downtime; however, going to the expense of a pair of fail-over fault-tolerant hardware boxes negated the purpose of having the fault-tolerant hardware boxes in the first place. part of this was that the reliability of basic (non-fault-tolerant) hardware had significantly advanced/improved over the years.

misc. past ha/cmp posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2006 07:54:12 -0700
cdl@deeptow.ucsd.edu (Carl Lowenstein) writes
LBTs?? Well, I cast my mind back about 35 years and say "aha Little Blue Tapes" for sticking down the ends of DECtape. I always thought they stuck on by static electricity not by spit.


they were also common on ibm tapes ... especially the ptf tapes ... small gray plastic reels ... barely bigger than the space for the hub ... potentially holding 100ft or so of tape. they didn't have cases. the little blue tapes were a little less important on tapes that came in cases. "ptf" ... program temporary fix ... distributed on these really cheap mini-reels.

then came the drives that supported the autoloading band ... and you no longer needed such stuff.

a little topic drift ... recent thread about mounting tape (with some URL pointers to pictures of tape drives and (large) reels of tape):
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape
https://www.garlic.com/~lynn/2006b.html#7 Mount a tape

this is a picture of a 3420 9-track tape ... with the autoload strap ... you no longer needed the little blue tapes to keep the tape from unraveling around the hub.
http://ftp.columbia.edu/acis/history/media.html

the small gray plastic ptf tapes (with possibly 100-200ft of tape, as opposed to 2400ft for full-size tapes; there were also "mini-reels" that were 1200ft, and mini/mini reels that came in a rubbermaid-like snap-off cover container with possibly as little as 400ft of tape) had maybe a half-inch wide flange for the tape (maybe an inch bigger in diameter than the hole in the middle for the tape drive hub).

another picture of a 3420 9-track tape reel, along with a 3480 tape cartridge
http://researchweb.watson.ibm.com/journal/rd/474/brads7.jpg

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2006 12:44:17 -0700
David Scheidt writes
If the fault tolerent application has performance requirements, then you can't. Half the machine has to be able to meet the peak demands. If you're relying on the "redundant" gear to meet performance goals, then you've no longer got redundancy.

It's possible to build systems that take advantage of the redundant hardware for increased performance. I've even run some. But there is always pressure to add features, and those features eat system resources. If the system managers aren't careful, the resource usage will exceed what the cluster in a degraded state can do. For lots of applications, a system that is performing poorly is worse than one that's dead.



when we started project for ha/cmp product ...
https://www.garlic.com/~lynn/subtopic.html#hacmp

i created a cluster taxonomy around

• mode1 ... 1+1 fail-over (2nd processor idle)
• mode2 ... 1+1 mutual fall-over ... both processors running, but the 2nd, fail-over processor running non-critical apps
• mode3 ... 2 ... i.e. concurrent operation running shared workload and concurrent access to shared disks; this required something more akin to mainframe loosely-coupled or vax/cluster concurrent disk access support

from there we expanded from 2 to N ... where there could be N+1 ... a spare processor could stand in for any single failed processor (some years later we found some other vendors using similar language in their marketing brochures).

this is more akin to raid for disk reliability ... aka simple mirroring has fully replicated disks ... but raid5 has something like 8+1 ... eight data disks plus a parity disk ... where, with any single disk failure, you keep running with the remaining disks.
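a toy illustration (in C, not any actual raid implementation) of the xor-parity arithmetic behind the 8+1 arrangement ... the parity block is the xor of the eight data blocks, so any single lost block (data or parity) can be rebuilt from the eight survivors:

#include <stddef.h>
#include <stdint.h>

#define NDATA 8                    /* eight data blocks ... */
#define BLK   4096                 /* ... plus one parity block */

/* rebuild a single missing block (index 'lost', where NDATA denotes
   the parity block itself) by xoring the survivors */
void rebuild(uint8_t blocks[NDATA + 1][BLK], int lost)
{
    for (size_t i = 0; i < BLK; i++) {
        uint8_t x = 0;
        for (int b = 0; b <= NDATA; b++)
            if (b != lost)
                x ^= blocks[b][i];
        blocks[lost][i] = x;
    }
}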

there are also some similarities to the resource consumption discussed for background batch ... i.e. applications soaking up resources not needed by other work (and possibly lacking any significant time-critical requirements).
https://www.garlic.com/~lynn/2006c.html#44 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006d.html#0 IBM 610 workstation computer

note that with software becoming more & more the primary failure cause ... partitioned, non-shared memory provides an extra level of fault isolation ... that may be compromised in shared-memory/smp configurations.

and, of course, shared memory isn't very conducive to geographically separated implementations.

for a little topic drift ... my wife did a stint in pok where she was responsible for (mainframe) loosely-coupled (cluster) architecture ... where she created Peer-Coupled Shared Data architecture ...
https://www.garlic.com/~lynn/submain.html#shareddata

which has evolved into today's parallel sysplex; a few references:
http://www-03.ibm.com/servers/eserver/zseries/swprice/sysplex/
http://www.redbooks.ibm.com/abstracts/sg245346.html

and geographically dispersed parallel sysplex (geoplex):
http://www-306.ibm.com/software/success/cssdb.nsf/CS/AMWR-646P8F?OpenDocument&Site=software

and current ha/cmp web page
http://www-03.ibm.com/servers/systems/p/software/hacmp.html

which even mentions various remote site backup implementations

this mentions ha/cmp enhanced scalability
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg245327.html?Open

... which somewhat got side-tracked ... because we were doing scale-up as part of the original ha/cmp effort ... minor ref
https://www.garlic.com/~lynn/95.html#13

as mentioned in the above, we sort of got our hands slapped and told we couldn't work on anything with more than four processors.

for more drift, when we started hsdt (high-speed data transport) project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

... one of the things was some high-speed satellite links (pilot stuff; we had a dedicated transponder on sbs-4), and we got involved with designing fault-tolerant tdma earth stations (requirements more akin to telco/CO switching hardware ... hot-pluggable boards ... real-time removal and re-insertion of processor and communication boards in the rack). unsubstantiated ... but one of the vendors building a set of tdma earth stations to our "spec" ... claimed to have been approached (by some other organization) to build a duplicate set of earth stations to the same specification (a little industrial espionage?)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Caller ID "spoofing"

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Caller ID "spoofing"
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2006 13:12:11 -0700
greymaus writes
PS, Lynn was making the point a while ago about the rate of change in hardware. Marx's 'Das Kapital' tried to theorize about how changes in technology create changes, and forced further changes in society, Marx tried to extend this to show that genuinely socialist system would take over (In my opinion, forcing the original theories into his preferred world outlook), but that was technology working fairly slowly, the frantic pace at the moment will create a logjam (to make an analogy) that may crash the system..


part of that was boyd and OODA-loops
https://www.garlic.com/~lynn/subboyd.html#boyd

some of the other web-sites where you find boyd and OODA-loop references are now looking at it from standpoint of business agility and adaptability.
https://www.garlic.com/~lynn/subboyd.html#boyd2

boyd would do his briefing for commercial companies, and the issues of agility and adaptability are very applicable ... but the historical agile/adapting examples in the core briefings tended to be pulled from military history. the principles are applicable to any sort of competitive situation ... but historical short-period agile/adaptable examples are harder to find outside of military history (unless possibly you go all the way to darwin).

with regard to the mentioned ach pull/push (dda / checking accounts) ... the x9a10 financial standards working group was given the requirement to preserve the integrity of the financial infrastructure for all retail payments. as a result, we did a pretty detailed threat and vulnerability examination of the various retail payment mechanisms (ach, debit, credit, stored-value, etc) when coming up with the x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

the requirement for x9.59 was "ALL" ... not just a single transaction type or just retail or just point-of-sale ... but "ALL".

in parallel with the x9.59 standards work ... a parallel submission was made to nacha (we ghosted the submission, but it was submitted by somebody else since we weren't nacha members at the time):
https://www.garlic.com/~lynn/nacharfi.htm
more background
https://www.garlic.com/~lynn/x959.html#aads

nacha website
http://www.nacha.org/

which is national organization for ACH payment operations.

as part of some of the authentication work, we were called in to help word-smith the cal. state electronic signature legislation (and then later the federal legislation).
https://www.garlic.com/~lynn/subpubkey.html#signature

one of the industry groups participating in the electronic signature legislation was also looking at various privacy issues and had done a study of the primary driving factors behind privacy regulation and legislation ... finding the two primary driving factors were 1) id theft and 2) denial of service (to individuals by institutions; gov, commercial, etc).

a major component of id theft has been account fraud ... and there have been recent efforts to explicitly differentiate account fraud from other forms of identity fraud ... also, i was asked to give the keynote on identity theft a couple of years ago at the us treasury's annual conference.

x9.59 finally passed in 2000 ... and within the past month or so, the five-year update passed (the updated document should shortly be showing up on the ANSI website). there is also a field defined in the current ISO 8583 standard (credit/debit) for carrying the additional x9.59 payload.

various postings about risk, fraud, exploits, vulnerabilities, etc
https://www.garlic.com/~lynn/subintegrity.html#fraud

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Caller ID "spoofing"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Caller ID "spoofing"
Newsgroups: alt.folklore.computers
Date: Fri, 03 Mar 2006 13:50:50 -0700
Anne & Lynn Wheeler writes
a major component of id theft has been account fraud ... and there have been recent efforts to explicitly differentiate account fraud from other forms of identity fraud ... also, i was asked to give the keynote on identity theft a couple of years ago at the us treasury's annual conference.


we had worked on the original payment gateway for what has since come to be called e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

part of that was using this technology called SSL, which a small client/server startup in the valley had, for hiding the account numbers while they were being transmitted over the internet.

however, the major exploits (predating the internet) have been skimming the account number at the transaction site and data breaches copying the transaction log file ... minor reference trying to put security proportional to risk into perspective
https://www.garlic.com/~lynn/2001h.html#61

using SSL for hiding the account number during transmission did nothing to address the major existing vulnerabilities. furthermore, the arrival of the internet tended to create additional vulnerabilities for the transaction log file (which weren't addressed by SSL). some amount of work was done on adding additional authentication processes for internet transactions; however, that also failed to address the major vulnerabilities of skimming and breaches ... and the subsequent use of the account number in non-authenticated transactions.

x9.59 looked at the requirement for preserving the integrity of the financial infrastructure for all retail payments in two ways

1) authenticated transactions (similar to other types of efforts going on in the mid-90s)

and

2) business rule that account numbers used for x9.59 transactions could not be used in non-authenticated transactions.

the second was a recognition that account numbers are needed by a broad range of different business processes ... not just the initial transaction authorization. therefore you could blanket the earth under miles of crypto attempting to hide account numbers ... and there would still be leakage. with x9.59, "x9.59" account numbers could be subject to all sorts of breaches and skimming, and the crooks still could not use them for account fraud.
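
as a rough sketch of that business rule (purely hypothetical names and fields ... x9.59 specifies message contents and business rules, not code), an issuer-side authorization check might look like:

from dataclasses import dataclass

@dataclass
class Account:
    number: str
    x959_only: bool               # hypothetical flag: authenticated-use-only account

@dataclass
class Transaction:
    account: Account
    amount_cents: int
    signature_verified: bool      # outcome of the digital signature check

def authorize(txn: Transaction) -> str:
    # the x9.59 business rule: an account number registered for x9.59 use
    # is rejected in any transaction lacking verified authentication ...
    # so a leaked/skimmed number alone enables no account fraud
    if txn.account.x959_only and not txn.signature_verified:
        return "reject: authentication required for this account"
    return "continue with normal authorization checks"

# a skimmed number replayed without a digital signature goes nowhere
acct = Account(number="4000123412341234", x959_only=True)
print(authorize(Transaction(acct, 500, signature_verified=False)))
print(authorize(Transaction(acct, 500, signature_verified=True)))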

not too long ago there was a question in some forum about whether security would require equal-strength confidentiality and authentication. this is somewhat from the security taxonomy PAIN
P - privacy (sometimes CAIN, confidentiality)
A - authentication
I - integrity
N - non-repudiation


the existing account vulnerabilities and fraud exist because just knowing the account number is sufficient to perform fraudulent transactions. as a result, the countermeasure has been ever increasing amounts of cryptography for hiding the account numbers. In practice this is not viable, in part because of the number of business processes that require access to the account numbers. Also, there has been a relatively recent study, re-affirming long standing information, that the majority of data breaches involve insiders.

So the approach by x9.59 was to eliminate the shared-secret status of (x9.59) account numbers (i.e. akin to passwords, just knowing the value enables impersonation and/or fraud) ... using strong authentication as a substitute for strong privacy/confidentiality.

For x9.59, it is no longer necessary to have strong encryption to enforce privacy/confidentiality ... as part of preserving the integrity of the financial infrastructure for all retail payments ... it is just necessary to have consistent strong authentication applied to all transactions (transactions for the associated account number w/o required authentication are rejected).

for other drift ... merged security taxonomy and glossary
https://www.garlic.com/~lynn/index.html#glosnote

for some additional drift ... discussion of SSL exploits/vulnerability associated with MITM-attacks and/or phishing (in part, because SSL is frequently not deployed as originally intended):
https://www.garlic.com/~lynn/aadsm14.htm#5 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm15.htm#28 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm19.htm#13 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm19.htm#26 Trojan horse attack involving many major Israeli companies, executives
https://www.garlic.com/~lynn/aadsm20.htm#9 the limits of crypto and authentication
https://www.garlic.com/~lynn/2003n.html#10 Cracking SSL
https://www.garlic.com/~lynn/2004i.html#16 New Method for Authenticated Public Key Exchange without Digital Ceritificates
https://www.garlic.com/~lynn/2004q.html#42 browser without "padlock" secure?
https://www.garlic.com/~lynn/2005g.html#1 What is a Certificate?
https://www.garlic.com/~lynn/2005g.html#44 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005i.html#7 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005l.html#19 Bank of America - On Line Banking *NOT* Secure?
https://www.garlic.com/~lynn/2005m.html#0 simple question about certificate chains
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm21.htm#40 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/2006c.html#36 Secure web page?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sat, 04 Mar 2006 09:52:51 -0700
Morten Reistad writes
These squabbles undoubtedly harmed unix a lot.

Most of it was corporate empires (as in "Dilbert") not getting the message of their products; and fighting over turf.

But 30 years is way too much. Unix wasn't ready for large-scale deployment before around 1978, and an unencumbered version was ready late 1992. I didn't see serious corporate squabbles before around 1984, when the BSD/SYSV fight was out in earnest. In the mean time Sun and Apollo([HP]) had large successes with unix or unix-like OSes, and DEC, NCR and others also had pretty good unix solutions.



note that true proprietary also meant non-portable. in the 60s and 70s ... hardware vendors were spending enormous amounts of money on proprietary operating systems for their platforms (sometimes spending more money on core software as an enabler for their hardware). the early 80s saw some number of hardware vendors being able to leverage unix to bring out a platform at significantly reduced cost ... compared to the vendor platforms of the 60s & 70s.

prior to the unbundling announcement of 6/23/69 ... software was pretty much all free (or "bundled"). various litigation by the gov. and others prompted the unbundling and charging for software. kernel (operating system) software was still bundled/free ... the justification to the gov. being that a free kernel was needed for the correct operation of the hardware.

the appearance of clone processors by the mid-70s prompted the business decision to start charging for kernel software. my resource manager ... announced May 11, 1976 (transcribed "blue" letter):
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager

was the guinea pig for starting to charge for kernel software (my reward was spending 6 months, off and on, with business, planning, and legal people working on policies and practices for charging for kernel software).

misc. other posts mentioning unbundling:
https://www.garlic.com/~lynn/submain.html#unbundle

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Caller ID "spoofing"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Caller ID "spoofing"
Newsgroups: alt.folklore.computers
Date: Sat, 04 Mar 2006 11:57:19 -0700
greymaus writes
1) The reference I made to the young guy saying that there were faults in an internet banking system used by a local bank here was deliberately vague. This kid was talking to what must have seemed to him to be some old man who may not understand the system, and he may have been bull********, but he seemed to be genuinely angry.. Possibly his first impression of life 'in the real world'?.


there have been lots of recent discussions on mitm and spoofing attacks
https://www.garlic.com/~lynn/subintegrity.html#mitm
on internet banking ... with attackers doing website impersonation for harvesting and skimming (phishing) information
https://www.garlic.com/~lynn/subintegrity.html#harvest

that then can be used for fraudulent transactions.
https://www.garlic.com/~lynn/2006d.html#25 Caller ID "spoofing"

part of this is some of the vulnerabilities in SSL ... not so much in the cryptography but in the business processes around its use. there are numerous browser efforts in progress for the SSL vulnerability countermeasures ... especially focused around spoofing internet banking sites.
https://www.garlic.com/~lynn/2006d.html#26 Caller ID "spoofing"

the assumption behind the early use of SSL for what came to be called e-commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

was that the end-user understood the relationship between a corporate entity and that entity's URL, the end-user typed in the URL to contact that website, and the end-user's browser used SSL to confirm that the website corresponded to the URL typed in (the site that the user thought they were talking to was, in fact, the site they were talking to). This eliminated various website mitm-attacks, spoofing, and impersonations. However, it was predicated on the end-user understanding the corporate entity that they were dealing with and the relationship between that corporate entity and the URL.

As browsers and the web evolved, URLs became more and more something that was supplied and clicked on ... and less and less something that the end-user actually typed in (or even understood anything about).

Now you can have an attacker sending email and/or putting up a spoofed site ... the end-user clicks on something and goes to an "SSL" site. the browser confirms that the "SSL" site the end-user is talking to corresponds to the URL clicked on. Since the attacker can supply the URL ... the only thing proved is that the attacker is who the attacker claims to be (without a provable chain of evidence between who the end-user thinks they are talking to and who they are actually talking to).
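
as a minimal sketch (using python's ssl module) of what that browser-side check actually proves ... note that it succeeds just as happily when the attacker supplied the URL and holds a valid certificate for their own look-alike domain:

import socket, ssl

def check_ssl(hostname: str) -> None:
    # verifies the certificate chain and that the cert matches the hostname ...
    # it proves nothing about whether the user *meant* to visit this hostname
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print(hostname, "matches its certificate:",
                  tls.getpeercert()["subject"])

check_ssl("www.example.com")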

so one of the current countermeasures being floated for spoofed websites ... is a new class of SSL credentials. these will only be issued to businesses that have passed some level of investigation as being a reputable business. browsers will then give some new visual indicator to end-users when dealing with the new kind of SSL credential. the user still won't know if they are dealing with who they think they are dealing with ... but they will at least be assured that whoever it is has passed some level of reputable business audit (rather than simply having proven that they managed to pay for a domain name).

this is sort of along the lines of my suggestion from the early days of SSL and e-commerce that such business audits include things like FBI background checks of all employees (which they still aren't going to do). misc. references:
https://www.garlic.com/~lynn/2001j.html#5 E-commerce security????
https://www.garlic.com/~lynn/2001j.html#54 Does "Strong Security" Mean Anything?
https://www.garlic.com/~lynn/2005v.html#4 ABN Tape - Found
https://www.garlic.com/~lynn/2006.html#33 The new High Assurance SSL Certificates

however, this doesn't necessarily address major exploits and vulnerabilities ... from long before the internet, as well as thru-out the internet age, the majority of sensitive information leakages and data breaches have always involved insiders. the internet age has introduced some new avenues for information leakage and data breaches ... but hasn't significantly reduced actual insider-based exploits (in fact, the possibility that something may have come from the internet may just be used to obfuscate that it really was an insider event).

so the first order response typically has been to heap on more and more security in an attempt to stem the leakage of sensitive information (in some cases extremely internet oriented, even tho it has been proven time and time again that the major threat is from insiders).

the x9.59 financial standard took a different approach ... it was to eliminate account numbers from the category of sensitive information ... even if (x9.59) account numbers were to leak all over the world ... crooks still couldn't use them for fraudulent transactions (this was recognition that it was practically impossible to provide sufficient information hiding technology to prevent account number leakage ... in part, because account numbers are required in so many different business processes).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Caller ID "spoofing"

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Caller ID "spoofing"
Newsgroups: alt.folklore.computers
Date: Sat, 04 Mar 2006 12:38:45 -0700
ref:
https://www.garlic.com/~lynn/2006d.html#25 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006d.html#26 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006d.html#28 Caller ID "spoofing"

and of course, whole collection of past posts mentioning ssl certificates and exploring numerous aspects of the whole paradigm.
https://www.garlic.com/~lynn/subpubkey.html#sslcert

basically the current certification is to prove that the applicant for an SSL domain name certificate is, in fact, the owner of that domain name. this is a countermeasure to some internet (especially domain name infrastructure) integrity issues where one website might impersonate another website.

a problem in some of the current internet bank website exploits is that the process only validates that the website the end-user is talking to is actually the website that corresponds to the URL supplied to the browser. However, it is dependent on the end-user for providing trust continuity between who the end-user believes they are talking to and the supplied URL. With the convention of end-users simply clicking on things purported to be correct URLs ... that trust continuity is broken.

another problem is that the certification authorities that supply SSL domain name certificates require an applicant to provide a lot of information ... on which the certification authorities then perform an expensive, error-prone, and time-consuming identification process, attempting to match the supplied information with the information on file with the domain name infrastructure as to the owner of the domain name.

however, this is the same domain name infrastructure that has some integrity issues which, in turn, give rise to some of the requirements for SSL domain name certificates. so the certification authority industry is somewhat backing various activities to improve the integrity and trustworthiness of the domain name infrastructure (since they are dependent on the integrity of the domain name infrastructure as to whether an ssl domain name certificate applicant is the actual owner of that domain name). one of the proposals is to have domain name owners register a public key when they apply for a domain name. all future communication between the domain name owner and the domain name infrastructure is then digitally signed (helping reduce various kinds of exploits like domain name hijacking).

having public keys on file with the domain name infrastructure also provides an opportunity for the certification authority industry to require that ssl domain name certificate applications be digitally signed. Then the certification authority can replace a time-consuming, error-prone, and expensive identification process with a much simpler, less-expensive and more reliable authentication process (simply by retrieving the on-file public key from the domain name infrastructure and validating the digital signature on the SSL domain name certificate application).
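
a minimal sketch of that authentication-in-place-of-identification flow (hypothetical code ... the registry dictionary and function names here are assumptions for illustration, not any deployed protocol):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# at domain registration time: the owner generates a key pair and the
# domain name infrastructure stores the public key on file for the domain
owner_key = Ed25519PrivateKey.generate()
onfile = {"example.com": owner_key.public_key()}

# the application for an SSL domain name certificate is digitally signed
application = b"certificate application for example.com"
signature = owner_key.sign(application)

def ca_check(domain: str, application: bytes, signature: bytes) -> bool:
    # simple authentication replaces expensive identification: verify the
    # signature against the public key already on file for the domain
    try:
        onfile[domain].verify(signature, application)
        return True     # applicant controls the on-file private key
    except InvalidSignature:
        return False

print(ca_check("example.com", application, signature))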

A catch-22 for the certification authority industry is that improving the integrity of the domain name infrastructure eliminates some of the original motivation for having SSL domain name certificates.

Another catch-22 for the certification authority industry is that the whole SSL domain name certificate paradigm is based on having trusted public keys come from the SSL domain name certificate (as part of proving that the end-user is actually talking to the webserver they think they are talking to). However, the domain name infrastructure is such that if the certification authority industry can directly retrieve on-file, trusted public keys directly from the domain name infrastructure, then so can the rest of the world ... eliminating the need for SSL domain name certificate as a mechanism for providing trusted public keys.

lots of past posts mentioning the catch-22 dilemma for the certificate authority industry
https://www.garlic.com/~lynn/aadsm4.htm#5 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm8.htm#softpki6 Software for PKI
https://www.garlic.com/~lynn/aadsm9.htm#cfppki5 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsmore.htm#client3 Client-side revocation checking capability
https://www.garlic.com/~lynn/aadsmore.htm#pkiart2 Public Key Infrastructure: An Artifact...
https://www.garlic.com/~lynn/aadsm13.htm#26 How effective is open source crypto?
https://www.garlic.com/~lynn/aadsm13.htm#32 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm14.htm#39 An attack on paypal
https://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
https://www.garlic.com/~lynn/aadsm15.htm#28 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm17.htm#18 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#43 SSL/TLS passive sniffing
https://www.garlic.com/~lynn/aadsm19.htm#13 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm19.htm#42 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm20.htm#42 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#43 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#44 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#24 Broken SSL domain name trust model
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm22.htm#0 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#17 Major Browsers and CAS announce balkanisation of Internet Security
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2001l.html#22 Web of Trust
https://www.garlic.com/~lynn/2001m.html#37 CA Certificate Built Into Browser Confuse Me
https://www.garlic.com/~lynn/2002d.html#47 SSL MITM Attacks
https://www.garlic.com/~lynn/2002j.html#59 SSL integrity guarantees in abscense of client certificates
https://www.garlic.com/~lynn/2002m.html#30 Root certificate definition
https://www.garlic.com/~lynn/2002m.html#64 SSL certificate modification
https://www.garlic.com/~lynn/2002m.html#65 SSL certificate modification
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2002o.html#10 Are ssl certificates all equally secure?
https://www.garlic.com/~lynn/2002p.html#9 Cirtificate Authorities 'CAs', how curruptable are they to
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003.html#66 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003d.html#29 SSL questions
https://www.garlic.com/~lynn/2003d.html#40 Authentification vs Encryption in a system to system interface
https://www.garlic.com/~lynn/2003f.html#25 New RFC 3514 addresses malicious network traffic
https://www.garlic.com/~lynn/2003l.html#36 Proposal for a new PKI model (At least I hope it's new)
https://www.garlic.com/~lynn/2003p.html#20 Dumb anti-MITM hacks / CAPTCHA application
https://www.garlic.com/~lynn/2004b.html#41 SSL certificates
https://www.garlic.com/~lynn/2004g.html#6 Adding Certificates
https://www.garlic.com/~lynn/2004h.html#58 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#5 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005.html#35 Do I need a certificat?
https://www.garlic.com/~lynn/2005e.html#22 PKI: the end
https://www.garlic.com/~lynn/2005e.html#45 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005e.html#51 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005g.html#0 What is a Certificate?
https://www.garlic.com/~lynn/2005g.html#1 What is a Certificate?
https://www.garlic.com/~lynn/2005g.html#9 What is a Certificate?
https://www.garlic.com/~lynn/2005h.html#27 How do you get the chain of certificates & public keys securely
https://www.garlic.com/~lynn/2005i.html#0 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005i.html#3 General PKI Question
https://www.garlic.com/~lynn/2005i.html#7 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005k.html#60 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005m.html#0 simple question about certificate chains
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2005t.html#32 RSA SecurID product
https://www.garlic.com/~lynn/2005t.html#34 RSA SecurID product
https://www.garlic.com/~lynn/2005u.html#9 PGP Lame question
https://www.garlic.com/~lynn/2005v.html#3 ABN Tape - Found
https://www.garlic.com/~lynn/2006c.html#10 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#16 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Caller ID "spoofing"

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Caller ID "spoofing"
Newsgroups: alt.folklore.computers
Date: Sat, 04 Mar 2006 13:53:49 -0700
greymaus writes
I would think that would be very difficult, and at that, wouldn't account for people, usually honest, who would have a financial difficulty, or would have a gun pointed at their eyes.


ref:
https://www.garlic.com/~lynn/2006d.html#28 Caller ID "spoofing"

in some sense, the pre-occupation with the internet has diverted some of the focus away from anti-fraud technologies.

at least, the early 80s was a period of fbi background checks as well as various evolving business processes requiring multiple-person participation, and technologies attempting to detect collusion involving multiple responsible parties.

anti-collusion and collusion detection technologies assume that there are business processes in place that require multiple responsible parties (as a single-individual fraud countermeasure).

business processes requiring multiple responsible parties are a countermeasure to single-individual fraud for things like embezzling. collusion is fraud that attempts to defeat countermeasures involving multiple responsible parties.

a trivial example might be a purchasing department requiring multiple, independent signatures ... plus policies that require staggered vacations ... with vacation stand-ins drawn from an independent population.

a few past posts mentioning collusion as a threat/vulnerability (for infrastructures that have multiple responsible parties as countermeasure to single individual fraud)
https://www.garlic.com/~lynn/aadsm3.htm#kiss10 KISS for PKIX. (authentication/authorization seperation)
https://www.garlic.com/~lynn/aadsm7.htm#auth Who or what to authenticate?
https://www.garlic.com/~lynn/aadsm9.htm#pkcs12d A PKI Question: PKCS11-> PKCS12
https://www.garlic.com/~lynn/aadsm11.htm#10 Federated Identity Management: Sorting out the possibilities
https://www.garlic.com/~lynn/aadsm12.htm#33 two questions about spki
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)
https://www.garlic.com/~lynn/2004j.html#15 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2005g.html#37 MVS secure configuration standard
https://www.garlic.com/~lynn/2005g.html#38 MVS secure configuration standard
https://www.garlic.com/~lynn/2005k.html#1 More on garbage
https://www.garlic.com/~lynn/2005v.html#2 ABN Tape - Found

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Caller ID "spoofing"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Caller ID "spoofing"
Newsgroups: alt.folklore.computers
Date: Sun, 05 Mar 2006 15:30:06 -0700
Anne & Lynn Wheeler writes
as part of some of the authentication work, we were called in to help word-smith the cal. state electronic signature legislation (and then later the federal legislation).
https://www.garlic.com/~lynn/subpubkey.html#signature



ref:
https://www.garlic.com/~lynn/2006d.html#25 Caller ID "spoofing"

couple recent stories on caller id spoofing:

Technology Facilitates Caller ID Spoofing
http://abcnews.go.com/Technology/wireStory?id=1675281
Caller ID spoofing becomes all too easy
http://www.usatoday.com/tech/news/2006-03-01-caller-id_x.htm
Caller ID Spoofing Becomes Easy
http://hardware.slashdot.org/hardware/06/03/02/2311218.shtml
FCC Probes Caller-ID Fakers
http://www.wired.com/news/technology/0,70320-0.html?tw=wn_index_1



and for some drift on the authentication subject; multi-factor authentication assumes that the different authentication factors have different threats/exploits; from the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor

something you have
something you know
something you are


pin-debit has the magstripe on the card as proof of something you have and the pin as something you know ... independent authentication factors with independent failure/vulnerability modes. however, if you have written the pin on the card, then a lost/stolen card becomes a common failure/vulnerability.

Nailing fraud at the pump
http://www.buffalonews.com/editorial/20060305/1028422.asp


from above:
Motorists who pay for gas at the pump with credit cards soon may be asked to type in their home ZIP codes - if they haven't had to already.

... snip ...

the issue here is that "ZIP code" is used as something you know authentication for credit cards (analogous to the PIN for pin-debit). however, skimming technology has been applied at the point-of-sale (including fuel pumps) against pin-debit (where both the magstripes and pins are captured for later use with counterfeit cards and fraudulent transactions). "ZIP code" is a countermeasure for lost/stolen credit cards in a manner similar to PIN being a countermeasure for lost/stolen debit cards. However, they are all vulnerable to common skimming exploits.

A typical credit card transaction format definition will have an optional "AVS" field for carrying zip-code information. This somewhat originated for MOTO (mail-order/telephone-order) transactions as additional authentication for card-not-present transactions.
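
a minimal sketch (illustrative only ... the on-file records and response values here are hypothetical) of the something you know AVS check ... note the supplied ZIP is static data, so anything that skims the card data can capture it as well:

# hypothetical issuer records keyed by account number
ON_FILE_ZIP = {"4000123412341234": "14201"}

def avs_check(card_number: str, supplied_zip: str) -> str:
    # returns a simplified match/no-match result; real AVS response
    # codes are richer (full match, partial match, unavailable, ...)
    if ON_FILE_ZIP.get(card_number) == supplied_zip:
        return "match"
    return "no match"

print(avs_check("4000123412341234", "14201"))   # match
print(avs_check("4000123412341234", "90210"))   # no match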

The x9a10 financial standards working group was given the requirement for x9.59 that it preserve the integrity of the financial infrastructure for all retail payments (aka not just internet, or point-of-sale, or debit, or credit, or stored-value ... but "ALL").
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

part of the issue predating the internet was skimming & harvesting exploits; where the transaction information was skimmed/harvested.
https://www.garlic.com/~lynn/subintegrity.html#harvest

if there was a magstripe value, it could be skimmed and used to produce a counterfeit magstripe; if there was a pin, that could be skimmed also. as a result, various of the skimming and harvesting vulnerabilities resulted in compromise of all the static-data-based authentication mechanisms (whether magstripe, pin, password, whatever). a common skimming exploit could record something you have (magstripe), something you know (pin/password), and even something you are (biometric values).

European Central Bank Urges Broader Card Payment Standards
http://www.cardtechnology.com/article.html?id=2006030109GMKSSE

from above:
Even with this, EMV only combats fraud from counterfeited or lost or stolen cards, not, for instance, card-not-present fraud. "Fraud could migrate to less protected transaction types", says the ECB. "Fraud prevention efforts should continue."

... snip ...

EMV has gone thru a couple of generations as previous generations became vulnerable. one of the original EMV generations was vulnerable to the same skimming technology used for magstripe cards ... and the resulting counterfeit cards got the label "yes cards" in the UK press. misc. past posts mentioning yes cards
https://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
https://www.garlic.com/~lynn/aadsm17.htm#13 A combined EMV and ID card
https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
https://www.garlic.com/~lynn/aadsm18.htm#20 RPOW - Reusable Proofs of Work
https://www.garlic.com/~lynn/2003o.html#37 Security of Oyster Cards
https://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
https://www.garlic.com/~lynn/2004j.html#12 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004j.html#13 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004j.html#14 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of ASCII,Invento
https://www.garlic.com/~lynn/2004j.html#35 A quote from Crypto-Gram
https://www.garlic.com/~lynn/2004j.html#39 Methods of payment
https://www.garlic.com/~lynn/2004j.html#44 Methods of payment
https://www.garlic.com/~lynn/2005u.html#13 AMD to leave x86 behind?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

When *not* to sign an e-mail message?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: When *not* to sign an e-mail message?
Newsgroups: sci.crypt,comp.security.pgp.discuss
Date: Sun, 05 Mar 2006 16:33:13 -0700
"nemo_outis" writes
The whole question of digital signing and non-repudiation is fatally flawed.

Why? Because it reverses the burden of proof.

With existing handwritten signatures the burden of verifying the signature falls on the recipient (e.g., banks re a cheque). With digital signatures the sender must prove he didn't send it (e.g., he might argue his key had been stolen).

The traditional basis of signatures is that the burden lies on the fellow relying on them; digital signatures reverse 1000 years of legal and commercial practice. While arguments can be advanced why such a reversal might be desirable they have to overcome this "who proves" hurdle and cannot rely solely on their "gee-whiz" gimcrackery as sufficient justification.



digital signature is a technology that can be used for authentication ... aka something you have from the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor

something you have
something you know
something you are



where verification of a digital signature with the public key implies possession of the corresponding private key.

this is something different from human signatures, which imply having read, understood, agrees, approves, and/or authorizes.

there are all sorts of shortcomings if you believe that digital signatures translate straight-forwardly into the same thing as human signatures. one such is the dual-use attack. a valid authentication use for digital signatures is to have a server transmit some random data (possibly as a countermeasure to replay attacks); the client digitally signs the random data (w/o having read the random data) and returns the digital signature. an attack (against an infrastructure that might mistakenly make a straight-forward equivalence between digital signature and human signature) is to substitute a valid contract for the random data.
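
a minimal sketch of that dual-use attack (hypothetical protocol, for illustration only):

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

client_key = Ed25519PrivateKey.generate()
client_pub = client_key.public_key()

def client_signs(challenge: bytes) -> bytes:
    # authentication protocol: sign whatever the server sent, without
    # reading it (it is supposed to be random challenge data)
    return client_key.sign(challenge)

honest_challenge = os.urandom(32)
evil_challenge = b"I agree to pay the attacker $1,000,000."

# both "challenges" produce equally valid signatures ... a signature over
# the substituted contract verifies perfectly, even tho nothing was read,
# understood, agreed, approved, or authorized
for data in (honest_challenge, evil_challenge):
    sig = client_signs(data)
    client_pub.verify(sig, data)        # raises InvalidSignature if bad -- it won't
    print("signature verifies over:", data[:40])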

one such (possibly misguided) effort to make a straight-forward equivalence between digital signatures and human signatures was the addition of the non-repudiation flag to some digital certificates in the early 90s.

in much the same way that x.509 identity digital certificates started to become significantly deprecated by the mid-90s, so did any operation that took a digital certificate non-repudiation flag as having any valid meaning. it became readily apparent that to even approach the meaning of a human signature (read, understood, agrees, approves, and/or authorizes, as well as demonstrating any sort of intent) there had to be significant additional processes in place.

In fact, there are some point-of-sale terminal designs that use a digital signature purely as an authentication mechanism but require a totally separate operation to demonstrate "intent". The simpler example is a point-of-sale terminal that uses two-factor pin-debit authentication ... and then requires a separate sequence where the consumer is asked to press the "yes" button if they agree to the transaction (to establish intent and the equivalent of a human signature: read, understood, agrees, approves, and/or authorizes). In such a scenario, the authentication is a totally separate process from the "intent" process.

we were asked to come in and help word-smith the cal. state electronic signature legislation and then later the fed. electronic signature legislation. misc. past posts about electronic signatures
https://www.garlic.com/~lynn/subpubkey.html#signature

misc. past posts mentioning non-repudiation and/or dual-use attack on digital signatures (when they have conflicting uses for both authentication and human signature)
https://www.garlic.com/~lynn/aepay7.htm#nonrep0 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep1 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep2 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep3 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep4 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep5 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aepay7.htm#nonrep6 non-repudiation, was Re: crypto flaw in secure mail standards
https://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#7 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#8 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#9 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#11 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#12 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#13 Words, Books, and Key Usage
https://www.garlic.com/~lynn/aadsm11.htm#14 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm11.htm#15 Meaning of Non-repudiation
https://www.garlic.com/~lynn/aadsm12.htm#5 NEWS: 3D-Secure and Passport
https://www.garlic.com/~lynn/aadsm12.htm#12 TOC for world bank e-security paper
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm12.htm#37 Legal entities who sign
https://www.garlic.com/~lynn/aadsm12.htm#38 Legal entities who sign
https://www.garlic.com/~lynn/aadsm12.htm#59 e-Government uses "Authority-stamp-signatures"
https://www.garlic.com/~lynn/aadsm15.htm#32 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm15.htm#33 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm15.htm#34 VS: On-line signature standards (slight addenda)
https://www.garlic.com/~lynn/aadsm15.htm#35 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm15.htm#36 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm16.htm#14 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#17 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#18 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm16.htm#23 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#3 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#5 Non-repudiation (was RE: The PAIN mnemonic)
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm19.htm#33 Digital signatures have a big problem with meaning
https://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#28 solving the wrong problem
https://www.garlic.com/~lynn/aadsm20.htm#44 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#5 Is there any future for smartcards?
https://www.garlic.com/~lynn/aadsm21.htm#13 Contactless payments and the security challenges
https://www.garlic.com/~lynn/aadsm21.htm#18 'Virtual Card' Offers Online Security Blanket
https://www.garlic.com/~lynn/aadsm21.htm#27 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm22.htm#5 long-term GPG signing key
https://www.garlic.com/~lynn/aadsm22.htm#6 long-term GPG signing key
https://www.garlic.com/~lynn/aadsm22.htm#7 long-term GPG signing key
https://www.garlic.com/~lynn/2001c.html#30 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#40 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#41 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#42 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#43 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#44 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#46 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#47 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#50 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#51 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#52 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#54 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#56 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#57 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#59 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#60 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#72 PKI and Non-repudiation practicalities

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

When *not* to sign an e-mail message?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: When *not* to sign an e-mail message?
Newsgroups: sci.crypt,comp.security.pgp.discuss
Date: Sun, 05 Mar 2006 16:45:22 -0700
Mike Amling writes
When I first read Ender's Game by Orson Card, I couldn't figure how everyone could be so sure that a series of anonymous essays published under the name "Demosthenes" were all by the same author. Now I see that including a public key in each essay and a digital signature that verifies with that public key establishes that the essays are all by the same author, no, that they're all signed by the same person, no, that they're all signed by members of the cabal who know the corresponding private key.


also, ref:
https://www.garlic.com/~lynn/2006d.html#32 When *not* to sign an e-mail message?

this is the scenario where asymmetric key cryptography is the technology (differentiated from symmetric key) in which what one key encodes, the other key decodes.

there is a business process commonly referred to as public key; one key is labeled "public" and made freely available; the other key (of the key pair) is labeled "private", kept confidential, and never divulged. at this level, there are no mystical properties related to public and private ... purely what is done in conforming to the public key business process.

there is a business process commonly referred to as digital signature, where the private key is used to encode the hash of a message. verification of the digital signature using the corresponding public key then implies

1) the message has not changed since signing
2) something you have authentication (aka the signer has access to and use of the corresponding private key)


again, no mystical properties other than what has been defined in conforming to the digital signature business process and the public key business process.
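
a minimal sketch of the digital signature business process just described (the ed25519 primitive used here hashes the message internally as part of signing):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # kept confidential, never divulged
public_key = private_key.public_key()        # made freely available

message = b"the quick brown fox"
signature = private_key.sign(message)

# 1) message unchanged and 2) something you have (access to private key)
public_key.verify(signature, message)        # passes silently

try:
    public_key.verify(signature, b"the quick brown fix")   # tampered
except InvalidSignature:
    print("verification fails: message changed since signing")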

a relying party places their faith in the something you have authentication ... to the extent that they understand the corresponding business processes and believe that they have been followed.

there is nothing implicit in the digital signing process that carries with it that the signer has read (or written), understood, agrees, approves, and/or authorizes (what has been signed).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

When *not* to sign an e-mail message?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: When *not* to sign an e-mail message?
Newsgroups: sci.crypt,comp.security.pgp.discuss
Date: Sun, 05 Mar 2006 19:49:58 -0700
"nemo_outis" writes
Handwritten and digital signatures are not equivalent


ref:
https://www.garlic.com/~lynn/2006d.html#32 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006d.html#33 When *not* to sign an e-mail message?

that is along the lines of my theme that both terms containing the word "signature" can result in semantic confusion; the belief that because both terms contain the same word, it follows that the two terms have some similarity.

misc. past posts mentioning semantic confusion arising from both terms containing the word signature:
https://www.garlic.com/~lynn/aadsm3.htm#kiss5 Common misconceptions, was Re: KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
https://www.garlic.com/~lynn/aadsm12.htm#30 Employee Certificates - Security Issues
https://www.garlic.com/~lynn/aadsm13.htm#16 A challenge
https://www.garlic.com/~lynn/aadsm15.htm#36 VS: On-line signature standards
https://www.garlic.com/~lynn/aadsm19.htm#7 JIE - Contracts in Cyberspace
https://www.garlic.com/~lynn/aadsm19.htm#24 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm19.htm#25 Digital signatures have a big problem with meaning
https://www.garlic.com/~lynn/aadsm20.htm#8 UK EU presidency aims for Europe-wide biometric ID card
https://www.garlic.com/~lynn/aadsm20.htm#44 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#13 Contactless payments and the security challenges
https://www.garlic.com/~lynn/aadsm21.htm#24 Broken SSL domain name trust model
https://www.garlic.com/~lynn/2003k.html#6 Security models
https://www.garlic.com/~lynn/2004i.html#27 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005f.html#20 Some questions on smart cards (Software licensing using smart cards)
https://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
https://www.garlic.com/~lynn/2005n.html#51 IPSEC and user vs machine authentication
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2005q.html#4 winscape?
https://www.garlic.com/~lynn/2005r.html#54 NEW USA FFIES Guidance
https://www.garlic.com/~lynn/2005v.html#3 ABN Tape - Found

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Fw: Tax chooses dead language - Austalia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fw: Tax chooses dead language - Austalia
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 06 Mar 2006 13:04:52 -0700
chrismason@ibm-main.lst (Chris Mason) writes
Something else that came to mind was a comparison of the text markup "languages" GML and SCRIPT since GML is created from SCRIPT using the SCRIPT macro function - if my memory serves me well.


stu did the original script for cms at the science center... using runoff-like dot-commands
https://www.garlic.com/~lynn/subtopic.html#545tech

one of the earliest documents (besides the cp67/cms documentation) moved to script processing was the principles of operation. script conditionals were used to maintain a single copy for two different versions. the "full" version was referred to as the "architecture red book" (distributed in red 3-ring binders). the subset version was the principles of operation and didn't contain all the architecture notes, engineering notes, notes justifying the instruction, etc. you used a conditional on the cms script command line to control which version was produced. I have some vague recollection of some POPs being printed off a 1403 master ... where the diagram and other box vertical lines weren't continuous (you could get a 1403 TN train that was capable of producing solid vertical lines).

then "G", "M", & "L" invented gml at the science center (gml selected because it was their initials ... then had to come up with "Generalized Markup Language" to go along with their initials). gml processing was then added to the cms script command.
https://www.garlic.com/~lynn/submain.html#sgml

later gml was standardized as sgml ... and waterloo did an enhanced version of the cms script command. there is a story about how html was created from stuff done with the waterloo-enhanced script command.
http://infomesh.net/html/history/early/

at cern ... cern and slac were sister organizations and big vm shops ... there is the old story, circa 1974, about the cern tso/cms bakeoff report at share ... internally, copies were classified confidential - restricted, available on a need-to-know basis only (attempting to restrict information internally on how bad the tso comparison was).

slac put up the first webserver in the us.
http://www.slac.stanford.edu/history/earlyweb/history.shtml

When *not* to sign an e-mail message?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: When *not* to sign an e-mail message?
Newsgroups: sci.crypt,comp.security.pgp.discuss
Date: Mon, 06 Mar 2006 14:31:50 -0700
Andrew Swallow writes
You continue to require digital signatures to supply a level of security millions of times higher than handwritten signatures. When we point this out you discard this information and reply with insults. You then cheat and choose the weakest form of signing on the grounds that it is stronger.


it turns out that the issue of digital signatures with integrity and authentication is different from the issue of human signatures with intent. one might be tempted to use the example of comparing the color orange in oranges with the color red in apples (except those have a closer relationship, being both colors and both involving fruit).

in any case, there was the whole attempt to try and establish the non-repudiation flag in digital certificate standards ... which eventually fell into much disrepute, in large part because the issue of integrity and authentication (associated with digital signatures) is unrelated to intent (associated with human signatures).

misc. past postings about signatures
https://www.garlic.com/~lynn/subpubkey.html#signature

previous pieces of thread:
https://www.garlic.com/~lynn/2006d.html#32 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006d.html#33 When *not* to sign an e-mail message?
https://www.garlic.com/~lynn/2006d.html#34 When *not* to sign an e-mail message?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

transputers again was Re: The demise of Commodore

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: transputers again was Re: The demise of Commodore
Newsgroups: alt.folklore.computers
Date: Mon, 06 Mar 2006 15:00:31 -0700
KR Williams writes
Yes indeed they were. When I worked in the 3090/ES9000 crypto hardware development group we were told (repeatedly and in no uncertain terms) that one could go to *jail* for violating these laws, either way and that they were watching us. We had to have an export license for every unit sold and any supporting documentation. I didn't so much as ship the bill-of-materials for a battery pack outside the country. It gave the marketing types something to do. ;-) ...and no, I didn't download anything from outside the country regarding crypto. The black-hats were serious, oblivious to the fact that the horse was long gone.


when we were doing x9.59
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

it didn't escape our notice that when we replaced hiding/crypto of account numbers with authentication ... we also side-stepped a lot of the munition laws (not only in the us, but around the world).

the primary motivation was to change the paradigm so that hiding/crypto of account numbers wasn't necessary to satisfy the requirement (given the x9a10 financial standards working group) to preserve the integrity of the financial infrastructure for all retail payments. This was in large part a recognition of the large number of different business processes where account numbers are required, and therefore a hiding strategy becomes impractical for stemming leakage of account numbers. changing from hiding/crypto to authentication had nothing to do with preventing the leakage of account numbers; it was to change the paradigm, eliminating account number leakage as a vulnerability/threat (recognizing that account number leakage was inevitable).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Fw: Tax chooses dead language - Austalia

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fw: Tax chooses dead language - Austalia
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 06 Mar 2006 17:53:05 -0700
Gerard Schildberger wrote:

Where can one find more information on the TSO/CMS bakeoff report (also known as the CMS/TSO bakeoff)? Is there a copy of it floating around in cyberspace? -- Gerard S.


ref:
https://www.garlic.com/~lynn/2006d.html#35 Tax chooses dead language - Austalia

i did a quick look ... i thought i might find it along with a copy of the 1979 SHARE LSRAD report ... but so far, no luck.

however, from the same era (circa 1974 cern cms/tso comparison), i did stumble across the hand-out and, for some unknown reason, several transparencies for "Early VS2 Release 2 Users' Experience", presented by Jeffry A. Alperin, Aetna Life and Casualty, Hartford, Conn., given at Guide 39, Anaheim, Cal., session no. ops-6 (thursday, nov. 7, 1974 - 8:30 am).

transputers again was Re: The demise of Commodore

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: transputers again was Re: The demise of Commodore
Newsgroups: alt.folklore.computers
Date: Mon, 06 Mar 2006 23:37:40 -0700
Brian Inglis writes
Doesn't the government use escrow internally because of some problems, or did this go away too?


the key escrow effort latched onto a similar data-availability theme that backup vendors use ... the most valuable corporate assets can be information, and the loss of a key can be equivalent to losing a disk (that hasn't been backed up). confidentiality of corporate data was potentially positioned at odds with availability of corporate data.

there was some study from the early 90s that 50 percent of the businesses that had a failed disk containing non-backed-up critical corporate data filed for bankruptcy within 30 days. trivial examples were small businesses that lost customer billing files.

backing up keys then became a business continuity and information availability issue akin to backing up any other corporate asset.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

transputers again was Re: The demise of Commodore

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: transputers again was Re: The demise of Commodore
Newsgroups: alt.folklore.computers
Date: Tue, 07 Mar 2006 10:25:08 -0700
"Dennis Ritchie" writes
Lynn mentioned that internal schemes for spreading out keys for safety purposes is reasonable (inside govt or elsewhere), but this is different from sending them to the govt.

key escrow envisioned backed (certified) repositories that archived/backed-up keys ... and that could be served with federal court warrants. commercial companies could view them as protecting corporate assets; the gov. could view them as a convenient location for serving court orders.

... example from some old email somewhere


Commercial Key Escrow:

Something for Everyone
Now and for the Future

Stephen T. Walker
Steven B. Lipner
Carl M. Ellison
Dennis K. Branstad
David M. Balenson

Trusted Information Systems, Inc.

January 3, 1995



Summary

A tension has been growing for the past twenty years between the interests of the public to protect its sensitive information and the interests of governments to access the information of their adversaries. The Clipper Key Escrow program, introduced by the U.S. Government in 1993, was an attempt to overcome this tension by giving the public good cryptography while retaining for law enforcement the ability to decrypt communications when authorized. But Clipper has many problems that make it unattractive to the public.

The basic concepts of key escrow are very attractive to individuals and organizations who fear the consequences of losing their encryption keys. A key escrow system that satisfies the concerns of individuals and corporations and also meets governments' interests could help resolve this growing national tension.

This paper reviews the reasons for this tension and the evolution of software key escrow systems. It then examines the variety of alternative key escrow systems and describes why the government must take urgent steps to promote commercial key escrow before serious and permanent harm is done to the government's law enforcement and national security interests.



... snip ...

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Caller ID "spoofing"

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Caller ID "spoofing"
Newsgroups: alt.folklore.computers
Date: Tue, 07 Mar 2006 14:32:32 -0700
stanb45@dial.pipex.com (Stan Barr) writes:
The problem I have with PINs etc. is that I have so many of them. 5 cards, 3 door-lock numbers and about a dozen passwords to remember and like Barb my brain don't work so good any more. I _have_ to write them down and carry them with me. I encrypt them with a simple arithmetic system to foil any unauthorised person who acquires my wallet so I end up standing in the shop trying to do mental arithmetic - not my strong point! ... long post warning ...

there is a general class of authentication implementations that basically involve shared-secrets (what's on file is the same as what is used to prove something). early on in the electronic age, you might have had only one or two shared-secrets. since the same value was used to both provide proof and validate the information ... there were cross-domain vulnerabilities. security then tended to require unique shared-secrets in different security domains as a countermeasure (you wouldn't use the same password with the local garage ISP as you did for home banking).
https://www.garlic.com/~lynn/subintegrity.html#secrets
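
purely as illustration (all names and values hypothetical, not from any real implementation), a minimal sketch of why the cross-domain exposure follows directly from the stored value and the proving value being the same thing:

# shared-secret sketch: the value on file at the verifier is the
# very same value the user presents as proof
users_isp  = {"stan": "hunter2"}   # low-security ISP password file
users_bank = {"stan": "hunter2"}   # same secret reused at the bank

def authenticate(domain_db, user, presented):
    # anybody who can read the file can also impersonate the user
    return domain_db.get(user) == presented

# an attacker who dumps the ISP's file now authenticates at the bank
stolen = users_isp["stan"]
assert authenticate(users_bank, "stan", stolen)

note that compromise of the low-security domain is sufficient to impersonate in the high-security domain ... which is the cross-domain vulnerability (and why unique shared-secrets per domain became the countermeasure).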

the security departments then wanted hard-to-guess passwords, which frequently also turned out to be impossible to remember. that, coupled with the number of unique security domains growing to scores, overloaded normal human memory capability (leading to a new common vulnerability ... people having to record their scores of pins and passwords).

related to this is a newer class of skimming/eavesdropping vulnerabilities for authentication mechanisms involving "static" data (which also tends to be shared-secrets).

from the 3-factor authentication model
https://www.garlic.com/~lynn/subintegrity.html#3factor

something you have
something you know
something you are



... all of the above have (recently) tended to be deployed using static data. something you have is supposed to be unique ... payment cards are an example; electronic proof of being in possession of such a card is reading the magstripe information. the magstripe information is static data and is subject to skimming/eavesdropping exploits (using the information to produce a counterfeit card).

as mentioned in earlier post,
https://www.garlic.com/~lynn/2006d.html#31 Caller ID "spoofing"

there is usually an implicit assumption associated with multi-factor authentication deployments ... that the different mechanisms are vulnerable to different threats, i.e. a something you know pin/password is frequently considered a countermeasure to a lost/stolen something you have card.

however, then along comes skimming/eavesdropping, which represents a common threat/exploit for all forms of authentication that involve static data ... regardless of the number of different factors involved.
https://www.garlic.com/~lynn/subintegrity.html#harvest

various one-time-password schemes have been put forward as countermeasures to skimming/eavesdropping exploits. rfc2289 is one such one-time-password scheme that was supposed to be resistant both to eavesdropping/skimming and to mitm-attacks.
https://www.garlic.com/~lynn/subintegrity.html#mitmattack
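
a much simplified sketch of the hash-chain idea underlying rfc2289 (the rfc itself specifies 64-bit folded digests and particular encodings; the hash choice and counts here are just illustrative):

import hashlib

def otp_chain(seed: bytes, n: int) -> list:
    # iterate the hash n times; the server stores only the final value
    chain, h = [], seed
    for _ in range(n):
        h = hashlib.sha1(h).digest()
        chain.append(h)
    return chain

chain = otp_chain(b"secret-passphrase+seed", 1000)
stored = chain[-1]            # server keeps hash^1000(seed)

# the user reveals the chain in reverse: presents hash^999, the server
# hashes it once and compares against what it has on file
candidate = chain[-2]
assert hashlib.sha1(candidate).digest() == stored
stored = candidate            # next time the server expects hash^998

an eavesdropper who records one password learns nothing useful about the next one ... recovering it would require inverting the hash; and the server's recorded value can't itself be replayed as a password, which is the (non-shared-secret) eavesdropping/skimming resistance mentioned above.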

frequently, one-time-password implementations result in a specific recorded value at an institution. these can then violate unique authentication per security domain (a countermeasure to things like cross-domain attacks; say a local low-security ISP and a high-security online banking site; sometimes this may also be classified as escalation of privileges ... attacking a low-security environment and using that to attack a higher-security environment). in any case, 2289 has a variation where the same value works in multiple different security domains.

from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm
https://www.garlic.com/~lynn/rfcidx7.htm#2289

summary from above:

2289 S
A One-Time Password System, Haller N., Metz C., Nesser P., Straw M., 1998/02/26 (25pp) (.txt=56495) (STD-61) (Obsoletes 1938) (Refs 1320, 1321, 1704, 1760, 1825, 1826, 1827) (Ref'ed By 2444, 2808, 3552, 3631, 3748, 3888) (ONE-PASS)


as always in the rfc summaries, clicking on the ".txt=nnnn" field retrieves the actual RFC.

following are some posts describing a MITM-attack on 2289
https://www.garlic.com/~lynn/2003m.html#50 public key vs passwd authentication?
https://www.garlic.com/~lynn/2003n.html#0 public key vs passwd authentication?
https://www.garlic.com/~lynn/2005f.html#40 OTP (One-Time Pad Generator Program) and MD5 signature
https://www.garlic.com/~lynn/2005l.html#8 derive key from password
https://www.garlic.com/~lynn/2005o.html#0 The Chinese MD5 attack
https://www.garlic.com/~lynn/2005t.html#28 RSA SecurID product
https://www.garlic.com/~lynn/2005t.html#31 Looking for Information on password systems
https://www.garlic.com/~lynn/aadsm20.htm#24 [Clips] Escaping Password Purgatory

so along come smartcards and other forms of dynamic data. one such dynamic data paradigm is digital signatures and public key, which relies on asymmetric key cryptography technology: what one key encodes, the other key decodes (as opposed to symmetric key cryptography, where the same key does both).

a business process called public key is defined, where one key (of an asymmetric key cryptography key pair) is published publicly and labeled the "public key". the other key is labeled the "private key", kept confidential and never divulged.

a further business process called digital signature is defined, where a hash of a message/document is encoded with the private key. the relying party has a copy of the corresponding public key and is able to decode the digital signature. they then recalculate the hash of the original message/document and compare the two hashes. if they are equal, then the relying party can assume:

1) the original message/document has not been modified since the digital signature was calculated
2) there is something you have authentication (i.e. the entity performing the digital signature had access to and use of the specific private key).
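
a minimal sketch of that sign/verify flow (assuming python's third-party "cryptography" package is available for the asymmetric key operations; the recalculate-hash-and-compare step is folded into the library's verify call):

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# local (or on-chip) key generation; the private key is never divulged
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()   # this half is published

message = b"transfer $100 to account 42"

# originator: hash the message and encode the hash with the private key
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# relying party: recompute the hash and check it against the decoded
# signature ... verify() raises InvalidSignature on any mismatch
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("message unmodified; signer had use of the private key")
except InvalidSignature:
    print("reject: modified message or wrong key")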

the digital signature business process has an advantage over shared-secret and static data paradigms ... since

1) any recorded public key is only used to verify the digital signature (not originate it, and therefore knowing the public key is not sufficient to impersonate),
2) the digital signature can be different and dynamic for every authentication operation.

since the public key can't be used to impersonate, recording the same public key in multiple security domains doesn't create a cross-domain attack vulnerability (as in shared-secret and static data authentication paradigms).

however, the most basic tenet of public key operation comes down to the protection afforded the private key. one of the better protection mechanisms is a hardware chip token that does on-chip key generation and never divulges the private key (even to the token owner). this essentially creates an equivalence between the something you have private key and the something you have hardware token. as a countermeasure for a lost/stolen token, it may be necessary to enter a pin for correct token operation (this becomes a secret something you know, as opposed to a shared-secret, and a relying party might assume two-factor authentication, given certification as to how the token operates).

there is some current institutional preference for issuing such hardware tokens. if such two-factor authentication catches on, that approach has the potential of requiring individuals to carry around scores of different tokens ... becoming as cumbersome as having to deal with scores of different passwords.

there is some work with aads on enabling a change to a person-centric model (from a strictly institutional-centric model)
https://www.garlic.com/~lynn/x959.html#aads

misc. past posts about institutional-centric models vis-a-vis person-centric models for authentication
https://www.garlic.com/~lynn/aadsm12.htm#0 maximize best case, worst case, or average case? (TCPA)
https://www.garlic.com/~lynn/aadsm19.htm#14 To live in interesting times - open Identity systems
https://www.garlic.com/~lynn/aadsm19.htm#41 massive data theft at MasterCard processor
https://www.garlic.com/~lynn/aadsm19.htm#47 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#41 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm22.htm#12 thoughts on one time pads
https://www.garlic.com/~lynn/2003e.html#22 MP cost effectiveness
https://www.garlic.com/~lynn/2003e.html#31 MP cost effectiveness
https://www.garlic.com/~lynn/2004e.html#8 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2005g.html#47 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005g.html#57 Security via hardware?
https://www.garlic.com/~lynn/2005m.html#37 public key authentication
https://www.garlic.com/~lynn/2005p.html#6 Innovative password security
https://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
https://www.garlic.com/~lynn/2005t.html#28 RSA SecurID product
https://www.garlic.com/~lynn/2005u.html#26 RSA SecurID product

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

