List of Archived Posts

2008 Newsgroup Postings (05/17 - 06/23)

Has anyone got a rule of thumb for calculating data center sizing
Do you believe Information Security Risk Assessment has shortcomings like
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
A Merit based system of reward - Does anybody (or any executive) really want to be judged on merit?
Microsoft versus Digital Equipment Corporation
Removing the Big Kernel Lock
Annoying Processor Pricing
pro- foreign key propaganda?
Obfuscation was: Definition of file spec in commands
Different Implementations of VLIW
Different Implementations of VLIW
pro- foreign key propaganda?
pro- foreign key propaganda?
DASD or TAPE attached via TCP/IP
DASD or TAPE attached via TCP/IP
should I encrypt over a private network?
Does anyone have any IT data center disaster stories?
Microsoft versus Digital Equipment Corporation
American Airlines
Microsoft versus Digital Equipment Corporation
Worst Security Threats?
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Credit Card Fraud
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Scalable Nonblocking Data Structures
What is your definition of "Information"?
subprime write-down sweepstakes
Mastering the Dynamics of Innovation
A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
Mainframe Project management
American Airlines
American Airlines
A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
American Airlines
American Airlines
American Airlines
A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
American Airlines
Security Breaches
IT Security Statistics
Are multicore processors driving application developers to explore multithreaded programming options?
ARPANet architect: bring "fairness" to traffic management
Definition of file spec in commands
Seeking (former) Adventurers
Anyone know of some good internet Listserv's?
Can I ask you to list the HPC/SC (i.e. the High performance computers) which are dedicated to a problem?
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Microsoft versus Digital Equipment Corporation
Digital cash is the future?
Trusted (mainframe) online transactions
Is data classification the right approach to pursue a risk based information security program?
The Price Of Oil --- going beyond US$130 a barrel
Microsoft versus Digital Equipment Corporation
I am trying to find out how CPU burst time is calculated based on which CPU scheduling algorithms are created?
Microsoft versus Digital Equipment Corporation
Threat assessment Versus Risk assessment
Could you please name sources of information you trust on RFID and/or other Wireless technologies?
Ransomware
DB2 25 anniversary
DB2 25 anniversary: Birth Of An Accidental Empire
Is the credit crunch a short term aberration
How do you manage your value statement?
How do you manage your value statement?
Do you have other examples of how people evade taking resp. for risk
EXCP access method
EXCP access method
Next Generation Security
The End of Privacy?
Outsourcing dilemma or debacle, you decide
Should The CEO Have the Lowest Pay In Senior Management?
Should The CEO Have the Lowest Pay In Senior Management?
Outsourcing dilemma or debacle, you decide
Security Awareness
Do you think the change in bankruptcy laws has exacerbated the problems in the housing market leading more people into foreclosure?
Hypothesis #4 -- The First Requirement of Security is Usability
OS X Finder windows vs terminal window weirdness
Certificate Purpose
Selling Security using Prospect Theory. Or not
parallel computing book
Certificate Purpose
Stephen Morse: Father of the 8086 Processor
Which of the latest browsers do you prefer and why?
Own a piece of the crypto wars
Historical copy of PGP 5.0i for sale -- reminder of the war we lost
squirrels
Technologists on signatures: looking in the wrong place
Certificate Purpose
Certificate Purpose
Certificate Purpose
Certificate Purpose
Lynn - You keep using the term "we" - who is "we"?
Accidentally Deleted or Overwrote Files?
A Blast from the Past
We're losing the battle
dollar coins
We're losing the battle
OS X Finder windows vs terminal window weirdness
We're losing the battle
OS X Finder windows vs terminal window weirdness
OS X Finder windows vs terminal window weirdness
dollar coins

Has anyone got a rule of thumb for calculating data center sizing

Refed: **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 17, 2008
Subject: Has anyone got a rule of thumb for calculating data center sizing.
Blog: Computer Networking
When we were doing our ha/cmp product ... we had coined the terms disaster survivability and geographic survivability ... lots of old posts
https://www.garlic.com/~lynn/subtopic.html#hacmp
... and also worked ha/cmp cluster scale-up, misc. old email
https://www.garlic.com/~lynn/lhwemail.html#medusa

as hardware and software became more reliable, the major remaining failure/outage causes were becoming environmental ... which required countermeasures like geographic separation. lots of old posts specifically related to continuous availability
https://www.garlic.com/~lynn/submain.html#available

part of ha/cmp scale-up was physically packaging more computing into a smaller footprint. Recent answer discussing the BLADE/GRID theme of increasing amounts of computing in smaller & smaller footprints.
http://www.linkedin.com/answers/technology/information-technology/information-storage/TCH_ITS_IST/217659-23436977

Some datacenters can run to multiple billions of dollars in a single location ... but if there is significant business dependency on dataprocessing availability ... the trend is to separate the operation into multiple different locations.

For old historical reference there was a 1970 datacenter that was characterized as being a $2.5billion "windfall" for IBM (in 1970 dollars).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Do you believe Information Security Risk Assessment has shortcomings like

Refed: **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 17, 2008
Subject: Do you believe Information Security Risk Assessment has shortcomings like
Blog: Information Security
one of the things that we worked on was what we called parameterised risk management. it basically did things like in-depth threat and vulnerability analysis and kept individual threat/vulnerability profiles for the individual components. parameterised risk management included the concept that threats/vulnerabilities could change over time ... i.e. as technology advances are made, the threat/vulnerability of specific components can change.

one of the things that parameterised risk management allowed for was a large variety of different technologies in use across the infrastructure ... and the possibility that the integrity of any specific component can be affected in real time (in order to support real-time changes, the original threat/vulnerability and integrity characteristics/profile have to be maintained so that changes can be mapped in real time, and carry more semantic meaning as opposed to being purely numeric) ... which, in turn, might require real-time changes in operations (possibly additional compensating procedures).
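
purely as illustration (the names, fields, and thresholds below are invented, not from the aads work), keeping per-component threat/vulnerability profiles that can be re-assessed in real time might look something like:

/* hypothetical sketch -- not the actual parameterised/aads implementation */
#include <stdio.h>
#include <time.h>

enum integrity { INTACT, DEGRADED, COMPROMISED };

struct component {
    const char *name;          /* e.g. a specific crypto or platform component */
    enum integrity integrity;  /* current integrity characteristic */
    int threat_level;          /* 0..10, re-assessed as technology changes */
    time_t assessed;           /* when the profile was last updated */
};

/* a real-time change to one component's profile may require
   compensating procedures elsewhere in the operation */
static void reassess(struct component *c, enum integrity i, int threat)
{
    c->integrity = i;
    c->threat_level = threat;
    c->assessed = time(NULL);
    if (c->integrity != INTACT || c->threat_level > 7)
        printf("compensating procedures required for %s\n", c->name);
}

int main(void)
{
    struct component enc = { "40-bit session encryption", INTACT, 3, 0 };
    /* a technology advance (cheap brute force) changes the threat in real time */
    reassess(&enc, DEGRADED, 9);
    return 0;
}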

parameterised risk management was some of the work included in the aads patent portfolio referenced here
https://www.garlic.com/~lynn/x959.html#aads

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 17 May 2008 19:19:43 -0400
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
Do you have a cite on how these are doing it? My students seem to regard having to read the Alewife paper every semester as a punishment inflicted on them by an evil antique professor. Having a more recent, clearly described, directory-based protocol would be good.

recent posts with SCI reference:
https://www.garlic.com/~lynn/2008e.html#24 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation

The SCI cache consistency directory mechanism was written up in the standard ... from approx. the same period. Standard SCI was 64-way; convex used it with 64 two-processor pa-risc boards (exemplar) and sequent used it with 64 four-processor intel boards (numa-q).
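
a rough sketch of the pointer-based (distributed, doubly linked sharing-list) directory idea -- field names and layout here are illustrative, not the actual SCI encodings:

/* illustrative sketch of an SCI-style distributed sharing list;
   field names/widths are not the actual SCI encodings */
#define NIL (-1)

struct mem_dir_entry {        /* one per memory line, held at the home memory */
    int head;                 /* node id of the head of the sharing list */
};

struct cache_line {           /* one per cached copy, held at each sharing node */
    int forw;                 /* next node in the sharing list (toward tail) */
    int back;                 /* previous node (toward head/memory) */
    int state;                /* ONLY/HEAD/MID/TAIL-style list-position state (illustrative) */
};

/* invalidation (e.g. before a store) walks the list node by node --
   which is why latency grows linearly with the number of sharers */
void purge_sharing_list(struct mem_dir_entry *dir, struct cache_line lines[])
{
    int node = dir->head;
    while (node != NIL) {
        int next = lines[node].forw;   /* remember the successor */
        lines[node].forw = NIL;        /* invalidate this cached copy */
        lines[node].back = NIL;
        node = next;                   /* one network transaction per sharer */
    }
    dir->head = NIL;                   /* memory is again the only valid copy */
}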

SCI reference:
http://www.scizzl.com/Perspectives.html

above mentions SGI using SCI w/o mentioning SCI

Silicon Graphics Makes the Arguments for Using SCI!
http://www.scizzl.com/SGIarguesForSCI.html

for other drift, acm article from '97 ...

A Hierarchical Memory Directory Scheme Via Extending SCI for Large-Scale Multiprocessors
http://portal.acm.org/citation.cfm?id=523549.822844

abstract for above:
SCI (Scalable Coherent Interface) is a pointer-based coherent directory scheme for large-scale multiprocessors. Large message latency is one of the problems with SCI because of its linked list structure: the searching latency can grow as a linear order of the number of processors. In this paper, we focus on a hierarchical architecture to propose a new scheme - EST(Extending SCI-Tree), which may reduce the message traffic and also take the advantages of the topology property. Simulation results show that the EST scheme is effective in reducing message latency and communication cost when compared with other schemes.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sun, 18 May 2008 00:16:53 -0400
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
Sure -- and Alewife and DASH are the two really well described directory-based systems. What's really nice about them is that the papers take exactly backwards approaches to describing them: Alewife just gives the messages and state transitions and leaves it up to the reader to figure out the sequence of events, while DASH gives the sequence really well and completely omits exactly what the states actually are. I like to assign them both. Unfortunately, when they read the specs on the hardware implementations (why does this seem to be required for any paper about a real system?) they tend to start snickering and talking about vacuum tubes. A clear description of a system in current use would be really nice.

re:
https://www.garlic.com/~lynn/2008e.html#24 Berkeley researcher describes parallel path
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation

alewife and dash were university research projects ... SCI was passed as a standard, with an extensive write-up of the protocol in the standard itself, and was used by several vendors in shipped computer products.

i've commented before that at one point, we were asked if we would be interested in heading up an effort to commercialize (SUN's object-oriented) SPRING operating system, turning it out as a product (this was in the era when object-oriented was all the rage ... apple was doing PINK). It would have been done in conjunction with a product that had possibly thousands of sparc processors interconnected with SCI.

I've got an early writeup of the SCI directory cache consistency protocol from before the standard was passed ... and was just doing some poking around on the SCI website for more information ...


http://www.scizzl.com/

also from above:
At long last the IEC/IEEE publications deadlock has been resolved, and the corrected SCI spec has been published by the IEC as "ISO/IEC Standard 13961 IEEE". Unfortunately, the updated diskette was not incorporated. However, the updated C code is online, at sciCode.c text file (1114K). This release does not have the separate index file that shipped on the original diskette, because with the passage of time we lost the right of access to the particular software that generated that index. (People change employers.)

Unfortunately, the IEEE completely bungled the update, reprinting the old uncorrected standard with a new cover and a few cosmetic changes. Until this has been corrected, the IEEE spec should be avoided.


... snip ...

i've commented before that in that period a lot of hippi standard work was backed by LANL, fcs standard work was backed by LLNL, and SCI work came out of SLAC ... all furthering/contributing to commoditizing various aspects of high-performance computing.

the scizzl.com web site also lists SCI book
https://www.amazon.com/exec/obidos/ASIN/3540666966/qid=956090056/sr=1-1/103-0276094-5848643

from above:
Scalable Coherent Interface (SCI) is an innovative interconnect standard (ANSI/IEEE Std 1596-1992) addressing the high-performance computing and networking domain. This book describes in depth one specific application of SCI: its use as a high-speed interconnection network (often called a system area network, SAN) for compute clusters built from commodity workstation nodes. The editors and authors, coming from both academia and industry, have been instrumental in the SCI standardization process, the development and deployment of SCI adapter cards, switches, fully integrated clusters, and software systems, and are closely involved in various research projects on this important interconnect. This thoroughly cross-reviewed state-of-the-art survey covers the complete hardware/software spectrum of SCI clusters, from the major concepts of SCI, through SCI hardware, networking, and low-level software issues, various programming models and environments, up to tools and application experiences.

... snip ...

besides sci defining directory protocol for memory/cache consistency, it also defined a number of other uses.

Comparison of ATM, FibreChannel, HIPPI, Serialbus, SerialPlus SCI/LAMP
http://www.scizzl.com/SCIvsEtc.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

A Merit based system of reward - Does anybody (or any executive) really want to be judged on merit?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 18, 2008
Subject: A Merit based system of reward - Does anybody (or any executive) really want to be judged on merit?
Blog: Organizational Development
this business school article mentions that there are about 1000 CEOs that are responsible for about 80% of the current financial mess (and it would go a long way to fixing the mess if the gov. could figure out how to have them lose their jobs)
http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933 (gone 404 and/or requires registration)

while this article points out there was $137 billion in bonuses paid out in the period that the current financial mess was being created (in large part to those creating the current financial mess)
http://www.businessweek.com/#missing-article

For slight drift ... the current financial mess heavily involved toxic CDOs which were also used two decades ago during the S&L crisis to hide the underlying value ... and I've used the analogy about toxic CDOs being used to obfuscate the "observe" in Boyd's OODA-loop.

This article includes mention of SECDEF recently honoring Boyd (to the horror of the air force)
http://www.time.com/time/nation/article/0,8599,1733747,00.htm

Now one of the things that Boyd used to tell young officers was that they had to choose between doing and being. Being led to all sorts of rewards and positions of honor ... while if you were effective at doing ... frequently the reward would be a kick in the stomach (a little cleaned up in the following):
http://www.d-n-i.net/dni/john-r-boyd/to-be-or-to-do/

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sun, 18 May 2008 22:16:15 -0700 (PDT)
On May 18, 10:09 am, Quadibloc <jsav...@ecn.ab.ca> wrote:
Bad news, though; although the site didn't say so, downloading the Tving thesis seems to require a password - and the link to register to get E-mails, which might also serve to establish such a password, is broken.

The main page explains that due to the tragic passing of a researcher, and more mundane matters such as people changing jobs, the site has some limitations, but apparently there is a state of desuetude beyond those limitations at present.

John Savard


the standard should also be available from the ieee site ... and various vendors who have shipped products may have stuff ... although the products date back a decade or more ... convex used 128 pa-risc chips with sci in the exemplar ... but hp bought convex and it is no longer around; sequent used 256 intel chips with sci in the numa-q ... and ibm bought sequent ... data general (which is also long gone) had a 256 intel chip processor with sci (emc seems to have bought up data general for the disk array products). the sgi name is still around.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Removing the Big Kernel Lock

From: lynn@garlic.com
Subject: Removing the Big Kernel Lock
Newsgroups: alt.folklore.computers
Date: Mon, 19 May 2008 07:00:37 -0700 (PDT)
Removing the Big Kernel Lock
http://tech.slashdot.org/tech/08/05/17/1446219.shtml

from above:
"There is a big discussion going on over removing a bit of non-preemptable code from the Linux kernel. 'As some of the latency junkies on lkml already know, commit 8e3e076 in v2.6.26-rc2 removed the preemptable BKL feature and made the Big Kernel Lock a spinlock and thus turned it into non-preemptable code again. "This commit returned the BKL code to the 2.6.7 state of affairs in essence," began Ingo Molnar. He noted that this had a very negative effect on the real time kernel efforts, adding that Linux creator Linus Torvalds indicated the only acceptable way forward was to completely remove the BKL.'"

... snip ...

when charlie was doing fine grain locking work on cp67 smp support at the science center in the late 60s and early 70s
https://www.garlic.com/~lynn/subtopic.html#545tech

one of the things he invented was the compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

BKL or global system/kernel lock was (really) the state of the art at that time.
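
as a small illustration (C11 atomics standing in for the 370 CS instruction), compare-and-swap is the primitive that lets a kernel replace one global lock with many fine-grained ones:

/* minimal compare-and-swap spin lock sketch (C11 atomics);
   the 370 compare&swap (CS) instruction provides the same primitive in hardware */
#include <stdatomic.h>

typedef struct { atomic_int owner; } spinlock_t;   /* 0 = free */

static void lock(spinlock_t *l, int cpu)
{
    int expected = 0;
    /* atomically: if owner==0 then owner=cpu; otherwise retry */
    while (!atomic_compare_exchange_weak(&l->owner, &expected, cpu))
        expected = 0;          /* the failed CAS wrote the current owner here; reset */
}

static void unlock(spinlock_t *l)
{
    atomic_store(&l->owner, 0);
}

/* with one such lock per data structure (fine-grain locking) instead of a
   single big kernel lock, processors no longer serialize on unrelated kernel work */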

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Annoying Processor Pricing

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Annoying Processor Pricing
Newsgroups: alt.folklore.computers,comp.arch
Date: Tue, 20 May 2008 12:52:06 -0400
"Sarr J. Blumson" <sarr.blumson@alum.dartmouth.org> writes:
The mistaken assumption that price is somehow connected to cost is one we engineers often fall into. :-) Here's a story that I know is true because I was there:

To convert a GE 255 system into a more expensive GE 265, you replaced the OS with a version that didn't have the delay loop to reduce apparent CPU performance.

The worst part is that GE actually made a slower CPU (the 225 vs the 235) but they were slightly different and we (Dartmouth) had removed all the code that supported the 225.


i've mentioned before, this was a big problem in the transition to "high-speed" internet.

telcos had large fixed infrastructure, staff, and costs ... with the costs being recovered by usage charges. deployment of large amounts of (dark) fiber in the early 80s ... significantly increased the capacity ... but there was a huge chicken/egg situation.

huge increases in usage wouldn't come w/o huge decreases in usage charges. huge increases in usage also wouldn't come w/o a whole new generation of bandwidth hungry applications ... but those wouldn't be invented without demand, and there wouldn't be demand w/o huge usage charge decreases.

just dropping the usage charges ... would still take maybe a decade for the demand to evolve along with new generation of bandwidth hungry applications (a decade where infrastructure might otherwise operate at enormous losses ... because of relatively fixed costs).

one of the scenarios was the educational/nsf infrastructure: provide significant resources for "sandbox" operation, with limitations so that the contributed resources didn't usurp standard commercial operation. this would provide an "incubator" environment for development/evolution of bandwidth hungry applications w/o any significant impact on regular commercial revenue.

most of the links in that period were 56kbit ... but we were running T1 and higher speed links internally in the hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

recent hsdt topic drift posts mentioning encryption requirement:
https://www.garlic.com/~lynn/2008.html#79 Rotary phones
https://www.garlic.com/~lynn/2008e.html#6 1998 vs. 2008: A Tech Retrospective
https://www.garlic.com/~lynn/2008h.html#87 New test attempt

and we feel that strongly influenced the nsfnet backbone rfp to specify t1 ... some old email
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

however, eventually we weren't allowed to bid on nsfnet ... the director of nsf thought that writing a letter to the company would help (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO, including statements that what we already had running was at least five yrs ahead of all nsfnet backbone bids) ... but that just aggravated the internal politics. misc. past nsfnet posts
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

the winning bid actually put in 440kbit links (not t1) ... but somewhat to meet the letter of the bid ... they put in T1 trunks and used telco-type multiplexors to operate multiple 440kbit links over the T1 trunks (we made some disparaging remarks that the T1 trunks may have, in turn, actually been multiplexed at some point over T5 trunks ... in which case they could make claims about the nsfnet backbone being a T5 network???).

however, we've also commented that possibly resources 4-5 times the amount of the nsfnet backbone bid were actually used (effectively improving the incubator atmosphere encouraging the development and use of bandwidth hungry applications ... w/o impacting existing commercial usage-based revenue).

there were even more significant resources contributed to many of the local networks (that were interconnected by the nsfnet backbone) as well as to the bitnet & earn academic networks
https://www.garlic.com/~lynn/subnetwork.html#bitnet

a corresponding (processor specific) charging issue in the 60s was the cpu-meter ... most of the machines were leased/rented and charged based on usage.

this impacted the migration to 7x24 commercial time-sharing use in the virtual machine based offerings (cp67 and then morphing into vm370)
https://www.garlic.com/~lynn/submain.html#timeshare

normal 1st shift usage charges were enough to cover the fixed operating costs. a frequent problem was that offshift usage (revenue) wouldn't cover the corresponding costs (including vendor cpu-meter based charges). the 360/370 cpu-meter would run whenever the processor was active and/or there were active channel i/o operations (and would continue to "coast" for 400 milliseconds after all activity ceased). A challenge was to significantly reduce off-shift and weekend costs ... while still leaving the system available for use ... including remote & home access ... i.e. I've had home dialup access into the science center service starting in mar70
https://www.garlic.com/~lynn/subtopic.html#545tech

one of the issues was migration to "lights-out" operation ... i.e. the machine being able to operate/run w/o actually having a human present to perform operations (more & more automated operator).

the other was "channel i/o programs" that would be sufficiently active to allow/accept incoming characters ... but otherwise sufficiently idle that the cpu-meter would come to a stop (i.e. being "prepared" to accept incoming characters).
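
a toy model of the meter rule as described above (the 400ms coast figure is from the post; everything else is illustrative):

/* toy model of the 360/370 cpu-meter behavior described above */
#include <stdbool.h>

#define COAST_MS 400   /* meter keeps running this long after activity stops */

bool meter_running(bool cpu_busy, bool channel_busy, long ms_since_last_activity)
{
    if (cpu_busy || channel_busy)
        return true;
    return ms_since_last_activity < COAST_MS;   /* "coasting" */
}

/* the off-shift trick: a terminal channel program sitting "prepared" to accept
   an incoming character didn't count as channel-busy, so an otherwise idle
   system could wait for remote users with the meter (and the charges) stopped */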

a corresponding phenomenon was that off-shift charging (at various levels) has frequently been a fraction of 1st-shift rates. The issue is that a lot of the infrastructure costs are fixed regardless of the time-of-day ... and there has tended to be heavy provisioning to handle (peak) 1st shift operation (in the past, the cost of provisioning for peak usage was much larger because computer hardware was significantly more expensive). off-shift charging policies were frequently focused on attempting to migrate usage in order to utilize otherwise idle capacity.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

pro- foreign key propaganda?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pro- foreign key propaganda?
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Tue, 20 May 2008 15:37:52 -0400
paul c <toledobysea@ac.ooyah> writes:
In the 1970 mainframe culture that Codd was trapped in, 'key' had an extremely physical connotation, in fact some hardware supported 'keys' directly with dedicated machine-level operators. Many practitioners had grown up depending on file-level keys, as for IMS, its various keys were all encumbered with various navigational meanings. I think Codd was just as much a pragmatist as a theorist and even though his keys weren't at all the same thing he might have continued the term to ease his 'sales pitch'. If he had called them, say, 'the inference set', he might have expected even more resistance than he did from the ignorants of the day. While academics embraced his ideas quickly, he suffered many personal attacks from the powerful marketeers at a time when IBM was maybe more dominant than microsoft is today. Ironic, because IMS was used as way to help sell then big IO-oriented slow-cpu iron and the implementations that followed Codd were attacked for supposedly needing more hardware than IMS did. I remember an Amdahl salesman saying, "give me more of this relational stuff, I'll sell more cpu's!".

I've commented before that the 60s era databases had direct pointers exposed as part of the record.

Also the underlying disk technology had made a trade-off between relatively abundant i/o capability and the limited availability of real/electronic storage in the "CKD" ... count-key-data architecture ... misc. past posts
https://www.garlic.com/~lynn/submain.html#dasd

... it was possible to create i/o requests that performed extended searches for data &/or key pattern matches on disk ... w/o having to know specific location for some piece of information. This was used extensively in various kinds of directories (kept on disk w/o needing to use extremely scarce real storage).

However the 60s era databases tended to have direct record pointers exposed in database records (independent of the multi-track "searching" operations which would tell the disk to try and find the record).

I've posted several times before about the discussions between the IMS group in STL/bldg90 and the (original relational/SQL) System/R group in SJR/bldg28 (where Codd was located). Misc. past posts mentioning System/R
https://www.garlic.com/~lynn/submain.html#systemr

The IMS group pointed out that the index implementation in relational (which contributed to eliminating exposed record pointers as part of the database schema) typically doubled the physical disk space ... vis-a-vis IMS ... and greatly increased the number of physical disk i/os (as part of processing the index to find actual record location). The relational counter-argument was that eliminating the exposed record pointers as part of the database schema significantly reduced the administrative and management costs for large complex databases.
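
a hypothetical sketch of the disk i/o arithmetic being argued about (names and level counts are illustrative only, not from either implementation):

/* illustrative only: counting disk reads for the two access styles argued above */
#include <stdio.h>

/* IMS-style: the parent record carries the child's disk address directly */
static int reads_via_direct_pointer(void)
{
    return 1;                      /* one read fetches the target record */
}

/* relational-style: walk an n-level index to find the record's location,
   then read the record; the index itself also occupies extra disk space */
static int reads_via_index(int index_levels)
{
    return index_levels + 1;       /* one read per index level + the record */
}

int main(void)
{
    printf("direct pointer: %d read(s)\n", reads_via_direct_pointer());
    printf("3-level index : %d read(s)\n", reads_via_index(3));
    /* caching the index in (cheaper, larger) real storage is what later
       removed most of the extra reads -- the point made in the next paragraph */
    return 0;
}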

Going into the 80s, there were significant increases in the availability of electronic storage and also a significant decline in computer hardware costs (especially compared to increasing people costs). This shift helped with the uptake of relational ... the reduction in disk costs (and significant increase in bytes/disk) eliminated much of the argument about the disk space requirements for the relational index. The increases in sizes of computer memories (and reduction in cost) allowed for significant amounts of relational indexes to be cached in real storage (mitigating the significant increase in i/os that had been needed to process the indexes). The significant reduction in administration and management for relational (vis-a-vis IMS) was not only a cost issue but also became a skills availability issue (it became much easier to obtain and justify skills for relational deployment).

The database direct record pointers as well as the extensive "searching" capability (both from the 60s) could be considered a result of the extremely constrained amount of available system memory/storage.

The shift in relative amounts of system memory/storage vis-a-vis i/o capacity actually started by the mid-70s. in the early 80s, I was making statements that relative disk thruput had declined by better than an order of magnitude (ten times) over a period of 10-15 yrs (i.e. memory and cpu had increased by a factor of 40-50, while disk thruput had only increased by a factor of 3-5). This got me into some amount of trouble with the executives that ran the disk division. at one point they assigned their performance group to refute my statements. After a couple of weeks ... they came back and observed that I had actually somewhat understated the technology shift. on the other hand ... they did let me periodically play disk engineer in the disk engineering and product test labs ... misc. past posts
https://www.garlic.com/~lynn/subtopic.html#disk

Other purely historical topic drift ... the first relational product was for Multics ... from the 5th flr at 545 tech sq.

The science center was on the 4th flr of 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech

which had come up with the original virtual machine operating systems ... thru various generations cp40, cp67, vm370, etc. it was also where (gml) markup language was invented in 1969 (subsequently morphing into sgml, html, xml, etc).

And all the System/R development was done on vm370 virtual machine operating system at SJR.

The march of increases in system real storage has seen another transition. Relational implementations during the 70s and 80s were mostly oriented towards the primary location of information being on disk, with copies kept cached in real storage. However, in the 90s there started appearing implementations that assumed the whole database was in real storage and disks were purely for transaction integrity.

These claimed ten times the performance of the earlier generation of "caching" oriented implementations (even on the same hardware where the caching oriented implementations had their complete database also resident in real storage).

These high-performance relational databases saw some uptake in the telco and cellphone markets ... used for large number of call detail/charge records. There was also some speculation in this period that telcos might be able to move into the payment transaction market ... leveraging their highly efficient call transaction database implementations to first gain a foothold with "micro-payments" and then moving into the rest of the payment transaction market (being able to process significantly larger volume of transactions at a significantly lower cost). The big inhibitor for telcos pulling this off seems to be figuring out how to deal with the financial liability issues with regard to handling payments.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Obfuscation was: Definition of file spec in commands

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Obfuscation was: Definition of file spec in commands
Newsgroups: alt.folklore.computers
Date: Tue, 20 May 2008 15:58:45 -0400
Walter Bushell <proto@xxx.com> writes:
That is the reason for time machine, of course. Of course, everyone is an idiot sometimes. Pulling 25 hour days for a few weeks can lead to weird thinking.

in the 60s, the univ. used to turn over the datacenter to me for the weekend ... basically mainframes as personal computers from 8am sat. until 8am monday ... however, pulling a straight 48hr shift then made it a little difficult to attend classes on monday.

a couple decades later there were some jokes about my 4-shift week:

1st shift in sjr/bldg. 28 ... things like System/R
https://www.garlic.com/~lynn/submain.html#systemr

2nd shift in bldgs 14&15 (disk engineering and product test lab)
https://www.garlic.com/~lynn/subtopic.html#disk

frequent 3rd shift in bldg90/stl on various activities

and 4th shift (weekends) up at HONE (virtual machine based time-sharing service providing online sales & marketing support)
https://www.garlic.com/~lynn/subtopic.html#hone

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Different Implementations of VLIW

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Different Implementations of VLIW .
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 20 May 2008 18:45:02 -0400
"Phil Weldon" <not.disclosed@example.com> writes:
In 1979 the price of RAM for the IBM System/360 was cut from $75,000 US to $50,000 US per MByte.

By 1985 IBM had the System/370 3090-200 and the 1 Mbit SAMOS chip, with a dual processor 64 MByte system priced at $5,000,000 US [IBM Archives]. In the same year IBM shipped its 4,000,000th PC (and discontinued the PC Jr.) Memory was not upgraded separately from the system in this class of mainframes; the next step up was to the 3090 - 400 field upgrade with twice the memory and twice the number of processors for an additional $4,000,000.

But for mainframes of this class the memory and consequently power density required heroic cooling that added significantly to the cost; a cost that was not incurred by lower densities (the IBM PC, for example - or a VAX.)


3090 had a separate memory issue ... in order to meet "capacity planning" thruput ... it needed more memory ... than the technology (of the period) could easily package ... as normal processor memory ... so they did sort of a numa architecture ... that was under software control ... called *expanded store* (basically the same chips as in processor memory ... but on a different bus).

the software paradigm to support it was sort of like an electronic paging disk ... except, instead of asynchronous i/o operations ... it had synchronous move instructions (the claim being that while the instruction took some amount of time ... it was way less than traditional asynchronous i/o interrupt handling pathlengths).

it was a "fast", wide bus between expanded store (placing the expanded store further away with longer latency) and processor memory that moved 4k bytes at a time ... could also think of processor memory as a software controlled (store-in) cache with 4kbyte cache lines ... although the amount of expanded store tended to be about the same as regular processor memory.
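
as a conceptual model (function names are hypothetical, not the actual instruction mnemonics), expanded store behaves like a synchronous second tier of paging -- a blocking 4k copy with no channel program or i/o interrupt involved:

/* conceptual model of expanded store as a synchronous paging tier;
   function names are hypothetical, not the actual instruction mnemonics */
#include <string.h>
#include <stdint.h>

#define PAGE 4096

static uint8_t processor_memory[1u << 20];   /* toy sizes */
static uint8_t expanded_store[1u << 20];

/* "page out": synchronous 4k move over the wide bus -- the cpu simply waits,
   instead of building a channel program and taking an i/o interrupt later */
void page_to_expanded(unsigned mem_frame, unsigned exp_block)
{
    memcpy(expanded_store + (size_t)exp_block * PAGE,
           processor_memory + (size_t)mem_frame * PAGE, PAGE);
}

/* "page in": the reverse move; processor memory acts like a software-managed
   store-in cache with 4k "lines" in front of the slower expanded store */
void page_from_expanded(unsigned exp_block, unsigned mem_frame)
{
    memcpy(processor_memory + (size_t)mem_frame * PAGE,
           expanded_store + (size_t)exp_block * PAGE, PAGE);
}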

the expanded store bus came in handy when they went to add hippi i/o support to 3090. the standard channel interfaces wouldn't handle the i/o rate. they cut into the side of the expanded store bus to add hippi. however, the mainframe interface was still the 4k move instruction ... so hippi i/o programming was done with a kind of peek/poke paradigm using 4k move to/from instructions to reserved expanded store addresses.

later generations, memory densities and packaging technology eliminated the need for expanded store ... however, there continued to be LPAR configuration support for emulated expanded store (using standard memory) ... apparently because of various legacy software implementation considerations.

LPAR has been an evolution of PR/SM introduced on the 3090s ... somewhat in response to Amdahl's hypervisor. LPARs ... or Logical PARtitions ... implement a significant subset of virtual machine capability directly in the "hardware" (not requiring a *software* virtual machine operating system).

... i don't have a recollection of costs ... however the 3090 archives web page:
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_PP3090.html

the above mentions the (two) 3092 processor controllers ... service processors ... which were really a pair of 4361s running a modified version of vm370 release 6 ... recent post discussing the 3090 (4361) service processors
https://www.garlic.com/~lynn/2008h.html#80 Microsoft versus Digital Equipment Corporation

the 3090 archive also mentions that the (4361) 3092 processor controller required two 3370 Model A2 disks ... and access to 3420 tape drives (for read/write files).

for other memory topic drift x-over, post from today in c.d.t about rdbms
https://www.garlic.com/~lynn/2008i.html#8 pro- foreign key propaganda?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Different Implementations of VLIW

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Different Implementations of VLIW .
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 20 May 2008 18:57:20 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
3090 had a separate memory issue ... in order to meet "capacity planning" thruput ... it needed more memory ... than the technology (of the period) could easily package ... as normal processor memory ... so they did sort of a numa architecture ... that was under software control ... called *expanded store* (basically the same chips as in processor memory ... but on a different bus).

re:
https://www.garlic.com/~lynn/2008i.html#10 Different Implementations of VLIW

a different 3090 "capacity planning" issue was the number of channels (w/o regard to peak transfer rate and not being able to support hippi).

3090s were built with large modules ... and had been profiled to have balanced system thruput with a specific number of i/o channels. however, fairly late in the development cycle ... it was "discovered" that the new disk controller (3880) had significantly higher protocol processing overhead ... significantly increasing channel busy time (even tho the data transfer rate had increased to 3mbytes/sec, the disk control processor handling i/o commands was quite slow).

The revised system thruput profile (using the actual 3880 controller overhead channel busy numbers) required a significant increase in the number of channels (in order to meet aggregate thruput objectives). The additional channels meant that 3090 manufacturing required an extra module ... which noticeably increased the 3090 manufacturing costs.

There was a joke that the incremental manufacturing cost for each 3090 should be charged off against the disk business unit's bottom line ... rather than the processor business unit's.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

pro- foreign key propaganda?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pro- foreign key propaganda?
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Tue, 20 May 2008 22:37:44 -0400
paul c <toledobysea@ac.ooyah> writes:
In the IBM way of things (which was cloned by several other mfrs, not just the mainframe ones, but even Wang, IIRC), there was this thing called Count-Key-Data disk architecture and a bunch of 'access methods' (which IMS made use of, but could be used by themselves without IMS). Likely I recall some of the details wrong, but keys in access methods such as VSAM or ISAM or even BDAM if I recall, below any logical schemes such as trees, could be stored in segregated disk cylinders and there were also little mini-computers called channels with very limited instruction sets which would search those disk cylinders asynchronously from the main cpu. (Some of those artifacts found their way into the higher-level IMS configuration verbiage and commands.) All the 'bare-metal' programmers knew about this as well as many other physical techniques such as how to avoid hardware deadlocks. Much of Codd's audience was within this (dominant) culture and he was very much addressing it.

re:
https://www.garlic.com/~lynn/2008i.html#8 pro- foreign key propaganda?

BDAM ... basic direct access method. basically had 32bit record no/ptr

Misc. past posts mentioning BDAM (and/or CICS ... an online transaction monitor originating in the same era and frequently deployed with applications that used BDAM files)
https://www.garlic.com/~lynn/submain.html#bdam

An online medical library was developed using bdam in the 60s and was still in extensive world-wide use 30 years later ... remaining the largest online search facility in the world until being eclipsed by some of the popular internet search engines sometime in the 90s.

One of the processes was that medical knowledge articles were indexed in a large number of different ways: keywords, authors, etc. Tables were built of all the different ways articles were indexed. In effect the record number of the article became a unique key for each article. A specific keyword would have a list of all articles that the keyword applied to ... i.e. a condensed set of 32bit integers ... the record ptr effectively used as the unique key of the article.

Boolean keyword searches ... became ANDs and ORs of the sets of unique keys (unique record ptrs). An AND of two keywords becomes the intersection of the key/recordptrs from the two lists. An OR of two keywords becomes the join (union) of the key/recordptrs from the two lists. This was all built on top of the underlying BDAM support.
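
a small sketch of that AND/OR machinery over the 32-bit record numbers, assuming each keyword's list is kept sorted (names are illustrative, not from the actual implementation):

/* boolean keyword search over sorted lists of 32-bit record numbers
   (the record number doubling as the article's unique key, as described above) */
#include <stdint.h>
#include <stddef.h>

/* AND: intersection of two sorted keyword lists */
size_t and_keywords(const uint32_t *a, size_t na,
                    const uint32_t *b, size_t nb, uint32_t *out)
{
    size_t i = 0, j = 0, n = 0;
    while (i < na && j < nb) {
        if (a[i] < b[j]) i++;
        else if (a[i] > b[j]) j++;
        else { out[n++] = a[i]; i++; j++; }
    }
    return n;
}

/* OR: union of two sorted keyword lists (the "join" mentioned above) */
size_t or_keywords(const uint32_t *a, size_t na,
                   const uint32_t *b, size_t nb, uint32_t *out)
{
    size_t i = 0, j = 0, n = 0;
    while (i < na || j < nb) {
        if (j >= nb || (i < na && a[i] < b[j])) out[n++] = a[i++];
        else if (i >= na || a[i] > b[j])        out[n++] = b[j++];
        else { out[n++] = a[i]; i++; j++; }
    }
    return n;
}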

Part of the issue in attempting to replace the bdam implementation was that it was highly efficient ... having collapsed the article's unique key and the corresponding record pointer into the same value (however, there was significant maintenance activity ... so significant access & thruput was needed to justify the extensive care and support). Problems also started creeping in when the number of articles started exceeding the size of the record ptr/key.

...

ISAM ... indexed sequential access method ... had the really complex channel programs. the whole index structure of the database was stored on disk. a channel program would be extensive and very complex ... starting out searching for a specific index record ... which would then be read into a memory location that was the argument of a following search command ... this could continue for multiple search sequences in a single channel program, until the pointer for the appropriate data record was found and the record read/written. Channel programs could have relatively complex condition testing, branching, and looping.

ISAM was an enormously I/O intensive resource hog ... and went out of favor as the trade-off between disk i/o resources and real memory resources shifted (my reference to relative system disk i/o thruput having declined by better than an order of magnitude during the period) ... and it became much more efficient to maintain index structures cached in processor storage.

ISAM channel programs were also a real bear to provide virtualization support for.

....

for other reference, the wiki IMS page:
https://en.wikipedia.org/wiki/Information_Management_System

from above:
IBM designed IMS with Rockwell and Caterpillar starting in 1966 for the Apollo program. IMS's challenge was to inventory the very large Bill of Materials for the Saturn V moon rocket and Apollo space vehicle.

... snip ...

and:
In fact, much of the world's banking industry relies on IMS, including the U.S. Federal Reserve. For example, chances are that withdrawing money from an automated teller machine (ATM) will trigger an IMS transaction. Several Chinese banks have recently purchased IMS to support that country's burgeoning financial industry. Reportedly IMS alone is a $1 billion (U.S.) per year business for IBM.

... snip ...

The bottom line in the wiki article is that IMS outperforms relational for a given task ... but requires more effort to design & maintain.

And CICS wiki page ... for much of their lives ... IMS and CICS have somewhat competed as "transaction monitors":
https://en.wikipedia.org/wiki/CICS

For old CICS folklore ... the univ. that I was at in the 60s was selected as one of the beta test sites for the original CICS product release ... and one of the things I got tasked with as an undergraduate was helping debug CICS.

and BDAM wiki page ...
https://en.wikipedia.org/wiki/Basic_direct_access_method

and ISAM wiki page (although it doesn't talk about the really complex channel program implementation support done in the 60s):
https://en.wikipedia.org/wiki/ISAM

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

pro- foreign key propaganda?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: pro- foreign key propaganda?
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Wed, 21 May 2008 08:17:47 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
part of the issue attempting to replace the bdam implementation was that it was highly efficient ... having collapsed the article unique key and the corresponding record pointer into the same value (however, there was significant maintenance activity ... so significant access & thruput was needed to justify the extensive care and support). problems also started creeping in when the number of articles started exceeding the size of the record ptr/key.

re:
https://www.garlic.com/~lynn/2008i.html#8 pro- foreign key propaganda?
https://www.garlic.com/~lynn/2008i.html#12 pro- foreign key propaganda?

aka ... overloading a value with multiple characteristics can significantly improve runtime operation ... but can become an administrative burden to maintain the consistency of all the different characteristics.

reducing the number of different characteristics a value has to represent will reduce the consistency administrative burden but will typically increase the runtime overhead (navigating internal tables relating the different characteristics).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

DASD or TAPE attached via TCP/IP

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DASD or TAPE attached via TCP/IP
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 21 May 2008 10:02:07 -0400
Michael.Knigge@SET-SOFTWARE.DE (Michael Knigge) writes:
I wonder how it is possible to attach DASD- or TAPE-Devices via TCP/IP. There is a product called mfnetdisk (see mknetdisk.com) that is able to "emulate" a 3390 that resides on a PC and is accessed via TCP/IP.

So... I ask myself how this is possible. And (for me) even more interesting, would it also be possible to do the same for a Tape?


for historical reference ... the internal "csc/vm" vm370 release ... mentioned in this old email
https://www.garlic.com/~lynn/2006v.html#email731212
https://www.garlic.com/~lynn/2006w.html#email750102
https://www.garlic.com/~lynn/2006w.html#email750430

was somehow leaked to at&t longlines circa 1975. they took this highly modified "csc/vm" vm370 release and made numerous local modifications ... including remote device support ... that would run over various kinds of communication links. basically the virtual machine channel program simulation would forward the stuff to the remote site for actual execution on the real locally attached device. this system managed to propagate to a number of at&t longlines machines. Nearly a decade later, the at&t national account manager managed to track me down ... longlines had continued to migrate the vm370 system thru various generations of mainframes ... but it came to an end with the move to 370/XA ... and he was looking for assistance in moving longlines off that vm370 system.

this isn't all that much different from standard i/o virtualization, aka a copy of the "virtual" channel program is replicated with real addresses substituted for virtual addresses. in the case of a remote device, the replicated "real" channel programs are run on the remote system ... with appropriate fiddling of the virtual pages on the application machine and the real pages on the machine where the device is attached.

some amount of the fiddling was handled by services running in a separate virtual machine. note this isn't all that different from what is done by various virtual machine mainframe simulators that run on various other kinds of platforms ... and include simulation of various kinds of mainframe i/o devices on completely different kinds of devices.
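
a very rough sketch of the forwarding idea -- the message layout and names below are invented for illustration, not the actual csc/vm modifications:

/* illustrative only: forwarding a translated channel program to the machine
   that actually owns the device; message layout and names are invented */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>

struct remote_io_request {
    uint16_t device;        /* device address on the remote system */
    uint16_t ccw_count;     /* number of channel command words that follow */
    uint8_t  ccws[256];     /* channel program, virtual addresses already
                               replaced with addresses valid at the remote end */
    uint8_t  data[4096];    /* page(s) of data for writes; reads come back in
                               the reply and get copied into the guest's virtual
                               pages (the "fiddling" on the local side) */
};

/* the transport is incidental -- here a connected socket, in 1975 whatever
   communication link happened to be available */
int forward_channel_program(int sock, const struct remote_io_request *req)
{
    return send(sock, req, sizeof *req, 0) == (ssize_t)sizeof *req ? 0 : -1;
}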

the specific communication mechanism used is the least of the issues.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

DASD or TAPE attached via TCP/IP

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: DASD or TAPE attached via TCP/IP
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 21 May 2008 12:07:53 -0400
R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
The key is "on other systems". iSCSI is not supported on z/OS. So, on z/OS it is still impossible. However - in general - it is possible.

It reminds me of a dialog with a DB2 admin: - DB2 uses VSAM datasets, how would you imagine a database without VSAM? - I don't know, but others imagine it *and do it*. On other platforms.


re:
https://www.garlic.com/~lynn/2008i.html#14 DASD or TAPE attached via TCP/IP

for some topic drift ... posts in a recent thread in comp.databases.theory:
https://www.garlic.com/~lynn/2008i.html#8 pro- foreign key propaganda?
https://www.garlic.com/~lynn/2008i.html#12 pro- foreign key propaganda?
https://www.garlic.com/~lynn/2008i.html#13 pro- foreign key propaganda?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

should I encrypt over a private network?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: should I encrypt over a private network?
Newsgroups: comp.security.misc
Date: Fri, 23 May 2008 09:49:46 -0400
marlow.andrew writes:
Suppose a company has 2 sites, A and B, one is primary, the other is secondary for DR reasons. A and B are separated significantly geographically. Both A and B use a SAN for their data. A and B are connected by a private network. The SAN data is replicated between A and B over this private network using some replication product. My question is, "should I be worried about the fact that the SAN replication product does not do encryption?". When I raised these concerns the answer I was given was "it's a private network so it's not a problem". I am still not sure. Maybe I'm paranoid but I thought most security breaches were inside jobs and this is made easier if the data going over the wire is always in plaintext. But then again, data sent around the LAN using NFS is not encrypted either.

in the mid-80s, there were claims that the corporate internal network had over half of all the link encryptors in the world (basically any link leaving corporate premises had to be encrypted) ... this was about the time that the size of arpanet/internet finally exceeded the internal network (which had been larger from just about the beginning until sometime mid-85) ... misc. posts mentioning internal network:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

in that period there was a story about a foreign consulate location in one of the major cities that apparently was chosen because it had line-of-sight to a large microwave communication antenna array used for major cross-country communication. there were comments that a lot of foreign government espionage was heavily intertwined with industrial espionage.

slightly earlier, in the early part of the 80s ... we were looking at deploying dial-up access into the corporate network for both home access (actually a major expansion of it, since i've had dial-up access at home since mar70) and hotel/travel access. a detailed study found that hotel pbx rooms were frequently especially vulnerable ... and as a result the encryption requirement was extended to all dial-up access ... which required designing and building a custom encrypting dial-up modem for these uses.

a lot of the internet hype seems to have distracted attention from both other forms of external compromises as well as internal attackers.

for a little additional topic drift:
https://www.garlic.com/~lynn/2008h.html#87 New test attempt

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Does anyone have any IT data center disaster stories?

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 23, 2008
Subject: Does anyone have any IT data center disaster stories?
Blog: Information Security
When we were doing the high availability HA/CMP product we looked at all kinds of ways that things could fail. One of the things was that over the decades, both software and hardware reliability had increased significantly. As a result the remaining failure modes tended to be human mistakes and environmental. As part of HA/CMP marketing, we had coined the terms disaster survivability and geographical survivability (to differentiate from disaster/recovery).
https://www.garlic.com/~lynn/submain.html#available

As an example, in this period there was the garage bombing at the World Trade Center ... which included taking out a "disaster/recovery" datacenter that was located on the lower floors. Later there was a large financial transaction processing center that had its roof collapse because of snow loading. Its disaster/recovery datacenter was the one in the World Trade Center (which was no longer operational).

On the other hand, long ago and far away, my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture (mainframe-speak for cluster). While there she created the Peer-Coupled Shared Data architecture ... which, except for IMS hot-standby, didn't see any take-up until SYSPLEX.
https://www.garlic.com/~lynn/submain.html#shareddata

There has been another very large financial transaction processing operation that has triple replicated locations and has attributed its 100 percent availability to:

automated operator
ims hot-standby

posts from a slightly related discussion in the comp.databases.theory forum:
https://www.garlic.com/~lynn/2008i.html#8
https://www.garlic.com/~lynn/2008i.html#12
https://www.garlic.com/~lynn/2008i.html#13

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 24 May 2008 11:30:39 -0400
Johnny Billquist <bqt@update.uu.se> writes:
No. I don't think they could have done it either. Cache coherency between several processors had to be done in software back then. The whole theory behind cache coherency protocols hadn't been thought out yet, and also, implementing that in hardware back then would have been a big box.

or at least at dec.

note that the 370 cache coherency resulted in slowing the processor cycle down by ten percent ... a basic two-processor smp started out at 1.8 times a single processor (2 x 0.9 = 1.8) because each processor ran ten percent slower to allow for signaling and listening to the other cache. the processing of cache invalidate signals received from the other cache further degraded performance (over and above the ten percent slow-down just to allow for signaling and listening).

the favorite son operating system for two-processor smp typically was quoted at 1.4-1.5 times the thruput of a single processor ... after throwing in kernel serialization, locking, and kernel software signaling overhead.

with the stuff i had done, with some sleight of hand, i had gotten very close to the 1.8 hardware thruput ... and in a few cases got two times or better (because of some cache affinity and cache hit ratio effects).
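
back-of-envelope on those numbers (my arithmetic here, not from the original measurements): two processors each slowed to 0.9 of the base cycle gives 2 x 0.9 = 1.8 times a single processor as the hardware starting point; the 1.4-1.5 times typically quoted for the favorite son operating system then implies losing roughly another 17-22 percent per processor to kernel serialization, locking, and signaling overhead (1.8 x 0.83 ≈ 1.5, 1.8 x 0.78 ≈ 1.4).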

misc. past smp posts and/or references to charlie inventing the compare&swap instruction while working on cp67 kernel smp fine-grain locking
https://www.garlic.com/~lynn/subtopic.html#smp
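
purely as an illustration of the kind of atomic update that compare&swap enables (a minimal sketch in C11 atomics, not the original 370 assembler sequence):

  #include <stdatomic.h>

  /* minimal sketch of a compare&swap retry loop; illustration only,
     not the original 370 assembler sequence */
  static _Atomic long counter;

  void add_to_counter(long delta)
  {
      long old = atomic_load(&counter);
      /* if another processor changed the word in the meantime, the
         compare-exchange fails, "old" is refreshed, and we retry */
      while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
          ;
  }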

old email referencing the dec announcement of symmetrical multiprocessing (and some comments about it not being considered "real" commercial multiprocessing until it supported symmetrical operation ... vax 8800)
https://www.garlic.com/~lynn/2007.html#email880324
https://www.garlic.com/~lynn/2007.html#email880329
in this post
https://www.garlic.com/~lynn/2007.html#46 How many 36-bit Unix ports in the old days?

i've frequently claimed that john's 801/risc design trade-offs were based both on the heavy multiprocessor cache consistency overhead (which didn't scale well as the number of processors increased) ... and on doing the exact (KISS) opposite of what had been attempted in the (failed) future system effort
https://www.garlic.com/~lynn/submain.html#futuresys

which had attempted to combine lots of advanced features, borrowing from tss/360 and multics, along with some very complex hardware interfaces. some sense of that showed up in the subsequent system/38 effort ... while 801/risc tried to do the exact opposite.

it wasn't until later generations, with things like directory-based cache consistency and numa, that scale-up in the number of processors started to be seen.

the work on (hardware) cache consistency implementations was also useful in working out details of distributed lock manager for ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

as well as a process that would allow (database) cache-to-cache copying (w/o first having to write to disk) while still being able to preserve acid properties

some recent posts mentioning DLM work:
https://www.garlic.com/~lynn/2008b.html#69 How does ATTACH pass address of ECB to child?
https://www.garlic.com/~lynn/2008c.html#81 Random thoughts
https://www.garlic.com/~lynn/2008d.html#25 Remembering The Search For Jim Gray, A Year Later
https://www.garlic.com/~lynn/2008d.html#70 Time to rewrite DBMS, says Ingres founder
https://www.garlic.com/~lynn/2008g.html#56 performance of hardware dynamic scheduling
https://www.garlic.com/~lynn/2008h.html#91 Microsoft versus Digital Equipment Corporation

some recent numa/sci posts:
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#3 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#6 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#8 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#12 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#19 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008f.html#21 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical
https://www.garlic.com/~lynn/2008h.html#80 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008h.html#84 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#2 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#3 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#5 Microsoft versus Digital Equipment Corporation

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

American Airlines

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: American Airlines
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sat, 24 May 2008 11:49:22 -0400
lists@AKPHS.COM (Phil Smith III) writes:
That would be the problem today; back in 1989 when SABRE (to the best of my knowledge) was the main airline reservation system, they didn't have that option.

But it was long ago and far away, and my memory may be faulty!


sabre was "main" for several airlines ... but there also was united's reservation system (Apollo) ... and in that time-frame, my wife had done a brief stint as chief architect for amadeus (which started off with the old eastern airlines reservation system, SystemOne)

one of the things that cut her stint short with amadeus was that she sided with the decision to use x.25 rather than sna as the main communication protocol ... which brought out a lot of opposition from certain quarters. it didn't do them much good since amadeus went with x.25 anyway.

current amadeus website
http://www.amadeus.com/

for other archeological notes ... the eastern airlines res system had been running on a 370/195. one of the things that helped put the final nails in the future system project coffin
https://www.garlic.com/~lynn/submain.html#futuresys

was the analysis that if a future system machine were implemented with the same performance technology used in the 370/195 ... and the eastern airlines res. system were moved over to it ... it would have the thruput of a 370/145.

wiki computer res system page
https://en.wikipedia.org/wiki/Computer_reservations_system

from above:
European airlines also began to invest in the field in the 1980s, propelled by growth in demand for travel as well as technological advances which allowed GDSes to offer ever-increasing services and searching power. In 1987, a consortium led by Air France and West Germany's Lufthansa developed Amadeus, modeled on SystemOne. In 1990, Delta, Northwest Airlines, and Trans World Airlines formed Worldspan, and in 1993, another consortium (including British Airways, KLM, and United Airlines, among others) formed the competing company Galileo International based on Apollo. Numerous smaller companies have also formed, aimed at niche markets the four largest networks do not cater to.

... snip ...

for totally unrelated topic drift ... at one point we were asked to consult with one of the main reservation systems about redoing various parts of the implementation. recent posts mentioning doing a paradigm change in the implementation of routes:
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sat, 24 May 2008 11:52:22 -0400
peter@taronga.com (Peter da Silva) writes:
The 11/44 and 11/60 could support about 8 users under RSTS, about 4 under 2BSD or 3BSD UNIX. Most of the Berkeley 11/70s could support 10-30 users under 3BSD, but they started sucking at the high end. The Cory 11/70, the undergrad EECS machine, had up to 70 users on it during finals week and it was definitely way past "unhappy" at that point.

i remember somebody sending me email from one of the UCB machines when it was loaded at that level and mentioning that "response" (which should have been subsecond) was on the order of a minute.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Worst Security Threats?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 25, 2008
Subject: Worst Security Threats?
Blog: Information Security
we had been asked to come in and help word smith the cal. electronic signature legislation (and later the fed. legislation).
https://www.garlic.com/~lynn/subpubkey.html#signature

Some of the parties involved were also working on privacy issues and had done in depth consumer surveys ... and found the two most important issues were

• identity theft (mostly account fraud, i.e. fraudulent transactions against existing accounts)

• denial of service (institutions using personal information to the detriment of the individual)

studies have regularly found that upwards of 70 percent of identity theft involves "insiders".

the lack of attention to the identity theft problem was part of the basis for the subsequent cal. state breach notification legislation (which has since also shown up in many other states).

recent article

Most Retailer Breaches Are Not Disclosed, Gartner Says
http://www.pcworld.com/businesscenter/article/146278/most_retailer_breaches_are_not_disclosed_gartner_says.html
Most retailer breaches are not disclosed, Gartner says
http://www.networkworld.com/news/2008/060508-researchers-say-notification-laws-not.html

in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. part of that effort was detailed study of threats & vulnerabilities related to fraudulent transactions. the product of the x9a10 financial standard working group was the x9.59 financial transaction standard
https://www.garlic.com/~lynn/x959.html#x959

part of the detailed threat and vulnerability study was identifying lots of infrastructure & paradigm issues ... including transaction information having diametrically opposing requirements. For security reasons, existing transaction & account information has to be kept completely confidential and never divulged. However, there are a large number of business processes that require access to the transaction and account information in order to perform transaction processing. This has led to our periodic comment that even if the planet were buried under miles of information hiding encryption ... it still wouldn't be possible to prevent breaches.

As a result, the x9.59 standard slightly modified the transaction processing paradigm ... making previous transaction information useless to attackers for performing fraudulent transactions. x9.59 did nothing regarding trying to hide such information ... but the x9.59 standard eliminated such breaches as a threat & vulnerability.

another aspect of the detailed vulnerability and threat analysis (besides the diametrically opposing requirements on transaction information, which can never be divulged and yet at the same time is required for numerous business processes) ... was security proportional to risk. A huge part of existing attacks (both insiders and outsiders) are directed at obtaining this information since the results represent significant financial gain to the attackers (from the fraudulent transactions). We've estimated that the value of the information to the attackers (steal all the money in the account or run up transactions to the credit limit) is hundreds of times greater than the value of the information to the retailers (profit margin on the transaction). As a result, the attackers (insiders and outsiders) can afford to outspend the defenders possibly 100:1. In effect, the x9.59 financial standard also corrected this imbalance by removing the value of the information to the attackers. This also eliminates much of the motivation behind phishing attacks (i.e. it doesn't eliminate the attacks, just eliminates the usefulness of the information for fraudulent transaction purposes).

part of security proportional to risk came from having been asked to consult with a small client/server startup that wanted to do payment transactions on their server and had this technology, called SSL, that they had invented and wanted to use. Most people now refer to the result as electronic commerce.

One of the things that we kept running into was that none of the server operators could afford what we were specifying as the necessary minimum security (proportional to the financial risk). This was later confirmed by the x9a10 financial standard working group's detailed threat and vulnerability studies ... and helped motivate the paradigm tweak in the x9.59 financial standard (which removed most phishing and breaches as a vulnerability ... it didn't eliminate phishing and breaches, just removed most of the basic financial motivation behind the phishing and breach efforts).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 26 May 2008 09:48:17 -0400
jmfbah <jmfbahciv@aol> writes:
Shouldn't the monitor (or exec or kernel) know a physical location is shared?

caches are local stores holding subsets of information, each tagged with some sort of identification ... in fact, the location in the local store is frequently selected by using subset/index bits from the identification.

a physical mapped cache uses the physical address to identify things in the cache ... and bits from the physical address to index the location in the cache.

a virtual mapped cache uses the virtual address to identify things in the cache ... and bits from the virtual address to index the location in the cache. this allows the cache lookup to start w/o waiting for the virtual to physical address translation from the table/translation lookaside buffer.

the virtual mapped cache sharing problem is aliasing ... where the same shared (physical) location can be known by multiple different virtual addresses. this opens the possibility that the same physical data indexes to different locations in a virtual cache (because of different virtual addresses) and is known/named by different (virtual address) names/aliases.

the monitor will know a physical address is shared ... when it sets up the (virtual to real) translation tables ... but that doesn't mean that a virtual cache can easily figure out that a physical address is shared and known by multiple different aliases (major point of having a virtual cache is doing a quicker lookup w/o having to wait for the virtual to real translation delay from the tlb).

the issue is somewhat analogous to multiprocessor cache consistency protocols ... i.e. how to maintain consistency where the same physical data may be in different caches ... but in this case, it is the same physical data in the same cache ... but at different locations because of being known by multiple different names/aliases.

the assumption here is that the cache is large enuf that it attempts to maintain locations for multiple different virtual address spaces (and doesn't flush the cache whenever there is virtual address space or context switch). this is analogous to table/translation look aside buffer keeping virtual to physical address mappings for multiple different virtual address spaces (as opposed to flushing all mappings whenever there is context or virtual address space change). this is the problem where different people have the same name and it is necessary to differentiate which person you are talking about (as opposed to the situation where the same person has multiple different aliases).
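
a minimal sketch of the indexing issue (illustrative only ... made-up cache geometry, not any particular machine): a virtually indexed cache picks the set from virtual address bits, and when some of those bits come from above the page offset, two virtual aliases of the same physical line can land in different sets:

  /* illustration only -- made-up geometry, not any particular machine:
     64kbyte virtually indexed cache, 128 byte lines, direct mapped
     => 512 sets, set index = address bits 7..15                       */
  #define LINE_BITS 7
  #define SET_COUNT 512u

  unsigned cache_set(unsigned long vaddr)
  {
      /* with 4k pages, bits 7..11 come from the page offset (same for
         every alias) but bits 12..15 come from the virtual page number
         ... so the same physical line mapped at two different virtual
         addresses can end up indexed into two different sets           */
      return (unsigned)(vaddr >> LINE_BITS) % SET_COUNT;
  }

if all of the index bits came from below the page boundary, aliases would always select the same set (which is related to the 168-1/168-3 cache size discussion in a later post).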

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 26 May 2008 09:52:32 -0400
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
I believe it was "Translation Lookaside Buffer" since virtual storage was added to S/370. I am sometimes surprised when IBM names for things stick, as this one seems to have done. No-one else calls their disks DASD, and rarely starting a system up through IPL.

from z-architecture principles of operation

3.11.4 Translation-Lookaside Buffer
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/3.11.4?SHELF=DZ9ZBK03&DT=20040504121320

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Credit Card Fraud

Refed: **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 26, 2008
Subject: Credit Card Fraud
Blog: Information Security
In the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. part of the effort was detailed end-to-end threat & vulnerability study.

one of the major threats & vulnerabilities identified was being able to use information from previous transactions to enable fraudulent transactions (i.e. skimming at pos, eavesdropping on the internet, security breaches and data breaches of log files, and lots of other kinds of compromises). we have sort of made reference to the general phenomenon as the "naked transaction" (wherever it exists, it is vulnerable).

the x9a10 financial standard working group produced the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959

... which slightly tweaked the paradigm, eliminating the "naked transaction" phenomenon ...
https://www.garlic.com/~lynn/subintegrity.html#payments

aka it didn't do anything about attempting to hide the information (from previous transactions) ... it just eliminated attackers being able to use the information for fraudulent transactions.

somewhat related answer
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/237628-24760462
also
https://www.garlic.com/~lynn/2008i.html#21 Worst Security Threats?

part of the x9.59 financial standard protocol had been based on the earlier work we had done on what is now usually referred to as electronic commerce. we were asked to consult with a small client/server startup that wanted to do financial transactions on their servers and had this technology called SSL that they had invented and wanted to use. The major use of SSL in the world today is involved with this thing called electronic commerce and hiding information related to the transactions.

Part of the x9.59 standard was eliminating the need to hide financial transaction information as countermeasure to fraudulent transactions ... which then can be viewed as also eliminating the major use of SSL in the world today.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 27 May 2008 08:58:34 -0400
Johnny Billquist <bqt@update.uu.se> writes:
Caching on virtual addresses is messy in so many ways...

Oh, and caches with more than one cache line for each address isn't in any way unusual (can someone help me remember the correct term here?). The cache in the PDP-11/70 is 2-way associative, which means that for each cache line, there are two memory cells that can hold the data. So, two different addresses that hash to the same cache line can be in the cache simultaneously. Which is a good thing.


re:
https://www.garlic.com/~lynn/2008i.html#22 Microsoft versus Digital Equipment Corporation

so virtual cache problem can be akin to the multiprocessor cache coherency ... the same physical location can appear in multiple different places.

typically a group of cache lines is "indexed" by a set of bits from the location address (and then that set is checked to see if the required information is already loaded ... and if not ... one of the cache lines is selected for replacement).

in a virtual cache ... some of the index bits may come from the "page" displacement ... i.e. the part of the virtual address that is within the page. that set of bits would be the same for a physical address that might be known/loaded by different virtual addresses. other cache index bits may come from the part of the address above the page displacement ... and those may be different for a physical location that is shared in different virtual address spaces at different locations (the alias problem).

so one of the approaches to virtual cache coherency: if the desired location isn't in the cache ... it is a miss and a real storage fetch has to be started. the TLB has to be interrogated to get the real address (for the missing cache line) in order to do the real memory fetch anyway. overlapped with the real storage fetch (which might take thousands of cycles), all the possible alias locations for the same physical data could then also be checked (alternatively, all virtual cache lines that might be an alias could simply be invalidated).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 27 May 2008 09:13:55 -0400
jmfbah <jmfbahciv@aol> writes:
So, when the contents of a virtual address has to be written back to physical memory, does the monitor do the virtual-to-physical calculation or does the cache? I think I would hope that the monitor does this...but I'm not sure why.

re:
https://www.garlic.com/~lynn/2008i.html#22 Microsoft versus Digital Equipment Corporation

if it is a hardware managed cache ... on a cache miss (can't find it), the hardware interrogates the TLB for the real address ... and sends out a request to real memory to load the cache line for that real address. similarly, when a (store into) cache line is being replaced and the (changed) information has to be flushed to real storage ... it can interrogate the TLB for the real address.

however, some virtual caches may keep the virtual cache line "tagged" with both the virtual address as well as the real address (when it is loaded) ... even tho the cache line isn't "indexed" by the real address; i.e. in a virtual cache that is simultaneously keeping track of cache lines from multiple different virtual address spaces ... it already has to track the virtual address space identifier and virtual address for each cache line ... in addition it might also remember the physical address (even tho the real address isn't used to index the cache line). when a (modified) cache line has to be written back to storage ... this allows the operation to start immediately w/o the hardware having to do a separate interrogation of the TLB to get the corresponding real address.

so in this discussion about analogy between multiprocessor cache coherency and virtual caches that support aliases (same real address known by multiple different virtual addresses)
https://www.garlic.com/~lynn/2008i.html#25 Microsoft versus Digital Equipment Corporation

if the cache is keeping the real address as part of the cache line tag, then it can look at all the possible alternative alias locations where the same real address might appear and only invalidate/remove a line if it has a matching real address.
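
a toy version of that tagging scheme (illustrative only, made-up geometry, not any real machine's implementation): each virtually indexed line also remembers the real address it was loaded from, so on a miss the alias candidate sets can be scanned and a line invalidated only on a real-address match:

  /* illustration only: virtually indexed (direct mapped) cache where
     each line also carries the real address it was loaded from        */
  #define LINE_BITS  7                       /* 128 byte lines          */
  #define PAGE_BITS  12                      /* 4k pages                */
  #define SET_BITS   9                       /* 512 sets                */
  #define SET_COUNT  (1u << SET_BITS)
  #define IN_PAGE    (1u << (PAGE_BITS - LINE_BITS))  /* sets sharing page-offset bits */

  struct line { int valid; unsigned long vtag, rtag; };
  static struct line cache[SET_COUNT];

  /* on a miss (vaddr already translated to raddr for the storage fetch),
     scan only the sets that could hold an alias -- same page-offset index
     bits, any value of the above-page bits -- and invalidate on a
     matching real line address                                          */
  void invalidate_aliases(unsigned long vaddr, unsigned long raddr)
  {
      unsigned low = (unsigned)(vaddr >> LINE_BITS) & (IN_PAGE - 1);
      for (unsigned high = 0; high < SET_COUNT / IN_PAGE; high++) {
          unsigned s = high * IN_PAGE + low;
          if (cache[s].valid && cache[s].rtag == raddr >> LINE_BITS)
              cache[s].valid = 0;
      }
  }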

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Tue, 27 May 2008 09:58:01 -0400
jmfbah <jmfbahciv@aol> writes:
But associative memory is nothing new. it used to be considered a feature.

re:
https://www.garlic.com/~lynn/2008i.html#22 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#24 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#26 Microsoft versus Digital Equipment Corporation

both caches and TLBs are frequently partially associative. TLBs are sometimes a hardware-managed cache of the translation table information (virtual address to real address mappings) located in real storage (although some TLBs are software managed ... and require the monitor to load the values).

typical cache entries (and hardware-managed TLB entries) are broken into sets of entries. say a 2mbyte cache with 128 byte cache lines ... has 16,384 cache lines. If the cache is 4-way associative, the cache is broken up into 4096 sets of four cache lines each. The cache then needs to use 12 bits from the original address (real or virtual) to index one of the 4096 sets of four cache lines ... and then check all four of those cache lines (i.e. associative) to see whether any matches the desired address.

(hardware) TLBs tend to work similarly (caching virtual to real address information); they have sets of entries that may be 2-way or 4-way associative. Bits from the address are used to index a specific set of entries and then all entries in that set are checked for a match.

This is a trade-off between the circuits/delay required to do a fully associative check and the interference that can happen when a whole set of different addresses map to the same, single entry and start "thrashing".
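
restating the 2mbyte/4-way example above in code form (the numbers are just the ones from the text):

  #include <stdio.h>

  int main(void)
  {
      unsigned cache_bytes = 2u << 20;             /* 2mbyte cache         */
      unsigned line_bytes  = 128;                  /* 128 byte cache lines */
      unsigned ways        = 4;                    /* 4-way associative    */

      unsigned lines = cache_bytes / line_bytes;   /* 16384 cache lines    */
      unsigned sets  = lines / ways;               /* 4096 sets of four    */
      unsigned index_bits = 0;
      for (unsigned n = sets; n > 1; n >>= 1)
          index_bits++;                            /* 12 index bits        */

      printf("%u lines, %u sets, %u index bits\n", lines, sets, index_bits);
      return 0;
  }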

there was an issue with the 370/168 TLB in the bits it used to index the TLB. These were 16mbyte/24bit virtual address machines. There were 128 TLB entries ... and one of the bits used to index TLB entries was the "8 mbyte bit" (i.e. of the 24 bits, numbered 0-23, the first or zero bit). The favorite son operating system was designed so that the kernel occupied 8mbytes of each virtual address space and (supposedly) the application had the other 8mbytes. The result was that typically half of the TLB entries were filled with kernel virtual addresses and half the TLB was filled with application virtual addresses. However, for vm370/cms, the cms virtual address space started at zero ... and extended upwards ... and most applications rarely crossed the 8mbyte line ... so frequently half the 370/168 TLB entries would go unused.
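
a simplified sketch of the effect (the actual 168 index function may have differed ... this just shows what happens when the 8mbyte bit supplies one of the set-index bits): with 128 entries, 2-way, 64 sets, an address space (like cms) that stays below the 8mbyte line only ever selects half the sets:

  /* simplified illustration, not the actual 168 hardware: 24-bit
     addresses, 4k pages, 128 TLB entries, 2-way => 64 sets           */
  unsigned tlb_set(unsigned long vaddr)
  {
      unsigned bit8m = (unsigned)(vaddr >> 23) & 1;     /* the "8 mbyte" bit    */
      unsigned low5  = (unsigned)(vaddr >> 12) & 0x1f;  /* low page-number bits */
      return (bit8m << 5) | low5;   /* addresses below 8mbyte => sets 0..31 only */
  }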

370 TLBs were indexed with low associativity ... however, the 360/67 "look-aside" (hardware virtual to real mapping) wasn't referred to as a TLB ... it was called the associative array ... since it was fully associative (all entries interrogated in parallel).

there was sort of a virtual/real cache issue with the introduction of the 370/168-3, which doubled the 32k cache (from the 370/168-1) to a 64k cache. With the 32k cache, the number of sets of cache lines was such that the index bits could be taken purely from the page displacement of the address ... which would be the same whether it was a virtual address or a real address.

370 had support for both a 2k virtual page size mode and a 4k virtual page size mode. With the 32k cache ... there was no difference between 2k & 4k page sizes ... however for the 64k cache, they took the "2k" bit as part of the cache line indexing. As a result, a 168-3 operating in 4k virtual page mode would use the full 64k cache ... but when operating in 2k virtual page mode it would only use 32k of cache. And in any transition between 2k and 4k modes ... the cache would be flushed ... since the mappings were different. Now, some customers running vm370 with a VS1 virtual guest (a batch operating system that ran with 2k virtual page size) upgraded from a 168-1 to a 168-3 and performance got much worse.

Nominally, VS1 would run on a 168-3 with 32k of cache ... just as if it were a 168-1 ... and shouldn't have seen any performance improvement (but it also shouldn't have seen a decrease). The problem was that vm370 defaulted the hardware settings to 4k page mode ... except when 2k page mode was specifically requested. The result was that vm370 (when running virtual VS1 or DOS/VS) was frequently making the hardware switch back and forth between 2k and 4k page modes. On all other machines ... this wasn't a problem ... but on the 168-3 ... it resulted in the cache having to be flushed (every time the switch occurred).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Scalable Nonblocking Data Structures

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Scalable Nonblocking Data Structures
Newsgroups: alt.folklore.computers
Date: Tue, 27 May 2008 19:36:49 -0400
Scalable Nonblocking Data Structures
http://developers.slashdot.org/article.pl?sid=08/05/27/1916235

Cliff Click on a Scalable Non-Blocking Coding Style
http://www.infoq.com/news/2008/05/click_non_blocking

from above:

The major components of Click's work are:
...

2. Atomic-update on those array words (using java.util.concurrent.Atomic.*). The Atomic update will use either Compare and Sweep (CAS) if the processor is Azul/Sparc/x86, or Load Linked/Store-conditional (LL/SC) on the IBM platform.


... snip ...

or maybe compare&swap ... invented by charlie (i.e. CAS are charlie's initials) when he was doing work on multiprocessing fine-grain locking for cp67 virtual machine system at the science center. misc. past posts mentioning smp and/or compare&swap
https://www.garlic.com/~lynn/subtopic.html#smp

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

What is your definition of "Information"?

Refed: **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 27, 2008
Subject: What is your definition of "Information"?
Blog: Information Storage
old definition we used from 15-20 yrs ago:
metadata of data is information
metadata of information is knowledge
metadata of knowledge is wisdom
metadata of wisdom is enlightenment

....

we had looked at copyrighting the term business science in the early 90s, somewhat in conjunction with this graph ... old post from 1995 archived here ...
https://www.garlic.com/~lynn/95.html#8aa

subsequently there have been more simplified versions of the above diagram

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

subprime write-down sweepstakes

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: subprime write-down sweepstakes
Newsgroups: alt.folklore.computers
Date: May 29, 11:59 am
lynn wrote:
one of the business shows commented on Bernanke's statements today saying that he has gotten quite repetitive about new regulations (like Basel2) fixing the situation.

they then went on to say that american bankers are the most inventive in the world ... that they have managed to totally screwup the system at least once a decade regardless of the measures put in place attempting to prevent it.


re:
https://www.garlic.com/~lynn/2008h.html#90 subprime write-down sweepstakes

Did Wall Street Wreck The Economy?, Congress, regulators start to connect the dots
http://www.consumeraffairs.com/news04/2008/05/wall_street.html

from above:
If so, that thread may lead to Wall Street. Increasingly, everyone from lawmakers to industry insiders has been connecting the dots to reveal how some investors' actions have had huge repercussions on the economy.

... snip ...

as mentioned previously .... toxic CDOs were used two decades ago in the S&L crisis to obfuscate the underlying value ... and in this decade-old, long-winded post .... there is discussion about the need for visibility into CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Mastering the Dynamics of Innovation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: May 30, 2008
Subject: Mastering the Dynamics of Innovation
Blog: Change Management
Many times, existing processes represent organization, technology, and/or business trade-offs that were made at some point in time. such trade-offs become institutionalized ... and there is frequently a failure to recognize, when the environment (that the original trade-off assumptions were based on) has changed, that the resulting trade-off decisions are no longer valid.

a truly off-the-wall way of viewing this is via myers-briggs personality traits ... where the majority of the population tends to be personality types that operate on previous experience ... and only an extremely small percentage of the population routinely operates based on analytical reasoning. it is much more difficult for experiential personality types to operate out of the box and routinely view and operate purely analytically. It is much easier for the analytically oriented to recognize that the basis for the original trade-off decisions has totally changed.

this also can show up as generational issues where the young experiential personality types (that still need to be molded by experience) tend to be much more open to different ways of doing things (but that tends to gradually change as they gain experience). Analytically oriented personalities tend to live their whole life questioning rules and authorities (not just in youth).

I've periodically commented that from an evolutionary aspect, in a static, stable environment ... constantly having to analyze and figure out the reason why things are done represents a duplication of effort (effectively a waste of energy). However, in a changing environment, it can represent a significantly more efficient means of adapting to change (compared to an experimental trial-and-error approach). One possible study might be whether there are shifts in the ratio of different personality types based on whether the environment is static or rapidly changing.

Circa 1990, one of the large US auto manufacturing companies had a C4 effort that was to look at radically changing how they did business, and they invited some number of technology vendors to participate. One of their observations was that US industry was (still) on a 7-8 yr new product cycle while foreign competition had radically reduced the elapsed time to turn out new products. Being faster also makes it easier to address all sorts of other issues (including quality). Introducing change is easier if it is done in a new cycle ... and if the new cycles are happening faster and much more frequently ... it promotes agility/change.

... aka being able to operate Boyd's OODA-loop faster than the competition. lots of past posts mentioning Boyd and/or OODA-loops
https://www.garlic.com/~lynn/subboyd.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: Sun, 1 Jun 2008 14:12:40 -0700 (PDT)
Subject: A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
http://bits.blogs.nytimes.com/2008/05/31/a-tribute-to-jim-gray-sometimes-nice-guys-do-finish-first/

from above:
During the 1970s and '80s at I.B.M. and Tandem Computer, he helped lead the creation of modern database and transaction processing technologies that today underlie all electronic commerce and more generally, the organization of digital information. Yet, for all of his impact on the world, Jim was both remarkably low-key and approachable. He was always willing to take time to explain technical concepts and offer independent perspective on various issues in the computer industry

... snip ...

Tribute to Honor Jim Gray
https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Mainframe Project management

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 1, 2008
Subject: Mainframe Project management
Blog: Computers and Software
Mainframe efforts have tended to be much more business critical and therefore have tended to spend a lot more time making sure that there are no problems and/or, if problems might possibly show up, providing functions that anticipate and can handle them.

As part of the support for the internet payment gateway for what is now referred to as electronic commerce (and the original SOA) ... the initial implementation involved high quality code development and testing.
https://www.garlic.com/~lynn/subnetwork.html#gateway

However, we have often commented that to take a traditional application and turn it into a business critical service can require 4-10 times the base development effort. As part of the subsequent payment gateway effort we developed a failure matrix ... all possible ways that we could think of that a failure might occur involving the payment gateway ... and all the possible states that a failure could occur in. It was then required that the payment gateway demonstrate that it could automatically handle/recover from all possible failure modes in all possible states ... and/or demonstrate that the problem could be isolated and identified within a very few minutes.

A much earlier example ... as part of turning out the mainframe resource management product,
https://www.garlic.com/~lynn/subtopic.html#fairshare

the final phase involved a set of over 2000 validation and calibration benchmarks that took over 3 months elapsed time to run. This included a sophisticated analytical system performance model which would predict how the system was expected to operate under various conditions (workload and configuration) ... automatically configure for that benchmark, automatically run the benchmark, and then validate whether the results matched the prediction.
https://www.garlic.com/~lynn/submain.html#benchmark

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

American Airlines

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sun, 1 Jun 2008 20:29:51 -0700 (PDT)
Subject: Re: American Airlines
hancock4 writes:
Question:

SABRE took years to develop. Did it take equally long to develop systems for competing airlines? For those using IBM platforms, could they use any code or designs for SABRE or were they propriety to American Airlines?


re:
https://www.garlic.com/~lynn/2008i.html#19 American Airlines

there was the airline control program (ACP), the (vendor) operating system used for many of these online systems .... wiki page
https://en.wikipedia.org/wiki/Airlines_Control_Program

there was a long period of evolution of the ACP operating system as well as of the customer applications built on it. In some sense SABRE is a brand for a whole bunch of online applications that were (initially) built on ACP. Some number of the other airline res "systems" were also whole sets of applications built using the ACP operating system. Currently, some number of the applications have been migrated to other platforms.

circa 1980 or so ... there were some number of financial institutions using ACP for financial transactions ... which led to renaming ACP to TPF (transaction processing facility) ... wiki page
https://en.wikipedia.org/wiki/Z/TPF

from above:
Current users include Sabre (reservations), Amadeus (reservations), VISA Inc (authorizations), Holiday Inn (central reservations), CBOE (order routing), Singapore Airlines, KLM, Qantas, Amtrak, Marriott International , worldspan and the NYPD (911 system).

... snip ...

For some "transaction" drift ... yesterday, a tribute was held for Jim Gray
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First

Bruce Lindsay gave a great talk about Jim formalizing transactions and database management ... to provide sufficient integrity and reliability that they could be trusted in lieu of paper entries ... which was required to make things like online transaction processing possible (i.e. it was necessary to demonstrate high enough integrity and reliability that it would be trusted in place of paper and human/manual operations).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

American Airlines

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Sun, 1 Jun 2008 21:00:29 -0700 (PDT)
Subject: Re: American Airlines
On May 31, 7:42 am, wrote:
Don't forget that the first one is always the one that figures out everything that can't be done. The "can't be dones" take an enormous amount of calendar time to do. Sometimes it can take 2 or 3 software releases (in those days that was usually 2 years) to get it "right".

re:
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines

the reference to doing 10 impossible things
https://www.garlic.com/~lynn/2008h.html#61 Up, Up, ... and Gone?

mentions having to do a major paradigm change in how things were implemented. Some of the 10 impossible things existed because of heavy manual involvement in how the information was preprocessed for use by the system. Part of the major paradigm change involved effectively totally eliminating all that manual preprocessing ... making it all automated.

Some number of the 10 impossible things were also related to performance/thruput limitations. So part of the paradigm change was to make some things run 100 times faster. This allowed 3-4 separate queries to be collapsed into a single operation, improving human factors (since it was now possible to do a lot more, a lot of the back&forth interaction with an agent could all be automated).

A combination of the human involvement in data preprocessing and the performance limitations resulted in a limitation on the number of flight segments that could be considered. The change in paradigm resulted in all flight segments in the world being easily handled.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: Sun, 1 Jun 2008 21:15:55 -0700 (PDT)
Subject: Re: A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
re:
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#34 American Airlines

lynn wrote:

https://web.archive.org/web/20080616153833/http://www.eecs.berkeley.edu/IPRO/JimGrayTribute/pressrelease.html


from above:
Gray is known for his groundbreaking work as a programmer, database expert and Microsoft engineer. Gray's work helped make possible such technologies as the cash machine, ecommerce, online ticketing, and deep databases like Google. In 1998, he received the ACM A.M. Turing Award, the most prestigious honor in computer science. He was appointed an IEEE Fellow in 1982, and also received IEEE Charles Babbage Award.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

American Airlines

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 2 Jun 2008 07:54:43 -0700 (PDT)
Subject: Re: American Airlines
Warren Brown wrote:
Actually, IBM built special hardware for this type of software to run on.

re:
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 American Airlines

I was recently reviewing some old email exchanges with Jim Gray from the late 70s and there was one discussing the 3830 (disk controller) ACP (lock) RPQ ... which basically provided a logical locking function in the controller ... for coordinating multiple loosely-coupled (i.e. mainframe for cluster) processors.

the old research bldg. 28, ... where the original relational/sql work was done
https://www.garlic.com/~lynn/submain.html#systemr

was just across the street from bldg 14 (disk engineering lab) and bldg. 15 (disk product test lab) ... and they let me play disk engineer over there
https://www.garlic.com/~lynn/subtopic.html#disk

During Jim's tribute, people were asked to come up and tell stories. The story I told was that Jim and I used to have friday evening sessions at some of the local establishments in the area (when eric's deli opened across from the plant site, they let us use the back room and gave us pitchers of anchor steam at half price). One Friday evening we were discussing what kind of "silver bullet" application we could deploy that would entice more of the corporation (especially executives) to actually use computers (primarily online vm370), and we came up with the online telephone book. However, one of the requirements was that Jim would implement his half in 8hrs and I would implement my half in 8hrs.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

American Airlines

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: 2 Jun 2008 10:15:34 -0700
Subject: Re: American Airlines
Eric Chevalier wrote:
However, we were using 2314s attached to these boxes, and I believe there _was_ a hardware RPQ on the drives. Called something like "Airlines Control Buffer", I _think_ the feature allowed the drive to disconnect from the channel while doing a seek. Whatever the details, it was something that became standard on later mainframe drives from IBM.

re:
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 American Airlines
https://www.garlic.com/~lynn/2008i.html#37 American Airlines

w/o the ACP RPQ, loosely-coupled operation required reserve/release commands ... which reserved the whole device for the duration of the i/o operation. Actually, a reserve could be issued and possibly multiple operations performed before issuing the release (traditional loosely-coupled operation ... locking out all other processors/channels in the complex).

since these were logical name locks, there was significant latitude in choosing lock names ... they could be very low level like a record name ... i.e. cchhr .... or something higher level like a PNR.

note that while ACP/TPF did a lot of work on loosely-coupled operation, it took them quite awhile to get around to doing tightly-coupled multiprocessor support. The result was quite a bit of consternation in the 3081 timeframe ... which originally wasn't going to have a single processor offering. One of the side-effects was that a whole bunch of changes went into vm370 for enhancing TPF thruput in a 3081 environment ... changes that tended to degrade thruput for all the non-TPF customers. Eventually, there was enough pressure that a 3083 (single processor) was offered ... primarily for ACP/TPF customers.

There was another technique for loosely-coupled operation ... originally developed for HONE (avoiding the performance impact of reserve/release but w/o the airlines controller RPQ). HONE was the world-wide, online (vm370-based) sales & marketing support system.
https://www.garlic.com/~lynn/subtopic.html#hone

The technique was basically a special CCW sequence that leveraged CKD search commands to simulate the semantics of the mainframe compare&swap instruction (but for DASD i/o operations). The US HONE datacenter provided possibly the largest single system image at the time (a combination of multiple loosely-coupled, tightly-coupled processor complexes) with load-balancing and fall-over across the complex. Later this was extended to geographic distances with a replicated center in Dallas and then a 3rd in Boulder.

There was then talks with the JES2 multi-access spool people about them using the same CCW technique in their loosely-coupled operation.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

American Airlines

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Mon, 2 Jun 2008 17:04:42 -0700 (PDT)
Subject: Re: American Airlines
John P. Baker wrote:
I seem to recall something called the "Limited Lock Facility (LLF)", which provided some specialized CCW support in the controller.

Was it developed for use in situation such as that described here?


re:
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 American Airlines
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#38 American Airlines

note by comparison, reserve will be a CCW that "locks" the whole device ... which typically will be followed by some sort of seek/search/read. That ends and the processor then operates/updates the data read and then writes it back ... finally releasing the device.

The other approach mentioned ... developed at HONE ... was a simulation of the multiprocessor compare&swap instruction using "search key & data equal" ... the data is read (w/o lock or reserve), a copy is made, and the update is applied. then a channel program with search key & data equal ... using the original read image .... is chained to a write of the updated data.
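
the logic of that sequence, sketched as an optimistic-update retry loop (illustration only ... the real thing is a CCW channel program against CKD DASD, serialized by the controller, not C code; the shared record here is just a struct in memory):

  #include <string.h>

  typedef struct { long balance; } record_t;

  static record_t shared_record;            /* stands in for the DASD record */

  /* the CCW version: "search key & data equal" on the original image,
     command-chained to the write of the updated image -- the write only
     happens if nobody else changed the record in the meantime           */
  static int write_if_unchanged(const record_t *before, const record_t *after)
  {
      if (memcmp(&shared_record, before, sizeof shared_record) != 0)
          return 0;                         /* someone else got there first */
      shared_record = *after;
      return 1;
  }

  void update_record(long delta)
  {
      record_t before, after;
      do {
          before = shared_record;           /* read, no reserve/lock */
          after  = before;
          after.balance += delta;
      } while (!write_if_unchanged(&before, &after));   /* retry on conflict */
  }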

the following from long ago and far away ...

Date: March 25, 1980
Subject: DASD sharing in ACP using the RPQ

On Monday I bumped into xxxx & yyyy. They were both interested in shared DASD for availability and load sharing. I mentioned the ACP RPQ which puts a lock manager in the microcode for the disk controller. They were real interested in that and so I began telephoning.

My first contact was xxxxx of GPD San Jose who wrote the microcode (nnn-nnnn). He explained that the RPQ "has a low profile" because it is not part of the IBM strategy and is inconsistent with things like string switching. The basic idea is that Lock commands have been added to the controller's repertoire of commands. One issues LOCK READ CCW pair and later issues WRITE UNLOCK CCW pair. If the lock fails the read fails and the CPU will poll for the lock later. xxx has documented all this in the technical report TR 02.859 "Limited Lock Facility in a DASD Control Unit" xxxxx, xxxxx, xxxxx (Oct. 1979).

xxx pointed me to xxx xxxxx at the IBM Tulsa branch office (nnn-nnnn). xxxx wrote the channel programs in ACP which use the RPQ. He said they lock at the record level, and that it works nicely. We also discussed restart. He said that the code to reconfigure after CPU or controller failure was not hard. For duplexed files they lock the primary if available, if not they lock the secondary. ACP allows only one lock at a time and most writes are not undone or redone at restart (most records are not "critical"). xxx said that their biggest problem was with on-line utilities. One which moves a volume from pack to pack added 50% to their total effort! xxx in turn pointed me to two of the architects.

xxxxxx at White Plains DPD (nnn-nnnn) knows all about ACP and promised to send me the documentation on the changes to ACP. He said the changes are now being integrated into the standard ACP system. He observed that there is little degradation with the RPQ and prefers it to the MP approach. He mentioned that there are about 65 ACP customers and over 100 ACP systems. xxxxx is also at White Plains (nnn-nnnn). He told me lots of numbers (I love numbers).

He described a 120 transaction/second system.

The database is spread over about 100 spindles.

Each transaction does 10 I/O.

10% of such I/O involve a lock or unlock command.

The average hold time of a lock is 100 ms.

1.7 lock requests wait per second.

That implies that 14% of transactions wait for a lock.

This is similar to the System R number that 10% of transactions wait.

ACP has deadlock avoidance (only hold one lock at a time).

There are 60 lock requests per second (and 60 unlocks) and so there are about 6 locks set at any instant.

This is not a heavy load on the lock managers (a controller is likely to have no locks set.)


... snip ... top of post, old email index

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: Tue, 3 Jun 2008 08:05:21 -0700 (PDT)
Subject: Re: A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
re:
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First

and a little related drift in this thread:
https://www.garlic.com/~lynn/2008i.html#37 American Airlines

another article

Tech luminaries honor database god Jim Gray
http://www.theregister.co.uk/2008/06/03/jim_gray_tribute/

from above:
"A lot of the core concepts that we take for granted in the database industry - and even more broadly in the computer industry - are concepts that Jim helped to create," Vaskevitch says, "But I really don't think that's his main contribution."

... snip ...

and some old email references when Jim was leaving for Tandem and trying to hand off some number of responsibilities to me:
https://www.garlic.com/~lynn/2007.html#email801006
https://www.garlic.com/~lynn/2007.html#email801016

thread from last year on Jim having gone missing:
https://www.garlic.com/~lynn/2007d.html#4 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#6 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#8 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007d.html#33 Jim Gray Is Missing
https://www.garlic.com/~lynn/2007g.html#28 Jim Gray Is Missing

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

American Airlines

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: Tue, 3 Jun 2008 09:23:19 -0700 (PDT)
Subject: Re: American Airlines
Shmuel Metz , Seymour J. wrote:
More like a precursor to the cache on a 3880-12.

splitting the difference :-)
3880-11 was "ironwood" ... 8mbyte 4k, "page" record cache
3880-13 was "sheriff" ... 8mbyte, full-track cache


later they were both upgraded to 32mbyte as -21 & -23

old post with some product "code" names
https://www.garlic.com/~lynn/2007e.html#38 FBA rant

there was some early -13 (& -23) literature showing a 90 percent cache hit rate. i pointed out that the example was actually a 3880 with 10 records per track being read sequentially. the first record read of a track would miss and bring in the whole track, and then the subsequent 9 reads would all be hits. I raised the issue that if the application were to do full-track buffer reads ... the same sequential read pattern would drop to a zero percent hit rate.
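
back-of-envelope on that example (just restating the arithmetic): reading 10 records per track one record at a time, the first read of each track misses and stages the whole track, the next 9 all hit ... 9/10 = 90 percent hit rate; if the application instead does one full-track read per track, every read is the first touch of its track ... 0 hits out of 1 per track, i.e. a zero percent hit rate, with essentially the same underlying disk activity.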

past posts in this thread:
https://www.garlic.com/~lynn/2008i.html#19 American Airlines
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#35 American Airlines
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#38 American Airlines
https://www.garlic.com/~lynn/2008i.html#39 American Airlines

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Security Breaches

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 3, 2008
Subject: Security Breaches
Blog: Information Security
We had been called in to help word smith the cal. state electronic signature (and later federal) legislation. past refs
https://www.garlic.com/~lynn/subpubkey.html#signature

Some of the involved organizations were also involved in privacy issues and had done in-depth consumer surveys and found that the two major issues were

1) identity theft ... mostly account fraud (fraudulent transactions against existing accounts) affecting most people; stats have been that upwards of 70 percent of the incidents involved insiders

2) denial of service ... institutions using personal information to the detriment of the individual

because so little attention was being paid to the root causes behind these activities, it became a major motivation for the cal. state breach notification legislation (and subsequent similar legislation in other states) ... hoping that the mandated breach notification and associated publicity would start to result in something being done about the problems.

Earlier we had been asked to consult with a small client/server startup that wanted to do payment transactions on their server; they had this technology called SSL they had invented and wanted to use (the result is now frequently referred to as electronic commerce). Some number of past posts referring to the activity
https://www.garlic.com/~lynn/subnetwork.html#gateway

We then got roped into working on the x9.59 financial transaction standard in the x9a10 financial standard working group. In the mid-90s, X9A10 had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments ... misc. past references
https://www.garlic.com/~lynn/x959.html#x959

part of the activity involved in-depth, end-to-end threat and vulnerability studies. this included focusing on the types of problems that have represented the majority of the breaches reported in the news over the past several years.

There were (at least) two characteristics

1) in the current paradigm, account information, including previous transaction information, represents diametrically opposing security requirements. on one side, the information has to be kept completely confidential and never divulged to anybody. on the other side, the information has to be readily available for numerous business processes in order to execute transactions (like presenting/divulging information at point of sale).

2) the value of the account related information in (merchant) transaction logs can be 100 times more valuable to the crooks than to the merchant. Basically, to the merchant, the information is worth some part of the profit off the transaction. To the crook, the information can be worth the credit limit and/or account balance for the related account. As a result, the crooks may be able to afford to spend 100 times as much attacking the system as the merchants can afford to spend (on security) defending it.

So, one of the parts of the x9.59 financial standard was to tweak the paradigm and eliminate the value of the information to the crooks and therefore also the necessity to hide the information at all (it didn't do anything to prevent what has been the majority of the breaches in the past several years ... it just eliminated any of the fraud that could occur from those breaches ... and therefore any threat the breach would represent).

misc. past posts mentioning fraud, exploits, threats, vulnerabilities, and/or risk
https://www.garlic.com/~lynn/subintegrity.html#fruad

the major use of SSL in the world today is this thing we worked on now commonly referred to as electronic commerce ... lots of past references to various aspects of SSL
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

where SSL is primarily being used to hide the account and transaction information. Since the x9.59 financial standard eliminates the need to hide that information (as a countermeasure to fraudulent financial transactions) ... it not only eliminates the threat from security/data breaches but also eliminates the major use of SSL in the world today

some late breaking news:

Researchers say notification laws not lowering ID theft
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9093659
Researchers say notification laws not lowering ID theft
http://www.networkworld.com/news/2008/070308-citibank-card-scammer-sweatshirt.html
Researchers Say Notification Laws Not Lowering ID Theft
http://news.yahoo.com/s/pcworld/146738
Researchers say notification laws not lowering ID theft
http://www.infoworld.com/article/08/06/05/Notification-laws-not-lowering-ID-theft_1.html
Researchers Say Notification Laws Not Lowering ID Theft
http://www.pcworld.com/businesscenter/article/146738/researchers_say_notification_laws_not_lowering_id_theft.html

with regard to the paradigm involving transaction information ... on one hand it can never be exposed or made available (to anyone) and on the other hand, by definition, the transaction information has to be available in numerous business processes as part of performing transactions.

we've tried making the comment that (in the current paradigm) even if the world were buried under miles of (information hiding) encryption, it still wouldn't prevent information leakage.

we've also tried, in detailed discussions, using the "naked transaction" metaphor ...
https://www.garlic.com/~lynn/subintegrity.html#payments

a military analogy is a position in an open valley with no cover and the enemy holding all the high ground on the surrounding hills (or like shooting fish in a barrel).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

IT Security Statistics

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 3, 2008
Subject: IT Security Statistics
Blog: Information Security
A couple years ago ... I worked on classification of the reported exploits/vulnerabilities. The problem was that, at the time, the descriptions were quite free-form and it took a bit of analysis to try and pry out information for classification. In the past year or so, there has been some effort to add categorizing information to the descriptions. I also wanted to use the resulting classification information in updating my merged security taxonomy and glossary.
https://www.garlic.com/~lynn/index.html#glosnote

Old post referencing attempting classification of CVE entries
https://www.garlic.com/~lynn/2004e.html#43

Also, some number of the more recent classification activities have tended to corroborate my earlier efforts.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Are multicore processors driving application developers to explore multithreaded programming options?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 3, 2008
Subject: Are multicore processors driving application developers to explore multithreaded programming options?
Blog: Software Development
Charlie had invented the compare&swap instruction when working on fine-grain multiprocessor locking for cp67 on 360/67. Trying to get the compare&swap instruction added to 370 machines was met with some resistance ... the claim being that the test&set instruction was sufficient for multiprocessor kernel operations.

The challenge was that to get the compare&swap instruction added to 370, a non-kernel, multiprocessor-specific use had to be created. The result was a set of examples for multithreaded application operation coordination that avoided the overhead of kernel calls.

compare&swap was used in the original relational/sql implementation, system/r ... for multithreaded operation ... independent of whether running on a uniprocessor or multiprocessor. By the mid-80s, compare&swap (or a similar instruction) was available on many processors and in use by major database implementations for multithreaded operation ... independent of whether running on a single processor or multiprocessor machine.
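
not the 370 assembler examples from the principles of operation appendix (referenced further below), but a minimal C11 sketch of the same idea ... coordinating a shared counter between threads with a compare&swap style retry loop instead of a kernel lock (the atomic/thread calls are just standard C11, nothing from the original writeup; pthreads could substitute where threads.h isn't available):

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static _Atomic long counter = 0;

/* add a value to a shared counter with a compare&swap retry loop --
   no kernel call, no lock; if another thread updated the counter
   between the load and the swap, the swap fails and we retry. */
static void atomic_add(long delta)
{
    long old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + delta))
        ;   /* on failure 'old' is reloaded with the current value */
}

static int worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        atomic_add(1);
    return 0;
}

int main(void)
{
    thrd_t t1, t2;
    thrd_create(&t1, worker, NULL);
    thrd_create(&t2, worker, NULL);
    thrd_join(t1, NULL);
    thrd_join(t2, NULL);
    printf("counter = %ld\n", atomic_load(&counter));   /* 200000 */
    return 0;
}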

In the past, there has been increasing processor performance in both single processor as well as multiprocessor hardware. Recently that has changed, with little advance in single processor performance ... and lots of vendors moving to multicore as standard ... where additional throughput will only be coming via concurrent/multithreaded operation.

There have been numerous observations for the past year or two that parallel programming has been the "holy grail" for the past twenty years ... with little or no practical advance in what the majority of programmers are capable of doing (with respect to parallel/concurrent/multithreaded programming).

lots of past posts mentioning multiprocessor operation and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

misc. past posts mentioning original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

the original (compare&swap writeup) is almost 40yrs old now ... but here are some of the examples still in a recent principles of operation (and over the yrs have been picked up by a large number of different machines, systems, and applications) ... note "multiprogramming" is mainframe for multithreaded
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/A.6?DT=20040504121320

btw, cp67 was a morph of the original virtual machine implementation, cp40, from the custom-modified 360/40 to the 360/67 that came standard with virtual memory hardware

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

ARPANet architect: bring "fairness" to traffic management

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: Wed, 4 Jun 2008 10:17:01 -0700 (PDT)
Subject: ARPANet architect: bring "fairness" to traffic management
ARPANet architect: bring "fairness" to traffic management
http://arstechnica.com/news.ars/post/20080604-arpanet-architect-bring-fairness-to-traffic-management.html

can you say the "wheeler scheduler"
https://www.garlic.com/~lynn/subtopic.html#fairshare

one of the things we had done was rate-based flow control and a dynamic adaptive high-speed backbone (and the letter from nsf said that what we already had running was at least five years ahead of all nsfnet backbone bids ... that is now 20yrs ago)
https://www.garlic.com/~lynn/subnetwork.html#nsfnet

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Definition of file spec in commands

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: Wed, 4 Jun 2008 10:54:42 -0700 (PDT)
Subject: Re: Definition of file spec in commands
On Jun 4, 6:39 am, greymaus wrote:
On a dual-cpu machine, or higher, would it be possible to disengage one cpu and have it running a miniprogram, watching what was happening on the other, maybe reporting keypresses (it has happened with bots on single-cpu machines, time-slicing), the other cpu running stdio, stdout from the bios (running, say, windows, as far as the user was concerned) To use a USB thing to boot into, say, Knoppix (possible). Would the 'other side' survive a reboot?. Big thing with public machines (say, internet cafe machines (but most of those would be well watched, and rebooting would be a 'no-no')) is to reboot them. One would need access to a machine which one could reboot, and that was intended to be rebooted.

note a big part of the MIT Project Athena was basically supporting the internet cafe type of operation ... except it was "terminal rooms" around the MIT campus ... where the "terminals" had been replaced by unix workstations. The idea was that they could be rebooted and have no personality (aka "thin" client) and get all the individual-specific personality off the network. In support of this was all sorts of distributed technology ... including kerberos, which has been adopted by a large number of different platforms as the basic distributed authentication mechanism (even hidden inside windows operation).

misc. past posts mentioning "naked" public key kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Seeking (former) Adventurers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Seeking (former) Adventurers
Date: Wed, 04 Jun 2008 16:37:28 -0400
Newsgroups: bit.listserv.vmesa-l
following are a couple of emails from '78 regarding getting a copy of adventure for vm370/cms
https://www.garlic.com/~lynn/2006y.html#email780405
https://www.garlic.com/~lynn/2006y.html#email780405b

in this post
https://www.garlic.com/~lynn/2006y.html#18 The History of Computer Role-Playing Games
additional followup in this post
https://www.garlic.com/~lynn/2006y.html#19 The History of Computer Role-Playing Games

another old adventure email reference
https://www.garlic.com/~lynn/2007o.html#email790912

in this post
https://www.garlic.com/~lynn/2007o.html#15 "Atuan" - Colossal Cave in APL?

In the above, there was some amount of trouble caused by my making adventure (executable) available internally (via the internal network). I had made an offer that if anybody finished the game (getting all the points), I would send them a copy of the (fortran) source. At least one of the people at the STL lab converted the fortran source to PLI and added a bunch of additional rooms/pts.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Anyone know of some good internet Listserv's?

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 5, 2008
Subject: Anyone know of some good internet Listserv's?
Blog: Blogging
I got blamed for mailing lists and online computer conferencing on the internal network in the late 70s and early 80s ... i.e. the internal network was larger than the arpanet/internet from just about the beginning until sometime mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

somewhat as a result, there was official corporate support, which led to "TOOLSRUN" that had both a usenet mode of operation as well as a mailing list mode of operation.

later there was also extensive corporate support for educational network in both the US (bitnet) and europe (earn)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

the example of toolsrun on the internal network somewhat promoted the creation and evolution of LISTSERV on bitnet.

That has since greatly evolved, been ported to a large number of different platforms and has a corporation marketing it ... history of LISTSERV from the vendor's website
http://www.lsoft.com/products/listserv-history.asp

This URL has catalog of LISTSERV lists
http://www.lsoft.com/CataList.html

This page:
http://catalist.lsoft.com/resources/listserv-community.asp?a=4

mentions 51,097 "public" mailing lists and 318,413 "local" mailing lists.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Can I ask you to list the HPC/SC (i.e. th High performace computers) which are dedicated to a problem?

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 5, 2008
Subject: Can I ask you to list the HPC/SC (i.e. th High performace computers) which are dedicated to a problem?
Blog: Computers and Software
A lot of grid, blade and other (massively parallel) technologies evolved for numerically intensive applications like high energy physics. In the past several years, you see vendors trying to move the products into more commercial areas. Early adopters have been in the financial industry. Recent x-over article
http://www.gridtoday.com/grid/2341621.html

from above ...
JPMorgan and Citigroup attempt to increase flexibility and save money by establishing division- and company-wide services-based grids. Managing one larger, more inclusive grid is cheaper than managing 10 line-of-business clusters, and the shared services model allows for business applications to join computing applications on the high-performance infrastructure.

... snip ...

One of the issues is that the management for these large resource intensive applications has a lot of similarities to mainframe batch "job" scheduling ... reserving the resources necessary for efficient execution. An example within the GRID community
http://www.cs.wisc.edu/condor/

top500 by industry, w/financial largest category after "not specified"
http://www.top500.org/stats/list/30/apparea

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers
Date: Fri, 06 Jun 2008 05:20:05 -0400
re:
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First

at '91 SIGOPS (SOSP13, Oct 13-16) held at Asilomar, Jim and I had a running argument about whether "availability" required proprietary hardware ... which spilled over into the festivities at the SIGOPS night Monterey aquarium session ... past references to the "argument"
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2005d.html#2 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2006o.html#24 computational model of transactions

Anne and I were in the middle of our ha/cmp product with "commodity" hardware
https://www.garlic.com/~lynn/subtopic.html#hacmp
as well as our "cluster" scale-up activities ... old email references:
https://www.garlic.com/~lynn/lhwemail.html#medusa
and only a dozen weeks away from the meeting referenced in this post:
https://www.garlic.com/~lynn/95.html#13

Jim spent nearly a decade with proprietary "availability" hardware ... first at Tandem and then having moved on to DEC (vax/cluster) ... he was there until the DEC database group was sold off to Oracle in '94 ... reference here
https://en.wikipedia.org/wiki/Oracle_Rdb

As per previous references ... it was only fitting that later he was up on the stage espousing availability and scale-up for Microsoft clusters.

podcast reference for the tribute:

tribute also by ACM SIGMOD
https://web.archive.org/web/20111118062042/http://www.sigmod.org/publications/sigmod-record/0806

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers
Date: Fri, 06 Jun 2008 05:31:26 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
podcast reference for the tribute:
https://web.archive.org/web/20080604010939/http://webcast.berkeley.edu/event_details.php?webcastid=23082
https://web.archive.org/web/20080604072804/http://webcast.berkeley.edu/event_details.php?webcastid=23083
https://web.archive.org/web/20080604072809/http://webcast.berkeley.edu/event_details.php?webcastid=23087
https://web.archive.org/web/20080604072815/http://webcast.berkeley.edu/event_details.php?webcastid=23088

tribute also by ACM SIGMOD
https://web.archive.org/web/20111118062042/http://www.sigmod.org/publications/sigmod-record/0806


re:
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation

oh, and my little short story at the tribute is 1hr 14min in (near the end of) the 23083 podcast

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Fri, 06 Jun 2008 10:58:40 -0400
Eric Smith <eric@brouhaha.com> writes:
HP had a complete architecture defined before they got Intel involved. The former HP employees I know who were involved claim that Intel caused some good ideas from the original architecture to be thrown away, and some not-so-good things to be added. However, they might be biased.

past posts regarding architect responsible for 3033 dual-address space and Itanium:
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2006.html#39 What happens if CR's are directly changed?
https://www.garlic.com/~lynn/2007p.html#21 Newsweek article--baby boomers and computers
https://www.garlic.com/~lynn/2008g.html#60 Different Implementations of VLIW

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Digital cash is the future?

Refed: **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 6, 2008
Subject: Digital cash is the future?
Blog: Information Security
Some related are these two answers with regard to security breaches
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/243464-24494306
https://www.garlic.com/~lynn/2008i.html#42 Security Breaches
and credit card fraud
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/235221-7152372
https://www.garlic.com/~lynn/2008i.html#24 Credit Card Fraud

which discusses some of the vulnerabilities are characteristic of the underlying paradigm ... which require fundamental changes ... not just papering over.

We had been brought in to consult with a small client/server startup that had invented this technology called SSL that they wanted to use for payment transactions on their server ... the result is now frequently referred to as electronic commerce.

There have been several digital cash efforts in the past ... all of them running into various kinds of problems. One example was Digicash ... and as part of the liquidation, we were brought in to evaluate various of the assets.

Another was Mondex. As part of a potential move of Mondex into the states, we had been asked to design, spec, and cost a system for country-wide deployment.

It turned out that many of these digital cash efforts were some flavor of "stored value" ... and were significantly motivated by the digital cash operator holding the "float" on the value in the infrastructure. During the height of these efforts more than a decade ago in Europe ... the EU central banks issued statements that the operators would have to start paying interest on the value in the accounts (once past the startup phase). That statement significantly reduced the interest (slight pun, i.e. the expected float disappeared) in many of the efforts.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Trusted (mainframe) online transactions

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Trusted (mainframe) online transactions
Newsgroups: bit.listserv.ibm-main
Date: Fri, 06 Jun 2008 13:48:18 -0400
lynn writes:
During Jim's tribute, people were asked to come up and tell stories. The story I told was that Jim and I used to have friday evening sessions at some of the local establishments in the area (when eric's deli opened across from the plant site, they let us use the back room and gave us pitchers of anchor steam at half price). One Friday evening we were discussing what kind of "silver bullet" application we could deploy that would entice more of the corporation (especially executives) to actually use computers (primarily online vm370) and we came up with the online telephone book. However, one of the requirements was that Jim would implement his half in 8hrs and I would implement my half in 8hrs.

re:
https://www.garlic.com/~lynn/2008i.html#37

a couple recent posts referencing podcast files of the tribute
https://www.garlic.com/~lynn/2008i.html#50
https://www.garlic.com/~lynn/2008i.html#51

the first presentation in the technical sessions was by Bruce Lindsay talking about Jim's days at IBM San Jose research and working on the original relational/sql implementation ... system/r ... various past posts
https://www.garlic.com/~lynn/submain.html#systemr

a big part of Bruce's presentation was Jim's formalization of transaction semantics and database operation that turned out to be the critical enabler for online transactions (being trusted and able to replace manual/paper processes).

... oh and my remembrance story (above reference) is 1hr 14mins into the technical session podcast that starts with Bruce's presentation.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Is data classification the right approach to pursue a risk based information security program?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 6, 2008
Subject: Is data classification the right approach to pursue a risk based information security program?
Blog: Information Security
data classification has most frequently been associated with disclosure countermeasures.

a risk-based information security program would really involve detailed threat & vulnerability analysis ... decade-old post re the thread between risk management and information security
https://www.garlic.com/~lynn/aepay3.htm#riskm

then, if the threat/vulnerability is information disclosure ... a security proportional to risk analysis can be performed ... then disclosure countermeasures proportional to risk can be specified and the data may be given classification corresponding to the necessary disclosure countermeasures.

there is the security acronym PAIN
P ... privacy (or sometimes CAIN and confidentiality)
A ... authentication
I ... integrity
N ... non-repudiation

however, in this answer related to security breaches ... a solution is discussed which effectively eliminates the requirement for privacy/confidentiality ... with the application of strong authentication and integrity (eliminating any requirement to prevent information disclosure)
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/243464-24494306
https://www.garlic.com/~lynn/2008i.html#42 Security Breaches

One of the other things we had done was co-author the financial industry privacy standard (x9.99) ... part of which involved studying privacy regulations in other countries ... as well as meeting with some of the HIPAA people (looking at situations where medical information can leak from the financial side ... like a financial statement listing a specific medical procedure or treatment).

We also did a different kind of classification for one of the financial sectors ... asserting that most data classification approaches have simplified information to the point where it involves just the degree of protection. We asserted that potentially much better countermeasures might be achieved if the original threat/vulnerability assessment was retained for each piece of information ... traditional disclosure countermeasures tend to be limited to the degree that information is hidden. Knowing the actual threat/vulnerability for each piece of information could result in much better countermeasures.

As an example, we would point to what we did in the x9.59 financial standard where we eliminated the threat/vulnerability from the majority of breaches that have been in the news ... x9.59 didn't address preventing the breaches ... x9.59 eliminated the ability of attackers to use the information for fraudulent transactions.
https://www.garlic.com/~lynn/x959.html#x959

a little x-over from question about definition of risk assessment vis-a-vis threat assessment
http://www.linkedin.com/answers/finance-accounting/risk-management/FIN_RMG/247411-23329445
https://www.garlic.com/~lynn/2008i.html#60

taken from my merged security taxonomy & glossary
https://www.garlic.com/~lynn/index.html#glosnote

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The Price Of Oil --- going beyong US$130 a barrel

Refed: **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 7, 2008
Subject: The Price Of Oil --- going beyong US$130 a barrel
Blog: Energy and Development
Late last week, an economist was on a business show talking about the price of oil ... seemingly having difficulty focusing on the changing landscape caused by globalization vis-a-vis traditional domestic economic price & demand forces. It seems like past experience won out, and so he ended with the observation that increasing prices will dampen demand, which will result in prices coming back down.

This somewhat ignores the new dynamics that global demand has increased significantly and the value of the dollar has fallen. Europe could be paying the equivalent of $100/barrel; dollar declines; Europe continues to pay the same per barrel (in Euros) ... but the US now has to pay $150/barrel ... just to stay even/compete with the Europeans (effectively the price in Euros hasn't changed and so there isn't any corresponding dampening effect on European demand).

Secondary effects would be that there is some additional price elasticity in Euros ... i.e. Europeans could afford to increase their bid for the scarce resource by say 20 percent (which would translate into $180/barrel in dollars); Europeans would only see a 20 percent increase in price while the US could see an overall 80 percent increase in price (this comparison applies to several world economies, not just Europe).
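
a back-of-envelope sketch of that currency arithmetic (the numbers are just the illustrative ones above, not market data):

#include <stdio.h>

/* oil is priced in dollars; if the euro price stays flat while the
   dollar falls, the dollar price rises for the US even though European
   demand sees little price signal. illustrative numbers only. */
int main(void)
{
    double usd_before = 100.0;   /* $/barrel the US was paying            */
    double usd_after  = 150.0;   /* $/barrel after the dollar's decline,
                                    same euro price for Europe            */
    double eur_bid_up = 1.20;    /* Europeans bid 20% more (in euros)     */

    double usd_final = usd_after * eur_bid_up;          /* $180/barrel    */

    printf("European price increase (in euros): %+.0f%%\n",
           100.0 * (eur_bid_up - 1.0));                      /* +20% */
    printf("US price increase (in dollars):     %+.0f%%\n",
           100.0 * (usd_final - usd_before) / usd_before);   /* +80% */
    return 0;
}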

The increasing price and demand would normally result in increased production. However, there was a recent observation about the interaction between retiring baby boomers and oil production projects. The claim was that oil production projects take 7-8 yrs to complete, but with the advent of the retiring baby boomers, there is a shortage of experienced people for all the possible projects. The claim is that the number of projects to bring additional oil resources online is only about 50 percent of expected (because of the lack of skill and experience resulting from retiring baby boomers).

recent blog entry
https://www.garlic.com/~lynn/2007q.html#42

quoting a business news channel program that in 2005, oil projects were underfunded by 1/3rd, which leads to a 1m barrel/day production shortfall in 2010-2011. There is a 7-8yr lag to develop new oil production sources and 1/2 of the production project specialists reach retirement over the next 3 yrs (which is claimed to be the limiting factor on the number of active projects).

another related blog entry
https://www.garlic.com/~lynn/2008h.html#3

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Sun, 08 Jun 2008 13:46:23 -0400
"Joe Morris" <j.c.morris@verizon.net> writes:
Your description reminds me of the problem that Melinda Varian (at Princeton) reported encountering in the early MP support for VM: some of her users were reporting that calculations were randomly inconsistent. It wasn't a hardware problem, but instead (IIRC of course) was a bug in the fast-path dispatcher when a CPU switch was executed: the floating point registers on the new CPU weren't reloaded. She used this incident as one of her arguments about the absurdity of IBM's OCO (Object Code Only -- i.e., no source) policy: she found it only by carefully desk-checking the dispatcher code.

My own PPOE had one of the hardware funnies on an early 3031: a no-charge special feature that we called "Branch Maybe".


I had created the "fastpaths" (including fast redispatch) ... originally in cp67 ... they initially got dropped in the morph to vm370 ... I then provided new code that went out in something like release 1plc9 ... and then got to do all of it again with the release of my resource manager (in the release 3 timeframe).
https://www.garlic.com/~lynn/subtopic.html#fairshare

the original fast redispatch didn't reload floating point registers ... since the kernel didn't use them ... and it was assumed that the values hadn't been changed during the path thru the kernel.

I had also done much of the kernel multiprocessor support ... in fact, had it installed internally on the HONE system and some other installations. However, the decision to "ship" multiprocessor support in the product wasn't made until after I shipped the resource manager.

this created a number of problems.

the 23jun69 unbundling announcement started charging for software (somewhat in response to various litigation ... including by the gov) ... however the case was made that the kernel software should still be free.
https://www.garlic.com/~lynn/submain.html#unbundle

however, by the time of my resource manager ... things were starting to move in the direction of also charging for kernel software (might be considered in part motivated by clone mainframes) ... and my resource manager was chosen to be the guinea pig ... as a result, i got to spend a bunch of time with business & legal people working on policy for kernel software charging.

during the transition period ... one of the "policies" was that "free kernel" couldn't have as prerequisite "charged-for" kernel software. I had included quite a bit of multiprocessor kernel reorganization in the resource manager (w/o including any explicit multiprocessor support). The problem then became releasing "free kernel" multiprocessor support that required the customer to also "buy" the resource manager (to get the multiprocessor kernel reorganization). The eventual decision was made to remove about 90 percent of the code from the resource manager (w/o changing its price) and migrating it into the "free" kernel. Lots of posts mentioning multiprocessor support and/or compare&swap instruction
https://www.garlic.com/~lynn/subtopic.html#smp

The "big" problem in the OCO time-frame ... was that there was significant redo of the kernel multiprocessor support ... primarily oriented towards improving TPF performance on 3081 multiprocessor. some recent posts mentioning TPF:
https://www.garlic.com/~lynn/2008.html#29 Need Help filtering out sporge in comp.arch
https://www.garlic.com/~lynn/2008g.html#14 Was CMS multi-tasking?
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#38 American Airlines

the issue was that TPF didn't have multiprocessor support ... and the company had initially decided that there wouldn't be a non-multiprocessor 308x machine. That meant that to run TPF on a 308x machine ... it had to run under VM370. Furthermore, if TPF was the primary workload, only one of the processors would be busy (unless multiple TPF virtual machines were run). The issue was that the majority of virtual machine kernel execution (for a specific virtual machine) tended to be serialized. 100 percent busy on all processors was achieved by having multiple (single processor) virtual machines.

To improve TPF thruput there was rework of the kernel multiprocessor support to try and achieve overlapped emulation with TPF execution (i.e. like i/o emulation going on in parallel with TPF execution as opposed to strictly serialized). This included significant increase in cross-processor signaling, handshaking, and lock interference. As a result, nearly all the non-TPF multiprocessor customers saw 10-15 percent thruput degradation (for a small increase in overlapped execution and thruput for the TPF customers). I can also believe that in this rework, they flubbed the fast (re)dispatch.

Eventually, the company decided to announce & ship a single processor 308x machine ... the 3083 ... primarily for ACP/TPF customers. After some additional delay, TPF eventually got around to shipping its own multiprocessor support.

for a different transient failure story ... there was one I heard about the berkeley cdc6600 ... something like tuesday mornings at 10am the machine would have a thermal shutdown. Eventually they worked out that tuesday morning was when they watered the grass around the bldg and 10am was a class break that would result in a large number of toilets flushing in the restrooms. The combination resulted in loss of water pressure and the thermal overload.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

I am trying to find out how CPU burst time is caluculated based on which CPU scheduling algorithms are created ?

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 8, 2008
Subject: I am trying to find out how CPU burst time is caluculated based on which CPU scheduling algorithms are created ?
Blog: Computers and Software
As an undergraduate in the 60s, I created dynamic adaptive scheduling that was used in cp67 and that I later used in my resource manager product shipped for vm370. My dynamic adaptive scheduling supported a number of different resource allocation policies ... including "fair share". In the 70s, this was also frequently referred to as the "wheeler" scheduler.

The size of the CPU burst was adjustable and used to tailor responsiveness ... a trade-off between things like responsiveness of the task being scheduled, other tasks in the system, cache-hit ratio (execution continuing for a long enuf period to recover the cost of populating the processor cache) ... and whether or not preemption was active.

One of the things in the 60s & 70s was that there were frequently implementations that would confuse the size of the CPU burst with total resource consumption (one of the things that dynamic adaptive scheduling did was treat the size of the CPU burst and total resource consumption as independent optimizations).
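
for the question as literally asked ... the usual textbook technique for estimating the next CPU burst is an exponential average of the measured bursts (this is the generic approach, not a description of how the "wheeler" scheduler did it); a minimal sketch:

#include <stdio.h>

/* textbook exponential-average prediction of the next CPU burst:
       predicted(n+1) = alpha * measured(n) + (1 - alpha) * predicted(n)
   alpha near 1 tracks recent behavior, near 0 favors history.  note
   this only estimates burst length -- total resource consumption (e.g.
   for a fair-share policy) would be tracked as a separate quantity. */
int main(void)
{
    double alpha = 0.5;
    double predicted = 10.0;                       /* initial guess, ms */
    double measured[] = {6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0};
    int n = sizeof(measured) / sizeof(measured[0]);

    for (int i = 0; i < n; i++) {
        printf("burst %d: predicted %5.2f ms, measured %5.2f ms\n",
               i, predicted, measured[i]);
        predicted = alpha * measured[i] + (1.0 - alpha) * predicted;
    }
    return 0;
}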

Lots of past posts regarding dynamic adaptive scheduling
https://www.garlic.com/~lynn/subtopic.html#fairshare

recent post discussing some interaction between resource manager, multiprocessor support and charging for kernel software:
https://www.garlic.com/~lynn/2008i.html#57

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Microsoft versus Digital Equipment Corporation

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microsoft versus Digital Equipment Corporation
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 09 Jun 2008 10:13:37 -0400
krw <krw@att.bizzzzzzzzzz> writes:
Granted, and "granted". ;-) However, these ideas are "novel" when no one was doing it. Such things are no longer novel, so I maintain that the patent is "useless". Perhaps another "issue" with the patent system. ;-)

expired patents at least serve as published prior art ... there have been lots of examples of frivolous patent applications ... like patenting the nyse.

there has been some amount of patent activity as a defensive action ... in case claims about prior art haven't proved sufficient.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Threat assessment Versus Risk assessment

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 9, 2008
Subject: Threat assessment Versus Risk assessment
Blog: Risk Management
from my merged security taxonomy and glossary
https://www.garlic.com/~lynn/index.html#glosnote

one of the definitions of "risk" (from nist 800-60):
The level of impact on organizational operations (including mission, functions, image, or reputation), organizational assets, individuals, other organizations, or the Nation resulting from the operation of an information system given the potential impact of a threat and the likelihood of that threat occurring.

....

so a risk is the impact on the organization of a threat.

see taxonomy/glossary for more ...

definition of risk assessment (from nist 800-30):
The process of identifying the risks to system security and determining the probability of occurrence, the resulting impact, and additional safeguards that would mitigate this impact.

... and

threat assessment (from gao report 0691):
The identification and evaluation of adverse events that can harm or damage an asset. A threat assessment includes the probability of an event and the extent of its lethality. Threats may be present at the global, national, or local level.

....

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Could you please name sources of information you trust on RFID and/or other Wireless technologies?

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 9, 2008
Subject: Could you please name sources of information you trust on RFID and/or other Wireless technologies?
Blog: Wireless
A lot of RFID was originally targeted at inventory applications (i.e. EPC as an enhancement to the universal product code, laser-scanned barcodes) ... static data that could be easily harvested.

An issue arises when the same/similar technology is used for transaction operations ... and becomes vulnerable to eavesdropping and/or similar kinds of threats.

In the mid-90s, we had been working on chips for the x9.59 financial transaction standard
https://www.garlic.com/~lynn/x959.html#x959

one of the threats addressed by x9.59 was to make it immune from eavesdropping and harvesting attacks ... aka it didn't do anything to eliminate the attacks ... it just made the information useless to the crooks for the purpose of performing fraudulent transactions (I've also discussed this in various answers regarding eliminating the threat from breaches).

we had semi-facetiously been commenting that we would take a $500 milspec part, aggressively cost-reduce it by 2-3 orders of magnitude, while increasing its security.

we were approached by some of the transit operations with a challenge to also be able to implement it as a contactless chip ... being able to perform an x9.59 transaction within the transit gate power and timing requirements (i.e. the contactless chip obtaining power from the radio frequency and executing the operation in the small subsecond time constraints required for transit gate operation).

Some amount of this shows up in the AADS chip strawman patent portfolio
https://www.garlic.com/~lynn/x959.html#aads

in the 90s, one of the EPC (and aads chip strawman) issues was aggressive cost reduction. Basically, wafers have fixed manufacturing costs, so chip cost is related to the number of chips that can be obtained from a wafer. A limitation a decade ago was that the technology to cut (slice&dice) chips from a wafer was taking more (wafer) surface area than the (ever shrinking) chips themselves.

A lot of the current churn regarding RFID technologies is attempting to use it in applications requiring confidentiality and/or privacy (using a technology that can be easily eavesdropped for applications that have an eavesdropping vulnerability).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Ransomware

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@xxxx>
Date: Mon, 09 Jun 2008 18:07:12 -0400
Subject: Re: Ransomware
MailingList: crypto
John Ioannidis wrote:
This is no different than suffering a disk crash. That's what backups are for.

At Jim Gray's tribute on the 31st, Bruce Lindsay gave a talk about how Jim's formalization of transaction processing enabled online transactions ... i.e. trust in the integrity of transactions was needed as a prerequisite to moving from manual/paper processes.

In the early 90s, when the glasshouse and mainframes were seeing a significant downturn in their use ... with lots of stuff moving off to PCs, there was a study that half of the companies that had a disk failure involving (business) data that wasn't backed up ... filed for bankruptcy within 30 days. The issue was that the glasshouse tended to have all sorts of business processes to backup business critical data. Disk failures that lost stuff like billing data had significant impact on cash flow (there was also the case of a large telco that had a bug in its nightly backup; when the disk with customer billing data crashed ... they found that they didn't have valid backups).

Something similar also showed up in the Key Escrow meetings in the mid-90s with regard to business data that was normally kept in encrypted form ... i.e. it would require replicated key backup/storage in order to retrieve the data (countermeasure to single point of failure). Part of the downfall of key escrow was that it seemed to want all keys ... not just the infrastructure where a business needed to have replicated its own keys.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

DB2 25 anniversary

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: DB2 25 anniversary
Newsgroups: alt.folklore.computers
Date: Tue, 10 Jun 2008 09:33:50 -0400
IBM DB2's 25th Anniversary: Birth Of An Accidental Empire
http://www.informationweek.com/news/software/info_management/240003121

from above:
Saturday June 7 was the 25th anniversary of DB2. Ingres and Oracle preceded it as commercial products by a narrow margin, but the launch of DB2 on June 7, 1983, marked the birth of relational database as a cornerstone for the enterprise

... snip ...

some old posts mentioning original relational/sql implementation, System/R
https://www.garlic.com/~lynn/submain.html#systemr

System/R technology transfer was to endicott for sql/ds ... a few yrs ago, one of the people on the endicott end of the technology transfer had his 30yr corporate anniversary and I was asked to contribute. I put together a log of email exchange with him from the sql/ds technology transfer period.

this old post mentioning some people at a meeting in Ellison's conference room
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15

I've periodically mentioned that two of the people in the meeting show up in a small client/server startup responsible for something called the commerce server. we were called in to consult because they wanted to do payment transactions on the server. They had this technology called SSL they wanted to use and the result is now frequently referred to as electronic commerce ... some references
https://www.garlic.com/~lynn/subnetwork.html#gateway

one of the other people (mentioned in the same meeting) claimed to have handled most of the technology transfer from Endicott to STL for DB2.

for additional drift, some recent posts mentioning tribute to Jim Gray
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#54 Trusted (mainframe) online transactions

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

DB2 25 anniversary: Birth Of An Accidental Empire

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: DB2 25 anniversary: Birth Of An Accidental Empire
Date: Tue, 10 Jun 2008 09:42
Blog: The Greater IBM Connection
IBM DB2's 25th Anniversary: Birth Of An Accidental Empire
http://www.informationweek.com/news/software/info_management/240003121

from above:
Saturday June 7 was the 25th anniversary of DB2. Ingres and Oracle preceded it as commercial products by a narrow margin, but the launch of DB2 on June 7, 1983, marked the birth of relational database as a cornerstone for the enterprise

... snip ...

some old posts mentioning original relational/sql implementation, System/R
https://www.garlic.com/~lynn/submain.html#systemr

System/R technology transfer was to Endicott for SQL/DS ... a few yrs ago, one of the people on the Endicott end of the technology transfer had his 30yr corporate anniversary and I was asked to contribute. I put together a log of email exchange with him from the SQL/DS technology transfer period.

This old post mentioning some people at a meeting in Ellison's conference room
https://www.garlic.com/~lynn/95.html#13
https://www.garlic.com/~lynn/96.html#15

I've periodically mentioned that two of the people in the meeting show up in a small client/server startup responsible for something called the commerce server. we were called in to consult because they wanted to do payment transactions on the server. They had this technology called SSL they wanted to use and the result is now frequently referred to as electronic commerce ... some references
https://www.garlic.com/~lynn/subnetwork.html#gateway

one of the other people (mentioned in the same meeting) claimed to have handled most of the technology transfer from Endicott to STL for DB2.

for additional drift, some recent posts mentioning tribute to Jim Gray
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#54 Trusted (mainframe) online transactions

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Is the credit crunch a short term aberation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 10, 2008
Subject: Is the credit crunch a short term aberation
Blog: Risk Management
A few issues

CDOs were used two decades ago during the S&L crisis to obfuscate the underlying value and unload questionable properties.

In the past, loan originators had to pay some attention to loan quality. For the past several years, loan originators have used toxic CDOs to unload their loans w/o having to pay any attention to quality ... their only limitation was how many loans they could originate (w/o having to pay attention to quality).

Institutions buying toxic CDOs effectively also didn't pay any attention to quality; they could buy a toxic CDO, borrow against the full value, and buy another ... repeating this 40-50 times ... aka "leveraging" with a very small amount of actual capital. A couple percent fall in toxic CDO value totally wipes out the investment. This supposedly was a contributing factor in the crash of '29, where investors had as little as 20 percent (compared to the current situation with maybe 1-2 percent).
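
a toy version of that leverage arithmetic (purely illustrative numbers, not a model of any particular institution):

#include <stdio.h>

/* buy a toxic CDO, borrow against (nearly) its full value, buy another,
   repeat ... ending up with assets many times the original capital.
   with 40x leverage, a fall of a couple percent in asset value exceeds
   the original equity. */
int main(void)
{
    double capital  = 1.0;     /* original equity                        */
    double leverage = 40.0;    /* total assets held = 40x equity         */
    double assets   = capital * leverage;

    for (int pct = 1; pct <= 5; pct++) {
        double loss = assets * (pct / 100.0);
        printf("%d%% decline in value: loss = %.1f x capital%s\n",
               pct, loss / capital,
               loss >= capital ? "  (equity wiped out)" : "");
    }
    return 0;
}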

When the problems with toxic CDO value started to percolate up ... it became something like consumer product contamination ... toxic CDOs were too good at obfuscating the underlying value ... not all of the toxic CDOs had significant value problems ... but it was nearly impossible to tell which were good and which were bad ... so there was a rush to dump all toxic CDOs.

Once the current crisis settles out ... things aren't likely to return to the previous free-wheeling days with no attention to loan quality and enormous leveraging ... recent article from today

HSBC says excessive bank leverage model bankrupt
http://www.reuters.com/article/rbssFinancialServicesAndRealEstateNews/idUSL1014625020080610

long winded, decade old post discussing some of the current problems ...including the need for visibility into underlying value in CDO-like instruments
https://www.garlic.com/~lynn/aepay3.htm#riskm

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

How do you manage your value statement?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 10, 2008
Subject: How do you manage your value statement?
Blog: Change Management
related answer to this change management question
http://www.linkedin.com/answers/management/change-management/MGM_CMG/240817-1373253
https://www.garlic.com/~lynn/2008i.html#31 Mastering the Dynamics of Innovation

old post with some extracts from fergus/morris book discussing effects in the wake of future system project failure
https://www.garlic.com/~lynn/2001f.html#33

where things became much more rigid, structured, oriented towards maintaining the status quo and resisting change.

it didn't help that I sponsored Boyd's briefings in the corporation ... lots of past Boyd references
https://www.garlic.com/~lynn/subboyd.html

part of Boyd's message ... embodied in the OODA-loop metaphor ... was not only agility and adaptability but being able to do it much faster than your competition.

....

a big portion of outsourcing has been about getting sufficient skills ... not just the money. We looked at educational competitiveness in the early 90s. When we were interviewing in that period ... all of the 4.0 students from top univ. were foreigners and many were under obligations to return home after working in the US 5-8 yrs. Half the technical PHDs from top univs. were foreigners ... we've claimed the internet bubble wouldn't even have been possible w/o all those highly skilled foreigners.

the other example we've used is Y2K remediation happening at the same time as the internet bubble. Lots of businesses were forced to outsource nuts&bolts business dataprocessing because so many were flocking to the internet bubble. They were forced into that outsourcing ... not because of a salary differential ... but in order to get anybody to do the work. After the trust relations were established (sort of forced by not being able to get the skills anywhere else) ... the outsourcing work continued. After the internet bubble burst ... was when people started complaining about all these jobs having gone overseas ... but they weren't complaining in the middle of the bubble.

US educational system now ranks near the bottom of industrial nations ... which is contributing to the jobs moving as much as the salary differential. recent posts on the subject:
https://www.garlic.com/~lynn/2007j.html#58 IBM Unionization
https://www.garlic.com/~lynn/2007j.html#61 Lean and Mean: 150,000 U.S. layoffs for IBM?
https://www.garlic.com/~lynn/2007u.html#78 Education ranking
https://www.garlic.com/~lynn/2007u.html#80 Education ranking
https://www.garlic.com/~lynn/2007u.html#82 Education ranking
https://www.garlic.com/~lynn/2007v.html#10 About 1 in 5 IBM employees now in India - so what ?
https://www.garlic.com/~lynn/2007v.html#16 Education ranking
https://www.garlic.com/~lynn/2007v.html#19 Education ranking
https://www.garlic.com/~lynn/2007v.html#20 Education ranking
https://www.garlic.com/~lynn/2007v.html#38 Education ranking
https://www.garlic.com/~lynn/2007v.html#39 Education ranking
https://www.garlic.com/~lynn/2007v.html#44 Education ranking
https://www.garlic.com/~lynn/2007v.html#45 Education ranking
https://www.garlic.com/~lynn/2007v.html#51 Education ranking
https://www.garlic.com/~lynn/2007v.html#71 Education ranking
https://www.garlic.com/~lynn/2008.html#52 Education ranking
https://www.garlic.com/~lynn/2008.html#55 Education ranking
https://www.garlic.com/~lynn/2008.html#60 Education ranking
https://www.garlic.com/~lynn/2008.html#62 competitiveness
https://www.garlic.com/~lynn/2008.html#81 Education ranking
https://www.garlic.com/~lynn/2008.html#83 Education ranking
https://www.garlic.com/~lynn/2008b.html#13 Education ranking
https://www.garlic.com/~lynn/2008c.html#56 Toyota Beats GM in Global Production

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

How do you manage your value statement?

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 10, 2008
Subject: How do you manage your value statement?
Blog: Change Management
re:
https://www.garlic.com/~lynn/2008i.html#65 How do you manage your value statement?

and:
http://www.linkedin.com/answers/management/change-management/MGM_CMG/248432-3786937

When I was an undergraduate ... I was brought in to help get Boeing Computer Services going. Computing facilities had been treated purely as an overhead/expense item ... dataprocessing was starting to be viewed as a competitive advantage ... and moving it into its own line of business gave it some semblance of having P&L responsibility. 747 serial #3 was flying the skies of Seattle getting certification. A tour of the 747 mockup included the statement that the 747 would carry so many people that 747s would be served by a minimum of four jetways.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Do you have other examples of how people evade taking resp. for risk

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 11, 2008
Subject: Do you have other examples of how people evade taking resp. for risk
Blog: Change Management
re:
http://www.linkedin.com/answers/management/change-management/MGM_CMG/245229-2959272

business school article that mentions responsibility for current credit crisis
http://knowledge.wharton.upenn.edu/article.cfm?articleid=1933 (gone 404 and/or requires registration)

above article apparently was only freely available for 1st 30 days after publication.

a couple quotes from the article posted here (along with several other refs)
https://www.garlic.com/~lynn/2008g.html#32

the business school article includes comments that possibly 1000 people were responsible for the current credit crunch and that it would go a long way towards fixing the problem if the gov. could figure out how they could lose their jobs.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

EXCP access methos

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EXCP access methos
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 11 Jun 2008 17:48:16 -0400
DASDBill2 writes:
In VM, CCWs are not interpreted as far as I know, but rather the channel program is scanned before being executed in order to determine how to let it run safely on its own. The only way I can think of to execute a channel program interpretively is to do a separate I/O request for each CCW in the channel program (of course, with the necessary CCWs in front of it for it to work correctly). Then if a CCW reads data, the data would be read somewhere that VM could trust, and then move that data into the caller's buffer. This is similar to how interpretive machine instruction is handled. But the overhead in interpreting channel programs would be prohibitive, I believe, so they are not really interpreted. They are first made safe with the proper CCWs in front of those supplied by the problem-state caller and then allowed to run on their own.

CCWTRANS was the CP67 routine that created a "shadow" copy of virtual machine channel program.

channels run with "real" data transfer addresses. virtual machine (and "VS" system application EXCP) channel programs have virtual addresses.

CCWTRANS scanned the virtual machine channel program ... creating a "shadow" copy of the virtual machine channel program ... fetching/fixing the related virtual addresses ... and replacing the virtual addresses with real addresses.

The original translation of os/360 to virtual storage operation included crafting a copy of (cp67's) CCWTRANS into the side of VS2 ... to perform the equivalent function of EXCP channel programs (whether application or access methods). VS2 (SVS & then MVS) has had the same problem with access methods (and other applications) creating channel programs with "virtual" addresses ... and then issuing EXCP. At that point, EXCP processing has the same "problem" as virtual machine emulation ... translating channel programs built with virtual addresses into shadow copy that has "real" addresses.

EXCPVR was introduced to indicate that a channel program with "real" addresses was being used (rather than traditional EXCP channel program). A discussion of EXCPVR:
http://publib.boulder.ibm.com/infocenter/zos/v1r9/topic/com.ibm.zos.r9.idas300/efcprs.htm#efcprs

disk seek channel commands ... for virtual machine non-full-pack minidisks would also result in a "shadow" being made of the seek argument ... adjusting it as appropriate (i.e. a minidisk could be for 30 cyls starting at real cylinder 100 ... the shadow would have cylinder numbers adjusted by 100 ... unless it attempted to access more than 30 cyls ... which would result in the shadow being adjusted to an invalid cylinder number).

OS360 used a 3 channel command prefix ... "SEEK", followed by a "set file mask" command and then a "TIC" (transfer/branch) to the channel program referenced by EXCP (it didn't need to scan/translate the passed channel program ... just position the arm and then prevent the passed channel program from moving the arm again).
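
a minimal sketch (Python, purely for illustration ... not actual CP67/VM370 code) of the shadow-translation idea described above: copy each virtual CCW, replace its virtual data address with a real (page-fixed) address, and relocate/bounds-check minidisk seek arguments. the CCW layout and helper names here are assumptions for illustration only.

from dataclasses import dataclass, replace

SEEK = 0x07   # channel command code for SEEK (illustrative subset)

@dataclass
class CCW:
    cmd: int           # channel command code
    addr: int          # data address (virtual, in the guest's channel program)
    count: int
    seek_cyl: int = 0  # for SEEK commands, the cylinder argument (simplified model)

def translate(virtual_prog, virt_to_real, mdisk_start, mdisk_cyls):
    """Build a "shadow" copy of a guest channel program: virtual data
    addresses are replaced with real (fetched/fixed) addresses, and
    minidisk SEEK arguments are relocated by the minidisk's starting
    cylinder (or forced invalid if out of bounds)."""
    shadow = []
    for ccw in virtual_prog:
        s = replace(ccw, addr=virt_to_real(ccw.addr))    # pin + translate the page
        if ccw.cmd == SEEK:
            if ccw.seek_cyl >= mdisk_cyls:
                s.seek_cyl = 0xFFFF                      # force an invalid cylinder
            else:
                s.seek_cyl = ccw.seek_cyl + mdisk_start  # shift into the real extent
        shadow.append(s)
    return shadow

# e.g. a 30-cylinder minidisk starting at real cylinder 100:
# translate(prog, lambda v: v + 0x100000, mdisk_start=100, mdisk_cyls=30)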

There was a version of CP67 that was converted to run on 370s ("CP67-I" system) ... which was used extensively inside IBM pending availability of VM370 product. In the morph of CP67 to VM370 product, the CCWTRANS channel program translation routine became DMKCCW.

past posts mentioning VS2 effort started out by crafting cp67 CCWTRANS to get channel program translation for EXCP:
https://www.garlic.com/~lynn/2000.html#68 Mainframe operating systems
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2002c.html#39 VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
https://www.garlic.com/~lynn/2002j.html#70 hone acronym (cross post)
https://www.garlic.com/~lynn/2002l.html#65 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002n.html#62 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003k.html#27 Microkernels are not "all or nothing". Re: Multics Concepts For
https://www.garlic.com/~lynn/2004.html#18 virtual-machine theory
https://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
https://www.garlic.com/~lynn/2004g.html#50 Chained I/O's
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005b.html#49 The mid-seventies SHARE survey
https://www.garlic.com/~lynn/2005f.html#45 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
https://www.garlic.com/~lynn/2005p.html#18 address space
https://www.garlic.com/~lynn/2005q.html#41 Instruction Set Enhancement Idea
https://www.garlic.com/~lynn/2005s.html#25 MVCIN instruction
https://www.garlic.com/~lynn/2005t.html#7 2nd level install - duplicate volsers
https://www.garlic.com/~lynn/2006.html#31 Is VIO mandatory?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?
https://www.garlic.com/~lynn/2006b.html#25 Multiple address spaces
https://www.garlic.com/~lynn/2006f.html#5 3380-3390 Conversion - DISAPPOINTMENT
https://www.garlic.com/~lynn/2006i.html#33 virtual memory
https://www.garlic.com/~lynn/2006j.html#5 virtual memory
https://www.garlic.com/~lynn/2006j.html#27 virtual memory
https://www.garlic.com/~lynn/2006o.html#27 oops
https://www.garlic.com/~lynn/2006r.html#39 REAL memory column in SDSF
https://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007e.html#46 FBA rant
https://www.garlic.com/~lynn/2007f.html#6 IBM S/360 series operating systems history
https://www.garlic.com/~lynn/2007f.html#33 Historical curiosity question
https://www.garlic.com/~lynn/2007k.html#26 user level TCP implementation
https://www.garlic.com/~lynn/2007n.html#35 IBM obsoleting mainframe hardware
https://www.garlic.com/~lynn/2007o.html#41 Virtual Storage implementation
https://www.garlic.com/~lynn/2007p.html#69 GETMAIN/FREEMAIN and virtual storage backing up
https://www.garlic.com/~lynn/2007s.html#2 Real storage usage - a quick question
https://www.garlic.com/~lynn/2007s.html#41 Age of IBM VM

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

EXCP access methos

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: EXCP access methos
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 11 Jun 2008 18:15:30 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
There was a version of CP67 that was converted to run on 370s ("CP67-I" system) ... which was used extensively inside IBM pending availability of VM370 product. In the morph of CP67 to VM370 product, the CCWTRANS channel program translation routine became DMKCCW.

re:
https://www.garlic.com/~lynn/2008i.html#68 EXCP access methos

an early use of the internal network was a distributed development project between the science center and endicott.

the internal network technology was created at the science center (as well as cp67, gml, lots of other stuff)
https://www.garlic.com/~lynn/subtopic.html#545tech

the internal network was larger than the arpanet/internet from just about the beginning to possibly mid-85
https://www.garlic.com/~lynn/subnetwork.html#internalnet

the 370 virtual memory hardware architecture was well specified ... and endicott approached the science center about providing 370 virtual machine support for early software testing ... i.e. in addition to providing 360 and 360/67 virtual memory emulation ... cp67 would be modified to also provide an option for 370 (and 370 virtual memory) emulation.

the original cms multi-level source maintenance system was developed as part of this effort (cms & cp67 had source maintenance but it was single-level "update").

part of the issue was that this would run on the science center cp67 time-sharing system which included access by numerous non-employees (many from various educational institutions in the cambridge/boston area). 370 virtual memory was a closely held corporate secret and so there had to be a lot of (security) measures to prevent it being divulged.

the basic cambridge cp67 time-sharing system ran "CP67-L".

eventually, in a 360/67 virtual machine, a "CP67-H" kernel ran which had the modifications to provide 370 virtual machines as an option. This provided isolation, preventing the general time-sharing users from being exposed to any of the 370 features.

then a set of updates were created that modified the CP67 kernel to run on 370 "hardware" .... a "CP67-I" kernel would then run in a 370 virtual machine provided by a "CP67-H" kernel running in a 360/67 virtual machine.

CP67-I was in regular operation a year before the first engineering 370 machine with virtual memory hardware was working. In fact, CP67-I was used as a test case when that first engineering machine became operational.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Next Generation Security

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 12, 2008
Subject: Next Generation Security
Blog: Telecommunications
I gave a graduate student seminar at ISI/USC in '97 (including ISI rfc-editor group and some e-commerce groups) about "internet" not being business critical technology.

It was somewhat based on our much earlier work for our availability ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

where we had done a detailed threat and vulnerability study of tcp/ip and the internet.

We had used that information when we were asked to consult with a small client/server startup that wanted to do payment transactions on their server and had this technology they had invented called SSL that they wanted to use ... that work is now frequently referred to as electronic commerce. As part of deploying the payment gateway for processing the transactions ... we had to do a large number of compensating procedures (and countermeasures) ... not just for strictly (traditional) security purposes ... but for availability and integrity also.
https://www.garlic.com/~lynn/subnetwork.html#gateway

We somewhat later formalized some of this as parameterized risk management that shows up in the aads patent portfolio
https://www.garlic.com/~lynn/x959.html#aads

that supports a risk management framework able to adapt dynamically across a large number of different changing circumstances as well as over time.

slightly related answer involving working on categorizing threats and vulnerabilities:
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/243460-21457240
https://www.garlic.com/~lynn/2008i.html#43 IT Security Statistics

For the use of SSL between the server and the payment gateway ... we had "sign-off" over implementation deployment and could mandate some number of compensating processes. However, we weren't allowed similar control over the browser/server interface. Shortly after deployment ... we made facetious comments about SSL being a "comfort" mechanism (as opposed to a security mechanism) ... lots of past posts on the subject
https://www.garlic.com/~lynn/subpubkey.html#sslcert

the biggest use of SSL in the world today is for this thing called electronic commerce to "hide" account numbers and transaction details.

In the mid-90s, the X9A10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments ... and came up with the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959

Part of the x9.59 financial standard was to eliminate the vulnerability associated with divulging account numbers and transaction details ... slightly tweaking the existing paradigm. With it no longer necessary to hide the account numbers, the problems with the majority of the security breaches in the news go away (it doesn't stop the breaches, just eliminates any resulting fraudulent transactions). Since the information no longer has to be hidden, it also eliminates the major use of SSL in the world today.

A lot of the time there are security professionals adding patches on top of a (frequently faulty) infrastructure w/o really understanding the underlying fundamentals. In fact, nearly by definition, any infrastructure requiring frequent patches implies fundamental infrastructure flaws (a simple analogy is that there frequently are regulations about NOT being able to use patched tires in commercial operations).

there was a great talk by Bruce Lindsay at the recent tribute for Jim Gray ... where he explains that Jim's work on formalizing transactions was the real enabler for online transactions (being able to "trust" electronic processing in lieu of manual/paper operations). lots of past posts referencing the period
https://www.garlic.com/~lynn/submain.html#systemr

some recent posts referencing the podcasts from the tribute:
https://www.garlic.com/~lynn/2008i.html#50
https://www.garlic.com/~lynn/2008i.html#51
https://www.garlic.com/~lynn/2008i.html#54

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

The End of Privacy?

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 12, 2008
Subject: The End of Privacy?
Blog: Information Security
some of the issue has been confusing authentication and identification.

in most situations where the goal is to verify that an entity is allowed to do something, it is possible to implement authentication (which doesn't require divulging personal information). however, because of the frequent confusion about the difference between authentication and identification ... there is a fall-back to requiring identification (rather than authentication) ... which involves divulging some level of personal information.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Outsourcing dilemma or debacle, you decide

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Outsourcing dilemma or debacle, you decide...
Newsgroups: bit.listserv.ibm-main
Date: Thu, 12 Jun 2008 11:35:37 -0400
howard.brazee@CUSYS.EDU (Howard Brazee) writes:
It seems that more and more, systems programmers either find some old-timer to mentor them or they have to shoehorn themselves into positions to learn their jobs on their own.

Companies don't want to train them. Same thing with CoBOL or PL/I programmers.


and how did the old timers learn their jobs?

... back in the days of having to walk ten miles to school, barefoot in the snow ... uphill both ways.

slightly related post in this blog ... that drifted over into outsourcing:
https://www.garlic.com/~lynn/2008i.html#65 How do you manage your value statement?
https://www.garlic.com/~lynn/2008i.html#66 How do you manage your value statement?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Should The CEO Have the Lowest Pay In Senior Management?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 12, 2008
Subject: Should The CEO Have the Lowest Pay In Senior Management?
Blog: Information Security
A business news channel had a recent editorial statement that in the past, the ratio of US executive pay to worker pay was 20:1 ... they observed that it is currently 400:1 and totally out of control. By comparison, in other industrial countries it runs more like 10:1.

Another recent news article said that during the four yr period in the run up to the current credit crunch ... wall street paid out over $160 billion in bonuses (the implication was that it was essentially part of the $400 billion to $1 trillion in current write-down losses ... claim a profit for a bonus ... which some years later actually turns out to be a loss).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Should The CEO Have the Lowest Pay In Senior Management?

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 12, 2008
Subject: Should The CEO Have the Lowest Pay In Senior Management?
Blog: Information Security
On 6/12/08 9:03 AM, John Taylor wrote:
I'm not against high pay...but I think salary too high create an "I am king and you are my slaves" mentality.

re:
https://www.garlic.com/~lynn/2008.html#73 Should The CEO Have the Lowest Pay In Senior Management?

John Boyd ... in his briefings on the organic design for command and control ... used to give a different explanation. lots of past posts and/or URL references from around the web:
https://www.garlic.com/~lynn/subboyd.html

He claimed that on the entry into WW2, the US had to deploy a huge number of quickly trained and inexperienced people. In order to leverage the scarce skilled resources ... they created a tightly controlled and extremely rigid command and control infrastructure. Then things roll forward a few decades and these former young officers (having gotten their indoctrination in how to run a large organization) started to permeate the upper ranks of commercial institutions ... and began to change the whole flavor of how large (commercial) organizations were run (reflecting their training in ww2 as young officers) ... changing the whole culture into the assumption that only the top officers knew what they were doing ... and essentially everybody else in the organization was totally unskilled.

So which is cause and which is effect? ... the belief that they are the only ones that know what they are doing ... justifies the enormous compensation .... or the enormous compensation justifies treating everybody else like they don't know what they are doing.

one of the other things Boyd did was give advice to up & coming youngsters that they needed to choose a career path ... either "DO something" or "BE somebody"; BE somebody could lead to positions of distinction and power, while choosing to "DO something" could put you in opposition to those in "power" and result in reprimands. This didn't endear him to the Air Force brass ... and recently the SECDEF was advising young officers to be more like Boyd (which is presumed to have really angered the Air Force brass).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Outsourcing dilemma or debacle, you decide

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Outsourcing dilemma or debacle, you decide...
Newsgroups: bit.listserv.ibm-main
Date: Thu, 12 Jun 2008 14:34:57 -0400
billwilkie@HOTMAIL.COM (Bill Wilkie) writes:
Will Durant, a famous historian once said that a nation is born stoic and dies epicurean. The same is true for everything from Operating Systems, to change control to society in general. We enhance everything to a point where it is sophisticated and mature and then abandon it because it is too expensive. Then we begin again.

re:
https://www.garlic.com/~lynn/2008i.html#72 Outsourcing dilemma or debacle, you decide...

Boyd's OODA-loop would say that it got too rigid and structured ... including too many people with vested interests in not changing. The OODA-loop metaphor focuses on agility, adaptability and change
https://www.garlic.com/~lynn/subboyd.html

... I would assert that it isn't "too expensive" per se ... but too rigid and unable to adapt. Vested interests are likely to throw up lots of road blocks to change ... making things more complicated (and also more expensive). Frequently KISS is more conducive to being inexpensive, agile, and adaptable (and is also viewed as a threat by vested interests).

There is some claim that something of the sort happened in the wake of the failed future system project ... old post that includes comments from the fergus/morris book about the wake left after the future system project failed
https://www.garlic.com/~lynn/2001f.html#33

lots of past posts mentioning failed future system project
https://www.garlic.com/~lynn/submain.html#futuresys

and it took the corporation quite some time to work out of it.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Security Awareness

Refed: **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 12, 2008
Subject: Security Awareness
Blog: Information Security
recent article

IT Execs: Our Breaches Are None of Your Business
http://www.darkreading.com/document.asp?doc_id=156297

from above:
Eighty-seven percent of IT decision-makers don't believe the general public should be informed if a data breach occurs, according to the study. More than half (61 percent) didn't think the police should be informed, either.

... snip ...

recent posts discussing background behind breach notification legislation:
https://www.garlic.com/~lynn/2008i.html#21 Worst Security Threats?
https://www.garlic.com/~lynn/2008i.html#42 Security Breaches

also these Q&A
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/237628-24760462
http://www.linkedin.com/answers/technology/information-technology/information-security/TCH_ITS_ISC/243464-24494306

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Do you think the change in bankrupcy laws has exacerbated the problems in the housing market leading more people into forclosure?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 14, 2008
Subject: Do you think the change in bankrupcy laws has exacerbated the problems in the housing market leading more people into forclosure?
Blog: Government Policy
toxic CDOs were used two decades ago in the S&L crisis to obfuscate underlying value.

long-winded, decade old post discussing much of the current situation ... including visibility into CDO-like instruments.
https://www.garlic.com/~lynn/aepay3.htm#riskm

it used to be that loan originators retained the loans they originated and therefore had to pay attention to loan quality. with toxic CDOs they could unload all the loans they originated, so all financial "feed-back" controls evaporated and the only measure was how fast they could originate loans and unload them. A side-effect was that a whole lot of loans got written w/o regard to whether the people getting the loans were qualified.

Subprime loans were subprime in another sense. They were targeted at first time home owners with no credit history, who were therefore lower quality borrowers. Many were also subprime in the sense that the loans had a very low introductory borrowing rate for the first couple years and then became a standard adjustable rate loan. There are some stats that the majority of such loans went to people with credit history and likely not for owner-occupied housing (i.e. speculators that were looking to flip the property before the rate adjusted).

There were a large number of first time home owners that weren't remotely qualified for the house they moved into. However, there appears to be a much larger number of such subprime loans that went to pure speculation.

the 2nd order effects are that they are talking about something like $1 trillion in toxic CDO write downs. The simplified mathematical formula is that $1 trillion was unrealistically pumped into the loan market ... with a corresponding inflation in housing prices; the implication is that a corresponding deflation adjustment now occurs in housing prices.

Housing prices are sensitive to demand ... not only did that $1 trillion unrealistically drive up prices ... but the speculation also tended to create the impression of much larger demand than actually existed (houses being held by non-owner/occupied speculators looking to keep the house for a year or so and then flip it).

with some number of loans at 100% (with no down payment) ... the deflation of housing prices (to realistic levels) results in houses being worth less than the loan.

a few weeks ago one of the business news channel commentators was getting annoyed by Bernanke getting into a rut with the constant refrain that new regulations will fix the problem ... and came out with the statement that American bankers are the most inventive in the world and have managed to totally screw up the system at least once a decade regardless of the regulations in effect.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Hypothesis #4 -- The First Requirement of Security is Usability

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 14, 2008 02:05 PM
Subject: Hypothesis #4 -- The First Requirement of Security is Usability
Blog: Financial Cryptography
re:
https://financialcryptography.com/mt/archives/001046.html

There is also the issue of who might make money off it. I've commented that the ssl server certification authority industry has somewhat backed DNSSEC ... but it represents a significant catch-22 for them
https://www.garlic.com/~lynn/subpubkey.html#catch22

currently ssl domain name server digital certificates represent a binding between a domain name and a public key. the authoritative agency for domain names is the domain name infrastructure. ssl domain name server digital certificates were (at least partially) justified by perceived vulnerabilities in the domain name infrastructure (the same domain name infrastructure that is the authoritative agency for domain names).

the root trust for domain names is the domain name infrastructure ... so part of DNSSEC could be viewed as improving the integrity of the domain name infrastructure as part of eliminating systemic risk for ssl domain name server digital certificates. This can be achieved by having a public key presented as part of registering a domain name ... and then having future communication with the domain name infrastructure be digitally signed ... which can be verified with the previously registered, onfile public key.

This can also be used to reduce the cost of ssl domain name digital certificates. Currently certification authorities require an ssl digital certificate application to include a whole lot of identification information. Then the certification authority has to perform an error-prone, expensive and time-consuming identification matching process against the information on file (for the domain name) with the domain name infrastructure.

With an on-file public key, certification authorities can just require that ssl domain name digital certificate applications be digitally signed ... then the certification authority can replace the time-consuming, expensive, and error-prone identification process with a much more reliable and inexpensive authentication process ... verifying the digital signature with the public key on-file with the domain name infrastructure.
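
a hedged sketch (Python, using the "cryptography" package) of the on-file public key idea: the key registered with the domain name infrastructure is used to authenticate a digitally signed ssl certificate application, replacing the identification-matching step. the registry table and function names are illustrative assumptions, not any real registry or CA interface.

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

onfile_keys = {}                      # domain name -> public key, kept by the registry

def register_domain(domain, public_key):
    onfile_keys[domain] = public_key  # public key supplied at domain registration time

def ca_accepts_application(domain, application_bytes, signature):
    """Certification authority check: authenticate the signed application
    against the registry's on-file key instead of doing expensive,
    error-prone identification matching against registration paperwork."""
    key = onfile_keys.get(domain)
    if key is None:
        return False
    try:
        key.verify(signature, application_bytes)
        return True
    except InvalidSignature:
        return False

# usage: the registrant signs the application with the matching private key
owner = Ed25519PrivateKey.generate()
register_domain("example.com", owner.public_key())
app = b"ssl cert application for example.com"
assert ca_accepts_application("example.com", app, owner.sign(app))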

the catch-22 for the ssl domain name certification authority industry

1) improvements in the integrity of the domain name infrastructure mitigate some of the original justification for ssl domain name digital certificates

2) if the general public can also start doing trusted real-time retrieval of on-file public keys ... it further eliminates the need for ssl domain name digital certificates as well as being a general demonstration that digital certificates aren't needed for trusted public key operations.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

OS X Finder windows vs terminal window weirdness

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS X Finder windows vs terminal window weirdness
Newsgroups: alt.folklore.computers
Date: Sat, 14 Jun 2008 20:01:13 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
Are there any "older" systems that didn't work this way? Every system I'm familiar with from before around 1989 read in the program and paged it out before it started to run. OS/2 is the first system I know of that paged in the executable from file storage.

when i did page mapped support for the cms filesystem in the early 70s ... potentially allowing an executable to be demand paged from its home position in the filesystem.
https://www.garlic.com/~lynn/submain.html#mmap

there were a number of issues here ... the base cms filesystem (when possible) would do 64k-byte reads from the filesystem (records of an executable had to be allocated sequentially & contiguously). i did some tricks in the underlying paged mapped support to dynamically adapt how the operation was performed ... if it was a really large executable, with a large amount of contention for real storage, and very little real storage available ... then it would allow things to progress in demand page mode.

If the resources were available, "asynchronous reads" would be queued for the whole executable ... the underlying paging mechanism would reorganize for optimal physical transfer ... and execution could start as soon as the page for the execution start was available (even if the rest weren't all in memory). There are some processor cache operations that can work like this (proceeding as soon as the requested word is available even if the full cache line isn't). The issue is that for large executables ... reverting to 4k demand page operations has a huge number of latencies.
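
an illustrative-only sketch (Python) of the adaptive decision described above: with ample free real storage and little contention, queue asynchronous reads for the whole executable and let only the entry page gate execution; otherwise fall back to pure 4k demand paging. the function name, parameters, and threshold are assumptions, not the actual cms code.

def plan_load(num_pages, entry_page, free_frames, contention_ratio):
    """Return (pages_to_prefetch, gate_page) for loading an executable."""
    if free_frames >= num_pages and contention_ratio < 0.5:
        # prefetch everything asynchronously (reordered for optimal physical
        # transfer); only the entry page gates the start of execution
        return list(range(num_pages)), entry_page
    # demand paging: nothing prefetched, each reference faults its page in
    return [], entry_page

# e.g. a 200-page executable on a lightly loaded machine with 1000 free frames:
# plan_load(200, entry_page=0, free_frames=1000, contention_ratio=0.1)
# -> ([0, 1, ..., 199], 0)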

however, a lot of cms compilers and applications were borrowed from os360 ... which had the behavior that lots of program image locations had to be fetched and "swizzled" before execution could begin. lots of past posts discussing the difficulty of patching the os360 implementation paradigm for operation in a high performance page-mapped environment.
https://www.garlic.com/~lynn/submain.html#adcon

for other topic drift ... old post about being contacted by people in the os2 group about adapting stuff that i had done in the 60s and early 70s for os2 implementation:
https://www.garlic.com/~lynn/2007i.html#60 John W. Backus, 82, Fortran developer, dies
https://www.garlic.com/~lynn/2007l.html#61 John W. Backus, 82, Fortran developer, dies

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Certificate Purpose

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Purpose
Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin
Date: Sat, 14 Jun 2008 21:34:17 -0400
"Vadim Rapp" <nospam@sbcglobal.net> writes:
I have a personal email signing certificate from Thawte. The certificate is issued in my name. The certificate is installed in the system.

If I look at the certificate from Internet Explorer Options/Content/Certificates, or from MMC, I see two purposes of the certificate: "proves your identity to a remote computer" and "Protects email messages".

But if I send an email signed with this certificate, and then look at the certificate already in the email (sent or received - same thing), I see only purpose "Protects email messages". Same in Outlook and in Outlook Express.

Why I don't see "proves your identity" purpose in the certificate in email?


asymmetric key cryptography is a technology where a pair of keys is required, one for encoding and the other for decoding (vis-a-vis symmetric key cryptography where the same key is used for both encoding and decoding).

public(/private) key cryptography is a business process where one key (of an asymmetric key pair) is kept confidential and never divulged (the private key) and the other key (public) is freely distributed.

a digital signature is a business process that provides authentication and integrity. the hash of a message is encoded with a private key. subsequently the hash of the message is recalculated and compared with the "digital signature" hash that has been decoded with the corresponding public key. if they are equal, then the message is presumed to not have been modified and was "signed" by the entity in possession of the specific "private key". If the hashes are not equal, then the message has been altered (since "signing") and/or originated from a different entity.
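
a minimal sketch of that sign/verify flow, using the Python "cryptography" package (illustration only; key management, which is the hard part of the business process, is not shown). the library internally computes the hash, encodes it with the private key, and on verify recomputes the hash and compares.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()          # freely distributed

message = b"the bits being signed"
# hash of the message is computed and encoded with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

try:
    # verifier recomputes the hash and compares it with the hash recovered
    # from the signature using the (on-file or certified) public key
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("message unmodified and signed by holder of the private key")
except InvalidSignature:
    print("message altered and/or signed by a different entity")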

over the years there has been some amount of semantic confusion involving the terms "digital signature" and "human signature" ... possibly because they both contain the word "signature". A "human signature" implies that the person has read, understood, and agrees, approves, and/or authorizes what has been signed. A "digital signature" frequently may be used where a person never even has actually examined the bits that are digitally signed.

a digital certificate is a business process that is the electronic analogy to the letters of introduction/credit for first time communication between two strangers (from sailing ship days and earlier) ... where the strangers have no direct knowledge of each other and/or don't have recourse to information sources about the other entity.

there was work on generalized x.509 identity digital certificates nearly two decades ago. the issue, by the mid-90s, was that most organizations realized that such identity digital certificates represented significant privacy and liability issues. As a result, there was significant retrenching from the paradigm.

In part, the original scenario was electronic mail from the early 80s, where somebody dialed up their electronic post office, exchanged email and then hung up. There could be a significant problem authenticating first time email from a total stranger (in this mostly "offline" environment).

Digital certificates had started out with a fairly narrowly defined market ... first time communication between strangers w/o direct knowledge of each other (and/or recourse to information about the other party). Realizing that generalized identity certificates represented significant privacy and liability issues, resulted in retrenching and further narrowing of the target market. The increasing pervasiveness of the internet and online information sources further narrowed their target market and usefulness (since there became lots of alternatives for information about total strangers).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Selling Security using Prospect Theory. Or not

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 15, 2008 04:16 PM
Subject: Selling Security using Prospect Theory. Or not.
Blog: Financial Cryptography
re:
https://financialcryptography.com/mt/archives/001061.html

how many times has the refrain been repeated about the deficiency of "after market" solutions ... that security has to be built in ... not affixed afterwards (aka the automobile safety analogy ... things like seat belts, safety glass, air bags, bumpers, crash impact zones, etc).

however, based on the automobile analogy, there may be some evidence that it only happens with gov. mandates.

the safety/security engineers don't disappear with built in security ... but they tend to disappear from public limelight.

misc. old posts that include raising the aftermarket seat belt analogy
https://www.garlic.com/~lynn/aadsm14.htm#32 An attack on paypal
https://www.garlic.com/~lynn/aadsm16.htm#15 Difference between TCPA-Hardware and a smart card (was: example: secure computing kernel needed)
https://www.garlic.com/~lynn/aadsm17.htm#40 The future of security
https://www.garlic.com/~lynn/aadsm17.htm#56 Question on the state of the security industry
https://www.garlic.com/~lynn/aadsm19.htm#10 Security as a "Consumer Choice" model or as a sales (SANS) model?
https://www.garlic.com/~lynn/aadsm21.htm#16 PKI too confusing to prevent phishing, part 28
https://www.garlic.com/~lynn/aadsm22.htm#28 Meccano Trojans coming to a desktop near you
https://www.garlic.com/~lynn/aadsm26.htm#64 Dr Geer goes to Washington

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

parallel computing book

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: parallel computing book
Newsgroups: comp.arch
Date: Mon, 16 Jun 2008 11:42:54 -0400
Stephen Fuld <S.Fuld@PleaseRemove.att.net> writes:
You weren't thinking of "in Search of Clusters" by Greg Phister, were you? Greg used to post here.

recent post in this n.g. with reference to Greg Pfister
https://www.garlic.com/~lynn/2008e.html#40 Fantasy-Land_Hierarchal_NUMA_Memory-Model_on_Vertical ...

older reference to Greg
https://www.garlic.com/~lynn/2006w.html#40 Why so little parallelism

referencing little difference in opinion that Greg & I had regarding work on clusters ... from this exchange
https://www.garlic.com/~lynn/2000c.html#21 Cache coherence [was Re: TF-1]

regarding medusa effort ... old email
https://www.garlic.com/~lynn/lhwemail.html#medusa

and this old post
https://www.garlic.com/~lynn/95.html#13

where the effort was transferred and we were told we couldn't work on anything with more than four processors.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Certificate Purpose

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Purpose
Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin
Date: Tue, 17 Jun 2008 09:08:35 -0400
Michael Ströder <michael@stroeder.com> writes:
In Windows you need a so-called revocation provider for OCSP. Don't know Vista but until Windows XP you have to buy a third-party software for OCSP. But OCSP is not the overall solution to the problem. The client has to locate the OCSP responder, OCSP responder asked has to know about a particular CA to return the correct revocation status of certs issued by that CA...

re:
https://www.garlic.com/~lynn/2008i.html#80 Certificate Purpose

basically public key operation is something you have authentication ... i.e. business process that keeps the corresponding private key confidential and never divulged to anybody. verifying digital signature (created by a specific private key) with the corresponding public key ... demonstrates the entity has possession of that "private key" (kept confidential and never divulged to anybody).

as mentioned, a digital certificate is the electronic version of the ancient letters of credit/introduction ... indicating something about the entity associated with something you have authentication ... for first time communication between two strangers (who have no other access to information about each other, either locally and/or in an online environment).

we had been called in to consult with a small client/server startup that wanted to do payment transactions on their server and they had invented this thing called SSL that they wanted to use as part of the process. as a result we had to do a detailed business walkthru of the SSL process as well as of these new operations calling themselves certification authorities ... and these things they were calling digital certificates.

we had signoff/approval authority on the operation between the server and this new thing called payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

and were able to mandate some compensating procedures. We only had advisory capacity between the servers and clients ... and almost immediately most deployments violated the basic SSL assumptions necessary for security (which continues up to the current day).

In those early days, we were getting comments from certain factions that digital certificates were necessary to bring payment transactions into the modern age. We observed that the use of digital certificates (with their offline design point) actually set online payment transactions back decades (rather than making them more modern). It was somewhat after a whole series of those interchanges that work on (rube goldberg) OCSP appeared ... which has the facade of providing some of the benefits of online, timely operation while still preserving the archaic offline digital certificate paradigm. The problem with OCSP is that it doesn't go the whole way and just make things a real online, timely operation (and eliminate the facade of needing digital certificates for operation in an offline environment). In an online payment transaction scenario, not only is it possible to do real-time lookup of the corresponding public key for real-time (something you have) authentication, but also real-time authorization ... looking at things like current account balance and/or doing other analysis based on current account characteristics and/or account transaction activity/patterns.

There were other incidental problems trying to apply digital certificates (specifically) to payment transactions (other than reverting decades of real-time, online operation to an archaic offline paradigm). After we worked on what is commonly referred to as electronic commerce today (including the SSL domain name digital certificate part) ... there were some number of efforts to apply digital certificates to payment transactions ... at the same time we had been called in to work in the x9a10 financial standard working group (which had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments). we came up with the x9.59 financial standard which could use digital signature authentication w/o the need for digital certificates (i.e. use digital signatures in a real online mode of operation w/o trying to maintain any fiction of digital certificates and offline operation).
https://www.garlic.com/~lynn/x959.html#x959

we would periodically ridicule the digital certificate based efforts (besides noting that it was an attempt to revert decades of online operation to an offline paradigm). some of that presumably sparked the OCSP effort. However, the other thing we noted was that the addition of digital certificates to payment transactions increased the typical payload size by a factor of 100 along with an increase in processing by a factor of 100. This was enormous bloat (both payload and processing) for no useful purpose (digital certificates were redundant and superfluous compared to having the public key on file in the account record ... which turns out to be necessary for other purposes anyway). misc. past references
https://www.garlic.com/~lynn/subpubkey.html#bloat

we also noted that the primary purpose of SSL in the world today is in the electronic commerce application, where it is used to hide the account number and transaction details (as a countermeasure to the account fraud flavor of identity theft). we pointed out that the work on x9.59 had also slightly tweaked the payment transaction paradigm and eliminated the need to "hide" the transaction details. From the security acronym PAIN
P ... privacy (sometimes CAIN, confidential)
A ... authentication
I ... integrity
N ... non-repudiation

... in effect, x9.59 substitutes strong authentication and integrity for privacy as the countermeasure to account fraud (a flavor of identity theft). We noted that not only did the x9.59 standard eliminate the major use of SSL in the world today (hiding the account number and transaction details) ... but no longer needing to hide that information ... also eliminates the threats and vulnerabilities with the majority of the data breaches that have been in the news (it doesn't eliminate the breaches, just eliminates the ability of the attackers to use the information for fraudulent purposes).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Stephen Morse: Father of the 8086 Processor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Stephen Morse: Father of the 8086 Processor
Newsgroups: alt.folklore.computers
Date: Tue, 17 Jun 2008 09:22:09 -0400
Stephen Morse: Father of the 8086 Processor
http://www.pcworld.com/article/id,146917-c,intel/article.html

from above:
In honor of the 30th anniversary of Intel's 8086 chip, the microprocessor that set the standard that all PCs and new Macs use today, I interviewed Stephen Morse, the electrical engineer who was most responsible for the chip.

... snip ...

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Which of the latest browsers do you prefer and why?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 17, 2008
Subject: Which of the latest browsers do you prefer and why?
Blog: Web Development
I've been using mozilla tab browsing for 5-6 yrs as a means of masking/compensating for web latency.

i started out with a tab folder that i could click on and it would fetch 80-100 news oriented web pages (while i got a cup of coffee). I could then quickly cycle thru the (tabbed) web pages ... clicking on interesting articles (which would asynchronously load in the background into new tabs). By the time I had cycled thru all the initial web pages, the specific news articles would have all loaded and be immediately available.

Early on, I would complain about apparent storage cancers and performance problems when there were 500-600 open tabs (the machine still had enough real storage to avoid any paging ... unless this was repeated several times w/o cycling the browser).

About the time firefox moved to sqlite ... i switched to a process that used wget to fetch the initial set of (80-100) news oriented pages ... do a diff against the previous fetch, and then use sqlite to extract firefox's previously seen URLs. This was used to produce a list of "new" URLs (from the web sites) that also had not otherwise been seen previously. I then used the command line interface to signal the running firefox to load the list of "new" URLs into background tabs. The most recent firefox builds have improved significantly in both storage utilization and performance (handling opening several hundred URLs into background tabs).
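
a hedged sketch (Python) of that workflow; the places.sqlite path, the moz_places query, and the firefox command-line flag reflect common firefox 3-era setups and are assumptions here, not the actual script described above.

import sqlite3, subprocess

def already_seen(urls, places_db="places.sqlite"):
    """Return the subset of urls already present in firefox's history.
    (If the browser holds a lock on the live database, a copy of the
    file may need to be queried instead.)"""
    con = sqlite3.connect(places_db)
    seen = {row[0] for row in con.execute("SELECT url FROM moz_places")}
    con.close()
    return {u for u in urls if u in seen}

def new_urls(current_fetch, previous_fetch, places_db="places.sqlite"):
    """URLs present in today's wget fetch, absent from the previous fetch
    and from the browser history (the 'diff' plus history filter)."""
    fresh = set(current_fetch) - set(previous_fetch)
    return sorted(fresh - already_seen(fresh, places_db))

def open_in_background_tabs(urls):
    for u in urls:
        subprocess.run(["firefox", "-new-tab", u])   # signal the running browser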

during the evolution of firefox 3 and its sqlite use ... there has been some adaptation to things like changes involving serialization and locking on the sqlite file.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Own a piece of the crypto wars

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Own a piece of the crypto wars
Date: Tue, 17 Jun 2008 13:53:59 -0400
To: R.A. Hettinga <rah@xxxxxxxx>
CC: cypherpunks@xxxxxxx, cryptography@xxxxxx
gold-silver-crypto@xxxxxxxx, dgcchat@xxxxx,
    Sameer Parekh
archeological email about proposal for doing pgp-like public key (from 1981):
https://www.garlic.com/~lynn/2006w.html#email810515

the internal network was larger than the arpanet/internet from just about the beginning until sometime in the summer of '85. corporate guidelines had become that all links/transmission leaving corporate facilities were required to be encrypted. in the '80s this meant lots of link encryptors (in the mid-80s, there was a claim that the internal network had over half of all the link encryptors in the world).

a major crypto problem was that just about every link crossing a national boundary created problems with the governments on both sides. links within national boundaries could usually get away with the argument that it was purely internal communication within the same corporate entity. then there was all sorts of resistance encountered attempting to apply that argument to links that crossed a national boundary (from just about every national entity).

For other archeological lore ... old posting with new networking activity for 1983
https://www.garlic.com/~lynn/2006k.html#8

above posting includes a listing of locations (around the world) that had one or more new network links (on the internal network) added sometime during 1983 (a large percentage involved connections requiring link encryptors).

more recent post
https://www.garlic.com/~lynn/2008h.html#87

mentioning coming to the realization (in the 80s) that there were three kinds of crypto.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Historical copy of PGP 5.0i for sale -- reminder of the war we lost

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Historical copy of PGP 5.0i for sale -- reminder of the war we lost
Newsgroups: alt.folklore.computers
Date: Tue, 17 Jun 2008 18:44:13 -0400
Historical copy of PGP 5.0i for sale -- reminder of the war we lost
https://financialcryptography.com/mt/archives/001064.html

there are a number of references on the subject ... i had posted this to a similar thread that is running in the crypto mailing list
https://www.garlic.com/~lynn/2008i.html#86 Own a piece of the crypto wars

regarding crypto on the internal network more than a decade earlier (which is also reproduced in the financial crypto blog).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

squirrels

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: squirrels
Newsgroups: alt.folklore.computers
Date: Wed, 18 Jun 2008 08:47:29 -0400
jmfbahciv <jmfbahciv@aol> writes:
Either I misheard the report on the news or ... thank for the correction.

there was a similar incident involving identity theft, crooks using stolen credit card numbers at a questionable website ... UK LEO getting the transaction records for the website and going after the people listed for the credit cards ... and it taking ages to establish that it was identity theft (but not until long after the damage had been done).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Technologists on signatures: looking in the wrong place

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 18, 2008 12:49 PM
Subject: Technologists on signatures: looking in the wrong place
Blog: Financial Cryptography
re:
https://financialcryptography.com/mt/archives/001056.html

a couple of recent posts in the microsoft crypto n.g. thread on "Certificate Purpose" that got into a description of digital signatures being something you have authentication
https://www.garlic.com/~lynn/2008i.html#80
https://www.garlic.com/~lynn/2008i.html#83

and there periodically being semantic confusion with "human signature" ... possibly because both terms contain the word "signature". misc. past posts about being called in to help wordsmith the cal. state electronic signature legislation (and later the federal electronic signature legislation)
https://www.garlic.com/~lynn/subpubkey.html#signature

and the oft repeated statement that "human signatures" carry the implication of having read, understood, agreed, approved, and/or authorized (which isn't part of a something you have authentication digital signature).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Certificate Purpose

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Purpose
Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin
Date: Wed, 18 Jun 2008 15:52:59 -0400
Paul Adare <pkadare@gmail.com> writes:
Wrong again. When we're talking about email certificates, whether they be signing or encryption certificates, and smart cards, the smart card is simply a more secure storage method for the issued certificates.

re:
https://www.garlic.com/~lynn/2008i.html#80 Certificate Purpose
https://www.garlic.com/~lynn/2008i.html#83 Certificate Purpose

... the chip is a more secure storage method for the private key. for digital signatures to represent something you have authentication, an established business process has to provide that the private key has never been divulged, is kept confidential, and that any specific private key is only in the possession of a single individual (the chip storage supposedly provides for high integrity and additional assurance that only a single entity has access to & use of the private key).

The public/private key process provides for the public key to be published and widely distributed. Digital certificates are a specific kind of business process for the distribution of public keys.

From the something you have authentication business process requirement for the private key ... the chip can provide a confidential storage method for the private key. The chip may also be used as a convenient storage method for the corresponding public key and any associated digital certificate (but there isn't a security requirement to keep the public key and associated digital certificates confidential ... just the reverse ... the objective is to make copies of them generally available).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Certificate Purpose

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Purpose
Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin
Date: Wed, 18 Jun 2008 16:13:22 -0400
Anne & Lynn Wheeler <lynn@garlic.com> writes:
re:
https://www.garlic.com/~lynn/2008i.html#80 Certificate Purpose
https://www.garlic.com/~lynn/2008i.html#83 Certificate Purpose


oops ... finger slip that should be
https://www.garlic.com/~lynn/2008i.html#80 Certificate Purpose
https://www.garlic.com/~lynn/2008i.html#83 Certificate Purpose

i.e. re:
https://www.garlic.com/~lynn/2008i.html#90 Certificate Purpose

oh and for a little topic drift ... some recent posts/comments about PGP which makes use of public/private key infrastructure for secure email but w/o digital certificates
https://www.garlic.com/~lynn/2008i.html#86 Own a piece of crypto wars
https://www.garlic.com/~lynn/2008i.html#87 Historical copy of PGP 5.0i for sale -- reminder of the war we lost

it also mentions/references this old email from '81
https://www.garlic.com/~lynn/2006w.html#email810515
in this post
https://www.garlic.com/~lynn/2006w.html#12 more secure communication over the network

proposing a PGP-like certificate-less public/private key operation for the internal network.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Certificate Purpose

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Purpose
Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin
Date: Wed, 18 Jun 2008 17:10:39 -0400
"David H. Lipman" <DLipman~nospam~@Verizon.Net> writes:
Tell that to the *very large* organization that I belong to where I have to sign email using my specifically for purposes of non-repudiation.

in the 90s there was quite a bit of wide-spread confusion about digital signatures being equated to human signatures (possibly because of semantic confusion because both terms contain the word "signature") and/or digital signatures (directly) provided for non-repudiation.

since then several organizations have effectively moved to the position that various kinds of additional business processes &/or services need to be used to provide for non-repudiation about *something*.

from my merged security taxonomy and glossary
https://www.garlic.com/~lynn/index.html#glosnote

... a (90s) "GSA" definition for non-repudiation:
Assurance that the sender is provided with proof of delivery and that the recipient is provided with proof of the sender's identity so that neither can later deny having processed the data. Technical non-repudiation refers to the assurance a relying party has that if a public key is used to validate a digital signature, that signature had to have been made by the corresponding private signature key. Legal non-repudiation refers to how well possession or control of the private signature key can be established.

... snip ...

more recent definition from NIST 800-60:
Assurance that the sender of information is provided with proof of delivery and the recipient is provided with proof of the sender's identity, so neither can later deny having processed the information.

... snip ...

or FFIEC:
Ensuring that a transferred message has been sent and received by the parties claiming to have sent and received the message. Non-repudiation is a way to guarantee that the sender of a message cannot later deny having sent the message and that the recipient cannot deny having received the message.

... snip ...

The current scenarios regarding non-repudiation involve additional business processes and/or services (other than entity something you have digital signatures).

For additional topic drift, one of the non-repudiation vulnerabilities for digital signatures can be a dual-use problem. Digital signatures can be used in a purely (possibly challenge/response) something you have authentication (say in place of a password). The server sends random data (as a countermeasure to replay attacks), which the client is expected to digitally sign (with the appropriate private key). The server then verifies the returned digital signature with the onfile public key (for that account). These scenarios never have the client actually examining the data being digitally signed. If the same public/private key pair is also ever used in a scenario where the entity is assumed to have actually read (understood, agrees with, approves, and/or authorizes) what is being digitally signed ... then an attack is to include other than random data in some challenge/response, something you have authentication (say some sort of payment transaction).
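
a hedged sketch (Python, "cryptography" package) of that challenge/response flow; the account table and message layout are illustrative assumptions, not any particular deployment.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

accounts = {}                               # account id -> on-file public key

def issue_challenge():
    return os.urandom(32)                   # random data as the replay countermeasure

def verify_response(account, challenge, signature):
    """Something you have authentication: a valid signature demonstrates
    possession of the private key registered for the account."""
    try:
        accounts[account].verify(signature, challenge)
        return True
    except (KeyError, InvalidSignature):
        return False

# client side: the challenge bits are signed without ever being "read";
# reusing the same key pair where reading/approval is implied (human
# signature style) is what creates the dual-use exposure described above.
client_key = Ed25519PrivateKey.generate()
accounts["acct-123"] = client_key.public_key()
c = issue_challenge()
assert verify_response("acct-123", c, client_key.sign(c))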

The countermeasure is to guarantee that a private key can only be used for digital signing of a specific kind and that it is physically impossible for that private key to be used for making any other kind of digital signature (for instance, a private key will have knowledge that the hash being encoded to form a digital signature is guaranteed to be of text that has been read & understood by you ... and w/o that knowledge, the private key will refuse to perform the encoding operation).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Certificate Purpose

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Certificate Purpose
Newsgroups: microsoft.public.security,microsoft.public.windowsxp.security_admin
Date: Thu, 19 Jun 2008 15:04:38 -0400
"Vadim Rapp" <vr@nospam.myrealbox.com> writes:
My concern is this: should the recipient trust the proof of identity that comes on the medium (certificate) that does not say it's good for the purpose of proving the identity?

re:
https://www.garlic.com/~lynn/2008i.html#80 Certificate Purpose
https://www.garlic.com/~lynn/2008i.html#83 Certificate Purpose
https://www.garlic.com/~lynn/2008i.html#90 Certificate Purpose
https://www.garlic.com/~lynn/2008i.html#91 Certificate Purpose
https://www.garlic.com/~lynn/2008i.html#92 Certificate Purpose

recipient is a relying party ... typically in the trusted 3rd party certification authority paradigm ... why do you think the word trusted appears in the press so much?

trusted 3rd party certification authorities have been typically disclaiming responsibility/liability for ages.

so there are actually a number of trust problems.

for a technical trust deficiency, most certification authorities aren't the authoritative agency for the information they are certifying (which is embodied in the digital certificate they issue).

in the case of email, the authoritative agency for an email address is typically the associated ISP. so if that ISP doesn't provide any security for passwords ... then some attacker could obtain access to the email account. they could then apply for a different digital certificate (with a different public/private key) for the same email address. Now, there is a situation where there may be two (or more) different trusted, valid, accepted digital certificates for the same email address.

a recipient's countermeasure for this sort of threat is to maintain a local repository of the correct digital certificate. however, that actually becomes the PGP model ... which only requires the recipient to maintain a local repository of the correct public key ... where digital certificates are redundant and superfluous.
https://www.garlic.com/~lynn/subpubkey.html#certless
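
a toy sketch of that local repository (PGP-style key pinning) ... the names and structure are purely illustrative:

# local repository keyed by email address; a key is pinned after being
# verified out-of-band (PGP model), and a different key later presented
# for the same address is treated as suspect ... even if some certification
# authority has issued a perfectly valid digital certificate for it
trusted_keys = {}    # email address -> pinned public key bytes

def register_sender(email, public_key_bytes):
    # pin a correspondent's public key (verified out-of-band)
    trusted_keys[email] = public_key_bytes

def verify_sender(email, presented_key_bytes):
    pinned = trusted_keys.get(email)
    if pinned is None:
        return False     # stranger ... requires out-of-band verification first
    return pinned == presented_key_bytes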

for a business trust deficiency ... parties have responsibility/liability obligations based on explicit or implicit contract. in the trusted 3rd party certification authority business model, the contract is between the certification authority and the entity that the digital certificate is issued to. there typically is no explicit, implicit, and/or implied contract between trusted 3rd party certification authorities and the relying parties that rely on the validity of the issued digital certificates ... and therefore no reason for relying parties to trust the digital certificates.

basically the trusted 3rd party certification authority business model doesn't correspond to long-established business practices. this is actually highlighted in the federal PKI program ... which has the GSA ... acting as an agent for all federal relying party entities ... signing explicit contracts with the authorized certification authorities ... creating explicit contractual obligation between the relying parties and the trusted 3rd party certification authorities ... providing a basis on which trust/reliance can be established.

another approach is the relying-party-only certification authority (i.e. the relying party actually issuing the digital certificate).
https://www.garlic.com/~lynn/subpubkey.html#rpo

the issue here is that the certification authority has, as part of the business process, something frequently referred to as registration ... where the public key is registered (prior to issuing a digital certificate). The original design point for digital certificates is first-time communication between two strangers. However, in all the relying-party-only scenarios it is normally trivial to show that the digital certificates are redundant and superfluous ... since the public key is typically registered in the same repository where other information about the subject entity is being kept ... and which is normally accessed in any dealings that the relying party will have with that entity.

as mentioned previously, the early 90s saw work on generalized x.509 identity digital certificates ... but by the mid-90s, most institutions realized that these "identity" digital certificates (frequently becoming overloaded with personal information) represented a significant privacy and liability issue. The retrenchment was to relying-party-only digital certificates which would only contain some sort of record locator ... where all the actual information resided. Again it was trivial to show that digital certificates were redundant and superfluous since this record would also contain the associated public key.

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Lynn - You keep using the term "we" - who is "we"?

From: Lynn Wheeler <lynn@xxxxxxxx>
Date: June 20, 2008
Subject: Lynn - You keep using the term "we" - who is "we"?
Blog: Information Security - UK
my wife and I worked together on many of these activities.

for instance, we had done a high-speed data transport project (HSDT)
https://www.garlic.com/~lynn/subnetwork.html#hsdt

and we were working with various organizations in the lead-up to NSFNET. TCP/IP is somewhat the technical basis for the modern internet, the NSFNET backbone was the operational basis for the modern internet, and then CIX was the business basis for the modern internet. However, internal politics got in the way of our bidding on the NSFNET backbone. The director of NSF tried to help by writing a letter to the company (3Apr1986, NSF Director to IBM Chief Scientist and IBM Senior VP and director of Research, copying IBM CEO), including stating that an audit found what we already had running internally was at least five years ahead of all NSFNET backbone bid submissions. But that just made the internal politics worse. Some old email regarding NSFNET related activities
https://www.garlic.com/~lynn/lhwemail.html#nsfnet

later we ran the ha/cmp project that resulted in developing and shipping the HA/CMP product ... misc. past posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

for some tie-in (between high-availability, cluster scale-up, supercomputers, relational databases and SSL) ... two of the people in this ha/cmp scale-up meeting in ellison's conference room
https://www.garlic.com/~lynn/95.html#13

show up later at a small client/server startup responsible for something called a commerce server. the startup had invented something called SSL and they wanted to apply it as part of implementing payment transactions on their server. The result is now frequently referred to as electronic commerce.

old email on the cluster scale-up aspect
https://www.garlic.com/~lynn/lhwemail.html#medusa

We both attended the Jim Gray tribute a couple weeks ago at Berkeley. Random other database tidbits ... including working on the original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

In this patent portfolio involving security, authentication, access control, hardware tokens, etc ... we are the co-inventors
https://www.garlic.com/~lynn/aadssummary.htm

and in one of her prior lives ... long ago and far away ... she had been con'ed into going to POK to serve as the (corporate) loosely-coupled (mainframe terminology for cluster) architect ... where she was responsible for Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

... which, except for IMS hot-standby, saw very little takeup until SYSPLEX (one of the reasons that she didn't stay very long in the position).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

Accidentally Deleted or Overwrote Files?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Accidentally Deleted or Overwrote Files?
Newsgroups: alt.folklore.computers
Date: Fri, 20 Jun 2008 14:47:28 -0400
stremler writes:
If it was important, you'd have a hardcopy printed out in the vault.

part of Bruce's talk at Jim Gray's tribute was about Jim's work on formalizing transaction semantics ... and that was the great enabler for online transactions ... being able to have enough trust in computers to do things that had previously been manual/hardcopy.

some recent posts:
https://www.garlic.com/~lynn/2008i.html#32 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#34 American Airlines
https://www.garlic.com/~lynn/2008i.html#36 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#37 American Airlines
https://www.garlic.com/~lynn/2008i.html#40 A Tribute to Jim Gray: Sometimes Nice Guys Do Finish First
https://www.garlic.com/~lynn/2008i.html#54 Trusted (mainframe) online transactions
https://www.garlic.com/~lynn/2008i.html#62 Ransomware
https://www.garlic.com/~lynn/2008i.html#63 DB2 25 anniversary
https://www.garlic.com/~lynn/2008i.html#70 Next Generation Security
https://www.garlic.com/~lynn/2008i.html#94 Lynn - You keep using the term "we" - who is "we"?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

A Blast from the Past

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Blast from the Past
Newsgroups: alt.folklore.computers
Date: Sun, 22 Jun 2008 09:59:53 -0400
Quadibloc <jsavard@ecn.ab.ca> writes:
It may be that some large server boxes do use water cooling that involves a connection to the city water supply. But I have a suspicion that this is a notice whose wording dates back to those vanished days of traditional mainframe computers - despite being laser printed and proportionally-spaced and everything.

recent post mentioning folklore about berkeley's 6600 regular thermal shutdown (something like 10am tuesdays)
https://www.garlic.com/~lynn/2008i.html#57 Microsoft versus Digital Equipment Corporation

old post of science center's 360/67 in the 2nd flr machine room at 545 tech sq (open system, dumping directly into cambridge sewer, 40yrs ago) and looking at replacing with a closed system ... but a big question was the weight loading of the water tower on the bldg. roof:
https://www.garlic.com/~lynn/2000b.html#86 write rings

other posts mentioning science center at 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

things evolved into (multiple levels of) closed systems with heat exchange interfaces (with a requirement for very pure liquid in the closed system actually circulating thru the machine). there is old folklore about one such early customer installation that had all sorts of sensors that would trip thermal shutdown (avoiding overheating and damage to machine components). the particular problem was that there wasn't a flow sensor on the external system next to the machine (there were flow sensors on the internal system) ... by the time the internal thermal sensors started to notice a temperature rise (because flow had stopped on the external side) it was too late ... there was too much heat on the internal side which couldn't be dumped.

misc. posts mentioning (closed system) heat exchange:
https://www.garlic.com/~lynn/2000b.html#36 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2001k.html#4 hot chips and nuclear reactors
https://www.garlic.com/~lynn/2004p.html#35 IBM 3614 and 3624 ATM's
https://www.garlic.com/~lynn/2004p.html#41 IBM 3614 and 3624 ATM's

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

We're losing the battle

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: We're losing the battle
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 22 Jun 2008 10:36:09 -0400
Robert.Richards@OPM.GOV (Richards, Robert B.) writes:
Most major banks that I am aware of do have parallel sysplexes in their data centers. I suspect that we are not talking about mainframe system availability here but rather whether their distributed servers which are running the front-end banking applications are highly available. High availability on IBM's System p is on the verge of becoming a real possibility since the Power 6/AIX 6 stuff was announced, but that infrastructure design certainly is not widespread across the banking footprint as of yet!

I wouldn't say we are necessarily losing the battle. Linux on System z (among other things) has been working on leveling the playing field for awhile now. Server consolidation on "Project Green" type initiatives, etc. are also in vogue. The smarter shops are attempting to stop the unrestrained proliferation of blades and racks.


ha/cmp project started two decades ago
https://www.garlic.com/~lynn/subtopic.html#hacmp

old post about deploying ha/cmp scale-up before the project got redirected and we were told to not work on anything more than four processors
https://www.garlic.com/~lynn/95.html#13

misc. old email regarding ha/cmp scale-up activity
https://www.garlic.com/~lynn/lhwemail.html#medusa

i've frequently commented that (much) earlier, my wife had been con'ed into going to POK to be in charge of loosely-coupled architecture where she created Peer-Coupled Shared Data ... misc. past posts
https://www.garlic.com/~lynn/submain.html#shareddata

but, except for IMS hot-standby ... it saw very little take-up until much later with sysplex (and parallel sysplex) activity ... which contributed to her not staying very long in the position.

another issue in that period was that she had constant battles with the communication division over the protocols used for the infrastructure. in the early sna days ... she had co-authored a peer-to-peer networking architecture (AWP39) ... so some in the communication division may have viewed her efforts as somewhat competitive. while she was in POK, they had come to a (temporary) truce ... where the communication division's protocols had to be used for anything that crossed the boundary of the glasshouse ... but she could specify the protocols used for peer-coupled operation within the walls of the glasshouse.

part of doing ha/cmp on a non-mainframe platform was avoiding being limited by the communication division. for some topic drift, other past posts mentioning conflict with the communication division when we came up with 3-tier architecture and were out pitching it to customer executives
https://www.garlic.com/~lynn/subnetwork.html#3tier

recent ha/cmp related post (from thread mentioning tribute to Jim Gray)
https://www.garlic.com/~lynn/2008i.html#50 Microsoft versus Digital Equipment Corporation
https://www.garlic.com/~lynn/2008i.html#51 Microsoft versus Digital Equipment Corporation

the first talk at the tribute was by Bruce Lindsay mentioning that Jim's formalizing of transaction semantics was the great enabler for online transactions (providing the necessary trust in computer operation to move off the manual/paper operation).

now related to the meeting mentioned in this referenced post
https://www.garlic.com/~lynn/95.html#13

two of the people mentioned in the meeting later show up in a small client/server startup responsible for something called a commerce server. we were called in to consult because they wanted to do payment transactions on the server ... and they had this technology that the startup had invented called SSL which they wanted to use. As part of doing payment transactions on the server ... there was the creation of something called a payment gateway that servers would interact with. lots of past posts mentioning this thing called payment gateway
https://www.garlic.com/~lynn/subnetwork.html#gateway

btw, we used ha/cmp for the payment gateway implementation (with some number of enhancements and compensating procedures). this is now frequently referred to as electronic commerce.

recent post related some other aspects of the period (in an information security blog)
https://www.garlic.com/~lynn/2008i.html#94 Lynn - You keep using the term "we" - who is "we"?

one of the other things mentioned at the tribute was Jim's work on analysing where the majority of outages are happening (a frequently cited study that outages are rarely hardware anymore). when we were out marketing the ha/cmp product, we had coined the terms disaster survivability and geographic survivability ... to differentiate from simple disaster/recovery. we were also asked to write a section for the corporate continuous availability strategy document. however, the section was removed because both rochester and POK complained that they wouldn't be able to match (what we were doing) for some number of years
https://www.garlic.com/~lynn/submain.html#available

for other drift, a recent post discussing the evolution from medusa to blades ... the really major green enabler was marrying virtualization and blades (as part of server consolidation)
https://www.garlic.com/~lynn/2008h.html#45 How can companies decrease power consumption of their IT infrastructure?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

dollar coins

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: dollar coins
Newsgroups: alt.folklore.computers
Date: Sun, 22 Jun 2008 12:47:04 -0400
Larry Elmore <ljelmore@verizon.spammenot.net> writes:
The raiding started under LBJ. And there never was a "trust fund" in the usual sense of those words.

i've heard people make references to a bottom desk drawer somewhere that has slips of paper that are the IOUs to the social security system. one might consider an analogy with the payday loan business ... where the advertisements now say "borrow responsibly" and never borrow more than you can pay back within a few paydays (i.e. use it only for emergencies ... not as part of getting from one paycheck to the next) ... i.e. the borrowing out of the social security system wasn't a one time "temporary" thing.

recent posts mentioning (former) comptroller general on responsible budgets (making comment that nobody in congress for the past 50 yrs has been capable of simple middle school arithmetic).
https://www.garlic.com/~lynn/2008.html#57 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008d.html#40 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008e.html#50 fraying infrastructure
https://www.garlic.com/~lynn/2008f.html#86 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom
https://www.garlic.com/~lynn/2008h.html#26 The Return of Ada

part of it is that just the medicare drug legislation (by itself) creates tens of trillions in unfunded liability and that various "social" program spending is starting to dwarf all other gov. budget items (combined).

one of the (oft repeated) references (from the gov. gao site) shows the gov. budget in '66 as 43% defense & 15% social security; in '88, 28% defense and 20% social security; and in 2006, 20% defense and 21% social security (and 19% medicare & medicaid). in '66, the budget was 7% debt interest, 67% discretionary spending and 26% mandatory spending. in '86, the budget was 14% debt interest, 44% discretionary spending and 42% mandatory. In '06, the budget was 9% debt interest, 38% discretionary and 53% mandatory. And by 2040, federal budget debt interest, federal social security and federal medicare/medicaid will be nearly 30% of GDP.

another view of this i've raised is with respect to the baby boomer retirement ... the significant baby boomer population bubble increases the number of retirees by something like a factor of four ... with the number of workers in the following generation only a little over half the number of baby boomer workers. The net is that the ratio of retirees to workers increases by a factor of eight ... aka, each worker will have to pay eight times as much to provide the same level of per-retiree benefits (there was a recent program that mentioned some court ruled that the IRS isn't allowed to have a tax rate higher than 100%)
https://www.garlic.com/~lynn/2008f.html#99 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008g.html#1 The Workplace War for Age and Talent
https://www.garlic.com/~lynn/2008h.html#3 America's Prophet of Fiscal Doom
https://www.garlic.com/~lynn/2008h.html#26 The Return of Ada
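
a back-of-envelope version of that retiree/worker arithmetic (the factor-of-four and the a-little-over-half figures are the rough numbers from above):

retirees_factor = 4.0     # baby boomer bubble roughly quadruples the retirees
workers_factor = 0.5      # following generation has roughly half the workers
ratio_increase = retirees_factor / workers_factor
print(ratio_increase)     # 8.0 ... each worker carries ~8x the per-retiree load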

other recent comments about baby boomer retirement issues
https://www.garlic.com/~lynn/2008b.html#3 on-demand computing
https://www.garlic.com/~lynn/2008c.html#16 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#69 Toyota Beats GM in Global Production
https://www.garlic.com/~lynn/2008h.html#11 The Return of Ada
https://www.garlic.com/~lynn/2008h.html#57 our Barb: WWII
https://www.garlic.com/~lynn/2008i.html#56 The Price Of Oil --- going beyong US$130 a barrel

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

We're losing the battle

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: We're losing the battle
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 22 Jun 2008 13:02:38 -0400
R.Skorupka@BREMULTIBANK.COM.PL (R.S.) writes:
Parallel Sysplex has nothing to do with that. You're talking about *banking system* which consist of many elements, optionally including

PS. Even if the PS is there and is really available, it doesn't mean the system (banking system) will be available. I work under SLA which allows me to have 8 hours of planned outage per year. No sysplex. I have never reached the limit, because I easily "share" outages demanded by other components. In such case PS adds almost no value, and is not the factor of banking system availability.


re:
https://www.garlic.com/~lynn/2008i.html#97 We're losing the battle

working on ha/cmp we looked at a customer that required five-nines availability ... five minutes of outage (planned & unplanned) per year.

on the other hand ... one of the large financial transaction networks has claimed 100% availability over an extended number of years ... using triple-redundant IMS hot-standby and multiple geographic locations.

slight drift ... recent Information Security blog post
https://www.garlic.com/~lynn/2008i.html#17 Does anyone have any IT data center disaster stories?

made a passing reference in a previous post with regard to contention with the communication division. the tcp/ip mainframe product had significant performance issues ... consuming nearly a full 3090 processor getting 44kbytes/sec thruput. I enhanced the product with RFC1044 support and in some tuning tests at Cray Research got 1mbyte/sec (hardware limitation) sustained between a Cray and a 4341-clone (using only a modest amount of the 4341) ... aka nearly three orders of magnitude increase in the ratio of bytes transferred per instruction executed. misc. past posts mentioning rfc1044 support
https://www.garlic.com/~lynn/subnetwork.html#1044
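
a rough check of that three-orders-of-magnitude claim ... only the two thruput numbers are from above; the instruction rates and the 4341 utilization below are illustrative assumptions:

base_bytes_per_sec = 44_000           # base product: ~44kbytes/sec
base_instr_per_sec = 30e6             # assume a 3090 processor ~30 MIPS, fully consumed

rfc1044_bytes_per_sec = 1_000_000     # ~1mbyte/sec, hardware limited
clone_instr_per_sec = 1.2e6           # assume a 4341-class machine ~1.2 MIPS
clone_utilization = 0.20              # "modest amount" of the 4341 ... assumed 20%

base_eff = base_bytes_per_sec / base_instr_per_sec                        # bytes per instruction
rfc1044_eff = rfc1044_bytes_per_sec / (clone_instr_per_sec * clone_utilization)
print(rfc1044_eff / base_eff)         # ~2800x ... i.e. roughly three orders of magnitude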

another area of conflict ... as part of the hsdt project
https://www.garlic.com/~lynn/subnetwork.html#hsdt

the friday before we were to leave on a trip to the other side of the pacific to discuss some custom-built hardware for hsdt ... somebody (from the communication division) announced a new online conference in the area of high-speed communication ... and specified the following definitions:


low-speed               <9.6kbits
medium-speed            19.2kbits
high-speed              56kbits
very high-speed         1.5mbits

the following monday, on the wall of the conference room on the other side of the pacific, were these definitions:

low-speed               <20mbits
medium-speed            100mbits
high-speed              200-300mbits
very high-speed         >600mbits

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

OS X Finder windows vs terminal window weirdness

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS X Finder windows vs terminal window weirdness
Newsgroups: alt.folklore.computers
Date: Sun, 22 Jun 2008 15:17:05 -0400
Roland Hutchinson <my.spamtrap@verizon.net> writes:
But... but... but...

...that's precisely how people use computers nowadays.


traditionally, time-sharing was used to refer to systems that provided concurrent usage by multiple users ... time-sharing the computing resources concurrently between multiple users. misc. past posts referring to (mostly commercial) time-sharing systems
https://www.garlic.com/~lynn/submain.html#timeshare

running multiple things concurrently on a computer has traditionally been referred to as multitasking, multiprogramming, concurrent programming, parallel computing, time-slicing, etc. traditional time-sharing systems have been implemented using these technologies for running multiple things concurrently.

for instance, online transaction systems have also tended to run multiple things concurrently ... using multitasking, multiprogramming, concurrent programming, parallel computing, time-slicing, etc technologies ... but usually are differentiated from traditional time-sharing systems.

in that sense, modern webservers tend to have more in common with online transaction systems than traditional time-sharing systems ... although that doesn't preclude deploying a webserver on a traditional time-sharing system (or for that matter online systems).

cp67 and vm370 tended to be deployed as time-sharing systems ... supporting large numbers of different users concurrently ... and the first webserver deployed outside of europe (in the US) was on the SLAC vm370 system. but that webserver was more akin to the current virtual appliance.
http://www.slac.stanford.edu/history/earlyweb/history.shtml

... slac and cern were similar computing operations.

one of the current issues that is frequently raised is the severe lack of adequate parallel (aka concurrent) computing operation ... especially for desktop systems.

online transaction systems and time-sharing systems provide lots of concurrently, independently executable work that can take advantage of multi-core operation ... but it is becoming a significant problem for desktop applications to take advantage of the newer generation of chips where multi-core is standard. in the online transaction systems and time-sharing systems ... extensive concurrent workload (that may have been time-sliced on a single processor) can now actually run concurrently on different cores/processors.
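
a toy illustration of that point ... lots of independent transactions map naturally onto multiple cores with no coordination between them (the workload and worker count here are arbitrary):

from concurrent.futures import ProcessPoolExecutor

def handle_transaction(txn_id):
    # each "transaction" is independent work ... nothing shared to coordinate
    return txn_id, sum(i * i for i in range(100_000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:    # one worker per core by default
        results = list(pool.map(handle_transaction, range(32)))
    print(len(results), "transactions completed")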

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

We're losing the battle

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: We're losing the battle
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
To: <ibm-main@bama.ua.edu>
Date: Sun, 22 Jun 2008 15:45:48 -0400
Efinnell15@AOL.COM (Ed Finnell) writes:
They do, but my suspicion is that in multi-tiered model some things got overlooked in the PCI/HIPAA redesign-all those bytes, so little time!

previous post (in this thread)
https://www.garlic.com/~lynn/2008i.html#97 We're losing the battle
https://www.garlic.com/~lynn/2008i.html#99 We're losing the battle

mentioned a post in an information security blog. the main part of that particular blog thread was related to the majority of the breaches that get in the news (something that PCI has been targeted at addressing).

the thread started out regarding a study that something like 84% of IT managers don't believe they need to comply with breach notification and 61% don't even believe they should notify law enforcement.

parts of the thread is repeated here
https://www.garlic.com/~lynn/2008i.html#21 Worst Security Threats?

after working on what is now frequently referred to as electronic commerce (mentioned earlier in this thread), we were brought into the x9a10 financial standard working group which, in the mid-90s, had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. as part of that we did detailed end-to-end risk, threat, and vulnerability studies. a couple highlights

1) security proportional to risk ... crooks/attackers may be able to outspend defenders 100-to-1. the information for the crooks is basically worth the value of the account balance or credit limit. the information for the merchants is basically worth some part of the profit off the transaction. the value of the information to the crooks may be 100 times the value to the merchants ... as a result, the crooks may be able to outspend the defenders 100 times over attacking the system. traditional military lore has something like attackers needing 3-5 times the resources to attack a fortified fixed position. potentially being able to marshal 100 times the resources almost guarantees a breach someplace (toy numbers follow after the second highlight below).

2) account number and transaction information have diametrically opposing security requirements ... on one hand the information has to be kept confidential and never used or divulged (a countermeasure to the account fraud flavor of identity theft). on the other hand, the information is required to be available for numerous business processes as part of normal transaction processing. we've periodically commented that even if the planet were buried under miles of information-hiding cryptography, it still couldn't prevent information leakage.
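
toy numbers for the security proportional to risk point ... the specific dollar figures are made up purely for illustration, the 100-to-1 structure is the point:

credit_limit = 5_000.00      # attacker's upside: account balance / credit limit
merchant_profit = 50.00      # defender's upside: some part of the profit on a transaction

print(credit_limit / merchant_profit)   # 100.0 ... crooks can rationally outspend ~100-to-1
# vs. traditional military lore: 3-5 times the resources to take a fortified fixed position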

so one of the things done in x9a10 as part of the x9.59 financial transaction standard was to slightly tweak the paradigm ... making the information useless to the attackers. x9a10 & x9.59 didn't address any issues regarding eliminating breaches ... it just eliminated the threat/risk from such breaches (and/or information leakage).
https://www.garlic.com/~lynn/x959.html#x959
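
a minimal sketch of that paradigm tweak ... assuming (per the x9.59 discussion above) that a transaction must carry a digital signature verified against a public key registered for the account, so a leaked/harvested account number is useless by itself; the algorithm and names are illustrative, not from the standard:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

account_key = Ed25519PrivateKey.generate()
registered_public_keys = {"acct-1234": account_key.public_key()}   # kept by the consumer's financial institution

def authorize(account, transaction_bytes, signature):
    # accept a transaction only if signed by the account holder's registered key
    key = registered_public_keys.get(account)
    if key is None:
        return False
    try:
        key.verify(signature, transaction_bytes)
        return True
    except InvalidSignature:
        return False

txn = b"acct-1234 pays merchant-9 $29.95"
ok = authorize("acct-1234", txn, account_key.sign(txn))   # True
bad = authorize("acct-1234", txn, b"\x00" * 64)           # False ... account number alone is useless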

now the major use of SSL in the world today is that previously mentioned stuff now frequently referred to as *electronic commerce* ... where it is used to hide account numbers and payment transaction information. The x9.59 financial standard effectively eliminates that SSL use since it is no longer necessary to hide that information (as a countermeasure to the account fraud form of identity theft).

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

OS X Finder windows vs terminal window weirdness

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS X Finder windows vs terminal window weirdness
Newsgroups: alt.folklore.computers
Date: Sun, 22 Jun 2008 17:05:00 -0400
Peter Flass <Peter_Flass@Yahoo.com> writes:
If you have ever written a CICS transaction, you get the idea of a CGI pretty quickly.

re:
https://www.garlic.com/~lynn/2008i.html#100 OS X Finder windows vs terminal window weirdness

for even more topic drift, when i was an undergraduate, the univ. library had gotten an ONR grant for library automation and was also selected to be a beta-test site for the original CICS product release ... and i got tasked to work on supporting the effort ... even shooting bugs in CICS. misc. past posts mentioning CICS (and/or BDAM)
https://www.garlic.com/~lynn/submain.html#bdam

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

OS X Finder windows vs terminal window weirdness

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS X Finder windows vs terminal window weirdness
Newsgroups: alt.folklore.computers
Date: Sun, 22 Jun 2008 21:50:16 -0400
re:
https://www.garlic.com/~lynn/2008i.html#100 OS X Finder windows vs terminal window weirdness
https://www.garlic.com/~lynn/2008i.html#102 OS X Finder windows vs terminal window weirdness

for similar description ... wiki time-sharing page
https://en.wikipedia.org/wiki/Time-sharing

both cp67 (4th flr, 545 tech sq) and multics (one flr up) trace back to ctss. misc. posts mentioning 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

above article also mentions wiki time-sharing system evolution page
https://en.wikipedia.org/wiki/Time-sharing_system_evolution

the wiki article mentions ncss and tymshare as commercial time-sharing services that took off with cp67 & vm370 ... other references to commercial time-sharing
https://www.garlic.com/~lynn/submain.html#timeshare

it didn't mention IDC, which was another commercial cp67 spin-off about the same time as ncss ... but the time-sharing system evolution page does have a pointer to the idc wiki page:
https://en.wikipedia.org/wiki/Interactive_Data_Corporation

for additional drift, cp/cms history wiki page:
https://en.wikipedia.org/wiki/History_of_CP/CMS

--
40+yrs virtualization experience (since Jan68), online at home since Mar70

dollar coins

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: dollar coins
Newsgroups: alt.folklore.computers
Date: Mon, 23 Jun 2008 16:37:35 -0400
Lars Poulsen <lars@beagle-ears.com> writes:
The conservative funds were investing large percentages in what they used to describe as "AAA rated bonds", but which now turn out to have been "collateralized debt obligations and related insurance options".

CDOs were used two decades ago in the S&L crisis to obfuscate the underlying value.

long-winded, decade-old post discussing some of the current problems ... including the need to have visibility into the underlying value of the stuff that makes up toxic CDO instruments (rather than hiding/obfuscating)
https://www.garlic.com/~lynn/aepay3.htm#riskm

business news programs are still claiming that there is $1 trillion inflation in these instruments and so far there have only been about $400b in write-downs ... so there are possibly still $600b in write-downs to come.

much of that $1 trillion was pumped into the real-estate market bubble ... a simplified assumption is that if there is a $1 trillion write-down in the valuation of the toxic CDOs ... there is a corresponding $1 trillion of deflating pressure on the real-estate market bubble.

misc. recent posts mentioning toxic CDOs:
https://www.garlic.com/~lynn/2008.html#66 As Expected, Ford Falls From 2nd Place in U.S. Sales
https://www.garlic.com/~lynn/2008.html#70 As Expected, Ford Falls From 2nd Place in U.S. Sales
https://www.garlic.com/~lynn/2008.html#90 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008b.html#12 Computer Science Education: Where Are the Software Engineers of Tomorrow?
https://www.garlic.com/~lynn/2008b.html#75 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#11 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#13 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#21 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#63 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008c.html#87 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008d.html#85 Toyota Sales for 2007 May Surpass GM
https://www.garlic.com/~lynn/2008e.html#42 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008e.html#65 Banks failing to manage IT risk - study
https://www.garlic.com/~lynn/2008e.html#70 independent appraisers
https://www.garlic.com/~lynn/2008f.html#1 independent appraisers
https://www.garlic.com/~lynn/2008f.html#10 independent appraisers
https://www.garlic.com/~lynn/2008f.html#17 independent appraisers
https://www.garlic.com/~lynn/2008f.html#32 independent appraisers
https://www.garlic.com/~lynn/2008f.html#43 independent appraisers
https://www.garlic.com/~lynn/2008f.html#46 independent appraisers
https://www.garlic.com/~lynn/2008f.html#51 independent appraisers
https://www.garlic.com/~lynn/2008f.html#52 independent appraisers
https://www.garlic.com/~lynn/2008f.html#53 independent appraisers
https://www.garlic.com/~lynn/2008f.html#57 independent appraisers
https://www.garlic.com/~lynn/2008f.html#71 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#75 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#77 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#79 Bush - place in history
https://www.garlic.com/~lynn/2008f.html#94 Bush - place in history
https://www.garlic.com/~lynn/2008g.html#4 CDOs subverting Boyd's OODA-loop
https://www.garlic.com/~lynn/2008g.html#11 Hannaford case exposes holes in law, some say
https://www.garlic.com/~lynn/2008g.html#16 independent appraisers
https://www.garlic.com/~lynn/2008g.html#32 independent appraisers
https://www.garlic.com/~lynn/2008g.html#36 Lehman sees banks, others writing down $400 bln
https://www.garlic.com/~lynn/2008g.html#37 Virtualization: The IT Trend That Matters
https://www.garlic.com/~lynn/2008g.html#44 Fixing finance
https://www.garlic.com/~lynn/2008g.html#51 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#52 IBM CEO's remuneration last year ?
https://www.garlic.com/~lynn/2008g.html#59 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#62 Credit crisis could cost nearly $1 trillion, IMF predicts
https://www.garlic.com/~lynn/2008g.html#64 independent appraisers
https://www.garlic.com/~lynn/2008g.html#67 independent appraisers
https://www.garlic.com/~lynn/2008h.html#1 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#28 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#32 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#48 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#49 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#51 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008h.html#89 Credit Crisis Timeline
https://www.garlic.com/~lynn/2008h.html#90 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008i.html#4 A Merit based system of reward -Does anybody (or any executive) really want to be judged on merit?
https://www.garlic.com/~lynn/2008i.html#30 subprime write-down sweepstakes
https://www.garlic.com/~lynn/2008i.html#64 Is the credit crunch a short term aberation
https://www.garlic.com/~lynn/2008i.html#77 Do you think the change in bankrupcy laws has exacerbated the problems in the housing market leading more people into forclosure?

--
40+yrs virtualization experience (since Jan68), online at home since Mar70



