From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Wed, 19 Apr 2006 19:43:21 -0600

Donald Tees writes:
this references
http://www.pavingexpert.com/blokroad.htm
has the following:
Before any pavement construction can be specified, it is necessary to
quantify the value of 2 variables....
* The CBR (California Bearing Ratio) of the sub-grade
* The expected traffic volumes and types over the design life
of the pavement (usually 20 years)
... snip ...
http://www.fhwa.dot.gov/hfl/about.cfm
has the following:
Highways built to last 25 years take such a pounding from the level
and the weight of traffic that they rarely last that long. Almost a
third of all bridges in the country are either structurally deficient
or functionally obsolete. Yet, many highway engineers agree that
50-year pavements and 100-year bridges should be attainable using
current technology.
... snip ...
http://www.eng-tips.com/viewthread.cfm?qid=113163&page=1
has the following:
QUOTE: IT IS RECOMMENDED THE NEW PAVEMENT FOR 30 TO 50 YEAR RANGE
DESIGN LIFE TO ACCOMMODATE THE 75 YEAR LIFE OF THE BRIDGE
STRUCTURE. THE DESIGNER SHOULD CHOOSE THE DESIGN WHICH PROVIDES THE
MAXIMUM DESIGN LIFE WITHIN THE BUDGET CONSTRAINTS.
... snip ...
also from above:
For pavement design, in the UK the design life could be as little as
10 years, with increasing requirements for stiffness to prolong
life. As stiffness goes up, so should the 'potential' life of the
pavement.
... snip ...
for some drift, the World Bank has a reference here that summarizes some of
the software tools they have for designing roads
http://www.worldbank.org/html/fpd/transport/roads/rd_tools/hdm3.htm
....
http://www.ctre.iastate.edu/educweb/ce453/labs/Lab%2003%20Design%20Traffic.doc
has the following
In the design of any street or highway the designer must select a
design life for the facility. In the case of most highways the
geometric design life is 30-60 years and the pavement design life is
20-30 years, taken from the date of construction. The length of the
design life is affected by policies established before and during the
project's life for the environment in which it is placed. Residential
and commercial development, geography, expected traffic, and soil
conditions may affect the performance of the pavement and the
operation of the facility. The objective of this laboratory exercise
is to estimate the traffic volumes and percentage of cars and trucks
that will use the facility over the course of the next thirty years.
This estimate will be used in the design of the pavement thickness,
completion of the environmental impact statement, and will affect the
horizontal and vertical alignments.
... snip ...
and of course from previous post
misc road construction ref:
https://web.archive.org/web/19990508000322/http://www.dot.ca.gov/hq/oppd/hdm/chapters/t603.htm
603.1 Introduction
The primary goal of the design of the pavement structural section is
to provide a structurally stable and durable pavement and base system
which, with a minimum of maintenance, will carry the projected traffic
loading for the designated design period. This topic discusses the
factors to be considered and procedures to be followed in developing a
projection of truck traffic for design of the "pavement structure" or
the structural section for specific projects.
Pavement structural sections are designed to carry the projected truck
traffic considering the expanded truck traffic volume, mix, and the
axle loads converted to 80 kN equivalent single axle loads (ESAL's)
expected to occur during the design period. The effects on pavement
life of passenger cars, pickups, and two-axle trucks are considered to
be negligible.
Traffic information that is required for structural section design
includes axle loads, axle configurations, and number of
applications. The results of the AASHO Road Test (performed in the
early 1960's in Illinois) have shown that the damaging effect of the
passage of an axle load can be represented by a number of 80 kN
ESAL's. For example, one application of a 53 kN single axle load was
found to cause damage equal to an application of approximately 0.23 of
an 80 kN single axle load, and four applications of a 53 kN single
axle were found to cause the same damage (or reduction in
serviceability) as one application of an 80 kN single axle.
... snip ...
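The AASHO load-equivalency idea quoted above is commonly approximated with the generalized fourth-power law. A minimal sketch (the quoted 0.23 factor comes from the full AASHO regression equations, which also depend on pavement structure; the simple power law below gives roughly 0.19 for a 53 kN axle, so treat this as an approximation, not the official AASHO formula):

```python
def esal_factor(axle_load_kn, standard_kn=80.0, exponent=4.0):
    """Approximate equivalent single axle loads (ESALs) per pass of one axle,
    using the generalized fourth-power law: (load / standard load) ** 4."""
    return (axle_load_kn / standard_kn) ** exponent

# one pass of a 53 kN single axle, relative to the 80 kN standard axle
print(round(esal_factor(53), 2))   # ~0.19 with the simple power law
```

The steep exponent is the whole story of the thread: halving an axle load cuts its per-pass damage by roughly a factor of sixteen, which is why cars and pickups are treated as negligible.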
from the previous posting extract on the "design of pavement
structural section" the paragraph:
Pavement structural sections are designed to carry the projected truck
traffic considering the expanded truck traffic volume, mix, and the
axle loads converted to 80 kN equivalent single axle loads (ESAL's)
expected to occur during the design period. The effects on pavement
life of passenger cars, pickups, and two-axle trucks are considered to
be negligible.
... snip ...
the last sentence in the referenced paragraph comments that
The effects on pavement life of passenger cars, pickups, and two-axle
trucks are considered to be negligible.
... snip ...
previous refs:
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2002j.html#41 Transportation
https://www.garlic.com/~lynn/2002j.html#42 Transportation
https://www.garlic.com/~lynn/2006g.html#5 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#10 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#12 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#15 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#46 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#50 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#58 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 07:35:39 -0600

jmfbahciv writes:
goes into quite a lot of analysis of various mechanisms looking at fully apportioned costs related to highway use. one comment is that several states have gone to a "3rd" kind of fee for heavy trucks, related to (loaded) weight and miles traveled, to more accurately apportion road use costs; fuel tax doesn't accurately apportion costs for heavy trucking, and even registration fees based on gross vehicle weight fail to adequately reflect accurately apportioned road use costs.
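The "3rd" fee described above could be sketched as a weight-distance charge. The 80 kN standard axle and the fourth-power damage relationship come from the road-design references earlier in the thread; the rate constant and function names here are purely hypothetical illustrations, not any state's actual schedule:

```python
STANDARD_AXLE_KN = 80.0  # standard single axle from the design references

def axle_damage_factor(axle_load_kn):
    # fourth-power approximation: damage per pass relative to the standard axle
    return (axle_load_kn / STANDARD_AXLE_KN) ** 4

def weight_distance_fee(axle_loads_kn, miles, rate_per_esal_mile=0.03):
    # fee proportional to (per-pass damage of the loaded truck) x (miles travelled);
    # the rate constant is a made-up placeholder for illustration only
    esals_per_pass = sum(axle_damage_factor(a) for a in axle_loads_kn)
    return esals_per_pass * miles * rate_per_esal_mile

# a hypothetical 5-axle truck with two 80 kN and three lighter axles, 1,000 miles
print(round(weight_distance_fee([80, 80, 53, 53, 53], 1000), 2))
```

The point of such a fee structure is that it scales with both loaded weight and distance, which neither fuel tax nor a flat GVW registration fee does.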
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 07:56:53 -0600

jmfbahciv writes:
so i could claim that the lack of quantitative analysis as part of economic policies (in support of intangible societal benefits) is somewhat the basis for the comment in the comptroller general's talk about programs lacking instrumentation, metrics, and audits; i.e. attempting to validate that various program fundings, which have been described as having qualitative social benefits, actually have had any real, measurable benefits.
past posts mentioning the comptroller general's talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#14 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
... i.e.
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future
without actual data about correctly apportioned costs by various (heavy trucking) activity ... it would be difficult to do a cost/benefit analysis of the benefits of possible subsidies for long-haul heavy truck transportation.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 09:34:10 -0600

Anne & Lynn Wheeler writes:
now *intangible social benefits* somewhat implies that it can't be measured. however, sometimes *intangible social benefits* is a codeword for "measurement and cost/benefit analysis isn't allowed" (lest it be discovered that somebody's pet program isn't providing any general benefits).
i caught bits & pieces of discussion last night about directed appropriations ... I think I heard that one transportation bill had something like 3000 amendments for directed appropriations and something like $26B was involved (I may have gotten it wrong ... the number of directed appropriation amendments may have been spread over a larger number of bills ... but mostly they supposedly had little or nothing to do with transportation).
part of comptroller general's comments may be that you might be able to at least determine whether there has been any change at all after some funding.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 10:27:04 -0600

ref:
and the comptroller general's talk
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future
there is this folklore that in the wake of Chuck Spinney's (one of Boyd's compatriots) congressional testimony in the early 80s, regarding Spinney's analysis of numerous Pentagon spending programs (drawn from purely non-classified sources) ... which got some pretty extensive press coverage ... the pentagon created a new document classification "No-Spin" (aka would be downright embarrassing in the hands of Chuck Spinney)
Boyd had this story that since they couldn't get Spinney for being required to tell the truth in testimony in front of Congress, the secdef blamed Boyd (for likely having masterminded the whole thing) and had orders cut for Boyd to be assigned to someplace in Alaska, along with a life-time ban on Boyd ever being allowed to enter the Pentagon building.
misc. past posts mentioning John Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd
misc. posts from around the web mentioning John Boyd
https://www.garlic.com/~lynn/subboyd.html#boyd2
John died 9Mar97
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/
Spinney gave something of a eulogy at the Naval Institute in July 1997
titled "Genghis John".
http://www.d-n-i.net/fcs/comments/c199.htm#Reference
https://web.archive.org/web/20010412225142/http://www.defense-and-society.org/FCS_Folder/comments/c199.htm#Reference
from above:
One hardly expects the Commandant of the Marine Corps to agree with a
dovish former Rhodes Scholar, or an up-from-the-ranks, brass-bashing
retired Army colonel, or a pig farmer from Iowa who wants to cut the
defense budget. Yet, within days of each other in mid-March 1997, all
four men wrote amazingly similar testimonials to the intellect and
moral character of John Boyd, a retired Air Force colonel, who died of
cancer on 9 March at the age of 70.
General Charles Krulak, our nation's top Marine, called Boyd an
architect of victory in the Persian Gulf War.
... snip ...
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Thu, 20 Apr 2006 12:21:04 -0600

Donald Tees writes:
so the claim is that the infrastructure damage costs resulting from (legal) heavy truck traffic are significantly larger than what is being recovered through fuel taxes and/or registration fees based on GVW.
all the road design documents state that the damage is proportional to the number of heavy truck adjusted axle-loads (heavy truck traffic adjusted for axle load/weight characteristics).
the most recent reference implies that it is widely recognized throughout the highway industry that there need to be additional fees to correctly apportion true highway infrastructure damage costs related to heavy truck traffic, and that several states are already collecting such fees (i.e. the aggregate damage cost is proportional to some mile-axle-load measure ... the specific axle-load damage times the number of miles of road that has been travelled and therefore damaged).
misc. past posts referring to heavy truck axle-loads
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#5 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#6 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#10 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#12 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#15 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#19 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#35 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#46 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#53 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#54 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor
reference to study of accurately accounting for highway infrastructure
use costs and various fees and mitigation strategies:
https://www.garlic.com/~lynn/2006h.html#1 The Pankian Metaphor
part of the study discusses municipal buses (as examples of heavy trucks) that have routes through residential streets, that otherwise prohibit commercial heavy truck traffic and were never built for the number of associated heavy truck axle-loads (that are the result of the municipal bus traffic).
one municipal bus item discussed was specially reinforced pavement, at least at bus stops (where the damage can be especially extensive). a counter argument for specially reinforced pavement just at designated bus stops was that a major purpose of bus service is the freedom to dynamically adjust routes (the freedom you get from having vehicles that can travel any road ... which would be severely restricted with a limited number of pavement-reinforced bus stop areas).
another suggestion was to drastically restrict bus passenger loads on residential street routes (as a partial road damage mitigation effort).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 08:58:13 -0600

Donald Tees writes:
included the observation: "and were never built for the number of associated heavy truck axle-loads (that are the result of the municipal bus traffic)"
the issue repeated in all of the road design references is designing
to expected traffic volumes and available budget. the repeated
references are that the consideration isn't the maximum load but the
number of times an axle-load above some threshold will be applied.
the citation (included a dozen or so times) from the cal. state highway
design manual talks about equivalent ESAL axle-loads ... and mentions
converting a number of lighter (fractional) axle-loads (above the
threshold that results in deforming road construction material and
therefore damage, wear&tear) into equivalent ESAL axle-load damage
https://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor
the referenced road design documents are constantly referencing
designing for expected traffic (i.e. accumulation of damage by
repeated axle-loads) and budget. A previously posted DOT URL reference
included a comment that current traffic activity frequently was never
anticipated, so the volume of heavy truck axle-loads is severely
shortening the projected 25-year road lifetimes.
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor
the article on accumulation of bus traffic damage on residential streets wasn't about the residential streets having been underdesigned ... but that they had been designed for expected projected traffic and within available budget (a frequent phrase that seems to crop up in the road design references) ... aka it costs a lot more to build residential streets that have reasonable lifetimes when there is a large accumulation of heavy truck (equivalent) axle-loads.
one of the points raised was the trade-off between the high costs of building all residential streets for high accumulated heavy truck (equivalent) axle-loads (the result of bus traffic) ... even tho bus traffic is restricted to specific routes ... versus the still significant cost of only building specific residential streets to handle heavy truck (equivalent) axle-load traffic (from buses) for a specific route. Supposedly one of the advantages of buses and roads ... versus trolleys and tracks ... was that buses had significant freedom of changing routes compared to trolleys and tracks (which would be lost with only building specific streets to handle damage from bus traffic).
aka ... the roads weren't purposefully underdesigned ... they had been designed for a specific lifetime based on projected traffic (of heavy truck equivalent axle-loads, aka accumulation of damage resulting from repeated axle-loads) and budget. During the expected road lifetime (that it had been designed for) ... things changed ... and therefore damage (from repeated heavy truck equivalent axle-loads) accumulated at a rate greater than originally projected.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 09:54:50 -0600

jmfbahciv writes:
sometimes just gathering the data and doing things like multiple
regression analysis turns up stuff previously unanticipated. past
posting in this thread mentioning MRA
https://www.garlic.com/~lynn/2006g.html#4 The Pankian Metaphor
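For readers who haven't seen it done, multiple regression of the sort mentioned needs no statistics package at all. This is a minimal pure-python ordinary-least-squares sketch (normal equations plus gaussian elimination), purely illustrative rather than any particular package's implementation:

```python
def fit_linear(rows, y):
    """Ordinary least squares: solve (X'X) b = X'y for intercept + coefficients."""
    X = [[1.0] + list(r) for r in rows]          # prepend intercept column
    n, p = len(X), len(X[0])
    # build the augmented normal-equations matrix [X'X | X'y]
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         + [sum(X[k][i] * y[k] for k in range(n))] for i in range(p)]
    # gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p + 1):
                A[r][c] -= f * A[col][c]
    # back-substitution
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (A[i][p] - sum(A[i][j] * beta[j]
                                 for j in range(i + 1, p))) / A[i][i]
    return beta  # [intercept, coef_1, coef_2, ...]

# recover y = 2 + 3*x1 - 1*x2 exactly from noiseless made-up data
data = [(1, 2), (2, 1), (3, 5), (4, 2), (5, 7), (0, 1)]
target = [2 + 3 * x1 - x2 for x1, x2 in data]
print([round(b, 6) for b in fit_linear(data, target)])  # → [2.0, 3.0, -1.0]
```

On real data (like the hip-replacement stays below), the interesting part is a coefficient that comes out very different from what was expected.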
we were once looking at medicare DRGs ... minor reference:
http://library.kumc.edu/omrs/diseases/dzcodes.html#Medicare%20DRG%20Handbook
and one random point that came up was that the avg. hospital stay for hip-replacements on the east coast was 14 days ... while the avg. hospital stay for the same procedure on the west coast was 7 days. ???
so for some different drift ... somewhat a discussion of computer related stuff moving from qualitative to quantitative.
the cp67 scheduler had a simplified mechanism for promoting the intangible societal benefit of interactive response. any time a task had a terminal i/o operation, there was an assumption that it was related to interactive activity and the task's scheduling priority was adjusted.

nominally tasks were given a cpu scheduling "time-slice" and a scheduling priority. the task retained the same scheduling priority until it had consumed the allocated cpu time-slice, and then its scheduling priority would be recalculated. the mechanism basically approximated round-robin. however, if there was terminal I/O ... the associated task was given a cpu scheduling "time-slice" that was around 1/10th the normal "time-slice" and an "interactive" scheduling priority (all "interactive" scheduling priorities were ahead of all non-interactive scheduling priorities). an "interactive" task supposedly would get to run very fast ... for a short period of time ... which might allow it to provide very fast interactive response.
since there were no actual resource controls ... people found that they could scam the system and get much better thruput if their application would frequently generate spurious terminal operations.
i replaced all that when i was an undergraduate in the late 60s ... with dynamic adaptive stuff that actually tracked an approximation of recent resource utilization (cpu and some other stuff). tasks would be given advisory scheduling deadlines. tasks were ordered for scheduling by a value that was periodically recalculated as the current time plus some delta ... giving a time in the future. The "plus some delta" was prorated on a bunch of factors ... including recent resource consumption rate. Overall, if you were consuming resources at lower than the targeted rate ... you speeded up; if you were consuming resources at faster than the targeted rate ... you slowed down.
The original stuff for intangible "interactive response" was not
quantitative ... it was a purely yes/no operation. my dynamic
adaptive scheduling was policy driven based on target resource
consumption rates (the default target resource consumption rate
policy was nominally "fair share" ... but policies other than
"fair share" could be set) ... quite quantitative.
There was still a bias available to improve interactive response, but
it couldn't be used to increase a task's aggregate resource
consumption rate.
https://www.garlic.com/~lynn/subtopic.html#fairshare
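The deadline-ordering idea described above can be sketched as follows. This is a toy reconstruction of the general technique (next-run time = now plus a delta prorated by recent consumption against a fair-share target), with all class names and constants invented; it is not the actual cp67/vm370 code:

```python
import heapq

class Task:
    def __init__(self, name):
        self.name = name
        self.consumed = 0.0   # recent cpu consumed (a real system would decay this)

class FairShareScheduler:
    """Order tasks by advisory deadline = now + delta; over-consumers get
    deadlines further in the future (slow down), under-consumers nearer ones."""
    def __init__(self, base_delta=1.0, target_share=0.5):
        self.now = 0.0
        self.base_delta = base_delta
        self.target_share = target_share   # policy knob; 0.5 ~ fair share of 2 tasks
        self.queue = []                    # heap of (deadline, seq, task)
        self.seq = 0

    def add(self, task):
        ratio = 1.0
        if self.now > 0:
            # recent consumption rate relative to the policy target
            ratio = (task.consumed / self.now) / self.target_share
        deadline = self.now + self.base_delta * max(ratio, 0.1)
        heapq.heappush(self.queue, (deadline, self.seq, task))
        self.seq += 1

    def run_slice(self, slice_len=0.1):
        _, _, task = heapq.heappop(self.queue)   # earliest deadline runs next
        self.now += slice_len
        task.consumed += slice_len
        self.add(task)                           # re-queue with recalculated deadline
        return task.name

a, b = Task("a"), Task("b")
sched = FairShareScheduler()
sched.add(a)
sched.add(b)
for _ in range(200):
    sched.run_slice()
print(round(a.consumed, 1), round(b.consumed, 1))  # converges toward equal shares
```

The self-correction is the key property: a task running ahead of its target rate computes a larger delta and drifts to the back of the queue, with no special-casing of "interactive" tasks needed, and the target rate itself is a settable policy rather than a yes/no flag.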
the dynamic adaptive scheduling stuff was picked up and shipped in standard cp67. some amount of the implementation was dropped in the morph from cp67 to vm370 ... but a couple years afterwards, I was allowed to reintroduce it with the resource manager product ... the 30th anniversary of the product announcement is coming up on may 11th.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 10:08:49 -0600

jmfbahciv writes:
they put new ones in on both north and south bound lanes of 101 between morgan hill and gilroy a couple years ago. I'm not sure exactly what the electronics are ... but there seem to be multiple parallel weigh station lanes ... with some sort of overhead communication/electronic gear above the lanes as the truck approaches the actual scale. Pure conjecture is that there is some sort of electronic interchange between the station and the truck ... w/o physical interaction between the truck driver and the station operator. There appears to be some amount of video equipment on the highway in the area (possibly to catch drivers attempting to bypass the station when it is open).
quick search engine use turns up this web page claiming to have a list
of all DOT weigh stations
http://www.dieselboss.com/restarea.asp
cal dot weigh stations
http://www.dot.ca.gov/hq/traffops/trucks/
lot more information about cal dot truck weigh stations
http://www.dot.ca.gov/hq/traffops/trucks/weight/
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: It's official: "nuke" infected Windows PCs instead of fixing them.
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 11:11:44 -0600

KR Williams writes:
vmshare had been provided as a service to the share community by
tymshare ... misc
https://www.garlic.com/~lynn/submain.html#timeshare
after PCs started to become popular, pcshare was added.
i managed to set up regular cloning of the vmshare forums and make
them available internally on places like hone
https://www.garlic.com/~lynn/subtopic.html#hone
somewhat in the wake of the "tandem memos" online event (for which I got blamed), a more structured online operation was created ... somewhat akin to vmshare ... that was called IBMVM. this grew into IBMPC and a whole set of other interest areas. this was supported by something called toolsrun ... which was sort of a cross (combination) between usenet and listserv.
at one point corporate hdqtrs started deploying software that would account for the amount of internal network traffic (world-wide, a thousand plus nodes growing to a couple thousand, several hundred thousand people). at one point somebody suggested that i had been in some way (directly or indirectly) responsible for 1/3rd of all bytes transferred on the (whole) internal network for a period of a month.
random past posts mentioning tandem memos:
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002q.html#16 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#38 ibm time machine in new york times?
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2005c.html#50 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
https://www.garlic.com/~lynn/2005q.html#5 What ever happened to Tandem and NonStop OS ?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: It's official: "nuke" infected Windows PCs instead of fixing them.
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 13:22:36 -0600

Anne & Lynn Wheeler writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Pankian Metaphor
Newsgroups: alt.folklore.computers
Date: Fri, 21 Apr 2006 16:22:49 -0600

ref:
and the reference mentioned in that posting
http://www.dot.ca.gov/hq/traffops/trucks/
the new technology i was noticing at the new weigh stations appears to be somewhat similar to the overhead ez-pass transponder sensors on toll roads; the system is called "prepass":
PrePass Weigh Station Bypass
http://www.dot.ca.gov/hq/traffops/trucks/bypass/
from above:
PrePass is an automated, state-of-the-art system allowing
heavy vehicles that are registered in the program to legally bypass
open weigh stations.
Transponders: Carriers obtain special transponders used for
communication between computers in the weigh stations and the
vehicles.
Green Signal: If all requirements for weight, size, safety, etc. are
met, the driver receives a green signal that allows the vehicle to
bypass the weigh station.
... snip ...
i think the truck still exits main traffic, but if the prepass transponder agrees ... they take a lane that bypasses the actual scales and returns them to the main traffic flow.
this shows a graphic that depicts how it works
http://www.educause.edu/ir/library/pdf/EPO0801.pdf
the area of the new weigh stations that i've seen on 101 is quite a wide open expanse ... significantly larger than the old one-lane operations with small weigh station shack located adjacent to the scales.
the cal dot site also has
Data Weigh-in-Motion
http://www.dot.ca.gov/hq/traffops/trucks/datawim/
there is some possibility that data weigh-in-motion is also used in conjunction with PrePass Weigh Station Bypass (there is a reference in the above Weigh-in-Motion webpage to Bypass WIM).
This gives a WIM technical overview and the requirement for a
smooth pavement surface on the approach leading up to a WIM installation
http://www.dot.ca.gov/hq/traffops/trucks/datawim/technical.htm
this is the base document URL from cal. dot talking about
pavement design and ESAL (equivalent single axle load)
http://www.dot.ca.gov/hq/oppd/hdm/pdf/chp0600.pdf
also describing how to arrive at a heavy truck equivalent single axle
load for calculating pavement design and pavement lifetime based on the
amount of ESAL activity. previous postings include descriptions
of equivalent single axle loads and pavement lifetime
https://www.garlic.com/~lynn/2006g.html#56 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#57 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#59 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#60 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#61 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#62 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#0 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#6 The Pankian Metaphor
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Binder REP Cards
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 21 Apr 2006 19:47:06 -0600

Peter Flass writes:
later this was rewritten with the lookup table as resident data part of the assembler ... when they started getting feedback on the thruput vis-a-vis memory size tradeoffs.
original post ref:
https://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Security
Newsgroups: alt.folklore.computers
Date: Sat, 22 Apr 2006 11:29:15 -0600

eugene@cse.ucsc.edu (Eugene Miya) writes:
part of the issue in static data, shared-secret authentication paradigms ... is not only that static data can be eavesdropped and reproduced in replay attacks ... but that the same information is used for both origination and verification. as a result, you are required to have a unique shared-secret for every different security domain ... as a countermeasure to cross-domain compromises (aka your local garage isp versus your place of employment or online banking). this has been further aggravated by the requirement for hard to guess (and impossible to remember) passwords that are changed on a frequent basis (potentially scores of different, impossible to remember passwords at any one moment).
in the 3-factor authentication paradigm
https://www.garlic.com/~lynn/subintegrity.html#3factor
• something you have
• something you know
• something you are
... the last two tend to be (relatively) static data that are
vulnerable to eavesdropping/harvesting and replay attacks.
https://www.garlic.com/~lynn/subintegrity.html#harvest
unique physical tokens for something you have authentication can involve unique data for every operation (like a digital signature, as a countermeasure to eavesdropping and replay attacks) and different data for origination and verification (like public/private keys, as a countermeasure to cross-domain compromises).
the issue then is that something you have authentication may be vulnerable to lost/stolen tokens ... multi-factor authentication with "something you know" or "something you are" is then a countermeasure to lost/stolen tokens (and tokens are a countermeasure to static data eavesdropping against something you know or something you are, and replay attacks).
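The "unique data for every operation" property can be illustrated with a nonce challenge-response sketch. One caveat: python's stdlib has no public-key signature primitive, so this uses an HMAC over a fresh server nonce; it demonstrates replay resistance, but unlike the public/private-key scheme described above, the same shared secret still serves both origination and verification:

```python
import hmac
import hashlib
import os

def issue_challenge():
    return os.urandom(16)            # fresh, unpredictable nonce per attempt

def respond(secret, nonce):
    # response is unique per operation because the nonce is unique
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def verify(secret, nonce, response, used):
    if nonce in used:                # a replayed challenge/response is rejected
        return False
    used.add(nonce)
    return hmac.compare_digest(respond(secret, nonce), response)

secret, used = b"shared-secret", set()
nonce = issue_challenge()
reply = respond(secret, nonce)
print(verify(secret, nonce, reply, used))   # fresh exchange succeeds: True
print(verify(secret, nonce, reply, used))   # verbatim replay fails: False
```

An eavesdropper who records the exchange gains nothing replayable, because the verifier never accepts the same nonce twice; the remaining weakness (the shared secret must be registered per domain) is exactly the cross-domain problem the post attributes to shared-secret schemes.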
a somewhat implicit assumption in multi-factor authentication is that
the different methods are vulnerable to different threats. the
assumption in multi-factor authentication (in something like
pin-debit) can be subverted where both the something you have
(magstripe) and the something you know (pin) are subject to the
same, common skimming/harvesting vulnerability (and replay attack)
https://www.garlic.com/~lynn/subintegrity.html#harvest
the next scenario ... even with relatively high integrity multi-factor
authentication ... is the compromise of the authentication environment
(where a trojan/virus can reproduce static data authentication, and any
physical token can be induced to perform multiple operations
... w/o the owner's knowledge). recent posting on this in a thread on
multi-factor authentication vulnerabilities
https://www.garlic.com/~lynn/aadsm23.htm#2
the above mentions that something you know authentication can either involve a shared-secret (that is typically registered at some institutional, security domain repository) or a plain "secret". In the plain secret method, the "secret" is registered in a something you have token and required for correct token operation. Since the "secret" isn't registered at specific institutional, security domain repositories ... there is much less of a threat of cross-domain compromises (and therefore the same authentication mechanism could be used in multiple different security domains).
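a minimal sketch of the plain-secret method, assuming a hypothetical Token class: the pin is held only inside the token (never registered at any institutional repository) and merely gates the token's operation; hmac again stands in for what would really be the token's private-key signing:

```python
import hashlib
import hmac
import os

class Token:
    """Hypothetical something-you-have token. The PIN (a plain 'secret')
    lives only inside the token and is required for correct operation;
    no security-domain repository ever stores it, so the same token+PIN
    can be used across multiple domains without cross-domain exposure."""
    def __init__(self, signing_key, pin):
        self._key = signing_key
        self._pin_digest = hashlib.sha256(pin.encode()).digest()

    def authenticate(self, pin, challenge):
        # wrong plain secret -> the token simply refuses to operate;
        # the relying party only ever sees the response, never the PIN
        supplied = hashlib.sha256(pin.encode()).digest()
        if not hmac.compare_digest(supplied, self._pin_digest):
            raise PermissionError("token refuses to operate")
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

key = os.urandom(32)
token = Token(key, pin="4711")
challenge = os.urandom(16)
response = token.authenticate("4711", challenge)
# the verifier checks the response without ever learning the PIN
print(response == hmac.new(key, challenge, hashlib.sha256).digest())  # True
```

the contrast with a shared-secret pin (as in pin-debit) is that compromising one verifier's repository here yields nothing usable in another domain.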
start of the thread mentioning a number of different security
related weaknesses
https://www.financialcryptography.com/mt/archives/000691.html
and man-in-the-middle attacks
https://www.garlic.com/~lynn/subintegrity.html#mitm
lots of past posts on exploits, threats, and vulnerabilities
https://www.garlic.com/~lynn/subintegrity.html#fraud
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Security Newsgroups: alt.folklore.computers Date: Sat, 22 Apr 2006 12:59:46 -0600
the really old, ancient, "new" thing that has been bubbling off and on in the press for at least the past year (much more recently), is virtualization as security ... stuff like
turns out that cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech
started on it slightly over 40 years ago. I didn't do any work on it until three people from science center brought a copy out to the univ. the last week of jan68.
basically virtualization helps with partitioning and isolating effects
of things like viruses and trojans. It also can be considered
encompassing things like countermeasure to compromises of
authentication environment ... raised in these recent postings:
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/aadsm23.htm#2
not too long later, science center was using it to offer time-sharing
service (as well as number of commercial time-sharing service bureaus)
https://www.garlic.com/~lynn/submain.html#timeshare
the science center had a combination of sensitive corporate activities as well as a mix of faculty and students from various educational institutions in the cambridge area (bu, mit, harvard, etc).
one of the really sensitive things was a lot of work on providing
370 virtual memory emulation (before 370 virtual memory had been
announced and/or even hardware had been built). one of the others
was corporate hdqtrs use of cms\apl for the most valuable and
sensitive of corporate data. misc. posts that mention apl and/or
hone (a major internal timesharing service built almost totally
on cms\apl ... later moving to apl\cms and subsequent versions):
https://www.garlic.com/~lynn/subtopic.html#hone
cambridge had ported apl\360 to cms ... adding filesystem api semantics, as well as making several-mbyte workspaces available as standard (compared to the typical 16kbyte workspaces available under apl\360). apl in the 60s and 70s was the spreadsheet "what-if" workhorse for corporate planners and business people. once cambridge had cms\apl up and running as part of the standard offering ... some of the business people from corporate hdqtrs shipped up a tape of the most sensitive corporate business data for loading into cms\apl workspaces (apl\360 didn't have any filesystem api semantics; any data loaded into the miniature 16kbyte workspaces had to be entered manually at the keyboard).
in any case, there was significant issue with not allowing any security breaches and/or data breaches of the extraordinarily sensitive corporate information ... especially by any of the general users (like various students from the surrounding educational institutions).
there were other organizations (besides internal systems and
external commercial timeshare services) using the system for
security purposes ... like referenced here
https://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml
misc. past posts mentioning the above reference:
https://www.garlic.com/~lynn/2005k.html#30 Public disclosure of discovered vulnerabilities
https://www.garlic.com/~lynn/2005k.html#35 Determining processor status without IPIs
https://www.garlic.com/~lynn/2005p.html#0 Article: The True Value of Mainframe Security
https://www.garlic.com/~lynn/2005s.html#23 winscape?
https://www.garlic.com/~lynn/2005s.html#44 winscape?
https://www.garlic.com/~lynn/2005u.html#36 Mainframe Applications and Records Keeping?
https://www.garlic.com/~lynn/2005u.html#37 Mainframe Applications and Records Keeping?
https://www.garlic.com/~lynn/2005u.html#51 Channel Distances
https://www.garlic.com/~lynn/2006.html#11 Some credible documented evidence that a MVS or later op sys has ever been hacked
even tho a lot of stuff I was doing as an undergraduate was being picked up in standard system distribution ... i didn't hear about the guys mentioned in the above reference until much later (although I could reflect that some of the things that I was being asked to consider, when I was an undergraduate, may have originated from some of those organizations).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Security Newsgroups: alt.folklore.computers Date: Sat, 22 Apr 2006 13:47:06 -0600
ref:
one of the ancillary issues in harvesting/skimming/eavesdropping
https://www.garlic.com/~lynn/subintegrity.html#harvest
of static data shared-secrets
https://www.garlic.com/~lynn/subintegrity.html#secret
or any kind of static data shared-secrets, is the security breaches and data breaches by insiders. insiders have repeatedly been shown to be the major threat for id theft, id fraud, and account fraud; long before the internet and continuing right up thru the internet era to the present time.
one method to plug some of the security breaches and data breaches is moving to multi-factor authentication (i.e. the static data authentication repositories are augmented) where at least one factor involves some sort of dynamic information (so impersonation isn't possible by copying an existing repository of authentication and transaction information).
this can help minimize the insider threat, which has been responsible
for the majority (possibly 75 percent or more)
https://www.garlic.com/~lynn/aadsm17.htm#38 Study: ID theft usually an inside job
of id theft, id fraud, and account fraud. my slightly related, old
standby about security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61
however, it can make the attackers move from focusing on the backend ... to attacking the origin of the transaction and authentication ... including the environment that any authentication takes place in.
one of the other countermeasures for attacks on the backend infrastructure (security breaches and data breaches) is encryption. however, encryption is not going to be very effective if the encrypted repositories are required (unencrypted) by a large number of different business processes and insiders (aka insiders have always represented the majority of the threat). this is somewhat my repeated comment that the planet could be buried under miles of cryptography and still not be able to effectively stem such exploits.
misc. random past posts mentioning even miles deep cryptography may
not be able to halt the leakage of various kinds of information (and
therefor you have to change the nature and use of the
information, so that even if it leaks, it can't be used for fraudulent
purposes):
https://www.garlic.com/~lynn/aadsm15.htm#21 Simple SSL/TLS - Some Questions
https://www.garlic.com/~lynn/aadsm15.htm#27 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm19.htm#45 payment system fraud, etc
https://www.garlic.com/~lynn/2004b.html#25 Who is the most likely to use PK?
https://www.garlic.com/~lynn/2005u.html#3 PGP Lame question
https://www.garlic.com/~lynn/2005v.html#2 ABN Tape - Found
https://www.garlic.com/~lynn/2006c.html#34 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh
https://www.garlic.com/~lynn/2006d.html#26 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006e.html#44 Does the Data Protection Act of 2005 Make Sense
https://www.garlic.com/~lynn/aadsm22.htm#2 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#33 Meccano Trojans coming to a desktop near you
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Sat, 22 Apr 2006 17:41:04 -0600
Larry Elmore writes:
a co-worker once told the story of one of his first business meetings on a business trip to japan: he told his audience that he wanted to practice the Japanese he had learned from his roommate in college (back in the days when the yen was greater than 300/dollar). During the first break, somebody took him aside and attempted to tactfully explain the situation.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Sun, 23 Apr 2006 09:06:03 -0600
Anne & Lynn Wheeler writes:
NSF Begins a Push to Measure Societal Impacts of Research
http://www.sciencemag.org/cgi/content/full/312/5772/347b
... from above:
When politicians talk about getting a big bang for the buck out of
public investments in research, they assume it's possible to measure
the bang. Last year, U.S. presidential science adviser John Marburger
disclosed a dirty little secret: We don't know nearly enough about the
innovation process to measure the impact of past R&D investments, much
less predict which areas of research will result in the largest payoff
to society.
... snip ...
somewhat similar to comments in comptroller general's talk about being
able to audit/measure funding programs (big bang for the buck from any
funding program?)
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future
other posts
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#7 The Pankian Metaphor
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Sun, 23 Apr 2006 17:24:39 -0600
Morten Reistad writes:
long term contracts tend to have more incentive for capital investment by producers. attempting to live off the excesses not needed by others ... can result in shortages when you have no long term supply commitment
the northwest has had a lot of hydroelectric power which gets dumped into the power-grid. power over and above long-term contracts shows up on the spot market (sort of like unsold seats on airplanes or unsold rooms at hotels ... sometimes you can find some really great discounts at the last minute).
then the northwest was having a drought ... and hydroelectric plants were dumping less into the power-grid. this appeared to make it easier for some unscrupulous dealers to manipulate the perceived scarcity/excess and the spot market; aka during periods of scarcity ... the spot market can be significantly higher than long term contracts.
this is a posting from a couple years ago, mentioning both preventive
maintenance (on railroad tracks) as well as the article about the
california puc having some regulation about getting power from the
spot-market.
https://www.garlic.com/~lynn/2001e.html#75 Apology to Cloakware (open letter)
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Sun, 23 Apr 2006 18:03:22 -0600
Anne & Lynn Wheeler writes:
has a reference to the thread between risk management and information
security ... and a quote from a participant in a conference on the
subject ... a long posting of this person's observations:
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
I've recently made references to a talk by the comptroller general:
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future
One of the comments the comptroller general made during the talk was that there is a $160k/person (every man, woman, child, and baby) fed. program liability in the US for various obligations. The extract (in this earlier thread) explains how the bailout of the S&L industry is being carried off-books, since it represents a $100k/person liability. It wasn't clear in the comptroller general's speech whether his figure of $160k/person included the S&L $100k/person bailout obligation or was in addition to the S&L bailout obligation.
recent postings referencing the comptroller general's talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Binder REP Cards (Was: What's the linkage editor really wants?) Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 23 Apr 2006 18:48:12 -0600
Chris Mason wrote:
was from the BPS (basic programming system) loader.
In the morph from cp67 to vm370, the meaning of "CMS" was changed to Conversational Monitor System. There was quite a bit of code rewrite for the vm370 virtual machine kernel ... but significantly fewer changes were made to CMS. CMS did have a body of code that emulated os/360 functions ... which allowed running a number of assemblers and compilers from os/360. In the early MVS time-frame ... this os/360 emulation code totaled approx. 64kbytes and there were facetious references that the cms 64kbyte os/360 emulation was almost as good as the mvs 8mbyte os/360 emulation.
Later versions of cms expanded the cms os/360 emulation support ... providing significantly greater compatibility with the mvs environment.
the cms help page you reference is copyrighted 1990, 2003.
however, my cms (hardcopy) manuals from early 70s (both program logic manual and user's guide) don't list VER.
I remember VER being part of (os/360) superzap for applying (PTF and other) patches to os/360 executables ... you would have a superzap control file, and DD statements would reference the input source. in some sense superzap served as something of a hex editor ... except in the edit syntax ... you would select a specific record and change a specific input string to a specific output string.
here is help file on how to use superzap (and description of superzap
statements):
http://www.cs.niu.edu/csci/567/ho/ho4.shtml
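the ver-then-rep pattern can be sketched in a few lines of python (illustrative only ... not the actual superzap control-card syntax; the zap function and sample bytes are made up):

```python
def zap(image, offset, ver, rep):
    """Superzap-style patch step: VER checks that the bytes at the given
    offset still match the expected value, and only then does REP
    overwrite them. ver and rep are assumed to be the same length."""
    if image[offset:offset + len(ver)] != ver:
        raise ValueError("VER mismatch - patch refused")
    return image[:offset] + rep + image[offset + len(rep):]

original = b"\x47\xf0\xc0\x10\x1b\x22"           # made-up object code
patched = zap(original, 4, ver=b"\x1b\x22", rep=b"\x07\xfe")
print(patched.hex())                             # 47f0c01007fe
```

the point of the VER step is safety: if the executable has already been patched (or is a different level), the expected bytes won't match and the patch is refused rather than silently corrupting the module.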
misc. past posts in this thread:
https://www.garlic.com/~lynn/2006g.html#43 Binder REP Cards (Was: What's the linkage editor really wants?)
https://www.garlic.com/~lynn/2006g.html#44 Binder REP Cards (Was: What's the linkage editor really wants?)
https://www.garlic.com/~lynn/2006g.html#58 REP cards
https://www.garlic.com/~lynn/2006h.html#12 Binder REP Cards
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Binder REP Cards Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Mon, 24 Apr 2006 07:44:48 -0600
"Charlie Gibbs" writes:
ref:
https://www.garlic.com/~lynn/2006h.html#12 Binder REP Cards
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 08:27:31 -0600
jmfbahciv writes:
so a terminal i/o kicked off a new scheduling advisory deadline for a very short amount of resource consumption. the deadline was calculated 1) proportional to the quantum of resource consumption for the period (a smaller quantum for doing terminal i/o resulted in a sooner deadline) as well as 2) proportional to recent resource consumption against some policy (the default policy being fairshare).
if the default resource policy was fairshare ... users consuming more than their fairshare had all their deadlines prorated further into the future (slowing them down). if users were really trivially interactive, then their recent resource consumption would be less than their fairshare; as a result the prorated calculations made their deadlines sooner ... speeding them up. a user was either ahead of their targeted resource consumption (and therefore got advisory scheduling deadline priorities that slowed them down) or behind their targeted resource consumption (somewhat implicit if they were actually trivially interactive ... and got advisory scheduling deadline priorities that speeded them up).
for users that were way behind in their targeted resource consumption, they would start to speed up their measured resource consumption (because of the advisery deadline priorities) ... as their measured resource consumption approached their targeted resource consumption, they would slow down until their measured resource consumption and their targeted resource consumption was in equilibrium.
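the proration described above can be sketched with an illustrative formula (not the actual resource manager calculation ... the function name and numbers here are made up):

```python
def advisory_deadline(now, base_quantum, recent_consumption, fairshare_target):
    """Illustrative proration: the advisory deadline is the base quantum
    scaled by the ratio of a user's recent resource consumption to their
    fair-share target. Under target -> sooner deadline (speeds them up);
    over target -> deadline pushed further into the future (slows them
    down). Equilibrium is reached when consumption matches the target."""
    return now + base_quantum * (recent_consumption / fairshare_target)

# trivially interactive user, well under fair share -> sooner deadline
interactive = advisory_deadline(100.0, 10.0, recent_consumption=2.0,
                                fairshare_target=8.0)
# compute-bound user, well over fair share -> deadline pushed out
compute_bound = advisory_deadline(100.0, 10.0, recent_consumption=16.0,
                                  fairshare_target=8.0)
print(interactive, compute_bound)   # 102.5 120.0
```

as the under-target user's measured consumption rises toward the target, the scale factor approaches 1 and the speed-up fades ... the dynamic feedback that produces the equilibrium described above.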
so i did a joke for the resource manager (the vm370 re-issue of the dynamic adaptive stuff i did as an undergraduate for cp67 ... the 30th anniversary of the product announcement of the resource manager coming up on may 11th).
i had done all this elaborate dynamic adaptive stuff to measure what was going on and dynamically adapt everything. parameters were available for changing specified policies ... at system level and individual user level. however, all the performance tuning stuff had been subsumed by the elaborate dynamic adaptive capability.
furthermore, there had been extensive benchmarking, calibrating and
validating the dynamic adaptive capability across a wide range of
workloads, configurations, and policies. recent posts mentioning
that benchmarking, calibrating and validating effort (one series
of 2000 tests took 3 months elapsed time to complete)
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006e.html#25 About TLB in lower-level caches
https://www.garlic.com/~lynn/2006f.html#22 A very basic question
https://www.garlic.com/~lynn/2006f.html#30 A very basic question
https://www.garlic.com/~lynn/2006g.html#1 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#34 The Pankian Metaphor
so as to the joke? well, somebody from corporate hdqtrs observed that all the existing state-of-the-art resource managers had elaborate parameters that could be set by installations (primarily for system tuning) and the resource manager would require equivalent capability before it could be released as a product. it wasn't possible to get across to the person that the elaborate dynamic adaptive capability subsumed all such features.
so i added some such parameters, published the calculations ... and of course all source was available (the product was shipped in source maintenance form ... as well as applying the source changes for the resource manager to the base product). i even taught classes on how the calculations and parameters all worked.
in the early 90s, we were making a number of marketing trips to the
far east for our ha/cmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp
i related an anecdote from one such trip in this recent post
https://www.garlic.com/~lynn/2006g.html#21 Taxes
on one of the trips to HK, we were doing a customer call on a major bank ... and going up the elevator in the bank building with the external skeleton (there were some references to it as the tinker toy building because of the external structure). from a younger person in the back of the elevator came a question ... are you the "wheeler" of the "wheeler scheduler"? we studied you at the university.
so nobody had figured out the joke. as i've periodically referred to in the past, most system programmers tend to deal with states ... things are either in one state or another ... or in case of parameters, a specific value from a range.
the dynamic adaptive resource manager ... was much more of a dynamic feedback and feedforward nature ... much more of an operations research methodology than a kernel programmer state methodology. In OR methodology calculations you tend to have parameters with characteristics like degrees of freedom. For the dynamic adaptive resource manager ... the parameters provided for people to set (other than the policy selection parameters) all fed into the same dynamic adaptive calculations as the base dynamic adaptive stuff. The dynamic adaptive stuff would iterate its values in the calculations ... changing the dynamic adaptive parameters to adapt to workload, configuration, and how well things were going. The magnitude and range of the dynamic adaptive parameters recalculated at every interval had much larger degrees of freedom than the statically set parameters (that corporate hdqtrs required to be added) for people to set.
So the dynamic adaptive resource calculations had great latitude in dynamically adjusting their parameters to constantly and dynamically compensate for changes in configuration and workload ... as well as compensating for any statically set parameters that people might be fiddling with (sometimes referred to as performance tuning witch doctors).
misc. past posts mentioning the elaborate joke in the resource manager:
https://www.garlic.com/~lynn/2001e.html#51 OT: Ever hear of RFC 1149? A geek silliness taken wing
https://www.garlic.com/~lynn/2001l.html#9 mainframe question
https://www.garlic.com/~lynn/2002c.html#13 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#54 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002i.html#53 wrt code first, document later
https://www.garlic.com/~lynn/2002k.html#66 OT (sort-of) - Does it take math skills to do data processing ?
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
https://www.garlic.com/~lynn/2004o.html#10 Multi-processor timing issue
https://www.garlic.com/~lynn/2005b.html#58 History of performance counters
https://www.garlic.com/~lynn/2005p.html#31 z/VM performance
https://www.garlic.com/~lynn/2006b.html#8 Free to good home: IBM RT UNIX
https://www.garlic.com/~lynn/2006b.html#21 IBM 3090/VM Humor
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 08:50:31 -0600
jmfbahciv writes:
as mentioned in past postings, somebody else even posted a reference to several states having gone to a 3rd fee. it was recognized that a straight fuel tax didn't accurately account for heavy trucking "use". As a result, some number of gov. bodies had added registration fees for heavy trucking that were proportional to the vehicle's gvw. However, this second kind of gvw fee was still static ... it didn't actually account for the different amounts of wear and tear that happen based on miles driven and load carried (aka miles-ESAL ... miles-equivalent-single-axle-loads). The 3rd fee attempts to accurately account for actual wear and tear caused by specific vehicles, based on something akin to miles-equivalent-single-axle-loads.
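the miles-ESAL idea can be sketched using the AASHTO "fourth power" rule of thumb (pavement damage grows with roughly the 4th power of axle weight, relative to a standard 18,000 lb axle); the fee function and rate here are hypothetical, not any state's actual formula:

```python
def esals(axle_loads_lb, standard_axle_lb=18000.0):
    """Approximate equivalent single-axle loads per vehicle pass using
    the AASHTO 'fourth power' rule of thumb. Illustrative only."""
    return sum((load / standard_axle_lb) ** 4 for load in axle_loads_lb)

def wear_fee(miles, axle_loads_lb, rate_per_esal_mile=0.01):
    # hypothetical 3rd fee: charge for miles-ESALs actually driven,
    # rather than a static fuel tax or gvw registration fee
    return miles * esals(axle_loads_lb) * rate_per_esal_mile

car = wear_fee(10_000, [2_000, 2_000])                # two light axles
truck = wear_fee(10_000, [12_000, 17_000, 17_000,
                          17_000, 17_000])            # loaded 5-axle rig
print(truck / car > 1_000)   # heavy axles cause vastly more wear -> True
```

the fourth-power relationship is why a static gvw registration fee understates heavy trucking's share of wear ... the same gross weight on fewer, heavier axles does far more damage per mile.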
if you more accurately account for actual costs ... the economics of doing something might change. one change might be that the amount of long-haul trucking is reduced ... since if the actual costs of long-haul trucking was accurately accounted for ... it might increase the costs of some of the products that were transported, if there was some increase in the costs of products transported by long-haul trucking ... some people might buy less of it and buy more of something else.
a possible point is that any "efficiency" in a "market economy" is at least partially the result of having dynamic adaptive feedback operations based on actual costs (and prices accurately reflecting those costs). "managed economies" may enormously distort prices (with respect to actual costs) and therefore drastically distort the "market economy" ability to accurately, efficiently, and rapidly adapt to changing configurations and workloads.
some gov. bodies may try and achieve some trade-offs between degree of distorting prices and the efficiency of "market economy" ... possibly because "market economy" may heavily over optimize for short-term results at the expense of longer term optimization. however, one of the previously raised issues is that the people in gov. may have the least experience and skill to make such trade-off decisions.
past posts on the subject in this thread:
https://www.garlic.com/~lynn/2006g.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#8 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#20 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#26 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#32 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#34 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#48 The Pankian Metaphor
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 09:35:40 -0600
Brian Inglis writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 12:20:14 -0600
Anne & Lynn Wheeler writes:
sort of the evolution was that computer systems were built to just run. as things got more complex ... especially with various kinds of multiprogramming ... there was a realization that various kinds of optimization and performance tuning could improve throughput.
the problem was that much of the state-of-the-art around the time that i was (re)releasing dynamic adaptive scheduling for the resource manager (11may76, not to mention the earlier incarnation done in the late 60s) was that there was no really good understanding of the science of performance tuning. some set of performance settings would be specified ... and sometimes it improved things and sometimes it didn't. part of the issue was that performance settings could be workload and configuration specific ... with static settings and workload that possibly dynamically changed minute to minute ... there would be no ideal setting.
in any case, the prevalent state-of-the-art at the time of (re-)releasing the dynamic adaptive stuff (11may76) was to try and identify all thruput related decisions in the system and attach various kinds of control parameters to each of the decision points. You built an enormous specification of all possible control parameters and the types of decisions they affected. customers then were encouraged to devote significant resources to studying thruput, (possibly randomly) changing tuning parameters, evaluating the result, and presenting detailed reports at user group meetings (like SHARE and GUIDE) about specific customer experiences (randomly) modifying the numerous tuning parameters (aka the rituals involved in propagating the performance tuning magic incantations thruout the performance tuning witch doctor society and from one generation of performance tuning witch doctors to the next)
part of the pressure from corporate hdqtrs was that the (other) mainstream operating system product had a significant subculture and folklore around performance tuning (requiring a large amount of resources devoted to performance tuning was felt to be representative of an advanced, state-of-the-art customer installation)
the concept that you could have a science of thruput and deploy a dynamic adaptive resource manager based on such principles, was incomprehensible.
as a result i had to come up with the ruse of having "people set" tuning parameters and allowing the dynamic adaptive control mechanisms to "compete" with the people-specified static settings.
part of this is because the science center had spent a lot
of effort on instrumenting systems and capturing the data for
detailed study and analysis
https://www.garlic.com/~lynn/subtopic.html#545tech
and was well on its way to evolving things like capacity planning
based on the work ...
https://www.garlic.com/~lynn/submain.html#bench
aka the stuff that the performance predictor application was
able to do on hone
https://www.garlic.com/~lynn/subtopic.html#hone
being able to take input from sales and marketing people about a customer's configuration and workload and allow "what-if" questions to be asked about changes to workload and/or configuration.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Security Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 13:11:38 -0600
Anne & Lynn Wheeler writes:
a trivial case of skimming, harvesting, eavesdropping standard business process data for replay attacks ... being able to use the information for fraudulent transactions that get treated as valid. a trivial recent example in the news:
Crook used dumped credit data
http://www.edmontonsun.com/News/Edmonton/2006/04/20/1541789-sun.html
in the mid-90s, the x9a10 financial standards working group was
given the requirement to preserve the integrity of the financial
infrastructure for all retail payments. one of the issues addressed
in work on x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959
was changing the paradigm so that any skimmed, harvested, and/or
eavesdropped normal business process information couldn't be used for
performing fraudulent transactions.
https://www.garlic.com/~lynn/subintegrity.html#harvest
https://www.garlic.com/~lynn/subintegrity.html#secret
this was particularly important when considering that all the long term statistics about such fraudulent behavior involved insiders the majority of the time (aka it didn't prevent the information from being skimmed, harvested, and/or eavesdropped; it eliminated the ability of crooks to use the information for fraudulent activity).
compromised (and/or counterfeit) authentication environments in the physical world (in the guise of point-of-sale terminals and/or atm machines) have possibly been around for decades. authentication information is skimmed/harvested and then used for replay attacks involving fraudulent transactions at other locations.
the current genre of phishing attacks as well as trojans and viruses on PCs ... just extend that same harvesting/skimming threat model to the internet. part of the objectives in the x9.59 financial standard was to eliminate harvesting/skimming (for at least some types of information) as a mechanism for (some of the more common types of) fraudulent transactions.
the basic x9.59 standard didn't do anything to eliminate crooks compromising and/or counterfeiting authentication environments ... it just minimized the fraudulent return on investment.
there are still threat models involving compromised and/or counterfeit authentication environments involving duplicated transactions unknown to the originating entity. there may not be information in the actual, valid transactions that can be skimmed and used for fraud. however, a compromised and/or counterfeit authentication environment may still be able to perform additional surreptitious fraudulent transactions in concert with valid transactions (unknown to the originator).
in the physical world, the crooks have tended to try to obfuscate the source of the compromised authentication environment (hoping that they can continue to use it as a source for creating fraudulent transactions). actually performing the fraudulent transactions at the point of compromise can result in it being quickly identified and removed. in the internet environment, individually introducing a trojan to compromise an end-user PC authentication environment represents less of an investment and therefore less of a loss if it is identified and removed (transactions totaling a few thousand per PC may be sufficient to justify the effort).
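The x9.59 change of paradigm described earlier can be sketched in a few lines: every transaction carries a digital signature over its unique contents, so information skimmed from a valid transaction can't be altered or reused for a different payment. This is a minimal illustration using a deliberately tiny, insecure textbook RSA key (all values invented for the example), not the actual x9.59 protocol.

```python
import hashlib

# toy RSA key -- insecure textbook demonstration values only
P, Q = 61, 53
N = P * Q              # modulus 3233
E = 17                 # public exponent
D = 2753               # private exponent (E*D = 1 mod lcm(P-1, Q-1))

def digest(msg: bytes) -> int:
    # reduce a hash into the toy modulus range
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(msg: bytes) -> int:
    return pow(digest(msg), D, N)

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, E, N) == digest(msg)

# the transaction includes a unique sequence number, so even an exact
# skimmed copy of the account data can't become a *different* payment
txn = b"account=1234;amount=42.00;seq=0001"
sig = sign(txn)
assert verify(txn, sig)                                         # bank accepts original
assert not verify(b"account=1234;amount=999.00;seq=0002", sig)  # forgery fails
```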
the EU FINREAD terminal/standard
https://www.garlic.com/~lynn/subintegrity.html#finread
was an attempt to remove the PC authentication environment from the control of any trojans or viruses that might exist on individual PCs.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: confidence in CA Newsgroups: comp.security.misc Date: Mon, 24 Apr 2006 13:53:12 -0600
Sebastian Gottschalk writes:
the original pk-init specification for kerberos (used in the m'soft
infrastructure as well as many other authentication operations) called
for registering a public key in lieu of a password ... aka w/o digital
certificates
https://www.garlic.com/~lynn/subpubkey.html#certless
then there was a strong lobby to add a certificate-based option to the pk-init specification. i've periodically gotten apologetic email from the person claiming primary responsibility for the certificate-based option being added to pk-init.
what they realized was that they now have a certification authority based infrastructure for registering entities ... which has primarily to do with who they are.
except for trivial, no-security operations ... they then continue to require the kerberos based registration infrastructure, which involves both information about who the entity is and what permissions need to be associated with the entity. the counter argument would be that every entity in possession of any valid digital certificate should be allowed unrestricted access to every system in the world (regardless of who they are and/or what systems are involved) ... the trivial example being that everybody in the world would have unlimited access to perform financial transactions against any and all accounts that may exist anywhere in the world.
in effect, they now tend to have duplicated registration business processes ... with the certification authority registration infrastructure tending to be a subset (and duplicate) of the kerberos permission oriented registration operation. as a result, the digital certificates issued by the certification authority based operation have tended to become redundant and superfluous.
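The certificate-less registration model described above can be sketched as follows. This is a hypothetical, simplified model (all names and fields invented): the on-file account record already binds identity, public key, and permissions, so a digital certificate carrying a subset of that information adds nothing. A keyed hash stands in for real public key signature verification, purely to keep the sketch self-contained and runnable.

```python
import hashlib
import secrets

registry = {}  # principal -> on-file account record (the KDC's own database)

def register(principal, public_key, permissions):
    # identity, key, and permissions live in one on-file record --
    # everything a certificate would carry, and more
    registry[principal] = {"key": public_key, "perms": set(permissions)}

def issue_challenge():
    # a fresh random challenge defeats replay of old responses
    return secrets.token_bytes(16)

def authenticate(principal, challenge, signature, verify_fn):
    # verify the response against the *on-file* key -- no certificate
    rec = registry.get(principal)
    if rec is None or not verify_fn(rec["key"], challenge, signature):
        return None
    return rec["perms"]

def toy_verify(key, challenge, signature):
    # stand-in for real public key signature verification
    return signature == hashlib.sha256(key + challenge).hexdigest()

register("lynn", b"lynn-public-key", ["login", "mail"])
ch = issue_challenge()
response = hashlib.sha256(b"lynn-public-key" + ch).hexdigest()
assert authenticate("lynn", ch, response, toy_verify) == {"login", "mail"}
assert authenticate("lynn", ch, "bogus", toy_verify) is None
```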
there has been a lot written about various serious integrity
issues related to SSL domain name digital certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert
part of the proposals to improve the integrity of the SSL domain name certification authority operation ... is to have domain name owners register public keys (with the domain name infrastructure) when domain names are obtained. then, when entities apply for SSL domain name certificates, the applications are required to be digitally signed. the certification authority can then do a real-time retrieval of the on-file public key from the domain name infrastructure to validate the digital signature on the SSL domain name digital certificate application (improving the integrity of the SSL domain name certification process).
the catch-22 for the SSL domain name certification authority industry: if the certification authority industry can rely on real-time retrieval of on-file public keys (from the domain name infrastructure) as the root of their certification and trust ... then why wouldn't it be possible for everybody in the world to also start performing real-time retrievals of the on-file public keys (making any use of SSL domain name digital certificates redundant and superfluous)?
one could even imagine a highly optimized SSL variation where the public key and crypto options are piggy-backed on the same domain name infrastructure response that provides the domain name to ip-address mapping (totally eliminating the majority of existing SSL setup protocol chatter)
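That optimization can be sketched as a single hypothetical lookup (the records and fields below are invented for illustration, not any actual DNS record format):

```python
# one (hypothetical) name-service lookup returns the ip address, the
# on-file public key, and the supported crypto options together, so no
# separate certificate exchange is needed
dns_records = {
    "example.com": {
        "ip": "192.0.2.10",
        "public_key": "base64-encoded-onfile-key",
        "crypto_options": ["aes128", "aes256"],
    }
}

def resolve(domain):
    # a single round trip yields everything the client needs to start
    # an encrypted session -- a certificate would be redundant
    rec = dns_records[domain]
    return rec["ip"], rec["public_key"], rec["crypto_options"]

ip, key, opts = resolve("example.com")
assert ip == "192.0.2.10" and "aes128" in opts
```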
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: confidence in CA Newsgroups: comp.security.misc Date: Mon, 24 Apr 2006 14:32:12 -0600
Sebastian Gottschalk writes:
the issue regarding OCSP was that it preserved the stale, static (redundant and superfluous) digital certificate model ... by providing a (possibly real-time) response regarding whether the stale, static information in the digital certificate was still valid. it didn't do anything about providing real-time operational information involving non-stale, non-static information ... like a real-time response authorizing a transaction as being within the account limits.
credentials, certificates, licenses, diplomas, letters of introduction, letters of credit, etc. have served for centuries, providing stale, static information for relying parties that had no other method for obtaining information about the party they were dealing with.
digital certificates have been electronic analogs of these physical world counterparts, for relying parties that lack any of their own information about the party they are dealing with AND lack any online mechanism for obtaining such information.
as online environments have become more ubiquitous and prevalent, digital certificates have somewhat moved into the no-value market segment (as the offline operations that would benefit from stale, static digital certificate information have disappeared, replaced by relying parties being able to directly access real-time information about the entities they are dealing with). the no-value market segment comprises business operations where the relying parties can't justify the cost or expense of access to real-time information.
the scenario for OCSP for financial transactions ... was that the relying party could do an OCSP transaction to see whether the digital certificate was still current.
the counter was the x9.59 financial standard
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959
in both cases there was a digital signature attached to the transaction to be verified. in the x9.59 scenario, the relying party could forward the transaction and the attached digital signature to the customer's financial institution and get a real-time response either standing behind the payment or denying the payment (based not only on verifying the digital signature and whether the account still existed, but also the current credit limit and possibly recent transaction patterns that might represent fraud).
attaching a digital certificate was purely redundant and superfluous in any sort of real-time, online operation ... and provided absolutely no additional benefit.
my other observation (made at the same time as pointing out that attaching stale, static digital certificates not only didn't modernize the operation but set it back 20-30 years) was about the enormous payload bloat.
the typical payment transaction payload size is on the order of 60-80 bytes. the digital certificate oriented financial efforts going on in that time-frame were seeing payment transactions being increased by 4k-12k bytes for the appended digital certificate (i.e. payload was being increased by roughly 100 times, two orders of magnitude, for stale, static information that was redundant and superfluous).
so another effort was started in parallel about the same time the ocsp stuff started ... which was to define "compressed" digital certificates. this effort hoped to get compressed digital certificates into the 300 byte range ... representing only a factor of five times payload bloat (for stale, static, redundant and superfluous information) rather than 100 times payload bloat.
one of their suggested mechanisms was to remove all non-unique information in the digital certificate ... leaving only the information that was absolutely unique to a particular digital certificate. I pointed out that if the point of appending the digital certificate was to have it forwarded to the entity's financial institution ... for processing by the entity's financial institution ... then it was also possible to eliminate all information in the digital certificate that was already in the possession of the entity's financial institution. I could then trivially prove that the entity's financial institution would have a superset of all information in the digital certificate and compress the size of the digital certificate to zero bytes.
rather than eliminating the appending of stale, static, redundant and superfluous digital certificates to every financial transaction (in part to avoid a factor of 100 times payload bloat) ... it would be possible to compress the appended stale, static, redundant and superfluous digital certificate to zero bytes. You would still have an appended digital certificate, but since it would be zero bytes in size, any associated payload bloat would be significantly reduced.
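The payload-bloat arithmetic above, worked through with the figures from the post (a representative 70-byte payload and 8k certificate are used from the quoted 60-80 byte and 4k-12k ranges):

```python
payload = 70                # typical payment payload, bytes (60-80 range)
full_cert = 8 * 1024        # appended digital certificate, bytes (4k-12k range)
compressed_cert = 300       # hoped-for "compressed" certificate, bytes

print(full_cert / payload)        # ~117: roughly two orders of magnitude bloat
print(compressed_cert / payload)  # ~4.3: the "factor of five" bloat
print(0 / payload)                # 0.0: the zero-byte certificate argument
```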
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: confidence in CA Newsgroups: comp.security.misc Date: Mon, 24 Apr 2006 14:53:26 -0600
Sebastian Gottschalk writes:
ocsp was then suggested for that. a law enforcement officer would stop you and ask for a valid driver's license (using a chip reader to get a copy of the digital certificate) and then the officer could do an ocsp transaction to see whether it was still valid or not.
however, in that time frame, law enforcement was moving to real-time, online operations. rather than wanting to know whether the physical driver's license was still valid (i.e. the centuries-old paradigm involving credentials, certificates, licenses, diplomas, letters of credit/introduction, etc.) ... the officer needed only some database lookup value (an account number analog) ... and the officer would then do real-time access of all the "real" information.
the license was a physical object substitute for relying parties that lacked the ability to access the real information (including real-time items like outstanding warrants, tickets, etc.). in the move to a real-time, online operation ... any stale, static, distributed physical representation of that information was becoming less and less useful.
having realtime access to the real information eliminated any need for having a stale, static distributed representation of that information and/or any OCSP-style real-time operation providing simple yes/no regarding whether the stale, static distributed copies were still valid. you get rid of needing stale, static distributed copies (in the form of physical licenses or digital certificates with the same information) when you have direct, online, real-time access to the real information.
if you have direct, online, real-time access to the real information, negating the requirement for any stale, static distributed copies, then any requirement for an OCSP-style protocol is also negated (since you are dealing with the real information, and don't need to consider situations involving stale, static, redundant and superfluous copies).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 17:29:24 -0600
Brian Inglis writes:
there had been a strategic decision that vm370 would be stabilized and there would be no new releases. furthermore, nearly all of the group was required in pok in support of the mvs/xa development effort ... they were needed to build an internal-only XA-based virtual machine capability for the mvs/xa development organization. this activity to retarget the vm370 development group to a purely internal mission in support of mvs/xa development ... somewhat coincided with competitive corporate forces trying to continue vm370 and helping obtain the decision allowing me to (re-)release the dynamic adaptive stuff (as the resource manager).
the vmtool effort had a very similar static tuning paradigm to the initial vm370 implementation (in part because there was much less variety in workload and configuration when purely supporting mvs/xa development).
misc past references to decision to retargeting the vm370 development
group to the vmtool mission purely in support of internal mvs/xa
development
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002e.html#27 moving on
https://www.garlic.com/~lynn/2002m.html#9 DOS history question
https://www.garlic.com/~lynn/2002p.html#14 Multics on emulated systems?
https://www.garlic.com/~lynn/2003g.html#22 303x, idals, dat, disk head settle, and other rambling folklore
https://www.garlic.com/~lynn/2004g.html#38 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004k.html#23 US fiscal policy (Was: Bob Bemer, Computer Pioneer,Father of
https://www.garlic.com/~lynn/2004k.html#66 Question About VM List
https://www.garlic.com/~lynn/2004n.html#7 RISCs too close to hardware?
https://www.garlic.com/~lynn/2005f.html#58 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005f.html#59 Where should the type information be: in tags and descriptors
https://www.garlic.com/~lynn/2005j.html#25 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005j.html#54 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005s.html#35 Filemode 7-9?
in part because of customer demand ... some amount of vm370 support and
development effort continued ... even with much of the original vm370
development group having been retargeted to internal mvs/xa
development support. in part because of the continued customer virtual
machine demand, eventually there was a decision to repackage the
vmtool as a customer product, vm/xa ... initially for the 3081 running
in 370-xa mode. this then moved to the 3090 ... where pr/sm had been
implemented. the pr/sm implementation was somewhat in response to the
hypervisor that had been done by Amdahl. misc. past posts mentioning
pr/sm and the Amdahl hypervisor:
https://www.garlic.com/~lynn/2003.html#56 Wild hardware idea
https://www.garlic.com/~lynn/2005d.html#59 Misuse of word "microcode"
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005h.html#24 Description of a new old-fashioned programming language
https://www.garlic.com/~lynn/2005p.html#29 Documentation for the New Instructions for the z9 Processor
https://www.garlic.com/~lynn/2005u.html#40 POWER6 on zSeries?
https://www.garlic.com/~lynn/2005u.html#48 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
https://www.garlic.com/~lynn/2006c.html#9 Mainframe Jobs Going Away
https://www.garlic.com/~lynn/2006e.html#15 About TLB in lower-level caches
as part of the continued development of the vm/xa offering as a customer product, three different competitive scheduling product proposals emerged. at one point i observed that all the resources spent on resolution and escalation meetings (regarding the three competitive scheduler implementation proposals) were significantly larger than needed to actually implement all three competing solutions and perform extensive benchmark comparisons.
while all of that was going on ... i had observed that system configurations had changed from being significantly real storage and/or processor constrained to being significantly i/o constrained. i did quite a bit of work significantly enhancing the resource manager to improve its ability in i/o constrained environments ... but that never shipped in a product. at one point i characterized the transformation as relative system disk performance having declined by a factor of 10 over a period of years (i.e. other resources increased by a factor of 50, but disks only improved by a factor of five ... or less). i've made past references to the anecdote about the disk division adversely reacting to my observation and assigning the division's performance and modeling group to refute the statement. they subsequently came back and reported that i had actually understated the change.
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2002f.html#14 Mail system scalability (Was: Re: Itanium troubles)
https://www.garlic.com/~lynn/2002l.html#29 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002l.html#34 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2003b.html#22 360/370 disk drives
https://www.garlic.com/~lynn/2003k.html#22 What is timesharing, anyway?
https://www.garlic.com/~lynn/2004b.html#54 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004d.html#3 IBM 360 memory
https://www.garlic.com/~lynn/2004d.html#45 who were the original fortran installations?
https://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
https://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004l.html#12 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#52 CKD Disks?
https://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2004q.html#27 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#85 The TransRelational Model: Performance Concerns
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005d.html#71 Metcalfe's Law Refuted
https://www.garlic.com/~lynn/2005g.html#14 DOS/360: Forty years
https://www.garlic.com/~lynn/2005j.html#51 Q ALLOC PAGE vs. CP Q ALLOC vs ESAMAP
https://www.garlic.com/~lynn/2005k.html#34 How much RAM is 64K (36-bit) words of Core Memory?
https://www.garlic.com/~lynn/2005l.html#41 25% Pageds utilization on 3390-09?
https://www.garlic.com/~lynn/2005n.html#18 Code density and performance?
https://www.garlic.com/~lynn/2005r.html#0 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2005u.html#44 POWER6 on zSeries?
https://www.garlic.com/~lynn/2006.html#4 Average Seek times are pretty confusing
https://www.garlic.com/~lynn/2006e.html#45 using 3390 mod-9s
https://www.garlic.com/~lynn/2006f.html#1 using 3390 mod-9s
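The "factor of 10" characterization is straightforward relative-performance arithmetic, using the factors quoted above:

```python
# if cpu/memory throughput improved 50x while disk throughput improved
# only 5x, then disk performance *relative to the rest of the system*
# declined by 50/5 = 10x over the period
system_improvement = 50   # other resources, per the post
disk_improvement = 5      # disks, per the post ("or less")

relative_decline = system_improvement / disk_improvement
print(relative_decline)  # 10.0
```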
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Intel vPro Technology Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 21:28:42 -0600
Anne & Lynn Wheeler writes:
Intel vPro Technology Security:
http://www.intel.com/vpro/security.htm
in some sense virtualization technology is being applied as countermeasures to the extreme vulnerability of many PCs to viruses and trojans. In that sense it is also attempting to secure the PC as an authentication environment (among other things).
this also goes along with my recent account
https://www.garlic.com/~lynn/2006h.html#13 Security
https://www.garlic.com/~lynn/2006h.html#14 Security
of lots of news stories over at least the past year about using virtualization (the new, really old thing) as a security mechanism and countermeasure to various threats and vulnerabilities.
misc. stray comments about fraud, exploits, vulnerabilities and
threats
https://www.garlic.com/~lynn/subintegrity.html#fraud
and numerous posts on the subject of assurance ... that somewhat
started with my talk at an assurance panel in the trusted computing
track at the 2001 spring Intel Developers Forum
https://www.garlic.com/~lynn/subintegrity.html#assurance
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Intel vPro Technology Newsgroups: alt.folklore.computers Date: Mon, 24 Apr 2006 22:28:24 -0600
Anne & Lynn Wheeler writes:
and a comment dating to last summer:
Fear-commerce, something called Virtualisation, and Identity Doublethink.
http://www.financialcryptography.com/mt/archives/000513.html
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Tue, 25 Apr 2006 09:05:58 -0600
jmfbahciv writes:
it is somewhat like capacity planning ... w/o fundamental instrumentation and measurement of what currently goes on, it is probably not possible to understand what might happen if anything changes.
somewhat similarly, the discussion regarding improvements to road use metrics and accounting deals with making information even more accurate. just because there may be major flaws in other parts of the infrastructure (and possibly only in specific jurisdictions) doesn't mean that these specific issues shouldn't be addressed at all.
reference to the comptroller general's talk
http://www.gao.gov/cghome/nat408/index.html America's Fiscal Future
past posts referring to the comptroller general's talk
https://www.garlic.com/~lynn/2006f.html#41 The Pankian Metaphor
https://www.garlic.com/~lynn/2006f.html#44 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#9 The Pankian Metaphor
https://www.garlic.com/~lynn/2006g.html#27 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#2 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#3 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#4 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#17 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#19 The Pankian Metaphor
i used my own counter argument to the accountability paradigm at a financial sector conference in europe last fall, where many of the European executives were complaining that sarbanes-oxley was causing them significantly increased costs (and pain).
my observation was that this particular accountability paradigm basically looks for inconsistencies in a firm's records. however, given the prevalent use of IT technology for maintaining corporate records ... a reasonably intelligent fraudulent endeavor should be able to make sure that the corporate IT technology generates absolutely consistent corporate records. increasing the amount of auditing isn't likely to be effective in such a situation.
It is somewhat like the assumptions related to the benefits of
multi-factor authentication; the assumption is that much of the
benefit comes because the different factors should have different
threats and vulnerabilities. This assumption is negated if all the
authentication factors being used have a common threat or vulnerability.
https://www.garlic.com/~lynn/subintegrity.html#3factor
In the case of audits and accountability, there is some assumption that records from different corporate sources may show up inconsistencies when fraud is involved. That assumption is negated if corporate IT technology can be used to maintain and generate all corporate records and thereby guarantee consistency. Built into that audit and accountability paradigm is the assumption of records from different sources (and looking for inconsistencies).
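The independence assumption behind multi-factor authentication can be illustrated with a toy probability model (the probabilities are purely illustrative numbers, not measurements):

```python
p_pin = 0.01      # chance an attacker defeats the PIN factor alone
p_token = 0.01    # chance an attacker defeats the token factor alone

# with truly independent factors, the attacker must defeat each one
independent = p_pin * p_token   # ~0.0001 -- the hoped-for benefit

# a common vulnerability (e.g. one trojan capturing both factors)
# collapses the factors into a single point of failure
common = p_pin                  # ~0.01 -- no better than one factor

print(independent, common)
```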
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Tue, 25 Apr 2006 09:51:19 -0600
Anne & Lynn Wheeler writes:
we were asked to consult with this small client/server startup
that wanted to do payments on their server
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
they had this technology they called ssl that they wanted to use in conjunction with the payments.
the original SSL scenario was that the host domain name you thought you were talking to wasn't actually who you were talking to. as a result, webservers got certificates declaring their domain name. you typed a domain name into your browser ... and the browser connected you to the server. the server then supplied its certificate ... the browser validated the certificate and then checked the domain name you typed against the domain name in the certificate.
however, early on, most of the merchant webservers found that using SSL cut their capacity by 80-90 percent ... they could support five times as much activity if they didn't use SSL. so SSL came to be used just for checkout/payment, and the domain name provided to the browser no longer had any SSL guarantees. eventually the person gets to checkout and clicks on the checkout/pay button. the checkout/pay button supplies a domain name that goes off to some payment webpage, which does the SSL thing. the issue now is that it would take a really dumb crook to put a domain name on the checkout/pay button that was different from the domain name in the SSL certificate they were supplying. there is an implicit assumption in the SSL infrastructure that the domain name for getting to the server comes from a different source than the server supplying the SSL certificate. if they come from the same source ... then all bets are off (you are just validating that the server is able to prove it is who it claims to be ... as opposed to proving that it is who you think it to be).
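The browser-side check described above can be sketched in a few lines: the domain the user asked for is compared against the domain in the server's certificate. The check only helps when those two values come from independent sources; if the merchant's own checkout button supplies the domain, a crook simply makes them match. (Domain names here are invented for illustration.)

```python
def ssl_domain_check(requested_domain: str, cert_domain: str) -> bool:
    # the SSL guarantee: "you reached who you asked for"
    return requested_domain == cert_domain

# user types the domain themselves: a spoofed server fails the check
assert not ssl_domain_check("mybank.example", "crook.example")

# but when the checkout/pay button supplies the domain, the crook
# controls both sides of the comparison -- the check passes and the
# guarantee proves nothing
button_domain = "crook.example"   # embedded in the merchant's page
assert ssl_domain_check(button_domain, "crook.example")
```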
misc. past discussions about how using SSL only for the checkout/pay
phase subverts fundamental assumptions about the use of SSL:
https://www.garlic.com/~lynn/aepay10.htm#63 MaterCard test high-tech payments
https://www.garlic.com/~lynn/aadsm14.htm#5 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm19.htm#26 Trojan horse attack involving many major Israeli companies, executives
https://www.garlic.com/~lynn/aadsm20.htm#6 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#9 the limits of crypto and authentication
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm21.htm#22 Broken SSL domain name trust model
https://www.garlic.com/~lynn/aadsm21.htm#36 browser vendors and CAs agreeing on high-assurance certificates
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm21.htm#40 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/2005g.html#44 Maximum RAM and ROM for smartcards
https://www.garlic.com/~lynn/2005l.html#19 Bank of America - On Line Banking *NOT* Secure?
https://www.garlic.com/~lynn/2005m.html#0 simple question about certificate chains
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/2006c.html#36 Secure web page?
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
for a little more drift ... the catch-22 issue associated with ssl
digital certificates
https://www.garlic.com/~lynn/aadsm8.htm#softpki6 Software for PKI
https://www.garlic.com/~lynn/aadsm13.htm#32 How effective is open source crypto? (bad form)
https://www.garlic.com/~lynn/aadsm14.htm#39 An attack on paypal
https://www.garlic.com/~lynn/aadsm15.htm#25 WYTM?
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#43 SSL/TLS passive sniffing
https://www.garlic.com/~lynn/aadsm19.htm#13 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm20.htm#31 The summer of PKI love
https://www.garlic.com/~lynn/aadsm20.htm#43 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#39 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/2002n.html#2 SRP authentication for web app
https://www.garlic.com/~lynn/2003.html#63 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003.html#66 SSL & Man In the Middle Attack
https://www.garlic.com/~lynn/2003d.html#40 Authentification vs Encryption in a system to system interface
https://www.garlic.com/~lynn/2004g.html#6 Adding Certificates
https://www.garlic.com/~lynn/2004h.html#58 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004i.html#5 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2005e.html#45 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005e.html#51 TLS-certificates and interoperability-issues sendmail/Exchange/postfix
https://www.garlic.com/~lynn/2005g.html#1 What is a Certificate?
https://www.garlic.com/~lynn/2005g.html#9 What is a Certificate?
https://www.garlic.com/~lynn/2005i.html#3 General PKI Question
https://www.garlic.com/~lynn/2005i.html#7 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005o.html#41 Certificate Authority of a secured P2P network
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2006c.html#38 X.509 and ssh
https://www.garlic.com/~lynn/2006d.html#29 Caller ID "spoofing"
https://www.garlic.com/~lynn/2006f.html#33 X.509 and ssh
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: 64-bit architectures & 32-bit instructions Newsgroups: comp.arch Date: Tue, 25 Apr 2006 14:11:39 -0600
eugene@cse.ucsc.edu (Eugene Miya) writes:
Our garlic web pages see a fairly large number of daily hits from various search engines and other web crawlers ... including some project or another that appears to be somewhere inside ibm. we suspect that, because of the extremely high ratio of "hrefs=" (especially in the rfc index and the merged glossaries), the pages are being used as some sort of test case.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Tue, 25 Apr 2006 14:30:25 -0600
jmfbahciv writes:
On the move; The deal will create the world's largest toll-road
operating company with a 6,740km network of highways in Europe and
America
http://www.economist.com/agenda/displaystory.cfm?story_id=E1_GRQSGSR
...
If the latter ... do the states continue to keep their road use fuel tax? (aka any selling off of roads to private interests potentially represents both the capital from the sale as well as being able to reallocate various other collected road use fees, like fuel tax).
I thought I remembered something about the mass pike and its tolls. nominally, tolls are put in place to pay off the original road construction bonds sold to raise money to build the road (whatever the federal government wasn't otherwise subsidizing). most places then discontinue the tolls once the bonds are paid off. What I remember was that they decided to keep collecting the tolls on the mass pike even after the road construction bonds were paid off.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Taxes Newsgroups: alt.folklore.computers Date: Tue, 25 Apr 2006 14:53:24 -0600Anne & Lynn Wheeler writes:
I may have tuned in and out of pieces of the program ... it may have been India graduating 300,000 and Russia was some other number.
although ... not exactly inconsistent with the referenced numbers; couple references from today.
The Continuing American Decline in CS
http://developers.slashdot.org/developers/06/04/25/139203.shtml
A Red Flag In The Brain Game; America's dismal showing in a contest of
college programmers highlights how China, India, and Eastern Europe
are closing the tech talent gap
http://www.businessweek.com/magazine/content/06_18/b3982053.htm?campaign_id=bier_tca
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Taxes Newsgroups: alt.folklore.computers Date: Tue, 25 Apr 2006 15:19:31 -0600Al Balmer writes:
i've run across the annual graduate numbers on the nsf.gov web site in the past. i didn't find them with a quick check just now ... but
Science and Engineering Indicators 2006; America's Pressing Challenge
- Building A Stronger Foundation
http://www.nsf.gov/statistics/seind06/
little more searching turns up this overview page:
http://www.nsf.gov/statistics/showpub.cfm?TopID=2&SubID=5
and this is the page I remember running across before
http://www.nsf.gov/statistics/infbrief/nsf06301/
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Wed, 26 Apr 2006 08:11:54 -0600jmfbahciv writes:
misc. past posts mentioning mass pike
https://www.garlic.com/~lynn/2002i.html#28 trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#35 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002i.html#36 pop density was: trains was: Al Gore and the Internet
https://www.garlic.com/~lynn/2002j.html#68 Killer Hard Drives - Shrapnel?
https://www.garlic.com/~lynn/2002l.html#67 The problem with installable operating systems
https://www.garlic.com/~lynn/2002l.html#69 The problem with installable operating systems
https://www.garlic.com/~lynn/2003j.html#11 Idiot drivers
https://www.garlic.com/~lynn/2006g.html#49 The Pankian Metaphor
https://www.garlic.com/~lynn/2006h.html#36 The Pankian Metaphor
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe vs. xSeries Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 26 Apr 2006 11:42:06 -0600zansheva@yahoo.com wrote:
for a whole slew of reasons ... the mainframe systems of the 60s tended to evolve a paradigm that provided clearcut design & implementation separation of system command&control from system use. you saw this further evolving in the late 60s with much of the command&control infrastructure starting to be automated. part of this was the number of commercial interactive timesharing services that provided 7x24, continuous operation ... with offshift being unattended, a lights-out kind of environment
https://www.garlic.com/~lynn/submain.html#timeshare
the implicit separation of system command&control (as well as automating much of the command&control functions) from system use has tended to permeate into all aspects of the design and implementations over the past 40 years.
the evolution of the desktop systems partly included the enormous simplification that could be achieved if there was no differentiation of the system command&control from the system use (i.e. single user system where the same person that used the system was also responsible for the command&control of the system).
many of the current things commonly referred to as "servers" have tended to be platforms that evolved from the desktop paradigm with no strong, clearcut differentiation between the command&control of the system (along with little or no automation of the command&control functions) and the use of the system. many such "servers" may have a patchwork facade applied on top of the underlying infrastructure to try and create the appearance that there is fundamental separation of command&control from use (with some degree of automation). a trivial scenario is the frequent situation where a remote user may acquire system command&control capability (via a wide variety of different mechanisms)
the lack of clearcut and unambiguous separation of system command&control from system use ... permeating all aspects of design and implementation can lead to large number of integrity and security problems.
for instance, would you prefer to have the financial infrastructure (that you regularly use) managed by 1) a dataprocessing operation that has strongly differentiated system command&control from system use, or 2) a system that has constant and frequent reports of vulnerabilities and exploits?
then there are all sorts of feature/function/capability that will tend to evolve over a period of forty years ... where the operational environment includes basic premise of unattended operation as well as strong separation of system command&control from system use.
slight drift ... lots of posts on fraud, exploits, vulnerabilities,
and threats
https://www.garlic.com/~lynn/subintegrity.html#fraud
misc. past posts raising the issue of answering questions that appear to
be homework:
https://www.garlic.com/~lynn/2001.html#70 what is interrupt mask register?
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001c.html#11 Memory management - Page replacement
https://www.garlic.com/~lynn/2001c.html#25 Use of ICM
https://www.garlic.com/~lynn/2001k.html#75 Disappointed
https://www.garlic.com/~lynn/2001l.html#0 Disappointed
https://www.garlic.com/~lynn/2001m.html#0 7.2 Install "upgrade to ext3" LOSES DATA
https://www.garlic.com/~lynn/2001m.html#32 Number of combinations in five digit lock? (or: Help, my brain hurts)
https://www.garlic.com/~lynn/2002c.html#2 Need article on Cache schemes
https://www.garlic.com/~lynn/2002f.html#32 Biometric Encryption: the solution for network intruders?
https://www.garlic.com/~lynn/2002f.html#40 e-commerce future
https://www.garlic.com/~lynn/2002g.html#83 Questions about computer security
https://www.garlic.com/~lynn/2002l.html#58 Spin Loop?
https://www.garlic.com/~lynn/2002l.html#59 Spin Loop?
https://www.garlic.com/~lynn/2002n.html#13 Help! Good protocol for national ID card?
https://www.garlic.com/~lynn/2002o.html#35 META: Newsgroup cliques?
https://www.garlic.com/~lynn/2003d.html#27 [urgent] which OSI layer is SSL located?
https://www.garlic.com/~lynn/2003j.html#34 Interrupt in an IBM mainframe
https://www.garlic.com/~lynn/2003m.html#41 Issues in Using Virtual Address for addressing the Cache
https://www.garlic.com/~lynn/2003m.html#46 OSI protocol header
https://www.garlic.com/~lynn/2003n.html#4 Dual Signature
https://www.garlic.com/~lynn/2004f.html#43 can a program be run withour main memory ?
https://www.garlic.com/~lynn/2004f.html#51 before execution does it require whole program 2 b loaded in
https://www.garlic.com/~lynn/2004f.html#61 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004h.html#47 very basic quextions: public key encryption
https://www.garlic.com/~lynn/2004k.html#34 August 23, 1957
https://www.garlic.com/~lynn/2005h.html#1 Single System Image questions
https://www.garlic.com/~lynn/2005m.html#50 Cluster computing drawbacks
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006b.html#2 Mount a tape
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe vs. xSeries Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 26 Apr 2006 11:53:55 -0600ref:
past posts raising the issue about desktop paradigm not having strong,
clearcut requirement separating system command&control from system
use ... and that distinction might permeate into all aspects of system
design and implementation
https://www.garlic.com/~lynn/98.html#51 Mainframes suck? (was Re: Possibly OT: Disney Computing)
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#197 Computing As She Really Is. Was: Re: Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2001.html#43 Life as a programmer--1960, 1965?
https://www.garlic.com/~lynn/2001k.html#14 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2002.html#1 The demise of compaq
https://www.garlic.com/~lynn/2002h.html#73 Where did text file line ending characters begin?
https://www.garlic.com/~lynn/2002o.html#14 Home mainframes
https://www.garlic.com/~lynn/2003h.html#56 The figures of merit that make mainframes worth the price
https://www.garlic.com/~lynn/2003n.html#46 What makes a mainframe a mainframe?
https://www.garlic.com/~lynn/2004.html#40 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004.html#41 AMD/Linux vs Intel/Microsoft
https://www.garlic.com/~lynn/2004b.html#10 Mars Rover Not Responding
https://www.garlic.com/~lynn/2004o.html#21 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005m.html#51 Cluster computing drawbacks
https://www.garlic.com/~lynn/2005q.html#2 Article in Information week: Mainframe Programmers Wanted
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe vs. xSeries Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 26 Apr 2006 12:31:43 -0600Anne & Lynn Wheeler wrote:
if you don't have a clearly defined and unambiguous separation of system command&control from system use ... then it is much more difficult to create a comprehensive threat model along with the countermeasures for the threats ... the lack of which can contribute to a large number of integrity and security vulnerabilities.
if you are trying to have automated server operation, with minimized operational and support costs ... then having a clearly defined command&control operation immensely contributes to defining the feature/function that need to be optimized. if this has existed for 40 years ... then the amount of command&control feature/function optimization will tend to have gone through enormous amount of evolution and numerous generations, tending to result in more sophisticated and comprehensive solutions.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Intel VPro Newsgroups: comp.arch Date: Wed, 26 Apr 2006 12:46:49 -0600David Brown writes:
many of the systems that had implicit single user desktop use have
poor separation between command&control of the system from
system use.
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#41 Mainframe vs. xSeries
a couple other recent postings mentioning vPro and security
https://www.garlic.com/~lynn/2006h.html#31 Intel vPro Technology
https://www.garlic.com/~lynn/2006h.html#32 Intel vPro Technology
virtual machine technology can attempt to retrofit stronger separation between system command&control and system use ... to an environment that doesn't natively have such strong separation.
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe vs. xSeries Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Wed, 26 Apr 2006 13:10:57 -0600Anne & Lynn Wheeler wrote:
there have been numerous news articles over at least the past year about being able to utilize virtualization technology to retrofit stronger integrity and security (some separation of system command&control from system use) to systems where it wasn't part of the fundamental infrastructure.
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Thu, 27 Apr 2006 07:06:23 -0600jmfbahciv writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: blast from the past, tcp/ip, project athena and kerberos Newsgroups: alt.folklore.computers Date: Thu, 27 Apr 2006 09:35:32 -0600Date: 22 March 1988, 17:05:12 EST
email was from CAS, same person for whom compare&swap instruction was
named
https://www.garlic.com/~lynn/subtopic.html#smp
who I had worked with at the science center in the early 70s
https://www.garlic.com/~lynn/subtopic.html#545tech
Jerry Saltzer and Steve Dyer made the trip also.
misc. past posts mentioning kerberos
https://www.garlic.com/~lynn/subpubkey.html#kerberos
congestion control ... refers to slow-start ... misc. past posts
mentioning slow-start
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002b.html#4 Microcode? (& index searching)
https://www.garlic.com/~lynn/2002c.html#54 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2003.html#55 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003.html#59 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003k.html#57 Window field in TCP header goes small
https://www.garlic.com/~lynn/2003l.html#42 Thoughts on Utility Computing?
https://www.garlic.com/~lynn/2003p.html#13 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004k.html#8 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#12 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#13 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2005g.html#4 Successful remote AES key extraction
https://www.garlic.com/~lynn/2005q.html#22 tcp-ip concept
https://www.garlic.com/~lynn/2005q.html#28 tcp-ip concept
https://www.garlic.com/~lynn/2005q.html#37 Callable Wait State
https://www.garlic.com/~lynn/2006d.html#21 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006g.html#18 TOD Clock the same as the BIOS clock in PCs?
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: guess the date Newsgroups: alt.folklore.computers Date: Thu, 27 Apr 2006 09:56:42 -0600the following are summaries of some industry articles, guess the date:
Tools for creating business programs faster are on the way.

BofA "can of worms" story about replacing aging business software
- Dual runs on new/old systems
- Major customers pulled out of bank
- Statements fell behind 9 months
- Bank gave up
  . 2.5 Million lines of code
  . $20 million investment
  . $60 million to correct the difficulties
  . Vice presidents of technology and trust departments resigned
- Extreme example, but becoming commonplace

There's a mounting shortage of good software
- custom-made software for big corporations
- US companies are increasingly vulnerable to competition
  . Europe and Japan have head-start to automate software development
- Alarm at the Pentagon
  . Military cannot get enough reliable software quickly
  . Mar 30, House Armed Services Committee cut all procurement funding for "over the horizon" radar
  . No point building the hardware until the software is ready

Software powers:
- on-board computers in new car engines, copiers, microwave ovens
- stoplights that control city traffic
- stocks for Wall Street traders
- loans for bank officers
- routing for
  . telephone calls
  . truck fleets
  . factory production flow
- Hard to find anything that doesn't depend on software
  . Banks would close in 2 days
  . distribution companies in 4 days
  . factories in a week

Widening gap between hardware and software performance
- hardware doubles every 2-3 years
- customers expect to tackle tougher jobs
- tasks require more complex software
- no software counterpart to semiconductor technology

Programmers "grind out instructions" at 1-2 lines per hour
- Akin to building a 747 using stone knives
- Manpower shortage isn't helping
- 25% growth in the average length of programs
- 12% growth in overall demand for software
- 4% growth in number of programmers
- 3 year backlog for identified application programs
- 32,000 workdays to finish average business-software package
- Finished programs get tossed out
  . When they're ready, they're obsolete
- Projecting the trend means nearly everyone will become a programmer
- Something has to give

Means for dealing with the crunch are falling into place

CASE: Computer Aided Software Engineering
- helps automate the job of writing programs
- speeds work; improves quality

Texas Instruments, one of 4 leading suppliers of CASE tools
- world-wide demand will hit $2 billion by xxxx

New "Object Oriented" programming languages
- Replacing Cobol, C, Pascal
- By xxxx, expected to dominate many business areas
- Offer the best of two worlds:
  . ease of use
  . unmatched versatility
- "The emerging thing" - WH Gates, Microsoft

Experts believe US information managers lack the money and will to implement new technology
- Top executives aren't involved; you can't touch or smell software

Europe and Japan are building momentum
- Britain and France pioneered the concept of CASE
- European Community has spent $690 million since 1983 for development
- Japan Sigma, 3 years old, $200 million effort
  . builds on previous government projects
  . most Japanese experts doubt Sigma will score a breakthru
  . may be irrelevant; race doesn't go to most original or creative
  . Real test is how soon technology is implemented
  . Fujitsu and Hitachi have more than 10,000 programmers each working on "white collar assembly lines"
  . Japanese willing to invest for the very long term

(There's no quick fix to the software problem)

Automating software development means a drastic cultural upheaval
- requires overhaul of programming curriculum taught at universities
- there is little choice; businesses depend on software
  . Many programs today are "downright rickety"
  . 60-70% of budget for maintenance of programs
  . 80% by xxxx
  . You make an innocent change, and the whole thing collapses
- Why not scrap old programs?
  . Few know what an existing program does
  . New software is notoriously bug-ridden
  . The larger the program, the higher the risk that glitches will go undetected for weeks/months after installation

"Programmers regard themselves as artists. As such, they consider keeping accurate records of their handiwork on par with washing ash trays."

Pentagon, 1975 awarded contract to develop new programming language (ADA)
- CII-Honeywell Bull
- Object Oriented
- Required for all "Mission Critical" software
- Resistance (from contractors) ceased in 1987
  . More than 120 compilers
  . Libraries of routines
  . Success stories started rolling in
  . TRW, Harris, GTE, Boeing, Raytheon
- Raytheon
  . Nearly two thirds of new programs consist of reusable modules
  . 10% reduction in design costs
  . 50% reduction in code-generation and debugging
  . 60% reduction in maintenance
- Honeywell Bull
  . 33% reduced costs overall
  . 30% increase in Aerospace and Defense software

Object Oriented Programming Systems (OOPS):
- takes reusability one step further
- each module wraps instructions around the specific data that the software will manipulate
- the two elements constitute an "object"
- data and program commands are always handled together
- maintaining complex software is simpler
- Microsoft expects to offer products incorporating this approach soon

"So simple to learn, they will foster a "huge market" among programmers" - Gates
- Technique is especially useful for managing massive amounts of data
- Mainstay Software Corp, Denver Colorado
  . Object Oriented Data Base runs on a PC
  . Often out-shines mainframe based systems
  . a "breeze to tailor for individual needs"

Smalltalk:
- First object-oriented program
- Alan Kay, Xerox PARC
- Widely criticized because it ran slowly

Objective-C (Stepstone Corp) and C++ (AT&T)
- Software industry needs mostly "off-the-shelf components"
- a few custom-designed circuits give "personality" to a system
- software can then keep up with ever-changing business conditions

So far, big spenders have been companies where software is a profit center (EDS - Electronic Data Systems)

Most of the 100 USA CASE ventures are tiny, aimed at narrow markets

European Commission hopes to hit a "grand slam"
- comprehensive "environment"
- automated, integrated tools
- all aspects of software development
- Sweeping standards already defined by 4-year Esprit project
  . Bull (France)
  . GEC and ICL (Britain)
  . Nixdorf and Siemens (West Germany)
  . Olivetti (Italy)
- Next phase will start shortly
  . "software factory" for all facets of programming

Software Engineering Institute, Carnegie-Mellon
- "We're not going to automate away the problem with technology" - Larry Druffel, director
- Need to automate the intellectual enterprise required to conceive the programs
- SEI is funded by Pentagon
- $125 million Stars Project
  . Knowledge base of "hotshot programmers"
  . Develop smart CASE system to bypass programmers
  . End users would do their own software

Software Productivity Consortium, Reston Virginia
- SPC founded by 14 major aerospace and electronics companies
  . Boeing, Martin Marietta, McDonnell Douglas
  . each contributed $1 million per year
  . 155 researchers
- satisfied to wring maximum efficiency from reusable software
- reusable software to create quick, bare-bones prototypes
  . "Look and Feel" of the final program
  . Feedback from users of the software

"Fit the software to the user"
- 80% of system total life-cycle costs stem from changes made to initial software design
- TRW constructed a Rapid Prototyping Center
  . dummy workstations and consoles
  . Artificial Intelligence software
  . prototype closely simulates final product performance

Never displace the "software Picassos"
- always a need for totally novel solutions
- The greater need at the moment is for house painters

... snip ...
Answer:
88/05/11
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Chant of the Trolloc Hordes Newsgroups: alt.folklore.computers Date: Thu, 27 Apr 2006 12:55:44 -0600"Charlie Gibbs" writes:
few old posts mentioning original relational/sql activity and
some wars with the 60s databases
https://www.garlic.com/~lynn/2002l.html#71 Faster seeks (was Re: Do any architectures use instruction
https://www.garlic.com/~lynn/2003c.html#75 The relational model and relational algebra - why did SQL become the industry standard?
https://www.garlic.com/~lynn/2003c.html#78 The relational model and relational algebra - why did SQL become the industry standard?
https://www.garlic.com/~lynn/2003f.html#44 unix
https://www.garlic.com/~lynn/2004e.html#15 Pre-relational, post-relational, 1968 CODASYL "Survey of Data Base Systems"
https://www.garlic.com/~lynn/2004e.html#23 Relational Model and Search Engines?
https://www.garlic.com/~lynn/2004o.html#67 Relational vs network vs hierarchic databases
https://www.garlic.com/~lynn/2004q.html#23 1GB Tables as Classes, or Tables as Types, and all that
https://www.garlic.com/~lynn/2004q.html#31 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#23 Network databases
https://www.garlic.com/~lynn/2005.html#24 Network databases
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005q.html#23 Logon with Digital Siganture (PKI/OCES - or what else they're called)
https://www.garlic.com/~lynn/2005s.html#9 Flat Query
https://www.garlic.com/~lynn/2006e.html#46 using 3390 mod-9s
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe vs. xSeries Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Thu, 27 Apr 2006 16:03:05 -0600"David Wade" writes:
supposedly the announcements today will moderate some of the entry and mid-range situations ... i've seen lots of complaints over the past several years that the various mainframe emulators running on wintel servers have been the only offering in that market niche (and the software pricing really bites them)
IBM expands mainframe into mid-market with breakthrough business class
mainframe to target growth of SOA transactions and data;
Shanghai lab to develop mainframe software
http://www.enterprisenetworksandservers.com/newsflash/art.php?571
When an Inexpensive Mainframe Isn't an Oxymoron
http://www.serverwatch.com/news/article.php/3602091
IBM to offer mainframe for the midmarket
http://www.infoworld.com/article/06/04/27/77826_HNibmmainframe_1.html
IBM to offer mainframe for the midmarket
http://www.computerworld.com/hardwaretopics/hardware/mainframes/story/0,10801,110925,00.html
IBM Unveils 'Business Class' Mainframe
http://news.yahoo.com/s/ap/20060427/ap_on_hi_te/ibm_mainframe_lite;_ylt=A0SOwlt2LVFEqgAAhAQjtBAF;_ylu=X3oDMTA5aHJvMDdwBHNlYwN5bmNhdA--
IBM launches low-cost mainframe
http://www.techworld.com/opsys/news/index.cfm?newsID=5886&pagtype=all
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: nntp and ssl Newsgroups: gnu.emacs.gnus Date: Fri, 28 Apr 2006 11:54:35 -0600muk@msu.edu (m. kolb) writes:
the first couple times i had to kill emacs and restart. i eventually did a script that i run in another window that gets the process number of openssl and kills it. it turns out that it kills openssl and then after a couple seconds checks to see if it has to kill openssl a second time (about half the time, after it has killed openssl while reading the active file, gnus continues a little bit and then freezes a second time).
it then is good ... until the next get-new-news, at which point there is about a 1-in-3 chance it will repeat
it also freezes about 95 percent of the time when posting ... i run the kill script; gnus then completes, saying that the posting failed; however the post was actually made (like is going to happen when this is posted)
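the kill script itself isn't shown above; the following is a hypothetical reconstruction of the behavior it describes (kill the stuck helper, wait a couple seconds, then check whether a second kill is needed). it assumes pgrep/pkill are available in place of parsing ps output for the process number:

```shell
#!/bin/sh
# Sketch of the described workaround (assumed, not the original script):
# kill any running process with the given name, wait a couple seconds,
# then check for and kill a fresh instance if gnus wedged a second time.

kill_stuck() {
    name="$1"
    # first kill: unwedge gnus (e.g. while it is reading the active file)
    pkill -x "$name" 2>/dev/null || return 0    # nothing running; done
    sleep 2
    # about half the time gnus continues a little and then freezes again
    # behind a fresh helper process, so check and kill a second time
    if pgrep -x "$name" >/dev/null 2>&1; then
        pkill -x "$name"
    fi
    return 0
}
```

run as, say, `kill_stuck openssl` from another window whenever gnus freezes; the posting case behaves the same way (gnus then reports the post failed even though it actually went out).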
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Chant of the Trolloc Hordes Newsgroups: alt.folklore.computers Date: Fri, 28 Apr 2006 13:51:53 -0600scott@slp53.sl.home (Scott Lurndal) writes:
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Need Help defining an AS400 with an IP address to the mainframe Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Fri, 28 Apr 2006 14:48:58 -0600patrick.okeefe@ibm-main.lst (Patrick O'Keefe) writes:
specific reference regarding major FS objectives:
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System
despite some of the comments in the above reference ... at the time, I
drew some comparisons between FS project and a cult film that had been
playing non-stop down in central sq. ... at that time I was with the
science center in tech sq., a few blocks from central sq.
https://www.garlic.com/~lynn/subtopic.html#545tech
in the early SNA time-frame ... my wife and Bert Moldow produced
an alternative architecture that actually represented networking
(being forced to use the label peer-to-peer) ... referred to
as AWP39. a few past posts mentioning AWP39:
https://www.garlic.com/~lynn/2004n.html#38 RS/6000 in Sysplex Environment
https://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
https://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
https://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005p.html#17 DUMP Datasets and SMS
https://www.garlic.com/~lynn/2005q.html#27 What ever happened to Tandem and NonStop OS ?
https://www.garlic.com/~lynn/2005u.html#23 Channel Distances
my wife went on to serve a stint in POK in charge of loosely-coupled
architecture ... where she had numerous battles with the SNA
organization. she was also responsible for Peer-Coupled Shared Data
architecture ... which initially saw major uptake with ims hot-standby
... and later in parallel sysplex.
https://www.garlic.com/~lynn/submain.html#shareddata
at the time APPN was attempting to be announced, one of the primary persons behind APPN and I happened report to the same executive. The SNA organization had non-concurred with the announcement of APPN and the issue was being escalated. After six weeks or so, there was finally approval for the announcement of APPN (out of corporate) ... but only after the announcement letter was carefully rewritten to avoid implying any possible connection between APPN and SNA. The original APPN architecture was "AWP164".
also, almost every organization that ever built a box to the "official" (even detailed internal) SNA specifications found that it wouldn't actually work with NCP ... it first had to be tweaked in various ways to make it work.
supposedly the drive for FS ... as mentioned in the previous reference
https://www.garlic.com/~lynn/2000f.html#16 FS - IBM Future System
had been the appearance of the clone controllers.
this is one of the things that I had been involved with as an
undergraduate in the 60s. I had tried to make the 2702
telecommunication controller do something that it couldn't quite
actually do. this somewhat prompted a project at the univ. to build
its own telecommunication controller: reverse engineer the channel
interface, build a channel interface card, and program an Interdata/3
minicomputer to emulate 2702 functions. Somebody wrote an article
blaming four of us for starting the clone controller business
https://www.garlic.com/~lynn/submain.html#360pcm
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe vs. xSeries Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 29 Apr 2006 08:03:08 -0600jmfbahciv writes:
however, when attachment of desktop systems to networks expanded from a purely local, non-hostile environment to a world-wide, and potentially extremely hostile, environment, some of the partitioning requirements re-emerged. to some degree the network attachment allowed remote attackers to reach into many of the desktop systems ... these had no provisions for separation/partitioning of system use and system command&control (they hadn't been designed and built from the ground up with extensive countermeasures to hostile attacks).
in the past, virtualization has been used for variety of different purposes; 1) timesharing services, 2) simplification of testing operating systems, 3) using different operating systems and operating environments, as well as 4) providing partitioning and separation of system command&control from system use.
some of the current activity is layering some additional partitioning of system command&control (from system use) onto environments that weren't originally intended to operate in such hostile and adversarial conditions (might be thought of as akin to adding bumpers to horseless carriages).
past postings on this topic:
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#41 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#42 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#43 Intel VPro
https://www.garlic.com/~lynn/2006h.html#44 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#49 Mainframe vs. xSeries
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Mainframe vs. xSeries Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sat, 29 Apr 2006 10:12:28 -0600jmfbahciv writes:
i had given a talk at ISI (usc graduate student seminar plus some of the ietf rfc editor staff) in the late 90s about tcp/ip implementation not inherently taking into consideration various business critical operational considerations.
original unix, being somewhat multi-user oriented, tended to have somewhat greater separation between system command&control and system use ... however, adaptation to desktop environments can create some ambiguities (ambivalence?) regarding simplifying the partitioning/separation needed.
misc. assurance postings
https://www.garlic.com/~lynn/subintegrity.html#assurance
recent postings on this topic:
https://www.garlic.com/~lynn/2006h.html#40 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#41 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#42 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#43 Intel VPro
https://www.garlic.com/~lynn/2006h.html#44 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#49 Mainframe vs. xSeries
https://www.garlic.com/~lynn/2006h.html#53 Mainframe vs. xSeries
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: History of first use of all-computerized typesetting? Newsgroups: alt.folklore.computers Date: Sat, 29 Apr 2006 21:53:31 -0600Brian Inglis writes:
the principles of operation was a much more widely distributed manual ... going to nearly all the mainframe customers ... rather than just cp67/cms customers
the original cms script document processing command had been done early
at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech
which had runoff like "dot" commands (tracing common heritage back to ctss)
and then in 1969 at the science center, "G", "M", & "L" invented GML
(i.e. precursor to sgml, html, xml, etc) ... and gml processing
support was added to the script command ... and you could even intermix
"dot" commands and "gml" tags
https://www.garlic.com/~lynn/submain.html#sgml
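to give a flavor of the intermixing (a purely illustrative fragment, not taken from any actual manual source ... gml starter-set tags like :h1. and :p. alongside script "dot" commands like .sp and .ce):

```
.sp 2
.ce on
:h1.Principles of Operation
.ce off
:p.this paragraph is introduced by a gml paragraph tag,
while the spacing and centering above use script dot commands.
```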
i think part of the early move of the principles of operation to cms script was conditionals ... where the same source file could produce both the full internal architecture "red book" and its subset, the principles of operation document available to customers.
here is online extract from melinda's history
https://www.leeandmelindavarian.com/Melinda/25paper.pdf
that covers ctss and ctss runoff command
http://listserv.uark.edu/scripts/wa.exe?A2=ind9803&L=vmesa-l&F=&S=&P=40304
a lot more on ibm 7094 and ctss
http://www.multicians.org/thvv/7094.html
ctss runoff command, jh saltzer, 8nov1964
http://mit.edu/Saltzer/www/publications/CC-244.html
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Sun, 30 Apr 2006 08:50:08 -0600jmfbahciv writes:
the batch paradigm tended to assume that there was no interaction with a human. specifications tended to be more complex, in part because there was no assumption of a human being there to handle anomalies. there tended to be a whole lot more up-front specification ... allowing the batch system to determine whether all required resources could be committed before even starting the sequence of operations (a lot of operations required a significant portion or all of the available resources ... potentially all available tape drives and nearly all disk space for correct operation). during execution ... handling for almost any kind of possible anomaly could be specified by the application programming ... and if no application-specific handler had been specified ... then go with some system default. this separated the specification of application required resources from the application use of the actual resources.
batch processing application specification didn't translate well into interactive use ... since there was this assumption that the required resources had to be specified in some detail before the execution of the application could even be requested. for any sort of interactive use ... an intermediate layer was typically created that basically intercepted commands and then provided some set of default resource specifications before invoking the application (and possibly defined some number of hooks/exits for various kinds of anomalous processing which might be reflected back to the user).
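the up-front resource commitment described above can be sketched as an all-or-nothing admission check (a toy sketch ... the resource names and numbers are made up, not from any actual system):

```python
# toy sketch of batch-style admission control: a job declares all
# required resources up front; the scheduler only starts it when
# every requirement can be committed at once (avoiding a job that
# stalls mid-run waiting on a tape drive it never declared).

available = {"tape_drives": 6, "disk_mb": 200}

def try_start(job):
    """commit all declared resources atomically, or start nothing."""
    needs = job["needs"]
    if all(available.get(r, 0) >= n for r, n in needs.items()):
        for r, n in needs.items():
            available[r] -= n
        return True          # job admitted, resources reserved
    return False             # insufficient resources, job stays queued

big_sort = {"name": "SORTSTEP", "needs": {"tape_drives": 5, "disk_mb": 180}}
compile_job = {"name": "COMPILE", "needs": {"tape_drives": 2, "disk_mb": 50}}

print(try_start(big_sort))     # True  -- 5 drives, 180mb committed
print(try_start(compile_job))  # False -- only 1 drive left, job waits
```

the point isn't the bookkeeping, it's that admission and use are separated: the system can refuse the whole job before any step runs, instead of a human noticing the anomaly partway through.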
there are sporadic threads in some of the mainframe discussions about being able to use REXX as a "job control language" (the infrastructure for specifying required resources) as opposed to invoking REXX as an application ... which, then, in turn processes a command file.
REXX was originally "REX" internally in the late 70s on vm/cms ...
and sometime later made available as vm/cms product REXX ... and
subsequently ported to a number of other platforms. some number
of posts that make some reference to doing applications in REX
https://www.garlic.com/~lynn/submain.html#dumprx
rexx language association web page
http://www.rexxla.org/
some trivia drift ... two people had done a new, and much improved, FAPL in the early 80s ... one of them was also the person responsible for REXX.
recent post that mentions FAPL
https://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
somewhat random pages from search engine mentioning FAPL
http://www.hps.com/~tpg/resume/index.php?file=IBM
more detailed discussion of fapl
http://www.research.ibm.com/journal/rd/271/ibmrd2701K.pdf
Date: 05/06/85 16:52:04
From: wheeler
re: sna; interesting note in this month's sigops. Somebody at CMU
implemented LU6.2 under UNIX4.2 ...
Rosenberg reported on an implementation of an SNA node consisting
of LU 6.2 and PU T2.1 protocols. (These protocols cover approximately
OSI layers 2 through 6.) The implementation was made on UNIX
4.2. About 85% of the code was generated automatically from the FAPL
meta-description of SNA. The following problems were reported:
1. The protocol code is large, and thus cannot run in the kernel
space. Consequently, communication between user program and the node
(processor executing the SNA code) is more complex and slower than if
the node were part of the kernel. In addition, error recovery proved
tricky.
2. The SNA node must simulate the UNIX sockets, which are full duplex
and place no restriction on the states of the two conversants. The SNA
node uses a half-duplex, flip-flop protocol, where the states of the
two conversants must remain synchronized. To match the two required an
extension to SNA.
The implementation is now complete and is actually used to drive a
3820 printer, which is a SNA device
... snip ... top of post, old email index
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: PDS Directory Question Newsgroups: bit.listserv.ibm-main,alt.folklore.computers Date: Sun, 30 Apr 2006 09:15:15 -0600Shmuel Metz , Seymour J. wrote:
a couple of minor references to a talk i gave at the fall68 share meeting in Atlantic City ...
about both reworking stage-2 sysgen to carefully control disk layout
(optimizing arm seek) and a bunch of pathlength work i had done on cp67
(at the university). stage-2 included a bunch of different steps,
including both iehmove and iebcopy. carefully controlling file layout
required some rework of the sequence of stage-2 steps. carefully controlling
member placement (in a pds) required reordering the move/copy member statements.
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#20 CP/67 & OS MFT14
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: Sarbanes-Oxley Newsgroups: bit.listserv.ibm-main Date: Sun, 30 Apr 2006 09:23:11 -0600Phil Payne wrote:
i was at a financial industry conference (including some of the european
exchanges) in europe late last fall ... where a lot of the (european)
corporate executives spent a lot of time discussing the sarbanes-oxley
issue. recent post discussing the subject ... towards the end
https://www.garlic.com/~lynn/2006.html#12a sox, auditing, finding improprieties
https://www.garlic.com/~lynn/2006h.html#33 The Pankian Metaphor
From: Anne & Lynn Wheeler <lynn@garlic.com> Subject: Re: The Pankian Metaphor Newsgroups: alt.folklore.computers Date: Sun, 30 Apr 2006 10:07:00 -0600jmfbahciv writes:
round-robin can be fair ... assuming a relatively homogeneous workload. it is when you get into all sorts of heterogeneous workload that things get tricky. also round-robin tends to have overhead expense associated with switching (improvements in fairness shouldn't be offset by energy lost in the switching).
the other issue is shortest-job-first. if you have a large queue ... and there are people associated with items in the queue ... then shortest-job-first reduces the avg. waiting time and avg. queue length. you can somewhat see this at grocery checkouts that have some sort of "fast" lane.
one trick is balancing shortest-job-first (to minimize avg. waiting time and queue length) against fairness. a frequent problem, if you don't have really good, explicit shortest-job-first rules ... is that users can scam the system by carefully packaging their work to take advantage of the shortest-job-first rules ... and get more than their fair share ... aka people trying to use the 9-item checkout lane when they have a full basket ... or jumping their position in a line.
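the avg.-waiting-time effect is easy to show with a little arithmetic (a toy single-server sketch ... the "checkout" service times are made up):

```python
# toy illustration: average waiting time at a single server,
# comparing arrival order (first-come-first-served) against
# shortest-job-first order over the same set of jobs.

def avg_wait(service_times):
    """average time each job spends waiting before its own service starts."""
    total_wait, elapsed = 0, 0
    for t in service_times:
        total_wait += elapsed    # this job waited for everyone before it
        elapsed += t
    return total_wait / len(service_times)

jobs = [10, 1, 2, 8, 1]              # in arrival order
print(avg_wait(jobs))                # fcfs:  11.0
print(avg_wait(sorted(jobs)))        # sjf:   3.8
```

same total work either way ... but running the short jobs first means nobody with a one-item basket waits behind the full cart, so the average wait (and hence average queue length) drops.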
a law enforcement officer claimed that at least 30 percent of the population will regularly attempt to circumvent/violate rules and laws to their advantage (of course the population sample he deals with may be biased).
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/