List of Archived Posts

2002 Newsgroup Postings (02/17 - 02/28)

Did Intel Bite Off More Than It Can Chew?
Gerstner moves over as planned
Need article on Cache schemes
Did Intel Bite Off More Than It Can Chew?
Did Intel Bite Off More Than It Can Chew?
Did Intel Bite Off More Than It Can Chew?
medium term future of the human race
Opinion on smartcard security requested
TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
IBM Doesn't Make Small MP's Anymore
Opinion on smartcard security requested
OS Workloads : Interactive etc
OS Workloads : Interactive etc
OS Workloads : Interactive etc
OS Workloads : Interactive etc
Opinion on smartcard security requested
OS Workloads : Interactive etc
OS Workloads : Interactive etc
Did Intel Bite Off More Than It Can Chew?
Did Intel Bite Off More Than It Can Chew?
medium term future of the human race
Opinion on smartcard security requested
Opinion on smartcard security requested
Opinion on smartcard security requested
Opinion on smartcard security requested
the same question was asked in sci.crypt newsgroup
economic trade off in a pure reader system
OS Workloads : Interactive etc
OS Workloads : Interactive etc
Page size (was: VAX, M68K complex instructions)
OS Workloads : Interactive etc
You think? TOM
Did Intel Bite Off More Than It Can Chew?
Did Intel Bite Off More Than It Can Chew?
OS Workloads : Interactive etc
TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
economic trade off in a pure reader system
VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
Wang tower minicomputer
VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
using >=4GB of memory on a 32-bit processor
Beginning of the end for SNA?
Beginning of the end for SNA?
Beginning of the end for SNA?
cp/67 (cross-post warning)
cp/67 addenda (cross-post warning)
cp/67 addenda (cross-post warning)
Moving big, heavy computers (was Re: Younger recruits versus experienced ve
Swapper was Re: History of Login Names
Swapper was Re: History of Login Names
Swapper was Re: History of Login Names
cp/67 addenda (cross-post warning)
Swapper was Re: History of Login Names
VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
Swapper was Re: History of Login Names

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 03:34:28 GMT
"Stephen Fuld" writes:
Sure. And I have met my share of totally incompetent IBMers. But then, to what do you attribute IBM's success versus DEC's failure. I doubt it was the competence of the technical people. I am postulating that it was the competence of the upper management.

according to a 2nd-hand account (that I heard) of testimony at the anti-trust trial ... one of the companies (that since went out of business) testified that in the late '50s every company in the industry knew that the single most important criterion to be successful in the computer business was to have a compatible hardware line across all models.

the corollary was that if you were the only company that got that single, most important criterion correct ... it might even be possible that you could do everything else wrong ... and still beat the competition.

peak employment at ibm possibly approached 500k and the mainframes were (and are) the strategic workhorse of almost every industry. given the company size and the diversification around the world a lot more things would have to go wrong for a much longer period of time.

repeat:
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/96.html#20 1401 series emulation still running?
https://www.garlic.com/~lynn/99.html#184 Clustering systems
https://www.garlic.com/~lynn/99.html#231 Why couldn't others compete against IBM?
https://www.garlic.com/~lynn/2001j.html#33 Big black helicopters
https://www.garlic.com/~lynn/2001j.html#38 Big black helicopters
https://www.garlic.com/~lynn/2001j.html#39 Big black helicopters
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq

even DEC's largest and most successful market position ... is still dwarfed by the overall mainframe market. also, the mini-computer market felt the high-end workstations and then high-end PCs moving upstream into their markets (taking sales as well as cutting profit margins) long before such things started to directly affect mainframes.

random ref:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Gerstner moves over as planned

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Gerstner moves over as planned
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 18 Feb 2002 13:59:43 GMT
Stephen Samson writes:
Phil,

Maybe Gerstner was just lucky to take over IBM at a time when almost any change of direction would have been positive. However, I think he deserves more credit than that. After some tentative bad moves, IBM seems to have gained focus, and the morale level among the survivors is up notwithstanding some bad bean counting in retirement benefits.

However, I have no doubt that if John Akers had been in charge for another year, IBM would now be in little pieces, and a third-rate company if not defunct. You can see the Akers mentality at work in the current troubles of AT&T, under Akers' disciple Mike Armstrong. I fear for Akamai's future with George Conrades in charge for the same reason.

IBM's worst times came when salesmen (other than those named Watson) were in charge. Learson and Cary combined to prevent FS from lighting a real fire in IBM's fortunes; it took too many years for MVS to fill its shoes.

Gerstner had the huge advantage of being a Man From Mars; inbreeding was killing IBM.


FS was much more a case of techies in charge with nearly no practical market and customer experience. The reaction in the aftermath of FS was a swing in the opposite direction, from the techies being totally in charge to the techies having much less say. The loss of bertram was a big blow also.

A lot of advanced technology activities (the stuff bridging the gap between research and product deliverables) just evaporated (in part because of the rush to try and back-fill the product gaps that were supposed to have been filled by FS).

random fs refs:
https://www.garlic.com/~lynn/96.html#24 old manuals
https://www.garlic.com/~lynn/99.html#100 Why won't the AS/400 die? Or, It's 1999 why do I have to learn how to use
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#21 OT?
https://www.garlic.com/~lynn/2000f.html#27 OT?
https://www.garlic.com/~lynn/2000f.html#28 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#56 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#18 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001f.html#30 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2001n.html#65 Holy Satanism! Re: Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#43 hollow files in unix filesystems?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Need article on Cache schemes

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Need article on Cache schemes.
Newsgroups: comp.arch
Date: Mon, 18 Feb 2002 14:06:19 GMT
Martin Knoblauch writes:
I always wonder why "homework" questions are treated so badly here. Not that I don't like the humorous kind of replies :-)

Sure, we should not provide direct answers to problems, but if the poor soul asks for pointers I think they should be treated politely. And I do not think those requests are "inappropriate". In "the old" days you went to the library and did your research. Or you asked people who knew their stuff. Today there is still the library, of course. But there is also the internet as a source of information. So, one should "google" first, but asking for references on a topic-specific group seems a valid use of the information pool to me.


how many beginning CS students are there in the world each semester?

how many posts would that be if they were even just limited to one such question post per student per semester?

misc. replacement algorithm posts (virtual memory as well as other "caching" implementations)
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 14:39:49 GMT
Anne & Lynn Wheeler writes:
peak employment at ibm possibly approached 500k and the mainframes were (and are) the strategic workhorse of almost every industry. given the company size and the diversification around the world a lot more things would have to go wrong for a much longer period of time.

somewhat related article
http://news.com.com/2100-1001-839483.html
IBM loves mainframes because sales of the systems typically bring years of revenue from maintenance and software license agreements-- just the type of recurring revenue that helped carry the company through the current economic downturn comparatively unscathed.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 18:23:42 GMT
"Walter Rottenkolber" writes:
Works the other way too. There are companies that don't want to take on the hassle of computer purchase and maintenance. It becomes a simple current expense rather than a personnel and depreciation problem.

On DDJ TeleWeb, a fellow from IBM explained the commitment to Linux for IBM's low end systems. They figure to gain more than lose from open source programs as a way to expand the utility of their systems.


there are a lot of companies that don't ... but there is also significant profit margin and revenue for providing such products for the companies that do.

remember ... the genesis of a lot of ibm's current (software) product offerings was during the era when everything was open source (hasp, cp/67, vm/370, etc) and some number of them were actually written/developed at customer sites (hasp, cics, etc). It seemed that the industry then went thru a long period of ossification where, rather than a need for agile & rapid advances (which open source promotes) ... it was an era of consolidation and protecting installed turf.

There were lots of vocal customers during the late '70s & early '80s complaining about the transition to OCO (object code only) ... as opposed to the early convention of open and freely distributed source.

I think there was a line in the ibm mainframe ng about determining where the profit margins are & where vendors can establish product differentiation (and operating systems and proprietary offerings can be an inhibitor in some of these market segments).

In the 50s, 60s, etc ... there was a lot of attention placed on hardware compatibility. In the 80s that started to move upstream into operating system compatibility and interoperability ... at least for some market segments. For those market segments where agility to move quickly to different vendor hardware products matters ... operating system compatibility and interoperability is a significant factor (not necessarily hardware compatibility). Open source is making a "come-back" and playing more & more of a significant role in these market segments.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 18 Feb 2002 19:17:03 GMT
"Walter Rottenkolber" writes:
On DDJ TeleWeb, a fellow from IBM explained the commitment to Linux for IBM's low end systems. They figure to gain more than lose from open source programs as a way to expand the utility of their systems.

the other way of looking at it ... is that a lot of the "standardization" efforts are a commoditization effort ... removing many of the differentiators/proprietary features, allowing customers to easily/trivially jump from one vendor to another.

in a market segment that has significant orientation towards standardized commoditization ... vendors have to look for other places with regard to profit margin.

very thin profit margins may be very attractive to some customer segments ... however if the profit margin gets too thin, the corporate entity may not be able to continue to exist. This can create downside effects for some customers in that market segment. On the other hand, it could also be viewed as darwinism in action.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

medium term future of the human race

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: medium term future of the human race
Newsgroups: comp.society.futures
Date: Wed, 20 Feb 2002 18:09:52 GMT
Malcolm McMahon writes:
I know a lot of people are rather looking forward, in fantasy, to a nice dark-age but I rather doubt it. First off, the prophets of doom have been telling us about the exhaustion of oil and gas "real soon now" for 40-50 years. There's still lots of proven reserves and new ones being discovered.

Second off, the running out isn't going to be sudden, it will take the form of steadily increasing prices. There are a good number of alternate power sources. The reason we're using oil and gas is largely because they are the cheapest and most convenient.

As usual the human race doesn't plan in advance. It responds to shifts in economics. As the oil and gas prices start to ramp up you'll see all sorts of other energy sources becoming economically competitive. Bio-fuel. Geothermal. Deep ocean thermal. Wind. Nuclear. (with hydrogen powered vehicles unless there are major improvements in battery technology).


note that a lot of the green revolution has been dependent on high usage of petro-chemicals (aka grains/plants with much higher production dependent on heavy use of petro-chemical fertilizer) ... the availability of oil not only affects transportation but also food production (from growing thru delivery). cheap petro has led to cheap(er) & more plentiful food.

some areas of the world that had been subject to periodic severe starvation got some respite with the green revolution ... until their population growth caught up (again) with production. because of the much larger population base with significantly larger dependency on petro-chemicals for food production, changes in petro availability/prices not only affect general economic stability because of transportation costs but can also have a significant downside effect on food availability (in some situations where there may be little supply elasticity already).

rising prices could put the availability of petro-chemicals out of reach of some uses in various parts of the world (gasoline for car use going from $1-$2/gal to maybe $10/gal or more might put a crimp in some people's recreational transportation use ... but it could also make it totally unavailable for others).

random refs:
https://www.garlic.com/~lynn/2001d.html#25 Economic Factors on Automation
https://www.garlic.com/~lynn/2001d.html#29 Economic Factors on Automation
https://www.garlic.com/~lynn/2001d.html#37 Economic Factors on Automation
https://www.garlic.com/~lynn/2001d.html#39 Economic Factors on Automation

the un population URL
http://www.un.org/popin/

world population trends
http://www.un.org/popin/wdtrends.htm
world population reached 6.1 billion in mid-2000 and is currently growing at an annual rate of 1.2% or 77 million people per year. Six countries account for half the annual growth.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opinion on smartcard security requested

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Wed, 20 Feb 2002 20:19:16 GMT
"norman" writes:
Hi. Recently I was asked in a meeting about using smart cards for pension fund payouts. I was not given the details, but it did seem that the system was secured by one cryptographic key which would be on every card (no, it's not my design). I know for sure that techniques (at a cost) exist to get that key. My immediate answer was "no", as if the system relies ONLY on a smart card, then the data is worth getting (in other words, just cracking one card's keys would give the ability to crack and defeat the whole system as it was explained to me). Let's assume the information is valuable (guess $100 000) and that the keys are on every card. May I ask for this group's opinion on the security of such smart cards (the system is totally another matter!!!).

(FYI) my answer was that a barcode (the 2d kind) that was encrypted with a suitably long key was in fact far more secure than what can be done on a smart card.


are the cards used for encryption and information hiding for protection?

or are the cards used for authentication for valid transactions?

is it a shared-secret cryptographic key or a non-shared-secret cryptographic key system (aka an asymmetric key or public key system)?

if a shared-secret cryptographic key, is it the same key for the whole infrastructure ... implying that compromise of that single key puts the whole infrastructure at risk ... aka systemic risk.

systemic risk failures putting the infrastructure at risk can also apply to some of the asymmetric key implementations like PKIs where there may be certificates issued under the control of a root signing key (either directly or indirectly).

in a per-account-specific transaction authentication scheme (where the cryptographic key is used to authenticate valid transactions) ... individual cards per account with unique public/private key pairs can avoid the PKI systemic risk failure modes by just registering the associated public key with each specific account.

In the smart-card authentication scheme (assuming elimination of the systemic risk failure modes as per above), the issue is then whether it is one-, two-, or three-factor authentication ... i.e. one or more of the following:
something you have
something you know
something you are

a smartcard represents something you have, and a (single-factor) infrastructure can be compromised by stealing the card. Stronger authentication is possible by using something you have in conjunction with something you know or something you are.

it is possible to find chips where the cost of extracting the private key (in an asymmetric key authentication infrastructure) can approach or exceed your "at risk" value. furthermore, the elapsed time to perform such extraction can exceed the nominal expected interval within which a card is reported lost or stolen (costly to extract, but also a race to beat having the use of the private key suspended). In this case, the "system" isn't "totally another matter" because being able to suspend use of a specific card/key is part of the overall system security.
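
as a toy illustration of the per-account registration idea, here is a minimal sketch in python (using the pyca "cryptography" package; the account/transaction structure and names are hypothetical, not any particular product): the account record simply carries the card's public key, and a transaction is accepted only if its signature verifies against that registered key ... no certificates or infrastructure-wide shared secrets involved.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

accounts = {}   # account number -> registered public key (the "account authority" role)

def register(account_no, public_key):
    # the key is bound directly to the account record; no certificate needed
    accounts[account_no] = public_key

def verify_transaction(account_no, message, signature):
    key = accounts.get(account_no)
    if key is None:
        return False
    try:
        key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False    # forged or altered transaction

# card side: unique key pair per card/account; the private key never leaves the card
card_private = ec.generate_private_key(ec.SECP256R1())
register("acct-1234", card_private.public_key())

txn = b"pay 100.00 to merchant 42"
sig = card_private.sign(txn, ec.ECDSA(hashes.SHA256()))
print(verify_transaction("acct-1234", txn, sig))            # True
print(verify_transaction("acct-1234", b"pay 999.00", sig))  # False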

random 3-factor authentication
https://www.garlic.com/~lynn/aadsmore.htm#schneier Schneier: Why Digital Signatures are not Signatures (was Re :CRYPTO-GRAM, November 15, 2000)
https://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
https://www.garlic.com/~lynn/aadsm7.htm#rhose12 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm7.htm#rhose13 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm7.htm#rhose14 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm7.htm#rhose15 when a fraud is a sale, Re: Rubber hose attack
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm10.htm#bio6 biometrics
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/2000f.html#65 Cryptogram Newsletter is off the wall?
https://www.garlic.com/~lynn/2001c.html#39 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001g.html#1 distributed authentication
https://www.garlic.com/~lynn/2001g.html#11 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication
https://www.garlic.com/~lynn/2001j.html#44 Does "Strong Security" Mean Anything?
https://www.garlic.com/~lynn/2001j.html#49 Are client certificates really secure?
https://www.garlic.com/~lynn/2001j.html#52 Are client certificates really secure?
https://www.garlic.com/~lynn/2001k.html#34 A thought on passwords
https://www.garlic.com/~lynn/2001k.html#61 I-net banking security

random systemic risk
https://www.garlic.com/~lynn/aadsmail.htm#variations variations on your account-authority model (small clarification)
https://www.garlic.com/~lynn/aadsmail.htm#complex AADS/CADS complexity issue
https://www.garlic.com/~lynn/aadsmail.htm#parsim parsimonious
https://www.garlic.com/~lynn/aadsmail.htm#mfraud AADS, X9.59, security, flaws, privacy
https://www.garlic.com/~lynn/aadsmail.htm#vbank Statistical Attack Against Virtual Banks (fwd)
https://www.garlic.com/~lynn/aadsm2.htm#risk another characteristic of online validation.
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm2.htm#strawm3 AADS Strawman
https://www.garlic.com/~lynn/aadsm3.htm#cstech7 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aepay2.htm#fed Federal CP model and financial transactions
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/aepay2.htm#aadspriv Account Authority Digital Signatures ... in support of x9.59
https://www.garlic.com/~lynn/aadsm10.htm#smallpay2 Small/Secure Payment Business Models
https://www.garlic.com/~lynn/98.html#41 AADS, X9.59, & privacy
https://www.garlic.com/~lynn/99.html#156 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#238 Attacks on a PKI
https://www.garlic.com/~lynn/99.html#240 Attacks on a PKI
https://www.garlic.com/~lynn/2000.html#36 "Trusted" CA - Oxymoron?
https://www.garlic.com/~lynn/2001c.html#34 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001c.html#45 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001f.html#35 Security Concerns in the Financial Services Industry
https://www.garlic.com/~lynn/2001n.html#54 The demise of compaq

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TOPS-10 logins (Was Re: HP-2000F - want to know more about it)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Thu, 21 Feb 2002 16:07:38 GMT
"Douglas H. Quebbeman" writes:
Unfair comparison of CP/67 to Multics running on the ring-challenged GE645 in use at the time?

it wasn't so much a comparison ... as an observation of the operation of two different systems being developed in the same building (on different floors), starting essentially at the same time ... by similar groups of people ... both with members that had previously worked on 7094 ctss. supposedly the observation was (at least) one of the things that prompted the work on the multics fast file system.

the cp/67 work had started as cp/40 (on a 360/40), where the group had modified the machine & built their own virtual memory relocation hardware. When a 360/67 became available (which had virtual memory relocation hardware standard) ... cp/40 was ported to 360/67 and renamed cp/67 (the virtual memory hardware for 360/40 was significantly different than the standard virtual memory hardware on the 360/67).

i don't know whether the significantly larger number of people working on multics helped compensate for the "ring-challenged ge645"(?) or not.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

IBM Doesn't Make Small MP's Anymore

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM Doesn't Make Small MP's Anymore
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 21 Feb 2002 16:40:58 GMT
EBIE@PHMINING.COM (Eric Bielefeld) writes:
How quick we forget. I remember MPs, and APs. The MP, or MultiProcessor had 2 processors. Channels were attached to one processor or the other, so you usually configured I/O gear with 1 channel on each processor. APs, or Attached Processors had one processor that could do I/O, and one that couldn't. We used to have a 370-158 with an attached processor. The AP took us from about 1 Mip to 1.8 Mips. The bigger 370 machines could be made into a either an AP or MP. Also, the 3084 was an MP of 2 Dyadic machines (I think). I can't remember just what made a machine dyadic.

prior to the 3081, multiprocessors were two independent uniprocessors that could be configured such that they were tied together and shared a single linear real memory address space. They could also be configured as two independently operating uniprocessing systems.

370 cache machines, when operating in multiprocessor mode, also had a performance degradation of about 10-15 percent because of cross-cache synchronization effects.

the 3081 was a dyadic (to distinguish it from a multiprocessor that could be configured as two independent uniprocessors) ... a two-processor system that was not partitionable into two independently operating uniprocessors (the two processors came in the same box and shared a lot of common components).

originally the 308x was only going to be a dyadic machine along with two 3081s configurable as a multiprocessor 3084 (i.e. a 3084 was partitionable into two independent 3081s).

The "problem" was that TPF didn't have SMP support, many/most TPF installations were operating at 100 percent cpu utilization and needed maximum sustained CPU processing power. As a result, there was eventually a 3083 uniprocessor (some components of the 2nd 3081 processor disabled and/or not present). With the elimination of the cross-cache synchronization, the single 3083 processor ran at about 15 percent higher mip rate than the individual 3081 processors.

The 158-3 raw MIP rate was approx. 1mip ... two 158-3 processors, either in MP or AP configuration, were a raw aggregate of 1.8mips (because of the cross-cache synchronization slow-down).

Because of additional MVS operating system SMP overhead ... the effective delivered thruput was about 1.4-1.5 times that of a uniprocessor (i.e. the combined effects of the cross-cache synchronization overhead and the operating system cross-machine synchronization overhead), or about equivalent to 1.4-1.5mips.

The early VM/370 release 3 SMP support that I installed at HONE (official VM/370 multiprocessor support didn't come out until VM/370 release 4) on a 158-3 AP actually got better than two times the thruput of a uniprocessor 158-3. This was a sleight of hand because of

1) extremely minimal inline pathlength for SMP support
2) both i/o and all interrupts on a single processor
3) some sleight of hand that tended to keep processes with frequent i/o requests & I/O wait on the processor with channels

The processor with the i/o channels clocked in around .9 mips (i.e. the cross-cache synchronization degradation from 1mip). The processor w/o the i/o channels clocked in at 1.2-1.5mips because of the improved cache hit effects (i/o interrupts were very detrimental to high cache hit ratios). The careful operating system pathlength implementation for SMP support kept that degradation in the couple-percent range.
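
to make the arithmetic explicit, here is a rough back-of-the-envelope in python using the numbers above (the 0.80 MVS "SMP efficiency" factor is just an assumed value implied by the 1.4-1.5 times delivered-thruput figure, not a measured number):

uni_mips = 1.0                 # 158-3 uniprocessor, approx 1 MIPS
cache_penalty = 0.10           # ~10-15% cross-cache synchronization slow-down per processor

raw_mp_mips = 2 * uni_mips * (1 - cache_penalty)    # ~1.8 MIPS aggregate for the two-way MP/AP

mvs_smp_efficiency = 0.80      # assumed: implied by the 1.4-1.5x delivered-thruput figure
mvs_effective_mips = raw_mp_mips * mvs_smp_efficiency   # ~1.44 MIPS delivered under MVS

# the HONE VM/370 AP case: the processor handling i/o drops to ~0.9 MIPS while the
# interrupt-free processor gains cache hits (~1.2-1.5 MIPS), so the pair can exceed 2x
hone_ap_mips = 0.9 + 1.3

print(raw_mp_mips, mvs_effective_mips, hone_ap_mips)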

random refs:
https://www.garlic.com/~lynn/96.html#0b Hypothetical performance question
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#92 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#65 oddly portable machines
https://www.garlic.com/~lynn/2000c.html#9 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#61 TF-1
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001b.html#37 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#62 z/Architecture I-cache
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001f.html#73 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#18 I hate Compaq
https://www.garlic.com/~lynn/2001n.html#86 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opinion on smartcard security requested

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Thu, 21 Feb 2002 17:05:58 GMT
Nicol So writes:
The following might interest you. (It's a message from Bill Stewart reporting on a fast DPA/DFA smartcard cracking demo that he saw at RSA).

http://www.inet-one.com/cypherpunks/current/msg00177.html


note that DPA/DFA characteristics are quite different for RSA public/private key and EC/DSA public/private key.

there are chips rated at EAL-4 high or better and FIPS140-2 or higher.

it is also possible to buy no-security chips at lower prices (at least sometimes you get what you pay for).

the issue can be viewed from two perspectives

1) an EAL-4high/fips140-2 hardware token implementing EC/DSA for authentication with PIN &/or biometric (2-factor or 3-factor) and the cost to compromise that system, vis-a-vis say a simple password scheme and the cost to compromise that system. It doesn't necessarily mean that either system is impossible to compromise ... the issue is whether the difference in risk outweighs the difference in expense, aka is the reduction in risk (going from simple password to hardware token) greater than the cost of going from simple password to 2/3-factor authentication.

2) does the overall system reduce risk ... i.e. in a server-oriented environment ... with no shared-secret global keys, unique keys per hardware token, eal-4high/fips140-2 chips, and pin/biometric required for token operation ... does the theft & probing of a chip cost more than the expected return, given the probability that the attacker can complete the extraction and perform any reasonable fraudulent transaction in a period less than the typical lost/stolen reporting interval.
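
a toy expected-loss comparison in the spirit of the two perspectives above (python; all the figures are made-up placeholders ... plug in real estimates for an actual analysis):

def expected_fraud_loss(attack_cost, value_at_risk, p_success_before_revocation, attempts_per_year):
    # attackers only bother when the expected take exceeds the cost of the attack
    expected_take = value_at_risk * p_success_before_revocation
    if expected_take <= attack_cost:
        return 0.0                      # attack is uneconomic
    return attempts_per_year * (expected_take - attack_cost)

# simple password scheme: cheap to attack, often succeeds before anyone notices
password_loss = expected_fraud_loss(attack_cost=10, value_at_risk=100_000,
                                    p_success_before_revocation=0.05, attempts_per_year=50)

# hardware token: extraction is expensive and usually loses the race with lost/stolen reporting
token_loss = expected_fraud_loss(attack_cost=50_000, value_at_risk=100_000,
                                 p_success_before_revocation=0.01, attempts_per_year=50)

extra_deployment_cost = 20_000          # hypothetical cost of moving to 2/3-factor tokens
print(password_loss, token_loss, (password_loss - token_loss) > extra_deployment_cost)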

there are various kinds of pin-entry exploits ... especially when using a dumb reader and PC keyboard entered PIN. The EU FINREAD standard (european union standard for readers used in financial transactions) addresses many of these issues.

again, it is important that overall system vulnerabilities be investigated. in some cases it is possible to mitigate excessive risk in one area by compensating procedures in another area.

random EU finread refs:
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
https://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
https://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
https://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 18:02:14 GMT
"Rupert Pigott" writes:
There you go again, you're attempting to be patronising instead of explaining your point. Hell, I don't expect you to be "nice" or "helpful", this is USENET ! However that kind of response can't be doing your reputation any good.

As it happens I have spent a substantial amount of time characterising the performance of OS's under various workloads. Part of the work behind that consisted of reading as much as I could about OS design, in particular VM systems and schedulers. This research was required for the task at hand, it was also very interesting. :)


cambridge science center (4th floor, 545 tech. sq) had done a lot of work on performance profiling and workload profiling ... as well as optimal algorithms for managing resources ... a lot of it became the basis for the transition from a performance tuning culture to a capacity planning culture.

in the resource management area ... a lot of work was done in dynamic adaptive controls. basically the underlying infrastructure managed resource consumption goals. Layered on top of the goal-oriented resource consumption management were various policy oriented facilities ... like "fair share" policy.

two specific cases related to the adaptive nature:

1) prior to official release of the resource manager ... the implementation was used extensively inside the company. there was also a special deal cut with AT&T longlines to provide them a copy (this was in the days when open source was standard ... before the advent of OCO ... and the current situation making a big deal of providing open source). The copy disappeared into AT&T longlines ... and nearly ten years later somebody responsible for the account tracked me down because longlines was still using it, having migrated it to newer & newer generations of machines. The remarkable thing was that the dynamic adaptive stuff appeared to have adapted not only to a wide range of workloads, pure interactive, mixed interactive & batch, pure batch, etc ... but also to evolving hardware that represented nearly two orders of magnitude increase in available resources (real storage size, cpu processing power, etc).

2) for the official release of the resource manager, an official set of carefully calibrated configuration and workload benchmarks was performed that took three months elapsed time (to verify that the dynamic adaptive characteristics were actually working). The first thousand such benchmarks were specified to cover greater than the expected operational configuration & workload space that might be found in a large diverse customer base. However, as part of the automated benchmarking methodology, effectively after the first thousand "specified" benchmarks ... some dynamic adaptive features were put into the benchmarking specification methodology to try and search for anomalous operating points (aka configuration and/or workload). the benchmarks not only validated the dynamic adaptive nature of the implementation but also the ability to implement various policy specifications, fair-share, non-fair-share, multiples/fractions of fair-share for specific processes, absolute percentage, etc ... across a wide range of hardware configurations and workloads.
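
as a toy sketch (python) of goal-oriented "fair share" policy sitting on top of dynamic feedback ... illustrative only, not the actual vm/370 resource manager algorithms:

class FairShareScheduler:
    def __init__(self):
        self.users = {}    # user -> {"share": policy weight, "used": decayed cpu consumption}

    def add_user(self, name, share=1.0):
        self.users[name] = {"share": share, "used": 0.0}

    def pick_next(self):
        # dispatch the user furthest below its policy share of total consumption
        total_share = sum(u["share"] for u in self.users.values())
        total_used = sum(u["used"] for u in self.users.values()) or 1.0
        def deficit(item):
            _, u = item
            return (u["used"] / total_used) - (u["share"] / total_share)
        return min(self.users.items(), key=deficit)[0]

    def charge(self, name, cpu_seconds, decay=0.95):
        # decay old consumption so the feedback keeps adapting to the current workload
        for u in self.users.values():
            u["used"] *= decay
        self.users[name]["used"] += cpu_seconds

sched = FairShareScheduler()
sched.add_user("batch", share=1.0)
sched.add_user("interactive", share=2.0)    # policy: interactive gets 2x fair share
for _ in range(6):
    who = sched.pick_next()
    sched.charge(who, 0.1)
    print(who)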

random refs:
https://www.garlic.com/~lynn/94.html#52 Measuring Virtual Memory
https://www.garlic.com/~lynn/95.html#1 pathlengths
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/95.html#14 characters
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#180 The Watsons vs Bill Gates? (PC hardware design)
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#75 Mainframe operating systems
https://www.garlic.com/~lynn/2000b.html#74 Scheduling aircraft landings at London Heathrow
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2001b.html#15 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#16 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#74 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001b.html#79 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2001e.html#51 OT: Ever hear of RFC 1149? A geek silliness taken wing
https://www.garlic.com/~lynn/2001e.html#64 Design (Was Re: Server found behind drywall)
https://www.garlic.com/~lynn/2001f.html#48 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#56 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#18 checking some myths.
https://www.garlic.com/~lynn/2001l.html#9 mainframe question
https://www.garlic.com/~lynn/2001l.html#32 mainframe question
https://www.garlic.com/~lynn/2002b.html#28 First DESKTOP Unix Box?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002b.html#55 "Fair Share" scheduling

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 18:28:39 GMT
Anne & Lynn Wheeler writes:
2) for the official release of the resource manager, an official set of carefully calibrated configuration and workload benchmarks was performed that took three months elapsed time (to verify that the dynamic adaptive characteristics were actually working). The first thousand such benchmarks were specified to cover greater than the expected operational configuration & workload space that might be found in a large diverse customer base. However, as part of the automated benchmarking methodology, effectively after the first thousand "specified" benchmarks ... some dynamic adaptive features were put into the benchmarking specification methodology to try and search for anomalous operating points (aka configuration and/or workload). the benchmarks not only validated the dynamic adaptive nature of the implementation but also the ability to implement various policy specifications, fair-share, non-fair-share, multiples/fractions of fair-share for specific processes, absolute percentage, etc ... across a wide range of hardware configurations and workloads.

one of the reasons i really like john boyd and his theory of performance envelopes and OODA-loop (feedback loops) ... although in slightly different context:
https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 20:33:44 GMT
"Rupert Pigott" writes:
That's very impressive. How "configurable" is the resource manager ? I imagine that certain installations might demand customisation of the manager's behaviour.

Are you aware of any published papers on this resource manager ?


configurable was one of the jokes. there was the product documentation and a number of presentations/papers at SHARE (user group) meetings.

policies were specifiable ... but the resource manager did dynamic adaptation based on efficiently measuring lots of things.

prior to shipping, a review of other resource manager products at the time indicated the prevalent use of large numbers of tuning knobs ... and the state-of-the-art at the time was significant random-walk activity twiddling performance tuning knobs this way and that. Huge numbers of reports from the period were published on the results and recommendations from these random walks.

in any case, marketing decreed that because all other resource manager products had large numbers of tuning knobs ... giving system programmers lots of job security ... this resource manager also needed tuning knobs.

ok, so a number of tuning knobs were implemented. they were fully documented, the algorithms published on how the math worked ... and of course all source was delivered/published with the product.

ok, what is the joke?

most resource managers at the time were static implementations and used (human-managed) tuning knobs to approximate dynamic adaptation to configuration and workload.

So why do you need such tuning knobs if the system is constantly dynamically adapting to actual measured configuration and workload?

Given any dynamics at all, workload variation over time (hours, days, minutes, etc) ... manually set tuning knobs will tend to be a least common denominator for average observed workloads over extended periods of time.

ok, so how do you actually implement effective dynamic adaptive controls and also install tuning knobs that appear to do something based on documentation, formulas, and code inspection?

Well, in a 4-space environment with dynamic adaptive controls and extensive feedback operation ... it is possible to set degrees of freedom for different coefficients. If the dynamic adaptive feedback controls have a much greater degree of freedom than the tuning knob coefficients ... it is possible for the system to compensate for human meddling.

Now since most of the resource managers of the era didn't actually implement tightly controlled resource distribution, most systems appeared to operate with lots of anomalous activity where tuning knobs had little or no observable effect. This resource manager had tightly controlled resource distribution rules ... as calibrated by extensive benchmarking tests over a wide range of workloads and configurations. However, it did share the characteristic that the tuning knobs frequently appeared to have little or no effect ... not because the actual resource allocation controls were not effective ... but because the system decided that the tuning knob values should be compensated for.
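
a toy illustration (python) of the joke: when the feedback loop has more degrees of freedom than the operator-visible knob, it simply washes the knob setting out ... purely illustrative, not the actual resource manager code:

def run(knob_setting, target=0.25, iters=50, gain=0.5):
    # the knob only seeds the control variable; the feedback loop keeps correcting
    # until the observed resource share matches the policy target
    control = knob_setting
    observed = 0.0
    for _ in range(iters):
        observed += 0.3 * (control - observed)   # the system responds to the effective setting
        control += gain * (target - observed)    # adaptive correction owned by the system
    return observed

print(round(run(knob_setting=0.05), 3))   # ~0.25 with the knob turned way down
print(round(run(knob_setting=0.90), 3))   # ~0.25 with the knob turned way up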

While the resource manager showed a large customer install base ... it wasn't exactly an academic hit. I used to build custom modified operating systems for internal corporate distribution (in addition to furnishing code for customer production distribution). I sometimes joke that the number of internal corporate installations that I explicitly built, distributed, and supported was frequently larger than the total customer install base of some better-known time-sharing systems (i.e. not comparing total numbers of customer installations, but comparing the number of internal corporate installations that I personally supported against some other systems' total number of customer installations).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 20:38:08 GMT
"Rupert Pigott" writes:
I wondered where I'd seen his name before... F-16 !

head of lightweight fighter plane development ... not just F-16, but F-15, F-18, others. F20/tigershark even more so.

and of course ... the fight to change how america fights ... the (then) upcoming crop of captains & majors being referred to as boyd's jedi knights.

https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opinion on smartcard security requested

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Thu, 21 Feb 2002 20:54:48 GMT
daw@mozart.cs.berkeley.edu (David Wagner) writes:
I have no first-hand knowledge, but what I am told by those more knowledgeable in smartcard hacking than I is that they believe they can reverse-engineer just about every tamper-resistant device on the market, except for IBM 4758 (which is not really a smartcard and is not cheap). Anyway, in practice it seems that if you want a smartcard cheap enough to deploy en masse, it is unlikely it will be tamper-resistant -- or so I am told.

Again, please remember that I have no first-hand knowledge and am only repeating the claims of other better-informed sources. However, based on what I've seen described in the public literature, I'm inclined to believe them.


re:
https://www.garlic.com/~lynn/x959.html#aads aads chip strawman

We've been working on this on and off for nearly four years. There are chips that have very impressive tamper-resistant characteristics (eal4-high evaluation). somebody made the observation to me 30 years ago that chips go to $.10/chip in quantity ... given sufficiently large quantity. also, a lot of the hardware token/smartcard cost is in the post-FAB processing ... not the actual cost of the chip.

so the two places to address "en masse" costs are

1) sufficiently large quantity
2) all post-fab processing

Note that item one has a corollary ... one of the ways to achieve large quantity is to have something that represents wide-spread applicability.

There are two current ways of achieving wide-spread applicability

a) large general purpose, supporting everything, including the kitchen sink. This has somewhat dynamic offsetting forces, since "large general purpose" also implies more expensive ... aka the increased complexity has to be increasing market size faster than the increased complexity is increasing chip cost (although note that there isn't a straight linear relationship between the two).

b) simple (KISS) operation that addresses a wide-spread, well-defined business requirement. While "a" isn't synergistic, it turns out that this approach ("b") has the advantage that being simple reduces nearly all cost areas while increasing market size by addressing a specific well-defined business requirement.

random strawman discussions:
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm2.htm#strawm2 AADS Strawman
https://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech9 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#cstech10 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aepay3.htm#passwords Passwords don't work
https://www.garlic.com/~lynn/aepay3.htm#x959risk1 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aadsm9.htm#carnivore2 Shades of FV's Nathaniel Borenstein: Carnivore's "Magic Lantern"
https://www.garlic.com/~lynn/aadsm10.htm#keygen Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/99.html#170 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#189 Internet Credit Card Security
https://www.garlic.com/~lynn/2000c.html#2 Financial Stnadards Work group?
https://www.garlic.com/~lynn/2001c.html#73 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#5 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001n.html#94 Secret Key Infrastructure plug compatible with PKI
https://www.garlic.com/~lynn/2002.html#39 Buffer overflow

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 21:18:22 GMT
Anne & Lynn Wheeler writes:
Well, in a 4-space environment with dynamic adaptive controls and extensive feedback operation ... it is possible to set degrees of freedom for different coefficients. If the dynamic adaptive feedback controls have a much greater degree of freedom than the tuning knob coefficients ... it is possible for the system to compensate for human meddling.

I had this somewhat embarrassing incident a couple years ago. My wife and I had done the HA/CMP product and were running around the world doing marketing presentations (8 days in europe, a different country nearly every day, sometimes five parallel executive presentations a day ... she giving five at the same time I'm doing five; 8 days in asia/pacific, etc).

So we are riding up the elevator in the HK "tinker-toy" bank building, and some young(er) person in the back says: are you "lynn wheeler" of the "wheeler scheduler"? So what do you say? He then says that he studied it as an undergraduate at xyz university. So what do I say? that nearly 20 years later nobody realized the joke about dynamic adaptive feedback, operating in 4-space, and degrees of freedom?

random hacmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

random scheduler
https://www.garlic.com/~lynn/subtopic.html#fairshare

somewhat related virtual memory algorithm
https://www.garlic.com/~lynn/subtopic.html#wsclock

both the original fair share and clock stuff were done when I was an undergraduate. the wsclock stuff turned out to be an issue over ten years later when somebody was getting a stanford PhD on essentially the same work. The problem was that about the time I had done the original work as an undergraduate, there were a number of papers published on a different technique. The clock stuff I did was significantly better and got wide-spread commercial system deployment; however, as mentioned, it didn't really leak well into the academic world. There was fairly strong opposition to the stanford PhD because of the alternative approach that had been published in the '60s. Whether it was significant or not ... I did manage to dredge up some old A/B comparisons and provide them ... and the PhD was finally awarded.
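
for reference, a minimal sketch (python) of the basic "clock" page-replacement idea ... the generic textbook technique, not the specific global-LRU/wsclock implementation being discussed:

class ClockReplacer:
    def __init__(self, nframes):
        self.frames = [None] * nframes        # page resident in each frame
        self.referenced = [False] * nframes   # per-frame reference bit
        self.hand = 0

    def touch(self, page):
        # reference a page; return the page evicted on a miss (None otherwise)
        if page in self.frames:
            self.referenced[self.frames.index(page)] = True
            return None
        # sweep the hand, clearing reference bits, until an unreferenced frame is found
        while self.referenced[self.hand]:
            self.referenced[self.hand] = False
            self.hand = (self.hand + 1) % len(self.frames)
        evicted = self.frames[self.hand]
        self.frames[self.hand] = page
        self.referenced[self.hand] = True
        self.hand = (self.hand + 1) % len(self.frames)
        return evicted

r = ClockReplacer(3)
for p in [1, 2, 3, 1, 4, 1, 5]:
    r.touch(p)
print(r.frames)    # [4, 1, 5]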

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Thu, 21 Feb 2002 22:14:45 GMT
Charles Richmond writes:
I like F-16's!!! When taking off, they can come off the end of the runway, and go straight up!!! Remarkable...
I like F-16's!!! When taking off, they can come of the end of the runway, and go straight up!!! Remarkable...

boyd not only designed fighter planes ... he also flew them ... see "40 second boyd" on page 3 of:
https://web.archive.org/web/20020102104321/http://www.codeonemagazine.com/archives/1997/articles/jul_97/july2c_97.html

allowing somebody to start directly on his tail ... he could reverse the situation in less than 40 seconds.

also 40-second boyd in "genghis john" article:
http://radio-weblogs.com/0107127/stories/2002/12/23/genghisJohnChuckSpinneysBioOfJohnBoyd.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 21 Feb 2002 23:18:12 GMT
name99@mac.com (Maynard Handley) writes:
for their computing needs no matter how fast it was, just like the bulk of the population do not consider riding a horse to work no matter how cheap or environmentally sound or whatever it may be---a horse is simply so far divorced from their needs that the fact that it is also a "mode of transportation" is completely irrelevant,

ot warning :-)

i seem to remember some article from the late 1800s about severe environmental pollution in NYC caused by all the horses. i got the impression that on a per-unit basis (horses vis-a-vis automobiles), if you had a couple hundred thousand horses up & down the streets of some large city ... it would be significantly more environmental pollution than an equivalent number of automobiles.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 21 Feb 2002 23:33:20 GMT
name99@mac.com (Maynard Handley) writes:
One can argue usefully, perhaps, about why Alpha failed (if it did) against Sun or SGI or maybe even RS/6000, but to argue about why it failed against Intel is simply moronic. So if the argument IS against Sun and SGI, the very first question is --- did it fail? Were the returns, sales, costs etc against these rivals vastly out of line? Maybe the problem is simply that space is not a very profitable space to be in, being attacked from both below by better PCs and above by an aggressive IBM evangelizing cheaper mainframes.

i believe the profit margins in the mini-computer market had historically been much better than in either the workstation or personal computing market segments. For one thing, the mini-computer market had tended to be proprietary, which tended to bring better ROI (assuming that you could get market share).

the mini-computer market got severely squeezed from lots of sides: workstations/PCs from below, some mainframes from above, and the whole issue of proprietary/non-proprietary.

the issue in the mini-computer market would have been both component hardware cost structure as well as organizational cost structures (somewhat the line about organizations expanding to fill the available ROI). as/400 in the mini-computer market seems to have done a bit of adaptation, using powerpc chips to address various hardware component cost structures and presumably doing various organizational and system things needed to deal with changes in the ROI profile in a much more non-proprietary and price-competitive market.

It seemed like alpha was directed at the vax solution (in a somewhat similar way to as/400 with powerpc) as well as at workstation and pc solutions. as/400 was significantly aided in its ability to pull off the powerpc hardware transition because it had maintained a significantly higher level application environment abstraction (than vax). This is somewhat a legacy of the s38/as400 approximating FS (future system) architecture (all the folklore about after the company killed off FS, it continued to survive in rochester).

recent FS related posting
https://www.garlic.com/~lynn/2002c.html#1 Gerstner moves over as planned

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

medium term future of the human race

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: medium term future of the human race
Newsgroups: comp.society.futures
Date: Thu, 21 Feb 2002 23:43:45 GMT
Malcolm McMahon writes:
Did you know there's never been a famine in a functioning multi-party democracy?

is that cause or effect?

having enuf/excess resources would appear to enable more resources & time spent on non-direct subsistence activities ... like, for instance, schooling.

so does functional multi-party democracy create abundant resources ... or do abundant resources enable functional multi-party democracies?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opinion on smartcard security requested

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Fri, 22 Feb 2002 15:41:51 GMT
Sebastian_30@lycos.com (Sebastian) writes:
Do you know what cards they were cracking? Does anyone have a "list" of vulnerable cards - meaning cards that they actually have cracked (not vulnerable just because they are smart cards and all smart cards can be cracked...). I've never seen any concrete SPA/DPA report giving the brand of the card (except for PIC1684). I guess this is bad commercial value for the manufacturers, but good to know for the consumers!

one of the issues is that a lot of card vendors are buying chips from multiple sources ... it is the chips that have the vulnerability characteristics, and any smartcard vendor might actually be using chips from several different vendors in the same card product. chips are being cracked ... and the chip "customers" are the smartcard or other hardware token vendors (for any of the various hardware token vendors, what is the list of chips that they are using in their products?)

smartcards & cc
https://web.archive.org/web/20020124070419/http://csrc.nist.gov/cc/sc/sclist.htm

fips 140-1 & 140-2 validations
http://csrc.ncsl.nist.gov/cryptval/140-1/1401val.htm

infineon security controller certification
http://www.infineon.com/cmc_upload/documents/029/198/zertifizierungsstatus_0109.pdf

some philips certification
http://www.semiconductors.philips.com/news/publications/content/file_866.html

finread overview
http://www.semiconductors.philips.com/news/publications/content/file_866.html

misc. EAL evaluations (from australia)
https://web.archive.org/web/20020221213202/http://www.dsd.gov.au/infosec/aisep/EPL/ineval.html

note in the cases of hardware token products listed above ... they don't actually mention what chip is being used.

NIAP certification laboratories:
https://web.archive.org/web/20020221012004/http://www.nsa.gov/releases/cctls_08282000.html

cambridge tamper lab:
http://www.cl.cam.ac.uk/Research/Security/tamper/

dpa
http://www.cryptography.com/resources/whitepapers/DPA.html

some overview
https://web.archive.org/web/20020126132220/http://www.geocities.com/ResearchTriangle/Lab/1578/smart.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opinion on smartcard security requested

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Fri, 22 Feb 2002 18:49:56 GMT
stevei_69@hotmail.com (Steve H) writes:
2. How much the customer is willing to pay for the s/card. There are some very high security ICs having a high level of ITSEC (FIPS equivalent) classification. The majority of customers are not willing to pay the price for those. For example, the famous $1 Visa chip card falls under the class of a cheap 8 bit low security IC with no dedicated crypto controller. The customer has to pay something like 10-20 $ per card to use some of the high end secure ICs. The banks are not willing to pay this price. Which is normal. There is no business case for paying this level of pricing. Remember the mag stripe credit/debit card costs the banks relatively little (my guess less than 50 cents). To ask the banks to pay 10x the amount is not going to gain great popularity points.... 3. s/card vendors multi-source the cheaper end ICs. When it comes to the higher security ICs, the list of vendors that manufacture them is reduced to something like 2.

looking at it from a slightly different viewpoint:

the current (magstripe) payment cards are authentication/transaction devices. rather than look at the raw magstripe costs ... look at the fully-loaded costs for delivering such a magstripe card to a customer and the incremental cost of adding a chip as part of that delivery.

in effect the current expiration date is a form of something-you-know information requiring frequent card re-issue. aka there are well known algorithms for generating valid-looking payment card account numbers ... the expiration date is in some sense a check-code.
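
as a rough illustration of the "well known algorithms" point ... a small python sketch of the standard Luhn mod-10 check digit that makes an account number look "valid" (illustrative only ... actual issuing logic obviously involves a lot more than this):

    def luhn_check_digit(partial: str) -> str:
        """compute the Luhn check digit for the leading digits of an account number."""
        total = 0
        # double every second digit counting from the right of the full number,
        # i.e. starting with the digit just left of the (to-be-appended) check digit
        for i, d in enumerate(int(c) for c in reversed(partial)):
            if i % 2 == 0:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return str((10 - total % 10) % 10)

    def luhn_valid(number: str) -> bool:
        return luhn_check_digit(number[:-1]) == number[-1]

    # e.g. luhn_valid("4111111111111111") -> True (a commonly cited test number)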

there is a claim ... that name & expiration date can be eliminated as a payment card attribute if a chip was added (meeting EU point-of-sale privacy issues as well as eliminating periodic need for frequent card re-issue) and the transactions performed as in the x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959

If adding a chip to an existing magstripe card delivery could result in eliminating just one subsequent card re-issue (because of elimination of the expiration date as an authentication attribute) ... the incremental cost of the chip (even a higher security IC) could easily be less than the fully loaded cost of a subsequent card re-issue. aka a chip could actually save money (when you take into account the overall system and infrastructure issues).

That is independent of the issue of something needing to be done because of the increasing vulnerabilities and exploits in the existing magstripe based payment cards (aka chips reducing risk & fraud costs).

the requirement given the x9a10 working group for x9.59 was to preserve the integrity of the financial infrastructure for all electronic retail payments in all environments (i.e. stored-value, debit, credit, atm, point-of-sale, internet, aka ALL).

random risk/exploits:
https://www.garlic.com/~lynn/subintegrity.html#fraud

additional x9.59 privacy & authentication:
https://www.garlic.com/~lynn/subpubkey.html#privacy

some aads chip strawman references at:
https://www.garlic.com/~lynn/x959.html#aads

some related discussions of x9.59 (and hardware token, card, dongle, etc) with respect to current spectrum of (online) magstripe payment cards (including current online stored-value magstripe payment cards usable at existing POS debit/credit terminals):
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm6.htm#digcash IP: Re: Why we don't use digital cash
https://www.garlic.com/~lynn/aadsm6.htm#terror12 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#pcards2 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aadsm7.htm#pcards4 FW: The end of P-Cards?
https://www.garlic.com/~lynn/aadsm7.htm#idcard2 AGAINST ID CARDS
https://www.garlic.com/~lynn/aadsm9.htm#cfppki12 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm9.htm#smallpay Small/Secure Payment Business Models
https://www.garlic.com/~lynn/aadsmore.htm#eleccash re:The Law of Digital Cash

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opinion on smartcard security requested

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Fri, 22 Feb 2002 18:49:56 GMT
stevei_69@hotmail.com (Steve H) writes:
2. How much the customer is willing to pay for the s/card. There are some very high security ICs having a high level of ITSEC (FIPS equivalent) classification. The majority of customers are not willing to pay the price for those. For example, the famous $1 Visa chip card falls under the class of a cheap 8 bit low security IC with no dedicated crypto controller. The customer has to pay something like 10-20 $ per card to use some of the high end secure ICs. The banks are not willing to pay this price. Which is normal. There is no business case for paying this level of pricing. Remember the mag stripe credit/debit card costs the banks relatively little (my guess less than 50 cents). To ask the banks to pay 10x the amount is not going to gain great popularity points.... 3. s/card vendors multi-source the cheaper end ICs. When it comes to the higher security ICs, the list of vendors that manufacture them is reduced to something like 2.

looking at it from a slightly different viewpoint:

the current (magstripe) payment cards are authentication/transaction devices. rather than look at the raw magstripe costs ... look at the fully-loaded costs for delivering such a magstripe card to a customer and the incremental cost of adding a chip as part of that delivery.

in effect the current expiration date is a form of something-you-know information requiring frequent card re-issue. aka there are well known algorithms for generating valid-looking payment card account numbers ... the expiration date is in some sense a check-code.

there is a claim ... that name & expiration date can be eliminated as a payment card attribute if a chip was added (meeting EU point-of-sale privacy issues as well as eliminating periodic need for frequent card re-issue) and the transactions performed as in the x9.59 standard
https://www.garlic.com/~lynn/x959.html#x959

If adding a chip to an existing magstripe card delivery could result in eliminating just one subsequent card re-issue (because of elimination of the expiration date as an authentication attribute) ... the incremental cost of the chip (even a higher security IC) could easily be less than the fully loaded cost of a subsequent card re-issue. aka a chip could actually save money (when you take into account the overall system and infrastructure issues).

That is independent of the issue of something needing to be done because of the increasing vulnerabilities and exploits in the existing magstripe based payment cards (aka chips reducing risk & fraud costs).

the requirement given the x9a10 working group for x9.59 was to preserve the integrity of the financial infrastructure for all electronic retail payments in all environments (i.e. stored-value, debit, credit, atm, point-of-sale, internet, aka ALL).

random risk/exploits:
https://www.garlic.com/~lynn/subintegrity.html#fraud

additional x9.59 privacy & authentication:
https://www.garlic.com/~lynn/subpubkey.html#privacy

some aads chip strawman references at:
https://www.garlic.com/~lynn/x959.html#aads

some related discussions of x9.59 (and hardware token, card, dongle, etc) with respect to current spectrum of (online) magstripe payment cards (including current online stored-value magstripe payment cards usable at existing POS debit/credit terminals):
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/aadsm2.htm#straw AADS Strawman
https://www.garlic.com/~lynn/aadsm6.htm#digcash IP: Re: Why we don't use digital cash
https://www.garlic.com/~lynn/aadsm6.htm#terror12 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#pcards2 The end of P-Cards? (addenda)
https://www.garlic.com/~lynn/aadsm7.htm#pcards4 FW: The end of P-Cards?
https://www.garlic.com/~lynn/aadsm7.htm#idcard2 AGAINST ID CARDS
https://www.garlic.com/~lynn/aadsm9.htm#cfppki12 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm9.htm#smallpay Small/Secure Payment Business Models
https://www.garlic.com/~lynn/aadsmore.htm#eleccash re:The Law of Digital Cash

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Opinion on smartcard security requested

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Opinion on smartcard security requested
Newsgroups: sci.crypt
Date: Fri, 22 Feb 2002 19:20:15 GMT
Anne & Lynn Wheeler writes:
in effect the current expiration date is a form of something you know information requiring frequent card re-issue. aka there are well known algorithms for generating valid payment card account numbers ... the expiration date is in some sense a check-code.

of course the original "expiration date" was for the plastic card itself (pre-magstripe) in the "offline" world. This is the plastic card as a credential ... somewhat similar to the design point for PKI certificate "credentials" targeted at the offline electronic world circa the early '80s (aka connect, download, disconnect, read email ... where there was no end-to-end online connectivity).

adding magstripe to the payment card moved it from an offline credential/certificate model to an online transaction model (out of the pre-'70s offline era model ... something that PKI certificates are still being targeted at ... the pre-70s offline era world).

issues for magstripe payment cards now are supporting non-secure/non-private online networks (existing payment card networks have been private) and the advances in technology supporting card counterfeiting.

x9.59 effectively adds a digital signature to an existing iso 8583 online transaction (w/o requiring any PKI certificate ... certificates are targeted at offline environments where there is no prior relationship between the parties; payment transactions are both online & involve a prior relationship between the consumer and their financial institution).

a hardware token performing the x9.59 digital signature operation added to an existing iso 8583 online transaction (not only online debit & credit, but various of the stored-value flavors) would address both secure transactions being able to flow over non-secure (non-private) online networks and the magstripe card counterfeiting issue (a chip being significantly harder to counterfeit than the existing magstripe).
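
as a rough sketch of the flavor of the idea (the field names and encoding below are strictly made up for illustration ... not the actual x9.59 / iso 8583 formats), using the python "cryptography" package: the consumer's token signs the transaction fields and the consumer's financial institution verifies with the public key registered for the account ... no certificate travels with the transaction:

    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes

    # on-token key pair; the public key is registered with the consumer's
    # financial institution as part of the normal account relationship
    token_key = ec.generate_private_key(ec.SECP256R1())
    registered_public_key = token_key.public_key()

    # hypothetical transaction fields carried in the online authorization request
    transaction = b"account=1234567890123456;amount=42.50;merchant=0099;date=2002-02-22"

    # the token digitally signs the transaction (ec/dsa over sha-256)
    signature = token_key.sign(transaction, ec.ECDSA(hashes.SHA256()))

    # the issuer verifies against the registered public key; raises on failure
    registered_public_key.verify(signature, transaction, ec.ECDSA(hashes.SHA256()))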

This is effectively the NACHA/Debit network trial:
https://www.garlic.com/~lynn/x959.html#aads

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

the same question was asked in sci.crypt newgroup

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: the same question was asked in sci.crypt newgroup
Newsgroups: alt.technology.smartcards
Date: Fri, 22 Feb 2002 21:54:51 GMT
"norman" writes:
I posted the same question at the cryptology newsgroup (sci.crypt) and many of the replies would be of interest to the contributors to this thread. norman wrote in message

and other parts of it also
https://www.garlic.com/~lynn/2002c.html#7 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#15 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#22 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#23 Opinion on smartcard security requested
https://www.garlic.com/~lynn/2002c.html#24 Opinion on smartcard security requested

somewhat related postings here
https://www.garlic.com/~lynn/99.html#224 X9.59/AADS announcement at BAI this week
https://www.garlic.com/~lynn/99.html#229 Digital Signature on SmartCards
https://www.garlic.com/~lynn/2000.html#33 SmartCard with ECC crypto
https://www.garlic.com/~lynn/2000.html#35 SmartCard with ECC crypto
https://www.garlic.com/~lynn/2000.html#65 Cybersafe & Certicom Team in Join Venture (x9.59/aads press release at smartcard forum)
https://www.garlic.com/~lynn/2000b.html#53 Digital Certificates-Healthcare Setting
https://www.garlic.com/~lynn/2000c.html#55 Java and Multos
https://www.garlic.com/~lynn/2000e.html#27 OCF, PC/SC and GOP
https://www.garlic.com/~lynn/2000f.html#77 Reading wireless (vicinity) smart cards
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#5 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001n.html#8 Future applications of smartcard.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

economic trade off in a pure reader system

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: economic trade off in a pure reader system
Newsgroups: sci.crypt
Date: Sat, 23 Feb 2002 00:31:51 GMT
"norman" writes:
it seems smartcard readers are way below $100., maybe $35......so for many systems that just require a read capability, the economic balance is when the system cost (readers plus cards) is equal. so for systems that need a lot of cards, then the 2d code offers genuine economic advantage.

The obvious arena where such a code would be used is an access system with numerous card users. the moment you have, say, 200 or more cards per reader, then the barcodes make economic sense.

(the maths is not rigorous: assume smartcards cost $5 and readers $35, then 200 users is close enough to $1000. for the bar codes, the reader is the $1000 and the cards are "close to zero"?)


similar discussion in alt.technology.smartcards last year:
https://www.garlic.com/~lynn/2001m.html#4 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#5 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
https://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market

also, for the standard PC market you could pay somewhat more for a USB dongle hardware token (compared to a chipcard) and eliminate the requirement for a card acceptor device (the dongle plugs directly into a usb port). This still leaves open the issue of secure PIN-entry.

if it were for a new installation ... it is possible to get a keyboard/reader/usb combination ... where the keyboard has a numeric keypad "cut-out", i.e. a mode where key entry from the keypad goes directly to the hardware token and bypasses bios/system/etc which could be prone to virus/trojan-horse eavesdropping. note this is 2-factor authentication ... where the PIN represents something you know and affects correct operation of the chip (something that is obviously not possible with a 2d bar-code ... since a 2d bar-code isn't actually executing anything).
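
a conceptual python sketch of that gating point (the names and mechanism are purely illustrative ... not any vendor's token design; a real chip might do ec/dsa rather than the symmetric mac shown): the PIN goes from the keypad straight to the token, and the chip refuses to operate until the correct PIN has been presented:

    import hmac, hashlib, secrets

    class HardwareToken:
        def __init__(self, pin: str):
            # inside the chip: a secret key and a PIN verification value,
            # neither of which ever leaves the token
            self._key = secrets.token_bytes(32)
            self._pin_hash = hashlib.sha256(pin.encode()).digest()
            self._unlocked = False

        def enter_pin(self, pin_from_keypad: str) -> bool:
            # keypad "cut-out": models digits going straight to the chip,
            # never passing through bios/system software
            self._unlocked = hmac.compare_digest(
                hashlib.sha256(pin_from_keypad.encode()).digest(), self._pin_hash)
            return self._unlocked

        def respond(self, challenge: bytes) -> bytes:
            # something-you-have (the chip's key) only works in combination
            # with something-you-know (the PIN) ... two-factor authentication
            if not self._unlocked:
                raise PermissionError("PIN not verified")
            return hmac.new(self._key, challenge, hashlib.sha256).digest()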

the issue for cards/readers also raises the question of high traffic activity ... where standard 7816 contacts start to have reliability issues (one of the things driving 14443 contactless).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Sat, 23 Feb 2002 17:04:38 GMT
jmfbahciv writes:
GDit. Why can't you answer a direct question? I'm trying to figure out how to use the language. If English was your second language, I'd write differently.

in the early '80s there was a researcher who sat in the back of my office for 9 months taking notes on how i communicated: face-to-face, email, instant messaging, telephone, etc. ... and also went to meetings and took notes.

this turned into a corporate report with detailed analysis of how i used language, how i communicated, and cmc (computer mediated communication) ... and also a stanford phd thesis joint between the language department and the computer AI department. the material was also used subsequently in some number of books.

the person had taught esl (english as a second language) for 10-15 years prior to going back to school (england, australia, thailand, etc).

their observation was that i bore all the marks of esl ... even tho i was born and raised in the us, and had little non-english language exposure (except for a couple years of latin, french & spanish in high school). there was some slander that i thot & spoke machine language.

random refs:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/99.html#205 Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2000c.html#1 A note on the culture of database
https://www.garlic.com/~lynn/2001j.html#29 Title Inflation
https://www.garlic.com/~lynn/2001k.html#64 Programming in School (was: Re: Common uses...)
https://www.garlic.com/~lynn/2002b.html#51 "Have to make your bones" mentality

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Sat, 23 Feb 2002 20:43:58 GMT
Anne & Lynn Wheeler writes:
prior to shipping, a review of other resource manager products at the time indicated the prevalent use of large numbers of tuning knobs ... and the state-of-the-art at the time was significant random-walk activity, twiddling performance tuning knobs this way and that. Huge numbers of reports from the period were published on the results and recommendations from these random walks.

in any case, marketing decreed that because all other resource manager products had large numbers of tuning knobs ... giving system programmers lots of job security ... this resource manager also needed tuning knobs.


the resource manager had another distinction. june 23rd, 1969 was unbundling, i.e. separate pricing for lots of things that had previously been (sort of) thrown in for free. however, scp (system control program) software (needed to make the hardware run) still continued to be free (i.e. bundled).

besides wanting tuning knobs on the resource manager ... the other thing that marketing wanted was for the resource manager to be the test case for the first charged-for SCP product (i.e. it was an add-on to the basic system control program ... but they were going to charge for it). besides having to put in the tuning knobs ... I also got to spend six months with business, planning, & forecasting people trailblazing all the stuff associated with charging for an SCP product.

another distinction was that csc (cambridge science center, 4th floor, 545 tech sq) had been listed as a data processing division "field" location up until a couple weeks prior to the release of the resource manager. The distinction was that people in "field" locations that released products got 1/12th of the annual license fee for the first two years (as bonus/incentive to develop charged-for products). A month before the release of the resource manager ... the vs/repack product had been released by CSC and the primary individuals responsible collected the first month's license fee for vs/repack. Between the time vs/repack was released and the time the resource manager was released a month later, csc was reclassified as a hdqtrs site (not a "field" site) and therefore no longer eligible for the license fee incentive.

market uptake of the resource manager was such that monthly license revenue exceeded $1m within a couple months of FCS (first customer ship).

random vs/repack refs:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)

as an aside, an early development version of vs/repack was used by CSC in helping redo the apl storage manager as part of the apl\360 to cms\apl port (aka the transition from a small 16k/32k real storage orientation to making it much more virtual memory friendly).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Page size (was: VAX, M68K complex instructions)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Page size (was: VAX, M68K complex instructions)
Newsgroups: comp.arch
Date: Sat, 23 Feb 2002 21:10:53 GMT
anton@mips.complang.tuwien.ac.at (Anton Ertl) writes:
This came up several times in the last two years; e.g., there was a thread "4M pages are a bad idea" in December 2000, and there I found that my single-user machine had 44 processes running at one point in time, with a total of 557 writable and 70 unique read-only mappings, thus probably requiring 557+70=627 pages or more (each unique mapping needs at least one page). With 4M pages, this would require at least 2508MB of memory to fit without paging.

note that in the late '70s to early '80s the mainframe world developed the concept of "big pages" for page i/o transfer. the basic issue was that disk access (arm) performance was improving more slowly than disk transfer rates (as well as more slowly than the performance of many other system components).

the basic implementation would cluster a track's worth of (4k) pages (a "big page", 10 4k pages on 3380) for transfer to disk as a single track write. a fault on any page that was a member of a "big page" would bring in the complete "big page". An advantage over doing a straight 40k page was that the track "cluster" members didn't have to be contiguous virtual memory ... just a collection of pages from the same address space that appeared to have some recent use affinity.

the "big page" paging area and allocation/deallocation was done similar to some of the journaling file system ... always write to a new location that was closest to the current disk arm position (in part because the actual cluster members of any big page might change/update on any output operation). performance recommendation was that the total available disk space for big pages would be five to ten times the actual allocated big pages. That way as the cursor allocation/write algorithm swept across the disk surface ... it could almost always do a full cylinder of writes before having to move the arm.

the implementation didn't bother with garbage collection & file compaction (as in most journaling file systems) since it was felt that most allocated data would naturally evaporate when an application eventually got around to (re)touching some member of a big page (requiring it to be read and the associated disk space deallocated).
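
a much simplified python sketch of the moving-cursor allocation described above (the slot counts, sizes, and names are illustrative only ... not the actual mainframe implementation):

    PAGES_PER_BIG_PAGE = 10      # e.g. one 3380 track held ten 4k pages
    TOTAL_SLOTS = 1000           # big-page slots; over-allocate 5-10x the expected use

    class BigPageArea:
        def __init__(self):
            self.slots = [None] * TOTAL_SLOTS   # None = free slot
            self.cursor = 0                     # sweeps across the disk surface

        def write_big_page(self, page_cluster):
            """write a cluster of up to 10 recently-used pages from one address
            space to the free slot nearest the current arm/cursor position."""
            assert len(page_cluster) <= PAGES_PER_BIG_PAGE
            for _ in range(TOTAL_SLOTS):
                slot = self.cursor
                self.cursor = (self.cursor + 1) % TOTAL_SLOTS
                if self.slots[slot] is None:
                    self.slots[slot] = list(page_cluster)
                    return slot
            raise RuntimeError("paging area full")

        def fault_in(self, slot):
            """a fault on any member brings in the whole big page and frees the
            slot ... no separate garbage collection / compaction pass."""
            cluster = self.slots[slot]
            self.slots[slot] = None
            return cluster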

random big page postings:
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching

some old postings on relative disk "system" performance
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/2001n.html#78 Swap partition no bigger than 128MB?????

random 4m page past postings:
https://www.garlic.com/~lynn/2000g.html#38 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#42 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#43 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#44 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#45 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#47 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001.html#1 4M pages are a bad idea (was Re: AMD 64bit Hammer CPU and VM)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#68 I/O contention
https://www.garlic.com/~lynn/2001h.html#20 physical vs. virtual addresses
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001k.html#62 SMP idea for the future
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#41 mainframe question
https://www.garlic.com/~lynn/2001m.html#56 Contiguous file system
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002b.html#34 Does it support "Journaling"?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Sat, 23 Feb 2002 22:25:58 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Which goes a long way to explaining why VS/APL running under MVS/TSO on a 168 was able to deal with an 8 megabyte workspace which held a model of the Canadian economy, circa 78. (Today, we'd only need 8 megabits B-) But that 168 almost groaned under the load.

in the '70s cms\apl, apl\cms, and then vs/apl were used extensively in modeling a large number of different things. They were used to create and deliver a large number of applications that today have become spreadsheet based (being able to ask a lot of what-if questions).

cambridge had taken apl\360 and done a lot of work on it to turn it into cms\apl (as an aside, cms\apl was also a "charged-for" licensed product, and the primary csc people working on it got their part of the first month's license fee). Besides adapting the whole structure to a virtual memory environment, CSC also implemented a lot of system call functions that allowed a bunch of stuff like external file access.

One of the early business critical apl "modeling" applications was corporate hdqtrs business planning and forecasting. Basically corporate hdqtrs people were given online access to the csc machine in cambridge and they dumped a large part of the corporate economic infrastructure into a large apl model and munged on it extensively. This made for some very interesting security issues. The csc machine room had to have extremely tight security because of some of the data resident on the machine. However, the machine also hosted researchers at CSC doing a wide variety of development and scientific research (even if there were only 30-35 people), onsite mit, bu, harvard student & other access, employee home terminal access and even some off-site student access from various universities in the area.

Eventually a vm service closer to corporate hdqtrs was created for those guys ... and also cloned for other hdqtr operations (emea moved from white plains to paris; i hand carried an installation tape to emea hdqtrs in the then brand-new la defense bldgs).

The whole world wide online field support system (HONE) had just about all of its features delivered on a vm/370 cms\apl (and then apl\cms) platform. The US eventually had a large multi-machine distributed cluster running in palo alto, dallas, and boulder. Other large HONE installations sprang up in various other places around the world: Havant in England, Uithoorne on the continent, at various times a couple different places in &/or around paris, toronto, tokyo, etc.

The system call function support in cms\apl caused a big flap with the people at the phili science center who had done apl\360 ... as violating the purity of apl. This led to a strenuous effort to develop an apl paradigm that allowed system function access w/o violating the purity of apl, aka "shared variables". The palo alto science center did much of the work for incorporating shared variables as well as doing the 370/145 apl microcode accelerator, turning cms\apl into apl\cms.

Eventually a group was formed in STL to take over the apl\cms product and enhance it so that it operated in both cms and tso ... since they could then no longer call it apl\cms ... they renamed it vs/apl.

random hone, apl, etc
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

You think? TOM

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: You think? TOM.
Newsgroups: alt.technology.smartcards
Date: Sat, 23 Feb 2002 21:56:05 GMT
"Wim Ton" writes:
Hi,

The use of public key is limited if the relations between the parties are fixed. The advantage of public key lies in the fact that you don't need to agree a key with every possible communication partner, as with e-mail and e-commerce. If the only relation is between cardholder and card-issuer, one might as well use symmetric cryptography on a cheaper card.


there are advantages to using public key in almost all situations (even if unique per account ... and in contrast to some of the card infrastructures that are shared-secret based and have to use multiple layers of shared-secrets ... some global and therefore representing systemic risk), since it eliminates shared-secrets and the problem that a shared-secret can be used to both authenticate and originate transactions.

eliminating the ability to have shared-secrets capable of originating fraudulent transactions simplifies everybody's infrastructures (controlling modification of records is simpler than preventing viewing of records or dealing with an audit trail of everybody that might have ever viewed the record).

the issue then becomes key registration ... which can be similar to all the current operations for registration of authentication material.

if you are talking digital signature authentication with ec/dsa on a secure chip that provides reasonable protection of the key material ... the accelerator for ec/dsa and des is effectively the same cost in that class of chips. it is only when you get into the no-security class of chips (effectively no key protection) that you might see a little cost difference between ec/dsa and des.

the primary distinction between ec/dsa and des is the requirement in dsa for a high quality random number generator (not present in a straight des requirement). however, the higher security chips have a high quality randomizer as part of other security features (which then effectively eliminates it as a unique cost differentiation between ec/dsa and des or other symmetric key algorithms).

things change if you are talking about rsa signatures ... an rsa signature can be done on a no-security chip because it doesn't directly require a high quality random number generator (especially if keys are injected as opposed to generated on chip). however, rsa signature performance does typically lead to a dedicated accelerator ... which takes a lot of silicon and increases cost.

I would prefer a high quality hardware randomizer in support of various security features as well as a random number generator supporting ec/dsa for authentication (and a common accelerator for both des/symmetric and ec/dsa) ... which then also supports on-chip key generation ... and allows for the key never being divulged outside the chip (compared to a no-security chip with a huge silicon area in support of rsa acceleration).

then w/o compromising security (in contrast to current security guidelines that require a unique password/key/pin for each security domain), the same simple chip/hardware-token with the same public key can be used for authentication in multiple, different security domains.
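
a minimal python sketch of that shared-secret vs. public-key distinction (illustrative only ... the message, key sizes and the "cryptography" package calls are just for demonstration): with a shared secret, everything the verifier holds can also be used to originate an indistinguishable transaction; with a registered public key it can only verify:

    import hmac, hashlib
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives import hashes

    msg = b"debit account 1234 for 10.00"

    # shared-secret model: both sides hold the same key
    shared_secret = b"k" * 32
    consumer_mac = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    # any system that stores the shared secret for verification can produce
    # an identical, indistinguishable authenticator (i.e. originate transactions)
    forged_mac = hmac.new(shared_secret, msg, hashlib.sha256).digest()
    assert consumer_mac == forged_mac

    # public-key model: only the chip holds the private key; relying parties
    # register just the public key, which can verify but not create signatures
    chip_key = ec.generate_private_key(ec.SECP256R1())
    registered_public_key = chip_key.public_key()
    signature = chip_key.sign(msg, ec.ECDSA(hashes.SHA256()))
    registered_public_key.verify(signature, msg, ec.ECDSA(hashes.SHA256()))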

random refs:
https://www.garlic.com/~lynn/x959.html#aads

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 24 Feb 2002 20:11:38 GMT
"Bill Todd" writes:
Stan's comment about the problem being release of otherwise locked-in fossil carbon is correct but doesn't directly address your argument that carbon is carbon. The above does. If current vegetation were being buried and fossilized at the same rate we dig and pump fossil fuel your argument would hold water, but such is not the case at all.

but does that mean that all this fossil fuel burning is actually restoring the natural earth's ecological balance ... since those elements had been unnaturally(?) removed from the normal ecology ... and that burning all fossil fuel over a few tens of years (depleting a non-renewable resource) is an accelerated attempt to restore all those resources to the standard ecology (fossilized material being an unnatural state, so it is our duty to restore it to the normal ecology as quickly as possible)?

note that there have been somewhat similar thread in comp.society.futures. random pieces:
https://www.garlic.com/~lynn/2002c.html#6
https://www.garlic.com/~lynn/2002c.html#20

there was a posting today claiming that various oil extraction is now approaching (or has crossed) negative energy expenditure (i.e. the energy needed to extract the oil is greater than the energy available in the oil extracted); somewhat orthogonal to whether or not all such fossil resources need to be restored to the standard ecological balance.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Did Intel Bite Off More Than It Can Chew?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Did Intel Bite Off More Than It Can Chew?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 24 Feb 2002 20:11:38 GMT
"Bill Todd" writes:
Stan's comment about the problem being release of otherwise locked-in fossil carbon is correct but doesn't directly address your argument that carbon is carbon. The above does. If current vegetation were being buried and fossilized at the same rate we dig and pump fossil fuel your argument would hold water, but such is not the case at all.

but does that mean that all this fossil fuel burning is actually restoring the natural earth's ecological balance ... since those elements had been unnaturally(?) removed from the normal ecology ... and that burning all fossil fuel over a few tens of years (depleting a non-renewable resource) is an accelerated attempt to restore all those resources to the standard ecology (fossilized material being an unnatural state, so it is our duty to restore it to the normal ecology as quickly as possible)?

note that there have been somewhat similar thread in comp.society.futures. random pieces:
https://www.garlic.com/~lynn/2002c.html#6
https://www.garlic.com/~lynn/2002c.html#20

there was a posting today claiming that various oil extraction is now approaching (or has crossed) negative energy expenditure (i.e. the energy needed to extract the oil is greater than the energy available in the oil extracted); somewhat orthogonal to whether or not all such fossil resources need to be restored to the standard ecological balance.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OS Workloads : Interactive etc

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OS Workloads : Interactive etc.
Newsgroups: alt.folklore.computers
Date: Mon, 25 Feb 2002 04:12:08 GMT
ab528@FreeNet.Carleton.CA (Heinz W. Wiggeshoff) writes:
Which goes a long way to explaining why VS/APL running under MVS/TSO on a 168 was able to deal with an 8 megabyte workspace which held a model of the Canadian economy, circa 78. (Today, we'd only need 8 megabits B-) But that 168 almost groaned under the load.

note that apl\cms on a 145 with the apl microcode assist ran as fast as (or faster than) apl\cms on a 168-3 ... of course the 145 didn't have 8mbytes of real memory ... so an apl model greater than the 145 real storage size could have page-thrashed itself nearly to death.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

TOPS-10 logins (Was Re: HP-2000F - want to know more about it)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: TOPS-10 logins (Was Re: HP-2000F - want to know more about it)
Newsgroups: alt.folklore.computers,alt.sys.pdp10
Date: Mon, 25 Feb 2002 16:59:23 GMT
fox@crisp.demon.co.uk (Paul D Fox) writes:
It would be good to have a college course on cracking and hacking. In the older days, such activities were being done to promote self-awareness. Nowadays people seem to do it out of complete stupidity. (Why get caught on a felony charge when you can have as much fun on your own machine).

been at some meetings regarding university information assurance/security programs. at some, the number one stated problem is little or no interest by students in correctness and prevention ... but spending all their time attempting to break existing systems and garnering the associated bragging rights (not even figuring out solutions to problems they had uncovered, just uncovering the problems)

slightly related:
https://www.garlic.com/~lynn/2000g.html#4 virtualizable 360, was TSS ancient history

random information assurance/security:
https://www.garlic.com/~lynn/aadsmail.htm#mfraud AADS, X9.59, security, flaws, privacy
https://www.garlic.com/~lynn/aadsm3.htm#cstech12 cardtech/securetech & CA PKI
https://www.garlic.com/~lynn/aadsm3.htm#kiss4 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
https://www.garlic.com/~lynn/aadsm4.htm#01 redundant and superfluous (addenda)
https://www.garlic.com/~lynn/aadsm5.htm#epaym "e-payments" email discussion list is now "Internet-payments"
https://www.garlic.com/~lynn/aadsm5.htm#encryp Encryption article
https://www.garlic.com/~lynn/aadsm6.htm#terror3 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#terror7 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm6.htm#terror10 [FYI] Did Encryption Empower These Terrorists?
https://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
https://www.garlic.com/~lynn/aadsm10.htm#cfppki13 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#smallpay2 Small/Secure Payment Business Models
https://www.garlic.com/~lynn/aadsm10.htm#cfppki18 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#bio3 biometrics (addenda)
https://www.garlic.com/~lynn/aadsm10.htm#keygen Welome to the Internet, here's your private key
https://www.garlic.com/~lynn/ansiepay.htm#aadsach NACHA to Test ATM Card Payments for Consumer Internet Purchases
https://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security
https://www.garlic.com/~lynn/aepay3.htm#riskaads AADS & RIsk Management, and Information Security Risk Management (ISRM)
https://www.garlic.com/~lynn/aepay3.htm#x959risk1 Risk Management in AA / draft X9.59
https://www.garlic.com/~lynn/aepay4.htm#nyesig e-signatures in NY
https://www.garlic.com/~lynn/aepay4.htm#comcert3 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay6.htm#docstore ANSI X9 Electronic Standards "store"
https://www.garlic.com/~lynn/aepay6.htm#gaopki GAO: Government faces obstacles in PKI security adoption
https://www.garlic.com/~lynn/aepay6.htm#cacr7 7th CACR Information Security Workshop
https://www.garlic.com/~lynn/aepay6.htm#cacr7b 7th CACR Information Security Workshop
https://www.garlic.com/~lynn/aepay7.htm#cacr8 8th CACR Information Security Workshop (human face of privacy)
https://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? Photo ID's and Payment Infrastructure
https://www.garlic.com/~lynn/aepay7.htm#orst X9.59 paper ... fyi
https://www.garlic.com/~lynn/aepay8.htm#orst2 Project Corvalllis
https://www.garlic.com/~lynn/2001c.html#61 Risk management vs security policy
https://www.garlic.com/~lynn/2001d.html#7 Invalid certificate on 'security' site.
https://www.garlic.com/~lynn/2001e.html#77 Apology to Cloakware (open letter)
https://www.garlic.com/~lynn/2001f.html#31 Remove the name from credit cards!
https://www.garlic.com/~lynn/2001f.html#35 Security Concerns in the Financial Services Industry
https://www.garlic.com/~lynn/2001f.html#79 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#0 FREE X.509 Certificates
https://www.garlic.com/~lynn/2001g.html#38 distributed authentication
https://www.garlic.com/~lynn/2001h.html#45 Article: Future Trends in Information Security
https://www.garlic.com/~lynn/2001h.html#64 Net banking, is it safe???
https://www.garlic.com/~lynn/2001i.html#9 Net banking, is it safe???
https://www.garlic.com/~lynn/2001j.html#5 E-commerce security????
https://www.garlic.com/~lynn/2001j.html#44 Does "Strong Security" Mean Anything?
https://www.garlic.com/~lynn/2001k.html#1 Are client certificates really secure?
https://www.garlic.com/~lynn/2001l.html#56 hammer
https://www.garlic.com/~lynn/2002.html#12 A terminology question

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

economic trade off in a pure reader system

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: economic trade off in a pure reader system
Newsgroups: sci.crypt
Date: Mon, 25 Feb 2002 17:44:34 GMT
Francois Grieu writes:
The number of terminals and cards is not the only, and often not the decisive criteria in choosing among card technologies. Smarts Cards are rarely a sensible choice unless you need at least one of

1 great difficulty to duplicate the card, or equivalently ability for the card reader/terminal to authenticate the card as genuine

2 terminals that work offline (non-connected)

3 user PIN code verification (esp. in combination with 2)

4 value stored in the card (esp.in combination with 2)

5 ability to field-update card content with a low-cost terminal

6 data storage capacity in k-bytes

7 ability to operate the card in a harsh environments

8 ability to operate the card at a distance

9 low cost terminal that can be reliably operated by an untrained user

Several criteria may combine to reach the threshold where Smart Cards are worth their increased cost.

In some cases (e.g. pay-TV over a unidirectional link with a standard terminal), Smart Cards may be the only technology that does the job.


at least one of the situations where an authentication 7816 chipcard application was replaced with a barcode reader ... was a high traffic transit-like application where the 7816 contacts had significant reader reliability problems (and 14443 or other contactless technology wasn't yet generally available).

on the other hand, various authentication/authorization protocols operating over insecure networks have been developed for chipcards ... not only is the reader possibly remote ... but the connection between the reader and the authorization agent is non-private &/or possibly non-secure.

an obvious example is internet e-commerce transactions.

Examples of number 4 are the offline stored-value cards used in europe. there are similar stored value cards in the us that have extensive deployment and use ... but they are online. The contrast was that (at least at one time) there were significant PTT costs &/or even questions of online connectivity being available in some regions (compared to the US). However, the world is significantly changing ... with things like the internet and wireless changing the online/offline consideration in most areas of the world.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
Newsgroups: comp.arch
Date: Mon, 25 Feb 2002 18:14:19 GMT
dsiebert@excisethis.khamsin.net (Douglas Siebert) writes:
So this whole argument is really pretty silly since processors and operating systems have been supporting multiple page sizes quite well for a while now. I have a feeling Lynn Wheeler might jump in here right about now and tell us how variable page sizing was implemented on mainframes back when the 6502 was state of the art :)

?? ... 370 supported 2k & 4k pages (as well as 64k & 1m segments) ... in the early 70s (but page size had to be consistent within the same address space). there was a problem with the 370/168-3 in the middle '70s ... with its 64kbyte cache. In going from the 168-1 to the 168-3 and doubling cache size from 32kbytes to 64kbytes ... they used the 2k bit as part of cache line indexing.

that had two problems: 1) running in 2k page mode, the machine was restricted to 32kbytes of cache, and 2) on any switch between page size modes, the complete cache got flushed.

there were some number of customers that upgraded from a 168-1 to a 168-3 and actually saw a performance degradation (any gain from the doubled cache size for 4k page mode address spaces was more than offset by the cache flushing any time a page size mode switch occurred).
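
a much simplified python model of why using the "2k bit" in the cache index interacts with page size (the line size, associativity and indexing scheme below are assumptions for illustration ... not the actual 168-3 design): the cache set index has to come from untranslated page-offset bits, and a 2k page offers one fewer untranslated bit than a 4k page:

    def usable_cache_bytes(total_cache_bytes, line_size, ways, page_size):
        """how much of the cache can be indexed using only untranslated
        (page offset) address bits."""
        sets = total_cache_bytes // (line_size * ways)
        index_bits_needed = sets.bit_length() - 1        # log2(sets)
        offset_bits = line_size.bit_length() - 1         # log2(line_size)
        untranslated_bits = page_size.bit_length() - 1   # log2(page_size)
        # index bits falling outside the page offset can't be used before translation
        usable_index_bits = min(index_bits_needed, untranslated_bits - offset_bits)
        return (2 ** usable_index_bits) * line_size * ways

    # hypothetical geometry: 64kbyte cache, 32-byte lines, 16-way set associative
    print(usable_cache_bytes(64 * 1024, 32, 16, 4096))   # 65536 -> full cache with 4k pages
    print(usable_cache_bytes(64 * 1024, 32, 16, 2048))   # 32768 -> half the cache with 2k pages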

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Wang tower minicomputer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Wang tower minicomputer
Newsgroups: alt.folklore.computers
Date: Tue, 26 Feb 2002 14:38:17 GMT
Barry OGrady writes:
This one had a 5.25in hard drive. All files were in libraries, and microcode was downloaded to external devices such as workstations and printers. It was used as a word processor mainly.

besides as/400, macs, & rs/6000 getting power or powerpc chips ... so did wang and bull (as well as some number of embedded processor applications, like various mainframe ancillary processors).

random ref:
https://www.garlic.com/~lynn/2002c.html#19 Did Intel Bite Off More Than It Can Chew?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)
Newsgroups: comp.arch
Date: Tue, 26 Feb 2002 15:58:13 GMT
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
S/370, which came out around 1972, had 2K or 4K page size, with 64K or 1M segment size. Though I don't know that those machines implemented the 1M segment size. (It is a two level virtual address, with segment tables containing pointers to page tables containing pointers to real memory. Either could have an invalid bit, so page tables could be paged.)

As I remember the story at the time, IBM bought the virtual memory patent, along with some other patents, so that they wouldn't worry about someone suing them. This was not long after their anti-trust case, so they weren't likely to sue anyone else.


the s/370 still had 24bit virtual addressing

the 360/67, along with tss/360, was a major virtual memory effort circa 1966 (although not a commercial success). It had 4k pages and 1mbyte segments ... and had both 24bit and 32bit virtual addressing modes.

the 308x follow-on to the 370 introduced 24bit & 31bit virtual addressing circa 1980 (one bit less than the original 360/67 from nearly 15 years earlier).

the cambridge science center had been hoping that ibm would win the multics project with the proposed virtual memory machine ... actually the 360/62. The original 360 models were 30, 40, 50, 60, & 70 (with the 360/62 being a virtual memory version of the 360/60). Prior to first customer ship ... the 60, 62, & 70 got enhanced hardware storage (the new memory was 8 bytes wide with 750ns access; I believe the original memory was to have been 1000ns access) and were "re-named" 65, 67, and 75.

note/update

I remember reading an early document about a 360/6x machine with virtual memory available in one, two, and four processor configurations. I sort of had a vague recollection that it was a model number other than 360/67.

however, i've subsequently been told that the 360/60 was to have 2 microsecond memory and the 360/62 1 microsecond memory. neither model ever shipped; both were replaced with the 360/65 with 750ns memory. the 360/67 then shipped as a 360/65 with virtual memory ... only available in one-processor (uniprocessor) and two-processor (multiprocessor) configurations.

https://www.garlic.com/~lynn/2006m.html#50 The System/360 Model 20 Wasn't As Bad As All That


In part because of the loss of multics to ge ... CSC started on the virtual machine/memory project. They had intended to modify a 360/50 with virtual memory ... but all of the available 360/50s were going to the FAA ... so they modified a 360/40 with virtual memory support and developed cp/40. Later, when 360/67s became available, they ported cp/40 to the 360/67 and renamed it cp/67.

CP/67 saw fairly successful commercial deployment in customer shops. It was also used internally for a lot of online service delivery as well as system development. An early version of VS/2 was built by modifying MVT to include CCWTRANS (virtual to real CCW/io translation) from CP/67 (as well as other CP/67 components) and testing on 360/67s.

One of the internal online services that was built originally on cp/67 and later migrated to VM/370 was "HONE". It provided online support and interactive tools to all the field/sales people in the world (several tens of thousands):
https://www.garlic.com/~lynn/subtopic.html#hone

in the late '70s HONE developed large scale cluster support ... initially deployed at a single location in the US (although by then there were a number of HONE clones spread around the world), and then US HONE was expanded into a distributed cluster across three sites (palo alto, dallas, and boulder) for disaster survivability purposes. My wife and I used some of that experience when we were doing the HA/CMP product:
https://www.garlic.com/~lynn/subtopic.html#hacmp

Also, modified versions of CP/67 were done which provided both 360/67 virtual machines and 370 virtual memory virtual machines (i.e. CP/67 running on a 360/67 ... but providing virtual machines that conformed to the 370 virtual memory architecture, which included somewhat different page & segment table formats as well as some new instructions).

I believe that the first and earliest code that paged "page tables" was part of the resource manager product that I released. It didn't actually page page tables ... it paged the disk backing store mapping tables for a segment that had no valid pages. random refs:
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

There is the story that virtual 370 virtual memory machines were in production use one year before the first engineering model of a 370 machine with virtual memory was operational. There were two versions of CP/67 with 370 virtual memory modifications: a) a version of CP/67 that ran on the 360/67 architecture and provided 370 virtual memory architecture virtual machines, and b) a version of cp/67 that ran on the 370 architecture. When the first 370 virtual memory engineering machine was ready for testing, the 370-modified CP/67 was booted on the machine as a test (this engineering machine had a knife switch for the "boot" button). The boot failed, and after some analysis it turned out that the engineers had implemented one of the new 370 instructions incorrectly. CP/67 was quickly modified to conform to the mis-implemented instruction and was rebooted & run successfully.

more detailed description of the whole MIT CTSS, CP/40, CP/67, Multics, Project MAC, 360/67, TSS/360, cambridge science center, etc history can be found at:
http://www.leeandmelindavarian.com/Melinda#VMHist

as an aside, the cambridge science center was also responsible for the "internal network" (larger than arpanet/internet until the mid-80s) as well as GML (precursor to SGML, HTML, XML, etc), a lot of the early work transitioning from performance tuning to capacity planning and various interactive and other tools.

lots of CSC references (4th floor, 545 tech sq, same building as project mac & multics):
https://www.garlic.com/~lynn/subtopic.html#545tech

random other refs:
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#46 Rethinking Virtual Memory
https://www.garlic.com/~lynn/94.html#53 How Do the Old Mainframes
https://www.garlic.com/~lynn/94.html#54 How Do the Old Mainframes
https://www.garlic.com/~lynn/95.html#1 pathlengths
https://www.garlic.com/~lynn/98.html#10 OS with no distinction between RAM a
https://www.garlic.com/~lynn/98.html#11 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#12 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#13 S/360 operating systems geneaology
https://www.garlic.com/~lynn/99.html#126 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#127 Dispute about Internet's origins
https://www.garlic.com/~lynn/99.html#142 OS/360 (and descendants) VM system?
https://www.garlic.com/~lynn/99.html#177 S/360 history
https://www.garlic.com/~lynn/99.html#237 I can't believe this newsgroup still exists
https://www.garlic.com/~lynn/2000.html#1 Computer of the century
https://www.garlic.com/~lynn/2000.html#43 Historically important UNIX or computer things.....
https://www.garlic.com/~lynn/2000.html#52 Correct usage of "Image" ???
https://www.garlic.com/~lynn/2000.html#81 Ux's good points.
https://www.garlic.com/~lynn/2000.html#82 Ux's good points.
https://www.garlic.com/~lynn/2000.html#89 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#54 Multics dual-page-size scheme
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#53 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#2 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001b.html#18 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001b.html#21 First OS?
https://www.garlic.com/~lynn/2001b.html#35 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001e.html#69 line length (was Re: Babble from "JD" <dyson@jdyson.com>)
https://www.garlic.com/~lynn/2001f.html#47 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001f.html#48 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001h.html#9 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#57 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#32 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#34 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#39 IBM OS Timeline?
https://www.garlic.com/~lynn/2001l.html#5 mainframe question
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/2001l.html#7 mainframe question
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#24 mainframe question
https://www.garlic.com/~lynn/2001m.html#44 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2001m.html#49 TSS/360
https://www.garlic.com/~lynn/2001m.html#53 TSS/360
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2001n.html#10 TSS/360
https://www.garlic.com/~lynn/2001n.html#18 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#89 TSS/360
https://www.garlic.com/~lynn/2002.html#36 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2002b.html#6 Microcode?
https://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
https://www.garlic.com/~lynn/2002b.html#45 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#46 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2002b.html#64 ... the need for a Museum of Computer Software

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

using >=4GB of memory on a 32-bit processor

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: using >=4GB of memory on a 32-bit processor
Newsgroups: comp.arch
Date: Tue, 26 Feb 2002 16:21:08 GMT
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:
Well, for those who were there when 16 bit machines had more than 64K, Intel's answer was Segment Selectors, and, for 32 bit machines, 48 bit addresses. (16 bit segment selector, 32 bit offset). The pentium PMMU (paging unit) makes it difficult to do this right, but with OS support it would be possible to have a single process address more than 4GB. I don't know that any OS support this, though.

a hack was done for the 3033 ... a late life-cycle 370 (late '70s) ... prior to the 3081 and full 31-bit addressing (31, not 32).

the 370 architecture was pure 24 bit addressing and many 3033 configurations were severely real storage constrained at 16mbytes.

turns out that the 370 page table entry (in 4k page mode) had two bits that were undefined. 3033s were built with 26-bit "real" address lines (for 64mbyte real addressing, even tho instructions only had 24-bit addressing). On the 3033, the two undefined bits in the PTE were then used as extended real page number bits (two additional bits on top of the standard 12-bit real page number field). The TLB was then extended to support 14-bit real page numbers. The result was a machine that only had 24-bit instruction addressing ... but 26-bit real storage addressing (the TLB input was still a 12-bit virtual page number, but the output was a 14-bit real page number).

these machines were 4k page mode only.
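
as an aside, here is a minimal sketch (C, with made-up structure and field names; not the actual 370 PTE layout) of the arithmetic involved: the virtual page number is still only 12 bits, but the two formerly-undefined PTE bits supply two more real frame number bits, so the translated result is a 26-bit real address.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

/* hypothetical PTE fields, for illustration only */
struct pte {
    unsigned frame_lo : 12;   /* standard 12-bit real frame number      */
    unsigned frame_hi : 2;    /* the two previously-undefined PTE bits  */
    unsigned valid    : 1;
};

/* translate a 24-bit virtual address into a 26-bit real address */
static uint32_t translate(const struct pte *pt, uint32_t vaddr24)
{
    uint32_t vpn = (vaddr24 >> PAGE_SHIFT) & 0xFFF;         /* 12-bit virtual page no. */
    if (!pt[vpn].valid)
        return UINT32_MAX;                                  /* treat as page fault */
    uint32_t frame14 = ((uint32_t)pt[vpn].frame_hi << 12) | pt[vpn].frame_lo;
    return (frame14 << PAGE_SHIFT) | (vaddr24 & PAGE_MASK); /* 26-bit result */
}

int main(void)
{
    static struct pte pt[4096];
    pt[5].valid = 1;
    pt[5].frame_lo = 0x123;
    pt[5].frame_hi = 0x2;                        /* real frame 0x2123 */
    printf("real address 0x%07x\n",
           (unsigned)translate(pt, (5u << PAGE_SHIFT) | 0x456));
    return 0;
}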

now this is analogous to, but different from, various publications describing ROMP (pc/rt) as having 40-bit addressing and POWER (rs/6000) as having 52-bit addressing.

801 ROMP had 16 segment registers and "inverted" page tables. The top four bits of a 32-bit address were used to select one of the 16 segment registers and the remaining 28 bits addressed within a 256mbyte segment. A segment register contained a 12-bit "segment id" which was used as part of the TLB lookup. At any one time, an 801 ROMP would have up to 4096 uniquely defined segments. These are somewhat analogous to the number of different, uniquely defined virtual address spaces in some other architectures.

Various documents describe the 28-bit addressing (within a 256mbyte segment) plus the 12-bit segment identifier as yielding a machine that supported 40-bit addressing.

Later 801 RIOS documents extended that description to 52-bit addressing because the 801 RIOS segment registers supported 24-bit segment identifiers (instead of 12-bit segment identifiers).

However, the actual analogy is really more akin to the number of simultaneous, unique virtual address spaces. For a historical 370 comparison with segment & page tables, the 370/168 had a 7-entry STO (segment table origin, basically a unique address space pointer) stack. Whenever the current address space pointer register was changed, the STO stack was checked for a matching value. If no match was found, an entry was scavenged ... and all TLB entries with the corresponding 3-bit ID were invalidated. The ROMP 12-bit segment id and the RIOS 24-bit segment id are analogous to the 370/168 3-bit STO stack identifier. The difference between 370 and ROMP/RIOS was that when an address space was changed, instead of changing an address space pointer register (as on 370), all 16 segment registers (normally) needed to be changed.
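
a minimal sketch (C, illustrative field widths only; not the hardware definition) of the ROMP-style address expansion described above: the top 4 bits of a 32-bit effective address pick a segment register, whose 12-bit segment id is concatenated with the remaining 28-bit offset to form the 40-bit value the inverted page table / TLB is searched with (with 24-bit segment ids, the same arithmetic gives 52 bits).

#include <stdint.h>
#include <stdio.h>

/* expand a 32-bit effective address into a 40-bit "virtual" address */
static uint64_t expand(const uint16_t segreg[16], uint32_t eaddr)
{
    uint32_t segid  = segreg[eaddr >> 28] & 0xFFF;  /* 12-bit segment id */
    uint32_t offset = eaddr & 0x0FFFFFFF;           /* 28-bit offset     */
    return ((uint64_t)segid << 28) | offset;        /* 40-bit virtual    */
}

int main(void)
{
    uint16_t segreg[16] = {0};
    segreg[3] = 0x7AB;                    /* segment id loaded into register 3 */
    printf("virtual = 0x%010llx\n",
           (unsigned long long)expand(segreg, 0x31234567u));
    return 0;
}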

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Beginning of the end for SNA?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Beginning of the end for SNA?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 26 Feb 2002 16:33:21 GMT
JMckown@HEALTHAXIS.COM (McKown, John) writes:
I thought this announcement was interesting. The 3745 and 3746 are being withdrawn from marketting. There is NO replacement product announced. Or am I misunderstanding what this is announcing?

http://www.ibmlink.ibm.com/usalets&parms=H_902-040


been a long haul
https://www.garlic.com/~lynn/submain.html#360pcm

some claim that the original pu4/pu5, ncp/sscp design for 3705/vtam was a response to the building of the original 360 pcm telecommunication controller that a couple of us did when I was an undergraduate.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Beginning of the end for SNA?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Beginning of the end for SNA?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 26 Feb 2002 19:25:26 GMT
Mike.O'Neill@53.COM (O'Neill, Mike) writes:
Oh, also FWIW, the announcement specifically said that it did NOT apply to the NCP software. Of course, with no hardware to run on, what good is the software? Maybe we can make a 3745 emulator on an Intel like Hercules/390 emulates an S/390? <grin>.

the original 3705 was a uc.5 engine (same as in the 8100 and in the 3081 service processor). this was after some in-fighting where an attempt was made to get the s1/peachtree engine as the 3705 engine (a significantly more capable processor).

when I made the following presentation to the SNA ARB in raleigh, oct '86
https://www.garlic.com/~lynn/99.html#66 System/1
https://www.garlic.com/~lynn/99.html#67 System/1
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1?)
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI

during the presentation the executive responsible for ncp asked how so few people could have done all that work (basically a superset of both pu4 & pu5 with peer-to-peer networking support implemented on series/1, and sna emulation only at the necessary boundary nodes) when Raleigh had so many people supporting NCP (somewhere between ten times and a hundred times more).

one of the issues appeared to be that the core NCP kernel was only about 6000 lines of uc.5 code; as a result any of the drivers and other features had to implement a large amount of their own ROI services (instead of being able to rely on a lot of common services being provided by the kernel).

there was subsequent agitated reaction ... although possibly not as much as to the original project implementing the first 360 pcm controller (when I was an undergraduate)
https://www.garlic.com/~lynn/submain.html#360pcm

in any case, besides the base uc.5 engine needing emulation, there would also be the extensive line-scanner and other i/o hardware peculiar to the 37xx boxes.

a significantly better base would be the above-mentioned project, for which we were looking at doing a s/1 to 801/rios port.

random other refs:
https://www.garlic.com/~lynn/94.html#8 scheduling & dynamic adaptive ... long posting warning
https://www.garlic.com/~lynn/94.html#33a High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/94.html#52 Measuring Virtual Memory
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#66 System/1 ?
https://www.garlic.com/~lynn/99.html#67 System/1 ?
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#189 Internet Credit Card Security
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000.html#16 Computer of the century
https://www.garlic.com/~lynn/2000.html#50 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#90 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#0 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#57 South San Jose (was Tysons Corner, Virginia)
https://www.garlic.com/~lynn/2000b.html#78 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#89 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#47 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#48 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#51 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#52 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000c.html#58 Disincentives for MVS & future of MVS systems programmers
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001.html#4 Sv: First video terminal?
https://www.garlic.com/~lynn/2001.html#10 Review of Steve McConnell's AFTER THE GOLD RUSH
https://www.garlic.com/~lynn/2001.html#72 California DMV
https://www.garlic.com/~lynn/2001b.html#49 PC Keyboard Relics
https://www.garlic.com/~lynn/2001b.html#63 Java as a first programming language for cs students
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#47 PKI and Non-repudiation practicalities
https://www.garlic.com/~lynn/2001d.html#38 Flash and Content address memory
https://www.garlic.com/~lynn/2001e.html#8 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001e.html#55 Pre ARPAnet email?
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001g.html#32 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001h.html#21 checking some myths.
https://www.garlic.com/~lynn/2001h.html#56 Blinkenlights
https://www.garlic.com/~lynn/2001h.html#57 Whom Do Programmers Admire Now???
https://www.garlic.com/~lynn/2001h.html#59 Blinkenlights
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#52 misc loosely-coupled, sysplex, cluster, supercomputer, & electronic commerce
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#13 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#21 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2001k.html#42 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001k.html#46 3270 protocol
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#23 mainframe question
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2001n.html#9 NCP
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#53 A request for historical information for a computer education project
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq
https://www.garlic.com/~lynn/2002.html#32 Buffer overflow
https://www.garlic.com/~lynn/2002.html#45 VM and/or Linux under OS/390?????
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002b.html#36 windows XP and HAL: The CP/M way still works in 2002
https://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#57 Computer Naming Conventions
https://www.garlic.com/~lynn/2002b.html#59 Computer Naming Conventions
https://www.garlic.com/~lynn/2002c.html#41 Beginning of the end for SNA?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Beginning of the end for SNA?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Beginning of the end for SNA?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 26 Feb 2002 19:29:42 GMT
edjaffe@PHOENIXSOFTWARE.COM (Edward E. Jaffe) writes:
But why does that spell doom for SNA? Do we still need 3745s in modern networks? Can't APPN networks do everything subarea networks used to do? Can't an OSA-attached z/OS host do everything we need? I'm by no means a VTAM expert (and I've never worked with NCP), so if these sound like questions born of ignorance ... they are.

somewhat as a total aside ... CPD/SNA non-concurred with the original APPN announcement. After some amount of escalation ... the announcement was finally rewritten so as not to imply any relation at all between APPN and SNA ... and was then finally released a couple of months later.

the position was that SNA and APPN were in absolutely no way related.

note however that by most traditional standards, SNA is a telecommunication control protocol ... totally lacking a "network" layer. One of the ways that APPN violated basic SNA architecture was that APPN at least had a real network layer.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

cp/67 (coss-post warning)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: cp/67 (coss-post warning)
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 26 Feb 2002 22:01:24 GMT
prune@ZAnkh-Morpork.mv.com (Paul Winalski) writes:
Several of the original programmers for CP/67 and CMS were still at the IBM Cambridge Scientific Center when I worked there part time while in grad school back ca. 1978. They told me that the main reason for developing CP/67 was as a platform for developing performance tools. CSC had several projects to measure OS/360 performance. These very often involved instrumenting the OS code. As you point out, there was a shortage of 360/50 hardware at the time and CSC couldn't afford to dedicate harware stand-alone to debugging the instrumentation code. CP/67 allowed many developers to share a single 360/40 (later 360/67) and crash their virtual machines to their heart's content, without impacting anyone else's work.

as melinda's paper talks about ... there was the real reason and the talked-about reasons (aka ... disguising the fact that you were trespassing on somebody else's turf).

some previous postings:
https://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2002b.html#6 Microcode?

In the following, by the time I graduated and joined CSC, Creasy had transferred to the Palo Alto Science Center (where he was manager of various projects, including apl\cms & the 145 microcode assist); Comeau had transferred to g'burg (in FS he was in charge of advanced I/O and system interconnect, and my future wife reported to him ... this was before she went to pok to be in charge of loosely-coupled architecture; later he retired & returned to boston where he was the "C" in CLaM, aka C, L, & M, who we subcontracted a lot of HA/CMP development to); and Bayles had left to be one of the founders of NCSS (a cp/67 service bureau in stamford, conn.). That actually happened the summer of '68, the friday before a one-week cp/67 class that IBM hosted in its Hollywood office; instead of attending the class I got roped into teaching some amount of it, somewhat as a Bayles backfill. Bayles and a couple of others had visited the univ. the last week of jan. '68 to do a cp/67 install, and it was then turned over to me as a hobby.

melinda's paper
http://www.leeandmelindavarian.com/Melinda#VMHist

various extracts:
CP-40 and CMS

In the Fall of 1964, the folks in Cambridge suddenly found themselves in the position of having to cast about for something to do next. A few months earlier, before Project MAC was lost to GE, they had been expecting to be in the center of IBM's time-sharing activities. Now, inside IBM, ''time-sharing'' meant TSS, and that was being developed in New York State. However, Rasmussen was very dubious about the prospects for TSS and knew that IBM must have a credible time-sharing system for the S/360. He decided to go ahead with his plan to build a time-sharing system, with Bob Creasy leading what became known as the CP-40 Project. The official objectives of the CP-40 Project were the following:

1. The development of means for obtaining data on the operational characteristics of both systems and application programs;

2. The analysis of this data with a view toward more efficient machine structures and programming techniques, particularly for use in interactive systems;

3. The provision of a multiple-console computer system for the Center's computing requirements; and

4. The investigation of the use of associative memories in the control of multi-user systems.

The project's real purpose was to build a time-sharing system, but the other objectives were genuine, too, and they were always emphasized in order to disguise the project's ''counter-strategic'' aspects. Rasmussen consistently portrayed CP-40 as a research project to ''help the troops in Poughkeepsie'' by studying the behavior of programs and systems in a virtual memory environment. In fact, for some members of the CP-40 team, this was the most interesting part of the project, because they were concerned about the unknowns in the path IBM was taking. TSS was to be a virtual memory system, but not much was really known about virtual memory systems. Les Comeau has written: Since the early time-sharing experiments used base and limit registers for relocation, they had to roll in and roll out entire programs when switching users....Virtual memory, with its paging technique, was expected to reduce significantly the time spent waiting for an exchange of user programs.


...
Creasy and Comeau were soon joined on the CP-40 Project by Dick Bayles, from the MIT Computation Center, and Bob Adair, from MITRE. Together, they began implementing the CP-40 Control Program, which sounds familiar to anyone familiar with today's CP. Although there were a fixed number (14) of virtual machines with a fixed virtual memory size (256K), the Control Program managed and isolated those virtual machines in much the way it does today. 28 The Control Program partitioned the real disks into minidisks and controlled virtual machine access to the disks by doing CCW translation. Unit record I/O was handled in a spool-like fashion. Familiar CP console functions were also provided.

This system could have been implemented on a 360/67, had there been one available, but the Blaauw Box wasn't really a measurement tool. Even before the design for CP-40 was hit upon, Les Comeau had been thinking about a design for an address translator that would give them the information they needed for the sort of research they were planning. He was intrigued by what he had read about the associative memories that had been built by Rex Seeber and Bruce Lindquist in Poughkeepsie, so he went to see Seeber with his design for the ''Cambridge Address Translator'' (the ''CAT Box''), which was based on the use of associative memory and had ''lots of bits'' for recording various states of the paging system. Seeber liked the idea, so Rasmussen found the money to pay for the transistors and engineers and microcoders that were needed, and Seeber and Lindquist implemented Comeau's translator on a S/360 Model 40.

Comeau has written:

Virtual memory on the 360/40 was achieved by placing a 64-word associative array between the CPU address generation circuits and the memory addressing logic. The array was activated via mode-switch logic in the PSW and was turned off whenever a hardware interrupt occurred. The 64 words were designed to give us a relocate mechanism for each 4K bytes of our 256K-byte memory. Relocation was achieved by loading a user number into the search argument register of the associative array, turning on relocate mode, and presenting a CPU address. The match with user number and address would result in a word selected in the associative array. The position of the word (0-63) would yield the high-order 6 bits of a memory address. Because of a rather loose cycle time, this was accomplished on the 360/40 with no degradation of the overall memory cycle.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

cp/67 addenda (cross-post warning)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: cp/67 addenda (cross-post warning)
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 26 Feb 2002 23:20:36 GMT
prune@ZAnkh-Morpork.mv.com (Paul Winalski) writes:
Several of the original programmers for CP/67 and CMS were still at the IBM Cambridge Scientific Center when I worked there part time while in grad school back ca. 1978. They told me that the main reason for developing CP/67 was as a platform for developing performance tools. CSC had several projects to measure OS/360 performance. These very often involved instrumenting the OS code. As you point out, there was a shortage of 360/50 hardware at the time and CSC couldn't afford to dedicate harware stand-alone to debugging the instrumentation code. CP/67 allowed many developers to share a single 360/40 (later 360/67) and crash their virtual machines to their heart's content, without impacting anyone else's work.

I had done a lot of OS/360 ... as well as cp/67 performance work as an undergraduate.

I had started doing "hand built" os/360 stage-II sysgens with careful disk location placement of data as well as other optimizations ... that got about a three times thruput speed-up in various workloads at the university.

After CP/67 was installed at the university in jan. '68 ... i also did a lot of cp/67 rewrite ... the initial version of the goal-oriented scheduler with fair-share policy support, redo of the page replacement algorithm, the initial idea of wsclock workingset-like algorithm, fastpath invention, and a bunch of other pathlength reductions ... for both virtual os/360 as well as cms online intensive environments.

part of a presentation that I made at the fall '68 SHARE meeting on both MFT14 enhancements as well as CP/67 enhancements:
https://www.garlic.com/~lynn/94.html#18

Between the above presentation and the time I graduated and joined CSC, I had also done extensive additional pathlength-reduction rewrites of critical components, as well as an early version of the "paging" portions of the CP kernel (many of the changes were released as part of the standard CP/67 product, but the kernel paging changes didn't get out until vm/370).

I had also done cp/67 TTY/ascii support (which ibm released) with a peculiar feature: one-byte arithmetic for calculating incoming line length ... which worked until somebody modified the code to support devices that could have 400-500 byte "input" (a minimal sketch of the failure mode follows the refs below) ... see cp/67 story at:
http://www.multicians.org/thvv/
also
https://www.garlic.com/~lynn/2002b.html#62 TOPS-10 logins ...
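
a minimal illustration (not the actual CP/67 code) of how the one-byte line-length arithmetic breaks down once a device can deliver a 400-500 byte input line:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  len8;               /* one-byte arithmetic            */
    unsigned actual = 480;       /* e.g. a long ascii device input */

    len8 = (uint8_t)actual;      /* wraps: 480 mod 256 = 224       */
    printf("actual %u bytes, one-byte length says %u\n", actual, len8);
    return 0;
}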

Doing tty/ascii support, I found a design feature of the 360 telecommunication controller that in part led to a project at the university to build the first non-ibm controller (and we got blamed for originating the 360 pcm controller business):
https://www.garlic.com/~lynn/submain.html#360pcm

A lot of the performance stuff was:

1) extensive data gathering of cp/67 performance operation ... typically every five minutes every possible system and process specific counter was dumped to tape (and there were a lot of counters). CSC had archived that data since the time CP/67 first booted, so by the mid-70s there was nearly ten years' worth of production performance monitoring data across all the system and hardware changes ... as well as the switch-over to vm/370 running on 370.

2) csc had done the port of apl\360 to cms\apl (opening up the workspace restriction from 32kbytes to 16mbytes, as well as adding functions to do system calls so it was possible to do things like read/write external files). There were then a lot of performance modeling and analysis programs written in APL, which were used to analyse the extensive performance history information. A lot of this led to the early work in the paradigm change from performance tuning to capacity planning. The performance predictor, an APL-based model, was then also made available on HONE as a sales tool (i.e. sales people could characterize their current hardware configuration and workload, and then ask what-if questions about changes to configuration and/or workload). This wasn't so much the result of doing extensive monitoring of an operating system in a virtual machine as of the extensive performance history of the production CP/67 and VM/370 operation.
https://www.garlic.com/~lynn/subtopic.html#hone

3) In the mid-70s, I got to do the "resource manager"
https://www.garlic.com/~lynn/2001e.html#45 VM/370 Resource Manager
https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#12 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#13 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc

as part of that, we combined some of the stuff done in #2 with some automated benchmarking stuff, workload profiling, and artificial workload generators that we had developed, and set out to validate the resource manager operation across a wide range of configurations and workloads. Something over 2000 benchmarks were eventually run, taking three months elapsed time. Basically a configuration "envelope" and workload "envelopes" were defined. The first 1000 benchmarks or so were somewhat manually defined configurations and workloads that were pretty uniformly distributed across the envelopes, with some specific "outliers" ... aka workloads five to ten times heavier than seen in normal operation. After the first thousand or so benchmarks ... we started using an APL model to analyse the benchmarks done up to that point and to define new benchmark workloads &/or configurations that it thought should be run (in part attempting to look for discontinuities or other anomalous conditions); a toy sketch of the idea follows.
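
a toy sketch of the "define the next benchmark" idea (in C rather than APL, and with made-up numbers): after some benchmarks have been run along one axis of the envelope, propose the next one where the measured results jump the most between adjacent points, i.e. hunt for discontinuities.

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* illustrative, made-up results: (number of users, avg response sec) */
    double users[]    = { 10, 20, 40, 60, 80 };
    double response[] = { 0.3, 0.4, 0.6, 2.5, 3.0 };
    int n = 5, best = 0;
    double biggest = 0.0;

    for (int i = 0; i + 1 < n; i++) {
        double jump = fabs(response[i + 1] - response[i]);
        if (jump > biggest) { biggest = jump; best = i; }
    }
    printf("next benchmark: about %.0f users (response jumped %.1fs between %g and %g users)\n",
           (users[best] + users[best + 1]) / 2.0, biggest,
           users[best], users[best + 1]);
    return 0;
}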

4) The VS/repack product grew out of a CSC research project to see if it was possible to improve application virtual memory operation. Basically it started as a full instruction trace that was typically run in a cp/67 virtual machine (basically all i-fetches, storage-fetches and storage-stores). One of the first things it was used for was in the apl\360 to cms\apl port. apl\360 had a small real-storage workspace orientation, with a storage allocation algorithm that always assigned the next available storage location (starting low) for every assign statement (even ones involving previously assigned variables). Eventually the algorithm would reach the end of "memory" and then do garbage collection, moving/compacting all allocated storage to the lowest memory addresses. This wasn't bad in a 32kbyte real-storage workspace operation ... but it was guaranteed to touch every virtual page in a 16mbyte virtual address space ... whether it needed to or not. This garbage collection scheme was completely rewritten to be virtual memory friendly ... and vs/repack was used to do detailed analysis of the changes. The "official" vs/repack product was used to do detailed traces of example application execution; it was then provided a "module" map of the individual modules making up the application. vs/repack would then do various kinds of cluster analysis to come up with an optimum module order/packing for application execution in a virtual memory environment (aka minimum avg working set, minimum page faults for a given amount of storage, etc). a toy sketch of the clustering idea follows the refs below. random vs/repack refs:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
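
a toy sketch (C, made-up trace; not the actual vs/repack algorithm) of the clustering idea: count how often modules are referenced close together in a trace, then greedily order modules so strongly co-referenced ones end up adjacent and tend to share virtual pages.

#include <stdio.h>

#define NMOD 6

int main(void)
{
    /* illustrative, made-up trace of module references */
    int trace[] = { 0, 2, 0, 2, 1, 4, 1, 4, 3, 5, 3, 5, 0, 2 };
    int nt = sizeof trace / sizeof trace[0];
    int affinity[NMOD][NMOD] = {0};
    int placed[NMOD] = {0}, order[NMOD];

    for (int i = 0; i + 1 < nt; i++) {            /* adjacent references */
        affinity[trace[i]][trace[i + 1]]++;
        affinity[trace[i + 1]][trace[i]]++;
    }

    order[0] = trace[0];                          /* seed with first module seen */
    placed[order[0]] = 1;
    for (int k = 1; k < NMOD; k++) {              /* greedy: pick highest affinity */
        int prev = order[k - 1], best = -1;
        for (int m = 0; m < NMOD; m++)
            if (!placed[m] && (best < 0 || affinity[prev][m] > affinity[prev][best]))
                best = m;
        order[k] = best;
        placed[best] = 1;
    }

    printf("suggested module order:");
    for (int k = 0; k < NMOD; k++) printf(" %d", order[k]);
    printf("\n");
    return 0;
}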

in support of the vs/repack operation, i developed a modification to the standard kernel virtual memory support. This allowed a user to set an artificial limit on the number of simultaneously "valid" virtual pages. The application would start at zero & page fault up to the limit. When the limit was reached, the virtual page numbers of the valid pages were dumped to an analysis file, all the pages were invalidated (but left in memory), and the application was restarted. It would then restart the page fault sequence. It turned out that the sequence of virtual page number sets, when the "max" number of valid pages was reasonably chosen, provided effectively the same quality of information to the vs/repack process (as a full instruction trace) at significantly reduced overhead. We used this kernel mod extensively, but it wasn't made available to customers as part of the vs/repack product (they had to rely on the full instruction trace method). a toy sketch of the idea follows.
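
a toy sketch (C, made-up reference string; not the real kernel mod) of the valid-page-limit sampling described above: cap the number of simultaneously valid pages, and each time the cap is hit dump the current set of valid page numbers as one sample, invalidate everything, and keep going.

#include <stdio.h>
#include <string.h>

#define NPAGES 16
#define LIMIT  4

int main(void)
{
    /* illustrative, made-up virtual page reference string */
    int refs[] = { 1, 2, 3, 1, 4, 5, 2, 6, 7, 1, 2, 8 };
    int nr = sizeof refs / sizeof refs[0];
    int valid[NPAGES] = {0}, nvalid = 0;

    for (int i = 0; i < nr; i++) {
        int p = refs[i];
        if (valid[p]) continue;                   /* already valid: no fault */
        if (nvalid == LIMIT) {                    /* cap hit: dump one sample */
            printf("sample:");
            for (int j = 0; j < NPAGES; j++)
                if (valid[j]) printf(" %d", j);
            printf("\n");
            memset(valid, 0, sizeof valid);       /* invalidate everything */
            nvalid = 0;
        }
        valid[p] = 1;                             /* fault the page in */
        nvalid++;
    }
    return 0;
}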

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

cp/67 addenda (cross-post warning)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: cp/67 addenda (cross-post warning)
Newsgroups: comp.arch,alt.folklore.computers
Date: Tue, 26 Feb 2002 23:35:20 GMT
Anne & Lynn Wheeler writes:
4) The VS/repack product grew out of a CSC research project to see if it was possible to improve application virtual memory

the full instruction trace version ... gave storage-use resolution in 32-byte increments and distinguished between program and data as well as between loads & stores. there was also a straight paging model where you could ask what-if questions about things like the effects of different page sizes ... even separately for program & data. The page-number "window" version could only address what-if questions about page sizes larger than the 4k size that the data was gathered at (since the data was gathered with respect to sets of 4k-byte virtual page numbers); a toy sketch of that limitation follows.
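
a toy sketch (C, made-up sample) of that limitation: from a recorded set of 4k virtual page numbers you can still count how many 8k, 16k, ... pages the sample would touch (by coalescing upward), but nothing smaller than 4k can be asked about.

#include <stdio.h>

int main(void)
{
    /* illustrative sample of referenced 4K virtual page numbers */
    int sample[] = { 10, 11, 12, 40, 41, 100 };
    int n = sizeof sample / sizeof sample[0];

    for (int factor = 1; factor <= 4; factor *= 2) {   /* 4K, 8K, 16K pages */
        int count = 0;
        for (int i = 0; i < n; i++) {
            int big = sample[i] / factor;              /* larger-page number */
            int seen = 0;
            for (int j = 0; j < i; j++)
                if (sample[j] / factor == big) { seen = 1; break; }
            if (!seen) count++;
        }
        printf("%2dK pages touched: %d\n", 4 * factor, count);
    }
    return 0;
}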

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Moving big, heavy computers (was Re: Younger recruits versus experienced ve

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Moving big, heavy computers (was Re: Younger recruits versus experienced ve
Newsgroups: alt.folklore.computers
Date: Wed, 27 Feb 2002 21:03:58 GMT
"Charlie Gibbs" writes:
The Univac 8414 (equivalent to the IBM 2314 but individual drives in freestanding cabinets) had this sort of power sequencing (although they'd only take 5 or 10 seconds to reach 80% speed). It was fun to walk down a string of drives punching all of the "on" buttons, then stand back and watch them come up in turn. But you had to do this starting at unit 0; if you started at the other end the sequencing didn't take effect and you'd pop a breaker.

the standard 360 processor power-on button on the front panel would sequence everything (channels, control units, drives/devices, etc).

There were times on the weekend when things were powered off for various reasons and you went to do power-on and the sequence wouldn't complete. Rather than call in field maintenance, you would first go around and put everything you could into CE mode and then hit the front panel power-on button ... which would typically bring the processor up.

Then you would go around to each individual unit, hit its power-on button, and then take that unit out of CE mode and go on to the next unit (until everything had been manually sequenced). For some installations this could be dozens of units (although the probability of getting dedicated weekend time on a really large configuration diminished toward zero).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Swapper was Re: History of Login Names

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Swapper was Re: History of Login Names
Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers
Date: Wed, 27 Feb 2002 21:36:29 GMT
J Ahlstrom writes:
Why did all y'all hate swapper?

there are different kinds of swappers. the original term applied to implementations that would do total application roll-out/roll-in to/from contiguous sections of real storage. there were then some partial enhancements that used paging hardware to eliminate the requirement for contiguous sections of real storage (but that still consisted of complete application roll-out/roll-in).

some demand-paging systems had various kinds of optimized block page out/in. many of them weren't referred to as swappers because of the earlier definition/use of the term. The block page out/in was coupled into the standard demand paging in various ways ... aka for entities that weren't members of a standard block page in/out set.

block page out/in implementations may or may not have also implemented contiguous allocation for all members of a specific page group/set.

There are a couple of issues:

w/o contiguous allocation,

1) a block page out/in still reduces latency and

2) it also tends to throw a group of pages at the paging device driver, which can then organize them for optimal device operation (as opposed to treating the requests as random, sequential, one at a time).

contiguous allocation

can further improve block page out/in I/O efficiency over non-contiguous operation.

========================================================

"big pages" was an attempt to maximize both. For page out operation, clusters of pages were grouped in full track units, where members of a track cluster tended to be pages in use together (not contiguous or sequential) ... somewhat related to members of working set. A suspended process could have all of its resident pages re-arranged into multiple track clusters and all queued simultaneously for write operation. When a task was re-activated, fetch requests for some subset of the pages previously paged out was queued (instead of waiting for individual demand page faults). Subsequent demand page faults would not only fetch the specific page, but all pages in the same track cluster.

A tricky part was that when real storage was fully committed and a demand page fault occurred, there was a trade-off decision between attempting to build a single "big page" (track cluster) on the fly, or selecting individual pages for page-out. If individual pages are selected for replacement, then there become two classes of pages on secondary storage, singlet pages and track-cluster pages (which potentially also need different allocation strategies).

Other optimization issues:

simultaneous scheduling of write I/O for all track clusters on task suspension or placing them on a pending queue and only performing the writes as required

dynamic allocation of disk location of a track cluster at the moment of the write operation to the first available place closest to the current disk arm location

simultaneous scheduling of read I/O for all track clusters on task activation, only demand page fault members of track clusters (a demand page fault of any member of a track cluster is the equivalent of a demand page fault for all members of the same track cluster), or some hybrid of the two.
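
a toy sketch (C; the ten-pages-per-track figure is just illustrative, roughly a 3380-class track of 4k pages) of the track-cluster grouping: on suspension, the task's resident pages are gathered into clusters of up to one track's worth so each cluster can be written, and later fetched, with a single full-track I/O. The real grouping tried to keep pages that were in use together in the same cluster; here they are simply taken in order.

#include <stdio.h>

#define PAGES_PER_TRACK 10

int main(void)
{
    /* illustrative set of the suspended task's resident page numbers */
    int resident[] = { 3, 17, 4, 42, 8, 9, 11, 30, 31, 32, 33, 50 };
    int n = sizeof resident / sizeof resident[0];

    for (int i = 0; i < n; i += PAGES_PER_TRACK) {
        printf("track cluster %d:", i / PAGES_PER_TRACK);
        for (int j = i; j < n && j < i + PAGES_PER_TRACK; j++)
            printf(" %d", resident[j]);
        printf("   -> queue one full-track write\n");
    }
    return 0;
}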

misc. big page refs:
https://www.garlic.com/~lynn/2001k.html#60 Defrag in linux? - Newbie question
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002c.html#29 Page size (was: VAX, M68K complex instructions)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Swapper was Re: History of Login Names

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Swapper was Re: History of Login Names
Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers
Date: Thu, 28 Feb 2002 02:20:51 GMT
J Ahlstrom writes:
In a paper early in 1971 in Journal of the ACM Aho and Denning proved that demand paging was more effective than any other VM strategy, as long as the cost of loading n pages 1 at a time was <= to the cost of loading n pages n at a time. They forgot that with contemporary hardware (and most hardware since then) it was almost always cheaper to load n at a time rather than 1 at a time.

I had sort of invented my own concept of working set and page replacement algorithm when I was an undergraduate in '68 and implemented it in the cp/67 kernel (at about the same time as denning's '68 acm paper on working sets).

Later, at the Cambridge Science Center, I made several additional enhancements.

In the early '70s, the grenoble science center took essentially the same cp/67 kernel and implemented a "straight" working-set algorithm ... very close to the '68 acm paper. grenoble published an acm paper on their effort (cacm16, apr73). The grenoble & cambridge machines, workload mixes, and configurations were similar, except:

the grenoble 67 was a 1mbyte machine (154 4k pageable pages after fixed kernel requirements)

the cambridge 67 was a 768k machine (104 4k pageable pages after fixed kernel requirements)

the grenoble had 30-35 users

cambridge had a similar workload mix but twice the number of users, 70-75 (except there was probably somewhat more cms\apl use on the cambridge machine ... making the avg. of the various kinds of transaction/workload types somewhat more processor intensive).

both machines provided subsecond response for the 90th percentile of trivial interactive transactions ... however, cambridge response was slightly better than the grenoble response (even with twice the users).

The differences (early '70s; a sketch of the "clock" approach follows the table):


                     grenoble                  cambridge
machine              360/67                    360/67
# users              30-35                     70-75
real storage         1mbyte                    768k
pageable pages       154 4k                    104 4k
replacement          local LRU                 "clock" global LRU
thrashing control    working-set               dynamic adaptive
priority             cpu aging                 dynamic adaptive
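
for reference, a minimal textbook-style sketch (C; not the cp/67 code) of the "clock" global LRU approximation named in the table: a hand sweeps the real page frames, clearing reference bits as it goes, and the first frame found with its reference bit already clear becomes the replacement victim.

#include <stdio.h>

#define NFRAMES 8

static int refbit[NFRAMES];   /* set by "hardware" when a frame is touched */
static int hand;

static int select_victim(void)
{
    for (;;) {
        if (refbit[hand] == 0) {          /* not referenced since last sweep */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        refbit[hand] = 0;                 /* give it one more trip around */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    refbit[0] = refbit[1] = refbit[3] = 1;           /* recently touched frames */
    printf("victim frame: %d\n", select_victim());   /* picks frame 2 */
    return 0;
}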

misc. refs
L. Belady, A Study of Replacement Algorithms for a Virtual Storage Computer, IBM Systems Journal, v5n2, 1966

L. Belady, The IBM History of Memory Management Technology, IBM Journal of R&D, v25n5

R. Carr, Virtual Memory Management, Stanford University, STAN-CS-81-873 (1981)

R. Carr and J. Hennessy, WSClock, A Simple and Effective Algorithm for Virtual Memory Management, ACM SIGOPS, v15n5, 1981

P. Denning, Working sets past and present, IEEE Trans Softw Eng, SE6, jan80

J. Rodriguez-Rosell, The design, implementation, and evaluation of a working set dispatcher, cacm16, apr73


also with respect to vs/repack and program restructuring mentioned in a related recent thread in these newsgroups (parts of this technology were also used in conjunction with some other modeling work to look at page size issues):
D. Hatfield & J. Gerald, Program Restructuring for Virtual Memory, IBM Systems Journal, v10n3, 1971

random refs:
https://www.garlic.com/~lynn/93.html#0 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/93.html#7 HELP: Algorithm for Working Sets (Virtual Memory)
https://www.garlic.com/~lynn/94.html#01 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#4 Schedulers
https://www.garlic.com/~lynn/94.html#14 lru, clock, random & dynamic adaptive ... addenda
https://www.garlic.com/~lynn/94.html#49 Rethinking Virtual Memory
https://www.garlic.com/~lynn/96.html#0a Cache
https://www.garlic.com/~lynn/96.html#0b Hypothetical performance question
https://www.garlic.com/~lynn/98.html#54 qn on virtual page replacement
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/99.html#104 Fixed Head Drive (Was: Re:Power distribution (Was: Re: A primeval C compiler)
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000f.html#34 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2000f.html#36 Optimal replacement Algorithm
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2001h.html#26 TECO Critique
https://www.garlic.com/~lynn/2001l.html#6 mainframe question
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Swapper was Re: History of Login Names

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Swapper was Re: History of Login Names
Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers
Date: Thu, 28 Feb 2002 06:29:27 GMT
"John Keiser" writes:
SWAPPER and the controls it offered allowed me to put together an interesting MCP tutorial on thrashing for a CUBE back in the early 80's. SWAPPER had settings for both the maximum real memory allowed for an individual subspace and the amount of real memory a subspace would start with. By setting both to the same value I was able to eliminate problems that were associated with deciding when to increase a subspace size ( we were never very good at that ). By varying that value and rerunning the same program many times on an otherwise idle machine, I was able to demonstrate exactly what performance could be expected for various ratios of virtual memory to real memory on Burroughs large systems using the ASN memory model.

I did much the same thing when I was an undergraduate in '68 rewriting much of the cp/67 dispatching and paging subsystem.

In cp/67 it was possible to fix/pin/lock virtual memory pages in real storage. I used the lock command to lock specific virtual pages of an idle process .... leaving a specific amount of pageable, unlocked pages available for other tasks. I then ran a large number of different tasks.

I included a simple example of that in a presentation I made at the SHARE user group meeting in fall '68.

I also used the technique when evaluating different paging techniques that I was developing, as well as modifications/improvements to the code executing in the virtual address space.

much of the presentation was previously posted:
https://www.garlic.com/~lynn/94.html#18


MODIFIED CP/67

OS run with one other user. The other user was not active, was just
available to control amount of core used by OS. The following table
gives core available to OS, execution time and execution time ratio
for the 25 FORTG compiles.

CORE (pages)    OS with HASP            OS w/o HASP

104             1.35 (435 sec)
 94             1.37 (445 sec)
 74             1.38 (450 sec)          1.49 (480 sec)
 64             1.89 (610 sec)          1.49 (480 sec)
 54             2.32 (750 sec)          1.81 (585 sec)
 44             4.53 (1450 sec)         1.96 (630 sec)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

cp/67 addenda (cross-post warning)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: cp/67 addenda (cross-post warning)
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 28 Feb 2002 17:09:35 GMT
jcmorris@mitre.org (Joe Morris) writes:
Random thought: were you using, or were you the source for some of the data-set-within-disk and member-within-PDS layout recommendations that appeared from time to time as performance-improving tips in the Installation Newsletter that customers had to beg their SE to give them?

i had started out doing the stage-ii sysgen break-up so i could run it as part of the normal production workload (w/o needing dedicated weekend time with the stand-alone starter system).

Since I was putting job cards on all the exec steps, i started re-ordering the steps to help optimize disk position (i.e. you couldn't specify disk position, but since there was sequential allocation ... you could affect the allocation by the execution ordering).

then i re-arranged the iehmove/iebcopy statements for the most obvious 100 or so members of sys1.linklib & sys1.svclib. When I started this, stage-ii was about a box (2000) of physical cards, which would be sent off to be "interpreted" ... i.e. read the holes and print the characters. A standard keypunch would print one character per column at the top of the card (80 punched cols., one line of 80 chars). The interpreter (an IBM machine; I can't remember the model) printed about one character per two columns, resulting in two lines of text printed at the top of the card (first 40 chars on the top line). After I got CP/67 in jan68, I would run stage-1 under cp/67 and "punch" the stage-ii deck to a CMS virtual machine and use the CMS editor to munge around with the virtual cards.

after my first SHARE presentation on the results of production-system sysgens, somebody got me a little thing from pok that traced every load and produced a report giving counts of things loaded. i used that to further refine the ordering (that application was possibly what was used for articles in installation newsletters, although without specific customer workload traces, such recommendations would tend to be very generic).

I had also complained about not being able to put the vtoc in the middle of the pack and radiate allocation out from the center. In release 15/16 (normally you got nice sequential release numbers every six months or so ... except for things like release 9.5, aka release 9 was so bad that it needed an immediate fix, and release 15/16, aka release 15 was so bad/late that it was combined with 16) you got the option to specify the vtoc cylinder location as part of the device format (you still needed stand-alone time to format the 2314 pack for the new system generation).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Swapper was Re: History of Login Names

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Swapper was Re: History of Login Names
Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers
Date: Thu, 28 Feb 2002 17:40:18 GMT
"John Keiser" writes:
SWAPPER and the controls it offered allowed me to put together an interesting MCP tutorial on thrashing for a CUBE back in the early 80's. SWAPPER had settings for both the maximum real memory allowed for an individual subspace and the amount of real memory a subspace would start with. By setting both to the same value I was able to eliminate problems that were associated with deciding when to increase a subspace size ( we were never very good at that ). By varying that value and rerunning the same program many times on an otherwise idle machine, I was able to demonstrate exactly what performance could be expected for various ratios of virtual memory to real memory on Burroughs large systems using the ASN memory model.

early TSS/360 was even worse than this. When an interactive task was re-activated, TSS/360 would copy its page set from 2311 disk to the 2301 fixed-head "drum". When that was done, it would start executing the interactive task. When the interactive task was suspended, all of its pages would be copied from the 2301 back to the 2311.

when i was rewriting much of the cp/67 code in the '60s, I believed that everything needed to be dynamic adaptive, and that you didn't do anything that you didn't need to do. With things like process suspension (like an interactive task waiting for an event, or a scheduling decision driven by contention for real memory), if the dynamic adaptive code indicated high enough real storage contention, the suspension code would gather all the task's pages and queue them for (effectively) block page-out ... but possibly wouldn't actually start the I/O (unless real storage contention was at an even higher level, since there would likely be some probability of reclaim). If the dynamic adaptive code indicated a much lower level of real storage contention, it would do even less ... so there was a very high probability of re-use/reclaim.

In the very late 70s (probably '79), somebody from the MVS group (who had just gotten a big award) contacted me about the advisability of changing VM/370 to correspond to his change to MVS; which was: at task suspension, don't actually write out all the pages ... queue them instead, on the possibility that the writes wouldn't actually have to be done because the pages could be reclaimed. My reply was that I never could figure out why anybody would do it any other way ... and that was the way I had always done it since I first became involved w/computers as an undergraduate.
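
a toy sketch (C, illustrative structures; not any actual kernel's data structures) of the deferred page-out / reclaim idea: at suspension, pages are only queued for possible write-out; if the task touches a queued page before the frame is actually stolen, the page is reclaimed and the write never happens.

#include <stdio.h>

#define NPAGES 6

enum state { VALID, QUEUED, ON_DISK };
static enum state page[NPAGES];

static void suspend_task(void)
{
    for (int p = 0; p < NPAGES; p++)
        if (page[p] == VALID)
            page[p] = QUEUED;          /* no write yet, frame kept in memory */
}

static void touch(int p)
{
    if (page[p] == QUEUED) {
        page[p] = VALID;               /* reclaim: cancel the pending write */
        printf("page %d reclaimed, write avoided\n", p);
    } else if (page[p] == ON_DISK) {
        page[p] = VALID;
        printf("page %d: real page-in I/O needed\n", p);
    }
}

int main(void)
{
    for (int p = 0; p < NPAGES; p++) page[p] = VALID;
    suspend_task();
    page[4] = ON_DISK;                 /* storage contention actually stole frame 4 */
    touch(2);                          /* task resumes before write: reclaimed      */
    touch(4);                          /* this one was really pushed out: I/O       */
    return 0;
}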

This was not too long after another MVS gaffe was fixed. When POK was first working on putting virtual memory support into OS/370 (aos2, svs, vs2), several of us from cambridge got to go to POK and talk to various groups. One of the groups was the POK performance modeling people who were modeling page replacement algorithms. One of the things that their (micro) model had uncovered was that if you select non-changed pages for replacement before changed pages (because changed pages first had to be written out, while for non-changed pages there was some chance of re-using the copy already on disk and so avoiding the write), you did less work. I argued that page replacement algorithms were primarily based on approximating LRU-type methodology ... and choosing non-changed before changed pages violated any reasonable approximation of the algorithm. In any case, VS2/SVS was shipped with that implementation. Well into the MVS cycle, somebody discovered that the page replacement algorithm was choosing shared, high-use, resident, linklib program pages before simple application data pages for replacement (even tho the simple application data pages had much lower use than the high-use shared resident linklib a.k.a. system pages).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than It Can Chew?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VAX, M68K complex instructions (was Re: Did Intel Bite Off More Than   It Can Chew?)
Newsgroups: comp.arch
Date: Thu, 28 Feb 2002 18:11:21 GMT
hack@watson.ibm.com (hack) writes:
(1) When there are different kinds of memory. Early examples are DOS extenders. Also in the early 80s timeframe, the experimental 37T which had only 900K of main memory, but also 16M of Multibus memory. Finally, the S/370-XA through S/390 "Expanded Store". The last two did not support instruction execution out of the extra memory, but synchronous paging was an excellent way to exploit them. On S/390 it was one way of breaking the 2G barrier on Main Store; with 64-bit support (z/Architecture) all installed memory is now of one kind. ... ... Synchronous paging (low latency and high bandwidth) is nice to have. It is usually available through an I/O interface that looks like DASD (disk or drum) but without rotational or seek delays -- but that doesn't get the latency down to the level of direct memory access.

it was also possible to implement memory at some distance that had longer latency ... but a much wider bus ... basically a kind of 3-level store with software managing the bottom layer (Kingston implemented HiPPI attachment to IBM mainframe off this bus in the '80s since it was the only thing available that supported the transfer rate).

the synchronous paging paradigm was better than an asynchronous electronic-disk model ... since a lot of operating system gorp could be bypassed (a sketch of the difference follows below).

the third characteristic was that if you were having trouble building an integrated page management & replacement algorithm that dealt effectively with very large memories ... this paradigm forced the designers into a layered approach, which could result in a better overall resource management solution than attempting to design & deliver a single integrated solution (this is somewhat akin to the use of LPARs today with respect to management of mainframe processor resources).
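
For the synchronous-paging point above, a rough sketch (plain C, all names invented; "expanded" stands in for any directly addressable but non-executable memory such as expanded store or the Multibus memory mentioned in the quote): the synchronous move is effectively one wide copy done inline in the fault path, while the asynchronous electronic-disk model wraps the same data move in request building, task suspension, and interrupt handling ... which is the operating system gorp being bypassed.

/* Two ways to resolve a fault on a page currently in the slower tier.
 * Sketch only; these are not real mainframe kernel interfaces. */

#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096

struct io_request {
    void     *frame;         /* real-memory frame to fill */
    uint64_t  device_slot;   /* slot address on the paging device */
    void     *waiting_task;  /* task to make dispatchable at completion */
};

/* stubs standing in for the real kernel machinery */
static void build_channel_program(struct io_request *req) { (void)req; }
static void start_paging_io(struct io_request *req)       { (void)req; }
static void suspend_task(void *task)                       { (void)task; }

/* Synchronous path: effectively one wide copy, done inline under the
 * fault handler.  No I/O request, no interrupt, no redispatch. */
void page_in_synchronous(void *main_frame, const void *expanded_frame)
{
    memcpy(main_frame, expanded_frame, PAGE_SIZE);   /* stands in for a PGIN-style move */
}

/* Asynchronous path: the electronic-disk model.  The same data move is
 * wrapped in queueing, suspension and interrupt handling, so latency is
 * dominated by the software path length rather than the data transfer. */
void page_in_asynchronous(struct io_request *req)
{
    build_channel_program(req);       /* translate into device commands */
    start_paging_io(req);             /* kick off the I/O */
    suspend_task(req->waiting_task);  /* dispatch someone else meanwhile */
    /* ... later, the I/O interrupt handler makes the task dispatchable again */
}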

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Swapper was Re: History of Login Names

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Swapper was Re: History of Login Names
Newsgroups: comp.sys.unisys,Comp.arch,alt.folklore.computers
Date: Thu, 28 Feb 2002 20:54:57 GMT
Anne & Lynn Wheeler writes:
In the very late '70s (probably '79), somebody from the MVS group (who had just gotten a big award) contacted me about the advisability of changing VM/370 to correspond to his change to MVS; which was, at task suspension, to not actually write out all the pages ... but to queue them, on the possibility that the writes wouldn't have to be done at all because the pages could be reclaimed. My reply was that I never could figure out why anybody would do it any other way ... and that was the way I had always done it, since I first became involved with computers as an undergraduate.

it seemed like the vast majority of operating system, control program, and kernel state of the art in the '60s, '70s, and even into the '80s assumed the system would reach some relatively static steady state (an issue also observed with tcp slow-start in the late '80s).

I had reached a working hypothesis that the people typically responsible for kernel programming spent almost all of their time dealing with binary yes/no, on/off, true/false situations, resulting in a fairly entrenched mindset. dynamic adaptive was a significant paradigm shift, one more characteristic of the OR crowd implementing fortran and apl models.

To dynamically adapt programming style ... even within a span of a couple machine instructions ... didn't seem to be a common occurrence. In fact, some number of people complained that they couldn't understand how some of the resource manager was able to work: there would be a sequence of a few machine instructions flowing along a traditional kernel programming paradigm dealing with true/false states ... and then all of a sudden the machine-instruction programming paradigm completely changed. In some cases I had replaced a couple thousand instructions implementing n-way state comparisons with some values that were calculated someplace else, a sorted queue insert, some simple value compares, and possibly a FIFO or LIFO pop off the top of a queue (although I do confess to having also periodically rewritten common threads through the kernel ... not only significantly reducing the aggregate pathlength but also sometimes making certain effects automagically occur as a side-effect of the order in which other things were done, my joke about doing zero-pathlength implementations).
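
A toy sketch (plain C, all names invented; not the actual resource-manager code) of the style shift being described: instead of a cascade of special-case state tests on the dispatch path, a single value is computed someplace else from measured consumption versus entitlement, and the fast path reduces to a sorted queue insert plus popping the head of the queue.

/* Toy contrast with "n-way state comparison" dispatching: the policy
 * lives in one computed number, not in the fast-path code. */

#include <stddef.h>

struct task {
    struct task *next;
    double       deadline;   /* computed elsewhere from consumption vs. fair share */
};

static struct task *run_queue;   /* kept sorted by ascending deadline */

/* The whole dispatch-order policy collapses to an ordered insert ... */
void make_dispatchable(struct task *t)
{
    struct task **pp = &run_queue;
    while (*pp != NULL && (*pp)->deadline <= t->deadline)
        pp = &(*pp)->next;
    t->next = *pp;
    *pp = t;
}

/* ... and a pop off the head of the queue. */
struct task *select_next(void)
{
    struct task *t = run_queue;
    if (t != NULL)
        run_queue = t->next;
    return t;
}

/* The "value calculated someplace else": periodically fold measured
 * consumption and the task's entitled share into a single number, so
 * the fast path above never needs per-case tests.
 * Assumes entitled_share > 0. */
void recompute_deadline(struct task *t, double now,
                        double recent_consumption, double entitled_share)
{
    /* heavier recent consumption relative to entitlement pushes the task
     * further into the future; light users rise toward the queue head */
    t->deadline = now + recent_consumption / entitled_share;
}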

boyd, performance envelopes, and ability to rapidly adapt:
https://www.garlic.com/~lynn/subboyd.html#boyd

recent dynamic adaptive related thread (check for the "feed-back" joke now 25 years old):

https://www.garlic.com/~lynn/2002c.html#11 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#12 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#13 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#16 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#42 Beginning of the end for SNA?

scheduler posts
https://www.garlic.com/~lynn/subtopic.html#fairshare

virtual memory posts
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
