List of Archived Posts

2004 Newsgroup Postings (12/05 - 12/31)

Single User: Password or Certificate
Systems software versus applications software definitions
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
XML Data Model
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Announce] The Vintage Computer Forum
[Lit.] Buffer overruns
PR/SM Dynamic Time Slice calculation
Tru64 and the DECSYSTEM 20
Systems software versus applications software definitions
Tru64 and the DECSYSTEM 20
Tru64 and the DECSYSTEM 20
1GB Tables as Classes, or Tables as Types, and all that
Tru64 and the DECSYSTEM 20
Question on internal/external IPs
[Lit.] Buffer overruns
1GB Tables as Classes, or Tables as Types, and all that
[Lit.] Buffer overruns
Two Fedora Core 2 problems
High Level Assembler for MVS & VM & VSE
Integer types for 128-bit addressing
Amusing acronym
Listserv for TCPIP
[Lit.] Buffer overruns
[Lit.] Buffer overruns
CAS and LL/SC
A Glimpse into PC Development Philosophy
CAS and LL/SC
CAS and LL/SC
Tru64 and the DECSYSTEM 20
How many layers does TCP/IP architecture really have ?
browser without "padlock" secure?
[Lit.] Buffer overruns
How many layers does TCP/IP architecture really have ?
C v. Ada
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
creat
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
FC3 sound
creat
CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
high speed network, cross-over from sci.crypt
CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
Will multicore CPUs have identical cores?
will there every be another commerically signficant new ISA?
will there every be another commerically signficant new ISA?
creat
Will multicore CPUs have identical cores?
Will multicore CPUs have identical cores?
Will multicore CPUs have identical cores?
Integer types for 128-bit addressing
Will multicore CPUs have identical cores?
Organizations with two or more Managers
CAS and LL/SC
will there every be another commerically signficant new ISA?
IUCV in VM/CMS
Athlon cache question
[Lit.] Buffer overruns
[Lit.] Buffer overruns
Athlon cache question
Athlon cache question
[Lit.] Buffer overruns
Athlon cache question
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
[Lit.] Buffer overruns
The TransRelational Model: Performance Concerns
Organizations with two or more Managers

Single User: Password or Certificate

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Single User: Password or Certificate
Newsgroups: comp.security.ssh
Date: Sun, 05 Dec 2004 13:35:10 -0700
re:
https://www.garlic.com/~lynn/2004p.html#60 Single User: Password or Certificate

for a little more topic drift, i recently saw a demo of the lexus smartkey; basically, if you have it in your pocket ... it lets you open a (locked) door and turn the ignition. it is as much a form of

something you have

authentication ... as a regular key. one issue is that it is a lot larger (and integrated into the remote unlock token). i have no idea if it is easier or harder to counterfeit than a regular key.

basically, passwords have been a form of

something you know

authentication and also a shared-secret
https://www.garlic.com/~lynn/subintegrity.html#secret

where every unique security domain has requirements for unique shared-secrets for every unique environment (avoiding cross-security-domain contamination ... like between your local garage ISP and your online banking or employee access).

the issue with real hardware tokens & shared-secrets is that the shared-secret concept tends to perpetuate a unique hardware token per environment.

I once went around a smartcard show and commented to people in the booths that if the current smartcard approach (institutional-centric with unique cards to every individual by every institution) ever became successful ... it might be a return to the mid-80s copy-protection scheme of requiring a unique floppy disk inserted in the floppy drive for every application (the prospect of being faced with scores of cards).

the public/private key approach with hardware tokens has a number of aspects:

  1. the same public key can be registered in different security domains ... and it is not possible for individuals in one security domain, knowing your public key, to impersonate you in a different security domain. contrast this with a common form of identity theft that relies on the same (shared) secrets being used in lots of different places.

  2. successful uptake of the institutional-centric token paradigm could lead to individuals requiring scores of institution-specific tokens

a hardware token with a public/private key implementation (which is totally orthogonal to the issue of whether the paradigm uses certificates or totally certificate-less operation) can mean:

  1. the hardware token represents something you have authentication

  2. a relying party having some certified evidence that the hardware token requires a PIN to operate can assume that there has been "something you know" authentication (without the relying party needing to know what the actual PIN is ... so it can be secret based w/o having to be shared-secret based)

  3. registration of the public key and being able to validate the digital signature ... can imply that there has been something you have as well as something you know authentication (as long as the relying party has certified evidence as to the operational characteristics of the hardware token).

with respect to SSH and public key authentication ... the same protocol can work whether

• the private key is software-based (and the relying party can only assume non-shared-secret, but secret pin-based, something you know authentication by validating a digital signature) or

• the private key is hardware-token based (two-factor authentication).

the public/private key protocol part of ssh can address exploit issues like those related to shared-secret vis-a-vis non-shared-secret operation as well as eavesdropping and replay attacks. what the protocol doesn't tell the relying party is what the validation of the digital signature with the public key actually implies with regard to 3-factor authentication issues.

the trust and assurance that a relying party can place in the validation of a digital signature ... require that the relying party have some knowledge of the environment that originated the digital signature.

just validating a digital signature with a public key, by itself, doesn't establish whether it represents one-factor, two-factor or possibly even three-factor authentication and/or which of the factors
something you have
something you know
something you are


might be involved.

so i'm wondering if i can have a single individual-centric smartkey someday ... where the same smartkey will let me open all possible doors that i might need to open (vehicle, home, office), use my pc, perform electronic commerce transactions, etc, ... and possibly even ssh?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Systems software versus applications software definitions

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Systems software versus applications software definitions
Newsgroups: comp.software-eng,comp.lang.c,comp.programming,alt.os.development,comp.arch,alt.folklore.computers
Date: Sun, 05 Dec 2004 13:57:41 -0700
ref:
https://www.garlic.com/~lynn/2004p.html#63
https://www.garlic.com/~lynn/2004p.html#64

another dimension of assurance is this stuff:
http://www.software.org/quagmire/

something like 2167a can increase straight-forward application development costs by a factor of ten ... and frequently this sort of stuff can't be retrofitted after the fact (it has to be done up front before coding ever starts)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Tue, 07 Dec 2004 20:32:10 -0700
"Douglas A. Gwyn" writes:
Programmer oversights are possible in any PL. While bounds enforcement would prevent *some* attacks, it cannot stop them all. For example, the program might use a fixed-length array to implement a ring buffer, and there could be a bug that surfaces only when the buffer is full, which might not happen in normal use.

There is an additional danger, in that some programmers would become even more careless if they believe that the PL will catch all such mistakes.


the claim is that there is something like a hundred fold increase in buffer-overflows because of the semantics of the string library in C ... compared to other infrastructures. That doesn't mean that it is impossible to make length related errors in other infrastructures ... it is just that the frequency is significantly lower.

as of approx. 1999, the majority of programming exploits were C-related buffer overflows.

as of approx. two years ago ... the exploits were something like
1/3rd c-related buffer overflows
1/3rd automatic scripting
1/3rd social engineering


(not so much that the c-related buffer overflows declined ... but that the other exploits increased significantly).

there is the multics security review paper .... which claims that multics had no known cases of length related exploits. part of this is the different length related semantics in its implementation language PLI; PLI implementations typically had buffers with max & current lengths in a header field. copy/move/io library routines honored the explicit lengths. It wasn't impossible to write bad code with length-related problems ... but you had to work much, much harder at doing it (than is typical in c).

this isn't a situation of PLI catching mistakes ... it is that c library semantics provide more opportunities to make mistakes compared to other languages where the semantics make it much less likely to make mistakes.
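
to make the difference concrete, here is a minimal sketch of the header-length convention written in C itself (my illustration, not from the multics paper ... hbuf, hbuf_new, and hbuf_copy are made-up names):

#include <stdlib.h>
#include <string.h>

/* hypothetical sketch of the PLI-style convention: buffers carry
   explicit max & current lengths, and the copy routine honors them,
   truncating rather than overflowing */
typedef struct {
    size_t max;      /* maximum buffer length */
    size_t len;      /* current contents length */
    char   data[];   /* the actual bytes (c99 flexible array member) */
} hbuf;

hbuf *hbuf_new(size_t max)
{
    hbuf *b = malloc(sizeof *b + max);
    if (b) { b->max = max; b->len = 0; }
    return b;
}

/* copy src into dst, never writing past dst->max;
   returns the number of bytes that did NOT fit (0 on full copy) */
size_t hbuf_copy(hbuf *dst, const hbuf *src)
{
    size_t n = src->len < dst->max ? src->len : dst->max;
    memcpy(dst->data, src->data, n);
    dst->len = n;
    return src->len - n;
}

the point isn't that this is hard to write in c ... it is that the pli library did it for you, everywhere, by default.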

past reference to the multics review
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation

lots of past posts discussing buffer overflows
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 10:46:06 -0700
"Douglas A. Gwyn" writes:
Anybody can make a nonsensical claim; that doesn't make it true. The string library isn't even used in buffer management.

there can be several totally different issues lumped together here ... buffer overruns/overflows typically have to do with length management and moving things into (or out of) buffers. various string libraries do move things into buffers.

another kind of buffer overrun not mentioned (as frequently) is incoming characters from some hardware device ... where the rate of the incoming characters exceeds the capacity of the system to allocate space for them. this buffer overrun/overflow situation usually results in dropped data ... as opposed to a move/copy of data past the end of an allocated buffer. this kind of buffer overrun strays into the area of windowing algorithms and rate-based pacing.
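
for illustration (a hypothetical sketch, not from any of the systems mentioned), this kind of overrun drops data rather than scribbling past the end of storage:

#include <stddef.h>

#define RB_SIZE 256

/* fixed-capacity ring buffer for incoming device characters; when
   the consumer falls behind, arriving bytes are counted and DROPPED
   ... the overrun never writes past the storage */
typedef struct {
    unsigned char data[RB_SIZE];
    size_t head, tail, count;   /* explicit count avoids the
                                   full-vs-empty head==tail ambiguity */
    size_t dropped;             /* bytes lost to overrun */
} ring;

void rb_put(ring *r, unsigned char c)
{
    if (r->count == RB_SIZE) {  /* receiver too slow: overrun */
        r->dropped++;
        return;
    }
    r->data[r->head] = c;
    r->head = (r->head + 1) % RB_SIZE;
    r->count++;
}

int rb_get(ring *r, unsigned char *c)
{
    if (r->count == 0)
        return 0;               /* nothing pending */
    *c = r->data[r->tail];
    r->tail = (r->tail + 1) % RB_SIZE;
    r->count--;
    return 1;
}

(keeping an explicit count is also one way around the buffer-full corner case mentioned earlier in the thread.)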

another buffer management problem can be allocation/deallocation of the buffers. this is frequently an infrastructure serialization problem ... with things like dangling pointers still being in use after a dynamic buffer has been de-allocated (or a serialization process trying to play it safe and creating zombie-process type problems ... trying to make sure a process doesn't go away, since there might be some orphan activity left around which wakes up in the future and crashes the kernel).

long ago and far away when i was doing kernel stuff ... i got to release the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

as part of that, i developed some sophisticated testing and benchmarking tools. besides using the benchmarking to validate extremely fine-grain deterministic scheduling for the fair share scheduler ... i also used it for severe stress testing ... which, when i started, was guaranteed to crash the kernel. before the release of the resource manager ... i redesigned and rewrote the kernel serialization infrastructure, eliminating all known cases of kernel crashes because of dangling/orphan buffer pointers as well as all cases of zombie/hung processes.

I then went on to do an automated kernel problem/crash analysis and determination tool ... which at one time was used by all corporate PSRs responsible for analysis of customer kernel problems.
https://www.garlic.com/~lynn/submain.html#dumprx
as part of doing this tool ... i gathered extensive data on all customer reported problems.

About the same time, i also got involved with the disk engineering lab ... responsible for developing new disks ... at the time, they were operating with stand-alone computers ... because attempting to run an operating system with engineering disks had an MTBF of 15 minutes. I redesigned and rewrote the io subsystem so that disk engineering could concurrently operate multiple engineering disks in an operating system environment w/o system crashes.
https://www.garlic.com/~lynn/subtopic.html#disk

so when my wife and I got around to starting the HA/CMP project ... we did a detailed vulnerability analysis of the environment ....
https://www.garlic.com/~lynn/subtopic.html#hacmp

one of the conclusions was that there would be a hundred fold increase in the incidence of buffer length related problems and exploits ... compared to what we had been familiar with in other environments (because of the common length handling paradigm in C).

minor topic drift post related to ha/cmp, parallel oracle
https://www.garlic.com/~lynn/95.html#13
and the relationship to ssl and electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and recent thread on what is necessary for industrial and business strength programming and applications
https://www.garlic.com/~lynn/2004p.html#20 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#24 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#63 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2004q.html#1 Systems software versus applications software definitions

in any case, our resulting experience was that there was, in fact, something like a hundred fold increase in buffer length related problems (compared to other environments and paradigms we were familiar with ... based on having looked in detail at customer and other reported operating system failures over the years and analyzed their causes) ... previous reference to collected postings on buffer length problems
https://www.garlic.com/~lynn/subintegrity.html#overflow

one specific posting from 1999, referencing a published buffer overflow study
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug

I also have done some analysis of the cve vulnerability & exploit database ... some summary of the analysis
https://www.garlic.com/~lynn/2004j.html#58

from prior posting
https://www.garlic.com/~lynn/2004q.html#2 [Lit.] Buffer overruns

last year, i was on a panel discussion with somebody from one of the anti-virus companies and somebody heading up fbi cyber forensics ... he presented the 1/3rd, 1/3rd, 1/3rd statistics. you can actually see our picture buried some place on this page:
http://www.w3w3.com/CSSB.htm

and a mention of the air force security audit and evaluation of multics (from section 2.3, No Buffer Overflows) finding that there were no buffer overflows.

random past references to the air force multics security evaluation:
https://www.garlic.com/~lynn/aadsm14.htm#32 An attack on paypal
https://www.garlic.com/~lynn/aadsm15.htm#23 NCipher Takes Hardware Security To Network Level
https://www.garlic.com/~lynn/aadsm16.htm#1 FAQ: e-Signatures and Payments
https://www.garlic.com/~lynn/aadsm16.htm#8 example: secure computing kernel needed
https://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from the Multics Security Evaluation
https://www.garlic.com/~lynn/2002m.html#8 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#10 Backdoor in AES ?
https://www.garlic.com/~lynn/2002m.html#58 The next big things that weren't
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2002p.html#6 unix permissions
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003i.html#59 grey-haired assembler programmers (Ritchie's C)
https://www.garlic.com/~lynn/2003j.html#4 A Dark Day
https://www.garlic.com/~lynn/2003k.html#3 Ping: Anne & Lynn Wheeler
https://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
https://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
https://www.garlic.com/~lynn/2003m.html#1 Password / access rights check
https://www.garlic.com/~lynn/2003o.html#5 perfomance vs. key size
https://www.garlic.com/~lynn/2004b.html#51 Using Old OS for Security
https://www.garlic.com/~lynn/2004f.html#20 Why does Windows allow Worms?
https://www.garlic.com/~lynn/2004h.html#2 Adventure game (was:PL/? History (was Hercules))
https://www.garlic.com/~lynn/2004j.html#29 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004j.html#41 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#21 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2004m.html#25 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004q.html#2 [Lit.] Buffer overruns

some random topic drift regarding the other kind of buffer overrun/overflow having to do with pacing algorithms:
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2002.html#38 Buffer overflow
https://www.garlic.com/~lynn/2002i.html#45 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002i.html#57 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002k.html#56 Moore law
https://www.garlic.com/~lynn/2002p.html#28 Western Union data communications?
https://www.garlic.com/~lynn/2002p.html#31 Western Union data communications?
https://www.garlic.com/~lynn/2003b.html#44 filesystem structure, was tape format (long post)
https://www.garlic.com/~lynn/2003g.html#54 Rewrite TCP/IP
https://www.garlic.com/~lynn/2003g.html#64 UT200 (CDC RJE) Software for TOPS-10?
https://www.garlic.com/~lynn/2003.html#55 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003.html#59 Cluster and I/O Interconnect: Infiniband, PCI-Express, Gibat
https://www.garlic.com/~lynn/2003j.html#1 FAST - Shame On You Caltech!!!
https://www.garlic.com/~lynn/2003j.html#19 tcp time out for idle sessions
https://www.garlic.com/~lynn/2003j.html#46 Fast TCP
https://www.garlic.com/~lynn/2003p.html#15 packetloss bad for sliding window protocol ?
https://www.garlic.com/~lynn/2004f.html#37 Why doesn't Infiniband supports RDMA multicast
https://www.garlic.com/~lynn/2004k.html#8 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#12 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#13 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#16 FAST TCP makes dialup faster than broadband?
https://www.garlic.com/~lynn/2004k.html#29 CDC STAR-100
https://www.garlic.com/~lynn/2004n.html#35 Shipwrecks
https://www.garlic.com/~lynn/2004o.html#62 360 longevity, was RISCs too close to hardware?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 10:52:26 -0700
"Arnaud Carré" writes:
yep :-) That's why sometimes when someone says "the C language is as secure as any other", you can't really know if the guy is a poor C coder with a bad opinion, or a C power user who knows it's possible to write solid code in C (as in assembly language), but that you have to spend a lot of effort, which is not realistic in real life for real projects. That's why there are other languages such as ADA, etc... I just pointed out that it's theoretically "possible" to get the same level of security with ANY language, even assembly.

it isn't just the high/low level of the language that helps/aids a person in making mistakes, but also how the length paradigm/semantics are implemented.

there are a number of operating system examples that use the same buffer length conventions as mentioned for PLI ... i.e. buffers have headers with max/current lengths and strings have headers with current lengths ... and the various string libraries that manipulate buffers and strings ... honor the header lengths.

the combination of NULL-terminated strings w/o explicit lengths, and string library implementations that frequently assume implied lengths or that the programmer must know what he is doing ... results in the majority of buffer overrun/overflow problems.
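
the canonical mistake looks something like this (hypothetical fragment, not from any particular exploit):

#include <string.h>

void greet(const char *input)     /* input arrives from the outside world */
{
    char name[16];
    strcpy(name, input);          /* no length anywhere: if input exceeds
                                     15 characters plus the NUL, this walks
                                     right past the end of name[] */
    /* ... */
}

nothing in the destination records its capacity, and nothing in the library call consults one ... under the header convention, the capacity travels with the buffer and the copy routine can't help but see it.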

In the past, I dealt extensively with assembler kernel code that used the same buffer & string conventions as mentioned for PLI ... and that environment had a far lower incidence of buffer overflow/overrun problems than the C-language environment.

misc. past buffer related posts
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 13:59:21 -0700
"Douglas A. Gwyn" writes:
Anybody can make a nonsensical claim; that doesn't make it true. The string library isn't even used in buffer management.

I apologize for the misunderstanding in the original post
https://www.garlic.com/~lynn/2004q.html#2 [Lit.] Buffer overruns

buffer overruns/overflows with respect to discarding characters when things are arriving too fast ... typically involving characters arriving faster than the software is prepared to handle

buffer overruns/overflows involving buffer length management issues and common exploits/vulnerabilities ... frequently associated with the c programming language environment

dangling/orphan pointers involving buffer allocation management issues and frequently system failures .... or hung/zombie processes associated with overzealous attempts to avoid dangling/orphan pointers:
https://www.garlic.com/~lynn/2004q.html#3 [Lit.] Buffer overruns

I guess i was hoping that the context of the post would make it possible to distinguish buffer management as being buffer length management as opposed to buffer allocation management.

the issue in the original post and mentioned in the subsequent post
https://www.garlic.com/~lynn/2004q.html#4 [Lit.] Buffer overruns

was default c programming conventions compared to some other environments. all the PLI language implementations that i'm aware of have explicit headers with max & current lengths ... and all the library routines honor and maintain these header fields consistently. in an environment where buffers don't carry their own explicit lengths, the administrative-like tasks associated with buffer length management get pushed onto the programmer. for infrastructures where the length is managed as part of the infrastructure ... it is one less mistake for the programmer to make.

note that infrastructures that maintain such explicit length paradigms are not limited to the PLI language environment. There are some number of system infrastructures where the default buffer length management is with explicit headers ... and all the library routines tend to conform to the system convention ... regardless of the language ... even low-level assembler and machine languages. For these environments, when dropping below the library level ... the recommended coding conventions also explicitly specify managing the buffer header length fields (if nothing else, to maintain compatibility with the rest of the environment). again, while it is possible to make coding mistakes in such environments ... length mistakes are significantly less common (compared to typical c language coding).

there is one other genre not previously mentioned ... the apl/lisp type environment where the (language&operational) environment manages both the allocation and the lengths.

long ago and far away, apl\360 had real small workspaces (16k-32k bytes) in real memory. On every assignment, new storage was allocated from unallocated storage (and the previous allocation was not reused). when all unallocated storage ran out, garbage collection was run to reclaim storage not currently in use by assigned variables. the amount of actual storage touched was proportional to the number of assignments as well as the aggregate size of all variables (where it was possible for the number of assignments to dominate the actual aggregate size of all variables).

when the science center ported apl\360 to cms for cms\apl ...
https://www.garlic.com/~lynn/subtopic.html#545tech

it moved it into a (relatively) large virtual memory environment (1mbyte to 16mbytes ... typically running on 512kbyte to 1mbyte real machines). the original apl\360 buffer allocation strategy tended to touch all available virtual memory pages ... which could cause severe virtual memory paging behavior (even for relatively small programs that otherwise did a large number of assignments).
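
a toy sketch of the allocation strategy (my illustration ... not the actual apl\360 code):

#include <stddef.h>

#define WS_SIZE (1 << 20)        /* hypothetical 1mbyte virtual workspace */

static char   workspace[WS_SIZE];
static size_t top;               /* next free byte */

static size_t compact_live_variables(void)
{
    /* a real collector would copy live variable values down to the
       bottom of the workspace; this stub just resets for illustration */
    return 0;
}

/* every assignment takes fresh storage from the top of the workspace;
   the old copy of the variable is abandoned in place, so repeated
   assignment touches every page of the workspace before any reuse ...
   harmless in small real memory, brutal under demand paging */
void *assign(size_t nbytes)
{
    if (top + nbytes > WS_SIZE)
        top = compact_live_variables();
    void *p = &workspace[top];
    top += nbytes;
    return p;
}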

random past postings, some including apl references:
https://www.garlic.com/~lynn/subtopic.html#hone

so i have an example of a buffer length coding error. Besides inventing fair share scheduling as an undergraduate (and getting it deployed in commercial products), i had also done tty/ascii terminal support which was also shipped in a commercial operating system.

Here is a tale from somebody that modified the code to support a non-standard tty/ascii device ... about the middle of the page, referencing the system crashing 27 times in a single day.
http://www.multicians.org/thvv/360-67.html

The way i remember what happened: since tty/ascii terminal hardware was limited, i had used one-byte arithmetic to calculate the size of incoming data (and all max lengths were well under 255). Somebody at the MIT urban lab(?) had changed their system to support a non-standard tty/ascii terminal located some place over at harvard ... which involved changing the max. allowed length to something like 1200. Since the base implementation calculation used one-byte arithmetic (0..255) and was not changed ... the length calculations got messed up.
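
the arithmetic failure is easy to reproduce (hypothetical reconstruction of the mistake, not the original code):

#include <stdio.h>

int main(void)
{
    unsigned char len;       /* one-byte arithmetic, as in the original support */
    int incoming = 1200;     /* the new non-standard terminal's max length */

    len = (unsigned char) incoming;   /* wraps modulo 256 */
    printf("%d bytes arrive, one-byte length computes as %u\n",
           incoming, len);            /* prints 176 */
    return 0;
}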

random other references to the event ...
https://www.garlic.com/~lynn/99.html#44 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#53 Internet and/or ARPANET?
https://www.garlic.com/~lynn/99.html#207 Life-Advancing Work of Timothy Berners-Lee
https://www.garlic.com/~lynn/2000.html#30 Computer of the century
https://www.garlic.com/~lynn/2000b.html#77 write rings
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000g.html#2 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#4 virtualizable 360, was TSS ancient history
https://www.garlic.com/~lynn/2001c.html#36 How Commercial-Off-The-Shelf Systems make society vulnerable
https://www.garlic.com/~lynn/2001f.html#78 HMC . . . does anyone out there like it ?
https://www.garlic.com/~lynn/2001g.html#52 Compaq kills Alpha
https://www.garlic.com/~lynn/2001i.html#32 IBM OS Timeline?
https://www.garlic.com/~lynn/2002f.html#38 Playing Cards was Re: looking for information on the IBM
https://www.garlic.com/~lynn/2002i.html#5 DCAS [Was: Re: 'atomic' memops?]
https://www.garlic.com/~lynn/2002l.html#56 10 choices that were critical to the Net's success
https://www.garlic.com/~lynn/2003.html#73 Card Columns
https://www.garlic.com/~lynn/2003g.html#5 Any DEC 340 Display System Doco ?
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2003p.html#23 1960s images of IBM 360 mainframes
https://www.garlic.com/~lynn/2004j.html#47 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004k.html#43 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#18 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004m.html#26 Shipwrecks

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

XML Data Model

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: XML Data Model
Newsgroups: comp.databases.theory
Date: Wed, 08 Dec 2004 15:20:08 -0700
Tom Hester writes:
But, it seems to me that all of this is beside the point. XML is an interface language, not a data language. There is no data model for XML because it does not describe data (facts about the real world). Rather it describes how to pass text between processes.

as previously mentioned it started out as GML ... generalized markup language ...
https://www.garlic.com/~lynn/submain.html#sgml

and the initials GML are actually from the three people at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

involved in inventing it in 1969 (Goldfarb, Mosher, and Lorie). They took "G", "M", and "L" ... and had to come up with an expansion other than the people's last names.

it was part of an infrastructure for document formatting .... and the common usage at the time was for "markup languages" to refer to rules for formatting documents.

however, relatively early in the '70s ... GML tags started taking on the characteristics of attribute tags as opposed to markup tags (with some indirection, where attribute tags were then given markup rules ... as opposed to giving markup rules directly to the contents of a document).

so a typical attribute tag was ":address." (the original gml tag format; the transition to the <address> angle-bracket form came later, with the ISO standardization of SGML).

The issue was that the original 1969 invention started out as a formatting markup language ... but by the early '70s, the tags were in common use as information descriptors ... independent of the formatting of the information.

... so something like 4th floor, 545 technology sq, cambridge mass is then marked up like

:address.4th floor, 545 Tech. Sq, Cambridge, Mass,

and the semantics becomes

"4th floor, 545 Tech. Sq, Cambrdige, Mass" IsA "address".

So the analogy in a typical RDBMS is possibly the data dictionary giving the field/column characteristics.

The ML-genre allows for a hierarchy of constructs ... so there can be a large file where the whole thing might be a <document> and there are individual fields that are subsections of <document> ... like <address> ... where there is a relationship between a thing that is a <document> and a characteristic of it like <address>.

The RDBMS analogy could be considered a single level hierarchy where there is a primary field that has relationships to other fields in the same table.

Lets say you have a RDBMS "document" column as the primary field/key ... in RDBMS ... the contents of the field is some identifier that might be used to distinguish a specific document from some other document. In the ML paradigm ... what follows the field <document> ... is the actually document ... as opposed to an identifier for selecting the document (which you might find in a RDBMS paradigm). In both ML and RDBMS ... the contents of "address" tends to be the actual address.

So one might claim that in ML ... the contents of the thing marked by the tags are the actual things (i.e. the actual document, the actual address, etc). In RDBMS ... the fields might be something that represents the actual thing (i.e. a document sequence number that might be used to find the document someplace) or it might be the actual thing (like an address).

So ... lets take an XML document that starts with a tag <document> and in the hierarchy, it might have other tags <address>, <document serial number>, etc ... all as sub-items in a document hierarchy.

Map that to RDBMS ... you could have one large table ... with the primary field being the <document serial number>, and an <address> field and a (very large) <document contents> field.
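
for illustration (hypothetical serial number and values ... my sketch, not from the original thread), the two shapes side by side:

<document>
  <docserial>89-1234</docserial>
  <address>4th floor, 545 Tech. Sq, Cambridge, Mass</address>
  ... the actual document contents ...
</document>

docserial | address                                  | contents
----------+------------------------------------------+--------------------
89-1234   | 4th floor, 545 Tech. Sq, Cambridge, Mass | (very large field)

in the ML form the document contents sit at the top of the hierarchy; in the table the serial number is the key and the contents are just another (very large) field.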

One could characterize such a RDBMS table as having a one-level hierarchy ... with a primary field (document serial number) and all the other fields related to the primary field.

In the ML world, the top of the hierarchy would be the actual document (contents) and all the other fields would be related to (or are attributes of) the actual contents.

So one might claim that in a RDBMS world ... the document serial number is the unique thing ... with everything else as attributes of (or having relationship to) the document serial number. In the ML world ... the document could be considered the unique thing ... and everything else (including the document serial number) are attributes/characteristics of the document (lower down in the hierarchy).

some of the confusion is that the same document might contain both markup tags and attribute tags ... aka "<br>" is a formatting/markup tag ... while "<address>" is a data schema tag. So ... a "<br>" embedded in a document isn't likely to be considered part of the data schema of a document ... while "<address>" may in fact be considered part of the document data schema.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 15:32:51 -0700
"karl malbrain" writes:
But you are responsible for your own "C-language environment" and your own discipline. It's very easy in C to add an object to a character pointer that enables "buffer management" for non-constant character arrays.

the issue isn't about what somebody might be capable of doing in the C language ... any more than what somebody might be capable of doing in assembler language. In theory, both C and assembler have enuf low-level constructs to implement almost any paradigm.

the comment was that the standard C-language environment doesn't (as a default) carry explicit length fields for all strings and buffers ... and that the standard c-language library routines don't (by default) check a maximum length in the header of a buffer target ... making sure not to move more data than is specified in the buffer header (and therefore not overflow the buffer).

The issue was that all of the PLI language implementations (that i'm aware of) implemented buffers with length headers, and the library routines all made sure that they didn't violate the buffer lengths (aka an explicit attribute of a buffer ... carried in the header of the buffer ... is the maximum length of that buffer). While it is not impossible to violate the buffer length ... it is much harder to accidentally violate buffer lengths compared to the standard C-language environment.

Furthermore, there are a number of systems where the default system infrastructure (regardless of the language used) has a paradigm that implements buffer lengths in header fields ... and all languages and library routines that exist in that system environment tend to have coding conventions that consistently use and maintain such buffer header length fields. Again, it isn't impossible to write assembler code in such environments that violates buffer lengths ... but since the default system coding conventions and operations observe the buffer header length fields ... it tends to be a significantly lower frequency mistake than occurs in most typical c-language environments (where it isn't common to find all coding conventions and all library routines that involve buffers ... implementing, maintaining, and consistently observing buffer lengths specified in buffer header fields).

past pieces of this thread
https://www.garlic.com/~lynn/2004q.html#2 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#3 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#4 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#5 [Lit.] Buffer overruns

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 16:07:27 -0700
i've heard various stories about unix's choice of null termination (say, compared to multics pli and the buffer headers with lengths that were common at the time).

one was that a minimum string header tended to be two bytes (two-byte fixed length) or possibly four bytes (variable length buffer: two-byte max length, two-byte current contents length). Having null termination saved one byte (compared to a two-byte length header on a fixed string) and saved three bytes compared to keeping track of every buffer's maximum length (aka do nothing, push it up to the programmer, and hope he does it correctly).

this is the sort of thing from the period of saving every byte possible in a constrained real storage and resource limited environment. This type of approach also contributed heavily to the y2k problem ... where years were only implemented as two digit numbers (there was actually a scenario in the past with one digit years, where problems showed up on decade roll-over ... e.g. rolling from the 60s to the 70s).

The other scenario is that the addresses/pointers in a string processing loop become a little more expensive. With null-termination ... you pick up the start of the string and keep processing bytes until you find a null character (and only need the pointer to the current character). In the length-header scenario ... you have to have both the current character address and the last character address (and the loop compares whether it has moved past the last character address). It can also be done with a current character pointer and a counter of remaining characters to process (in either case, the generated machine language tends to require an additional register).

The startup tends to be slightly more expensive ... say when copying or appending data. If a string library routine is appending data to a buffer ... it has to pick up the source length from the source string header, the length of the current destination buffer contents (from the destination buffer header), and the destination buffer maximum length (also from the destination buffer header). The append library routine then has to have api semantics giving either the number of characters actually appended ... or the inverse ... the number of characters that it was unable to append.

If the api semantics is purely defined as returning characters not copied/appended ... then the calling code has to

  1. specify the origin buffer (the origin string length is an attribute of the origin buffer, kept in the origin buffer header),

  2. specify the destination buffer (the current string length in the destination buffer is an attribute of the destination buffer, kept in the destination buffer header; the maximum length of the destination buffer is likewise an attribute kept in the destination buffer header).

  3. call the library append routine,

  4. check for non-zero return (which would indicate some characters not copied/appended).

So, if i'm using a standard C-programming library routine to append one string to another ... what is the fail-safe programming required?
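
for comparison, here is roughly what that fail-safe append looks like with only the standard C conventions (a sketch ... safe_append is my made-up name, not a standard routine): every length has to be supplied or recomputed by the caller, because none of them is an attribute of the buffer.

#include <stdio.h>
#include <string.h>

/* append src to dst; dstmax is dst's total capacity, which the
   PROGRAMMER must supply correctly on every call ... and dst must
   already be properly NUL-terminated within dstmax, another
   invariant nothing enforces. returns characters that didn't fit. */
size_t safe_append(char *dst, size_t dstmax, const char *src)
{
    size_t used = strlen(dst);               /* current length: recomputed */
    size_t want = strlen(src);

    if (used + 1 >= dstmax)                  /* already full (or corrupt) */
        return want;

    size_t room = dstmax - used - 1;         /* leave space for the NUL */
    size_t n = want < room ? want : room;
    memcpy(dst + used, src, n);
    dst[used + n] = '\0';
    return want - n;
}

int main(void)
{
    char buf[16] = "hello";
    size_t left = safe_append(buf, sizeof buf, ", world and more");
    printf("\"%s\" (%zu characters didn't fit)\n", buf, left);
    return 0;
}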

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 16:21:46 -0700
"karl malbrain" writes:
typedef struct { int len, max; char *array; } String;

that wasn't the issue ... in theory, i could be faced with the same scenario in many common assembler language implementations. however, many of the standard system infrastructures actually implemented buffer headers as part of the default system infrastructure ... regardless of the language used in that infrastructure.

the tendency at the time of multics pli ... and some number of other operating systems of the era ... was more like a 16bit max and a 16bit length followed by the actual data (predating unix and c). You got a pointer to the actual data ... and could back up two bytes to get the length of the current data ... or back up two more bytes and get the maximum length of the buffer.

There were some fixed length constant strings that were only used as source (and never destination) ... and so you only needed the actual length (the implementation didn't have to waste space on the maximum buffer length ... because for a constant string the current and maximum lengths were known to be the same).

Variable length strings/buffers had a four byte header. You got a pointer to the buffer/string ... and could back up two bytes and get the length of the current string, or back up two more bytes and get the maximum length of the buffer.

the four byte headers were frequently the default infrastructure implementation for allocated and variable length buffers (two byte maximum length followed by two byte current length). You had to go to a different type to get larger lengths.

lots of standard libraries and infrastructures have supported this paradigm before either unix or c were created.
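
a sketch of that layout in C (hypothetical, mine ... vbuf_new and the macros are made-up names):

#include <stdint.h>
#include <stdlib.h>

/* 16-bit max length and 16-bit current length immediately precede
   the data; the caller holds a pointer to the data itself and can
   "back up" to reach the lengths, as described above */
char *vbuf_new(uint16_t max)
{
    uint16_t *h = malloc(2 * sizeof(uint16_t) + max);
    if (!h) return NULL;
    h[0] = max;                  /* maximum buffer length   */
    h[1] = 0;                    /* current contents length */
    return (char *)(h + 2);      /* caller sees only the data */
}

#define VBUF_CUR(p) (((uint16_t *)(void *)(p))[-1])  /* back up 2 bytes */
#define VBUF_MAX(p) (((uint16_t *)(void *)(p))[-2])  /* back up 2 more  */

so a library routine handed just the data pointer can check VBUF_MAX(p) before moving anything.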

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 16:39:01 -0700
"karl malbrain" writes:
Guess what? You get to implement your own library in C. I'm responsible for a 130K lines of code Windows/unix application engine and we don't even link with libc. The standard c library is of no interest to us.

so that gets back to my comment about having done detailed vulnerability and exploit investigation when we started ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

and predicted that the standard environment would have something like a two orders of magnitude increase in buffer related problems compared to what we were used to.

part of ha/cmp was writing a core of code to manage assurance, availability, and fall-over ... and to also provide a high performance distributed lock manager. An objective was to be able to run on a standard platform and offer fall-over services to a variety of applications, including off-the-shelf applications that might run on such platforms. As a result we didn't have control over all the code that might run on the machine ... either at the point in time when the code was originally shipped ... or possibly 10-15 years later when the customer might install any arbitrary application in the environment.

a big part of the detailed vulnerability and exploit investigation was to identify possible failure-modes over which we had little or no control. it identified things that we had to tightly control in our own code implementation ... but also identified vulnerability/exploit possibilities where there was going to be little, if any, control.

for instance in this scenario ...
https://www.garlic.com/~lynn/95.html#13

we weren't going to be able to control every line of database code.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 08 Dec 2004 16:46:59 -0700
"karl malbrain" writes:
I think you're reading way too much into the idea of "standard library."

is this "standard library" ... as in common use by the large portion of people writing C code ... or possibly other hypothetical "standard library" ... like in the pli/multics scnario ... and some other system infrastructures that predate unix & c?

in the pli/multics scenario ... if all of the kernel is implemented in pli, if all of the kernel uses the pli header-string convention for both internal kernel constructs as well as constructs that cross the kernel/application api boundary, if the standard kernel libraries all support/assume the standard kernel construct, and if all the system libraries supplied for applications support/assume the same standard header construct .... then this is one form of standard library.

another form of standard library ... is whatever is in default use by the largest portion of programmers in the c language environment ... and possibly a main source of reported buffer length exploits and vulnerabilities.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: 09 Dec 2004 00:48:55 -0700
"Douglas A. Gwyn" writes:
There is also an important point that this whole line of discussion keeps missing, namely: if the programmer's assumptions are violated at run time, something *unplanned* is going to happen, which is bad from the security perspective. That is as true with boundary-enforced buffer mechanisms as it is for the sloppy UCB undergraduate hacks that so many systems "borrowed" for their IP suite. At the very least, you have a DoS vulnerability, but it could be a lot worse -- since the program will execute some "error" code that the programmer did not mean to be executed. Imagine a medical control device or an automotive or flight control device that traps to a stack-trace abort when a boundary is violated.

my statements weren't intended to reflect theoretical conditions ... they were intended to reflect that there has been a significant difference in the actually observed occurrences of buffer related vulnerabilities/exploits between implementations done with standard C language and implementations done using a buffer header paradigm.

the assertion is that programmers make significantly fewer buffer length abuses/mistakes in environments where there are explicit buffer length headers ... compared to the frequency of buffer length abuses/mistakes using standard C language environments.

It is more than a theoretical mistake/abuse/vulnerability/exploit. It is like getting 1000 fatalities per million miles driven in one specific kind of vehicle ... and 10 fatalities per million miles driven in another specific kind of vehicle ... it might be worth considering changing vehicles. This is despite somebody observing that it is possible for either vehicle to still go off a mountain road and kill everybody.

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: 09 Dec 2004 00:14:33 -0700
"Douglas A. Gwyn" writes:
The Y2K problems were entirely unrelated to whether terminated strings were used. In fact with self-delimiting strings there is less inclination for the programmer to allocate a fixed-width field.

I never intended to imply that Y2K issues were related in any way to null terminated strings. The example I heard was that null terminated strings conserved some bytes compared to the explicit header length paradigm ... and that Y2k issues arose from similar efforts at conserving bytes.

Using a null terminated string might save 1-3 bytes compared to an explicit length implementation .... using two (or one) digits for the year could conserve 2(-3) bytes compared to an implementation using 4 digits for years.

In the 60s and at least the early 70s, there was a lot more effort to do implementations that conserved bytes ... potentially at the expense of something else.
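
the year version of the trade-off, in miniature (hypothetical fragment):

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* years stored as two characters to save bytes ... ordering
       works within a century and then breaks at the roll-over */
    const char *y1999 = "99", *y2000 = "00";

    if (strcmp(y2000, y1999) < 0)
        printf("two-digit compare sorts 2000 before 1999\n");
    return 0;
}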

the original statement from
https://www.garlic.com/~lynn/2004q.html#8

wasn't intended to claim that all things might use null terminated strings ... but that the null terminated string was (possibly) justified because it used less storage (conserving possibly 1-3 bytes compared to an explicit buffer header ... where the length of the buffer is carried as an explicit attribute of the buffer) ... and that many of the Y2k problems arose from efforts in the same era also attempting to conserve real storage.

the statements had more to do with trade-offs made in one era conserving/optimizing some resource ... which could have significant later repercussions. I would claim that a lot of the efforts in the 60s to use two-byte year fields (rather than 4-byte) were done to conserve storage ... and that optimization led to many of the y2k problems.

I would by analogy argue that the performance/conservation trade-off of using null terminated strings (based on the trade-off choice that they used less storage than explicit length headers) contributes to the significant difference in the number of buffer length vulnerabilities found in standard c language coding ... compared to the number/frequency of buffer length vulnerabilities found in infrastructures that utilize buffer headers with explicit lengths as a standard coding convention.

Again, the statement isn't about what the fail-safe way to write buffer length handling in c language code would be ... but whether, comparing two default coding conventions ... the pervasive C coding conventions versus, say, PLI coding conventions (with explicit length attributes carried with buffers) ... the difference between the explicit buffer length convention and the null-termination convention could account for most of the significant difference in the number of buffer length problems.

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: 08 Dec 2004 23:54:06 -0700
"Douglas A. Gwyn" writes:
That is what I (and Karl M in an associated thread) was referring to, and you're wrong about it. Buffer length can be managed perfectly well using C, and if it isn't, blame the programmer for not doing his job. It isn't C's job to impose assumptions about your application upon you; if you try to use it to do harmful things, then harmful things will occur, much like using a sharp knife (which expert knife users naturally much prefer over safety-enforced knives).

which is my question from
https://www.garlic.com/~lynn/2004q.html#8

given either copy from source to destination buffer ... or append source to destination buffer ... the buffer header paradigm has the programmer specifying

  1. the source string/buffer ... where the source length is an attribute of the source string/buffer in the header field

  2. the destination buffer ... where the destination buffer currently occupied length (in the case of append) is an attribute of the destination buffer and the maximum buffer length is an attribute of the destination buffer

  3. call the library append/copy/move routine

  4. check for non-zero return (which would indicate some characters not copied/appended).

the observation is that

  1. some number of systems and languages prior to the creation of unix & c implemented such paradigms ...

  2. some of these implementations continue to exist today

  3. these implementations have tended to have two orders of magnitude fewer of the common buffer overflow vulnerabilities/exploits seen with C language implementations.

the prediction was that, because of the common C language programming conventions, C language implemented infrastructures would tend to have two orders of magnitude more buffer length related problems.

there was no claim that c language couldn't be used to implement buffer length safe implementations ... the claim was that buffer unsafe implementations were so much easier in C ... that it would contribute to a significant increase in buffer length related vulnerabilities/exploits.

I'm only observing that the air force pli/multics security study claimed that there were no buffer length related problems .... while there are a significant number of buffer length related problems in C implementations.

My contention was that default/standard C programming conventions contribute to this significant number of buffer length related problems ... and that by comparison there are other infrastructures whose default/standard programming conventions have significantly fewer buffer length related problems.

furthermore, the default/standard library and programming conventions can be independent of the programming language ... at least some environments have the same default/standard library and programming conventions for assembler, pli, and a number of other languages (being more a characteristic of the infrastructure than of any specific language).

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Thu, 09 Dec 2004 10:36:43 -0700
"Arnaud Carré" writes:
Agree, that's why I said in a previous post that C is (in my opinion) a low level language (but I really love C!). To me, it's a universal macro-assembler without register constraints. You have to take care of everything but the code generation (as with a macro assembler!).

some of this is coding conventions ... regardless of whether or not you can violate the rules when you need to.

the claim is that the null terminated convention is open to a larger number of programmer mistakes (as opposed to purposeful abuses) compared to buffer/string explicit lengths (both current length and maximum buffer length).

i claim that there are infrastructures implemented in assembler (at a lower level than C) with significantly fewer buffer overruns ... not because of the language characteristics, but because of the standard infrastructure environment and coding conventions.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Announce] The Vintage Computer Forum

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Announce] The Vintage Computer Forum
Newsgroups: alt.folklore.computers
Date: Thu, 09 Dec 2004 11:04:36 -0700
Morten Reistad writes:
We will see a huge worldwide battle between the internet charging model and the phone charging model the next decade.

The phone companies want to charge by the bits transported end to end; the Internet model wants to charge for capacity available. Both have their merits.

The Internet model is much simpler, and works wonders when things are growing by leaps and bounds. The Internet still does; despite some setbacks in the US lately. The "Phone model" works well in a static world where capacity is scarce.

Mobile phones have taken off in "phone company" mode. They use the scarceness of the radio spectrum as a proxy to push high by-the-minute rates. They are currently printing money with their mobile networks.


there have been claims that the ISO OSI standard was driven by point-to-point copper-wire people from the telcos ... and it was only a little over 10 years ago that the federal gov. and numerous gov. agencies were mandating the elimination of the internet and the conversion of everything to OSI ... which lacked any internetworking concept at all.

misc. past comments on osi, gov. mandating eliminating the internet, etc.
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Thu, 09 Dec 2004 13:50:05 -0700
Bryan Olson writes:
I saw a PBS documentary on automotive safety in which they explained that in the early days of the industry, cars were far more dangerous but people didn't think much about making them safer. The root cause of the vast majority of the casualties was "driver error". Blame the drivers, not the machines.

I had to laugh, realizing that the thinking in my own profession is the better part of a century behind the auto industry. What kind of engineer would knowingly select a design where common human errors can easily slip by, and such slips frequently cause disaster?


i've periodically used the analogy to after-market seatbelts ... everybody could install their own seatbelts if they needed ... so why was it necessary to have manufacturers put seatbelts in cars ... or school buses, or a variety of other vehicles.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

PR/SM Dynamic Time Slice calculation

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: PR/SM Dynamic Time Slice calculation
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 10 Dec 2004 09:45:46 -0700
gdieh@ibm-main.lst (Diehl, Gary , MVSSupport) writes:
Would someone please enlighten me as to exactly what the PR/SM calculation for dynamic time slice is?

I see that the "default", as listed in the IRD Redbook under the "How WLM LPAR CPU Management works" section, clearly states that the time slice for an LPAR's LP is between 12.5 and 25 ms, and is determined by the following calculation:

(25ms * # of Physical CPs) / (total # of logical CPs not in stopped state)

This seems fairly easy... If I have a 1C7 with two LPARS that have 6 and 4 LPs online respectively, then my total LP to CP ratio is 10/7, and my default time slice is (25 * 7) / 10 = 17.5 ms. So if I cut it down some, taking one LP off each LPAR, changing the ratio to 8/7, the default time slice becomes 21.875 ms.

That's all well and good for the "default" time slice. But what does PR/SM do with it after that? It's dynamic, so my assumption is that the time slice may be constantly changing.

If I change my LPAR weights, say from 600/400 to 550/450, then what does that do to the time slice?

If one LPAR is maxxing out, and the other is idle, what does that do to my time slice?

What else can I do to affect the dynamic time slice, other than vary off and on LPs?

I'd like to be able to tune to this fine point, if necessary (though I'll agree that for the most part it is unnecessary to plan to this level of detail).


from long ago and far away ....

the original vm370 code had time-slice table at boot/ipl ... that basically had machine model number and time-slice ... at boot, it would do a store cpuid ... look it up in the table ... and set the time-slice to the corresponding value.

the vm370 code allowed for pre-empting dispatching ... so if an interrupt came in for a higher priority task ... there would be a task switch ... even if the current running task's time-slice hadn't completed. the time-slice was basically a catcher for recalculating dispatching task priority (allow multiple, concurrent computationally intensive tasks to all get periodic shots at the processor).

the processor model time-slice adjustment was to let progress per time-slice be approximately the same ... aka approximately the same number of instructions ... regardless of the machine model. part of this was to constrain the dispatching overhead for a time-slice switch ... you knew the pathlength (number of instructions) for the dispatching process ... you would like the ratio of dispatching overhead to productive task execution not to exceed some level ... while at the same time allowing some reasonable dispatching control.

so when i released the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

i made a change

eliminated the cpu processor table ... and substituted a short, timed compute bound loop done at boot/ipl time. the time-slice at boot was updated based on the measured number rather than the processor table. the problem was that there were some non-linear effects. the cache hit ratio for the loop was nearly 100 percent ... so it wasn't representative of the actual difference in real-world mip rates between non-cache machines and cache machines. I was hoping for some code that could work on a wide range of different processors ... and dynamically adapt to new processors as they came out ... w/o having to constantly update the boot/ipl processor model table.
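
a rough sketch of the calibration idea in modern C (purely illustrative ... the original was assembler in the boot/ipl path, and the constants here are made up):

  #include <time.h>

  #define LOOP_ITERATIONS  1000000L     /* hypothetical */
  #define INSNS_PER_SLICE  5000000LL    /* hypothetical target work per slice */

  /* time a short compute-bound loop at boot and scale the time-slice
     so a slice covers roughly the same number of instructions
     regardless of processor speed. note the same weakness described
     above: the loop fits in cache, so cached machines look faster
     than their real-world mip rate */
  long calibrate_timeslice_usec(void)
  {
      volatile long x = 0;
      struct timespec t0, t1;
      long long elapsed_nsec, nsec_per_slice;
      long i;

      clock_gettime(CLOCK_MONOTONIC, &t0);
      for (i = 0; i < LOOP_ITERATIONS; i++)
          x += i;                       /* compute-bound work */
      clock_gettime(CLOCK_MONOTONIC, &t1);

      elapsed_nsec = (t1.tv_sec - t0.tv_sec) * 1000000000LL
                   + (t1.tv_nsec - t0.tv_nsec);

      /* faster machine -> less time per iteration -> shorter slice */
      nsec_per_slice = elapsed_nsec * INSNS_PER_SLICE / LOOP_ITERATIONS;
      return (long)(nsec_per_slice / 1000);
  }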

later i made some more changes

1) in the dispatcher ... i added some code that used SSM to temporarily enable for i/o interrupts and then immediately disable. this was done prior to incurring the overhead of selecting and dispatching a new task. the idea was that if there was a pending interrupt ... all the dispatching overhead would be superfluous because the interrupt would immediately happen anyway ... and the whole process would have to be repeated.

2) monitored the i/o interrupt rate ... and if the i/o interrupt rate went past a (dynamically adjusted) limit ... it switched the dispatched task execution from enabled for i/o interrupts to disabled for i/o interrupts. the result would be that when a task was dispatched, it would continue to execute until either 1) it gave up the processor or 2) reached time-slice end. i/o interrupts would then only occur in the dispatcher's i/o interrupt "window"

the issue was that asynchronous i/o interrupts had a very disruptive effect on cache hit ratios. if the i/o interrupt rate passed some high level ... you were losing a large amount of your processing power to cache thrashing. you were better off slightly delaying i/o interrupts and then going thru an iterative cycle to drain all queued i/o interrupts before allowing a task to dispatch. the processing of i/o interrupts tended to be more efficient because you would iteratively loop thru the interrupt handler for all pending i/o interrupts (improved cache hit rate) and then switch to task execution (improved cache hit rate). while slight delays in taking i/o interrupts might appear to decrease i/o thruput ... the slight delays in i/o handling could be more than offset by the improved thruput of the interrupt handler having a higher cache hit rate (handling multiple interrupts in sequence). the result could be both higher aggregate i/o thruput plus higher task thruput (because of the improved cache hit ratio).
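
a structural sketch of the two changes (C-style; all of the primitives here are hypothetical stand-ins, not actual vm370 code):

  /* hypothetical hardware/kernel primitives */
  extern void enable_io_interrupts(void);         /* SSM equivalents */
  extern void disable_io_interrupts(void);
  extern void drain_pending_io_interrupts(void);  /* loop thru handler */
  extern void run_next_task(int io_enabled);

  static long io_interrupt_rate;   /* monitored */
  static long rate_limit;          /* dynamically adjusted */

  void dispatch(void)
  {
      /* (1) brief interrupt window before paying the cost of selecting
         and dispatching a new task ... a pending interrupt is taken
         here instead of wasting the dispatch work */
      enable_io_interrupts();
      disable_io_interrupts();

      /* (2) when the interrupt rate is high, batch interrupts: drain
         them all in sequence (better interrupt-handler cache hit
         rate), then run the task disabled so it isn't constantly
         re-disrupted ... it runs until it yields or hits
         time-slice end */
      if (io_interrupt_rate > rate_limit) {
          drain_pending_io_interrupts();
          run_next_task(0 /* disabled for i/o interrupts */);
      } else {
          run_next_task(1 /* enabled for i/o interrupts */);
      }
  }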

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Tru64 and the DECSYSTEM 20

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tru64 and the DECSYSTEM 20
Newsgroups: alt.folklore.computers
Date: Fri, 10 Dec 2004 13:16:09 -0700
"Charlie Gibbs" writes:
It was an evolutionary step from the so-called "intelligent terminals" of the time. The operative word was "terminal" - IBM intended for these gadgets to act (at least part-time) as terminals to mainframes. Hence the 3270 emulation, the SysRq key, etc. But personal computers were a device whose time had come, and the Law of Unintended Consequences came into effect.

misc past posts regarding terminal emulation subject
https://www.garlic.com/~lynn/subnetwork.html#emulation

one of the claims for the eventual large growth in PC sales was the business market segment ... the combination of some PC-based software along with terminal emulation meant that businesses, for about the same price and desk footprint as a 3270 ... could get both mainframe connectivity and local computing (same screen and keyboard). I actually had this discussion/argument with some of the mac developers before the mac announced.

this however led to an entrenched install base towards the late 80s ... which inhibited the evolution of paradigms involving PC operation in multi-tier mainframe/legacy business environments. misc. 3-tier & saa posts
https://www.garlic.com/~lynn/subnetwork.html#3tier

one of the complaints of the disk division from the period ... was that w/o newer and better paradigms for letting distributed computers access legacy business data ... the legacy business data was going to leak out of the glass house (significantly cutting glass house disk growth and fueling demand for the new generations of commodity disks).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Systems software versus applications software definitions

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Systems software versus applications software definitions
Newsgroups: comp.software-eng,comp.lang.c,comp.programming,alt.os.development,comp.arch,alt.folklore.comupters
Date: Fri, 10 Dec 2004 13:35:28 -0700
Joona I Palaste writes:
You're much more learned than I am, then. The only thing almost a decade of writing toy machine language programs to see what the Commodore 64 can do has taught me in this regard is being able to convert any integer from 0 to 255 from decimal to hexadecimal or back in my head in a couple of seconds. Well, it amazed my little brother for a couple of minutes.

besides learning to read hex from mainframe dumps ... i also learned to read it from the front console lights as well as the punch holes in cards (output of assembler and compiler binary/txt decks) ... both hex->instructions/addresses and hex->character (or in the case of hex punch cards, holes->hex->instructions/addresses and holes->hex->ebcdic).

In the past I had made (the mistake of?) posts about the TSM lineage from a file backup/archive program
https://www.garlic.com/~lynn/submain.html#backup

that I had written for internal use that then went thru 3-4 (internal) releases, eventually packaged as customer product called workstation datasave facility, and then its morphing into ADSM and now TSM (tivoli storage manager).

so a couple days ago ... i get email from somebody trying to decode a TSM tape; included was hex dump of the first 1536 bytes off the tape ... asking me to tell them what TSM had on the tape.

well way back in the dark ages ... you could choose your physical tape block size ... and the "standard label" tape convention started with three 80-byte records; vol1, hdr1, hdr2.

so the first 1536 bytes was three 512byte records ... and i recognize the first 80 bytes of each (512byte) record as starting vol1, hdr1, hdr2.

the hex dump had included the hex->character translation ... but for ascii ... and of course the tsm heritage is ebcdic mainframe ... not ascii (aka it was the ebcdic hex for vol1, hdr1, hdr2). it didn't even get to the TSM part of the tape data ... it was still all the os standard label convention.
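
a small sketch of the check involved (illustrative only ... assumes the three 512-byte records described above and compares the first four bytes of each against the EBCDIC encodings):

  #include <string.h>

  /* EBCDIC (not ascii) encodings: VOL1, HDR1, HDR2 */
  static const unsigned char vol1[4] = { 0xE5, 0xD6, 0xD3, 0xF1 };
  static const unsigned char hdr1[4] = { 0xC8, 0xC4, 0xD9, 0xF1 };
  static const unsigned char hdr2[4] = { 0xC8, 0xC4, 0xD9, 0xF2 };

  /* three 512-byte physical records, each starting with an 80-byte
     standard label ... so the labels sit at offsets 0, 512, and 1024 */
  int looks_like_standard_label(const unsigned char *tape)
  {
      return memcmp(tape,        vol1, 4) == 0
          && memcmp(tape +  512, hdr1, 4) == 0
          && memcmp(tape + 1024, hdr2, 4) == 0;
  }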

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Tru64 and the DECSYSTEM 20

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tru64 and the DECSYSTEM 20
Newsgroups: alt.folklore.computers
Date: Fri, 10 Dec 2004 14:34:55 -0700
some apple trivia dift ...

my brother was regional apple marketing rep ... he would periodically come into town and i sometimes got to tag along on apple business dinners. he said that he had the largest (physical/sq. miles) apple marketing region in continental us. he also had technical background and said that he was possibly the only apple marketing rep that knew how to really take apart and put back together an apple-ii. he believed he was one of only very few apple marketing reps that also knew how to setup apple-ii to dial-up the business computer at apple hdqtrs and directly get the production and delivery schedules.

so in much of the 80s ... what was the business computer in apple hdqtrs?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Tru64 and the DECSYSTEM 20

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tru64 and the DECSYSTEM 20
Newsgroups: alt.folklore.computers
Date: Sat, 11 Dec 2004 08:55:25 -0700
Anne & Lynn Wheeler writes:
so in much of the 80s ... what was the business computer in apple hdqtrs?

hint: the descendant of this business computer and the current generation apples use the same kind of processor chip.

the folklore is that some number of afficionados of the canceled FS project
https://www.garlic.com/~lynn/submain.html#futuresys

went off to rochester to build it.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1GB Tables as Classes, or Tables as Types, and all that

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1GB Tables as Classes, or Tables as Types, and all that refuted
Newsgroups: comp.databases.theory
Date: Sat, 11 Dec 2004 09:13:51 -0700
"Dawn M. Wolthuis" writes:
Name one precise problem that the hierarchical DBMS's had that is now present for anyone using an XML model of data. It's time to drop the flawed notion that data graphs have some inherent problems. One can build terrible database management systems based on graphs or good ones. The model, itself, is useful and never was abandoned in reality (or in Reality, a database from the company Northgate -- McDonnell-Douglas and variants of the company kept this graph-based solution active since the early 70's as the Microdata company, and it is still being sold and used today)

Graph-based data models have survived the Relational Database trend and will now get a new push given that more people now understand -- and even more will! -- that RDBMS's have no better theoretical basis than graph-based database management tools. And, PA-LEASE STOP calling them "Network" and "Hierarchical" -- they are graphs and trees. No other niche in the computer industry has problems using the mathematical terms "graphs" and "trees". Using a mathematical term ("relations") for your preferred data model, while avoiding the mathematical terms for the others is way too obvious a form of spin and, by golly, it worked for a couple of decades, dag nab it (thus holding back our industry unnecessarily, methinks), but the time has come to AT LEAST get away from the N & H terms. --dawn


note that the arguments that i remember going on between stl/bldg.90 and sjr/bldg.28 weren't so much about the information structure. the hierarchical and network databases of the 60s used physical pointers and system/r (first rdbms)
https://www.garlic.com/~lynn/submain.html#systemr

used indexes.

the argument was the trade-off between the human administrative effort to maintain the physical pointers (which was subsumed in large part by system/r indexes) and the indexes typically doubling the physical disk space occupied by the database ... as well as the index lookup being slower than direct pointers. the issue was whether you could trade off processing resources and disk space resources against the manual administrative effort.

during the 70s and 80s, the manual resources became scarcer and more expensive while processing and disk space became much more plentiful and less expensive. also, with the large and less expensive real memories of the 80s, it was possible to cache some amount of the indexes, offsetting some amount of the index processing penalty.

however, physical pointers (used in the 60s) aren't necessarily intrinsically a characteristic of the information organization ... just a characteristic of the resource trade-off implementation circumstances of the period.

there are still quite a few large, major ims installations in existence
http://www-306.ibm.com/software/data/ims/

from above:
IBM's premier transactional and hierarchical database management system for critical on-line operational and e-business applications and data, enabling Information Integration, Management, and Scalability

and
http://search390.techtarget.com/featuredTopic/0,290042,sid10_gci990489,00.html
When most mainframers hear Web-enabling a database, they think connecting DB2 to the Web. But IMS, IBM's older database, is just as Web worthy. To hear more about Webifying an IMS database, check out this Webcast with Jim Keohane.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Tru64 and the DECSYSTEM 20

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tru64 and the DECSYSTEM 20
Newsgroups: alt.folklore.computers
Date: Sat, 11 Dec 2004 09:17:15 -0700
keith writes:
Must be AS/400. Interesting that M$ also used AS/400s to run their business (until mid 90's perhaps).

originally s/38 ... the as/400 was the s/38 follow-on; the as/400 was initially built using a cisc processor ... but later converted to a risc processor.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Question on internal/external IPs

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question on internal/external IPs
Newsgroups: comp.security.firewalls
Date: Sat, 11 Dec 2004 16:04:03 -0700
ibuprofin@painkiller.example.tld (Moe Trin) writes:
ftp://ftp.isi.edu/in-notes/rfc-index.txt

[compton ~]$ zgrep -c '^[0123]' rfcs/rfc-index.11.09.04.txt.gz
3931
[compton ~]$ zcat rfcs/rfc-index.11.09.04.txt.gz | tail -3
3956 Embedding the Rendezvous Point (RP) Address in an IPv6 Multicast
     Address. P. Savola, B. Haberman. November 2004. (Format:
     TXT=40136 bytes) (Updates RFC3306) (Status: PROPOSED STANDARD)
[compton ~]$

I imagine there are a few more documents since then, as I only check it about every 60 days. The index file alone is nearly 15,000 lines or 635k.


this is my index
https://www.garlic.com/~lynn/rfcietff.htm

at the moment it is up-to-date ... except the two RFCs listed in the rfc-editor announcement that went out today.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sun, 12 Dec 2004 09:29:22 -0700
Bryan Olson writes:
By the standards of safer programming languages, memcpy is *not* perfectly well-behaved. It has undefined behavior if the programmer passes a size_t that is too large.

my experience is that programs written in C, except for buffer length failures ... tend to have failures & mistake rates similar to other environments; for instance storage cancers seem to crop up about as frequently in C environments as in many other environments.

one might then conclude that storage cancers are a relatively common problem across a large number of different environments (except for the apl/lisp/etc environments where storage allocation/deallocation responsibilities have been totally removed from the programmer's responsibility). given that programmers are going to be given the allocation/deallocation responsibility ... then it is probably going to require some development methodology to address storage cancer type issues.

the buffer length scenario goes to a completely different level ... since the buffer length failure & mistake rate is significantly larger in C than in lots of other environments (possibly by two orders of magnitude) ... there is some statement somewhere that given a large enuf quantitative difference ... it can become a qualitative difference.

during much of the 90s ... i was told that it was simply just another programming development issue and that better tools were going to eliminate the C language environment's tendency towards buffer length mistakes. however, it seems that the better tools have still had little effect on the buffer length mistakes ... new C applications seem to have just as many buffer length mistakes as the code from the 80s.

i looked at the structural differences and observed

a) null termination convention appeared to encourage programmers to believe that length was an attribute of the data pattern

b) default buffer allocation/deallocation in C ... had the buffer construct simply defaulting to a C address/pointer construct ... with the responsibility placed on the programmer for maintenance of the buffer length attribute. however, one could claim that as a result of the null-convention encouraging programmers to think of length as an attribute of the data pattern rather than an attribute of the structure containing the data ... programmers would frequently forget to enforce the length attributes of buffers (which was their responsibility by the convention of mapping buffer constructs to simple pointers and leaving programmers to manage the length attribute).

so, i was aware of numerous infrastructures during the 60s which took the approach that length was an attribute of the structures ... rather than of the data. a simple example is buffer structures which used a pointer convention to the start of the data portion of the buffer ... but if you backed up two bytes, you had the length of the data in the buffer ... and if you backed up two more bytes, you had the (max) length of the buffer (buffers explicitly carried their length attributes; it was not the responsibility of the programmer to carry them around in their heads). furthermore, the length attribute of data was carried by the structure that contained that data ... rather than being an attribute implicit in the pattern of the data.
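
a small sketch of that 60s-style layout in C (illustrative only ... two-byte length fields in front of the data portion; field sizes and names are made up):

  #include <stdlib.h>

  /* allocate a buffer whose returned pointer refers to the data
     portion; backing up two bytes gives the current data length, and
     two more bytes gives the (max) buffer length */
  unsigned char *buf_alloc(unsigned short maxlen)
  {
      unsigned char *p = malloc((size_t)maxlen + 4);
      if (p == NULL)
          return NULL;
      *(unsigned short *)p       = maxlen;   /* max buffer length   */
      *(unsigned short *)(p + 2) = 0;        /* current data length */
      return p + 4;                          /* caller sees the data */
  }

  /* the length attributes travel with the buffer, not in the
     programmer's head */
  #define BUF_MAXLEN(b)  (*(unsigned short *)((b) - 4))
  #define BUF_CURLEN(b)  (*(unsigned short *)((b) - 2))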

long ago and far away ... when i asked why the null data pattern methodology was chosen and not the length header convention (common in the 60s) ... i was told that they specifically chose the C approach because it saved a couple bytes of storage per structure, an instruction or two in loops ... and possibly a register.

so i claim that there is an intersection of characteristics ... somewhat unique to C (as compared to many other environments that have a radically lower frequency of buffer length problems): the null pattern convention for length ... which encourages programmers to think of length as an attribute of the pattern of the data (and something they don't need to explicitly manage) ... and the buffer allocation/deallocation construct mapped to a simple pointer, with the implicit requirement that the programmer is (now manually) responsible for the length attribute of the buffer construct. buffers are real constructs in C environments with both a pointer attribute and a length attribute ... however, only the pointer attribute is mapped to an explicit C language construct, a pointer ... the length attribute is carried implicitly ... forcing programmers to manage it ... while the null terminating convention encourages programmers to think of length as an attribute of the data pattern ... as opposed to an attribute of the structures that contain the data.

so switching roles ... i'm an attacker ... i'm looking at a typical environment where C programmers frequently assume that length is an attribute of some pattern in well-behaved data. so it seems that besides just old-fashioned, run-of-the-mill program failures because the programmer made a mistake ... the bad guys can frequently and successfully attack C-language environments by sending ill-behaved data.

that makes common c-language implemented environments ... not only subject to failures because of the large increase in buffer related programmer mistakes (compared to other environments) ... but these failure characteristics also represent explicit attack vulnerabilities.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

1GB Tables as Classes, or Tables as Types, and all that

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 1GB Tables as Classes, or Tables as Types, and all that refuted
Newsgroups: comp.databases.theory
Date: Mon, 13 Dec 2004 10:13:33 -0700
alfredo_novoa@hotmail.com (Alfredo Novoa) writes:
The DBMS approach was one of the biggest advances in the computer field, but most programmers are not aware of that.

old time databases from the batch processing days tended to be services to manage complex data ... more complex than reading/writing files sequentially (more complex than the original tape in & tape out ... and then the sequential paradigm moved to random access disks).

dbms has tended to refer to infrastructures (somewhat associated with online) that managed concurrent access by multiple different programs/applications ... providing locking, transaction consistency, acid properties, etc. ... aka dbms tended to represent concurrency control for databases.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Mon, 13 Dec 2004 09:47:31 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
Part of one reasonable way of coping is to use a safe language where array bounds are checked, etc., so that some kinds of accidental slipups are less deadly.

i would claim that it wouldn't even take that much to eliminate the majority of the current c environment buffer length slipups ... just have buffer constructs carry both their address and their length (to eliminate the problems with programmers being required to carry the length around in their heads ... and/or assuming that it is represented as part of the data pattern contained in the buffer construct).

there are environments where the buffer construct carries both the address and the length ... which don't actually enforce array bounds checking ... but with the length readily available and most library routines taking advantage of buffer constructs carrying their own lengths ... programmers have to work much, much harder to make mistakes and violate the bounds.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Two Fedora Core 2 problems

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Two Fedora Core 2 problems
Newsgroups: linux.redhat
Date: Mon, 13 Dec 2004 10:03:47 -0700
bushwah writes:
2. No sound. The Soundcard Detection says:

i'm not sure fc3 gets much better ... i just did a brand new fc3 install on a new vaio a290 laptop after completely wiping the disk. it has an ac97 motherboard chip.

i also recently did upgrades of two machines from fc2 to fc3 ... an old dell precision 410 with a motherboard CS chip, and a dell dimension 8300 with a (bios) disabled motherboard ac97 chip and a soundblaster card.

all three systems are at the (same) most current fc3 maintenance

hardware detection finds all three chips ... but on the 8300, the soundblaster doesn't show up in /proc/asound/cards so alsamixer doesn't recognize it. on the other two machines (vaio and 410), i've set alsamixer values to non-mute and maximum volume.

the vaio bios plays a couple notes on startup ... so the hardware works ... but w/fc3 nothing is heard.

the 410 is a dual-boot machine and the sound works fine under windows, also indicating the hardware works; also during fc3 boot, there are a couple scratchy popping sounds (so it appears to be trying to do something).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

High Level Assembler for MVS & VM & VSE

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: High Level Assembler for MVS & VM & VSE
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 13 Dec 2004 21:26:41 -0700
bblack@ibm-main.lst (Bruce Black) writes:
Yes. I was recently very surprised to see that it was clearly documented in POPs that the various flavors of OR, AND and EXCLUSIVE OR are not serialized for storage access on multi-processors. They do a fetch and store, and another processor can sneak in between those. I just assumed for years that this could not happen. Of course, you all probably know what "assume" does.

CS (compare and swap) is one technique that can be used to compensate. It is a bit more work to twiddle one bit but if it has gotta be done, then you do it.


charlie invented compare&swap while working on fine-grain locking for cp/67 360/67 smp at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

C-A-S was chosen because they are charlie's initials ... and then we had to come up with an opcode mnemonic that matched his initials. at the time, the only atomic instruction was test-and-set.

trying to get the instruction into 370 architecture ... the push back from architecture (padegs and smith, mostly smith), was that a new SMP-only instruction couldn't be justified ... and some non-SMP (single processor) justification had to be supplied for compare&swap in order to get it into 370 architecture.

that was the origin of the original write-up that was included with the compare&swap instruction programming notes in the principles of operation (but since moved to the appendix) ... describing the use of compare&swap in multi-threaded application code ... where the application code might be interrupted in the middle of some operation and another thread resumed. note that the immediate-modify instructions do a non-atomic fetch/store ... however, interruptions only occur on instruction boundaries (at least for these instructions) ... so it isn't an issue in a single-processor environment ... but concurrent operation in a multiple processor environment can lead to unpredictable results.

multiprogramming/multiprocessing appendix from esa/390 pop:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6?SHELF=EZ2HW125&DT=19970613131822&CASE=

from above:
When two or more programs sharing common storage locations are being executed concurrently in a multiprogramming or multiprocessing environment, one program may, for example, set a flag bit in the common-storage area for testing by another program. It should be noted that the instructions AND (NI or NC), EXCLUSIVE OR (XI or XC), and OR (OI or OC) could be used to set flag bits in a multiprogramming environment; but the same instructions may cause program logic errors in a multiprocessing configuration where two or more CPUs can fetch, modify, and store data in the same storage locations simultaneously.

... snip ...

the above appendix reference also contains the latest version of the write-ups.
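
for illustration, here is roughly what that flag-bit usage looks like with C11 atomics (a sketch only ... obviously not the original assembler programming notes):

  #include <stdatomic.h>

  static atomic_uint flags;        /* flag bits in common storage */

  /* set a flag bit safely when multiple CPUs can fetch, modify, and
     store the same word: fetch the old value, compute the new one,
     and store it only if nothing changed in between ... else retry */
  void set_flag(unsigned int bit)
  {
      unsigned int old = atomic_load(&flags);
      unsigned int new_val;
      do {
          new_val = old | bit;
          /* on failure, atomic_compare_exchange_weak reloads old
             with the current value of flags */
      } while (!atomic_compare_exchange_weak(&flags, &old, new_val));
  }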

random other compare&swap and/or smp posts:
https://www.garlic.com/~lynn/subtopic.html#smp

a current page for padegs:
http://inventions.lza.lv/eng/izgudrotaji/PadegsA.asp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Integer types for 128-bit addressing

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Integer types for 128-bit addressing
Newsgroups: comp.arch
Date: Tue, 14 Dec 2004 07:30:23 -0700
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
The technique is particularly recommended for use in C programs in conjunction with the numerous transparent garbage collectors claimed to be available for that language.

some versions of "idea" used the technique in the 80s.

started a little bit after system/r (original rdbms)
https://www.garlic.com/~lynn/submain.html#systemr

in bldg28/sjr ... idea was a semantic network database done by the vlsi tools group in bldg29/lsg.

part of *idea* was an attempt to integrate logical and physical chip design. the low-level implementation used bidirectional links emulating content addressable relationships (as opposed to the physical pointers that were characteristic of the network databases of the 60s).

i happened to get to work on both.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Amusing acronym

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Amusing acronym
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 14 Dec 2004 09:09:15 -0700
edjaffe@ibm-main.lst (Edward E. Jaffe) writes:
It's the other way around. IBM came first. Long, long before HAL.

there was a HAL vlsi chip company in the 90s doing a 64bit sparc chip ... with investment from fujitsu, and it was eventually taken over by fujitsu. there was some claim that american started the sjc<->narita md11 service ... in part because hal had standing 1st class reservations on the flight every week.

minor reference:
https://www.garlic.com/~lynn/2004p.html#40 Computers in movies

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Listserv for TCPIP

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Listserv for TCPIP
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 14 Dec 2004 10:25:44 -0700
eugene.muszak@ibm-main.lst (Gene Muszak) writes:
Can someone please post the link for the listserv for TCPIP.

some history on bitnet
http://www.lsoft.com/products/listserv-history.asp

note that there was a precursor on the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

products from this vendor
http://www.lsoft.com/products/products.asp

another mail list server (written in perl):
http://www.greatcircle.com/majordomo/

so did you want a listserv that supports tcpip ... or a tcpip mailing list? many of the original bitnet lists are gatewayed to newsgroups in the bit.listserv hierarchy ... aka a couple tcp mailing lists are gatewayed as bit.listserv.tcpip-l and bit.listserv.ibmtcp-l.

tcpip-l seems to be defunct.

pointer to ibmtcp-l mailing list
http://catalist.lsoft.com/scripts/wl.exe?SL1=IBMTCP-L&H=VM.MARIST.EDU

ibmtcp-l archive page:
http://vm.marist.edu/htbin/wlvindex?IBMTCP-L

other lists at marist.edu:
http://catalist.lsoft.com/scripts/wl.exe?XH=VM.MARIST.EDU

the "official" catalog of listserv lists:
http://www.lsoft.com/catalist.html

other archaeological references:
http://nethistory.dumbentia.com/nm8608.html
http://www.ocf.berkeley.edu/Library/Network/listserv.groups

some topic drift: rfc 1044 support for mainframe tcpip (originally done in vm370 using vs/pascal ... and later ported to mvs with a thin vm370 emulation layer between tcp and mvs)
https://www.garlic.com/~lynn/subnetwork.html#1044

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Tue, 14 Dec 2004 10:49:18 -0700
"karl malbrain" writes:
The levels are separated by PROTOCOL and implemented by LANGUAGE. The levels themselves are conceptual. Perhaps, see OSI NETWORKING charts or use definitions.

total random topic drift ... we tried to get the hsp protocol accepted as a standards item in x3s3.3 ... however it went directly from transport to the lan/mac layer. unfortunately ISO had a rule that neither ISO nor ISO chartered organizations (ANSI) could do networking standardization that violated the OSI model.

x3s3.3 rejected hsp because:
• lan/mac violates OSI ... in part because the interface sits in the middle of OSI layer3/networking ... and therefore any protocol that talks to the lan/mac interface violates the OSI model

• going directly from transport to lan/mac ... bypassed the osi model layer3/layer4 interface and therefore violated the OSI model

• hsp would support internetworking ... aka IP; internetworking is non-existent in the OSI model ... and therefore supporting internetworking violates the OSI model.


random refs:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Tue, 14 Dec 2004 14:30:55 -0700
ref:
https://www.garlic.com/~lynn/2004q.html#34

when osi comes up ... i frequently claim it should be taught as something that seems so terribly perfect ... and yet so horribly wrong ... and iso compounded the problem by mandating it couldn't be changed
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

besides the prediction about the enormous increase in buffer length problems ... we did uncover some number of other things crawling thru lots of c-code ... including tahoe/reno tcpip code for ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

a simple one was trying to do fall-over and ip-address take-over (simply) with servers. it turns out that in a client/server environment with tahoe/reno clients ... there was a performance feature of the tahoe/reno arp code. the ip layer saved the last response from calling the arp-cache code ... and the next time thru ... if the ip-address was the same ... it used the previous arp-cache response. this value never timed out. in a heavily client/server oriented environment ... a client might always use a local gateway ip address, and therefore the client could go for hrs w/o changing the ip address that it was talking to (and never get around to actually recalling the arp-cache code ... which had timed-out the mac address hrs earlier).
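
a sketch of that "performance feature" in C (all names are made up; not the actual tahoe/reno code ... just the shape of the problem):

  #include <string.h>

  extern int arp_resolve(unsigned long ip, unsigned char mac[6]);

  /* the ip layer's one-entry hint: the last arp answer, reused as
     long as the destination ip is unchanged ... note there is no
     timestamp, so this hint never times out */
  static struct {
      unsigned long ip;
      unsigned char mac[6];
      int           valid;
  } last;

  int ip_output_lookup(unsigned long dst_ip, unsigned char mac[6])
  {
      if (last.valid && last.ip == dst_ip) {
          /* fast path: the real arp cache (and its timeouts) is
             never consulted ... the ip-address take-over failure */
          memcpy(mac, last.mac, 6);
          return 0;
      }
      if (arp_resolve(dst_ip, mac) != 0)
          return -1;
      last.ip = dst_ip;
      memcpy(last.mac, mac, 6);
      last.valid = 1;
      return 0;
  }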

i had done rfc 1044 support ... random recent reference
https://www.garlic.com/~lynn/2004q.html#33 Listserv for TCPIP
and
https://www.garlic.com/~lynn/subnetwork.html#1044

the base code had been done in vs/pascal ... and could consume a 3090 processor getting about 43kbytes/sec aggregate thruput using an 8232 controller.

i had extended the standard tcpip product with rfc1044 support (also in vs/pascal) and done some tuning at cray research (on a scheduled flight to Minneapolis to do some of the work, wheels lifted from sfo 20 minutes late ... and five minutes before the earthquake hit). we finally got performance up to 1mbyte/sec thruput between a 4341-clone and a cray ... using only about 20% of the 4341-clone (which was maybe 1/10th the mip rate of a 3090 processor ... aka nearly 25 times the bytes/sec using about 1/50th as much cpu).

vs/pascal had started at the bldg.29/lsg vlsi lab ... by P & W. at the time, bldg.29/lsg was using DeRemer's TWS stuff for a lot of specialized grammars associated with vlsi design (and other stuff). W then left to do a 3274-clone startup, then was VP of software development at mips, and then general manager at sun of a group that included JAVA. P stayed around for several years and then left to join DeRemer at metaware (i was trying to talk P into doing a C-frontend for vs/pascal ... and i left for a six week lecture tour in europe and when I got back he was over in santa cruz). I did talk the company into subcontracting to metaware for a c-compiler ... that is why aos (the bsd port) on the pc/rt was done w/the metaware c-compiler.

note however, I know of no buffer length exploit in any of the vs/pascal implemented stuff ... and it was possible to tune vs/pascal for pretty high performance (the rfc 1044 support as a simple example) ... note however, vs/pascal had a number of significant extensions. I found this out several years later porting a 60k instruction vs/pascal application (at the time, it ran on both mainframes and rs/6000s) to another platform. that appeared to be a vanilla pascal implementation that had possibly never been used for anything other than student projects (they had also outsourced their pascal support to some place on the opposite side of the planet ... which meant a lot of delay in turning around bugs).

totally unrelated recent bldg.29/lsg reference (although it also involved vs/pascal):
https://www.garlic.com/~lynn/2004q.html#31 Integer types for 128-bit addressing

far away reference to the (really old) tws manual:
https://www.garlic.com/~lynn/2004d.html#71 What terminology reflects the "first" computer language ?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CAS and LL/SC

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CAS and LL/SC
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 14 Dec 2004 15:44:24 -0700
Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
Interesting. Other than 360 and derivatives, and some of the Motorola 68K processors (starting with MC68020), what architectures have included CAS?

Who invented the Load Locked and Store Conditional instructions used in the MIPS, Power, and Alpha architectures, and where else have they been used?


the original rios/6000 didn't support smp ... so there were no multiprocessor issues ... but it was a risc architecture with nothing that did a load, modify, store in a single instruction (like the immediate flag bit instructions on 360) ... so it had a problem doing single-processor multi-threading of enabled code. a compare&swap macro was invented ... it did a special supervisor call which placed you in the supervisor call interrupt handler, disabled for interrupts ... which then had a small amount of fastpath code to emulate compare&swap semantics and immediately return.

perform locked operation eventually shows up in the mainframe

i remember some early somerset hardware architecture meetings designing perform-lock-like hardware semantics ... i don't remember any details of where it originated. a big issue for risc was not having a single instruction that possibly did more than one thing ... &/or did both load & store. so the separate load-locked & store-conditional instructions would have tended to be a risc-oriented smp thing. it would have been unlikely to have originated in the 801 affiliated groups, since they were so zealously anti-smp and anti cache consistency.

doing a little search engine work turns up this discussion (power/pc aka somerset compared w/ia32):
http://www.usenix.org/events/jvm02/full_papers/alpern/alpern_html/node10.html

this mentions compare&swap for sparc
http://www.syssoft.uni-trier.de/systemsoftware/Download/Fruehere_Veranstaltungen/Seminare/Prozessorarchitekturen/Prozessorarchitekturen-6.html

and off-the-wall mention of compare&swap instruction in tcp/ip history thread
http://www.postel.org/pipermail/internet-history/2004-September/000431.html

misc. past reference to mainframe perform lock instruction.
https://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?
https://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2003j.html#58 atomic memory-operation question
https://www.garlic.com/~lynn/2004b.html#57 PLO instruction
https://www.garlic.com/~lynn/2004d.html#43 [OT] Microsoft aggressive search plans revealed
https://www.garlic.com/~lynn/2004k.html#39 August 23, 1957
https://www.garlic.com/~lynn/2004l.html#55 Access to AMD 64 bit developer centre

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A Glimpse into PC Development Philosophy

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Glimpse into PC Development Philosophy
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 14 Dec 2004 15:59:41 -0700
howard@ibm-main.lst (Howard Brazee) writes:
I worked at a place that had a beta of Amdahl's operating system Aspen. I liked the OS, but it never made it past beta. In the help about reading labeled tapes it mentioned that labeled tapes are used by "a popular water-cooled computer".

part of the issue was that simpson (aka hasp, crabtree, et al) had done something similar before leaving and joining Amdahl ... and so there were threats of litigation and independent auditors reading code.

one of the (big) issues with both uts & aix/370 (a port of ucla locus) running under vm ... was that there was a bunch of mainframe specific processor and i/o device erep recovery, diagnostic, and recording support ... which was expensive to duplicate.

I knew people in both the aspen and uts groups and suggested that it was too bad they couldn't get together and do a uts layer on aspen ... in much the same way that bell had done unix as a ssup-layer on tss/370. I have vague memories of there being some differences of opinion between dallas and sunnyvale.

for random drift, collection of past postings mentioning hasp:
https://www.garlic.com/~lynn/submain.html#hasp

past mention of RASP (project before simpson going to dallas/Amdahl) and aspen (project after simpson going to dallas/Amdahl):
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2002g.html#0 Blade architectures
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?

past mention of the tss/unix work
https://www.garlic.com/~lynn/96.html#4a John Hartmann's Birthday Party
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#8 IBM Linux
https://www.garlic.com/~lynn/2000f.html#68 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000f.html#70 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000.html#92 Ux's good points.
https://www.garlic.com/~lynn/2001d.html#77 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#19 SIMTICS
https://www.garlic.com/~lynn/2001f.html#20 VM-CMS emulator
https://www.garlic.com/~lynn/2001f.html#22 Early AIX including AIX/370
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001l.html#8 mainframe question
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2002m.html#21 Original K & R C Compilers
https://www.garlic.com/~lynn/2002m.html#24 Original K & R C Compilers
https://www.garlic.com/~lynn/2003c.html#53 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003d.html#54 Filesystems
https://www.garlic.com/~lynn/2003g.html#24 UltraSPARC-IIIi
https://www.garlic.com/~lynn/2003g.html#31 Lisp Machines
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004p.html#10 vm/370 smp support and shared segment protection hack

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CAS and LL/SC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CAS and LL/SC
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 15 Dec 2004 09:13:28 -0700
Brian Inglis writes:
Could explain some of why the RT disappeared and was replaced by or evolved into the RS then the SP. Technical workstations without order of magnitude priced same architecture servers have limited market.

PC/RT (ROMP chip) was originally targeted as a replacement for the displaywriter by OPD (office products division). when that product was canceled, the group quickly retargeted it for the unix workstation market.

core displaywriter replacement was built on CPr and written in PL.8. with retargeting the product to the unix workstation market, the PL.8 programmers were given the task of writing something called the VRM ... creating an abstract virtual machine layer ... and the unix port to the VRM layer was outsourced to the company that had done the PC/IX port.

after PC/RT was out, the palo alto acis group, which had been working on a bsd port to 370 ... got retargeted to the pc/rt ... and they quickly built "aos" running on the bare metal. recent "aos" references in thread in sci.crypt:
https://www.garlic.com/~lynn/2004q.html#35 [Lit.] Buffer overruns

in the above, there is discussion of choice of metaware for 370 c-compiler ... but they kept the same compiler when "aos" was retargeted to pc/rt & ROMP.

random past 801, romp, rios, somerset, etc. posts
https://www.garlic.com/~lynn/subtopic.html#801

and other recent unix-related topic drift:
https://www.garlic.com/~lynn/2004q.html#37 A Glimpse into PC Development Philosophy

aix/370 mentioned in the above ... was done by palo alto acis ... after the "aos" work ... however ... rather than bsd ... it was ucla locus. it was sort of unix saa ... random drifts related to saa
https://www.garlic.com/~lynn/subnetwork.html#3tier

where ucla locus was ported to both aix/370 and aix/ps2 ... supposedly providing transparent file & process migration across the 370 and ps2 boundary.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Tru64 and the DECSYSTEM 20

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Tru64 and the DECSYSTEM 20
Newsgroups: alt.folklore.computers
Date: Wed, 15 Dec 2004 09:44:31 -0700
mwojcik@newsguy.com (Michael Wojcik) writes:
I don't believe any AS/400 models use PowerPC. They use POWER-family chips - PowerAS and POWER5, IIRC - but not PowerPC. There may not be huge differences among the POWER siblings, but they're not inter-changeable.

I believe PowerAS adds a few 400-specific instructions and direct support for the 400's "single level store", with its whopping great virtual addresses (128 bit).


the issue is what you call power and what you call power/pc. the original power was rios ... absolutely no cache coherency and no capability for consistent shared memory multiprocessing (live oak was a 4-way with rios.9 chips where virtual memory segments were defined as cacheable or not cacheable ... aka smp coordination was achieved by forcing non-cached memory).

somerset was started with motorola and apple ... to do several things ... one was supporting cache consistency and smp. one might claim that part of power/pc by somerset was sort of the 88k bus and cache consistency applied to an 801 infrastructure. these were the 601, 602, 603, 610, 615, 620, 630, etc. the original as/400 port to risc used some flavor of 6xx chip. the 64-bit 620 design meetings had rochester demanding 65 bits (as opposed to 64) ... where they needed the 65th bit for special tagging.

when we started ha/cmp (my wife had been manager, 6000 engineering architecture):
https://www.garlic.com/~lynn/subtopic.html#hacmp

we reported to an executive who then went over to start somerset (he had originally come from motorola). he got somerset rolling and later left to become president of mips ... and turned out the 10k.

trusty search engine use turns up as/400 and apple using power/pc chip
http://c2.com/cgi/wiki?PowerPc

from above:
The chip used in a PowerMac and the IBM eServer pSeries (RS/6000). A very close variant of PowerPc is also used in the IBM eServer iSeries (AS/400).

... snip ...

http://www.findarticles.com/p/articles/mi_m0SMG/is_n4_v13/ai_13495730

from above:
The IBM AS/400 will be reborn by the mid-1990s with a new 64-bit reduced instruction set computing (Risc) microprocessor that will also run IBM's Unix-based RS/6000 and next-generation Power PC personal computers, according to a report issued byADM Consulting, Inc.,

... snip ...

http://publib.boulder.ibm.com/iseries/v5r1/ic2924/tstudio/tech_ref/rrmap/

from above:
If your move is a CISC-RISC, see the appropriate 'AS/400 Road Map for Changing to Power PC Technology' manual for more information.

... snip ...

more trusty search engine use turns up random refs to as/400 power/pc
http://www.notesbench.org/summary.nsf/0/dbac611b63b086208525691400044504?OpenDocument
http://www.geocities.com/SiliconValley/Pines/5581/facts.htm
http://www.notesbench.org/Storage3.nsf/0/c762edfc74b9ce3b8525691000644472?OpenDocument
http://vip400.vtinfo.com/iasp/www/generic/services/hardware/as400e.asp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How many layers does TCP/IP architecture really have ?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How many layers does TCP/IP architecture really have ?
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 15 Dec 2004 10:28:58 -0700
Tito writes:
Hello. I have read some books and Internet webs, and I see some authors say the TCP/IP model has 4 layers, others say 5 layers. This is: - 4 layers: subnet, internet, transport, application. - 5 layers: physical, link, internet, transport, application.

But which is true? How many layers really? Where could I find the official TCP/IP reference model? Does it exist?


slightly related:
https://www.garlic.com/~lynn/2004q.html#34
https://www.garlic.com/~lynn/2004q.html#35

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

browser without "padlock" secure?

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: browser without "padlock" secure?
Newsgroups: comp.os.linux.security
Date: Wed, 15 Dec 2004 14:00:36 -0700
"dmorgan1-with-suffixed-\"1\"-ATdslextreme.com" <dmorgan-with-suffixed-"1"-ATdslextreme.com> writes:
I was about to pay on the internet with a credit card when I noticed absence of the accustomed closed padlock icon in my browser that denotes use of https protocol that encrypts the communication. I want to be sure my credit card info won't travel in the clear.

the locked padlock indicates that https is being used, that the hostname in the url is pretty similar to the hostname in the server's certificate ... and presumably that the issuer of the certificate performed some validation that the person applying for the server certificate was in some way associated with the applied-for hostname ... and that the subsequent transmission is encrypted.

this is a countermeasure against eavesdropping attacks and server impersonation attacks.

the original design point for SSL was that the URL you typed would be https/ssl and all subsequent web-pages at that site would be via SSL ... and you were sure that the webserver you contacted with the typed-in URL was the same webserver that you were actually talking to.

the problem became that SSL placed a very heavy computational burden on the server ... and most places have since gone to only using SSL for the actual entry of the credit card. to get to the "pay now" web page, you typically just click on a button (rather than actually typing the URL).

the situation is that if you are dealing with a case of server impersonation and got there w/o having the originally typed URL checked (via SSL) ... and all further interactions with that webserver were via clicking buttons ... then an impersonating webserver can create a "pay now" button that is guaranteed to have a URL that exactly matches what is encoded into its SSL certificate.

since the person never looks at the actual certificate and never typed in the actual URL ... and the only thing the browser does is attempt to match what is specified in the current URL against what is in the provided certificate ... some number of impersonation attacks aren't actually very hard.

as a result of the way SSL is typically deployed .... it is now primarily a countermeasure for eavesdropping attacks and not particularly effective against impersonation attacks.
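
to make that check concrete ... a minimal sketch of the comparison being relied on, assuming a hypothetical cert_host string already extracted from the server certificate by the SSL library (an impersonating server controls both the button URL and the certificate, so for it this check always passes):

  #include <stdio.h>
  #include <ctype.h>

  /* minimal sketch: the only check the browser makes is a
     case-insensitive match of the URL hostname against the
     hostname in the server certificate.  cert_host stands in
     for whatever the SSL library pulls out of the cert. */
  static int hostnames_match(const char *url_host, const char *cert_host)
  {
      for (; *url_host && *cert_host; url_host++, cert_host++)
          if (tolower((unsigned char)*url_host) !=
              tolower((unsigned char)*cert_host))
              return 0;
      return *url_host == *cert_host;   /* both strings ended together */
  }

  int main(void)
  {
      printf("%d\n", hostnames_match("pay.example.com", "PAY.EXAMPLE.COM"));
      return 0;
  }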

misc. past ssl certificate postings
https://www.garlic.com/~lynn/subpubkey.html#sslcert

misc. archeological references to e-commerce on the internet
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 15 Dec 2004 16:27:06 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
The slipup occurs exactly because of lack of care or attention. The problem with C is that avoiding buffer overruns in C requires extraordinary care and a lot of careful attention at every place you use a buffer. No one is perfect. When given thousands of opportunities to err, it is only human that people will slip up at least once. Safe languages are about reducing the harm caused by one such slip-up (for instance, so that the impact is loss of availability rather than root compromise).

or that it is harder to make the slipup in the first place ...

from an old thread in comp.arch ... real programmers don't eat quiche
https://www.garlic.com/~lynn/2001e.html#31 High Level Language Systems was Re: computer books/authors

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How many layers does TCP/IP architecture really have ?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How many layers does TCP/IP architecture really have ?
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 15 Dec 2004 17:21:08 -0700
adykes@panix.com (Al Dykes) writes:
The problem with comparing IP to the OSI is that (a) OSI is a model, not a design or implementation and all but the most diehard OSI fans would have admitted that reality required some out-of-model realities in working networks and (b) IP and lots of applications existed and were in use prior to the 7 layer model, so it doesn't fit well, but nobody cares now.

as indirectly referred to in a previous posting
https://www.garlic.com/~lynn/2004q.html#41

which in turn refers to
https://www.garlic.com/~lynn/2004q.html#34
https://www.garlic.com/~lynn/2004q.html#35

iso had a rule that iso and iso chartered standards organizations couldn't standardize networking protocols that violated the osi model.

when trying to get ansi x3s3.3 (the us chartered organization responsible for networking protocols) to accept work on HSP ... they rejected it as violating the OSI model. HSP would go directly from the transport/layer4 interface to the lan/mac interface:

  1. the lan/mac interface violated OSI ... since the lan/mac interface sits someplace in the middle of layer3/networking; therefore any network protocol that interfaces to the lan/mac interface also violates the osi model

  2. hsp was going directly from the layer4 interface to the lan/mac interface, bypassing the interface between layer3/layer4 ... violating the osi model

  3. hsp provided support for internetworking ... aka IP; the osi model doesn't contain an internetworking layer ... so any protocol supporting internetworking violated the osi model

note also that the federal gov. in the late '80s and early 90s was mandating the elimination of the internet and conversion of everything to gosip. there were some number of iso standardized protocols that conformed to the osi model.

random past osi, hsp, gosip, etc postings
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

note that the great switch-over to IP (internetworking) occurred on 1/1/83 ... in the same time frame ISO finally passed some standard(s) regarding the OSI model.

I've frequently observed that the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

was larger than the arpanet thru most of the 70s and up until about mid-85 ... in part because the internal network nodes contained gateway-like capability from just about the beginning.

i've claimed that the lack of internetworking/gateways in the arpanet ... prior to 1/1/83 was an inhibitor to its growth ... and that the 1/1/83 conversion to internetworking w/gateways was a big factor in the growth of the internet and it overtaking the internal network in number of nodes approx. mid-85.

random other observations
https://www.garlic.com/~lynn/internet.htm

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

C v. Ada

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: C v. Ada
Newsgroups: sci.crypt
Date: Wed, 15 Dec 2004 18:51:20 -0700
BRG writes:
Of course this is an Ada oriented site, but it does nevertheless give a picture of the character of systems that Ada is being used for and, as can be seen, these tend to be applications where high integrity is an important requirement.

ada frequently shows up in systems that are "human-rated" ... aka systems where failures/mistakes could result in deaths.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Thu, 16 Dec 2004 07:03:31 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
I appreciate the references, but those were frankly not very helpful in providing what I was looking for. The material found off of those sites looks like it was written for a not terribly well informed audience, and didn't talk about security. There was talk about strong typing, object-oriented programming, abstraction, and the like. All well and good, but that's well-known stuff these days, and you hardly need Ada for that.

one of my favorites on assurance ... not specifically ada ... although some of the members do human-rated projects and use ada
http://www.software.org/quagmire/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Fri, 17 Dec 2004 07:25:52 -0700
daw@taverner.cs.berkeley.edu (David Wagner) writes:
Certainly not. You're not going to stop *all* security flaws with bounds checking. But you are going to stop some of them, or reduce their impact. That's enough to be valuable.

my original assertion is that a lot of buffer overflows come from copying/moving a string into a buffer, relying on the implicit data-pattern-defined length paradigm of null-termination (for the source) and, for the target, a buffer paradigm that only explicitly maintains the starting buffer address (making it the programmer's responsibility to supply the buffer length).

as mentioned several times, C is flexible enuf that anybody could define their own private environment where there is a new buffer paradigm that keeps track of both the buffer start address as well as the buffer length ... eliminating numerous failure modes that happen because of lapses in programmer memory.

the obvious conclusion is that since these types of buffer overrun exploits have continued to occur frequently over a period of decades ... possibly the default paradigm should be changed so that both the buffer origin and the buffer length are tracked ... with programmers instead having to explicitly opt out of the default C semantics if they want origin-only tracking where they keep track of the length themselves.
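
a minimal sketch of such a changed default, in plain C ... the buffer construct carries its length and the copy operation enforces it (the lbuf/lbuf_copy names are purely hypothetical, for illustration):

  #include <string.h>
  #include <stdio.h>

  /* hypothetical buffer paradigm: the construct tracks both the
     buffer origin address and the buffer length */
  typedef struct {
      char   *origin;
      size_t  length;   /* total space; assumed >= 1 */
  } lbuf;

  /* copy a null-terminated source into the buffer; the length
     check is made by the library rather than remembered (or
     forgotten) by the programmer.  returns -1 on truncation. */
  static int lbuf_copy(lbuf *dst, const char *src)
  {
      size_t n = strlen(src);
      if (n >= dst->length) {
          memcpy(dst->origin, src, dst->length - 1);
          dst->origin[dst->length - 1] = '\0';
          return -1;
      }
      memcpy(dst->origin, src, n + 1);
      return 0;
  }

  int main(void)
  {
      char storage[8];
      lbuf b = { storage, sizeof storage };
      if (lbuf_copy(&b, "longer than the buffer") != 0)
          fprintf(stderr, "truncated: %s\n", b.origin);
      return 0;
  }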

in the 90s, several of the early firewalls were application firewalls that explicitly checked the length of incoming data and discarded it if the length exceeded some known buffer length value for the associated application.

when we were doing the original payment gateway for electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

we had such firewalls ... as an extra precaution layer (as well as port & ip-address filtering routers).
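
the filter itself amounts to very little code ... a minimal sketch, where MAXREQ is a hypothetical stand-in for the known buffer length of the associated application:

  #include <stdio.h>
  #include <string.h>

  #define MAXREQ 512   /* hypothetical known buffer length for the app */

  /* application-firewall style length check: incoming data longer
     than the application's known buffer is simply discarded */
  static int accept_request(const char *data, size_t len)
  {
      (void)data;               /* content validation would go here */
      return len <= MAXREQ;
  }

  int main(void)
  {
      const char *req = "GET /pay HTTP/1.0";
      printf("%s\n", accept_request(req, strlen(req)) ? "pass" : "discard");
      return 0;
  }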

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Fri, 17 Dec 2004 13:47:38 -0700
"karl malbrain" writes:
Right, using a PROTOCOL response to a PROTOCOL problem, blocking the error (or attack) from having a negative effect. karl m

not exactly ... some of the application code was binary that we knew to be wrong ... and couldn't fix. we knew there were large amounts of c-code using traditional c-programming techniques of copying data from one place to another w/o the appropriate safeguards.

to the extent that there was a (network) protocol involved ... it was because the applications & programs had been placed on an isolated machine where all input/output was carefully restricted to just a network interface (aka it wasn't that the applications were specifically related to any network protocol layer ... it was that the configuration was arranged so that the network interface was the major remaining unhandled vulnerability point ... all other interfaces having been removed via administrative configuration).

the problem was that some number of the programs/applications were known to perform traditional and common c-related mismanagement of strings and buffers ... and that some amount of this mismanagement could be compensated for via administrative configuration.

the remaining vulnerability for which administrative configuration wasn't a countermeasure (or compensating procedure for traditional and common c-related mismanagement of strings and buffers) was the networking interface. for this remaining networking-related vulnerability (not addressed via administrative configuration countermeasures), an explicit incoming filter was created to specifically eliminate various kinds of malformed strings (at least from the standpoint of correct application operation).

the observation at the time was that we effectively had to have every application implemented twice ... the base application using traditional c-programming techniques that were known to be highly prone to buffer length mismanagement ... and a 2nd version of each application that just did network-related validation of expected, properly formed input .... and that a relatively simple paradigm change would have made a single implementation sufficient.

ref:
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

it is somewhat analogous to having highway guardrails and center median barriers ... because you know that every car on the highway is guaranteed to be technically faulty and it would be impossible for any car to traverse the highway even once safely ... as opposed to the majority of cars negotiating the highway safely but with sporadic problems where the additional safeguards might be considered beneficial (somewhat protection in depth). in such a protection-in-depth scenario ... if some specific kind of failure characteristic reaches even a couple percent of all failures and is found to be associated with a specific feature ... there would be remediation efforts to rework that feature to minimize such commonly occurring failures in the future.

the claim is that there is a specific feature of the c environment that accounts for possibly 25-33 percent of common failures ... and there has been little or no successful remediation of that feature.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

creat

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: creat
Newsgroups: alt.folklore.computers
Date: Sat, 18 Dec 2004 07:39:49 -0700
jmfbahciv writes:
If you have a table of commands in core, it's a lot faster to compare a single word than a double word. PDP-11s were not that sophisticated (double-word compares) at least in the early days. I have no idea what the machine language was extended to later.

cms had a scale-up problem with command lookup that way.

the original interface for commands was svc202 (0xCA ... aka cambridge) ... and command lookup would search the file systems for an "EXEC" file (i.e. command scripting file) with that file name, then a "MODULE" file (i.e. binary executable) with that file name ... and finally search the (in-core) name table of kernel services (along the way it would also try alternative values suggested by synonym and abbreviation tables).

it was possibly somebody at perkin-elmer(?) that first did an enhancement to the filesystem that sorted the filenames (leaving a bit lying around indicating whether the filenames were still sorted ... certain operations would turn the bit off). for systems with a really large number of files ... the sorted structure improved general filename lookup ... and it also had a significant benefit for all command lookup (since even high-frequency calls for kernel services went thru the command lookup process).

eventually there was an enhancement that applications could use for calling known kernel services with SVC203 (for things like reading the next file record), which passed an index into a table of well-known kernel services (instead of doing the extensive command search operation).
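
the resolution order, as a minimal sketch ... the helper functions here are hypothetical stand-ins for the real filesystem searches and kernel table:

  #include <string.h>
  #include <stdio.h>

  /* hypothetical in-core table of kernel services */
  static const char *kernel_services[] = { "RDBUF", "WRBUF", "FINIS" };

  /* stubs standing in for the real filesystem searches */
  static int find_exec_file(const char *n)   { (void)n; return 0; }
  static int find_module_file(const char *n) { (void)n; return 0; }

  static int find_kernel_service(const char *name)
  {
      for (size_t i = 0; i < sizeof kernel_services / sizeof *kernel_services; i++)
          if (strcmp(kernel_services[i], name) == 0)
              return 1;
      return 0;
  }

  /* svc202-style resolution order: EXEC (script) file, then MODULE
     (binary) file, then the in-core kernel service table */
  static int resolve_command(const char *name)
  {
      if (find_exec_file(name))      return 1;
      if (find_module_file(name))    return 2;
      if (find_kernel_service(name)) return 3;
      return 0;   /* unknown; real CMS would also retry synonyms/abbreviations */
  }

  int main(void)
  {
      printf("RDBUF -> %d\n", resolve_command("RDBUF"));
      return 0;
  }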

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sat, 18 Dec 2004 08:58:32 -0700
"karl malbrain" writes:
That's what PROTOCOLS do -- provide barriers to faults/allow passage to correctness.

besides the observation about the OSI model being so terribly perfect and so horribly wrong
https://www.garlic.com/~lynn/2004q.html#34
https://www.garlic.com/~lynn/2004q.html#35

there was the point that the ISO standards for protocols implementing the OSI model only defined the stuff that passed between the different interface boundaries (with the side note that the iso directive about violating the osi model extended to skipping a boundary layer).

however, possibly 90 percent of the correct operation of an ISO/OSI standard environment wasn't in the protocols defined going between the layers ... it was all the out-of-band administrative and operational stuff that keeps the environment going (and the administrative and operational stuff could have an organization that is almost totally unrelated to the definition of the protocol between the boundary/layers).

layered architectures with protocols have tended to provide fault isolation as an aid to correctness (fault isolation contributes to fault diagnosis and remediation). the protocols tend to define a formal interface between the layers ... which then becomes part of forming the barriers between different layers and isolating faults.

there is also a KISS characteristic to layered implementation with formal interfaces ... the complexity of the problem that the programmer has to deal with tends to be bounded ... and the number of things that the programmer has to carry in their head concurrently is also bounded. reduced complexity and a reduced number of concurrent concerns also tend to be characteristics of layered implementations (along with interface specification between the layers), in addition to fault isolation.

the observation is that carrying the buffer length in addition to the buffer origin as part of the buffer construct, with move/copy libraries by default using the buffer length in move/copy operations, is more of a fault avoidance (correct operation) technique than a fault isolation technique.

the observation with respect to guardrails and median barriers was that while they are a fault isolation technique ... they are also part of an infrastructure that stresses fault remediation ... including correcting infrastructure characteristics that appear to contribute to faults happening in the first place ... aka fault avoidance as opposed to fault isolation (both can be characteristics of protection/defense in depth).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sat, 18 Dec 2004 13:40:33 -0700
"Karl Malbrain" writes:
You still have room for sanity checks -- that the requests on the buffer manager can not be fulfilled under any conditions. It's not clear to me how to avoid out-of-band handling of these exceptional conditions other than check-point/restart types of hierarchy that invoke user-notification/developer-rectification.

there can be hundreds of different kinds of potential buffer related problems ... i've only been commenting about eliminating some of the most frequent mistakes ... if one particular characteristic out of a thousand happened to account for 99.999% of the mistakes ... would it be worthwhile addressing the one most frequent source of mistakes ... even if you didn't solve the other 999 possible sources of mistakes (which might account for the remaining small fraction of problems).

minor topic drift with two RAS (reliability, availability and serviceability) examples:

1) long ago and far away, I wrote a driver that supported remoting locally bus-attached devices over telco links. the driver had a bunch of diagnostics and recovery ... but if recovery wasn't successful ... it simulated a bus (channel check) error back to the standard operating system ... which then went thru some amount of additional recovery, retry and recording operations. there is an industry service that captures large amounts of the EREP (error recording) data directly from customer installations and publishes reports; since there are a lot of clone device, controller and processor products in this market place, the industry service stats are one way of comparing different vendor products.

there was a new processor being developed with greatly improved RAS characteristics ... and after it had been at customer sites for a year ... one of the product owners contacted me about an enormous problem. the channel i/o interface had a design point of 5 channel checks per year ... and the industry service was reporting 15 channel checks the first year (this wasn't 5 channel checks per year per processor ... this was five channel checks per year across all processors in existence). well, it turned out that the additional channel checks were from installations running the remote device driver software that simulated channel check errors for unrecoverable telco errors. so after quite a bit of investigation of the standard operating system error recovery operations ... i figured out that I could report a simulated IFCC (interface control check) error instead (for unrecoverable telco errors) ... and basically get the same error retry and recovery operations (which made the processor product owner a lot happier, his product no longer being so completely disparaged by 15 channel checks per year across all processors in existence). some random refs:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

2) i had previously referenced the original payment gateway work
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

it had basically started before we got involved ... they had taken the payment message protocol specification and remapped it from a circuit-based infrastructure to a packet-based infrastructure ... w/o realizing that there was an enormous amount of service-related stuff associated with a circuit-based infrastructure ... which was totally lacking in the internet packet-based operation (it is only relatively recently that ISPs would even talk about providing service level agreements ... level as in degree of service).

so at the time, a call to the trouble desk (about a problem with a payment transaction) had a standard of five minute mean-time to first-level problem determination. this was possible, in part, because there is quite a bit of infrastructure associated with servicing an end-to-end (circuit) infrastructure ... totally unrelated to the protocols that run over those circuits. the first trouble ticket opened for a transaction thru the payment gateway was closed after three hrs as NTF (no trouble found).

so one of the things we had to do was to crawl through the characteristics of a circuit-based service ... and try to invent equivalent operations for a packet-based service. one of the things we also created was a failure-mode matrix ... for something like 20-40 failure modes and 4-6 states ... the software had to demonstrate either 1) automatic recovery or 2) sufficient information to allow 5 minute mean-time first-level problem determination. almost none of this stuff was in any way associated with protocol ... as in the common definition of protocol, the bits that pass boundary interfaces. then there had to be a diagnostic manual for what to do when a payment transaction problem is reported (which had to do with the operation of the service and had nothing to do with the payment protocol).

in this particular situation ... there was existing software in existing operating systems and tcp/ip stacks running on client machines all over the world ... as well as existing software running at the major ISPs around the world.

going back to the origins of the ha/cmp project, now nearly 20 years ago
https://www.garlic.com/~lynn/subtopic.html#hacmp

it was slightly hoped that some small improvements might have been made over the last 20 years ... however it seems that the rate/frequency of programmer mistakes associated with copying a string into a buffer is as high today as it was 20 years ago (which turns out to possibly have been the original subject of this thread ... as opposed to various other types of buffer-related mistakes).

general random comments about assurance:
https://www.garlic.com/~lynn/subintegrity.html#assurance

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sat, 18 Dec 2004 16:51:53 -0700
"Karl Malbrain" writes:
Why couldn't you emulate the TELCO hardware-circuit with an appropriate layer that used end-to-end "keep-alive" packets to determine whether the "circuit" was up or down? karl m

it wasn't just whether it was up or down ... it was problem determination/isolation ... where in the infrastructure did things fail. the underlying infrastructure might be having sporadic congestion problems resulting in erratic up/down indications; there were possibly dozens of internet operational characteristics that don't show up in a purely circuit-based infrastructure.

we also did some of the telco provisioning stuff with no single point of failure ... a facility that had connections coming into the bldg. from opposite physical directions, going to different physical access points via totally different routes, and choosing an isp that was co-located in a central-exchange type facility with 48v equipment and on the same backup power infrastructure as the other stuff in the phone facility.

in the circuit-based infrastructure there are some leased-lines that have modems, and the modems are frequently probed for availability (aka the keep-alive packet stuff) .... it is when a modem is found to not be available that the trouble desk starts its procedures ... the keep-alive stuff is just the tip of the iceberg.

for instance, what is the connection between the client and the merchant server ... and was there some failure in that part of the transaction, or is it between the merchant server and the payment gateway? if the merchant server isn't talking to the payment gateway (the keep-alive isn't getting thru) ... where is the fault ... and what are the diagnostic procedures that determine the fault in the infrastructure between the merchant web server and the payment gateway?

[Lit.] Buffer overruns

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sat, 18 Dec 2004 16:27:36 -0700
and to bring it slightly back to crypto ...
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

this organization had come up with this stuff called https/ssl.

the stuff was in place for the clients/browsers to authenticate the webservers as part of the https/ssl protocol.

the idea was to use this new thing called https/ssl also for the interface between the webservers and the payment gateway ... so the details of something new called mutual authentication had to be worked out within this thing called https/ssl (it hadn't been done yet).

the payment gateway into the payment infrastructure used a traditional leased-line circuit-based operation, and the trouble desk could do loopback to the modem at the payment gateway ... and set up a procedure to do emulated loopback between a machine at the trouble desk and the actual payment gateway machine (standard bootstrap diagnostic process). traditional circuit-based operation can do something akin to an incremental traceroute ... but at a low-level hardware level (say you had snmp visibility into every hardware box in the world).

the next step was to trace possible service interruptions between the payment gateway and the possible webservers at random locations around the world ... and do problem determination on what components were working and what components might not be working.

this organization that had come up with this stuff called https/ssl was dependent on these things called certificates from various other vendors. so as part of the financial integrity stuff ... we had to go perform detailed audits on the primary vendors providing these things called certificates and write up a vulnerability and exposure report on possible problems (a business vulnerability and exposure analysis ... which wasn't just limited to things like protocols). some of the stuff we recommended got accepted and other stuff didn't.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

FC3 sound

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: FC3 sound
Newsgroups: linux.redhat
Date: Sun, 19 Dec 2004 07:28:40 -0700
Anne & Lynn Wheeler writes:
i'm not sure fc3 gets much better ... i just did brand new fc3 install on new vaio a290 laptop after completely wiping the disk. it has ac97 motherboard chip.

i also recently did upgrade of two machines from fc2 to fc3 ... old dell precision 410 with motherboard CS chip, and a dell dimension 8300 with (bios) disabled motherboard ac97 chip and a soundblaster card.


finally got sound with the a290 undocked ... but still unable to get sound when docked (docking cuts out the internal speakers and routes to the external speakers of the docking station ... during power-on/reset, the bios plays a few notes on the docking station external speakers ... but then all is quiet).

still no success with the two dells.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

creat

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 21 Dec 2004 21:55:39 -0800
Subject: Re: creat
Charlie Gibbs wrote:
No matter how few extra keys there are on your keyboard, you must still lift your hand from its home position and move it to wherever the rodent sits - and then move your hand back again afterwards. Those of us who are skilled touch typists can enter a lot of data in that length of time. I realize that we are in the minority (and therefore by our perverted rules of democracy don't count), but it means that the increasing amount of software that mandates the use of a pointing device leaves us feeling crippled - and therefore resentful.

At least the "eraser tip" pointing devices buried in the middle of the keyboard don't require you to move your hands. That's often enough to make up for their awkwardness compared to a mouse. Not that mice are perfect - although they'll be much better once Microsoft "invents" the mouse accelerator that I've had running on my Amigas for almost 20 years.


the precursor work to the eraser tip was two pressure wands along the space-bar, one for each thumb .... one controlling horizontal and one controlling vertical; this was before the technology was good enuf to do the eraser tip.

before that ... early 80s, the human factors group in san jose had somewhat half-egg-shaped chord keyboards ... one for each hand. there were depressions for the fingertips with rocker switches ... for the chords. numerous people claimed to easily hit 80wpm (after getting used to the patterns). the shape was conducive to also being a mouse ... and your hands never had to leave the device.

this was a more hand-form-fitting chord keyboard than the one used for augment.

couple augment & chord keyboard posts:
https://www.garlic.com/~lynn/2000g.html#31 stupid user stories
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)

misc. other past posts mentioning augment:
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2000g.html#26 Who Owns the HyperLink?
https://www.garlic.com/~lynn/2002o.html#48 XML, AI, Cyc, psych, and literature

CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)

Refed: **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 24 Dec 2004 16:39:08 -0800
Subject: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
Dan Koren wrote:
The most delightful operating system I've ever worked with had complete flexibility in this respect (it was a proprietary hard real time OS no one ever wrote a single paper about). A thread could use/attach/detach as many address spaces as it pleased. Those could in turn be structured in any way one liked, from completely disjointed to identical, and anything in between with any degree or number of overlaps. There was in fact no hard line separating threads from processes, since threads could share as many or as few resources as one liked. The lightest thread had nothing more than a stack and a program counter (very handy for servicing interrupts). Devices and files could be shared (or not) in any combination. Total freedom! ;-)

one of the other things down at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

(besides virtual machines, gml, interactive stuff, early performance and capacity planning work, etc) was the stuff for the internal network. the internal network was larger than the arpanet/internet for just about the whole period up to sometime mid-85 ... which I've claimed was at least partially because it had a gateway-like function in most of the nodes (at least the real vnet nodes ... as opposed to jes & other nodes):
https://www.garlic.com/~lynn/subnetwork.html#internalnet

... something that the arpanet/internet didn't get until the great switch-over on 1/1/83.

several years ago ... the author of the internal network implementation (originally done in 360 assembler) commented that while working on one of the major real-time systems (implemented in C) ... he noticed something familiar ... and cross-checked it with some assembler code he had written 25(?) years earlier. at least the real-time operating system task scheduling module appeared to be a line-for-line translation of his 360 code into C .... including faithfully preserving all his original comments.

high speed network, cross-over from sci.crypt

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 24 Dec 2004 17:02:48 -0800
Subject: high speed network, cross-over from sci.crypt
Anne & Lynn Wheeler writes:
the payment gateway into the payment infrastructure used a traditional leased-line circuit-based operation, and the trouble desk could do loopback to the modem at the payment gateway ... and set up a procedure to do emulated loopback between a machine at the trouble desk and the actual payment gateway machine (standard bootstrap diagnostic process). traditional circuit-based operation can do something akin to an incremental traceroute ... but at a low-level hardware level (say you had snmp visibility into every hardware box in the world).

ref:
https://www.garlic.com/~lynn/2004q.html#35
https://www.garlic.com/~lynn/2004q.html#53

and
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

and
https://www.garlic.com/~lynn/95.html#13

....

even more drift ... my wife and I had done this high-speed backbone
https://www.garlic.com/~lynn/subnetwork.html#hsdt

for the internal network
https://www.garlic.com/~lynn/subnetwork.html#internalnet

the internal network was larger than the arpanet/internet from just about the beginning up until about mid-85. we were not allowed to bid on the original nsfnet1 for whatever reason ... but my wife did talk the nsf director into getting a technical audit of what we had running ... which resulted in a letter that said it was at least five years ahead of all bid submissions
https://www.garlic.com/~lynn/internet.htm#0
https://www.garlic.com/~lynn/rfcietf.htm#history

so we were attempting to create a noc-like environment for the high-speed backbone. we had multiplexors on every T1-or-faster link and were using a subchannel to constantly run BER (bit error rate) testers. the output of the BER testers was pulled off their serial rs232 terminal interface into some software that mapped it into the administrative monitoring operation. we were also building IEEE 488 interfaces for data collection from all the hardware that happened to support it.

now the internal network required that all links leaving a facility be encrypted .... and the claim was that for an extended period the internal network had over half of all the link encryptors in the world.

during this period ... the telcos started demanding that they would no longer support clear-channel ... and that ones-density had to be maintained. since the data-channel was encrypted ... and the link encryptors wouldn't guarantee ones-density ... we eventually had to reconfigure the multiplexor to have two side-channels ... one for the BER boxes ... and one that constantly transmitted a pattern that would guarantee the required ones-density.

the other problem i had was paying something like $16k for link encryptors (these were pretty standard T1 encryptors, not the earlier custom-designed T3 bulk link encryptors) ... so i started work on an encryptor board that i wanted to easily support multi-megabytes/sec, have sufficient keying agility to possibly change key on every minimum-sized packet ... and be manufacturable for well under $100 (some combination of which seemed to bring the MIB around).

so we eventually get up to interop '88
https://www.garlic.com/~lynn/subnetwork.html#interop88

this was the start of the period when ISO/OSI was going to completely replace TCP/IP ... along with the big network monitoring standards ... which were going to completely roll over SNMP (there were a lot of OSI products at interop 88). collection of past comments on gosip, iso/osi, etc
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

Case had snmp demo at the end of a booth that was at the end of a row ... immediately at right angles to a booth where we had a couple workstations. During the course of the show, Case installed snmp on one of these workstations.

from my rfc index
https://www.garlic.com/~lynn/rfcietff.htm
rfc 1067 snmp
https://www.garlic.com/~lynn/rfcidx3.htm#1067
1067 -
Simple Network Management Protocol, Case J., Davin J., Fedor M., Schoffstall M., 1988/08/01 (33pp) (.txt=67742) (Obsoleted by 1098) (See Also 1065, 1066) (Refs 768, 1028, 1052) (Ref'ed By 1089, 1095, 1156, 1704)

.. cut ...

as always clicking on the ".txt=" field in the rfc summary retrieves the actual rfc.

... little digression ... a lot of the burgeoning NOC-stuff for our high-speed backbone was being done in early turbo pascal. a problem became accessing/contacting these distributed boxes. as always ... if you put the command&control channel thru the same data flow it is intended to manage ... you have a single point of failure. you need a separate secure channel for command&control (which can become an snmp issue also).

so we have our NOC-like PCs ... concentrating the diagnostic information and diagnostic capability ... how do we utilize it via a secure channel.

well, the company had started looking at needing encryption for home/travel/hotel access (I had gotten a home terminal in mar70, well before this period). a detailed vulnerability and threat study turned up hotel PBXs as being one of the most vulnerable points around, which led to a requirement for all offsite dial-in lines to also be encrypted. as a result, a new hayes-compatible encrypting modem was designed and built ... it had goo covering all the encryption parts (it did secure session key negotiation at initial startup ... akin to, but different from, what you found much later in ssl). so we got these to put in all the onsite NOC PCs.

a little tale out of school: one of the first testers for this home/travel/hotel encrypting modem was a corporate VP ... who apparently had been an EE in some prior lifetime. he was trying to test out the modem and stuck his tongue on the contacts to see if there was any current. unfortunately, the phone decided to ring at that moment. the result was an edict that all phone jack contacts had to be recessed so that people (especially corporate VPs) were unable to touch them with their tongues ... after the initial encrypting modem testing, all products from the company with phone jacks had deeply recessed contacts.

.....

and more topic drift; one of the things we had done was rate-based pacing ... and we were later on the XTP technical advisory board ... where rate-based pacing was also specified .... a recent news item (with a sketch of the idea after the link) ....

Packeteer's purchase of Mentat could boost XTP Protocol
http://www.commsdesign.com/news/showArticle.jhtml;jsessionid=NDC21Q0RT2AJQQSNDBCCKHSCJUMEKJVN?articleID=56200180
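
for anyone unfamiliar with the term: rate-based pacing clocks transmissions off a target rate rather than off returning acks/window openings. a minimal sketch of the arithmetic (the rate and packet size are purely illustrative):

  #include <stdio.h>

  /* rate-based pacing sketch: instead of ack/window clocking, the
     sender computes an inter-packet interval from a target rate
     and never transmits faster than that */
  int main(void)
  {
      const double rate_bytes_per_sec = 1.5e6 / 8;   /* illustrative T1-ish rate */
      const double pkt_bytes = 1500.0;
      const double interval = pkt_bytes / rate_bytes_per_sec;  /* sec/packet */

      double next_send = 0.0;        /* virtual clock */
      for (int i = 0; i < 5; i++) {
          printf("packet %d scheduled at t=%.6fs\n", i, next_send);
          next_send += interval;     /* pace, don't burst */
      }
      return 0;
  }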

CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 25 Dec 2004 11:42:35 -0800
Subject: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
Dan Koren wrote:
lynn wrote in message > https://www.garlic.com/~lynn/subtopic.html#545tech

This points to a page with hundreds of links.

Which one(s) in particular did you have in mind?

Thx,


lots of past posts about science center, 4th floor, 545 tech sq. ... it is where charlie did the work and invented compare&swap (aka charlie's initials are CAS ... so we had to come up with a mnemonic that used his initials). misc. other posts on smp
https://www.garlic.com/~lynn/subtopic.html#smp

it is where the virtual machine stuff and a lot of interactive stuff was done ... some of the people from ctss went to the 4th floor and the science center, and other people went to the 5th floor and worked on multics. melinda has a much longer treatment of some of this history
https://www.leeandmelindavarian.com/Melinda#VMHist

it is where gml and much of sgml was done ... precursor to html, xml, etc (old location is a block or two from current w3c)
https://www.garlic.com/~lynn/submain.html#sgml

it is where i did a bunch of benchmarking and capacity planning stuff
https://www.garlic.com/~lynn/submain.html#bench

it is where i did a lot of performance stuff (for the 2nd or 3rd time, much of the original had been done and shipped when i was an undergraduate)
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

and it is where the original stuff for the internal network was done
https://www.garlic.com/~lynn/subnetwork.html#internalnet

this thread (compare&swap) had originally started in bit.listserv.ibm-main ... and i had x-posted to alt.folklore.computers to include comments about charlie originally inventing compare&swap and coming up with a mnemonic that matched charlie's initials. somebody else then subsequently x-posted to comp.arch (dropping the original bit.listserv.ibm-main). totally random past postings about the origin of bitnet and earn (where the bit.listserv hierarchy originated ... with a gateway from bitnet/earn mailing lists to usenet)
https://www.garlic.com/~lynn/subnetwork.html#bitnet

note that the original internal network stuff had effectively a gateway-like function built into every node, with a layered implementation and effectively no limitation on node addressing. it ran on the majority of the nodes on the internal network.

the official mainline, strategic batch operating system had something called jes2, which came up with a kind of networking with a driver commonly referred to as nje/nji. the folklore is that the jes nje/nji driver came in large part from its hasp heritage ... and much of the original source code had the letters "TUCC" in cols 68-71. HASP/JES had an implementation that used a 256 entry table of virtual devices. a typical HASP/JES installation might have 60-80 defined virtual devices ... and the "networking" support scavenged the remaining entries for networking nodes (resulting in a typical max. practical number of defined nodes on the order of 180-200 ... a sketch of the arithmetic follows the link). random collection of hasp references (precursor to jes2):
https://www.garlic.com/~lynn/submain.html#hasp
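
the arithmetic behind that limit, as a trivial sketch (the device count is illustrative, from the 60-80 range above):

  #include <stdio.h>

  /* HASP/JES-style one-byte index: 256 possible table entries;
     whatever isn't taken by locally defined virtual devices is
     all that's left over for network node definitions */
  int main(void)
  {
      const int table_entries   = 256;  /* one-byte index */
      const int defined_devices = 70;   /* illustrative typical installation */
      printf("max network nodes: %d\n", table_entries - defined_devices);
      return 0;
  }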

by the time the jes nje/nji drivers were released, the internal network was well over 256 nodes. in some sense this "other" internal network implementation had restrictions similar to some characteristics of the arpanet ... which were alleviated (for the arpanet) with the cut-over to internetworking on 1/1/83 (when it had about 250 nodes).

about that time the internal network was nearing 1000 nodes ... which it reached that summer ... minor reference
https://www.garlic.com/~lynn/internet.htm#22

and the jes nji/nje support was still stuck with its 256 entry table. the other problem was that the nji/nje implementation somewhat confused the networking layer with other characteristics. minor release-to-release changes in some totally unrelated field ... could cause incompatibilities between jes releases ... resulting in JES crashing and bringing down the whole mainframe system (caused by incoming networking traffic from other jes nodes w/mismatched release). aggravating all this, nji/nje drivers would also discard incoming &/or outgoing stuff that originated at and/or was destined for nodes not defined in the local table.

as a result, jes nje/nji nodes had to be restricted to boundary nodes on the internal network behind standard vnet nodes, since they 1) couldn't address all nodes, 2) had a habit of discarding traffic when they didn't recognize either the originating &/or destination node, and 3) had a habit of crashing and bringing down the related operating system. so there was a whole family of nje/nji drivers developed for the native vnet nodes .... which would canonicalize the nje/nji header information and then translate it to the specific format expected by the release/version of the specific jes boundary node it happened to be talking directly to (to avoid the system failure scenarios). there were some infamous incidents involving changes to san jose jes systems crashing mainframes in Hursley.

jes nje/nji drivers did get enhanced to support 999 nodes well after the internal network had exceeded 1000 nodes ... and were later enhanced to support 1999 nodes well after the internal network was over 2000 nodes. however, it was the vanilla nje/nji drivers that were shipped to customers in the bitnet/earn time-frame (the higher performance and more functional native drivers by that time being restricted to internal corporate use only).

my wife and i did do a high performance backbone as part of hsdt
https://www.garlic.com/~lynn/subnetwork.html#hsdt

but weren't allowed to bid on the nsfnet backbone ... we did get an NSF audit which stated that what we had running was at least five years ahead of all bid submissions for building the new nsfnet backbone ... minor reference
https://www.garlic.com/~lynn/internet.htm#0

also, a minor recent x-post to a.f.c. from sci.crypt
https://www.garlic.com/~lynn/2004q.html#57

CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)

From: lynn@garlic.com
Newsgroups: comp.arch, alt.folklore.computers
Date: 25 Dec 2004 13:52:49 -0800
Subject: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
Dan Koren wrote:
This points to a page with hundreds of links. Which one(s) in particular did you have in mind?

... oh, and it is at least small progress over including the hundreds of URLs to past postings in the posting itself.

Will multicore CPUs have identical cores?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Date: 26 Dec 2004 10:18:55 -0800
Subject: Re: Will multicore CPUs have identical cores?
Dan Koren wrote:
Not meaning to taunt you ;-) but if you're willing to have dedicated silicon for the kernel, one might as well put the kernel in silicon. Nowadays, compiling C/C++ into silicon isn't that much harder than compiling C/C++ into soft object files.

can you say i432?

the last sigops held only at asilomar (there was the midnight vote to alternate coasts and hold it in different places ... something about discriminating against mit students who couldn't afford the trip to the opposite coast) ... '81?

the i432 presentation had some amount to say about the difficulty of applying bandaids to deployed silicon in the field.

although sigops made it back again in 91 ... with a side trip to the monterey aquarium .... i remember having a running discussion about whether it would be possible to build scalable, high-availability systems out of commodity parts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

random past posts mentioning the 432
https://www.garlic.com/~lynn/2000d.html#57 iAPX-432 (was: 36 to 32 bit transition
https://www.garlic.com/~lynn/2000d.html#62 iAPX-432 (was: 36 to 32 bit transition
https://www.garlic.com/~lynn/2000f.html#48 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001k.html#2 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2001m.html#32 Number of combinations in five digit lock? (or: Help, my brain hurts)
https://www.garlic.com/~lynn/2002d.html#46 IBM Mainframe at home
https://www.garlic.com/~lynn/2002l.html#19 Computer Architectures
https://www.garlic.com/~lynn/2002q.html#11 computers and alcohol
https://www.garlic.com/~lynn/2003c.html#17 difference between itanium and alpha
https://www.garlic.com/~lynn/2003e.html#54 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#55 Reviving Multics
https://www.garlic.com/~lynn/2003e.html#56 Reviving Multics
https://www.garlic.com/~lynn/2003.html#5 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003m.html#23 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
https://www.garlic.com/~lynn/2003m.html#47 Intel 860 and 960, was iAPX 432
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004d.html#12 real multi-tasking, multi-programming

the i432 documentation makes some reference to s/38 ... which is supposedly the remnants of the canceled FS project (with some FS refugees retreating to rochester to build the s/38) ... random past FS postings
https://www.garlic.com/~lynn/submain.html#futuresys

will there every be another commerically signficant new ISA?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Date: 26 Dec 2004 11:03:35 -0800
Subject: Re: will there every be another commerically signficant new ISA?
Rick Jones wrote:
Yes, I remember using Landrew (Andrew) as an undergrad at CMU and of the three - Sun 3/mumble, DEC MicroVaxII and IBM PC/RT we tended to use the PC/RT last - again part of the dimm memory. That would have been <= 1988.

but the romp in the pc/rt was originally targeted as a displaywriter replacement by the office products division ... it was cp.r-based, written in pl.8. it was only after that product got killed that the idea of retargeting to the unix workstation market came up. the pl.8 programmers were put to work building the VRM ... an abstract, virtual machine interface ... and an AT&T UNIX port was subcontracted to the company that had done the PC/IX port (but to the VRM interface ... rather than the bare metal).

note that later, the palo alto acis group (who had been working on a bsd->370 port using the metaware c-compiler) were redirected to doing a pc/rt port (although building to the bare metal rather than the vrm layer).

random past 801, romp, rios, etc posts
https://www.garlic.com/~lynn/subtopic.html#801

will there every be another commerically signficant new ISA?

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 26 Dec 2004 11:11:29 -0800
Subject: Re: will there every be another commerically signficant new ISA?
Terje Mathisen wrote:
It is interesting though that the part which has increased the least (by far!) is screen resolution: At 1920x1200 ( = 2+ MPix) my laptop is at the high end of most systems, and a dual-screen workstation with (combined) 4096x1536 (= 6 MPix) resolution is close to the limit for commodity systems.

i recently got a new laptop with a 17in screen and set it for 1920x1200; i completely wiped the machine and installed linux ... there are still some number of features that aren't working yet ... but i'm getting used to the 1920x1200 screen (this post comes from that laptop).

creat

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 26 Dec 2004 16:32:46 -0800
Subject: Re: creat
Brian Inglis wrote:
CMS screen filelist had some nice features: type a meta-command over a file in the directory listing on screen, type '=' over random other files to perform the same meta-operation on them, then hit enter to execute all the commands.

Some PC file manager programs had a similar useful feature.

MS products never offered any such useful capabilities.


Theo Alkema (uithoorn hone system) did fulist, browse, and ios3270 for cms; there were then some number of clones ... including an XEDIT macro implementation.

later, jim wyllie at sjr did a similar package for ibm/pc that was released under some software productivity program.

some random past refs:
https://www.garlic.com/~lynn/2001f.html#8 Theo Alkema
https://www.garlic.com/~lynn/2001f.html#9 Theo Alkema
https://www.garlic.com/~lynn/2001f.html#21 Theo Alkema
(apologize if this is dup, just had a connection glitch)

Will multicore CPUs have identical cores?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Date: 26 Dec 2004 17:08:10 -0800
Subject: Re: Will multicore CPUs have identical cores?
Dan Koren wrote:
BTW I once interviewed for a job on the iAPX432, which I did not get since the interviewing crowd obviously did not think I shared enough of their religion! ;-)

i worked on putting critical pieces of the vm operating system into microcode on the 138/148. there was about a 10:1 ratio of native engine instructions to 370 instructions ... and much of the operating system type code dropped into microcode instructions almost on a one-for-one basis ... giving a 10:1 speedup ... and in some cases where state save/restore could be avoided ... more than a 10:1 speedup. the feature was released as ecps. the design constraint was that there was 6kbytes of microcode space available for the effort (so we were looking for approx. the highest-used 6kbytes worth of kernel instructions) ... a specific ecps microcode posting
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
random additional posts mentioning microcode efforts (a sketch of the 6kbyte selection idea follows the link)
https://www.garlic.com/~lynn/submain.html#mcode
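
a minimal sketch of that selection idea, with hypothetical profile numbers: order kernel code paths by how much time they account for, then take them in order until the 6kbyte microcode budget runs out:

  #include <stdio.h>
  #include <stdlib.h>

  /* hypothetical kernel profile entry: a code path, its size in
     bytes, and the fraction of kernel time it accounts for */
  typedef struct { const char *path; int bytes; double usage; } kpath;

  static int by_usage_desc(const void *a, const void *b)
  {
      double d = ((const kpath *)b)->usage - ((const kpath *)a)->usage;
      return (d > 0) - (d < 0);
  }

  int main(void)
  {
      kpath prof[] = {            /* illustrative numbers only */
          { "dispatch",      900, 0.22 },
          { "page-fault",   1400, 0.18 },
          { "svc-entry",     600, 0.15 },
          { "free-storage", 1200, 0.11 },
          { "spool-i/o",    2500, 0.06 },
      };
      int n = sizeof prof / sizeof *prof;
      int budget = 6 * 1024;      /* the 6kbyte microcode space */

      qsort(prof, n, sizeof *prof, by_usage_desc);
      for (int i = 0; i < n; i++)
          if (prof[i].bytes <= budget) {
              budget -= prof[i].bytes;
              printf("drop %s into microcode (%d bytes left)\n",
                     prof[i].path, budget);
          }
      return 0;
  }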

about the same time I was also working on a 5-way SMP ... where i had a little more latitude in designing microcode features. I did a somewhat better job of designing a logical dispatch/scheduling metaphor for abstracting execution of the multiple processors. I also got to do a higher level abstraction of some other common operating system features in microcode. unfortunately this project, called VAMPS, was canceled before the product shipped ... lots of random past posts mentioning VAMPS and abstracting smp kernel support:
https://www.garlic.com/~lynn/submain.html#bounce

an issue here was that all of these machines loaded microcode from floppy at power-up ... as opposed to having it etched in silicon and requiring physical stuff to be done to correct issues. a new floppy could just be shipped out for corrections. minor digression ... the floppy had been invented by shugart originally for loading microcode into 370 disk controllers ... but it came to be used for loading microcode into lots of other boxes as well ... lots of past misc. posts about work in bldg. 14&15, disk engineering and product test labs:
https://www.garlic.com/~lynn/subtopic.html#disk

the sigops i432 presentation had a lot to say about the difficulty they had with complex operating system stuff embedded in silicon and the problems with fab'ing new silicon (with fixes) and getting the new generation of chips out into the running boxes. the smp abstraction that they had done in the i432 was similar to the earlier stuff that i had done for VAMPS ... except that if any problems had shown up in VAMPS, i would have had a much easier time correcting them by shipping out new floppy disks.

this was done in the 138/148 ECPS case (and available on the follow-on 4331 & 4341 processors) ... not so much because of bugs found in ECPS ... but because ECPS was so tightly tied to specific operating system operation ... that sometimes you needed a new ECPS load to go along with newer kernel changes. the kernel did have a workaround: it was able to query the ECPS version loaded and disable specific ECPS features that were incompatible with the executing kernel.

list of past i432 posts included in the previous post
https://www.garlic.com/~lynn/2004q.html#59

Will multicore CPUs have identical cores?

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 27 Dec 2004 11:16:06 -0800
Subject: Re: Will multicore CPUs have identical cores?
Stephen Fuld wrote:
I am not going to comment on the idea of putting the kernel into silicon except to say that makes changes harder. But my original proposal was NOT for a "kernel only" core, but for a core which had no floating point nor enhanced graphics instructions in order to save die space in a many cored system. I felt that many, and for some applications, the vast majority of threads, even non kernel ones, could run on such cores. Nick made the point about many "non-traditional" uses of floating point in what one might assume were integer-only threads (which still surprises me!), thus believing that my idea would not be useful. Sander was the one who suggested segregating workload to different cores based on the kernel/non-kernel distinction

relatively recently, i actually got a silicon-only chip fab'ed ... everything is cast in silicon and there is no provision for changing any of the programming (w/o replacing the chip). there is a small amount of eeprom ... but for data only. note, however, it is a relatively straightforward function w/o a lot of options.
https://www.garlic.com/~lynn/x959.html#aads

Will multicore CPUs have identical cores?

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Date: 27 Dec 2004 13:31:29 -0800
Subject: Re: Will multicore CPUs have identical cores?
Dan Koren wrote:
Isn't this exactly what an OS is supposed to be? ;-)

long ago and far away, my first programming job (as an undergraduate) was to re-implement 1401 MPIO (unit record<->tape front end for the 709) in 360 assembler to run on a 360/30 (as opposed to running the 360/30 in 1401 emulation mode). I got to design and write my own task manager, storage/buffer manager, interrupt handler, device drivers, etc. the program grew to about 2000 cards. basically a fairly simple monitor ... as a distinction from things normally called operating systems. except for the crypto, what is on the chip is less complex than that long-ago & far-away monitor (nearly 40 years ago).

Integer types for 128-bit addressing

From: lynn@garlic.com
Newsgroups: comp.arch
Date: 26 Dec 2004 11:34:30 -0800
Subject: Re: Integer types for 128-bit addressing
del cecchi wrote:
I was happy before. I was just surprised by the "go to the board and write this program, work this problem" because I thought it was dead for the reasons you so clearly documented. Microsoft didn't exist the last time I was looking for a job. The practice cited was occasionally encountered in those days. Probably why I didn't get an offer from HP Microwaves. Ah well, their loss. :-)

hp was doing it at least in the mid-90s. the unix products group was trying to decide between staying with a box orientation ... or adopting a systems orientation ... and was bringing in people to interview for chief system architect.

Will multicore CPUs have identical cores?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Will multicore CPUs have identical cores?
Newsgroups: comp.arch
Date: Tue, 28 Dec 2004 13:06:15 -0700
prep writes:
It is a pity that the code for RSX was not more widely and better known. RSX could handle CPUs with not differing ISA sets, but different and dynamically changing IO topologies. Each CPU had its own IO bus, and they could connect to other buses via a bus switch, so part of the fun was "what CPU do we have to be on to diddle the controller that can do this IO...". All done long ago. Oh, and that could still handle the nasty case of asynchronous FP-11 interrupts into kernel mode.

BTW, lots of this was dormant in VMS, and was I'm told resurrected for Galaxy systems.


standard 360 & 370 SMPs had shared memory ... but independent I/O (channels). a characteristic of 360 & 370 multiprocessors was that they could be separated into independent uniprocessors and still function with their independent i/o interfaces.

common i/o was simulated by having device controllers with multiple I/O (channel) attachments ... you configured the same controller at the same address on the different I/O (channel) interfaces for the different processors. basically the multi-interface device controllers used the same technology for providing common device addressability in smp ("tightly-coupled") configurations as well as availability and common access in cluster ("loosely-coupled") configurations. the SMP i/o driver had to be capable of recognizing the situation where a device controller was only available on a specific i/o interface for a specific processor, as well as possibly available on i/o interfaces for all processors.

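a rough sketch (invented names ... nothing like real channel-program code) of that path-selection idea ... the smp i/o driver keeps, per device, which channel each processor can reach it on, and recognizes when the i/o has to be handed to a processor that actually has a path:

#include <stdio.h>

#define NCPUS 2

struct device {
    const char *addr;
    int channel[NCPUS];   /* channel path per processor, -1 = not attached */
};

/* controller at address 190, attached to a channel on cpu 0 only */
static struct device dasd = { "190", { 1, -1 } };

static int pick_channel(const struct device *d, int cpu)
{
    if (d->channel[cpu] >= 0)
        return d->channel[cpu];   /* reachable from this processor */
    return -1;                    /* i/o must be handed to a processor
                                     that does have a path */
}

int main(void)
{
    for (int cpu = 0; cpu < NCPUS; cpu++)
        printf("cpu %d -> device %s: channel %d\n",
               cpu, dasd.addr, pick_channel(&dasd, cpu));
    return 0;
}
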
the exception was the 360/67 multiprocessor which had something called the channel controller ... which configured both memory boxes and channel interfaces. with the channel controller you could cleave a multiprocessor configuration into uniprocessors ... allocating specific memory boxes and channel interfaces to specific processors. in a multiprocessor configuration, the channel controller provided configuration so that all processors could access all memory boxes (as found in the rest of 360 & 370 multiprocessor operation) as well as access to all channel interfaces (not available in the other 360 & 370 multiprocessor configurations).

360/67 also supported both 24-bit and 32-bit addressing. it wasn't until you got to the 3081 dyadic ... that you again found a configuration where all processors could access all i/o (channel) interfaces (and supported both 24-bit and 31-bit ... not 32-bit ... addressing). however, it wasn't possible to cleave a 3081 dyadic into two independently operating uniprocessors.

slight drift, my wife did her stint in pok in charge of loosely-coupled (aka cluster) architecture ...
https://www.garlic.com/~lynn/submain.html#shareddata

somewhat useful when we later started ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

minor specific reference
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Organizations with two or more Managers

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Organizations with two or more Managers
Newsgroups: microsoft.public.sqlserver.programming,comp.databases.theory
Date: Tue, 28 Dec 2004 16:41:17 -0700
"DBMS_Plumber" writes:
Actually, I believe it to be the norm. (Learnt this while failing to persuade an ERP to use SQL tree encodings.)

Google("Matrix Management")

The idea is that employees have a role, and a set of skills. You compose a project team out of people with different skill sets (you call the team lead the Project Manager) but you also need to manage people according to their skills (called your Functional Manager). Think about how many DBAs and programmers report to both their own "boss" and the managers of the projects they're involved in. More projects, more skills; more managers.

It might be OK if the management chain was hierarchical, but that ain't so. Project managers might have responsibility for several projects, some of which involve the same individual contributors.

What's worse, it ain't even acyclic! An individual might be a contributor on project A, a manager of project B, and manage people with certain skills in role C, one of whom is the manager of project A! The landscape of modern business organizations is less and less 'command and control' and more and more 'social networking with shared goals and responsibilities.'


boyd's talk organic design for command and control, i sponsored boyd's talk at sjr ... about the time system/r tech transfer was going on from sjr to endicott for sql/ds.

misc. reference:
http://www.belisarius.com/
https://web.archive.org/web/20010722050327/http://www.belisarius.com/

lots of boyd references
https://www.garlic.com/~lynn/subboyd.html#boyd

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CAS and LL/SC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CAS and LL/SC
Newsgroups: comp.arch
Date: Tue, 28 Dec 2004 17:19:12 -0700
Rupert Pigott writes:
So how does that prove that message passing is inherently slower than a shared memory programming model ? Also, don't databases *still* get shipped and operated on clusters ? Are you seriously telling me that there's no message passing going on underneath all that ?

one of the issues in the cluster stuff from the 90s was that while locking could be message passing ... records tended to be flushed from cache to disk and then read back from disk. as part of the ha/cmp distributed lock manager ... i did some work on being able to pass record(s) piggybacked on the same message with the lock ... as opposed to forcing record(s) out to disk before coming back in on another processor.

the issue in the non-clustered case is that some form of fast commit could be done with the altered image written to log ... and lazy writes for the record in the buffer migrating to disk. crossing a processor boundary wasn't necessarily so much the message passing ... it was forcing the record(s) to transit to disk and back in (rather than an effectively buffer-to-buffer copy).

the problem wasn't so much the buffer copy ... it was that fast commits for the same record might exist in several different logs, and the problem of ordering records from multiple logs in a recovery scenario.

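a toy illustration (all names invented ... not the actual ha/cmp dlm structures) of the piggyback idea ... the lock-grant message optionally carries the current record image, so the new lock holder does a buffer-to-buffer copy instead of waiting for the old holder's disk write and then re-reading:

#include <stdio.h>
#include <string.h>

#define RECLEN 128

struct lock_grant {
    unsigned lock_id;
    int      has_record;       /* 1 if the record image rides along */
    char     record[RECLEN];
};

static char buffer_cache[RECLEN];

static void receive_grant(const struct lock_grant *g)
{
    if (g->has_record)
        memcpy(buffer_cache, g->record, RECLEN);   /* buffer-to-buffer */
    else
        printf("lock %u granted ... must read record from disk\n",
               g->lock_id);
}

int main(void)
{
    struct lock_grant g = { 42, 1, "current record image" };
    receive_grant(&g);
    printf("cached: %s\n", buffer_cache);
    return 0;
}
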
lots of ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

random past posts about dlm
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#5 Tera

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

will there every be another commerically signficant new ISA?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: will there every be another commerically signficant new ISA?
Newsgroups: comp.arch
Date: Tue, 28 Dec 2004 19:47:32 -0700
Lee Witten writes:
I agree. I was at IBM at the same time as DEC was making hay with the VAXen, and IBM liked to point out that the S3x/AS4xx business alone was much bigger than the VAX business.

I was at DEC during the decline/demise period and kept wondering why they were killing the goose laying the golden eggs. For gods sake, encouraging your customers to migrate to Windows NT? What were they thinking?


vax and 4341 had crossed a price/performance threshold and lots of them were being deployed as departmental servers/computers. there were customers buying 4341s in units of hundreds. internally, you saw places like STL converting a conference room on every floor, in every tower, to a 4341 departmental computer room. you then started seeing this market segment being eaten by workstations and then larger PC-class machines.

from thread in a.f.c. giving domestic & world-wide vax shipments by year
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction

for some drift .... the following post on vaxcluster dlm:
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures

slightly related recent post
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC

one of the advantages that I had when I started the DLM for HA/CMP ... was a lot of input from various RDBMS vendors that had done vax/cluster implementations and what they thought would be improvements.

in any case, back to a departmental server/computer market thread in a.f.c
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

marginally related thread:
https://www.garlic.com/~lynn/2003f.html#48 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?

the explosion in 4341 deployments also helped fuel some of the early '80s growth in the internal network. part of the internet overtaking the internal network in number of nodes in mid-85 was that the internal network remained mainframe nodes ... while you started to see a big increase in workstations and PCs as internet nodes.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IUCV in VM/CMS

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IUCV in VM/CMS
Newsgroups: alt.folklore.computers
Date: Wed, 29 Dec 2004 09:17:52 -0700
"David Wade" writes:
I know there are one or two folks who worked on this type of thing in the early days. Normally in VM/CMS when you want to call a CP (VM) service, the DIAGNOSE instruction is used. However for IUCV another unused OP Code was substituted. Any one any ideas why?

when i was an undergraduate ... i had done a lot of fastpath stuff to get os/360 thruput improved. i then did some generalized stuff for paging and scheduling. after that, i was looking at some of the cms pathlength ... and noticed that cms disk i/o always used the same ccw pattern (effectively cms pre-formatted a ckd disk as logical fixed block and then used record-oriented access ... stuff that is common today with fixed-block disks) and always waited for the disk i/o to complete w/o bothering with any asynchronous activity. reference to a long-ago and far-away presentation that I made at the fall68 share meeting
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

so i invented a "new" ckd ccw opcode ... which condensed the seek/search/read/write sequence into two ccws ... and furthermore the ccws were defined to always return CC=1, csw stored (aka from the virtual machine standpoint, the i/o was complete when the SIO instruction finished ... aka it became synchronous instead of asynchronous).

cambridge (primarily bob adair) was very emphatic about not violating the principles of operation ... and suggested instead that the implementation be done with a "diagnose" instruction (under the theory that the principles of operation defines the diagnose instruction as being machine/model dependent ... leaving an opening for cambridge to define a virtual machine model ... and be free to define how the diagnose instruction operates for a virtual machine model). the cms disk i/o was then remapped to effectively do the same thing that i had done with cc=1, csw stored ... but with a diagnose code.

cms was modified to test for running in a real machine or in a virtual machine at boot ... and set a flag. disk i/o routine then tested the flag and either used real machine SIO sequence ... or virtual machine diagnose sequence. later in the morphing from cp/67 to vm/370 ... and from cambridge monitor system to conversational monitor system ... the option for CMS to run on bare/real machine was eliminated.

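a very rough sketch in C of that boot-time test-and-flag pattern (the real thing was 360 assembler ... nothing here is actual cms code, just the shape of it):

#include <stdio.h>

static int under_cp;   /* set once at boot */

/* stand-in for the real "are we running in a virtual machine" test */
static int running_in_virtual_machine(void) { return 1; }

static void disk_read(unsigned block)
{
    if (under_cp)
        /* diagnose: i/o is complete when the instruction returns */
        printf("diagnose read of block %u, synchronous\n", block);
    else
        /* bare machine: channel program plus wait for the interrupt */
        printf("SIO seek/search/read of block %u, wait for i/o\n", block);
}

int main(void)
{
    under_cp = running_in_virtual_machine();
    disk_read(7);
    return 0;
}
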
later, at cambridge, i had done a whole set of things for supporting automated benchmarking, including something called the (cp) autolog command. the cp kernel at startup had some changes to simulate the autolog command for an "autolog" virtual machine. You could then put code in the autolog virtual machine to perform all sorts of startup initialization ... as well as automatically autologging other virtual machines.
https://www.garlic.com/~lynn/submain.html#bench

there was starting to be a growing number of virtual machines that provided generalized services ... or service machines ... that on cp boot/ipl required the operator to login, start some application, and then "disconnect".

I had also done some stuff that i referred to as virtual memory management ... a bunch of stuff having to do with shared segment operation as well as a paged mapped file system. misc. vmm posts
https://www.garlic.com/~lynn/submain.html#mmap
https://www.garlic.com/~lynn/submain.html#adcon

there are actually two early cambridge scientific center reports titled "virtual memory management I" and "virtual memory management II".

at this time there was starting to be specialized code in shared segments ... but the only interface to the shared segment facilities was via the (virtual) ipl command (and being able to IPL named systems ... which could include shared segment definition). One facility that this was made available for was APL ... the interpreter was a large body of code ... it was sort of packaged as part of the CMS kernel and a special named system was saved ... which then could be "IPL"ed where both the CMS kernel and the APL interpreter code were "shared".

Possibly the largest vm time-sharing service
https://www.garlic.com/~lynn/submain.html#timeshare
was HONE
https://www.garlic.com/~lynn/subtopic.html#hone

which was a vm/cms service with heavy use of apl applications that supported world-wide branch office & field sales and marketing. the problem for hone was that it also had some number of applications written in fortran ... and while it was possible to automagically put a salesman into the APL application environment as soon as they logged on ... it was rather kludgy to have a salesman perform "IPL CMS" and "IPL APL" to switch between some fortran-based applications and apl-based applications. besides the "VMM" as local modifications at cambridge and some number of other internal locations ... HONE was a big consumer of the "VMM" operations ... enabling them to automagically switch between APL and FORTRAN application environments (transparent to the sales and marketing people around the world).

So along comes VM/370 release 3 ... and they pick up the autolog command and a subset of the VMM stuff for product release (the vmm subset was called DCSS ... or discontiguous shared segments).

the other stuff included in VM/370 release 3 was vmcf and special message. the standard environment had a message command so that users could send messages to other users on the system or send requests to the (human) operator to have something or other done. An issue was how to request stuff from the service virtual machines that had no human operator. vmcf/spm allowed a virtual machine to set an option so that incoming messages were intercepted before physical display on the terminal and placed in a buffer that could be read by a program. so one thing you could do was send messages to the network service machine ... which would parse them and do specific things.

the network service machine then had a list of privileged users that were allowed to send control commands. another use, for general users, was to send a message to the network service machine which would be forwarded to another node on the network and delivered in real time to a remote user. so the standard system started out having "instant message" support as the default for users on the same system ... and the methodology using the network service machine extended this "instant messaging" paradigm to all users in the network.

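a toy sketch (invented names and message format) of the service-machine pattern ... intercepted messages land in a program-readable buffer, control commands are honored only from the privileged-user list, and ordinary messages are forwarded:

#include <stdio.h>
#include <string.h>

static const char *privileged[] = { "OPERATOR", "MAINT" };

static int is_privileged(const char *user)
{
    for (size_t i = 0; i < sizeof privileged / sizeof *privileged; i++)
        if (strcmp(user, privileged[i]) == 0)
            return 1;
    return 0;
}

static void handle_message(const char *from, const char *text)
{
    if (text[0] == '.' && is_privileged(from))
        printf("control command from %s: %s\n", from, text + 1);
    else
        printf("forward message from %s: %s\n", from, text);
}

int main(void)
{
    handle_message("OPERATOR", ".shutdown link A");
    handle_message("GUEST", "hello remote user");   /* "instant message" */
    return 0;
}
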
up until this point ... most of the work was still being done by people in the cambridge area. the development group had broken off from the science center and moved into the 3rd floor and taken over the boston programming center ... and when the group outgrew that, it moved out into the vacated SBC (service bureau corporation) bldg in burlington mall.

also in this time-frame i was doing what became the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

and working on ecps (released in the vm/370 release 3 time-frame, vm microcode enhancement for 138/148) and VAMPS (5-way smp support that was canceled before customer ship) ... recent post
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores

general ecps & microcode posts
https://www.garlic.com/~lynn/submain.html#mcode
and VAMPS
https://www.garlic.com/~lynn/submain.html#bounce

so with ecps, endicott was getting heavily into vm support for its product line. there was even a whole strategy that had been done for the 138/148 to release the machine as a vm/370-only machine ... with vm/370 preloaded as it came from the factory ... which got pre-empted before it actually was able to ship to customers (which would have made it almost like current-day LPARs ... where effectively a stripped-down subset version of VM ships as part of the microcode of all machines).

in any case, what i remember of IUCV was that it came out of people with engineering background in the endicott lab ... as opposed to the vmcf/spm oriented stuff. Folklore has the guilty party as Tom DeForrest ... who long ago and far away had the email address EG16TNDC@GDLS2. In theory, IUCV was going to be able to map directly into the microcode of the real machine w/o needing to be handled via the DIAGNOSE code support in the cp kernel.

some drift ... the resource manager was the first charged-for kernel/SCP code. previously application code was licensed/charged-for ... but all kernel code was still free. the resource manager got to be the guinea pig for kernel priced software (in theory kernel code that was needed for direct hardware support, device drivers, etc ... was still free ... but other kernel code could be charged for). i got to spend six months off & on with the business people mapping out the business strategy for the change to charging for kernel code.

the other opportunity, as part of ecps ... was putting together the strategy for 138/148 w/ECPS and the possibility of a vm/370-only machine. At the time, the domain for the higher-end machines 158/168 was primarily the US ... but 138/148 had much more of a world-wide market. Putting together the ECPS & VM/370 business strategy for 138/148 ... required various one-week meetings with not only US business planning people ... but EMEA and AFE business planning people as well as business planning people from the larger world trade companies (england, germany, france, japan, etc).

I had interacted with some of the organizations ... but at a different level ... as part of deploying and propagating HONE-clones world-wide (but this was in support of the sales, marketing, and business planning people ... as opposed to directly working with them on projects).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Athlon cache question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Athlon cache question.
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 29 Dec 2004 11:35:18 -0700
"Dan Koren" writes:
IBM researched cache architectures pretty much to death during the '70s.

Of course younger designers tend to ignore the work of earlier generations.


slightly related is the topic of replacement strategies ... my recollection is that it was at the same asilomar/sigops meeting where the i432 people presented ... including lots of comments about patching complex operating system features that had been dropped into silicon ... recent refs:
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?

... jim introduced me to a co-worker at tandem who was having trouble getting his stanford phd ... so it must have been after jim had left sjr for tandem .... random sjr/systemr posts
https://www.garlic.com/~lynn/submain.html#systemr

the problem was that the thesis was basically on a global LRU replacement strategy and stanford was getting a lot of pushback from a strong advocate of local LRU replacement.

in the late 60s there was some academic literature on local LRU replacement and working sets. at that time i was an undergraduate and doing lots of operating system changes ... including having come up with this global LRU idea, implemented it and it had shipped in products.

the problem at hand was to show global LRU replacement significantly better than local LRU.

much of the 70s, i was at the cambridge science center ... and in the early 70s the grenoble science center had done a study using the same operating system and the same hardware and the same type of workload ... but implementing a "working set dispatcher", and had even gotten a paper published in the cacm. they had come by cambridge and left me with a rough draft ... as well as a lot of the detailed backup study information. The primary difference between cambridge and grenoble was that grenoble had a 1mbyte real storage 360/67 (154 4k "pageable pages" after fixed memory requirements) and were running 35 concurrent users while cambridge had a 768k real storage 360/67 (104 4k "pageable pages" after fixed memory requirements) and 75-80 users. Cambridge with approx. twice the load and significantly smaller real storage using a global LRU replacement strategy was getting about the same performance as the Grenoble "working set dispatcher" and local LRU (with half the users and 50 percent more effective paging storage).

In any case, all the backup material showing global LRU significantly outperforming local LRU on directly comparable hardware, software, and workload helped tip the balance in getting the PhD approved.

Not included in the comparison ... but about that time in the early 70s, I was also playing with some coding tricks in the global LRU implementation. Normally any sort of LRU-like implementation effectively degrades to FIFO when there isn't sufficient information to otherwise distinguish reference patterns between pages (and except in some rare cases, FIFO isn't a particularly good replacement strategy). I had a sleight-of-hand coding trick for global LRU ... which continued to look, smell and taste like standard global LRU replacement ... but had the unusual characteristic of degrading to random replacement (instead of FIFO). in lots of detailed simulation studies it was shown to out-perform true LRU across a wide variety of conditions (as compared to the LRU-approximation implementations, which strived just to be nearly as good as true LRU).

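for reference, a minimal sketch of the basic global clock (LRU-approximation) idea ... one reference bit per frame and a single global hand; the variant described above additionally detected when recent-reference history had stopped being useful and degraded toward random rather than FIFO, which this plain version does not attempt:

#include <stdio.h>

#define NFRAMES 8

static int refbit[NFRAMES];   /* set by hardware on reference */
static int hand;

static int select_victim(void)
{
    for (;;) {
        if (!refbit[hand]) {             /* not referenced since last sweep */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        refbit[hand] = 0;                /* second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    refbit[0] = refbit[1] = 1;           /* frames 0 & 1 recently touched */
    printf("replace frame %d\n", select_victim());
    return 0;
}
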
lots of past replacement algorithm postings
https://www.garlic.com/~lynn/subtopic.html#wsclock

some specific past references to the thesis (as well as grenoble paper):
https://www.garlic.com/~lynn/93.html#4 360/67, was Re: IBM's Project F/S ?
https://www.garlic.com/~lynn/94.html#1 Multitasking question
https://www.garlic.com/~lynn/99.html#18 Old Computers
https://www.garlic.com/~lynn/2001c.html#10 Memory management - Page replacement
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names

for some topic drift mention of jim leaving sjr and going to tandem:
https://www.garlic.com/~lynn/2002k.html#39 Vnet : Unbelievable
https://www.garlic.com/~lynn/2002o.html#73 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2002o.html#75 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2004c.html#15 If there had been no MS-DOS
https://www.garlic.com/~lynn/2004l.html#31 Shipwrecks

some of the cambridge detailed trace and simulation work was eventually released as a product called vs/repack in the mid-70s (i.e. it took a detailed trace of an application and attempted a semi-reorg of the large application to minimize paging). it was also used by a number of corporate products that were making the transition from the real storage environment to virtual memory (compilers, database managers, etc) ... as well as starting to study cache-sensitivity issues ...

random past vs/repack references:
https://www.garlic.com/~lynn/94.html#7 IBM 7090 (360s, 370s, apl, etc)
https://www.garlic.com/~lynn/99.html#68 The Melissa Virus or War on Microsoft?
https://www.garlic.com/~lynn/2000g.html#30 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#31 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001c.html#33 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001i.html#20 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2002c.html#28 OS Workloads : Interactive etc
https://www.garlic.com/~lynn/2002c.html#45 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#46 cp/67 addenda (cross-post warning)
https://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
https://www.garlic.com/~lynn/2002e.html#50 IBM going after Strobe?
https://www.garlic.com/~lynn/2002f.html#50 Blade architectures
https://www.garlic.com/~lynn/2003f.html#15 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#21 "Super-Cheap" Supercomputing
https://www.garlic.com/~lynn/2003f.html#53 Alpha performance, why?
https://www.garlic.com/~lynn/2003g.html#15 Disk capacity and backup solutions
https://www.garlic.com/~lynn/2003h.html#8 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#32 Language semantics wrt exploits
https://www.garlic.com/~lynn/2004c.html#21 PSW Sampling
https://www.garlic.com/~lynn/2004.html#14 Holee shit! 30 years ago!
https://www.garlic.com/~lynn/2004m.html#22 Lock-free algorithms
https://www.garlic.com/~lynn/2004n.html#55 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#7 Integer types for 128-bit addressing

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 29 Dec 2004 12:05:30 -0700
Paul Rubin <http://phr.cx@NOSPAM.invalid> writes:
Unless your large systems that you've actually deployed are 100% bug-free, and they're not, you need every safety net you can get.

my original comments were that in the 90s and into the current decade (actually we started looking at it in the late 80s), between 1/3rd and the majority of all exploits were buffer overrun/overflow. i also relatively recently looked at the vulnerability database to see if there was any semantic structural information for doing stuff similar to my taxonomies ... security, x9f crypto, payment, financial, etc https://www.garlic.com/~lynn/index.html#glosnote

... there wasn't (although they've since come out and said that they will attempt to start doing classification). i did do word & word-pair frequency analysis ... and "buffer <something>" (overflow, overrun, exploit, etc) showed up in approximately 1/5th of the exploit type descriptions in the vulnerability database (other vulnerabilities may have also involved buffer exploits ... it was just that the free-form description format didn't happen to explicitly call it out). the descriptions tended to slightly favor describing the vulnerability's effect (aka denial of service, gain root, etc) as opposed to describing the vulnerability's cause.

in any case, the lower bound seems to be at least 1/5th, and the upper bound possibly over half, of exploit/vulnerability types being buffer overrun related.

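a toy version of that kind of frequency pass (made-up descriptions) ... scan the free-form descriptions and count the ones mentioning "buffer":

#include <stdio.h>
#include <string.h>

static const char *descriptions[] = {
    "buffer overflow in example ftp daemon allows remote root",
    "symlink race in temporary file handling",
    "buffer overrun in parser leads to denial of service",
};

int main(void)
{
    int hits = 0;
    int total = sizeof descriptions / sizeof *descriptions;
    for (int i = 0; i < total; i++)
        if (strstr(descriptions[i], "buffer"))
            hits++;
    printf("%d of %d descriptions mention buffer\n", hits, total);
    return 0;
}
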
post about cve entry analysis
https://www.garlic.com/~lynn/2004e.html#43 security taxonomy and CVE

with such a high percentage ... it seems to be a relatively large bang-for-the-buck if something relatively straight-forward could be done to address such a large percentage of failures.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 29 Dec 2004 12:18:43 -0700
BRG writes:
But note also that John has said that redundancy should only be used to cope with _hardware_ failures. In his view the software in safety critical systems should only be deployed if it can be guaranteed to be perfect and completely free of errors in all circumstances that can occur in practice.

some time ago (20+ years) i think jim published a survey from tandem ... recent total topic drift about jim & tandem
https://www.garlic.com/~lynn/2004q.html#73

that failure modes had tipped from being hardware related to being primarily software &/or human related.

in the late 90s, looking at one of the large financial transaction operations ... they claimed to have had 100 percent availability for a six-year period ... and they attributed it primarily to

• ims hot standby (at three physically different sites)
• automated operator

i.e. eliminating people mistakes and providing logical redundancy.

my wife did her stint in pok in charge of loosely-coupled (aka cluster) architecture for mainframes and had come up with peer-coupled shared data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

and the ims hot standby people were one of her major customers at the time.

non-human rated systems (i.e. life isn't directly at stake) can still have high-availability requirements that can allow for error recovery as opposed to strictly error avoidance.

to some extent jim also pioneered a lot of the transaction ACID properties for database transactions ... creating boundaries that allowed roll-forward &/or roll-back error recovery techniques. some stuff related to ACID properties ... but mostly focused on the original relational database manager at sjr:
https://www.garlic.com/~lynn/submain.html#systemr

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Athlon cache question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Athlon cache question.
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 29 Dec 2004 14:35:54 -0700
"Dan Koren" writes:
Sorry to top post. I do not want to force people to read your entire post before they get to the point.

I don't mean to spoil your fun, but there is a lot of empirical evidence that frequency of reference based replacement strategies tend to outperform pure LRU (whether local or global).


there was an acm paper from somebody at brown university in the early 70s proposing a per-page hardware reference counter ... but the studies of the 70s (which were the subject of the statement somebody made in the original post that i was responding to) were primarily focused on the practical hardware that was available at the time. It wasn't about having fun ... it was about responding to somebody's comments about cache studies in the 70s at a certain company.

the very late 70s saw some looking at more complex solutions as it appeared that more complex hardware might be available ... a lot of it being done with some form of the vs/repack technology that we had done at cambridge
https://www.garlic.com/~lynn/subtopic.html#545tech

multics had an acm paper in the mid-70s about having 1-4 bits of page reference information ... but it was oriented towards extending the length of history ... i.e. more like a shift register with one bit per cycle of the global LRU clock algorithm (as opposed to a count of references). i did mention coming up, in the early 70s, with a variation on the standard global "wsclock" algorithm that would beat true/strict global LRU (because even with small memories, short residencies, and short-term history, there were ways of beating global LRU).

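a toy sketch of that multi-bit history idea ... each pass of the clock shifts the page's hardware reference bit into a short per-page history, so the value reflects several cycles of reference behavior rather than just the last one:

#include <stdio.h>

#define NPAGES 4

static unsigned char history[NPAGES];   /* 8 clock cycles of history per page */

static void clock_pass(const int refbit[NPAGES])
{
    for (int p = 0; p < NPAGES; p++)    /* newest reference in the top bit */
        history[p] = (unsigned char)((history[p] >> 1) | (refbit[p] << 7));
}

int main(void)
{
    int pass1[NPAGES] = { 1, 0, 1, 0 };
    int pass2[NPAGES] = { 1, 0, 0, 0 };
    clock_pass(pass1);
    clock_pass(pass2);
    for (int p = 0; p < NPAGES; p++)    /* smaller value = less recently used */
        printf("page %d history %02x\n", p, history[p]);
    return 0;
}
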
the late 70s found some more detailed studies, many of them using the product released from the science center, which in its product version was primarily focused on re-organizing programs for better virtual memory characteristics ... but the detailed i & d traces were also used for other kinds of analysis (yes we could break out i & d information as well as changed & non-changed references).

going into the late 70s ... real storages were starting to get so big that virtual page lifetimes in real storage were far exceeding the recent reference history information that the earlier replacement algorithms had been oriented towards; in fact, the whole system resource balance was starting to significantly change ... I was making the comment that the relative disk system performance had declined by a factor of ten over a period of 10-15 years. this initially upset the disk division so much that they assigned their performance organization to refute the statement. after a couple months they managed to come up with the fact that i had slightly understated the problem. this eventually turned into a presentation made to the share user group on recommendations to improve disk thruput.

the issue then for real storage management was using the excess capacity to offset the bottlenecks with disks ... i.e. more use of real storage for record caching, and paging done in much larger blocks (in the 3380 case, the transfer rate had increased by a factor of ten while arm access had only increased by 3-4 ... so it was possible to do much larger transfers, possibly wasting real storage and transfer capacity, if doing so reduced the number of transfers).

as part of the resource manager in the mid-70s ... I had included some tweaks to reduce cache thrashing under heavy load as well as some sleight of hand in the smp support ... to improve cache affinity.

the "big" change in cache stuff made in that time-period was operating system changes for 3081. went thru the operating system and very carefully aligned kernel (and various other structures) on cache boundaries. there was stuff to get buffers aligned on cache boundaries and allocated in multiples of cache lines ... and attempts to make sure that different data structures that might be concurrently accessed by different processors didn't co-reside in the same cache line.

i had helped with the vs/repack stuff when i was at the cambridge science center ... and when i went out to SJR ... i did some work on a disk record trace. it was here that we really started to see the effects of much more complicated, longer-term history ... than was useful in the smaller real memory & shorter lifetimes of the 60s and early 70s. it was initially used to establish that for a given amount of real storage ... a global cache was nearly always better than local caches ... aka we ran detailed simulation with huge amounts of live trace data where there was a fixed amount of electronic cache ... and it could either be used for a single global cache ... or partitioned among the channels, or finer partitioning among the disk controllers ... or even much finer partitioning out to the individual disks. This somewhat validated the earlier GLOBAL vis-a-vis LOCAL observations ... regardless of the exact pattern used for line management in the cache.

with various enhancements to the disk record tracing in the early 80s we started to see much longer lifetime histories and started to identify other types of longer-term reference pattern information (and program and application "pages" were just another type of disk record). the short-period access patterns involving LRU from the 60s & 70s ... started looking more & more like background noise compared to the longer-period and more complex real-world use patterns that we were starting to identify.

minor references to the resource manager interrupt window that I played with to reduce cache thrashing (aka the system would go from completely asynchronous, anarchy I/O interrupts, which could cause a lot of cache thrashing, to a very controlled mechanism for periodically handling i/o interrupts)
https://www.garlic.com/~lynn/2002l.html#25 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2002n.html#29 why does wait state exist?
https://www.garlic.com/~lynn/2004f.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation

random past references to disk cache work and studies:
https://www.garlic.com/~lynn/94.html#9 talk to your I/O cache
https://www.garlic.com/~lynn/94.html#13 talk to your I/O cache
https://www.garlic.com/~lynn/98.html#6 OS with no distinction between RAM and HD ?
https://www.garlic.com/~lynn/2000d.html#11 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#13 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001c.html#74 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001.html#18 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
https://www.garlic.com/~lynn/2001l.html#55 mainframe question
https://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
https://www.garlic.com/~lynn/2002b.html#16 hollow files in unix filesystems?
https://www.garlic.com/~lynn/2002b.html#20 index searching
https://www.garlic.com/~lynn/2002d.html#55 Storage Virtualization
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002n.html#50 EXCP
https://www.garlic.com/~lynn/2003f.html#5 Alpha performance, why?
https://www.garlic.com/~lynn/2003f.html#9 Alpha performance, why?
https://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
https://www.garlic.com/~lynn/2003j.html#7 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2003j.html#65 Cost of Message Passing ?
https://www.garlic.com/~lynn/2004b.html#54 origin of the UNIX dd command
https://www.garlic.com/~lynn/2004e.html#25 Relational Model and Search Engines?
https://www.garlic.com/~lynn/2004g.html#17 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#18 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004g.html#21 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004i.html#0 Hard disk architecture: are outer cylinders still faster than
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004k.html#41 Vintage computers are better than modern crap !
https://www.garlic.com/~lynn/2004l.html#36 FW: Looking for Disk Calc program/Exec
https://www.garlic.com/~lynn/2004o.html#8 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Athlon cache question

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Athlon cache question.
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 29 Dec 2004 14:40:55 -0700
Bernd Paysan writes:
One problem with frequency of reference aging is that you need to keep the statistics longer than the items (pages, lines). An item that is thrown out and soon reloaded deserves to keep the priority. Access frequency would be age/accesses; perhaps a digital filter on the accesses would provide even more useful data.

in some respects that is what my sleight-of-hand trick did ... where the standard LRU-approximation algorithms all tend to emulate LRU and degenerate to FIFO under various conditions (preserving recent reference history regardless of whether it was useful or not) ... the sleight-of-hand trick would effectively start ignoring strict recent reference history when it wasn't helping.

i had done the original global LRU stuff in the 60s while i was an undergraduate ... and it was incorporated and shipped in products. in the early '70s while i was at the science center ... i came up with this efficient sleight-of-hand to effectively ignore recent reference history information when it wasn't being of much use.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 29 Dec 2004 15:38:33 -0700
"Douglas A. Gwyn" writes:
Fine, but merely switching PLs doesn't do that.

no ... but as also previously referenced, many of these buffer overruns specifically involved copying a string into a buffer ... and getting the target buffer length wrong. there have been detailed studies of systems that used a paradigm (regardless of the PL ... pli, assembler, machine language, etc) where string-to-buffer copies involved a target buffer that included both the buffer origin pointer and the buffer length as buffer implementation characteristics ... and these systems were found to not have any of the string-to-buffer copy problems so prevalent in C implemented environments.

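a minimal sketch in C of such a length-carrying buffer convention ... the copy primitive takes the target's declared length from the buffer descriptor itself (fixed when the buffer was created), so a caller can't hand it the wrong size:

#include <stdio.h>
#include <string.h>

struct buffer {
    size_t len;      /* declared capacity, set when the buffer is created */
    char  *origin;
};

static size_t buf_copy(struct buffer *dst, const char *src, size_t srclen)
{
    size_t n = srclen < dst->len ? srclen : dst->len;
    memcpy(dst->origin, src, n);   /* truncates ... never overruns */
    return n;
}

int main(void)
{
    char storage[8];
    struct buffer b = { sizeof storage, storage };
    size_t n = buf_copy(&b, "much too long a string", 22);
    printf("copied %zu of 22 bytes\n", n);
    return 0;
}
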
--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Athlon cache question

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Athlon cache question.
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 29 Dec 2004 17:00:10 -0700
somewhat random aside ... we had the disk record cache trace stuff installed at sjr ... so we got to include a lot of system/r and later rdbms research stuff ... random refs:
https://www.garlic.com/~lynn/submain.html#systemr

.... at longer-period history than they were doing themselves (besides just run-of-the-mill data center use). we also got it installed at stl ... which was another type of development house ... lots of ims database development as well as use ... and then later db2 development and use ... of course stl was doing other types of development ... languages like apl and pli and misc. other things. we also got some of the machines that were doing the disk division's commercial data processing for the operation of the business ... so there was a variety of data processing type things ... some with cyclic daily, weekly and monthly operation (database, vsam, flat files, etc).

we were also able to get long-term use traces out of some number of other sites around the silicon valley area.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Wed, 29 Dec 2004 22:46:19 -0700
"Douglas A. Gwyn" writes:
I frankly don't think that is a prevalent problem, except perhaps in some of the antique BSD servers. It is certainly easily avoidable by any C programmer worth his salt.

this posting has a list of sentences from the cve database descriptions that start with something like "buffer overflow in"
https://www.garlic.com/~lynn/2004j.html#58

there are about 460+ that have such a sentence someplace in the short free-form description (this is fewer than the total number that mention overflow and/or might be overflow related ... but it seemed to be a representative sample because it included an indication of the broad occurrence of the problem). this is out of approx 2600 total entries in the cve database at the time (as mentioned before, about 1/5th of the entries in the cve database at least mentioned something with regard to overflow).

i sorted the list and truncated the sentences at 40 chars (to cut down on the posting size). they don't particularly seem to be BSD biased.

somewhat regardless of any opinions about the ease or difficulty of avoiding such problems ... they still seem to have a fairly wide occurrence across a large number of different environments.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Thu, 30 Dec 2004 10:38:31 -0700
infobahn writes:
It seems to me that this says nothing whatsoever about C, but it does say quite a lot about those who practice it. In other words, if these are all problems with C programs (which is possible but perhaps a touch unlikely), it only means that 460+ C programmers should take their salt back to the pay desk.

so part of the thread has been that the common c programming environment has people making a lot of mistakes with buffer overflows ... and the previous post points out that other environments have not seen a similar frequency of buffer overflow problems.

so two possible conclusions: 1) something specific to the common c programming environment and/or 2) something specific to programmers who use c. there wasn't enough data in the various studies to indicate whether the common c programming environment is responsible or whether people who program in c are unique.

it does verge into some of the other side threads ... in other venues, if there is a high incidence of mistakes associated with a specific environment ... whether it is the environment's fault or the humans' fault ... the nature of the mistakes is studied and frequently compensating changes are made in the environment to help reduce the tendency for mistakes to happen. eventually there is a realization that it is helpful to make changes in the environment as a method of helping people avoid making frequent and common mistakes.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Fri, 31 Dec 2004 11:46:38 -0700
newstome writes:
I have no idea what you're saying here.... I'm not sure why software in PROM would help at all.

some amount of it is similar to the no-execute and read-only-execute flags in some current processors and various operating system support for them; assuming completely separate execution and data ... and flags set on memory ... then non-privileged processes can't modify memory that is marked executable, and I-fetch doesn't work on memory not marked executable. this precludes some of the buffer overflow exploits that involve modifying jump addresses to point at exploit code packaged as part of data. this is changing processor hardware architecture and operating systems to address widely prevalent flaws known to be associated with C programming environments (this is sort of the guard-rail approach to driving mistakes ... especially when it appears that some common engineering characteristic is known to precipitate a large number of accidents and might never otherwise be corrected).

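a posix-flavored sketch of the no-execute idea (linux/bsd mmap flags) ... a data page mapped read/write but not executable, so a jump into attacker-supplied bytes faults instead of executing:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* an anonymous data page: readable and writable, NOT executable */
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    ((unsigned char *)p)[0] = 0xc3;   /* attacker-supplied byte (x86 "ret") */

    /* jumping to p would fault; making it executable would require an
       explicit mprotect(p, 4096, PROT_READ | PROT_EXEC) */
    printf("page mapped rw, not rx\n");
    munmap(p, 4096);
    return 0;
}
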
in the distant past ... a low-frequency kernel programming mistake (but becoming a larger percentage of operating system failures as other mistakes were corrected) was using a zero pointer and overlaying physical memory address zero ... a hardware feature was added to not allow any sort of normal instruction (privileged or non-privileged) to alter the first 512 bytes of storage (low storage protect).

for some drift ... there is a slightly related side-thread in comp.arch discussing manufacturing code in silicon where it can't physically be changed (which doesn't directly address exploits involving modifying jump addresses to the middle of data areas ... which software in PROM doesn't directly address either ... unless you also have the modification that only instructions in PROM will be executed, and it is very, very hard to make changes to PROM) ....
https://www.garlic.com/~lynn/2004q.html#60 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#64 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#65 Will multicore CPUs have identical cores?
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Fri, 31 Dec 2004 14:03:08 -0700
aka ... not being able to easily modify executable code AND not executing data ... precludes the attacker from supplying exploit code as part of a string ... and jumping into the supplied exploit code. it doesn't eliminate modification of the jump address ... it just precludes setting the jump address to exploit code that was packaged as part of some string (and executing that exploit code).

you are still possibly stuck with denial-of-service jump modifications ... or, if somebody was really clever, being able to jump into the middle of some already existing code.

getting execution of delivered exploit code then pretty much becomes a social engineering attack ... rather than buffer overrun attack.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Fri, 31 Dec 2004 14:46:51 -0700
"Douglas A. Gwyn" writes:
Is that relative frequency or absolute frequency? Normalized how? If 99% of the apps are in C and 95% of the bugs are in C then that would show that C seems much less bug-prone than the other languages.

Changing the environment does not help if the bugs are not *caused* by the environment. Since most bugs of the kind under discussion are caused by bad thinking, they will persist into other environments.


the percentage of failures/exploits for the specific environment that are related to buffer overflows. I'm only looking at the percentage of all reported exploit/vulnerability types (for a specific environment) that happen to be related to buffer overflows.

that should normalize it for the amount of use of the specific kind of environment (both across the number of different applications ... as well as the frequency with which such applications are used).

long ago and far away ... a large commercial mainframe environment routinely had customers sending detailed trace and memory image copies to the vendor as part of problem determination and resolution. the standard tool used by the vendor basically dealt with the information in hex and had the vendor employees doing problem determination (& resolution) performing manual examination of the trace and storage image(s).

I completely rewrote the tool ... minor random references
https://www.garlic.com/~lynn/submain.html#dumprx

and added a bunch of programmable analysis features. I then did a detailed study of the reported types of failures and the failure signatures .... so i could automate much of the failure analysis process for the most common failure characteristics.

the vendor failure management and reporting structure (tried to) map all occurrences of a specific failure cause to a single failure occurrence (regardless of the number of times the same customer and/or different customers would report that specific failure).

in any case, at that time i got to have a pretty complete look at all customer reported failures and did a pretty detailed analysis of the structural characteristics related to the failure in order to build automated failure signature analysis.

the cve database effort appears to also be attempting to map all occurrences of a specific failure cause to a single failure reporting entry (not taking into account the number of times that the related program might have been observed to have had that failure).

some more random drift ... earlier I had done a detailed analysis of all kernel serialization related failures, i.e. hung/zombie processes and/or using pointers after the related operations had all been completed and discarded ... which sometimes also involved pointers to location zero ... minor reference to the hardware guard rail from a recent posting here (low storage protect)
https://www.garlic.com/~lynn/2004q.html#82 [Lit.] Buffer overruns

then as part of releasing the operating system resource manager product ... i completely rewrote the kernel serialization primitives to completely eliminate all instances of hung/zombie processes as well as dangling pointer use. random past pointers to the resource manager product
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock
https://www.garlic.com/~lynn/submain.html#bench

so when my wife and i started the project producing the ha/cmp (High Availability Cluster Multi-Processing) product in the late 80s ... minor refs
https://www.garlic.com/~lynn/95.html#13
and
https://www.garlic.com/~lynn/subtopic.html#hacmp

we started with a detailed failure and vulnerability study to try and identify all possible failure characteristics. it was then that we predicted that common c programming environments would have an order of magnitude (or more) increase in the percentage of failures that were buffer length related (compared to the other environments that we had experience with).

and of course to slightly bring this back to crypto ... two people at the mentioned meeting
https://www.garlic.com/~lynn/95.html#13

were then at another company (that was doing this thing called https/ssl) in charge of building something called commerce server.
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The TransRelational Model: Performance Concerns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The TransRelational Model: Performance Concerns
Newsgroups: comp.databases.theory
Date: Fri, 31 Dec 2004 16:15:50 -0700
pc writes:
i wonder would we be seeing a different world now had the old core memory been as cheap and plentiful as today's.

there are actually a number of different issues ... the amount of core memory as well as the relative performance.

i took some heat starting in the late '70s for claiming that the disk relative system performance had declined by a factor of ten over a 10-15 year period (disk performance had increased by 3-5 times but cpu & memory had increased by 50 times ... so the relative system disk performance declined by a factor of 10).

the disk division got annoyed and assigned their performance organization to refute the statements. they took a couple months and came back saying that i had slightly understated the problem. this eventually turned into a user group presentation on recommendations about how to structure & use disks for better system performance. random recent posting somewhat related to the topic
https://www.garlic.com/~lynn/2004q.html#76

I've frequently claimed that the CKD architecture from the 60s was a trade-off of i/o capacity vis-a-vis limited real memory ... aka it was possible to put the index structure on disk and have the i/o subsystem execute a program to find specific kinds of data and read only what was necessary into storage. this avoided having to use any real storage for caching of either data and/or index pointers. by the mid-70s the resource constraint was starting to shift from real memory to i/o ... and CKD became the wrong trade-off in that environment. random past postings related to the CKD trade-off (and the criteria that the assumptions were based on) totally reversing over a period of time
https://www.garlic.com/~lynn/submain.html#dasd
and slightly related posts on bdam &/or cics
https://www.garlic.com/~lynn/submain.html#bdam
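
to make the trade-off concrete ... a minimal sketch (hypothetical names, nothing like actual dasd/channel code): the CKD approach spends i/o capacity (the i/o subsystem searches the keys out on disk) to save real storage, while the later approach spends real storage (an in-memory index) to turn a lookup into a single direct read.

#include <string.h>
#include <stddef.h>

#define NRECS 1000

struct record { char key[8]; char data[80]; };

/* 60s trade-off: no real storage used for an index; the i/o
   subsystem (simulated by this loop) searches the keys out on
   disk, so a lookup costs many disk revolutions but no memory. */
struct record *search_on_disk(struct record *disk, const char *key)
{
    for (size_t i = 0; i < NRECS; i++)       /* each probe is i/o time */
        if (memcmp(disk[i].key, key, 8) == 0)
            return &disk[i];
    return NULL;
}

/* later trade-off: keep the keys in real storage (assumed sorted
   in record order here) and touch the disk exactly once, with a
   direct read of the matching record. */
struct record *search_in_memory(char index[][8], struct record *disk,
                                const char *key)
{
    size_t lo = 0, hi = NRECS;
    while (lo < hi) {                        /* binary search: no i/o  */
        size_t mid = lo + (hi - lo) / 2;
        int c = memcmp(key, index[mid], 8);
        if (c == 0) return &disk[mid];       /* the one direct read    */
        if (c < 0) hi = mid; else lo = mid + 1;
    }
    return NULL;
}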

in the early 90s ... one of the large airline res systems was complaining that they had applications with (at least) ten impossible things that they couldn't do in the existing infrastructure. I looked at one of the applications (routes) that accounted for approx. 25 percent of total system load and rewrote it from scratch.

Much of the fundamental architecture and assumptions hadn't changed since the 60s ... so I reset to zero, re-examined those original assumptions and tried to determine whether they still applied (and, if not, what I could do starting totally from scratch). I got about a 100-fold speedup AND was able to implement all ten impossible features (which, when done, resulted in only a net 10-fold speedup ... since it was doing a lot more).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Organizations with two or more Managers

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Organizations with two or more Managers
Newsgroups: microsoft.public.sqlserver.programming,comp.databases.theory
Date: Fri, 31 Dec 2004 17:48:26 -0700
"Mikito Harakiri" writes:
Are multiple managers the exception rather than the rule? Do any fortune 500 companies have an organization chart that is not a tree?

Handling trees is infinitely simpler than [directed acyclic] graphs. For one thing, how would you specify a constraint that your graph is acyclic? If your graph happens to be a tree, how would you define this constraint?

Next, assume your RDBMS has SQL extensions allowing you to express queries on a graph (represented as an adjacency list). Expect some problems. How good is the cost model? Can the optimizer come up with a realistic prediction of how many joins it will perform? Certainly not; for a given program it's impossible even to tell in advance whether the program is going to stop at all.

Those are a couple of reasons why tree encodings really shine.
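
as an aside on the quoted point ... a minimal sketch (hypothetical, not from any actual rdbms) of why the tree case is so much easier: the tree shape is a purely local constraint (every node carries exactly one parent), while acyclicity of a general graph is a global property that can only be established by traversing it.

#include <stdbool.h>

#define N 100                 /* nodes 0..N-1 */

/* tree encoding: one parent per node (-1 marks the single root),
   so the shape constraint is checkable row by row.               */
int parent[N];

/* general digraph as an adjacency matrix: acyclicity is a global
   property and needs a depth-first traversal to verify.          */
bool edge[N][N];

static bool visiting[N], done[N];

static bool has_cycle_from(int v)
{
    visiting[v] = true;
    for (int w = 0; w < N; w++) {
        if (!edge[v][w]) continue;
        if (visiting[w]) return true;        /* back edge: a cycle */
        if (!done[w] && has_cycle_from(w)) return true;
    }
    visiting[v] = false;
    done[v] = true;
    return false;
}

bool is_acyclic(void)
{
    for (int v = 0; v < N; v++)
        if (!done[v] && has_cycle_from(v)) return false;
    return true;
}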


in the early 80s, one of boyd's assertions was that modern corporate culture is a reflection of many executives having gotten their training in how to run large organizations as members of the US army during ww2.

basically he contrasted the experienced, professional german military with the US army at entry to the war ... large numbers of quickly recruited and quickly trained individuals and only a very small group of professionals to direct them. as a result, much of the strategy was based on overwhelming resources and extremely rigid command & control structures (attempting to leverage a scarce skill resource across massive amounts of resources and large numbers of inexperienced/unqualified people) ... somewhat related to pushing all decisions as high as possible up the organization (as opposed to pushing decisions to the best qualified person on the spot).

boyd would contrast that with the blitzkrieg and Guderian's directive about verbal orders only. the scenario was that in times of heavy action, the only people that never make a mistake are the people that never do anything ... if you are acting, and especially having to act quickly, there are going to be mistakes. Guderian's directive supposedly epitomized that he trusted his troops as professionals to make their own decisions ... and he didn't want bureaucrats running around afterwards trying to blame them for possible mistakes; he wanted the person on the spot to make the best decision he deemed possible (and not have to constantly worry about a paper trail that bureaucrats might later use to play monday morning quarterback).

this is slightly related to the humorous definition of auditors as the people that go around the battlefield, after the war, stabbing the wounded ... a posting about a misunderstanding related to this and a number of auditors:
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out

the other contrast might be a set of generals superbly qualified to make decisions about overall strategic issues ... but totally unqualified to make decisions about any moment-to-moment tactical issues. It sometimes shows up in the split between CEO and COO ... where the CEO is handling long-term strategic issues and may be totally unqualified to handle the day-to-day issues faced by the COO.

you also see something similar in some techno startups where the VCs may want to have somebody they select as vp of finance and possibly even vp of marketing.

my past boyd postings:
https://www.garlic.com/~lynn/subboyd.html#boyd
and numerous boyd references from around the web
https://www.garlic.com/~lynn/subboyd.html#boyd2

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/



previous, next, index - home