List of Archived Posts

2002 Newsgroup Postings (01/01 - 01/12)

index searching
The demise of compaq
The demise of compaq
The demise of compaq
Buffer overflow
index searching
index searching
The demise of compaq
The demise of compaq
How to get 128-256 bit security only from a passphrase?
index searching
The demise of compaq
A terminology question
A terminology question
index searching
index searching
index searching
index searching
Infiniband's impact was Re: Intel's 64-bit strategy
Buffer overflow
Younger recruits versus experienced veterans ( was Re: The demise of compa
"blocking factors" (Was: Tapes)
index searching
Buffer overflow
Buffer overflow
ICMP Time Exceeded
Buffer overflow
Buffer overflow
Buffer overflow
Buffer overflow
Younger recruits versus experienced veterans ( was Re: The demise of compa
index searching
Buffer overflow
Buffer overflow
Buffer overflow
Buffer overflow
a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
Buffer overflow
Buffer overflow
Buffer overflow
Multi-Processor Concurrency Problem
Movies with source code (was Re: Movies with DEC minis)
a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
hollow files in unix filesystems?
Calculating a Gigalapse
VM and/or Linux under OS/390?????
hollow files in unix filesystems?
School Help
Microcode?
OT Friday reminiscences
Microcode?
Microcode?
Microcode?
School Help

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch
Date: Tue, 01 Jan 2002 21:47:28 GMT
Terje Mathisen writes:
This was the reason that I never understood that particular prof's infatuation with special hw for the task:

Originally, when they started up, most data had to come from disk arrays, and even 486 and earlier cpus could keep up with the fastest (or most cost-efficient) disk systems.

Today the same holds true, but for main memory access vs cpu opcodes.


there used to be a number of implementations where matching engines were co-located with every read head .... with electronics that could read from every head simultaneously while doing the searches.

i think commodity disk & processor engines got large enuf, fast enuf, and cheap enuf ... that it was simpler to just create a large cluster of PCs & disks ... and do the parallel searching with commodity priced parts rather than custom hardware.

from a disk array stand-point ... i believe CM5s had 32+8 disk arrays and even Floating Point Systems boxes from the '85 era had 40mbyte/sec disk arrays.

how 'bout teradata & britton-lee database machines ... misc. ref:
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The demise of compaq

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The demise of compaq
Newsgroups: comp.os.vms,comp.arch,comp.sys.intel,alt.folklore.computers
Date: Tue, 01 Jan 2002 23:20:01 GMT
"Charlie Gibbs" writes:
This is the result of the shift in mindset from batch to interactive systems. "Interactive" is seen as more desirable, compared to stodgy old batch systems. But, as you've pointed out, there is a downside.

i'm not sure that it is a shift in mindset .... there is still a huge amount of batch that continues to exist .... i would even claim that the backend "webservers" have most of the characteristics of traditional batch systems with regard to the drive for things like service level agreements and "darkroom" operations.

it may not be so much that batch-like systems have dwindled ... it is that there has been a huge explosion in the interactive systems market ... moving into home & consumer marketplaces (home pcs far outnumbering backend batch systems ... doesn't necessarily mean that the backend batch systems have declined).

the downside issue in terms of the batch-like operations for the backend webservers .... is trying to apply interactive-paradigm platforms to a fundamentally batch-paradigm environment (including the backend systems ... aka webservers have relatively similar operational characteristics to the CICS, IMS, etc ... "online" subsystems of traditional batch operations). Part of the issue is that the batch-paradigm and interactive-paradigm platforms tended to have very different design-points during development. However, possibly the majority of the really large, scaled-up backend web & internet systems have been platformed on non-interactive paradigm platforms like Tandem (which is a compaq operation).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The demise of compaq

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The demise of compaq
Newsgroups: comp.os.vms,comp.arch,comp.sys.intel,alt.folklore.computers
Date: Wed, 02 Jan 2002 06:02:37 GMT
JF Mezei writes:
That is because they were forced to streamline all the dataflow to be able to reach that magical 10 transactions per second the banks needed. There was no need for middleware when you stuck to the IBM mainframe and 3270 terminals. As a matter of fact, when a mainframe interrogated another, it would often format the transaction as if it came from a 3270 terminal and parse the response.

The interrogated mainframe didn't need any additional software to handle this because everyone appeared to be a 3270 terminal.

But bring in the incompatible systems and all of a sudden you need to add a layer that allows them to communicate with each other. Just look at the overhead of HTML and XML where a simple transaction needs to be formatted, transmitted as a chunk orders of magnitude bigger and then parsed back into a transaction the receiving computer can handle.

Just look at VMS mail. Instead of changing VMSmail to make it native RFC822, they added a layer that converts from VMSmail to RFC822 and sends it out (and vice-versa).


but the "solution" to application migration to PCs and the whole client/server thing was supposed to be SAA ... my wife and I took a lot of heat on the middleware issue (we used to go by and kid the executive that "owned" SAA) ... this all happened before HTML & XML.
https://www.garlic.com/~lynn/96.html#16 middle layer
https://www.garlic.com/~lynn/96.html#17 middle layer
https://www.garlic.com/~lynn/98.html#50 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/99.html#124 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/99.html#201 Middleware - where did that come from?
https://www.garlic.com/~lynn/99.html#202 Middleware - where did that come from?
https://www.garlic.com/~lynn/2000b.html#59 7 layers to a program
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2001d.html#69 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001j.html#4 I hate Compaq
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#55 9-track tapes (by the armful)

slightly related was the departmental server issue (from the late '70s & early '80s ... which had similar requirements/solutions)
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The demise of compaq

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The demise of compaq
Newsgroups: comp.os.vms,comp.arch,comp.sys.intel,alt.folklore.computers
Date: Wed, 02 Jan 2002 16:11:35 GMT
Brannon_Batson@yahoo.com (Brannon Batson) writes:
I'd be willing to wager that any off the shelf Alpha could handle the thousands of users with terminal shells and the batch processed text jobs of yesteryear, and still have plenty of cycles left over. The difference is that software has become more demanding of the hardware on a per-user basis, because (a) the users have become more demanding of the software, and (b) as you said earlier, there are so many layers of cruft between the user's problem and the hardware that solves it. I don't see how either of these are the fault of the system architect.

for the web environment ... most of the gui stuff that is cpu demanding is on the front-end personal computers/workstations. the back-end webservers (which in principle operate very much like legacy mainframe systems doing online transactions) have suffered from lack of scalable implementation.

the first web servers ran http over tcp/ip and spawned a new address space and task for every http request. for web servers that operated at a few transactions per minute there wasn't much of a scaling issue.

http requests are effectively connectionless transactions ... while tcp is a connection protocol ... tcp requires a minimum 7-packet exchange for setup/teardown of the session, which is heavy-weight all by itself. FINWAIT is part of the tcp teardown mechanism, verifying there aren't any dangling packets; FINWAIT entries were all queued on a linear list and every incoming packet did a linear search of the FINWAIT list to see if it was a dangling packet. circa 1996, dynix was possibly the only unix that had experienced large numbers of FINWAIT entries and gone to FINWAIT management that wasn't a linear scan ... other webserver platforms that started experiencing high numbers of "quick" connections (resulting in huge FINWAIT lists) were starting to see 90% of total CPU devoted to the FINWAIT list scan. NETSCAPE kept adding servers for connections as well as FTP downloads of new browsers ... picking the NETSCAPE1, NETSCAPE2, ... NETSCAPE19 node for connection was becoming a major topic. NETSCAPE saw a dramatic change when they put in a large dynix configuration for the NETSCAPE20 node.
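
a minimal sketch in C (hypothetical structures and names ... not actual dynix or any other stack's source) of why the linear FINWAIT list melts down and what a hashed alternative buys:

/* every arriving packet probes the FINWAIT list; with a linear list
   the probe is O(n), so cpu burned on the scan grows with
   (packet rate) x (list length) -- the 90+% cpu figure above */
struct conn {
    struct conn *next;
    unsigned saddr, daddr;              /* ip addresses */
    unsigned short sport, dport;        /* ports */
};

struct conn *finwait_list;              /* single linear chain */

struct conn *lookup_linear(unsigned sa, unsigned short sp,
                           unsigned da, unsigned short dp)
{
    struct conn *c;
    for (c = finwait_list; c; c = c->next)
        if (c->saddr == sa && c->sport == sp &&
            c->daddr == da && c->dport == dp)
            return c;
    return 0;
}

/* hashing the 4-tuple makes the expected probe O(1), independent of
   how many connections are sitting in teardown */
#define NBUCKET 1024
struct conn *finwait_hash[NBUCKET];

unsigned hash4(unsigned sa, unsigned short sp,
               unsigned da, unsigned short dp)
{
    return (sa ^ da ^ (unsigned)(sp << 16) ^ dp) % NBUCKET;
}

struct conn *lookup_hashed(unsigned sa, unsigned short sp,
                           unsigned da, unsigned short dp)
{
    struct conn *c;
    for (c = finwait_hash[hash4(sa, sp, da, dp)]; c; c = c->next)
        if (c->saddr == sa && c->sport == sp &&
            c->daddr == da && c->dport == dp)
            return c;
    return 0;
}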

The use of (new) connection-oriented TCP for connectionless transaction HTTP has been a real scaling issue, as has spawning & then just having a different address space for every transaction.

The other evolution for production webservers ... I believe was first at YAHOO, with front-end routers doing rotating routing to a large pool of backend servers (I have recollections of discussing it with the vendor engineer responsible for the implementation). There had been a Q&D hack to DNS prior to that, returning rotating ip-addresses from a pool, but that was somewhat random and couldn't reach out to the caching intermediate servers. The initial deployment of the rotating routing code at YAHOO allowed significantly improved load balancing across the back-end servers.

However, there still is significant extra computing resources expended in the web environment to use connection & command oriented paradigms in support of connectionless transaction activity.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt
Date: Wed, 02 Jan 2002 16:29:05 GMT
Richard Heathfield writes:
It's better to write clear, maintainable code than to bum every cycle. On that, I agree. I also think that it is better to play safe than to be sorry. If you are not confident that a particular circumstance cannot arise, then add production code to check against that possibility. If you are confident that a particular circumstance cannot arise, because of the way the program works, then assert it.

don't you just hate it when you do a custom kernel for somebody and then they track you down in some other job 10 years later looking for assistance ... they've continued to migrate the kernel to newer generations of machines and then something really quirky hardware change shows up (especially when the custom kernel has been propagated all over some place like AT&T longlines).

random refs:
https://www.garlic.com/~lynn/95.html#14 characters
https://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000b.html#74 Scheduling aircraft landings at London Heathrow
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001f.html#3 Oldest program you've written, and still in use?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 02 Jan 2002 18:05:30 GMT
"Stephen Fuld" writes:
The idea of using disk controller hardware to search the disks at media speed is an old one. This was implemented by Univac for their Fastrand drums where you could search the first word of each or all words of a set of sectors. Of course, it was also implemented by IBM in the CKD architecture, where the K stands for the key. There were (and still are) commands to search the keys of the records on a track for equality or other relations.

But as someone else pointed out, these schemes were never very practical and quickly lost out to various indexing schemes such as b-trees, etc. However, in a nod to backwards compatibility, the mechanism to search a partitioned data set (sort of like a directory) in an IBM mainframe still uses keys and the relevant search commands (little enhanced) to search for a desired entry.


CKD was in the controller not in the head .... it was a '60s memory/bandwidth trade-off. The problem with CKD past about the mid-70s was that there was by then significant "excess" memory available for in-core indexes, and I/O resources were becoming the bottleneck.

as posted here and elsewhere several times ... a comparison from the early '80s, but stuff that I had started on in the late '70s:


system          3.1L            HPO     change
machine         360/67          3081K

mips            .3              14      47*
pageable pages  105             7000    66*
users           80              320     4*
channels        6               24      4*
drums           12meg           72meg   6*
page I/O        150             600     4*
user I/O        100             300     3*
disk arms       45              32      4*?perform.
bytes/arm       29meg           630meg  23*
avg. arm access 60mill          16mill  3.7*
transfer rate   .3meg           3meg    10*
total data      1.2gig          20.1gig 18*

Comparison of 3.1L 67 and HPO 3081k
I had tried to get a non-multi-track-search solution into the product ... but they gave me a price-tag for product support (no R&D, no development, just documentation and release) of $26m. the specific problem:
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/99.html#74 Read if over 40 and have Mainframe background

by comparison CMS had an "in-core" filesystem ... from the beginning ... mid-60s ... but it was a linear search; the in-core directory got a "sorted" enhancement in the early '70s for fast directory searching.

some recent CMS filesystem discussions:
https://www.garlic.com/~lynn/2001n.html#1 More newbie stop the war here!
https://www.garlic.com/~lynn/2001n.html#62 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#88 A new forum is up! Q: what means nntp
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)

misc. 3.1l/hpo comparison postings (w/more detailed explanation):
https://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
https://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
https://www.garlic.com/~lynn/98.html#46 The god old days(???)
https://www.garlic.com/~lynn/99.html#4 IBM S/360
https://www.garlic.com/~lynn/99.html#103 IBM 9020 computers used by FAA (was Re: EPO stories (was: HELP IT'S HOT!!!!!))
https://www.garlic.com/~lynn/99.html#190 Merced Processor Support at it again
https://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?

various CKD, multi-track, vtoc, & pds discussions:
https://www.garlic.com/~lynn/93.html#29 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14
https://www.garlic.com/~lynn/94.html#35 mainframe CKD disks & PDS files (looong... warning)
https://www.garlic.com/~lynn/97.html#16 Why Mainframes?
https://www.garlic.com/~lynn/97.html#22 Pre S/360 IBM Operating Systems?
https://www.garlic.com/~lynn/97.html#28 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/97.html#29 IA64 Self Virtualizable?
https://www.garlic.com/~lynn/98.html#21 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000c.html#34 What level of computer is needed for a computer to Love?
https://www.garlic.com/~lynn/2000d.html#50 Navy orders supercomputer
https://www.garlic.com/~lynn/2000e.html#22 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000f.html#18 OT?
https://www.garlic.com/~lynn/2000f.html#19 OT?
https://www.garlic.com/~lynn/2000f.html#42 IBM 3340 help
https://www.garlic.com/~lynn/2000g.html#51 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2000g.html#52 > 512 byte disk blocks (was: 4M pages are a bad idea)
https://www.garlic.com/~lynn/2001.html#12 Small IBM shops
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#54 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001.html#55 FBA History Question (was: RE: What's the meaning of track overfl ow?)
https://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#48 VTOC position
https://www.garlic.com/~lynn/2001d.html#60 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001d.html#64 VTOC/VTOC INDEX/VVDS and performance (expansion of VTOC position)
https://www.garlic.com/~lynn/2001f.html#21 Theo Alkema
https://www.garlic.com/~lynn/2001g.html#24 XML: No More CICS?
https://www.garlic.com/~lynn/2001i.html#30 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001k.html#37 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001k.html#51 Is anybody out there still writting BAL 370.
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2001l.html#39 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2001m.html#38 CMS under MVS
https://www.garlic.com/~lynn/2001n.html#4 Contiguous file system

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 02 Jan 2002 18:56:41 GMT
"Stephen Fuld" writes:
The idea of using disk controller hardware to search the disks at media speed is an old one. This was implemented by Univac for their Fastrand drums where you could search the first word of each or all words of a set of sectors. Of course, it was also implemented by IBM in the CKD architecture, where the K stands for the key. There were (and still are) commands to search the keys of the records on a track for equality or other relations.

in this particular on-site customer call (fortunately their national data center was within driving distance):
https://www.garlic.com/~lynn/99.html#75 Read if over 40 and have Mainframe background
https://www.garlic.com/~lynn/99.html#74 Read if over 40 and have Mainframe background

the disks had 19 platters/tracks & 19 heads per cylinder (actually 20 if you included the servo head/track/platter). since CKD didn't have independent paths to each head ... the controller had to sequentially read each track ... for a multi-track search of a whole cylinder it was 19 revolutions & at 3600RPM (60RPS) ... the I/O operation took nearly 1/3rd second elapsed time.

now since this was a '60s memory/bandwidth trade-off .... the controller didn't have its own memory for the matching string/search argument, so it had to use the I/O channel/bus to continually re-fetch the data from processor storage; the result was that the I/O channel/bus, controller, and drive/disk were "locked out" for the duration of the operation.

Now a processor might have six to 16 channels/buses ... so that wasn't necessarily really terrible ... but there tended to be 16-30 drives/disks per controller, and the controller could be attached to up to eight processors ... so all processors in a complex (as in the cited example) would tend to be locked out of accessing all drives on the specific controller. In the above example ... there were several high-use drives/disks (especially the main application program library for all processors).
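
the arithmetic, as a trivial back-of-envelope in C (numbers taken from the paragraphs above):

/* full-cylinder multi-track search: one revolution per track searched */
#include <stdio.h>

int main(void)
{
    double revs_per_sec = 3600.0 / 60.0;    /* 3600 RPM = 60 revs/sec */
    int    data_tracks  = 19;               /* data heads per cylinder */
    double search_secs  = data_tracks / revs_per_sec;

    /* ~0.317 sec, during which the channel, the controller (and with
       it 16-30 drives), and the drive itself are all busy -- for every
       processor (up to eight) sharing that controller */
    printf("full-cylinder search: %.3f seconds\n", search_secs);
    return 0;
}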

Again, because this was a mid-60s memory/i/o trade-off paradigm/design-point, not only was there no memory for in-core indexing, there was also almost no (disk) data caching (nearly every application invocation required numerous disk accesses).

This was the downside of using a legacy trade-off design-point from the mid-'60s well past when the trade-off issues had totally changed. It was very recognizable by the late '70s ... and continues even up thru the current day ... that many of these legacy-based platforms still have significantly higher i/o rates because of poor leveraging of the explosion in real storage availability (caching, in-storage indexes, etc) .... this is independent of various legacy-based platforms that do heavily leverage caching/indexes and just are performing an extraordinary number of transactions ... typically requiring lots of data that has low re-use patterns (and for which caching doesn't mitigate the memory/IO issues).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The demise of compaq

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The demise of compaq
Newsgroups: comp.os.vms,comp.arch,comp.sys.intel,alt.folklore.computers
Date: Wed, 02 Jan 2002 21:08:15 GMT
jmfbahciv writes:
I always thought that this was due to that "distributed processing" craze gone awry. Once the processing got doled out to pieces of gear, collecting it back became impossible. One reason was that a piece would become a separate company. Once it did, it would go off on its own business path without the leash it would have had if the group had stayed within an organization.

it wasn't even that ... just migration of function from mainframe out to the PC ... combination of 3270 emulation with screenscraping and spreadsheets.

one of the "real" & "serious" problems is that the mainframes tended to have serious business continuity efforts (backups, recovery, etc) which seldom existed on the PCs ... when serious corporate assets weren't just being copied to the PC ... but actually (only) resided on the PC ... there started to be an upswing in business problems.

there was a study done in the mid-90s finding that of the businesses that had serious business assets resident on a non-backed-up disk that crashed ... 50 percent filed for bankruptcy within the first 30 days.

there were some serious efforts by the mainframe disk operations to address the problem starting by at least the late '80s .... however various kinds of corporate "in-fighting" seriously hampered the deployment.

The SNA business group didn't want to give up the market turf and revenue that it had from treating the PCs as emulated terminals ... the disk division wanted to deploy serious support providing "disk speed" level bandwidth between the PCs and the backend mainframe (along with the appropriate applications ... making it practical for PCs, workstations & departmental servers to treat backend mainframe disk farms on par with local disks). Complicating all this was the SAA effort, which was a slightly disguised attempt to migrate as much of the "lost" applications back to the mainframe ... treating the PC as a really fancy, gui 3270 (does anybody remember the early '90s efforts to have lotus 123 running on the backend mainframe?).

this is somewhat related to the middleware and departmental computing efforts.

related postings:
https://www.garlic.com/~lynn/2002.html#2 The demise of compaq
https://www.garlic.com/~lynn/2001m.html#15 departmental servers

random sna & saa postings
https://www.garlic.com/~lynn/94.html#33a High Speed Data Transport (HSDT)
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
https://www.garlic.com/~lynn/99.html#123 Speaking of USB ( was Re: ASR 33 Typing Element)
https://www.garlic.com/~lynn/2000.html#3 Computer of the century
https://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
https://www.garlic.com/~lynn/2000.html#90 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#29 20th March 2000
https://www.garlic.com/~lynn/2000b.html#78 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000b.html#89 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000e.html#42 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#45 IBM's Workplace OS (Was: .. Pink)
https://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001.html#49 Options for Delivering Mainframe Reports to Outside Organizat ions
https://www.garlic.com/~lynn/2001b.html#49 PC Keyboard Relics
https://www.garlic.com/~lynn/2001d.html#38 Flash and Content address memory
https://www.garlic.com/~lynn/2001e.html#8 Blame it all on Microsoft
https://www.garlic.com/~lynn/2001f.html#49 any 70's era supercomputers that ran as slow as today's supercompu
https://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001i.html#7 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#21 3745 and SNI
https://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
https://www.garlic.com/~lynn/2001j.html#16 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#45 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#19 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001k.html#21 OT: almost lost LBJ tapes; Dictabelt
https://www.garlic.com/~lynn/2001k.html#46 3270 protocol
https://www.garlic.com/~lynn/2001l.html#17 mainframe question
https://www.garlic.com/~lynn/2001l.html#23 mainframe question
https://www.garlic.com/~lynn/2001m.html#19 3270 protocol
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2001n.html#23 Alpha vs. Itanic: facts vs. FUD
https://www.garlic.com/~lynn/2001n.html#34 Hercules etc. IBM not just missing a great opportunity...

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The demise of compaq

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The demise of compaq
Newsgroups: comp.os.vms,comp.arch,comp.sys.intel,alt.folklore.computers
Date: Wed, 02 Jan 2002 21:59:18 GMT
gherbert@gw.retro.com (George William Herbert) writes:
The basic truth is that the current website infrastructure environments are highly scalable; they're also a lot more complex than outsiders usually give them credit, and that complexity and functionality are there for good reasons.

and some of the largest aren't even the "interactive" evolved platforms ... unix, m'soft, etc ... but more the legacy-oriented platforms like tandem.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

How to get 128-256 bit security only from a passphrase?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How to get 128-256 bit security only from a passphrase?
Newsgroups: sci.crypt
Date: Thu, 03 Jan 2002 04:01:58 GMT
"David Midkiff" writes:
I don't think it's impossible to remember 45 characters: "In the beginning God created the heavens and " That's 45 characters. A key doesn't have to be all unintelligible characters. It could be obscure phrases. I once memorized 235 digits of pi for a contest in high school. Lame? Yes. But my point is the human brain has a high capacity. If you can remember all the phone numbers you've had in a lifetime you can remember numbers far greater than 45 characters.

As for a practical solution ... it's called public-key cryptography ... like RSA. One doesn't have to remember an entire 256-bit key. All one has to do is establish a session key using public key schemes like RSA. Reusing keys isn't a very smart idea.


the problem with the shared-secret paradigm ... is that every security domain requires a unique shared-secret ... which means that there isn't just one 45 character pass-phrase but tens of them. also english sentences/phrases tend not to be completely random letters, so the amount of entropy is less than the number of characters suggests (a 45 character pass-phrase may only be the equivalent of ??? i don't remember the value but possibly 100 bits? ... even if it isn't a sensible combination of words ... the number of valid word combinations totaling 45 characters is significantly less than the number of 45 random letter combinations).
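
a back-of-envelope in C (the 1.3 bits/character figure is Shannon's classic estimate for english text, roughly 1-1.5 bits/char ... an assumption added here, not from the original post):

#include <stdio.h>
#include <math.h>

int main(void)
{
    double len = 45.0;
    /* uniformly random lowercase letters: log2(26) ~ 4.7 bits each */
    double random_bits  = len * (log(26.0) / log(2.0));  /* ~211 bits */
    /* english text: ~1.3 bits/char (shannon's estimate) */
    double english_bits = len * 1.3;                     /* ~58 bits */

    printf("45 random letters:   ~%.0f bits\n", random_bits);
    printf("45 chars of english: ~%.0f bits\n", english_bits);
    return 0;
}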

one of the benefits of public key cryptography is that the public key can be shared with a large number of different security domains. The corresponding private key needs protection ... which could be with a passphrase .... but note that in this case ... it is a single non-shared passphrase protecting the private key (and so the issue of requiring a unique shared-secret per security domain isn't an issue ... or conversely, there is only a single security domain ... that of protection of the individual's private key).

The issue of the need for a session key depends on whether there is authentication or encryption. Session keys for encryption are a method used in both public key and secret key paradigms ... aka things like the debit/ATM network with secret key technologies have session or even transaction keys (basically network & device shared-secret keys used for exchanging session keys). Within that debit/ATM infrastructure there are also different shared-secret keys, which are the individuals' PINs for transaction authentication.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Jan 2002 21:46:27 GMT
"Stephen Fuld" writes:
Yes, but even if the controller had the memory (BTW, it would have required only 256 bytes!) it would have saved the channel utilization but the controller would still have been tied up reading the disk data, doing the comparisons, etc.

problem was getting the architecture changed ... either a new kind of operation or a non-CKD disk (aka like trying to get VTOC & PDS support for FBA ... the main multi-track search culprits).

they aren't called channel programs for nothing.

effectively the i/o "channel" (nominally an i/o bus) is also effectively the instruction fetch unit.

The i/o controller is somewhat the instruction decode unit.

the combination of the i/o controller and the device (in this case the disk) makes up the execution units.

the CKD disk programming for simple searching is


seek              position the access arm (exception if it fails)
search            compare the current record's key with the search argument
tic   -8          no match: branch back to the search (loop)
read/write        match: the search skipped over the tic to get here

the seek instruction generates an exception if it doesn't work correctly.

the search instruction falls thru to the following instruction if the record just examined doesn't meet the search criteria (equal, less than, greater than, greater or equal, etc). If the record just examined does meet the search criteria, the search instruction generates a "branch" to the next instruction plus 8 (skipping the immediately following instruction). Say a track has ten uniquely keyed records: the channel program could loop ten times examining each record. The "search" instruction is slightly misnamed; it isn't really a search-multiple-records instruction ... it is a check-current-record-for-match instruction (the search operation is implemented with multiple "channel program" instructions in a loop).

Now the channel "architecture" precludes pre-fetch ... each instruction is uniquely executed. The combination of precluding pre-fetch and not having a real "search" operation created a lot of the CKD problem. Effectively, mainframe processor architecture allowed dynamic instruction modification (creating significant performance impact on being able to pre-fetch & pipeline processor instructions), and channel programs allowed the same (creating similar problems with trying to do instruction optimization).

The architecture precluding instruction pre-fetch also creates huge problems for virtual memory operations. Effectively, for virtual memory operations ... a "channel program" (one or more instructions) is completely copied and "translated" (all virtual addresses replaced with real addresses ... all branch instructions re-pointed at the copied program, etc). This worked for all channel programs that weren't self-modifying. Self-modifying channel programs didn't work in the virtual memory environment since the storage getting modified was the "virtual" instructions, not the "real" instructions (similar to the split I/D cache problem in harvard architecture and not being able to easily/directly modify instructions).
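
a minimal sketch in C of that copy-and-translate step (hypothetical structures and names ... not actual CP/67 or VM/370 code, which did this with real CCWs and page tables):

#define CCW_TIC 0x08                 /* transfer-in-channel (branch) */

typedef struct {
    unsigned char  op;               /* channel command code */
    unsigned int   addr;             /* data address, or branch target */
    unsigned char  flags;
    unsigned short count;
} ccw_t;

/* page-table lookup, assumed supplied by the paging code */
extern unsigned virt_to_real(unsigned vaddr);

/* copy the guest's channel program and translate it:
   vbase/rbase are the virtual/real addresses of the two copies */
void translate_ccws(const ccw_t *vprog, ccw_t *rprog, int n,
                    unsigned vbase, unsigned rbase)
{
    int i;
    for (i = 0; i < n; i++) {
        rprog[i] = vprog[i];
        if (vprog[i].op == CCW_TIC)
            /* branch target: re-point into the translated copy */
            rprog[i].addr = rbase + (vprog[i].addr - vbase);
        else
            /* data address: virtual page -> real page */
            rprog[i].addr = virt_to_real(vprog[i].addr);
    }
    /* if the guest now stores into vprog, rprog is unaffected --
       which is exactly why self-modifying channel programs break
       under virtual memory.  (a real implementation also pins the
       data pages and splits transfers that cross page boundaries) */
}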

Searching being implemented as a (channel) program loop (as opposed to a single instruction), plus not being able to prefetch, precluded officially outboarding the search operation into the controller/device.

Now, this restriction also causes problems for channel extension, because of the latency in sequentially fetching & executing each i/o instruction (ESCON, etc). I also encountered this in the early '80s when I was doing the kernel HYPERchannel I/O implementation (in part to be able to remote several hundred people in the IMS group 5-10 miles from STL). The objective was to take all the locally processor-attached devices used by standard programmers and put them 5+ miles away from the data center. A major issue was that the IMS group was all using local, channel-attached 3270 terminals with approx. .25 second system response. The standard SNA answer was "remote" VTAM terminals ... where the basic hardware response was greater than one second (and the "system" response was much greater). some recent related thread
https://www.garlic.com/~lynn/2002.html#7 The demise of compaq

some remote 3270 response threads
https://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2001k.html#46 3270 protocol

HYPERchannel had an A510, a remote device adapter ... basically a box that emulated a mainframe channel ... to which standard mainframe control units could be connected. Because of the HYPERchannel communication latencies ... it wasn't possible for the A510 to rely on correct operation by accessing mainframe memory for all data. Instead, the channel program was copied and downloaded into the A510 for local execution (somewhat analogous to how the virtual memory systems dynamically make a copy of virtual channel programs, creating a translated channel program). However, the standard A510 would take all referenced data from mainframe memory (transferring it over the network). This worked for many things ... but would not work for the "looping" argument to the "search" instruction. The point of this effort was to preserve the IMS group's quarter-second interactive response ... an unanticipated side-effect was about a 10-15% total system thruput improvement because of reduced channel busy/contention (lots of channel processing had been offloaded into the A510 adapters) ... misc references:
https://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
https://www.garlic.com/~lynn/94.html#24 CP spooling & programming technology
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/2000c.html#65 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?

Towards the mid-80s, NSC created an enhanced remote device adapter, the A515, where the seek & search arguments could also be downloaded (supporting CKD devices on HYPERchannel networks). The A515 had extended memory for complete copies of channel programs as well as the necessary processing for seek & search arguments.

One place the A515 was used extensively was at NCAR for the Mesa system ... which effectively implemented one of the early network-attached storage systems (i.e. crays and other processors being able to do direct transfers to/from the ibm mainframe disk farm).

misc. ncar/a515 refs:
https://www.garlic.com/~lynn/99.html#146 Dispute about Internet's origins
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#78 Free RT monitors/keyboards
https://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
https://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
https://www.garlic.com/~lynn/2001l.html#34 Processor Modes

misc. other hyperchannel & hsdt refs:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Turns out in the late '70s, I was wandering around the SJ plant site ... seeing how I could get access to early processor models. It turned out that initial processors went to the processor engineers ... and the 2nd processors went to the disk engineers.

The bldg. 14 & 15 machine rooms had disk "test cells" (secure steel cages with combination locks, inside secure machine rooms, inside secure buildings, inside a secure plant site) containing "engineering devices" undergoing R&D. There was a complex switch setup since normal operation of a single test-cell under MVS gave MVS an MTBF of 15 minutes .... so test-cells were connected one at a time to a processor and special stand-alone code was used to drive the devices (some FRIEND based). So, I decided that if I could totally rewrite the kernel I/O supervisor so it absolutely wouldn't crash ... SJ could migrate from a scheduled stand-alone environment ... to any number of concurrent test-cells operational under an operating system. The benefit to the engineers was that they could now have a dozen test-cells working concurrently ... instead of one at a time. The benefit to me was that even with a dozen concurrent test cells ... the processor utilization was less than one percent ... which meant that there were all sorts of interesting things I could think of doing with the other 99% of the processors. random refs:
https://www.garlic.com/~lynn/subtopic.html#disk

The downside ... was when problems occurred ... I would get called since they would first blame "my" operating system. Turns out mostly what i encountered when i went over were engineering hardware problems. This in turn led to two developments:

1) I got asked to all of these POK/SJC things between the processor/channel engineers and the disk/controller engineers. when I asked why, one of the engineers explained that most of the senior engineers that had previously overseen this sort of stuff had left in the mid-70s (the SJC senior engineers appear to have been the core of the non-IBM disk efforts, various IBM plug-compatible disk vendor efforts, and various things like floppy and small-disk startups).

2) I got to complain loudly about CKD being a 1960s paradigm, and said I would write the software for a 1980s paradigm if the engineers would build it for me. That was part of the problem where STL quoted $26m for effectively "FBA" support for VTOC & PDS ... eliminating the CKD (multi-track) search loop.

somebody tells this joke that i used to work 1st shift in research/bldg.28, 2nd shift in engineering/bldg14 & product-test/bldg15, and 3rd shift in STL/bldg90.

Note this was still pre-DB2 ... I had helped do the technology transfer from research on System/R to Endicott for SQL/DS. Baker had yet to do the technology transfer from Endicott to STL for DB2 (so STL was still mostly IMS, access methods, and misc. other stuff). It is interesting that System/R went all the way to Endicott (upstate NY) and back to STL ... when research/28 and STL/90 were only ten miles apart.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

The demise of compaq

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The demise of compaq
Newsgroups: comp.os.vms,comp.arch,comp.sys.intel,alt.folklore.computers
Date: Thu, 03 Jan 2002 22:06:54 GMT
jeffj@panix.com (Jeff Jonas) writes:
I have hurt feelings about the way IBM connectivity evolved since I worked for related companies. AT&T Teletype made IBM compatible terminals and controllers that we used extensively internally, yet few folks outside of AT&T were brave enough to use non-IBM terminals since IBM was apparently very harsh and downgraded service to folks with mixed vendors.

as an aside ... besides the issues with middleware and SAA ... the communication division didn't particularly like me.

as an undergraduate, there were four of us that built our own IBM controller (reverse engineered channel protocol, built wire-wrap channel attachment board, etc); it was a replacement for the 2702 terminal controller and is also credited with originating the ibm plug-compatible controller (PCM) business, misc. refs:
https://www.garlic.com/~lynn/submain.html#360pcm

another, in the late '80s: I was trying to do a project that would port a pu4/pu5 emulator to a rios/power processor; it would have had 10 times the performance, 10 times the function, 1/10th the cost, and 10 times the availability of the communication division's 37x5 boxes (and would have also helped with distributed/network access to the backend disk farms):
https://www.garlic.com/~lynn/99.html#67
https://www.garlic.com/~lynn/99.html#70

and about the same time, I also did the RFC1044 implementation and was "tuning" the thruput at cray research between a Cray and a 4341-class processor so that it was running at 4341 hardware channel speed (1mbyte/sec) ... using only a nominal amount of the 4341. By comparison, the communication division wanted to see TCP thruput be less than LU6.2 thruput (i.e. the base non-RFC1044 support got about 44kbytes/sec and would consume nearly a full 3090 processor).
https://www.garlic.com/~lynn/subnetwork.html#hsdt
https://www.garlic.com/~lynn/internet.htm

speaking of AT&T ... I had helped supply a customized operating system to AT&T longlines in the 70s ... into which it effectively disappeared. Nearly ten years later the IBM branch office tracked me down (in a completely different job) to try and help AT&T get off the kernel. the operating system continued to run on new generations of processors for nearly ten years ... but finally a processor came out that the modified operating system wouldn't automatically migrate to ... which meant that IBM wasn't going to sell longlines new processors unless something could be done about the operating system.

random longlines refs:
https://www.garlic.com/~lynn/95.html#14 characters
https://www.garlic.com/~lynn/96.html#35 Mainframes & Unix (and TPF)
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/2000.html#5 IBM XT/370 and AT/370 (was Re: Computer of the century)
https://www.garlic.com/~lynn/2000f.html#60 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001f.html#3 Oldest program you've written, and still in use?
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

A terminology question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A terminology question
Newsgroups: sci.crypt
Date: Thu, 03 Jan 2002 22:21:40 GMT
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
Are the terms 'covert channel' and 'subliminal channel' equivalent or not? Thanks.

M. K. Shen


I'm not sure about subliminal ... it is possible that a subliminal channel is a type of covert channel. I think subliminal implies that you aren't aware of it ... whether or not it violates security policy ... while covert means that it violates security policy even if you are aware of it.

from:
https://www.garlic.com/~lynn/secure.htm
covert channel

(1) A communication channel that allows a process to transfer information in a manner that violates the system's security policy. A covert channel typically communicates by exploiting a mechanism not intended to be used for communication. (2) The use of a mechanism not intended for communication to transfer information in a way that violates security. (3) Unintended and/or unauthorized communications path that can be used to transfer information in a manner that violates an AIS security policy. [AJP] (I) An intra-system channel that permits two cooperating entities, without exceeding their access authorizations, to transfer information in a way that violates the system's security policy. (O) 'A communications channel that allows two cooperating processes to transfer information in a manner that violates the system's security policy.' (C) The cooperating entities can be either two insiders or an insider and an outsider. Of course, an outsider has no access authorization at all. A covert channel is a system feature that the system architects neither designed nor intended for information transfer:

'Timing channel': A system feature that enables one system entity to signal information to another by modulating its own use of a system resource in such a way as to affect system response time observed by the second entity.

'Storage channel': A system feature that enables one system entity to signal information to another entity by directly or indirectly writing a storage location that is later directly or indirectly read by the second entity.

[RFC2828] A communication channel that allows a process to transfer information in a manner that violates the system's security policy. [TCSEC] A communications channel that allows a process to transfer information in a manner that violates the system's security policy. A covert channel typically communicates by exploiting a mechanism not intended to be used for communication. [TNI] A communications channel that allows two cooperating processes to transfer information in a manner that violates the system's security policy. [AFSEC][NCSC/TG004] Any communication channel that can be exploited by a process to transfer information in a manner that violates the system's security policy. [IATF] The use of a mechanism not intended for communication to transfer information in a way which violates security. [ITSEC] Unintended and/or unauthorized communications path that can be used to transfer information in a manner that violates an AIS security policy. [FCv1] (see also overt channel, security-compliant channel, storage channel, timing channel, channel, exploitable channel, security, threat) (includes covert storage channel, covert timing channel)


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

A terminology question

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A terminology question
Newsgroups: sci.crypt
Date: Thu, 03 Jan 2002 23:01:06 GMT
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
From all literatures cited in this thread till now, it seems to me to be the case that a subliminal channel is a (special case of a) covert channel, if one wants to stress a difference (nuance) between the terms, though e.g. RSA ascribes to them the same meaning. Anyway, I suppose that it is not a big sin, if one chooses, for example, to follow Schneier or RSA in the use of the words for crypto contexts.

M. K. Shen


simple, just make all channels not known (subliminal) to the security officer, a violation of security policy (covert). security officers are big on banning all things that they don't know about and only permitting things that they explicitly permit (aka you can only do what i tell you that you can do ... as opposed to you can do anything that you are not forbidden to do).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Jan 2002 23:54:01 GMT
"Stephen Fuld" writes:
Yes - But think of "wasting" a few hundred bytes of main memory to keep the names and addresses of the few most popular programs. Perhaps that could have eliminated many of the searches for program loading.

a similar ... but totally different scenario was linear searching of some in-core structures.

When I first saw the CP/67 operating system ... it had very significant processor overhead. Over that first year (i was still an undergraduate) I did some significant pathlength optimizations ... reducing overall kernel processing by up to 80% for many common things ... and some selected pathlengths by possibly 100 times with things like "fastpath" ... random ref:
https://www.garlic.com/~lynn/94.html#18

with such significant reductions ... other/new areas of the kernel were starting to dominate kernel processor time. One of the major areas was kernel storage management/allocation. Basically, free storage was ordered on a chain by real memory address. When storage was released, it was put into the chain based on its real memory address (with a check made to see if adjacent storage locations could be combined into a larger storage block). When requesting a block of storage, the chain was scanned for a block with an exact size match ... and if that failed, the first larger block was taken (and appropriately adjusted). (a small sketch of the scheme appears below)

This process was now starting to consume 20 percent or more of total kernel processor utilization (chain lengths were frequently 300-400 elements). This is analogous to the FINWAIT issue recently mentioned in another thread in this NG (although the FINWAIT scanning could hit 99 percent of total cpu utilization):
https://www.garlic.com/~lynn/2002.html#3 The demise of compaq
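
a sketch in C of that allocation scheme (hypothetical, not CP/67 source): one free chain ordered by address, exact-fit scan with first-larger fallback, and address-ordered release with merging ... both walks O(chain length):

#include <stddef.h>

struct blk { struct blk *next; size_t size; };
static struct blk *chain;            /* ordered by (real) address */

void *free_alloc(size_t want)
{
    struct blk **pp, **bigger = 0, *b;

    for (pp = &chain; (b = *pp) != 0; pp = &b->next) {
        if (b->size == want) {       /* exact fit: unchain and return */
            *pp = b->next;
            return b;
        }
        if (b->size > want && !bigger)
            bigger = pp;             /* remember first larger block */
    }
    if (bigger) {                    /* split the first larger block */
        b = *bigger;
        struct blk *rest = (struct blk *)((char *)b + want);
        rest->size = b->size - want; /* (real code would guard against */
        rest->next = b->next;        /*  a remainder too small to hold */
        *bigger = rest;              /*  the header) */
        return b;
    }
    return 0;
}

void free_release(void *p, size_t size)
{
    struct blk *b = p, **pp;
    b->size = size;
    /* walk to the address-ordered position */
    for (pp = &chain; *pp && (char *)*pp < (char *)b; pp = &(*pp)->next)
        ;
    if (*pp && (char *)b + b->size == (char *)*pp) {
        b->size += (*pp)->size;      /* merge with the following block */
        b->next  = (*pp)->next;
    } else
        b->next = *pp;
    *pp = b;                         /* (a full version also merges with
                                        the preceding block) */
}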

The 360/67 had a special RPQ (I believe first specified by lincoln labs) instruction called the Search List (mnemonic SLT).

random refs:
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/2000d.html#47 Charging for time-share CPU time
https://www.garlic.com/~lynn/2001c.html#15 OS/360 (was LINUS for S/390)
https://www.garlic.com/~lynn/2001d.html#23 why the machine word size is in radix 8??
https://www.garlic.com/~lynn/2001d.html#33 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001h.html#71 IBM 9020 FAA/ATC Systems from 1960's
https://www.garlic.com/~lynn/2001i.html#15 IBM 9020 FAA/ATC Systems from 1960's

The search list instruction could follow chains and perform various kinds of matching ... and would do it 2-3 times faster than a tightly coded assembler loop (i.e. 5-10 percent instead of 20 percent of kernel processing).

However, around 1970, the CSC group invented storage subpools. Storage with certain common characteristics was allocated from a LIFO subpool chain ... which could be done in a total of around 14 instructions. This reduced storage management from around 20 percent (growing to 30 percent as loads increased) to typically less than five percent of kernel processing and eliminated the need for the linear search (and the need for the SLT instruction to do fast linear searching).
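
the subpool fast path, sketched in C (hypothetical structures ... the real thing was a handful of assembler instructions): allocate/release of a common block size becomes a LIFO push/pop instead of a chain scan:

#define NSUBPOOL 16                  /* one pool per common block size */

struct node { struct node *next; };
static struct node *subpool[NSUBPOOL];

void *subpool_alloc(int pool)
{
    struct node *n = subpool[pool];
    if (n)                           /* fast path: LIFO pop */
        subpool[pool] = n->next;
    return n;                        /* 0 => fall back to the general
                                        chain scan shown earlier */
}

void subpool_release(int pool, void *p)
{
    struct node *n = p;              /* fast path: LIFO push */
    n->next = subpool[pool];
    subpool[pool] = n;
}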

Later, when a couple of us were working on migrating kernel code into machine hardware ... for the CP/67 successor VM/370 ... we measured


fre+5a8         73628   132     3.77    'FRET'
fre+8           73699   122     3.47    'FREE'

... "FRET" is kernel storage deallocation, "FREE" is kernel storage allocation ... for a total of 3.77+3.47 = 7.24 percent of kernel processor time. ref:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist.

the mainframe got even more complex support with Luther's radix partition tree/sort hardware instructions:
https://www.garlic.com/~lynn/98.html#19 S/360 operating systems geneaology
https://www.garlic.com/~lynn/98.html#20 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/2001.html#2 A new "Remember when?" period happening right now
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions

note: see the URL pointer in the "most complex instructions" posting ... earlier URL references for the mainframe "principles of operation" instruction manual pointed to a server in POK ... I think the official server is now a server in boulder.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Thu, 03 Jan 2002 23:59:57 GMT
"Stephen Fuld" writes:
I understand. Not to mention that if IBM had made the interface more straight forward, then it would have been easier to PCM and you would have had more competitors (the Ellen Hancock scenario again). But I do wish you had succeeded :-).

but they also blame us for the PCM stuff in the first place
https://www.garlic.com/~lynn/submain.html#360pcm

some speculation that the whole pu4/pu5 interface is the way it is because of our having originated the plug compatible controller business (reference recent "The demise of compaq" thread postings going on concurrently in the same n.g.)

note that STK has since bought NSC.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 04 Jan 2002 02:54:38 GMT
"Stephen Fuld" writes:
Is that the Com FE you have mentioned? I have heard several stories about the first storage PCM. I don't know who was first there.

i'm not positive but the first disk/controller/storage PCM may have been memorex ... one of the engineers ... who among other things was responsible for a lot of the 2321 datacell ... went to memorex to do PCM disk & controller. He and the business guy then left and formed BLI (& hired epstein) to do the first relational database engine (i.e. a box that attached to an ibm channel & did relational database disk searches).

when epstein left (for teradata and then to found sybase), BLI recruited somebody i was working with ... who tried to talk me into going with him.

there has been this joke in the valley about there only being 200 people total in the business ... they just keep moving around.

random refs:
https://www.garlic.com/~lynn/2000e.html#20 Is Al Gore The Father of the Internet?
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?
https://www.garlic.com/~lynn/2002.html#0 index searching

random data cell refs:
https://www.garlic.com/~lynn/2000.html#9 Computer of the century
https://www.garlic.com/~lynn/2000b.html#41 How to learn assembler language for OS/390 ?
https://www.garlic.com/~lynn/2001.html#17 IBM 1142 reader/punch (Re: First video terminal?)
https://www.garlic.com/~lynn/2001.html#51 Competitors to SABRE?
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Fri, 04 Jan 2002 22:39:24 GMT
misc extracts from various web resources ....

http://www.disktrend.com/disk3.htm
But IBM wasn't standing still. In 1961, having joined IBM right out of school 10 years earlier, Alan F. Shugart took over a project called the Advanced Disc File which, like its parent, the RAMAC 350, packed fifty 24-inch disks. But now, for the first time, self-acting slider bearings made it possible to have a head for each disk surface; it was no longer necessary to move heads up and down to reach different disks. This development did away with the need for an air supply. It also improved areal density and access time. A lot.

On Jan. 25, 1968, a month after the Dirty Dozen left IBM, Shugart was promoted to the position of product manager for direct-access storage devices at IBM. He reported to Vic Witt, as had the Dirty Dozen. But in the summer of 1969, Memorex hired Shugart to take over its young disk-drive operation as vice president of product development.

After Shugart joined Memorex, a large number of IBMers followed. A very large number. Some estimates had the number at 200. Recruiting was casual. Nursing a beer or two, Shugart would hang around the eatery where many IBM engineers would lunch. Casual old-buddy greetings would ensue and, pretty soon, the disk-drive staff at Memorex would grow while that at IBM would shrink. Shugart was assisted by a telephone.

By 1971, Shugart was responsible for all product development at Memorex. In January 1973, he left to found Shugart Associates. In December 1974, impatient with the absence of products ready for the market place, the venture capitalists who funded Shugart Associates fired Alan Shugart. Or maybe he quit.

The difference, Shugart said later, was about five microseconds. Shugart was replaced by Don Massaro, a cofounder and director of engineering. Under his leadership, the company brought out the first 5.25 inch floppy drive in 1976. A few years later, Xerox bought the company and closed it down within three years.

In the late 1950s, IBM was developing a disk drive called RAMAC, in order to support a software application for storing data about manufacturing processes. The first disk drives were huge: They stood vertically on end, were three feet in diameter, and required pumped air pressure to spin. But they only stored about 5,000 characters, what we would refer to as 5K bytes today.



http://www.mhhe.com/cit/concepts/oleary/olc/chap5/compulore5.html

Shugart, who joined IBM right out of college, was assigned the Advanced Disc File project, trying to squeeze more efficiency out of RAMAC's progeny, now down to only two feet in diameter. He was responsible for perfecting the technology that used multiple disk platters and multiple read/write heads. He succeeded, and in 1968 was promoted to product manager for direct-access storage devices at IBM.

By this time he had become a much-sought-after disk storage engineer, and Memorex hired him as vice president of disk drive development. In 1971, he invented the first floppy disk, eight inches in diameter, with a storage capacity of 128K (today's 3.5in floppy is over ten times that amount). The first practical use for the disk was with the IBM Displaywriter, a huge dedicated (single-task) word processing machine.

Restless to see the floppy disk drive succeed, Shugart left Memorex in 1973 to found Shugart Associates. However, he was forced out a year later, while his company went on to develop and introduce the first 5.25 inch floppy drive in 1976. Unable to stay away from the storage industry, in 1979 he founded Shugart Technology, with Finis Conner, to manufacture hard-disk drives. The company was soon renamed Seagate Technology, which became a $9 billion company and remains in business to this day. Finis Conner went on to launch Conner Peripherals. Interestingly, Seagate and Conner (re)merged in 1996.



http://moore.sac.on.ca/NORTHERNLYNX/northern%20lynx/hdrive.htm

In 1970, IBM expanded the computer's ability to store data when it introduced the memory disk, an 8-inch plastic disk designed for the IBM Merlin 3330 with a capacity of 100Mbytes. Now called the floppy disk, this memory disk could be removed from the computer making it possible, for the first time, to move information from one computer to another. Alan Shugart is credited with inventing the floppy. In the summer of 1969, Shugart left IBM, with around 200 other IBM employees following him, for the position of vice-president of product development at Memorex. In 1973, Shugart left Memorex to create Shugart Associates and develop the first 5.25-inch floppy drive for Wang Laboratories. In 1979 Shugart and Finis Conner founded Shugart Technology which changed its name in 1980 to Seagate Technology. Seagate Technology introduced the first non-IBM hard disk in 1980 as the ST506 with a 5.25 inch drive, stepper motor and a capacity of 5Mbytes. Soon other companies, including Sony, in 1981, with the first 3.5-inch floppy drive and Rodime, in 1983, with the first 3.5-inch rigid disk drive, were producing hard disks.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Infiniband's impact was Re: Intel's 64-bit strategy

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Infiniband's impact was Re: Intel's 64-bit strategy
Newsgroups: comp.arch,alt.folklore.computers
Date: Sat, 05 Jan 2002 18:16:10 GMT
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
You can eliminate the problem of latency for large transfers by using the techniques I was mentioning earlier, as was done on MVT and similar systems of the 1960s and 1970s. The only practical solution for small ones is to have compound interfaces; e.g. MPI

one aspect that affected everybody in the country ... was that the IBM check-sorter required low-latency, nearly real-time response from the processor. Because of the switch to virtual storage and the extra layer between real i/o and the application ... there was some non-standard operating system tweaking that had to be done in the "VS" versions (i.e. MFT->VS1; MVT->VS2).

there is the story of Federal Express getting started by picking up checks from all over the country in the evening and flying to nashville for a country-wide check sort ... and then the return flights back after the sort (things like bill-payment checks mailed to out of area addresses, cleared thru the payor's bank ... and having to find their way back to payee's bank). I believe flights had to arrive by 10 or 11pm and then they were back in the air around 2am.

refs:
https://www.garlic.com/~lynn/99.html#155
https://www.garlic.com/~lynn/99.html#136a

there were also local bank sorting for intra-region exchanges (checks not needing to be sent to a national sort).

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt
Date: Sat, 05 Jan 2002 21:07:00 GMT
two buffer-overflow items in today's posting to comp.risks

http://www.csl.sri.com/users/risko/risks.html
http://www.csl.sri.com/users/risko/risks.txt


See last item for further information, disclaimers, caveats, etc.
This issue is archived at <URL:http://catless.ncl.ac.uk/Risks/21.84.html>
and by anonymous ftp at ftp.sri.com, cd risks .

  Contents:

Peak time for Eurorisks (Paul van Keep)
More Euro blues (Paul van Keep)
 ING bank debits wrong sum from accounts (Paul van Keep)
Euro bank notes to embed RFID chips by 2005 (Ben Rosengart)
 TruTime's Happy New Year, 2022? (William Colburn)
Airplane takes off without pilot (Steve Klein)
Harvard admissions e-mail bounced by AOL's spam filters (Daniel P.B. Smith)
Risk of rejecting change (Edward Reid)
Security problems in Microsoft and Oracle software (NewsScan)
Buffer Overflow security problems (Henry Baker, PGN)
 Sometimes high-tech isn't better... (Laura S. Tinnel)
When a "secure site" isn't (Jeffrey Mogul)
Abridged info on RISKS (comp.risks)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Younger recruits versus experienced veterans ( was Re: The demise of compa

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Younger recruits versus experienced veterans  ( was Re: The demise  of compa
Newsgroups: alt.folklore.computers
Date: Sat, 05 Jan 2002 21:21:32 GMT
Brian Inglis writes:
If at that time we had the source control systems and id strings in the binaries that we have available now, I could have just emailed them the file name and line number to fix, instead of calling the programmers, asking them where they kept the latest source code for their programs, which module used arrays, telling them they weren't checking their array bounds, and they were set too low for production work anyway. Most of them thought ten was a big enough size for all arrays.

there has been a buffer overflow thread running in the sci.crypt newsgroup. One of my past comments was that when we did vulnerability analysis as part of our HA/CMP project ... the prediction was that C-language environments would have 10 times (to possibly 100 times) higher occurrence of buffer overflow problems & exploits than what we had been used to in other environments (rough assumption given similar applications & similar programming skill levels .... purely a characteristic of the C-language environment ... but awareness of the problem might lead some to exercise caution and compensating procedures).

there are a couple items on the buffer overflow & array bound checking subject in today's posting to comp.risks
http://www.csl.sri.com/users/risko/risks.html
http://www.csl.sri.com/users/risko/risks.txt

namely
Security problems in Microsoft and Oracle software (NewsScan)
Buffer Overflow security problems (Henry Baker, PGN)

misc. cluster & ha/cmp related postings
https://www.garlic.com/~lynn/subtopic.html#hacmp

general exploit, fraud, & risk postings
https://www.garlic.com/~lynn/subintegrity.html#fraud

some random past postings
https://www.garlic.com/~lynn/aadsm9.htm#cfppki10 CFP: PKI research workshop
https://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)
https://www.garlic.com/~lynn/99.html#219 Study says buffer overflow is most common security bug
https://www.garlic.com/~lynn/2000.html#30 Computer of the century
https://www.garlic.com/~lynn/2000g.html#50 Egghead cracked, MS IIS again
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001d.html#58 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#52 misc loosely-coupled, sysplex, cluster, supercomputer, & electronic commerce
https://www.garlic.com/~lynn/2001k.html#43 Why is UNIX semi-immune to viral infection?
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2001n.html#72 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#76 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#84 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
https://www.garlic.com/~lynn/2002.html#4 Buffer overflow
https://www.garlic.com/~lynn/2002.html#5 index searching
https://www.garlic.com/~lynn/2002.html#11 The demise of compaq

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

"blocking factors" (Was: Tapes)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: "blocking factors" (Was: Tapes)
Newsgroups: alt.folklore.computers
Date: Sun, 06 Jan 2002 00:50:51 GMT
as was pointed out, there was a little memory leakage (inter record gap size) over 20 years.

so from:
https://web.archive.org/web/20030115223723/http://www.digital-interact.co.uk/site/html/reference/media_9trk.html
....

Because these drives were originally start stop drives they needed space between data blocks in order to be able to do this. This space is known as the inter block gap (ibg) and is 0.6 inches for 800/1600 and 0.3 inches for 6250. This need for an ibg caused the tape usage to be very inefficient when used with small block sizes.

...

The following chart gives an indication of the capacities of using a standard 2400 ft tape with varying block sizes.


           512 Byte   1 Kbyte   2 Kbyte    4 Kbyte    8 Kbyte    Gapless
800 bpi     11.6 Mb   15.3 Mb   18.3 Mb    20.2 Mb    21.4 Mb    22.6 Mb
1600 bpi    15.5 Mb   22.5 Mb   30.0 Mb    36.1 Mb    40.2 Mb    45.3 Mb
6250 bpi    35.1 Mb   58.6 Mb   87.6 Mb   116.8 Mb   140.7 Mb   176.8 Mb
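
the chart is roughly consistent with a simple model ... capacity = (tape length / (block length on tape + ibg)) * block size. a minimal sketch in C (assumptions: the full 2400 ft is usable and there is no per-block overhead beyond the ibg ... so it comes out a bit high, more so for 6250 GCR, which has extra per-block overhead not modeled here):

#include <stdio.h>

int main(void)
{
    double tape_in = 2400.0 * 12.0;        /* 2400 ft reel, in inches */
    int    bpi[]   = { 800, 1600, 6250 };
    double ibg[]   = { 0.6, 0.6, 0.3 };    /* inter-block gap, inches */
    int    blk[]   = { 512, 1024, 2048, 4096, 8192 };

    for (int d = 0; d < 3; d++) {
        printf("%4d bpi:", bpi[d]);
        for (int b = 0; b < 5; b++) {
            double blk_in = (double)blk[b] / bpi[d] + ibg[d];
            printf(" %6.1f", tape_in / blk_in * blk[b] / 1e6);
        }
        printf("   gapless %6.1f (Mb)\n", tape_in * bpi[d] / 1e6);
    }
    return 0;
}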

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 06 Jan 2002 01:25:59 GMT
"Stephen Fuld" writes:
Huh? What about the 2311 and 2314? I know they weren't floppies, but couldn't they be used "to move information from one computer to another" and didn't they predate 1970?

wording of some of those web histories leaves something to be desired.

here is old table of 2305, 2314, 3310 (fba), 3330 (-11), 3350, 3370 (fba) & 3380
https://www.garlic.com/~lynn/95.html#8

some model numbers:
https://www.garlic.com/~lynn/2001l.html#63

I was doing some searching for 2311 reference and ran across "Computing at Columbia Timeline"
http://www.columbia.edu/acis/history

which has a number of interesting things with pictures 1301 disk:
http://www.columbia.edu/cu/computinghistory/1301.html
totally unrelated 407:
http://www.columbia.edu/cu/computinghistory/407.html
& 360/91
http://www.columbia.edu/cu/computinghistory/36091.html
2301 "drum"
http://www.columbia.edu/cu/computinghistory/drum.html
2311 disk drive (foreground):
http://www.columbia.edu/cu/computinghistory/2311.html
2321 datacell (background):
http://www.columbia.edu/cu/computinghistory/datacell.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 06 Jan 2002 17:14:09 GMT
"Douglas A. Gwyn" writes:
Actually who gets the blame in my book are the vendors who just slapped their own name on the system without fixing the problems.

with reference to the vulnerability analysis in the late '80s in support of HA/CMP .... and identifying buffer length issues as a major vulnerability in the C programming environment ... one of the things we looked at was Reno & Tahoe (which at that time a large number of vendors were using as their tcp/ip support)

a little more HA related ... minor other things

1) we would have liked to have any application able to specify a "trap" to catch all ICMPs that came back associated with any packet it sent out (possibly as much a protocol issue as an implementation issue). Not having that made it easier for various things including DOS attacks
https://www.garlic.com/~lynn/99.html#48 Language based exception handling

2) one of the things we wanted as part of HA was ip-address take-over. Now, the ARP cache nominally has a time-out so that clients would eventually re-ARP periodically and get the (new) MAC address. However, there was a bug in the Tahoe/Reno code in the IP layer just before calling the ARP lookup, where it kept a one-deep cache of the last MAC address. If the previously used IP-address was the same as the current IP-address, it would bypass calling ARP-lookup and use the saved MAC address (this didn't have a time-out). There turned out to be a fairly large number of client configurations on small nets where 99.9 percent of traffic is client to the same server ... or clients on a subnet where all their traffic goes thru a router. As a result, all traffic was for the same IP-address, so the IP-layer never directly called the ARP code (where MAC addresses did time-out) ... and there was no management code &/or time-out for the single saved MAC address (a minimal sketch of this pattern follows below). For some period, it seemed like 99.99 percent of all deployed machines in customer shops had that "bug" (because it seemed like nearly every vendor in the universe was using Tahoe/Reno for their IP implementation).
https://www.garlic.com/~lynn/94.html#16 Dual-ported disks?
https://www.garlic.com/~lynn/96.html#34 Mainframes Unix

3) marginally related was the problem of getting browser vendors to include support for multiple A-records. we got changes into the servers that interfaced to the payment gateway ... but it was a harder problem getting browser vendors (even tho tahoe/reno clients did support multiple A-records, that support unfortunately predated the advent of browser clients)
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#158 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#159 Uptime (was Re: Q: S/390 on PowerPC?)
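
a minimal sketch in C of the one-deep cache pattern described in (2) above (names hypothetical):

#include <stdint.h>
#include <string.h>

/* the real ARP cache ... entries time out and re-ARP */
extern int arp_lookup(uint32_t ip, uint8_t mac[6]);

static uint32_t last_ip;     /* one-deep cache ... never invalidated */
static uint8_t  last_mac[6];

int resolve(uint32_t ip, uint8_t mac[6])
{
    if (ip == last_ip) {     /* the bug: bypasses ARP entirely, so on a
                                net where all traffic is to one address
                                an ip-address take-over is never seen */
        memcpy(mac, last_mac, 6);
        return 0;
    }
    if (arp_lookup(ip, mac) != 0)
        return -1;
    last_ip = ip;            /* refresh the one-deep cache */
    memcpy(last_mac, mac, 6);
    return 0;
}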

random refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp hacmp
https://www.garlic.com/~lynn/subintegrity.html#fraud fraud, exploits, vulnerabilities

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 06 Jan 2002 22:29:55 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
First, "traps", "interrupts", and other asynchronous events always ask for trouble and must be avoided when at all possible. Without more care than practice (as opposed to theory) shows is reasonable to expect from most programmers, asynchronous stuff produces too many difficult to diagnose bugs.

Second, connected sockets do get informed of appropriate ICMP errors in the BSD code. Sockets that are not connected don't, because the only clue that a system receiving an ICMP message has is a string of bytes that supposedly came from the packet that caused the error. The BSD code looks for a local socket with a destination address including port number that matches the bytes in the ICMP message, and arranges that the next system call on that socket will be told of the error. Sockets that are not connected don't hear about ICMP errors. One practical reason is that the system has no record of which sockets might have sent the packet that generated the ICMP error. Note that more than one socket might have been responsible for the packet that caused the ICMP error message, because ICMP error messages do not contain enough information to uniquely identify the sending socket. Another reason is that by the time an ICMP message arrives, the responsible socket might have sent several other UDP datagrams, and an API for telling the application "the operation you did 37 milliseconds ago had this problem" (or equivalent) is a can of worms experienced application writers and operating system designers do not want to open.

Third, ICMP error messages are not authenticated, and even when not forged, should almost always be ignored. The classic example of the need to ignore ICMP errors is that Unreachables for TCP packets should be ignored at least when the state machine is not in ESTABLISHED and perhaps in all states. There are very few ICMP messages that are not best ignored.


the issue is somewhat the difference between straightline application development and industrial strength services. Many of the batch derived platforms had extensive, optional trap & error handling semantics that few "normal" applications used but many of the industrial strength services would make extensive use of. Batch oriented systems tended to evolve more sophisticated services in this area because their applications already had the sense of running "unattended" ... and any industrial strength service application tended to have a design point of automating as much of the processing (including exception handling) as possible.

As referenced earlier ... one of the largest financial settlement networks attributed its 100% availability over a several-year period to two things: 1) a form of geographically distributed clustering (three site replication) and 2) automated operator. In a batch oriented, automated service ... humans providing machine care & feeding were called operators ... and to the extent that a service application interacted with humans for operational issues ... it was with these operator/humans (even in large scale "online" services that might have tens of millions of human service interactions/day ... say ATM cash machines ... the end-user/consumer interactions weren't classified as operational interactions ... just simply service interactions).

In any case, as other forms of failures were eliminated and/or automagically handled ... human mistakes started becoming the number one cause of application service failures/outages. automated operator methodology started providing expert-type systems that attempted to programmatically handle as many of the "operator" services & decisions as possible.

In any case, while IP had to follow various kinds of heuristics to try and figure out how to push ICMP packets up the protocol stack ... other types of systems handled the function by allowing things like opaque handles/tags to be carried in both outgoing packets and included in responses. The service/implementation/stack could then use the opaque handle for origin identification purposes ... something that IP sort-of attempts to approximate by looking for string matches.
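
a minimal sketch in C of the opaque handle idea (hypothetical wire format, not any real protocol): the tag rides out in every request and comes back verbatim in any response or error, so the stack can hand an error to its originator by direct lookup, instead of string-matching the quoted packet the way IP does with ICMP:

#include <stdint.h>

struct req_hdr {
    uint32_t opaque_tag;   /* chosen by sender; meaningless to the net */
    uint16_t opcode;
    uint16_t length;
};

struct err_hdr {
    uint32_t opaque_tag;   /* copied verbatim from the failing request */
    uint16_t err_code;
    uint16_t reserved;
};

/* on error arrival: originator = lookup_by_tag(err->opaque_tag); */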

In other network environments, various large scale service applications would implement extremely sophisticated (possibly domain-specific) error, diagnostic and recovery processes. Part of the issue was whether or not the TCP/IP implementation and protocol had a design point for

1) the straight-forward, simple application environment with huge point-to-point anarchy ...

and/or

2) the high-end industrial strength, automated service application environment ... with highly asymmetric implementation requirements ... the client-side end could look similar to #1, but the server end could involve radically different implementation and highly sophisticated technology.

frequently it is easy to subset #2 for the simpler application environment (i.e. the asymmetric characteristic with the client-end being radically simpler than the service server operation).

This is part of the claim that the effort of taking a normal, straightline, high quality application and turning it into an industrial strength service application may require 4 to 10 times more code than in the base application (and possibly significantly more complex code than in the base application, since it is at least targeted at addressing many of the really hard exception conditions).

... aka a claim could be made that 1) interactive-derived platforms, 2) most C language environments, and 3) much of TCP/IP were not really targeted at the high-end, industrial strength, highly automated service oriented application delivery market.

Many of the industrial strength high-end online services have been platformed on various legacy platforms because of the various industrial automation facilities. Even some of the higher-end web services have migrated to such platforms.

As an aside note ... there have been numerous proposals over the years that ISP infrastructures (at all levels) discard incoming packets with origins not consistent with their routing tables (i.e. most probably spoofed packets). Possibly part of the justification for not doing that is that most environments have other compensating procedures for dealing with spoofed packets. However, from an industrial strength service application standpoint, if the frequency of spoofed packets were reduced by even 95 percent, it would make their heuristics work much better.

In general, many of the arguments seem to be that there are so many things that are effectively anarchy that we should ignore them ... and there is no point in fixing any of the anarchy because everybody ignores them anyway.

misc. past IP spoofing postings:
https://www.garlic.com/~lynn/aadsm2.htm#integrity Scale (and the SRV record)
https://www.garlic.com/~lynn/aadsmore.htm#killer1 Killer PKI Applications
https://www.garlic.com/~lynn/99.html#160 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2001e.html#40 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001g.html#16 Root certificates
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement

random industrial strength refs:
https://www.garlic.com/~lynn/aadsm2.htm#architecture A different architecture? (was Re: certificate path
https://www.garlic.com/~lynn/aadsm8.htm#softpki19 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsmail.htm#parsim parsimonious
https://www.garlic.com/~lynn/aepay6.htm#erictalk2 Announce: Eric Hughes giving Stanford EE380 talk this
https://www.garlic.com/~lynn/aepay6.htm#crlwork do CRL's actually work?
https://www.garlic.com/~lynn/ansiepay.htm#x959bai X9.59/AADS announcement at BAI
https://www.garlic.com/~lynn/94.html#2 Schedulers
https://www.garlic.com/~lynn/94.html#44 bloat
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#31 Mainframes & Unix
https://www.garlic.com/~lynn/96.html#32 Mainframes & Unix
https://www.garlic.com/~lynn/97.html#15 OSes commerical, history
https://www.garlic.com/~lynn/98.html#4 VSE or MVS
https://www.garlic.com/~lynn/98.html#18 Reviving the OS/360 thread (Questions about OS/360)
https://www.garlic.com/~lynn/98.html#51 Mainframes suck? (was Re: Possibly OT: Disney Computing)
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#107 Computer History
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/99.html#136a checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#224 X9.59/AADS announcement at BAI this week
https://www.garlic.com/~lynn/2000.html#10 Taligent
https://www.garlic.com/~lynn/2000.html#22 Computer of the century
https://www.garlic.com/~lynn/2000e.html#46 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000e.html#48 Where are they now : Taligent and Pink
https://www.garlic.com/~lynn/2000f.html#12 Amdahl Exits Mainframe Market
https://www.garlic.com/~lynn/2001.html#43 Life as a programmer--1960, 1965?
https://www.garlic.com/~lynn/2001b.html#25 what is interrupt mask register?
https://www.garlic.com/~lynn/2001b.html#60 monterey's place in computing was: Kildall "flying" (was Re: First OS?)
https://www.garlic.com/~lynn/2001c.html#13 LINUS for S/390
https://www.garlic.com/~lynn/2001c.html#16 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001d.html#56 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#44 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001e.html#47 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001h.html#1 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#8 VM: checking some myths.
https://www.garlic.com/~lynn/2001j.html#36 Proper ISA lifespan?
https://www.garlic.com/~lynn/2001k.html#14 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#4 mainframe question
https://www.garlic.com/~lynn/2001l.html#14 mainframe question
https://www.garlic.com/~lynn/2001l.html#47 five-nines
https://www.garlic.com/~lynn/2001n.html#11 OCO
https://www.garlic.com/~lynn/2001n.html#47 Sysplex Info
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
https://www.garlic.com/~lynn/2001n.html#93 Buffer overflow

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

ICMP Time Exceeded

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ICMP Time Exceeded
Newsgroups: comp.security.firewalls
Date: Sun, 06 Jan 2002 22:55:32 GMT
Lars M. Hansen writes:
ICMP timeouts indicates that a packet destined for some IP address could not reach its destination because the TTL was exceeded. This is often caused by router outages and/or routing loops.

It is possible that the request didn't come from you at all, but from someone else forging your IP as the source IP...


using traceroute for the address lets you quickly see if there are any "normal" router loops. They have been much less frequent in the past couple years ... various backbone sites used to have them frequently.

somebody could be forging your origin address and setting the TTL/hop-count to 4-5 ... with the size of the internet these days ... hops are frequently at least 10 ... and standard software will typically set the outgoing TTL hop count to somewhere between 30 and 60. If somebody forged an outgoing packet with your origin address and a TTL hop-count of just a few ... that is possibly enuf to get it several hops away from their location before the hop-count decremented to zero and an ICMP error packet was generated with your origin address.

TTL ... time-to-live is a slight misnomer ... it isn't really a time value, it is a hop-count ... i.e. the number of intermediate nodes/routers that the packet will pass through before it gives up. Each node decrements the supplied count before sending it on. When the count expires, an ICMP error packet is generated using the origin address.

ISPs could go a long way toward eliminating many of these types of things if they rejected origin packets that didn't match their route tables (if they get an incoming IP-packet from a dial-up account where the origin/from IP-address is different than the assigned IP-address for that port ... treat it as a fraudulent packet and discard it).

Note that traceroute takes advantage of the limited count feature by sending out a series of packets with increasing hop counts ... purposefully getting back ICMP error packets from each node in a path to a final destination. You recognize routing loops with traceroute because intermediate nodes will be listed multiple times ... and the destination node is never actually reached.
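
a minimal sketch in C of that increasing hop-count trick (hypothetical helper; collecting the ICMP time-exceeded replies takes a raw socket and privileges, omitted here):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

void probe_path(struct sockaddr_in *dst, int max_hops)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    for (int ttl = 1; ttl <= max_hops; ttl++) {
        /* each probe expires exactly ttl hops out; the router where
           the count hits zero sends back an ICMP time-exceeded error,
           identifying itself as hop number ttl */
        setsockopt(s, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
        sendto(s, "probe", 5, 0, (struct sockaddr *)dst, sizeof(*dst));
        /* ... wait for the ICMP time-exceeded (raw socket) or timeout;
           a hop listed more than once indicates a routing loop ... */
    }
    close(s);
}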

RFC references:
2827
Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing, Ferguson P., Senie D., 2000/05/16 (10pp) (.txt=21258) (BCP-38) (Obsoletes 2267)
3013
Recommended Internet Service Provider Security Services and Procedures, Killalea T., 2000/11/30 (13pp) (.txt=27905) (BCP-46)
3168
The Addition of Explicit Congestion Notification (ECN) to IP, Black D., Floyd S., Ramakrishnan K., 2001/09/14 (63pp) (.txt=170966) (Obsoletes 2481) (Updates 793, 2401, 2474)


for more detailed RFC index ... see
https://www.garlic.com/~lynn/rfcietff.htm

random refs:
https://www.garlic.com/~lynn/aadsm2.htm#integrity Scale (and the SRV record)
https://www.garlic.com/~lynn/aadsmore.htm#killer1 Killer PKI Applications
https://www.garlic.com/~lynn/99.html#160 checks (was S/390 on PowerPC?)
https://www.garlic.com/~lynn/2001e.html#40 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001g.html#16 Root certificates
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#28 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#29 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#30 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#31 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow
https://www.garlic.com/~lynn/2002.html#20 Younger recruits versus experienced veterans ( was Re: The demise of compa

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 06 Jan 2002 23:11:31 GMT
Anne & Lynn Wheeler writes:
As an aside note ... there have been numerous proposals over the years that ISP infrastructures (at all levels) discard incoming packets with origins not consistent with their routing tables (i.e. most probably spoofed packets). Possibly part of the justification for not doing that is that most environments have other compensating procedures for dealing with spoofed packets. However, from an industrial strength service application standpoint, if the frequency of spoofed packets were reduced by even 95 percent, it would make their heuristics work much better.

slightly related thread today in comp.security.firewalls
https://www.garlic.com/~lynn/2002.html#24 ICMP Time Exceeded

some RFC references from RFC index
https://www.garlic.com/~lynn/rfcietff.htm

RFC references:
2827
Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing, Ferguson P., Senie D., 2000/05/16 (10pp) (.txt=21258) (BCP-38) (Obsoletes 2267)
3013
Recommended Internet Service Provider Security Services and Procedures, Killalea T., 2000/11/30 (13pp) (.txt=27905) (BCP-46)
3168
The Addition of Explicit Congestion Notification (ECN) to IP, Black D., Floyd S., Ramakrishnan K., 2001/09/14 (63pp) (.txt=170966) (Obsoletes 2481) (Updates 793, 2401, 2474)

and the "internet like anarchy" thread in comp.security.misc.
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#28 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#29 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#30 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001m.html#31 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement
https://www.garlic.com/~lynn/2001n.html#30 FreeBSD more secure than Linux
https://www.garlic.com/~lynn/2001n.html#71 Q: Buffer overflow

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 06 Jan 2002 23:34:27 GMT
Anne & Lynn Wheeler writes:
www.garlic.com/~lynn/2002.html#24 ICMP Time Exceeded


fumble finger
https://www.garlic.com/~lynn/2002.html#25 ICMP Time Exceeded

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 07 Jan 2002 03:54:05 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
On the contrary, industrial strength services that are genuinely industrial strength avoid traps and other asynchronous mechanisms for error handling. Asynchronous mechanisms are always harder to deal with, and not to put too fine a point on it, practically incomprehensible to the canonical "COBOL programmers" that write most supposedly industrial strength code. The naive sloppiness of the 4.2 BSD "student code" was not good, but anyone who has seen much COBOL, RPG, PL/1, and so forth knows it was absolutely wonderful compared to typical "industrial strength" code.

Note that I wrote "traps" instead of "traps & error handling" on purpose because I'm talking about one and not the other.


most online services have developed extensive trap & error handling for their "industrial strength" operation ... again potentially 10 times as much in the "service" code as in the base-line application.

There are a huge number of baseline applications written in all sorts of languages: Cobol, RPG, PLI, C, etc .... which have never gone beyond the baseline stage.

Online &/or other types of applications have had lots of industrial strength hardening added ... including lots of stuff potentially dealing with all sorts of activities ... many unpredictable and effectively asynchronous.

SNA had a totally different design point than TCP/IP. TCP/IP had a networking design point ... SNA effectively, for a long, long time, has been VTAM, which is a terminal control program .... targeted at managing tens of thousands of online terminals. There was some joke about SNA:

System ... not a system
Network ... not a network
Architecture ... not an architecture

The closest thing that showed any networking capability was APPN ... and the "official" SNA group non-concurred with announcing APPN. APPN was finally announced, but the resolution with the SNA group was that the announcement was done in such a way that there was no indication that APPN and SNA were in any way, what-so-ever, related.

The other downside characteristic of VTAM is the interesting PU4/PU5 relationship ... which some have considered to be a reaction to a project I worked on as an undergraduate ... supposedly the first non-IBM, IBM-compatible controller (supposedly originating the IBM PCM market).
https://www.garlic.com/~lynn/submain.html#360pcm

However, from the standpoint of industrial strength, just about any significant event in the system or network could be "trapped" by an application wishing to specify a sufficient level of control. This was not just to simply monitor/manage individual events and/or state operation associated with the individual events ... an application could acquire a sophisticated overall view of all components in the environment within the context of an industrial-strength application ... say some critical online service. An example in the TCP/IP world would be to register for all SNMP events that might be in any way what-so-ever related to providing an online service ... some query/response and some decidedly asynchronous.

In the VTAM terminal controller world ... they had the advantage of lots of point-to-point links and various related telco-like provisioning along with service-level-agreements. A major online, industrial strength application supporting tens of thousands of terminals would typically have a trouble desk capable of doing first level problem determination within five minutes or less (and correcting huge numbers of the problems). Having point-to-point links and telco provisioning and facilities significantly simplified this compared to the standard TCP/IP environment, where it became significantly more difficult to do end-to-end problem determination ... especially within five minutes.

Another example is that my wife and I worked on the original electronic commerce server ... and part of the issue was whether the infrastructure could meet similar objectives of all problems resolved and/or at least first level problem determination accomplished within five minutes. Some of it turned out to be protocol issues, but a large part of it also turned out to be assurance and infrastructure issues.

Many of the ISPs and backbones are only just now starting to address some of these infrastructure issues. An indication is a simple thing in the mainframe online infrastructures with Service Level Agreements, where there are detailed measurements with two-nines to five-nines type objectives and financial penalties for not meeting objectives. There are also detailed issues regarding being able to diagnose and resolve problems within specific periods.

In the case of the first commerce server, my wife and I eventually went back and built an internet failure mode grid for every (significant) application state against every kind of internet/networking failure mode ... and required that the application have a predictable result for every point in the grid ... where the desired objective was automatic correction and/or recovery, but if not, having sufficient information that the problem could be identified within five minutes. At the time, a fairly standard internet server and ISP would spend 3-4 hrs with the technician closing the trouble ticket w/NTF ... aka no trouble found.

Now, the TCP/IP protocol hasn't changed a lot in the 6-7 years since we worked on the first commerce server ... but to some extent the infrastructure has ... but I don't think that most operations are capable of taking a trouble call and doing first level problem determination within the first five minutes. There was some amount of things done in the IP protocol to do automatic recovery of certain types of failures ... but the protocol and overall infrastructure is still a long way from many of the automated fault, diagnostic, and recovery capabilities of even some 20-year old online services. Part of the issue is that some of the telco provisioning evolved based on end-to-end circuits. Traceroute is a poor substitute for doing end-to-end diagnostics compared to getting end-to-end SLA telco provisioning on circuits.

random refs:
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 07 Jan 2002 14:23:18 GMT
"Rupert Pigott" writes:
True, but you can run TCP/IP on top of FrameRelay, ATM or good old Leased Lines. So in the TCP/IP world you have a broad spectrum of price points and Service Levels possible.

but how many ISPs will provide service level agreements for end-to-end operation with penalty clauses for not meeting service, performance and thruput objectives?

one of the claims regarding quality of service .... is that w/o measurement, comparison and open publication ... there is little incentive to actually meet objectives.

simple example of different market places with regard to availability and quality numbers ... how many vendors provide detailed published numbers for comparison purposes?

random references to importance of measurement and reporting to actually achieving assurance and availability:
https://web.archive.org/web/20011004023230/http://www.hdcc.cs.cmu.edu/may01/index.html
https://www.garlic.com/~lynn/94.html#24 CP spooling & programming technology
https://www.garlic.com/~lynn/96.html#27 Mainframes & Unix
https://www.garlic.com/~lynn/2000.html#22 Computer of the century
https://www.garlic.com/~lynn/2000.html#84 Ux's good points.
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#14 mainframe question

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Younger recruits versus experienced veterans ( was Re: The demise of compa

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Younger recruits versus experienced veterans  ( was Re: The demise  of compa
Newsgroups: alt.folklore.computers
Date: Mon, 07 Jan 2002 14:36:25 GMT
"Rupert Pigott" writes:
No I don't believe it was Laziness in all cases in the good old days (say pre 1980).

I should explain my perspective. When I first started coding in 1983, storage & memory was on the fast slide downhill pricewise. In that climate I figured that it was churlish not to anticipate Y2K and deal with it up front. The machine I was coding for had 32Kbytes of User RAM (more in some display modes) and 100Kbyte floppies.

I know for sure that there was 20 year old code around in the early 80s, so there was a precedent set which demanded that Y2K be tackled for all new code written.

Of course there are bound to be counter examples... But the cases I'm thinking of are guys who had hundreds of K to play with and Megabytes of storage... Trimming the odd char here & there makes little or no difference in the vast majority of cases.


(somebody else's) posting from 1984 in a "century" forum that was discussing the approaching (y2k) date problem (but also cited other date related problems):

https://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (was: re: Chinese Solve Y2K)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

index searching

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: index searching
Newsgroups: comp.arch,alt.folklore.computers
Date: Mon, 07 Jan 2002 14:44:34 GMT
jcmorris@mitre.org (Joe Morris) writes:
The formal name for the 2305 was "FHSD": Fixed Head Storage Device. The best-and-highest use of this name was for the rhyme in a HASP singalong contribution using the tune of "Oh What a Beautiful Morning" from _Oklahoma_:

there were two 2305 models (corinth & zeus) ... one with half the capacity and half the rotational delay of the other (basically half the heads were off-set 180 degrees so that there were two heads per track and the first head that a requested record passed under read it).
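
the arithmetic: a second set of heads offset 180 degrees gives two chances per revolution for a record to come under a head, so average rotational delay drops from half a revolution to a quarter. a minimal sketch in C (the rpm value is purely illustrative, not a 2305 spec):

#include <stdio.h>

int main(void)
{
    double rpm    = 6000.0;            /* illustrative only */
    double rev_ms = 60000.0 / rpm;     /* one revolution in ms */
    printf("one head per track:  avg %.2f ms\n", rev_ms / 2.0);
    printf("two heads per track: avg %.2f ms\n", rev_ms / 4.0);
    return 0;
}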

random ref:
https://www.garlic.com/~lynn/2001l.html#53 mainframe question

and does anybody remember 1655?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 07 Jan 2002 14:50:32 GMT
Anne & Lynn Wheeler writes:
In the case of the first commerce server, my wife and I eventually went back and built an internet failure mode grid for every

Lets start with some of the industrial strength characteristics of ARPANET. Basically they had diverse routing in the network implemented by the IMPs ... although I've seen no reference to redundancy &/or replication of IMP end-nodes (redundancy and replication being a traditional feature of industrial strength operation; aka if a corresponding IMP failed, the server/service was off the air). All the IMPs had 56kbit leased lines. There is some rumor that by the end of the '70s, the IMP administrative chatter supporting the various diverse routing features was consuming a majority of the available bandwidth (aka it didn't scale well). Note that the arpanet/internet was smaller than the internal network (which didn't have an SNA base) until sometime mid-85.

In any case, the IMP diverse routing support is somewhat moot since the IMPs were replaced with IP in the 1/1/83 great switch-over. Various things did or didn't happen with IP during the '80s ... but fast-forward to the mid-90s, and a big change was the infrastructure switch-over from mesh routing to hierarchical routing (another scaling issue).

In any case, with hierarchical routing ... the "networking" industrial strength available to an online service (say a large web server) became multiple addresses ... implemented via multiple A-records with the domain name infrastructure. Basically, service sensitive servers installed multiple connections into different critical sections of the internet backbone. Each connection appears as a different IP/subnet address. The domain name infrastructure then provides a list of all IP-addresses associated with a specific server.

Now, on the client side, multiple A-records (aka networking industrial strength) isn't a feature of the network layer or the transport layer (aka not supported by either TCP or UDP) .... it is implemented in the application code which, if necessary, has to cycle through the address list looking for a working connection. Now, since the industrial strength support has been bumped up into the client application code ... it would seem a requirement that when the client code was cycling thru the addresses, with both UDP and TCP, all corresponding ICMP host not reachables were immediately available (since ICMP host not reachables are part of the information associated with networking industrial strength ... and the networking industrial strength support has been bumped up to the client application code).
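
a minimal sketch in C of that client-side cycling (hypothetical helper name; gethostbyname/h_addr_list being the era-appropriate interface) ... try each address the name server returned until one connects:

#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_any(const char *host, unsigned short port)
{
    struct hostent *h = gethostbyname(host);
    if (h == NULL)
        return -1;
    for (char **ap = h->h_addr_list; *ap != NULL; ap++) {
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port   = htons(port);
        memcpy(&sin.sin_addr, *ap, h->h_length);
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == 0)
            return s;          /* first reachable address wins */
        close(s);              /* this address failed ... try the next */
    }
    return -1;                 /* all A-records exhausted */
}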

Now, on the server side .... with multi-home support ... either for a cluster subnet or a single host, a sophisticated server would have some awareness of networking topology; the more sophisticated and larger the server, the larger the topology awareness. One of the things that the server (single host or cluster), upon receiving an initial request, should be able to do is respond using the best perceived IP-address ... aka regardless of the "TO" ip-address of the incoming initial packet from a client, the server should be able to select its outgoing "FROM" address based on its topology awareness. Furthermore, the reply packet should go out the corresponding path associated with that "FROM" address selection.

If industrial strength was really part of the network ... it wouldn't be necessary to have client-side application code support in order to make it work. We also wouldn't have butted our heads against a wall for a year trying to get one of the largest browser vendors to provide multiple A-record support in their browser. Initially the response was that it was too-advanced networking support (even tho we could point to examples in every Tahoe/Reno client implementation).

Another part is that multi-home support is only somewhat there. The multi-home draft never even made it to RFC stage. Furthermore, even tho networking industrial strength has been shoved up into the application layer ... I know of no way that the server application can force a specific (different) "FROM" IP-address back to the client ... and/or even easily say that when the server application specifies a specific "FROM" address it should go out a specific, corresponding interface (rather than letting the "TO" address routing determine the departing interface). There is a little bit of residual multi-home stuff in 1122 & 1123, but not a whole lot.

slightly related internet reference:
https://www.garlic.com/~lynn/internet.htm#0

the comment in the above about being at least five years ahead ... was comparing what we had at the time of the NSF evaluation against some future implementation based on bid submission proposals (it was actually more than five years ... since none of the bid proposals had yet been built).

some web server/browser related reference:
https://www.garlic.com/~lynn/aadsm5.htm#asrn3
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn4
https://www.garlic.com/~lynn/aadsm5.htm#asrn1

size of internal network reference:
https://www.garlic.com/~lynn/internet.htm#22

basically, internal network nodes had contained the equivalent of gateway function since the network's origin ... that didn't happen in the arpa/inter net until the cut-over to IP on 1/1/83 ... and then it took until about mid-85 for the size of the (world-wide?) internet to finally catch up with and pass the size of the internal network.

some random refs:
https://www.garlic.com/~lynn/94.html#34 Failover and MAC addresses (was: Re: Dual-p
https://www.garlic.com/~lynn/94.html#36 Failover and MAC addresses (was: Re: Dual-p
https://www.garlic.com/~lynn/96.html#34 Mainframes & Unix
https://www.garlic.com/~lynn/99.html#16 Old Computers
https://www.garlic.com/~lynn/99.html#158 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/99.html#159 Uptime (was Re: Q: S/390 on PowerPC?)
https://www.garlic.com/~lynn/aadsm5.htm#asrn3 Assurance, e-commerce, and some x9.59 ... fyi
https://www.garlic.com/~lynn/aepay4.htm#comcert17 Merchant Comfort Certificates
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS
https://www.garlic.com/~lynn/2001n.html#15 Replace SNA communication to host with something else
https://www.garlic.com/~lynn/2002.html#23 Buffer overflow

The serious servers would also have diverse routing between their building location and central office (or fiber loop company), frequently making sure that at least two different fiber bundles enter the building from opposite directions and that the bundles never share common failure point. We once found a bldg that had two different local fiber loops ... one company trench went by the front of the bldg and the other company's trench went by the rear of the bldg.

There is the infamous backhoe failure mode. At least one instance is that arpanet connections into the boston area were carefully laid out with nine(?) different 56kbit links sharing no common points (i.e. trunks, central exchange, etc). However, over a period of many years ... and with no explicit tagging of the different links, telco eventually consolidated all nine into the same fiber bundle. At some point, a backhoe (in conn?) took out the fiber bundle and the whole boston area arpanet was off the air.

multi-homed hosts draft:


COMMENTS SHOULD BE SENT TO: lekash@orville.nas.nasa.gov

Date:    26-Apr-88

Title:   Multi-Homed Hosts in an IP Network

Authors: J. Lekashman (NASA Ames GE)

Host Behavior Working Group (retired)                       NASA Ames GE
IETF                                                        April 1988

Multi-Homed Hosts in an IP network

Status of This Memo

This is a request for commentary and ideas on how to handle multiple
interfaces in a networked environment.  Several problems are raised,
and some solutions are presented.  This is a draft.  It is presented
in order to generate discussion and suggestions for improvement.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 07 Jan 2002 15:38:30 GMT
internet reliability trivia question:

what problem was causing all the interop '88 floor nets to crash and burn right up to just before the start of the show on monday morning?

hint: "fix" was incorporated into RFC 1122.

random interop '88 refs:
https://www.garlic.com/~lynn/94.html#34 Failover and MAC addresses (was: Re: Dual-p
https://www.garlic.com/~lynn/94.html#36 Failover and MAC addresses (was: Re: Dual-p
https://www.garlic.com/~lynn/2001h.html#74 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#5 YKYGOW...
https://www.garlic.com/~lynn/2001i.html#6 YKYGOW...

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Mon, 07 Jan 2002 19:59:40 GMT
Darren New writes:
Well, for the SYN part of the conversation, yes. Once the connection is open, you don't necessarily want to kill the connection just because you get a host unreachable. Some routers broadcast those while rebuilding their routing tables after a reboot. If the "unreachable" were passed up without otherwise affecting the connection status, that would work too.

once a connection (or some other indication of connectivity) has been made, then both the client and server know that there has been at least some connectivity via the particular path and would treat host-not-reachable differently than if it occurs during the period of initial contact, when the client code is cycling thru possibly multiple IP-addresses.

There is the transient case of a router reboot happening exactly at initial contact. There is also the pathological case that all routers are simultaneously rebooting and all attempts at initial contact by the client have failed ... and the client repeats the cycle one or more times.
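
a minimal sketch of the kind of client-side multiple A-record cycling being described (assuming the classic BSD resolver interface, where gethostbyname returns all the A-records in h_addr_list; the function name is my own):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <string.h>
#include <unistd.h>

/* hypothetical helper: try each A-record returned for the host until
   a TCP connect succeeds -- the redundancy logic living entirely in
   the client application, as described above */
int connect_any(const char *host, unsigned short port)
{
    struct hostent *he = gethostbyname(host);
    if (he == 0)
        return -1;

    for (char **ap = he->h_addr_list; *ap != 0; ap++) {
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(port);
        memcpy(&sin.sin_addr, *ap, he->h_length);

        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0)
            return -1;
        if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == 0)
            return s;   /* this address worked */
        close(s);       /* unreachable etc. ... cycle to the next A-record */
    }
    return -1;          /* every A-record failed; caller may repeat the cycle */
}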

Part of the problem is that the network redundancy support is effectively implemented at the application level in the client code, based on end-point network address. To complement that client application code ... it would still be nice if the server application code could choose an alternate ip-address for the response ... especially for longer TCP sessions (forcing the client to use the indicated IP-address w/o having to force another layer of packet exchange ... there is the hijacking issue ... but the client could verify that the modified from-address on the return packet was in the multiple A-record list).

The network redundancy support has been forced into the client application level for both UDP & TCP ... which implies that any ICMP messages related to the corresponding UDP & TCP traffic are passed up to the client application ... but also that the IP headers (or at least the IP from-address) also need to be passed up to the client application code. For various reasons, similar information should then be passed to the server application code ... and at the very least the server application should be able to force the specification of the origin ip-address for both UDP & TCP activity ... as well as having the outgoing packet leave the host/subnet on the corresponding (multi-home) interface.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt
Date: Tue, 08 Jan 2002 01:13:10 GMT
Anne & Lynn Wheeler writes:
two buffer-overflow items in today's posting to comp.risks

http://www.csl.sri.com/users/risko/risks.html
http://www.csl.sri.com/users/risko/risks.txt


the previous comp.risks posting regarding buffer overflow ... seems to have prompted a whole slew of new contributions:

Re: Buffer Overflow security problems (Nicholas C. Weaver, Dan Franklin, Kent Borg, Jerrold Leichter, Henry Baker)

http://catless.ncl.ac.uk/Risks/21.85.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
Newsgroups: alt.folklore.computers
Date: Tue, 08 Jan 2002 21:05:38 GMT
ehrice@his.com (Edward Rice) writes:
No offense to you personally, Lynn, but another issue was the concept of tying I/O or tasks to processors. Honeywell and Univac did extremely well with SMP systems, after fully UN-linking I/O from specific processor boxes.

For some reason, IBM came to SMP late and slow. The same is true of virtual memory -- IBM badmouthed it non-stop until they "invented" it (with the 370/xx8 machines) and it became the greatest thing since sliced bread. Some concepts of modern operating systems, IBM was just real slow to figure out, whether due to the N.I.H., Syndrome or a lack in specialized technical areas.


they "invented" virtual memory the first time around with TSS/360 circa '66 or so (and cambridge "invented" virtual machines).

the standard 360 SMP was two machines that shared the same (non-virtual) memory ... and had a global lock for kernel entry.

the 360/67 (for tss/360) had both virtual memory and more advanced integration of components in the "channel director" (different from the 303x channel director). The channel director allowed any processor to access any I/O (but could also cleave the configuration and dedicate resources to specific processors). The 360/67 also had a 32-bit addressing mode (which didn't reappear in mainline products until the '80s, with 31-bit addressing). The work that cambridge (actually charlie) did on fine-grain SMP locking resulted in compare&swap (chosen because the mnemonic matches charlie's initials), which did show up in 370s.

i've conjectured before that there were as many or more 360/67 machines out there as some vendors' total machine deliveries .... but the 360/67 didn't get as much attention since there were hundreds of thousands of other kinds of 360s. It wasn't so much that there weren't IBMers doing such things .... it was that there were so many more IBMers doing other things (and even intra-corporate NIH ... you didn't even have to go outside). During this period, how many companies do you know had a rule(?) that a business case wasn't worth considering unless it could show a minimum of $10b revenue over five years.

of course, the all-time 360 multiprocessors were the ones that not many saw except the FAA.

What did happen was that FS came right after the initial 370s ... where they went to the absolute opposite extreme and considered everything. It was a mammoth undertaking with absorbing focus by the best & brightest for quite a period of time. And then it just sort of went away.

Of course my observation at the time was that it was an extreme case of the "inmates being in charge of the asylum" and the company really had to scramble to recover (I had already developed the philosophy of 3-month staged, real-live, rubber-meets-the-road deliverables ... to try and maintain some perspective on reality; but the FS event strongly reinforced the view). There were a number of comments that possibly only IBM could have made such a huge detour and still recover and come back.

random refs:
https://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#17 [OT] FS - IBM Future System
https://www.garlic.com/~lynn/2000f.html#40 Famous Machines and Software that didn't
https://www.garlic.com/~lynn/2001d.html#44 IBM was/is: Imitation...
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001f.html#33 IBM's "VM for the PC" c.1984??
https://www.garlic.com/~lynn/2001g.html#36 What was object oriented in iAPX432?
https://www.garlic.com/~lynn/2001n.html#46 Blinking lights
https://www.garlic.com/~lynn/2001n.html#65 Holy Satanism! Re: Hyper-Threading Technology - Intel information.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt
Date: Wed, 09 Jan 2002 01:11:33 GMT
Mok-Kong Shen <mok-kong.shen@t-online.de> writes:
Strange. I retried many times to access

http://catless.ncl.ac.uk/Risks/21.85.html


in the meantime ... the most recent is still
http://www.csl.sri.com/users/risko/risks.txt

& there is an archive at sri ... but via ftp:


 The RISKS Forum is a MODERATED digest.  Its Usenet equivalent is comp.risks.
=> SUBSCRIPTIONS: PLEASE read RISKS as a newsgroup (comp.risks or equivalent)
if possible and convenient for you.  Alternatively, via majordomo,
send e-mail requests to <risks-request@csl.sri.com> with one-line body
   subscribe [OR unsubscribe]
which requires your ANSWERing confirmation to majordomo@CSL.sri.com .
 [If E-mail address differs from FROM:  subscribe "other-address <x@y>" ;
this requires PGN's intervention -- but hinders spamming subscriptions, etc.]
Lower-case only in address may get around a confirmation match glitch.
INFO     [for unabridged version of RISKS information]
 There seems to be an occasional glitch in the confirmation process, in which
case send mail to RISKS with a suitable SUBJECT and we'll do it manually.
   .MIL users should contact <risks-request@pica.army.mil> (Dennis Rears).
.UK users should contact <Lindsay.Marshall@newcastle.ac.uk>.
=> The INFO file (submissions, default disclaimers, archive sites,
copyright policy, PRIVACY digests, etc.) is also obtainable from
 http://www.CSL.sri.com/risksinfo.html  ftp://www.CSL.sri.com/pub/risks.info
The full info file will appear now and then in future issues.   All
 contributors are assumed to have read the full info file for guidelines.
=> SUBMISSIONS: to risks@CSL.sri.com with meaningful SUBJECT: line.
=> ARCHIVES are available: ftp://ftp.sri.com/risks or
ftp ftp.sri.com<CR>login anonymous<CR>[YourNetAddress]<CR>cd risks
   [volume-summary issues are in risks-.00]
[back volumes have their own subdirectories, e.g., "cd 20" for volume 20]
 http://catless.ncl.ac.uk/Risks/VL.IS.html      [i.e., VoLume, ISsue].
Lindsay Marshall has also added to the Newcastle catless site a
palmtop version of the most recent RISKS issue and a WAP version that
works for many but not all telephones: http://catless.ncl.ac.uk/w/r
 http://the.wiretapped.net/security/info/textfiles/risks-digest/ .
http://www.planetmirror.com/pub/risks/ ftp://ftp.planetmirror.com/pub/risks/
==> PGN's comprehensive historical Illustrative Risks summary of one liners:
http://www.csl.sri.com/illustrative.html for browsing,
http://www.csl.sri.com/illustrative.pdf or .ps for printing

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt,alt.folklore.computers
Date: Wed, 09 Jan 2002 17:33:31 GMT
vjs@calcite.rhyolite.com (Vernon Schryver) writes:
It may be relevant that sci.crypt instead of alt.folklore.computers seems to be the home of the recent talk about "industrial strength" applications that don't need any newfangled TCP stuff (i.e. do their own routing and can change IP addresses without exchanging any packets) and so don't support the nearly 20 year old notions of congestion avoidance and control.

One of the major congestion solutions of the last 20 years was van jacobson's slow start ... somewhat similar to the industrial strength availability solution ... it was pushed up into the client application code; not part of the networking infrastructure.

Part of the problem is that in "high" latency transmission, latency compensation involves sending the next packet before acknowledgment of the previously sent packet has been received ... i.e. multiple packets are in flight at once. This is a communication technique developed for communication between two nodes over a point-to-point link.

A problem was that the packets could arrive at the receiving node and overrun the buffers in the receiving node. To compensate, a "windowing" algorithm was used where the sender and receiver agree on a number of pre-allocated buffers ... and that controls the buffer overrun problem.

"Slow-start" was an attempt to use a dynamic windowing algorithm to try and creat a trade-off between latency compensation and congestion problems in a large network. The idea was that sending application would spread out the number of packets that it injects into the network and cap the number of packets that had been injected; getting indication that packets were no longer in the network by the ACKs returned from the receiver. The sender would slowly start increasing the window size (number of outstanding packets in transit) until it got some indication of congestion (i.e. the sender was overloading the network) and then back-off that number.

The problem is that this isn't a point-to-point environment (which the windowing algorithm addressed) but a large network (invalidating the assumptions of the original windowing algorithm):

1) there is no way of telling the number of available buffers at any of the intermediate nodes.

2) the problem at intermediate nodes is then translated into the arrival rate of packets ... or more directly the time interval between packet arrivals.

3) windowing algorithms don't control the packet transmission rate (i.e. the time interval between sending packets) except in the trivial one-packet window case.

Now, there was possibly some assumption that if the round-trip latency in a large network was 250 milliseconds and a window size of eight packets was chosen, somehow the sender would magically spread the transmission of the eight packets evenly out over a 240-millisecond interval (30 milliseconds between each packet transmission).

What actually happens in a large network is that the activity tends to be very bursty ... especially the returning acks. The sender sends out its eight packets and then waits for the acks to dribble in. Each time the sender gets an ack, it is free to send out another packet. The problem is that ack return tends to be very bursty ... so the sender can get multiple acks in a batch ... in which case multiple buffers are freed up nearly simultaneously and then multiple packets are transmitted nearly at once.

Now the "real" congestion problem at the intermediate nodes isn't the number of packets outstanding by the sender or the sender's windows size ... it is how close together packets are transmitted (aka the packet arrival rate or inversely the delay between the arrival of packets).

The problem with slow-start and dynamic window size adjustment isn't directly the size of the window; the problem is that (all) windowing algorithms have no direct control over the sending of back-to-back packets with no delay (aka the intermediate node problem is getting a string of back-to-back packets with little or no delay between the packets). The problem in large networks is that the ACKs don't tend to be evenly distributed over time but tend to bunch up in a bursty manner, resulting in multiple ACKs arriving at the sender in a small period of time ... which then causes the sender to send a burst of packets with little or no inter-packet delay ... overloading intermediate nodes.

Basically, slow-start is adjusting something that is not directly related to congestion ... network congestion can be shown to be a function of the inter-packet arrival interval, not the size of the window (windowing is an algorithm for addressing end-point buffer overrun, not intermediate-node network congestion; note that end-point buffer overrun is a different kind of buffer overrun ... it is correct handling, dropping packets because no buffer is available). There is a small correlation between window size and congestion ... in so much as larger windows can mean a larger number of back-to-back transmitted packets.

Shortly after the introduction of slow-start there was a paper in the proceedings of SIGCOMM ('88 or '89) giving details of bursty ACKs in normal large network configurations and showing that slow-start is non-stable (slowly growing the window size, getting an ACK burst which results in multiple back-to-back packet transmissions, congestion, then having to cut back on window size).

I would contend that slow-start is better than nothing ... but it is terribly inefficient because it doesn't directly control the thing causing congestion (back-to-back packet transmission). I would contend that what slow-start should be doing is controlling an inter-packet transmission delay ... initially the delay is equal to the elapsed round-trip transmission latency. For some congestion, the inter-packet transmission delay may have to increase to larger than the elapsed round-trip transmission latency (which isn't even possible for a windowing algorithm, even when set to a window size of one). The algorithm then slowly starts increasing or decreasing the inter-packet transmission interval based on congestion indicators.
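
a minimal sketch of the rate-based alternative being contended for (illustrative only; this isn't any particular product's algorithm and all names are my own). The point is that the control variable is the minimum delay between transmissions, not a window, so a burst of arriving ACKs can't trigger back-to-back sends:

struct pace_state {
    double interval;     /* current min. seconds between transmissions */
    double next_send;    /* earliest time the next packet may go out */
};

int pace_may_send(struct pace_state *p, double now)
{
    if (now < p->next_send)
        return 0;                  /* hold the packet even if ACKs just burst in */
    p->next_send = now + p->interval;
    return 1;
}

void pace_on_congestion(struct pace_state *p)
{
    p->interval *= 2.0;            /* spread packets further apart */
}

void pace_on_progress(struct pace_state *p, double min_interval)
{
    p->interval *= 0.9;            /* slowly close the gap again */
    if (p->interval < min_interval)
        p->interval = min_interval;
}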

A problem for industrial strength networking is that a lot of the dependency for robustness has been pushed up into the client application code .... both for redundancy and congestion control (and is not directly part of the underlying networking infrastructure).

I would further contend that possibly the reason we got slow-start as dynamic window size adjustment (rather than inter-packet transmission delay) was that a large percentage of the client machines at the time had very poor timer support facilities (and would have had extreme difficulty efficiently supporting the fine-grain time delay events required).

In any case, we had to implement adaptive inter-packet transmission control in our backbone around '85 or '86 because of the non-stable characteristic of windowing algorithms not directly controlling packet arrival rates, leading to congestion. I believe this was also one reason that the NSF review stated that what we had running was (at least) five years ahead of all the technology proposals for building something new.

random congestion, rate-based, etc postings:
https://www.garlic.com/~lynn/93.html#28 Log Structured filesystems -- think twice
https://www.garlic.com/~lynn/94.html#22 CP spooling & programming technology
https://www.garlic.com/~lynn/99.html#33 why is there an "@" key?
https://www.garlic.com/~lynn/aepay4.htm#miscdns misc. other DNS
https://www.garlic.com/~lynn/2000b.html#9 "Mainframe" Usage
https://www.garlic.com/~lynn/2000b.html#11 "Mainframe" Usage
https://www.garlic.com/~lynn/2000e.html#19 Is Al Gore The Father of the Internet?^
https://www.garlic.com/~lynn/2000f.html#38 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2000f.html#39 Ethernet efficiency (was Re: Ms employees begging for food)
https://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
https://www.garlic.com/~lynn/2001.html#68 California DMV
https://www.garlic.com/~lynn/2001h.html#44 Wired News :The Grid: The Next-Gen Internet?
https://www.garlic.com/~lynn/2001j.html#20 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001m.html#27 Internet like city w/o traffic rules, traffic signs, traffic lights and traffic enforcement

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Buffer overflow

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Buffer overflow
Newsgroups: sci.crypt
Date: Wed, 09 Jan 2002 19:17:17 GMT
Anne & Lynn Wheeler writes:
Re: Buffer Overflow security problems (Nicholas C. Weaver, Dan Franklin, Kent Borg, Jerrold Leichter, Henry Baker)

http://catless.ncl.ac.uk/Risks/21.85.html


somewhat related to the buffer overflow items in comp.risks

as an aside, my wife and I have put in nearly four years on the AADS chip/token. random ref (see the aads chip strawman):
https://www.garlic.com/~lynn/x959.html#aads

US Cyber Security Weakening

http://www.wired.com/news/infostructure/0,1377,49570,00.html
Reuters
11:15 a.m. Jan. 8, 2002 PST

U.S. computer systems are increasingly vulnerable to cyber attacks, partly because companies are not implementing security measures already available, according to a new report released Tuesday.

"From an operational standpoint, cyber security today is far worse that what known best practices can provide," said the Computer Science and Telecommunications Board, part of the National Research Council.

"Even without any new security technologies, much better security would be possible today if technology producers, operators of critical systems, and users took appropriate steps," it said in a report released four months after the events of Sept. 11.

Experts estimate U.S. corporations spent about $12.3 billion to clean up damage from computer viruses in 2001. Some predict viruses and worms could cause even more damage in 2002.

The report said a successful cyber attack on the U.S. air traffic control system in coordination with airline hijackings like those seen on Sept. 11 could result in a "much more catastrophic disaster scenario."

To avert such risks, the panel urged organizations to conduct more random tests of system security measures, implement better authentication systems and provide more training and monitoring to make information systems more secure. All these measures were possible without further research, it said.

Investments in new technologies and better operating procedures could improve security even further, it noted.

Herbert Lin, senior scientist at the board, said information technologies were developing at a very rapid rate, but security measures had not kept pace.

In fact, he said, recommendations for improving security made by the panel a decade ago were still relevant and timely.

"The fact that the recommendations we made 10 years ago are still relevant points out that there is a real big problem, structurally and organizationally, in paying attention to security," Lin said.

"We've been very frustrated in our ability to get people to pay attention, and we're not the only ones," he added.

Increased security concerns after the Sept. 11 attacks on New York and Washington could provide fresh impetus for upgrading computer security, Lin said.

But he warned against merely putting more federal funds into research, noting that it was essential to implement technologies and best practices already available.

"The problem isn't research at this point. We could be so much safer if everyone just did what is possible now," Lin said.

For instance, the report notes that passwords are the most common method used today to authenticate computer users, despite the fact that they are known to be insecure.

A hardware token, or smart card, used together with a personal identification number or biometrics, would provide much better security for the computer system, the report said.

The report urged vendors of computer systems to provide well-engineered systems for user authentication based on such hardware tokens, taking care to make sure they were more secure and convenient for users.

In addition, it said vendors should develop simple and clear blueprints for secure operation and ship systems with security features turned on so that a conscious effort was needed to disable them.

One big problem was the lack of incentives for companies to respond adequately to the security challenge, the report said.

It said one possible remedy would be to make software companies, system vendors and system operators liable for system breaches and to mandate reporting of security breaches that could threaten critical social functions.


--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Multi-Processor Concurrency Problem

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multi-Processor Concurrency Problem
Newsgroups: comp.arch,comp.arch.storage,alt.comp.hardware
Date: Wed, 09 Jan 2002 19:32:17 GMT
"Peter Olcott" writes:

inline const char *FastString::c_str() const
{
String[NextByte] = 0;
return (String);
}

(w/o looking at the rest of the code) depending on the compiler, machine, etc .... the compiler could be generating code that loads NextByte into a register and keeps it there for an extended period of time. If NextByte is being changed (incremented, decremented, etc), it is possible that the multiple processors wouldn't necessarily see each other's changes and NextByte gets "out-of-sync" ... so the different processors end up working with different NextByte values.

Also, on lots of machines, one-byte storage alterations require fetching a word (or more), updating the specific byte, and then storing back the whole unit of storage.

If processor-1 was working with NextByte=71 and processor-2 with NextByte=70:

proc1 fetches the word for byte 71,
proc2 fetches the same word for byte 70,
proc1 updates the byte and stores the word back,
proc2 updates the byte and stores the word back ...

wiping out the change done by proc1.

aka the problem can be a combination of the following (a minimal fix is sketched after the list):

1) the compiler not generating code to keep NextByte synchronized across multiple processors (i.e. keeping values in local registers & no serialization of the value across processors), and

2) the way processors may do storage alterations on smaller-than-minimum storage fetch/store units.
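
a minimal sketch of one conventional fix (assuming pthreads is available; the function name is my own): serialize both the NextByte update and the byte store, so no processor works from a stale register copy and the word-wide read-modify-write can't race:

#include <pthread.h>

static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;
static char String[256];
static int  NextByte = 0;

/* the lock forces memory synchronization between processors (no stale
   register copies of NextByte) and makes the word-wide read-modify-write
   of the byte atomic with respect to the other processor */
void append_byte(char c)
{
    pthread_mutex_lock(&buf_lock);
    String[NextByte] = c;
    NextByte++;
    pthread_mutex_unlock(&buf_lock);
}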

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Movies with source code (was Re: Movies with DEC minis)

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Movies with source code (was Re: Movies with DEC minis)
Newsgroups: alt.folklore.computers
Date: Wed, 09 Jan 2002 19:35:54 GMT
Pete Fenelon writes:
grin. We'll get BAH and Lynn Wheeler telling us about DEC and IBM's approaches to tendering for the Deep Thought contract next :)

only slightly related:
https://www.garlic.com/~lynn/2001g.html#5 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#6 New IBM history book out
https://www.garlic.com/~lynn/2001g.html#7 New IBM history book out
https://www.garlic.com/~lynn/2001j.html#31 Title Inflation

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
Newsgroups: alt.folklore.computers
Date: Thu, 10 Jan 2002 02:21:08 GMT
bhk@dsl.co.uk (Brian {Hamilton Kelly}) writes:
Indeed; however, the Ferranti Atlas had already been in manufacture for a number of years by then, and THEY invented VM about 1960ish. Those of us using Atlases in those days were supremely pissed off when IBM claimed to have come up with this "novel" concept.

But of course, not only was that NIH, but invented on the "wrong" side of the Atlantic :-(


i know somebody that sent a letter to the editor with regard to one of the early corporate articles claiming credit for virtual memory ... that included some detailed atlas references; he may even still have the editor's reply.

random posts from couple years ago:
https://www.garlic.com/~lynn/2000.html#52 Correct usage of "Image" ???
https://www.garlic.com/~lynn/2000c.html#79 Unisys vs IBM mainframe comparisons
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

hollow files in unix filesystems?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: hollow files in unix filesystems?
Newsgroups: alt.folklore.computers
Date: Thu, 10 Jan 2002 16:27:39 GMT
"Bob Powell" writes:
That feature has been present back at least to 1976, and as I'm pretty sure the file system structure as of that time had changed little since Thompson's original design, maybe it was there in 1969?

Hollow wasn't quite free, I'm pretty sure the inode structure stored block numbers (zeroes) for each logical block in the file so a 1 gig empty file still consumed a few meg of real space.

As for the later BSD and 1980's ATT FS implementations I have no idea. A thing of beauty, all that original code.


i don't know how far back it goes in unix ... i believe in cms it goes back to cms on the 360/40 circa 65/66, and for all i know it may have been on CTSS (CTSS->CP/CMS; CTSS->Multics->Unix?; ctss, cp/cms, & multics all at 545 tech. sq, cambridge).
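
a minimal sketch of how a hollow (sparse) file gets created on unix (assuming a filesystem that supports holes; the function name is my own): seek far past end-of-file and write, and the skipped span never gets real data blocks allocated:

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* create a "hollow" file: logical size ~1GB, but (on a filesystem
   supporting holes) almost no data blocks allocated for the hole */
int make_hollow(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (lseek(fd, (off_t)1024 * 1024 * 1024, SEEK_SET) == (off_t)-1)
        return -1;                   /* skipped span becomes the hole */

    if (write(fd, "x", 1) != 1)      /* only this final block gets real space */
        return -1;

    return close(fd);
}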

https://www.garlic.com/~lynn/2001n.html#67

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Calculating a Gigalapse

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Calculating a Gigalapse
Newsgroups: alt.folklore.computers
Date: Thu, 10 Jan 2002 20:06:15 GMT
Charles Eicher writes:
Read the NYTimes article, it specifically refers to a backhoe incident cutting off access to a single major site. The article mostly talks about redundancy measures to prevent single-point failures.

But anyway, I poked around RISKS Digest, and did find one description of a 2-hour nationwide AOL outage in 1996, but that is hardly close to a gigalapse. I'm still searching for info on that stock market outage, IIRC they just shut down all trading because they couldn't execute ANY trades. But my memory could be wrong..


nyse had an outage when the transformer in the basement of the building hosting the dataprocessing center blew up, contaminating the building with PCBs and forcing the building to be evacuated (but it was late in the trading day).

the building had been carefully chosen so that it had diverse routing for water, power, and phone (aka two different water mains on opposite sides of the building, power on two different sides of the building going to different power substations, and telco on four different sides of the building going to four different telco central exchanges).

it was while we were doing HA/CMP. It was approximately the same era that the underground flood took out the chicago mercantile exchange dataprocessing.

we had coined the terms disaster survivability & geographic survivability as the next level beyond disaster recovery.

there was later the case of the major ATM/debit data-processing center for the east coast having its roof collapse under snow ... and its disaster recovery site was in the WTC, which had recently been disabled by the bombing.

there have been a number of backhoe incidents ... one was the north-east arpanet black-out. originally something like 9 different trunks with diverse routing had been set up ... but over the years the telco had consolidated all the diverse lines into a single fiber bundle (the backhoe incident was somewhere in conn?).

there also was a super-hardened commercial transaction processing site ... redundant everything, super PDUs, diesel, batteries, proof against hurricanes, etc. However, all telco came in via a single overhead bundle. Lightning struck a tree, the tree fell over and broke the telco bundle, taking the operation off the air.

random refs:
https://www.garlic.com/~lynn/subtopic.html#hacmp
https://www.garlic.com/~lynn/98.html#23 Fear of Multiprocessing?
https://www.garlic.com/~lynn/98.html#35a Drive letters
https://www.garlic.com/~lynn/98.html#37 What is MVS/ESA?
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/99.html#71 High Availabilty on S/390
https://www.garlic.com/~lynn/99.html#128 Examples of non-relational databases
https://www.garlic.com/~lynn/99.html#145 Q: S/390 on PowerPC?
https://www.garlic.com/~lynn/99.html#184 Clustering systems
https://www.garlic.com/~lynn/aadsm9.htm#pkcs12 A PKI Question: PKCS11-> PKCS12
https://www.garlic.com/~lynn/aepay2.htm#cadis disaster recovery cross-posting
https://www.garlic.com/~lynn/2000.html#13 Computer of the century
https://www.garlic.com/~lynn/2000.html#22 Computer of the century
https://www.garlic.com/~lynn/2000c.html#45 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#47 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000f.html#12 Amdahl Exits Mainframe Market
https://www.garlic.com/~lynn/2000f.html#30 OT?
https://www.garlic.com/~lynn/2000f.html#54 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000g.html#27 Could CDR-coding be on the way back?
https://www.garlic.com/~lynn/2001.html#33 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#41 Where do the filesystem and RAID system belong?
https://www.garlic.com/~lynn/2001.html#43 Life as a programmer--1960, 1965?
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001d.html#70 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001d.html#71 Pentium 4 Prefetch engine?
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001e.html#44 Where are IBM z390 SPECint2000 results?
https://www.garlic.com/~lynn/2001g.html#44 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
https://www.garlic.com/~lynn/2001i.html#41 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#43 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#48 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001i.html#49 Withdrawal Announcement 901-218 - No More 'small machines'
https://www.garlic.com/~lynn/2001j.html#23 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#13 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#14 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001k.html#18 HP-UX will not be ported to Alpha (no surprise)exit
https://www.garlic.com/~lynn/2001l.html#47 five-nines
https://www.garlic.com/~lynn/2001n.html#3 News IBM loses supercomputer crown
https://www.garlic.com/~lynn/2001n.html#47 Sysplex Info
https://www.garlic.com/~lynn/2001n.html#85 The demise of compaq
https://www.garlic.com/~lynn/2001n.html#90 Buffer overflow
https://www.garlic.com/~lynn/2002.html#24 Buffer overflow

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

VM and/or Linux under OS/390?????

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: VM and/or Linux under OS/390?????
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Thu, 10 Jan 2002 22:11:43 GMT
SEYMOUR.J.METZ@CUSTOMS.TREAS.GOV (Shmuel Metz , Seymour J.) writes:
It looks like the main goal of this product is to allow you to run DPPX on OS/390 - at least most of the FAQ section on the web site is devoted to DPPX. Does anyone run DPPX anymore? We got rid of DPPX in 1998 or 1999 because of Y2K considerations, and the fact that macro level language needed in CICS was going away. Also, the 8100 hardware it ran on was extremely old and getting more and more unreliable.

i believe that's a different dppx ... however the 8100 was a uc.5 micro-processor box ... the uc.5 was also used in the 3705, the 3081 service processor, and a number of other boxes.

there were a number of efforts that tried to get the peachtree engine (used in the S/1, and a significantly more capable processor) used in place of the uc.5 ... but there was some amount of inter-divisional issues.

random refs:
https://www.garlic.com/~lynn/99.html#63 System/1 ?
https://www.garlic.com/~lynn/99.html#106 IBM Mainframe Model Numbers--then and now?
https://www.garlic.com/~lynn/99.html#239 IBM UC info
https://www.garlic.com/~lynn/2000b.html#66 oddly portable machines
https://www.garlic.com/~lynn/2000b.html#69 oddly portable machines
https://www.garlic.com/~lynn/2000b.html#79 "Database" term ok for plain files?
https://www.garlic.com/~lynn/2001b.html#75 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001f.html#44 Golden Era of Compilers
https://www.garlic.com/~lynn/2001n.html#9 NCP

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

hollow files in unix filesystems?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: hollow files in unix filesystems?
Newsgroups: alt.folklore.computers
Date: Fri, 11 Jan 2002 00:35:40 GMT
CBFalconer writes:
CP/M also had/has them. Considering the scarcity of disk space in early microcomputers, I never understood why systems didn't use them.

there is some folklore that both cp/m & dos adopted a simplified version of the cms filesystem.

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

School Help

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: School Help
Newsgroups: alt.folklore.computers
Date: Fri, 11 Jan 2002 02:29:29 GMT
Eric Sosman writes:
Another possible source is CP/M, where it was important to pack four "directory entries" into a 128-byte disk record; by the time you'd filled in all the other required information, there were only twelve places left for a control byte and eleven name characters. I kind of doubt this, though, since CP/M is relatively recent and 8.3 "feels" older; it seems more plausible that CP/M took 8.3 as a given and then worked hard to pack everything else into the remaining 21 bytes ... But this is just speculation, not knowledge.

cms was 8+8+2 ... where the ending 2 contained the disk letter (aka a-z); cp/m-dos effectively moved the disk letter to the front instead of the end (defaulting to A as in CMS ... the convention changed to C with the addition of the hard disk) .... and reduced the second 8 to three.

again, folklore is that cp/m-dos was a simplified cms filesystem.
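
purely illustrative layouts of the two naming schemes (the field names are my own, not from any actual CMS or CP/M source):

struct cms_fileid {        /* CMS: "FILENAME FILETYPE FM" -- 8+8+2 */
    char filename[8];
    char filetype[8];
    char filemode[2];      /* disk letter a-z (+ mode number) at the end */
};

struct cpm_dos_name {      /* CP/M-DOS: "A:FILENAME.EXT" -- drive in front */
    char drive;            /* defaulting to A; C arrived with hard disks */
    char name[8];
    char ext[3];           /* the second 8 reduced to three */
};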

https://www.garlic.com/~lynn/2001n.html#37 Hercules etc. IBM not just missing a great opportunity...
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)
https://www.garlic.com/~lynn/2002.html#46 hollow files in unix filesystems?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode?
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Fri, 11 Jan 2002 14:54:22 GMT
"Lloyd Fuller" <Lloyd-Fuller@worldnet.att.net> writes:
I think that you have the 360s backwards: I think that most were hardwire and only one or two were micro-coded (the 67?).

I believe that all of the newer machines are microcoded, with different levels of microcode. For example, the G6 came with the IEEE floating point instructions, but they could be added to the G5 with an EC.


nearly all of the 360 machines were m'coded ... the notable exception was the 75, which was essentially a hard-wired 65 (the 44 and the 91/95/195 line were also hardwired).

the originally announced models were 30, 40, 50, 60, 62, & 70.

before shipment to customers, the 60, 62, & 70 were renamed 65, 67, & 75. I believe the original model numbers were going to have 1-microsecond memory, and the change was to faster memory technology at 750nsec.

the 62/67 was effectively a 60/65 with virtual memory & a 32-bit virtual addressing option. the other major change in the 67 was for SMP. the 65 SMP was effectively two 65s with memory ganged together so both processors had common addressing ... but still totally independent I/O and other facilities. The 67 SMP had what was called a "channel controller" ... which was a configuration box for both memory boxes and I/O channels (allowing memory boxes & channels to be switched on, off, shared, or dedicated). The 67 SMP had control registers that "sensed" the settings in the channel controller, and there was a special RPQ that allowed the processor to change the channel controller switch settings by changing the corresponding bits in the control registers.
https://www.garlic.com/~lynn/2001c.html#15
https://www.garlic.com/~lynn/2001.html#69
https://www.garlic.com/~lynn/2001.html#71

In addition the 67 duplex (two-processor) had tri-ported memory ... independent paths for channels and the two processors. The tri-ported memory introduced additional latency. A "half-duplex" '67 had about 10% slower memory cycle than a "simplex" '67 ... however under heavy I/O load, a "half-duplex" 67 had higher thruput than a "simplex" '67 because of the reduction in memory bus contention between processor and I/O.
https://www.garlic.com/~lynn/98.html#23
https://www.garlic.com/~lynn/2001j.html#3
https://www.garlic.com/~lynn/2001j.html#14

The work at CSC on fine-grain locking for the '67 SMP led to the compare&swap instruction in the 370 (compare&swap was chosen because the mnemonic matched the initials of the person that did most of the work).

The 370s were all m'coded ... with the 115-145 being vertical m'coded machines (the native processor engines were programmed much like a 16-bit microprocessor, simulating 370 instructions at about 1/10th the native engine speed ... aka the native processor engine in those 370 processors was approx. ten times faster than the respective 370 MIP rating ... aka a .1mip 125 needed a 1mip native processor engine).

The 155, 158, 165, & 168 were all "horizontal" m'coded machines ... effectively wide instructions that controlled the various functional units in the hardware ... where multiple operations could be specified, started, and going on simultaneously ... with one m'code instruction executed per machine cycle. Rather than rate the m'code in terms of m'code instructions per 370 instruction, the machines were rated in avg. number of machine cycles per 370 instruction (because of the ability to have multiple things going on simultaneously in the same m'code instruction). For instance, the 165 ran at about 2.1 machine cycles per 370 instruction; optimization for the 168 reduced that to about 1.6 machine cycles per 370 instruction.
https://www.garlic.com/~lynn/96.html#23

Except for the high-end 360s & 370s, the machines had "integrated" I/O channels ... i.e. the m'code on the native engine implemented not only the 370 instruction set but also the channel command instruction set ... and the native processor engine was basically multiprogrammed between executing the 370 instruction set and channel command operations.

The 303x was interesting because it re-introduced a limited version of the channel director (still lacking the full SMP capability of the 67 channel controller). A channel director was an external engine that implemented six I/O channels. A 3033 needed three channel directors for a 16-channel configuration. The channel director was a 370/158 with the 370 microcode removed and the box dedicated to the channel I/O microcode. A 3031 was a 158 processor reconfigured w/o the channel I/O microcode, running only the 370 instruction m'code (relying on a 2nd 158 engine configured as a channel director to implement the I/O function). A 3032 was basically a 168 configured to work with 158 channel director engines. A 3033 was basically a 168 remapped to faster chip technology. A 3031 was faster than a 158 in large part because the engine wasn't being shared between 370 and channel functions (there were two dedicated engines, one for each function).
https://www.garlic.com/~lynn/97.html#20
https://www.garlic.com/~lynn/2000d.html#82

The precursor to such an implementation was the 115 & 125. The basic hardware for the two boxes was the same: a 9-way SMP shared memory bus. A 115/125 configuration tended to have 4-9 independent processors installed ... but in a standard configuration only one of the processors executed the "m'code" for the 370 instruction set. The other processors executed "m'code" for other configuration functions: disk controller, telecommunications controller, etc. In the 115 all the processors were identical ... differentiated only by the "m'code" program that each processor was executing. In the 125, the processor engine that executed the 370 m'code was about 25 percent faster than the other engines.
https://www.garlic.com/~lynn/2001i.html#2
https://www.garlic.com/~lynn/2001j.html#18
https://www.garlic.com/~lynn/2001j.html#19

There was numerous special m'code developed for various 360 and 370 engines. A common special m'code on the 360/67 implemented the SLT (search list) instruction ... which could search a chained list of elements looking for matches on various conditions. There was a conversational programming system (CPS), developed by the boston programming center, that had interpreted PL/I and Basic(?) along with a whole slew of special-instruction m'code for the 360/50 (CPS could run either with or w/o the special m'code).

The 370/145 had a large m'code load for the vs/apl m'code assist (apl programs with the m'code assist on a 145 ran about as fast as on a 168 w/o the m'code assist).

The 370/148 had portions of the VS/1 and VM/370 kernels implemented in m'code as a performance assist ... most kernel instructions mapped 1:1 into m'code, so there was a corresponding 10:1 performance improvement. Other VM/370 kernel assists were modifications to the way privileged instructions executed in "VM-mode"; this allowed the instruction to be executed w/o taking an interrupt into the VM/370 kernel, saving state, simulating the instruction, restoring state, and resuming virtual machine operation ... which resulted in significantly greater than 10:1 performance improvement.
https://www.garlic.com/~lynn/94.html#21

The various control units were also m'code engines. The 3830 disk controller was a horizontal m'code engine ... but was replaced with a vertical m'coded engine, j-prime, for the 3880 disk controller.
https://www.garlic.com/~lynn/2000b.html#38
https://www.garlic.com/~lynn/2000c.html#75
https://www.garlic.com/~lynn/2001l.html#63

The 3705 was a uc.5 (vertical m'coded) engine ... which was also used in a number of other boxes, including the 8100 and as the service processor in the 3081.
https://www.garlic.com/~lynn/2002.html#45

I'm not sure about the details of the various 360 controllers ... but as an undergraduate, I worked on a project with three others that reverse-engineered the ibm channel interface, built our own channel adapter board, and programmed a minicomputer to be a plug-compatible telecommunication controller (2702 replacement) ... and we have been blamed for originating the IBM PCM controller business
https://www.garlic.com/~lynn/submain.html#360pcm

and possibly also responsible for the baroque nature of the PU4/PU5 interface
https://www.garlic.com/~lynn/2002.html#11

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

OT Friday reminiscences

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OT Friday reminiscences
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 11 Jan 2002 15:41:39 GMT
donna_spradley@COLONIALBANK.COM (Donna Spradley) writes:
Our "pre-computer" days in the late seventies (at BCBS of Ala) included a huge "rock-n-roll" tabulating and posting machine that could walk across the room while reading in cards. To change the "program" you opened a door, removed a huge board and plugged the wire connectors into the appropriate holes. This tabulator was built sometime in the 40s, I think. It was about twice as big as my washing machine, and its rocking sway was ... well, enjoyable. Most everyone enjoyed leaning against it while it was in operation, to prevent it from moving too far across the room, you understand. ;-)

'twas a thing of beauty, and seemed to have its own personality. I believe it was a model 400-something, like "409", but I know that's a kitchen spray, so maybe that's not it.


check picture of 407 at:
http://www.columbia.edu/cu/computinghistory/407.html

other things of possible interest at:
http://www.columbia.edu/acis/history

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode?
Newsgroups: comp.lang.asm370
Date: Fri, 11 Jan 2002 20:06:59 GMT
"Lloyd Fuller" <Lloyd-Fuller@worldnet.att.net> writes:
I think that you have the 360s backwards: I think that most were hardwire and only one or two were micro-coded (the 67?).

I believe that all of the newer machines are microcoded, with different levels of microcode. For example, the G6 came with the IEEE floating point instructions, but they could be added to the G5 with an EC.


some machine dates from:
https://web.archive.org/web/20010218005108/http://www.isham-research.freeserve.co.uk/chrono.txt

Bell Labs         48              Invention of transistor
Noyce, Kilby      59              Integrated circuit; patent in 1968
IBM Memory        60-?? 60-??     IBM 1 Mbit Mag.Core Memory: $1M
CDC 6600          63-08 64-09     LARGE SCIENTIFIC PROCESSOR
IBM 2701          6???? 6????     DATA ADAPTER 4 LINES, 230KB
IBM 2702          6???? 6????     DATA ADAPTER (LOW SPEED)
IBM DOS/360       6???? 6????     SCP FOR SMALL/INTERMEDIATE SYSTEMS
IBM OS/360        64-04 6????     PCP - SINGLE PARTITION SCP FOR S/360
IBM S/360-30      64-04 65-05 13  SMALL; 64K MEMORY LIMIT, MICROCODE CTL.
IBM S/360-40      64-04 65-04 12  SMALL-INTERMED.; 256K MEMORY LIMIT
IBM S/360-50      64-04 65-08 16  INTERMED.-LARGE
IBM S/360-60      64-04  N/A      LARGE - NEVER SHIPPED
IBM S/360-62      64-04  N/A      LARGE - NEVER SHIPPED
IBM S/360-70      64-04  N/A      LARGE - NEVER SHIPPED
IBM S/360-92      64-08           VERY LARGE SCIENTIFIC S/360
IBM S/360-91      64-11 67-10     REPLACES 360/92
CDC 6800          64-12           LARGE SCIENTIFIC SYSTEM - LATER 7600
IBM OS/360        65-?? 68-??     MFT - FIRST MULTIPROGRAMMED VERSION OF OS
IBM 2314          65-?? 65-04     DISK: 29MB/DRIVE, 4DR/BOX REMOV. MEDIA, $890/MB
IBM S/360-65      65-04 65-11 07  MAJOR LARGE S/360 CPU
IBM S/360-75      65-04 66-01 09  LARGE CPU; NO MICROCODE; NOT SUCCESSFUL
IBM S/360-95      65-07 68-02     THIN FILM MEMORY - /91, /92 NOW RENAMED /91
IBM S/360-44      65-08 66-07 11  INTERMED. CPU,;SCIENTIFIC;NO MICROCODE
IBM S/360-67      65-08 66-06 10  MOD 65+DAT; 1ST IBM VIRTUAL MEMORY
IBM PL/I LANG.    66-?? 6????     MAJOR NEW LANGUAGE (IBM)
IBM S/360-91      66-01 67-11 22  VERY LARGE CPU; PIPELINED
IBM PRICE         67-?? 67???     PRICE INCREASE???
IBM OS/360        67-?? 67-12     MVT - ADVANCED MULTIPROGRAMMED OS
IBM TSS           67??? ??-??     32-BIT VS SCP-MOD 67; COMMERCIAL FAILURE
1Kbit/chip RAM    68              First commercial semicon memory chip
IBM CP/67         68+?? 68+??     MULTIPLE VIRTUAL MACHINES SCP-MOD 67
IBM S/360-25      68-01 68-10 09  SMALL S/360 CPU; WRITABLE CONTROL STORE
IBM S/360-85      68-01 69-12 09  VERY LARGE S/360 CPU; FIRST CACHE MEMORY
IBM SW UNBUNDLE   69-06 70-01 07  IBM SOFTWARE, SE SERVICES SEP. PRICED
IBM S/360-195     69-08 71-03 20  VERY LARGE CPU; FEW SOLD; SCIENTIFIC
IBM 3330-1        70-06 71-08 14  DISK: 200MB/BOX, $392/MB
IBM S/370 ARCH.   70-06 71-02 08  EXTENDED (REL. MINOR) VERSION OF S/360
IBM S/370-155     70-06 71-01 08  LARGE S/370
IBM S/370-165     70-06 71-04 10  VERY LARGE S/370
IBM S/370-145     70-09 71-08 11  MEDIUM S/370 - BIPOLAR MEMORY - VS READY
AMH=AMDAHL        70-10           AMDAHL CORP. STARTS BUSINESS
IBM S/370-135     71-03 72-05 14  INTERMED. S/370 CPU
IBM S/360-22      71-04 71-07 03  SMALL S/360 CPU
IBM LEASE         71-05 71 06 01  FixTERM PLAN;AVE. -16% FOR 1,2 YR LEASE
IBM PRICE         71-07 71+?? +8% ON SOME CPUS;1.5% WTD AVE. ALL CPU
IBM S/370-195     71-07 73-05 22  V. LARGE S/370 VERS. OF 360-195, FEW SOLD
Intel, Hoff       71              Invention of microprocessor

misc. 360 or 370 pictures
http://www.comune.po.it/scuole/dagomari/museo/htm/p3_32a.htm
http://wb4huc.home.texas.net/pds2pds/ibm_370.htm
http://www.nfrpartners.com/comphistory/
http://www.brouhaha.com/~eric/retrocomputing/ibm/stretch/
http://www.columbia.edu/cu/computinghistory/36050.html
http://www.columbia.edu/cu/computinghistory/36075.html
http://www.columbia.edu/cu/computinghistory/core.html#2361
http://www.columbia.edu/cu/computinghistory/generations.html
http://www.columbia.edu/cu/computinghistory/36091.html
http://www.columbia.edu/cu/computinghistory/2311.html
http://www.columbia.edu/cu/computinghistory/drum.html
http://www.columbia.edu/cu/computinghistory/panorama.html
http://www.columbia.edu/cu/computinghistory/disks.html
http://www.columbia.edu/cu/computinghistory/mss.html
http://www.columbia.edu/cu/computinghistory/cuvmab.html

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode?
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Fri, 11 Jan 2002 20:49:17 GMT
misc. other 360/370 references
http://www.beagle-ears.com/lars/engineer/comphist/ibm360.htm
http://www.beagle-ears.com/lars/engineer/comphist/c20-1684/
http://www.beagle-ears.com/lars/engineer/comphist/model360.htm

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

Microcode?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Microcode?
Newsgroups: comp.lang.asm370,alt.folklore.computers
Date: Sat, 12 Jan 2002 14:12:49 GMT
Sam Yorko writes:
What were the 195s? I had heard that they were hardwired....

from
https://web.archive.org/web/20010218005108/http://www.isham-research.freeserve.co.uk/chrono.txt

IBM S/360-92      64-08           VERY LARGE SCIENTIFIC S/360
IBM S/360-91      64-11 67-10     REPLACES 360/92
IBM S/360-95      65-07 68-02     THIN FILM MEMORY - /91, /92 NOW RENAMED /91
IBM S/360-195     69-08 71-03 20  VERY LARGE CPU; FEW SOLD; SCIENTIFIC
IBM S/370-195     71-07 73-05 22  V. LARGE S/370 VERS. OF 360-195, FEW SOLD

note with respect to the 195 ... one of the biggest differences in the 370/195 vis-a-vis the 360/195 was some additional RAS & instruction retry. I also have vague recollections of being told by some engineer that a lot of the 85 was preliminary work for the 155/165.

also from:
http://www.beagle-ears.com/lars/engineer/comphist/model360.htm
360/75 - the fastest of the original 360s. Had the entire S/360 instruction set implemented in hardwired logic (all the others had microcode). Ran at one MIPS.

360/85 - 1969 - the first production machine with cache memory. Other 360 novelties in this model were extended-precision (16-byte) floating point, relaxation of some instruction operand alignment requirements, and an optional I/O channel which allowed multiple disk channel programs to run (sort of) concurrently. These later became standard in 370's. Writable control store?

360/91 - the first pipelined processor. Fully hardwired. Most of the decimal instructions were missing (emulated in software). Because it had multiple instruction execution units, it had "imprecise interrupts". When an interrupt or an exception occurred, the program counter might not point to the failing instruction if the multiple execution units were all active at the same time. For this reason, it was advisable to put NOPs around instructions that might lead to exceptions. Not many of these were built, and they may all have had slightly different tweaks as each of them was successively hand built by the same engineering team.

360/95 was a 91 equipped with a higher-performance thin-film memory instead of core. Only a couple were built.

360/195 was a faster successor to the 91, again fully-hardwired. It came in both 360 and (later) 370 versions. The 370 version had the new 370 instructions plus the 370 TOD clock and control registers, but not virtual-memory hardware. Also had imprecise interrupts.


....
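
as an aside ... a minimal sketch (my own, not from the quoted page; labels and the linkage are invented for illustration) of the /91 advice above, i.e. bracketing an exception-prone instruction with NOPs so the imprecise-interrupt PSW can't wander far from the failing instruction:

NOPS     CSECT
* hypothetical fragment: on a 360/91 a floating-point divide
* exception is "imprecise" ... per the quoted text, the NOPRs
* narrow the range the interrupt PSW can point into
         BALR  12,0               establish addressability
         USING *,12
         LE    2,ONE              short float 1.0 into FPR2
         NOPR  0                  padding before the risky instruction
         DE    2,ZERO             divide by zero -> FP divide exception
         NOPR  0                  padding after the risky instruction
         BR    14                 return to caller
ONE      DC    E'1.0'
ZERO     DC    E'0.0'             deliberately zero to force the exception
         END   NOPS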

random refs:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#51 Rethinking Virtual Memory
https://www.garlic.com/~lynn/97.html#20 Why Mainframes?
https://www.garlic.com/~lynn/98.html#26 Merced & compilers (was Re: Effect of speed ... )
https://www.garlic.com/~lynn/99.html#7 IBM S/360
https://www.garlic.com/~lynn/99.html#90 CPU's directly executing HLL's (was Which programming languages)
https://www.garlic.com/~lynn/99.html#116 IBM S/360 microcode (was Re: CPU taxonomy (misunderstood RISC))
https://www.garlic.com/~lynn/99.html#187 Merced Processor Support at it again
https://www.garlic.com/~lynn/99.html#204 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/99.html#209 Core (word usage) was anti-equipment etc
https://www.garlic.com/~lynn/2000.html#8 Computer of the century
https://www.garlic.com/~lynn/2000.html#12 I'm overwhelmed
https://www.garlic.com/~lynn/2000.html#63 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#78 Mainframe operating systems
https://www.garlic.com/~lynn/2000.html#86 Ux's good points.
https://www.garlic.com/~lynn/2000b.html#50 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000b.html#51 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#19 Hard disks, one year ago today
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#75 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#76 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000c.html#83 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#12 4341 was "Is a VAX a mainframe?"
https://www.garlic.com/~lynn/2000d.html#20 S/360 development burnout?
https://www.garlic.com/~lynn/2000d.html#60 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000d.html#82 "all-out" vs less aggressive designs (was: Re: 36 to 32 bit transition)
https://www.garlic.com/~lynn/2000e.html#6 Ridiculous
https://www.garlic.com/~lynn/2000e.html#54 VLIW at IBM Research
https://www.garlic.com/~lynn/2000e.html#56 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2000f.html#37 OT?
https://www.garlic.com/~lynn/2000f.html#55 X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
https://www.garlic.com/~lynn/2000f.html#57 X86 ultimate CISC? No. (was: Re: "all-out" vs less aggressive designs)
https://www.garlic.com/~lynn/2000f.html#59 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2000f.html#78 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#7 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#8 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#11 360/370 instruction cycle time
https://www.garlic.com/~lynn/2000g.html#21 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001.html#38 Competitors to SABRE?
https://www.garlic.com/~lynn/2001b.html#29 z900 and Virtual Machine Theory
https://www.garlic.com/~lynn/2001b.html#40 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#42 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#83 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#1 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#2 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#3 Z/90, S/390, 370/ESA (slightly off topic)
https://www.garlic.com/~lynn/2001c.html#87 "Bootstrap"
https://www.garlic.com/~lynn/2001d.html#22 why the machine word size is in radix 8??
https://www.garlic.com/~lynn/2001d.html#26 why the machine word size is in radix 8??
https://www.garlic.com/~lynn/2001d.html#54 VM & VSE news
https://www.garlic.com/~lynn/2001d.html#55 VM & VSE news
https://www.garlic.com/~lynn/2001d.html#68 I/O contention
https://www.garlic.com/~lynn/2001e.html#5 SIMTICS
https://www.garlic.com/~lynn/2001e.html#9 MIP rating on old S/370s
https://www.garlic.com/~lynn/2001e.html#73 CS instruction, when introducted ?
https://www.garlic.com/~lynn/2001f.html#16 Wanted other CPU's
https://www.garlic.com/~lynn/2001f.html#17 Accounting systems ... still in use? (Do we still share?)
https://www.garlic.com/~lynn/2001f.html#23 MERT Operating System & Microkernels
https://www.garlic.com/~lynn/2001f.html#43 Golden Era of Compilers
https://www.garlic.com/~lynn/2001f.html#69 Test and Set (TS) vs Compare and Swap (CS)
https://www.garlic.com/~lynn/2001g.html#22 Golden Era of Compilers
https://www.garlic.com/~lynn/2001h.html#2 Alpha: an invitation to communicate
https://www.garlic.com/~lynn/2001h.html#10 VM: checking some myths.
https://www.garlic.com/~lynn/2001h.html#28 checking some myths.
https://www.garlic.com/~lynn/2001h.html#69 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001i.html#2 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#3 Most complex instructions (was Re: IBM 9020 FAA/ATC Systems from 1960's)
https://www.garlic.com/~lynn/2001i.html#37 IBM OS Timeline?
https://www.garlic.com/~lynn/2001i.html#38 IBM OS Timeline?
https://www.garlic.com/~lynn/2001j.html#3 YKYGOW...
https://www.garlic.com/~lynn/2001j.html#48 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001k.html#8 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2001k.html#29 HP Compaq merger, here we go again.
https://www.garlic.com/~lynn/2001k.html#33 3270 protocol
https://www.garlic.com/~lynn/2001k.html#65 SMP idea for the future
https://www.garlic.com/~lynn/2001l.html#36 History
https://www.garlic.com/~lynn/2001l.html#42 is this correct ? OS/360 became MVS and MVS >> OS/390
https://www.garlic.com/~lynn/2001l.html#63 MVS History (all parts)
https://www.garlic.com/~lynn/2001n.html#0 TSS/360
https://www.garlic.com/~lynn/2001n.html#9 NCP
https://www.garlic.com/~lynn/2001n.html#18 Call for folklore - was Re: So it's cyclical.
https://www.garlic.com/~lynn/2001n.html#26 Open Architectures ?
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2001n.html#92 "blocking factors" (Was: Tapes)
https://www.garlic.com/~lynn/2002.html#14 index searching
https://www.garlic.com/~lynn/2002.html#48 Microcode?
https://www.garlic.com/~lynn/2002.html#50 Microcode?
https://www.garlic.com/~lynn/2002.html#51 Microcode?

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/

School Help

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: School Help
Newsgroups: alt.folklore.computers
Date: Sat, 12 Jan 2002 14:44:13 GMT
Anne & Lynn Wheeler writes:
cms was 8+8+2 ... where the ending 2 contained the disk letter (aka a-z), cp/m-dos effectively moved the disk letter in front instead of at the end (defaulting to A as in CMS ... convention changed with addition of hard disk to C) .... and reduced the second 8 to three.

the cms syntax convention is (still ... since '65 or so) specifically filename (8), filetype (8), and filemode ... disk letter (1) + mode number (1). for many programs you specify just the filename and the program would supply the default filetype(s) & perform the file search/lookup.
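
a minimal hlasm-style sketch (labels & sample file identifier invented) of that fixed 18-byte layout ... splitting a file identifier into its filename(8), filetype(8) and filemode(2) columns:

FILESPLT CSECT
* hypothetical fragment: cms file identifiers are fixed-column,
* so the split is just three character moves
         BALR  12,0               establish addressability
         USING *,12
         MVC   FN,ID              filename  = columns 1-8
         MVC   FT,ID+8            filetype  = columns 9-16
         MVC   FM,ID+16           filemode  = disk letter + mode number
         BR    14                 return to caller
ID       DC    CL18'PROFILE EXEC    A1'
FN       DS    CL8
FT       DS    CL8
FM       DS    CL2
         END   FILESPLT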

for instance, the markup program "script" (which around '70 or so evolved from runoff-like markup to dual mode with support for both runoff-like markup & gml markup ... gml being a precursor to sgml, html, xml, etc) used the default filetype "script".

misc. detail
https://www.garlic.com/~lynn/2001c.html#88 Unix hard links
https://www.garlic.com/~lynn/2001n.html#67 Hercules etc. IBM not just missing a great opportunity...

--
Anne & Lynn Wheeler | lynn@garlic.com - https://www.garlic.com/~lynn/
