List of Archived Posts

2005 Newsgroup Postings (01/01 - 01/19)

[Lit.] Buffer overruns
[Lit.] Buffer overruns
Athlon cache question
[Lit.] Buffer overruns
Athlon cache question
[Lit.] Buffer overruns
[Lit.] Buffer overruns
How do you say "gnus"?
[Lit.] Buffer overruns
OSI - TCP/IP Relationships
The Soul of Barb's New Machine
CAS and LL/SC
The Soul of Barb's New Machine
Amusing acronym
Using smart cards for signing and authorization in applets
Amusing acronym
Amusing acronym
Amusing acronym
IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?
The Soul of Barb's New Machine (was Re: creat)
I told you ... everybody is going to Dalian,China
The Soul of Barb's New Machine (was Re: creat)
The Soul of Barb's New Machine (was Re: creat)
Network databases
Network databases
Network databases
Network databases
Network databases
Smart cards and use the private key
Network databases
Network databases
Do I need a certificat?
8086 memory space [was: The Soul of Barb's New Machine]
some RDBMS history (x-over from comp.databases.theory)
increasing addressable memory via paged memory?
Do I need a certificat?
Network databases
[OT?] FBI Virtual Case File is even possible?
something like a CTC on a PC
CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
something like a CTC on a PC
Higher Education places still use mainframes?
increasing addressable memory via paged memory?
John Titor was right? IBM 5100
OSI model and SSH, TCP, etc
8086 memory space
creat
[OT?] FBI Virtual Case File is even possible?
increasing addressable memory via paged memory?
something like a CTC on a PC
something like a CTC on a PC
8086 memory space
8086 memory space
creat
Foreign key in Oracle Sql
8086 memory space
Foreign key in Oracle Sql
Foreign key in Oracle Sql
8086 memory space
8086 memory space

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sun, 02 Jan 2005 08:02:59 -0700
bryanjugglercryptographer writes:
Right; that's why Mr. Wheeler's previous comparisons distinguished C environments from others.

another minor example ..

I was at a small conference in '76 ... presenting part of a new 16-way smp hardware design and software support for the hardware, virtual memory, use of virtual memory address space to partition environment and use of privilege/non-privilege hardware modes to also partition environment.

The 801 group also did a presentation on the design of a new type of hardware architecture ... where the hardware was extremely simplified and lots of traditional integrity and isolation features had been eliminated from the hardware and moved to the software. Everything would be implemented in a simplified version of PLI called PL.8 and the operating system was called cp.r. There was no hardware support for privilege/non-privilege mode ... all programs could access all hardware features ... it was totally up to the compiler to guarantee correct program operation. The operating system had a special binder/loader ... that would validate that any programs being loaded only came from a valid, acceptable PL.8 compiler (the compiler generated a form of signature on the executable that could be checked by the operating system at load time). Once a program was loaded, it (and runtime libraries) had direct access to all hardware features (w/o any sort of intervening operating system layer). random past 801 postings
https://www.garlic.com/~lynn/subtopic.html#801

One of the features was that the hardware support had been significantly simplified ... instead of having virtual memory segment tables with possibility of large number of defined virtual memory objects, there were just 16 segment registers. Instead of having operating system calls (like unix mmap) to handle calls for accessing different defined virtual memory objects (and validating privileges), the program could directly manipulate the segment registers (as easily as it could manipulate any base/address registers). The integrity of the system was totally dependent on the PL.8 compiler only generating correct code (and the loader only loading valid PL.8 compiled programs).

This was an integrated design trade-off involving processor architecture, operating system implementation and compiler technology that favored simplifying the hardware implementation by shifting several hardware integrity features to compiler correctness technology.

Current day scenarios have privilege/non-privilege mode and use of virtual memory hardware as part of integrity isolation ... with various kinds of privilege checking by kernel calls (crossing the privilege/non-privilege boundary). There are other hardware features like low-storage-protect to trap/isolate use of zero pointers by incorrect code inside the kernel ... i.e. a big issue is problem detection & isolation (for possible subsequent correction). A big issue with the normal failure mode, when incorrect code uses a zero pointer to modify low-core, is that the actual failure may occur much later and it proves difficult to isolate and identify the original cause of the failure (making correction and remediation extremely difficult).

In the past couple of years, hardware support for identifying memory regions as execute/no-execute and modifiable/non-modifiable has been newly introduced on common processor chips (that didn't have the support in the past). It was specifically done to address a common length exploit associated with the c programming environment where an incorrect/long length could introduce a branch to some exploit code packaged as part of extra-long input data. Executable code would be marked as non-modifiable (precluding incorrect lengths in standard string libraries from overlaying executable code with an exploit package carried in a long string), and data areas would be marked no-execute, so that an incorrect length overlaying a branch address could no longer transfer control into exploit code carried as part of some long string (this still results in program failure, but it catches and isolates the failure rather than allowing the exploit code packaged inside the long string to execute and take advantage of common c programming environment mistakes).
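
as a minimal illustration (my own sketch, not from any of the referenced threads; the function and buffer names are made up), the kind of c length mistake these hardware features target is the classic unchecked copy below ... with data pages marked no-execute, the bytes carried in an over-long input can no longer be run as code even if the overrun manages to redirect control into them:

#include <string.h>

/* hypothetical request handler illustrating the classic length mistake:
   the caller-supplied input may be longer than the local buffer */
void parse_request(const char *input)
{
    char buf[64];          /* fixed-size buffer, size not known to the caller */

    strcpy(buf, input);    /* no length check: an over-long input overruns buf,
                              overlaying the saved return address and carrying
                              its own machine code along in the string */

    /* ... process buf ... */
}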

so there has been a lot of information gathered about the standard c programming environment leading to a large number of length related exploits ... and apparently sufficient information to introduce (new) hardware related features to specifically address some of these exploits that are common in the standard c programming environment.

the cve database is one such source of information about the frequency of such exploits.

for some of the hardware threads (in a number of different groups) discussing the necessity of new hardware features (and support by operating systems/kernels) to address these buffer length related exploits so common in standard c programming environments
http://hardware.mcse.ms/message13436-4.html
http://groups-beta.google.com/group/comp.sys.ibm.pc.hardware.chips/browse_thread/thread/77e8d7bce716a43e/449306e22ebc73f8?q=%2Bcomp.arch+%2B%22no+execute%22&_done=%2Fgroups%3Fq%3D%2Bcomp.arch+%2B%22no+execute%22%26hl%3Den%26lr%3D%26sa%3DN%26tab%3Dwg%26&_doneTitle=Back+to+Search&&d#449306e22ebc73f8

the above are just a few examples; for further discussions about buffer overflow and no-execute hardware support, use google groups
http://groups-beta.google.com/
and search on buffer overflow and no execute.

the implication is (at least in some circles) that the buffer overflow problem specifically associated with the common c programming environment is so well accepted ... that hardware specific implementations are being done to address the problem(s).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sun, 02 Jan 2005 08:36:00 -0700
note this article from 2001
http://www.theregister.co.uk/2001/08/29/win_xp_slays_buffer_overflow/

however, even given the concerted effort ... new buffer overflow mistakes continue to be coded (the assertion is that it is especially easy to make them in the common c programming environment).

more recent article
http://www.theregister.co.uk/2004/12/24/amd_dutch_ads/
about AMD chip hardware and support by Windows XP service pack 2

other kind of descriptions about no execute hardware for various kinds of buffer overflow issues:
http://gary.burd.info/space/Entry81.html

example comment/post from one of these hardware venues (looking at trying to compensate for buffer length associated exploits in the common c programming environment)

... trivial quote from some random post in one of the hardware forums
Trivial: use a language where it's automatically enforced. I.e. basically any language other than C.

... snip ...

other topic drift, just for the fun of it ... a lot of other 801-related posts
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Athlon cache question

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Athlon cache question.
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 02 Jan 2005 11:38:28 -0700
glen herrmannsfeldt writes:
I would think it would depend on the workload, and also the expectation of the users. In a case where you have mixed long and short term jobs, where the long term jobs are supposed to have lower priority, I would have thought that global would not work so well. That is, everyone is not equal, but global assumes that they are.

In the case where all users are to be treated equally, my guess would be that global would be better.


in general, global ... regardless of cache type, processor, paging, file, etc ... attempts to keep the most highly used information around for the overall system thruput. local tends to perform less well since it doesn't optimize for the overall system performance ... resulting in keeping around lower-use pages for one process while kicking out higher-use pages for another process.

The issue with LRU is whether or not recent access patterns accurately predict future access patterns (but that is somewhat independent of the global/local issue).

so another way of viewing it is the impact on overall system performance (cpu stall) of not having a specific page compared to not having another specific page.

so long ago, i did this policy based resource manager ... and a lot of customers started calling it the fair share scheduler because the default policy was fair share. processes using less than their target (possibly a target defined as fair share) got better dispatching priority ... so that they ran faster (at least until they caught up with their resource utilization objective).
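
a tiny sketch of that default policy (my own illustration, not the actual resource manager code; the names are made up) ... dispatching priority improves as consumed resource falls below the target:

/* hypothetical: smaller value = better dispatching priority.
   processes under their resource target get boosted until they
   catch up with their utilization objective. */
double dispatch_priority(double consumed, double target)
{
    if (target <= 0.0)
        return 1.0;               /* no entitlement: neutral priority */
    return consumed / target;     /* < 1.0 while under target => runs sooner */
}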

The issue on total system thruput isn't so much dispatching priority ... but total resource thruput objectives. If you have a process that is supposed to get 80 percent of the processor ... and it is having frequent page faults ... then the impact on the overall system is much worse than a process that is supposed to get 5 percent of the processor (having the same page fault rate).

Some number of strategies were developed to directly address the page fault rate of processes that were deemed critical (supposed to get some significant resources) and were behind schedule. One such strategy was to run the standard global replacement ... but sporadically bypass/skip-over selected pages for processes in such a situation. There effectively was a global ordering of pages based on projected overall benefit to the system .... but (in effect) pages belonging to behind-schedule critical processes could be given extra points.
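
a minimal sketch of that kind of biased global replacement (my own illustration, not the actual implementation; the structures and parameters are made up) ... a clock-style sweep over all frames that usually bypasses pages belonging to behind-schedule critical processes:

#include <stdbool.h>
#include <stddef.h>

/* hypothetical page frame descriptor */
struct frame {
    bool referenced;   /* hardware reference bit, cleared as the clock sweeps */
    bool favored;      /* owner is a behind-schedule critical process */
};

/* global clock replacement with "extra points" for favored processes:
   referenced frames get another pass (bit cleared) as usual; unreferenced
   frames of favored processes are bypassed except every Nth encounter.
   assumes nframes > 0 and take_favored_every >= 1. */
size_t select_victim(struct frame frames[], size_t nframes, size_t *hand,
                     unsigned take_favored_every)
{
    unsigned favored_seen = 0;

    for (;;) {
        struct frame *f = &frames[*hand];
        size_t victim = *hand;
        *hand = (*hand + 1) % nframes;

        if (f->referenced) {
            f->referenced = false;        /* give it another trip around the clock */
            continue;
        }
        if (f->favored && (++favored_seen % take_favored_every) != 0)
            continue;                     /* sporadically bypass favored pages */
        return victim;
    }
}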

Part of the issue wasn't strictly page-fault rate ... it was the combination of relative page-fault rate and page-fault service time ... resulting in total non-executable time for a specific process ... and to what extent that total page-fault related non-executable time was contributing to the process not meeting the established resource utilization objective.

this is all predicated on past use of a virtual page in some way relating to future use of that same virtual page. there are the edge and pathological cases where there is no virtual page re-use and caching provides no benefit what-so-ever.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt
Date: Sun, 02 Jan 2005 11:12:05 -0700
"John E. Hadstate" writes:
I don't remember the "problems" of the C programming environment being a big factor influencing the architectural development of S/360 and S/370 (with their Execute-only memory protection). In fact, I'm pretty sure that most of the buffer overflow problems in those days were found in FORTRAN, COBOL, RPG and Autocoder programs. C hadn't even been invented.

so i assume that you are being flippant or sarcastic ... since 360/370 didn't have execute only memory regions. my reference to no-execute (aka somewhat the inverse of execute-only) is to recent changes to existing hardware processors to address buffer length related exploits in common c programming environments. also, i specifically mentioned that no-execute (sort of the inverse of execute-only) was a recent addition to some existing processors.

original 360 had storage protection (store & fetch options), in part because standard 360 had a linear real storage addressing model ... all execution was in the same, single, real address space (kernel and all applications). as is common in virtual memory architectures, different address spaces have been used to partition the kernel and different applications from each other (i.e. the requirement for store & fetch protection features is somewhat mitigated if it isn't even possible to address the region). however, it didn't have a mechanism for marking specific storage areas as either execute or non-execute (I-fetch in the PSW could point at any location ... and could fail because of generalized storage fetch protection or invalid operation code ... but there was no feature that would prevent I-fetch if the storage was accessible).

the folklore is that some of the CTSS people went to the science center on the 4th floor, 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

and worked on virtual machine/memory operating system, gml/sgml, internal operating system, etc.

other CTSS people went to the 5th floor, 545 tech sq and did Multics, implemented in PLI. The unix/C genre was somewhat a reaction to the complexity and early performance problems of the multics implementation (in much the same way that the early success of the virtual machine/memory operating system done on the 4th floor was due to the complexity and early performance problems of the tss/360 implementation).

virtual memory wasn't originally introduced with 370 ... and when it was finally introduced ... the virtual machine/memory work done on the 4th floor (545 tech sq) was adapted to 370 virtual memory operation. However, the traditional batch operating system environment had linear addressing (and the pointer passing paradigm) so ingrained that even with multiple virtual address spaces (in theory providing separation and partitioning), the kernel and applications continued to occupy the same (virtual) address space (requiring storage protection to keep application code from overwriting kernel or other application routines ... carried over from the 360/370 real storage days).

the original base 370 virtual memory architecture included read-only "segment" protect (i.e. a flag in the segment table entry that identified the whole virtual memory segment as read-only storage) ... but because of a hardware implementation problem in the 370/165, it was not announced (and was disabled on the models that had already implemented it). a page table entry R/O flag was finally introduced with the 3033 (providing store protection on a 4k virtual page boundary). somewhat orthogonal, low-storage-protection was introduced in the 80s ... specifically to address the issue of zero pointer failures in kernel code (i.e. bugs in kernel code using an incorrect zero pointer to scribble over low real storage).

the pointer passing paradigm was so ingrained in the batch orientation ... that for the evolution into coordination between different applications isolated in their own virtual address spaces ... dual-address space (which was later generalized to multiple address space) operation was introduced. With some specific kinds of restrictions and special use ... applications could utilize pointers to address storage in different virtual address spaces (using new forms of semi-privileged operation).

some of the issues in traditional c programming environments are heavy use of the value passing paradigm (compared to the pointer passing paradigm) ... coupled with frequent/common length ambiguity about buffers and strings. the general scenario is the copy of data with arbitrary length into a storage area that is of possibly ambiguous size.

the rather recent addition of no-execute support to some processors is specifically looking at the scenario of a copy operation of data of arbitrary length into an area ... which results in overlaying other data. the exploit scenario is to package into a long string (longer than anticipated by the application) a relative address pointer and some exploit code ... so that when it is copied, the exploit address pointer overlays an infrastructure value; eventually resulting in execution being transferred to the packaged exploit code.

The countermeasure for executing the exploit code embedded in an arbitrary string ... is to require storage areas to be specifically identified as executable (and/or non-executable). Note this is different than the execute-only feature which was originally designed as sort of a copyprotect feature ... it wasn't designed to prevent execution of arbitrary memory areas ... it was designed so that only I-fetch could fetch from the specific memory areas, preventing code from examining the executed instructions (i.e. the exact nature of the executable code was to be hidden ... but still be allowed to execute).
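
a minimal sketch of that no-execute countermeasure on a current system (my own illustration, nothing to do with 360/370; it uses the posix mmap interface, and MAP_ANONYMOUS is a common unix extension) ... a writable data buffer is mapped without execute permission, so even if an overrun redirects control into it, the instruction fetch faults instead of running the injected bytes:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t pagesize = (size_t)sysconf(_SC_PAGESIZE);

    /* anonymous page: readable and writable, but NOT executable */
    unsigned char *buf = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    /* attacker-supplied bytes copied here are inert as code: any branch
       into this page faults because the page lacks PROT_EXEC */
    memset(buf, 0x90, pagesize);    /* e.g. fill with x86 NOP bytes */

    printf("data page at %p is read/write, no-execute\n", (void *)buf);
    munmap(buf, pagesize);
    return 0;
}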

• Execute-only is different than no-execute ... execute-only is sort of a copyprotect countermeasure on executable code; no-execute is a countermeasure to prevent random/arbitrary locations of memory from being executed.

• 360 had fetch protect (not limited to execution ... but all kinds of fetches) ... in part because the standard paradigm was single linear real address space (no virtual memory address spaces to provide partitioning/separation). fetch protect wasn't very extensively used

• 360 had store protect ... again, in part because the standard paradigm was single linear real address space (no virtual memory address spaces to provide partitioning/separation).

• execute-only was not a 360/370 feature ... but was found in some places in the industry ... somewhat more as a copyprotect of executable code.

• virtual memory systems have tended to have things like r/o segment protection ... allowing the same storage image (shared segment) to appear in multiple different address spaces concurrently ... but preventing pollution of one address space by an application running in a different address space (preserving the partitioning/isolation of the virtual address space paradigm). this was more difficult to adapt to a lot of 360/370 code because of the relatively common use of the "self-modifying" code technique (where a preceding instruction modifies a following instruction).

• finer granularity memory region R/O protection is possible to specifically address (shared) code modification by other code (in error) ... as opposed to the more generalized R/O storage protection. this is somewhat aided by RISC/Harvard architectures with separate I & D caches, requiring specific instructions to materialize storage alteration operations (that might show up in the D cache) in the I cache. 360/370 actually has performance/implementation issues in this area since it has allowed "self-modifying code" (aka the previous instruction stores some data into the following instruction ... which causes problems with things like pipelined prefetch & decoding of instructions in the pipeline).

• relatively recent no-execute memory regions (as distinct from the more generalized fetch protect) as a countermeasure to the relatively common length related exploits in standard c programming environments.

for some topic drift ... random past posts about common 360/370 use of self-modifying instruction paradigm
https://www.garlic.com/~lynn/2001.html#39 Life as a programmer--1960, 1965?
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002h.html#19 PowerPC Mainframe?
https://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe

some extended posts about the pointer passing paradigm and the evolution of dual-address space, multi-address space and access register operation:
https://www.garlic.com/~lynn/98.html#36 What is MVS/ESA?
https://www.garlic.com/~lynn/2000c.html#84 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2000d.html#28 RS/6000 vs. System/390 architecture?
https://www.garlic.com/~lynn/2000e.html#58 Why not an IBM zSeries workstation?
https://www.garlic.com/~lynn/2001d.html#28 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001d.html#30 Very CISC Instuctions (Was: why the machine word size ...)
https://www.garlic.com/~lynn/2001h.html#73 Most complex instructions
https://www.garlic.com/~lynn/2001i.html#13 GETMAIN R/RU (was: An IEABRC Adventure)
https://www.garlic.com/~lynn/2001k.html#16 Minimalist design (was Re: Parity - why even or odd)
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2002g.html#5 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#17 Black magic in POWER5
https://www.garlic.com/~lynn/2002g.html#18 Black magic in POWER5
https://www.garlic.com/~lynn/2002h.html#21 PowerPC Mainframe
https://www.garlic.com/~lynn/2002l.html#51 Handling variable page sizes?
https://www.garlic.com/~lynn/2002l.html#57 Handling variable page sizes?
https://www.garlic.com/~lynn/2002n.html#58 IBM S/370-168, 195, and 3033
https://www.garlic.com/~lynn/2002n.html#74 Everything you wanted to know about z900 from IBM
https://www.garlic.com/~lynn/2002p.html#43 cost of crossing kernel/user boundary
https://www.garlic.com/~lynn/2002q.html#1 Linux paging
https://www.garlic.com/~lynn/2003c.html#13 Unused address bits
https://www.garlic.com/~lynn/2003d.html#53 Reviving Multics
https://www.garlic.com/~lynn/2003d.html#69 unix
https://www.garlic.com/~lynn/2003e.html#0 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003e.html#12 Resolved: There Are No Programs With >32 Bits of Text
https://www.garlic.com/~lynn/2003g.html#13 Page Table - per OS/Process
https://www.garlic.com/~lynn/2003m.html#29 SR 15,15
https://www.garlic.com/~lynn/2004c.html#6 If the x86 ISA could be redone
https://www.garlic.com/~lynn/2004e.html#41 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004f.html#27 [Meta] Marketplace argument
https://www.garlic.com/~lynn/2004f.html#53 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004n.html#26 PCIe as a chip-to-chip interconnect
https://www.garlic.com/~lynn/2004n.html#54 CKD Disks?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2004o.html#57 Integer types for 128-bit addressing

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Athlon cache question

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Athlon cache question.
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 02 Jan 2005 14:13:08 -0700
the other slightly related access pattern "cache" management is more associated with (longer term) file access ... although the basic principle applies to all kinds of caches.

i mentioned having done the disk record access trace (highly optimized, capable of being used in high-thruput production environments) ... first for modeling lots of different file behavior (system file caches, controller caches, disk device caches, etc) ... and then later looking at using the information as part of standard production system operation
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#79 Athlon cache question

i had also done the implementation for an internal backup/archive system that was used in a lot of internal locations (especially in the silicon valley area) ... which went thru a few internal releases ... and then became available as workstation datasave ... morphing into adsm and currently tsm
https://www.garlic.com/~lynn/submain.html#backup

there have been a couple of infrastructures that attempt to manage the disk storage space as file cache ... HSM (hierarchical storage manager), SMS (system managed storage), and ADSM/TSM.

The vs/repack traces and program re-ordering attempted to pack things that tended to be used together into the same page(s) ... this frequently would take large, relatively weak working sets and turn them into much more compact, stronger working sets.

the file storage stuff has come up with similar constructs ... i think in TSM they are currently referred to as something like containers ... collections of files that will tend to be migrated together ... because the indications are that they tend to be used together.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 02 Jan 2005 17:16:14 -0700
"John E. Hadstate" writes:
If you recall, there were two bits for each memory region: read-enable and write enable. No-read, No-write meant no-access. Write-no-read meant I-fetch-only access if the processor was in non-privileged state. It meant read-write if the processor was in privileged state.

Your Big Blue credentials notwithstanding. I spent many nights reading the system architecture documents for S/360, some of which were intended for distribution only to Field Service engineers, during the summer of 1970. The architecture did have support for no-access, read-only, read-write, and execute only. Whether the hardware for a particular model of S/360 supported it depended on other (sometimes unreasonable) things, such as the presence of floating point support or a memory expansion option.

You are right about the lack of VM support in S/360. O/S 360 MFT could have multiple partitions, but the linker did all the relocation work up front. However, I believe the S/360 Model 91 did have virtual memory support. To be fair, I believe NASA was the only customer for it and it had problems and a price tag that made it uninteresting to the commercial world. Clemson University had a S/370-155, then a 165 in 1970 that had virtual memory. IBM sales tried to sell me a S/370-110 in 1972 that had virtual memory hardware. They were still peddling DOS for it which, by that time, had virtual memory support grafted on.

Self-modifying application code became an interesting problem in the 360 Model 91 (was there also a 61) because of pipelined prefetch. IBM's official (and documented) answer was, "Don't do that." The 360/370 instruction set included an "Execute" instruction for modifying fields (mostly length fields) in arbitrary instructions outside the instruction stream without actually changing the contents of memory where the modified instruction was fetched from. (This presented some interesting problems if the target instruction crossed a page boundary and one of the pages was missing.) Thus, while the code could be self-modifying in a sense, it didn't have to be as bad as it sounds. Of course, that's not to say that some people didn't get away with it, especially on small 360's running DOS or TOS where memory management was pretty loose.


there might be lots of stuff that could be model specific ... but because it wasn't in general use ... it didn't gain support as part of the standard programming paradigm.

the original virtual memory on 360 was done on a specially modified 360/40 ... but as far as I know, only one such machine was built. This was built for the virtual memory/machine project at the science center, 4th floor, 545 tech sq
https://www.garlic.com/~lynn/subtopic.html#545tech

The standard virtual memory offering for 360 was the 360/67, which also supported both 24-bit and 32-bit virtual addressing modes. It basically was a 360/65 with virtual memory hardware support added (and in the smp case, other types of stuff). The official corporate virtual memory operating system for the 360/67 was tss/360. Because of complexity, performance, and implementation issues, a lot of places took their 360/67 and used them for other stuff. Some number of places just ran their 360/67 in straight 360/65 mode with a batch operating system. UofM wrote the Michigan Terminal System (MTS) which some number of locations ran on 360/67. The majority of the 360/67s eventually were running the virtual machine/memory system developed at the science center (the center converted the system from the custom modified 360/40 to the 360/67 when the machine became available) ... or some commercial derivative of the same ... aka IDC, NCSS, etc for commercial time-sharing service bureaus, misc. past postings
https://www.garlic.com/~lynn/submain.html#timeshare

There are some similarities between the unix/C split off from Multics and the virtual memory/machine system done by the science center vis-a-vis the official corporate strategic operating system TSS/360.

The folklore is that the original announced products were 360/60, 360/62 and 360/70 ... all machines with real storage that cycled at one microsecond. I don't know if any such machines actually shipped. These machines were quickly replaced with the announcement of the 360/65, 360/67, and 360/75 ... which were the machines upgraded with real storage that cycled at 750ns.

note/update:

I remember reading an early document about a 360/6x machine with virtual memory having one, two, and four processors. I sort of had a vague recollection that it was a model number other than 360/67.

however, i've subsequently been told that the 360/60 was with 2mic memory and the 360/62 was with 1mic memory. neither model ever shipped; they were replaced with the 360/65 with 750ns memory. the 360/67 then shipped as a 360/65 with virtual memory ... only available in one (uniprocessor) and two processor (multiprocessor) configurations

https://www.garlic.com/~lynn/2006m.html#50 The System/360 Model 20 Wasn't As Bad As All That


I never heard of 61. There was 91, 92, 95, 360/195 and 370/195. There was also a 360/44 ... sort of subset of 360/40 instructions/hardware with enhanced floating point hardware performance.

I had some involvement with a project that looked at doing a 370/195 that emulated a two processor machine ... doing red/black tagging of registers and instructions in the pipeline. The problem with these pipeline machines was that (almost all) branch instructions would drain the pipeline (no branch prediction &/or speculative execution). Except for very specialized codes, the 370/195 pipeline rarely got more than half-full before encountering a branch instruction (resulting in it running at something like half the peak rated mip rate). Simulating a two-processor machine with dual i-streams (something like modern day hyperthreading support) could have two i-streams, each keeping the 370/195 pipeline half-full ... which might amount to a full pipeline.

The execute instruction took another instruction as an argument and used a parameter from the execute instruction to modify the 2nd byte in the target instruction. This was the length field in the character and decimal instructions and the "immediate" field in the immediate instructions. The 360/67 required a minimum of 8 hardware relocate look-aside registers ... since the worst case minimum instruction execution could require 8 different addresses:

2 - execute instruction crosses a page boundary
2 - target instruction crosses a page boundary
2 - target is "ss" instruction (aka storage to storage) with two storage address operands
2 - both "ss" storage operands cross a page boundary (i.e. precalculates both start and end of each storage operand)
= 8

The maximum possible length of an ss-instruction storage operand is 256 bytes; instruction decode only needed to preresolve the start and end address (which wouldn't cross more than one page boundary using either 2k pages or 4k pages). on 360 and most 370 instructions, instruction decode would resolve both starting and ending operand addresses ... and test for access permission. If there wasn't access permission for all storage references (a single storage operand for rs/rx/ri and two in the ss-instruction case) for both the starting and ending addresses of the respective storage locations, there would be a hardware interrupt before the start of instruction execution.

It might be observed that this may have helped promote software paradigms that kept track of all lengths ... since there were no standard machine instructions that supported implied lengths.

Images of executable code on disk included control information about address location dependencies in the code. The link/loader would modify the executable code image after it was brought into storage to adjust address location dependencies. This loader/link function was independent of whether you ran PCP, MFT, or MVT. Even in the purely PCP environment running only a single application at a time ... an application program executable image would be loaded (and the address dependencies adjusted) ... and then any library program images tended to get loaded after the application program code. Since the application program code could be of arbitrary size ... the load location of library code could be somewhat arbitrary. Quite a few of the address location dependencies were address constants of subroutine entry points that tended to be intermixed with code. A specific PCP release would tend to have a constant starting address for loading application code ... but things like library code would be loaded at somewhat arbitrary locations (dependent on the size of the application being run). MFT (& MVT) could have multiple different addresses for starting the load of application code (but the link/loader already had to default to arbitrary address adjustments because of the generalized issue of loading library code). This characteristic is somewhat orthogonal to having a pointer passing paradigm vis-a-vis possibly a value passing paradigm, and to whether the standard programming model allowed for execute only, read only, no execute, etc. however, for lots of postings on the address dependency problem with shared, r/o code (in 360/370 architecture):
https://www.garlic.com/~lynn/submain.html#adcon

The first engineering 370 with virtual memory support was the 370/145. The standard front panel of all 370/145s had a light labeled XLATE (long before virtual memory was announced for the 370). There was lots of customer speculation that "XLATE" stood for translate-mode (aka address relocation supporting virtual address spaces).

This machine was used to port the virtual machine/memory operating system done by the science center from 360/67 to 370 (there were some number of differences between the 360/67 and 370 virtual memory architectures, including that the 370 architecture supported only 24-bit address mode and didn't support 32-bit address mode). The initial prototype of the standard corporate batch operating system involved taking a straightforward MVT operating system ... and crafting a small virtual address space layer .... quite a bit of it from bits & pieces of the science center operating system. For the most part the existing MVT operating system code ... continued to operate as if it was running on a real storage machine that happened to have 16mbytes of memory.

Actually, the science center had a joint project with the endicott engineers responsible for the 370/145 virtual memory hardware. The standard 360/370 hardware reference manual is called the principles of operation ... which has detailed descriptions of the instructions and infrastructure for 360/370. It has traditionally been (since the late '60s) a machine readable file that is part of something called the architecture manual "red-book" (because it was traditionally distributed in a red 3ring binder). The file could be printed/formatted under command control as the full architecture manual ... or as the publicly available principles of operation subset (with conditional formatting controls). The full architecture manual tended to have lots of engineering notes, justifications for the instruction or feature, and various trade-off discussions ... like why some things might be feasible with a specific hardware technology ... but not possible as part of the official 360/370 architecture (because it possibly wasn't practical across all hardware technology used in the various different processor models). In any case, special operating system software was crafted to emulate the full virtual memory 370 architecture ... running on a 360/67. The result was 370 virtual machines that could be used for testing (even tho the 360/67 and 370 virtual memory architectures had numerous differences). This project had virtual 370 machines up and running in production a year before the first engineering virtual memory 370 machine was operational.

principles of operation is a generally available customer manual (current versions for 390 and Z series have been available online)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/CCONTENTS?SHELF=EZ2HW125&DN=SA22-7201-04&DT=19970613131822

it is unlikely the architecture red-book version was available in the field.

for specific models, you could also get the functional characteristic manual. Some number of the 360 functional characteristics manuals have been scanned and made available online:
http://www.bitsavers.org/pdf/ibm/360/

including 40, 44, 65, 67, 91, 195, etc. the 360 functional characteristics manuals tended to include detailed model specific things like instruction timings. numerous other early hardware and software manuals have also been scanned and are online

For each specific machine, FEs would also have detailed wiring and microcode implementation manuals ... typically located in the machine room near the machine. Specific machine models might have implementations of stuff not prescribed by 360/370 architecture.

at the above site, there are (at least) the "-0", "-6", and "-7" versions of the principles of operation manual. Page 17 has a short description of the "protection features" which might be available on a 360 machine; one is the store protection feature and the other is the fetch protection feature. I believe for OS/MFT (OS/MVT, and later) the only required feature was store protect (I don't believe the operating system and/or any other software required the fetch protection feature to be available for operation).

Prior to MFT (and then MVT) requiring the store protect feature, it was possible for applications to stomp on low-storage and do things like enter privilege/supervisor state ... possibly an exploit, but also a mechanism used by some number of regular applications ... like hasp ... some number of past hasp postings
https://www.garlic.com/~lynn/submain.html#hasp

Note, it doesn't mention execute-only as any standard 360 feature. However the 360 architecture wouldn't preclude the 360/40 engineers from implementing such a feature ... it just wasn't part of 360 (and wouldn't likely be used by any standard programming paradigm). 360/40s typically had other stuff that wasn't part of 360 ... like microcode for hardware emulation of 1401/1410/etc ... as an alternative to the microcode engine emulating 360 instructions ... typically controlled by some switch on the front panel ... a picture of a real 1401:
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH2044.html

picture of 360/40 (there should be some sort of switch on the front panel ... probably lower center cluster that could select between 360 instruction microcode or 1401 instruction microcode operation of the machine):
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH2040.html

a picture of similar 360/44 used for floating point/scientific workloads:
http://www-03.ibm.com/ibm/history/exhibits/mainframe/mainframe_2423PH2044.html

the following reference has some amount of material about the evolution of CTSS to multics ... project mac choosing the GE machine instead of the ibm 360/67 hardware, and various and sundry tidbits about tss/360 ... but is mostly about the virtual memory/machine operating system developed by the science center (initially on a custom modified 360/40 with virtual memory and later converted to the standard 360/67 product line machine)
https://www.leeandmelindavarian.com/Melinda#VMHist

The "low-end" 370 was the 115/125 models developed by Boeblingen. Except for a few exceptions, the 360 and 370 machines were microcoded implementations. On the low & mid-range 370s ... the native processor engine typically executed an avg. of 10 instructions for every 370 instruction. The 115/125 hardware implementation used a 9 position, shared memory buse ... which could have microprocessor engines installed. One microprocessor would have microcode loaded for 370 instruction set and the other (typically) 3-8 microprocessor engines would have other microcode installed to handle various control and I/O functions. In the 115 all the microprocessors were identical (except for the microcode load). The 125 was identical to a 115 except the microprocessor engine that had the 370 microcode load was approx. 25% faster than the other engines. The 115 was rated at about 80KIPS 370 (requiring a native processor engine of about 800KIPS) and the 125 was rated at about 100KIPS 370 (requiring a native processor engine of about 1MIPS).

a site listing most of the 360 and 370 models with original announce dates as well as first customer ship (including 360/60, 360/62, 360/70 announce dates):
https://web.archive.org/web/20050207232931/http://www.isham-research.com/chrono.html

per above reference, 370/165 (w/o virtual memory) was announced June, 1970 and first customer ship of a machine was spring of 1971.

and for some clemson related topic drift:
https://people.computing.clemson.edu/~mark/fs.html
misc. other clemson references that somebody might find of interest:
https://people.computing.clemson.edu/~mark/acs_technical.html
http://www.cs.clemson.edu/~mark/acs_timeline.html

also per above, misc 370 announce & ship dates: 370/125 ANN 10/72, first ship, spring 73; 370/115 ANN 3/73, first ship spring 74.

370 virtual storage was announced 8/72 and immediately available on 370/135 and 370/145 with new microcode load .. but required purchase of field installable hardware to add virtual memory to 370/155 and 370/165. Operating systems were dos/vs (virtual storage version of dos), vs1 (somewhat virtual storage version of mft), and vs2 (virtual storage version of mvt).

There was some argument with the 370/165 engineers about implementing the full 370 virtual memory architecture (as per the "red-book"); they claimed it would add a six month delay to the schedule to include support for selective invalidate as well as segment r/o protect. They won, since it was more important to get virtual storage announced and in the field than to make sure that all features of the architecture were included (the excluded features then had to be removed from the 135 & 145 implementations that already existed).

minor summary
8/70   370/165 announced
2q/71  370/165 FCS .. first customer ship
8/72   virtual memory/storage announced
10/72  370/125 announced
3/73   370/115 announced
2q/73  370/125 FCS
2q/74  370/115 FCS

short description of 360 & 370 machine models:
http://www.beagle-ears.com/lars/engineer/comphist/model360.htm

30 year history of MTS:
http://www.clock.org/~jss/work/mts/30years.html

random past references to 370/195 dual i-stream project:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/95.html#3 What is an IBM 137/148 ???
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003f.html#33 PDP10 and RISC
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing

random past references to UofM MTS (virtual memory operating system built for the 360/67 and later ported to 370 virtual memory)
https://www.garlic.com/~lynn/2000b.html#61 VM (not VMS or Virtual Machine, the IBM sort)
https://www.garlic.com/~lynn/2000c.html#44 WHAT IS A MAINFRAME???
https://www.garlic.com/~lynn/2000f.html#52 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000g.html#0 TSS ancient history, was X86 ultimate CISC? designs)
https://www.garlic.com/~lynn/2000.html#91 Ux's good points.
https://www.garlic.com/~lynn/2001m.html#55 TSS/360
https://www.garlic.com/~lynn/2001n.html#45 Valid reference on lunar mission data being unreadable?
https://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we need it?
https://www.garlic.com/~lynn/2002n.html#54 SHARE MVT Project anniversary
https://www.garlic.com/~lynn/2002n.html#64 PLX
https://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#10 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003f.html#41 SLAC 370 Pascal compiler found
https://www.garlic.com/~lynn/2003j.html#54 June 23, 1969: IBM "unbundles" software
https://www.garlic.com/~lynn/2003k.html#5 What is timesharing, anyway?
https://www.garlic.com/~lynn/2003l.html#30 Secure OS Thoughts
https://www.garlic.com/~lynn/2003l.html#41 Secure OS Thoughts
https://www.garlic.com/~lynn/2004c.html#7 IBM operating systems
https://www.garlic.com/~lynn/2004g.html#4 Infiniband - practicalities for small clusters
https://www.garlic.com/~lynn/2004.html#46 DE-skilling was Re: ServerPak Install via QuickLoad Product
https://www.garlic.com/~lynn/2004.html#47 Mainframe not a good architecture for interactive workloads
https://www.garlic.com/~lynn/2004l.html#16 Xah Lee's Unixism
https://www.garlic.com/~lynn/2004n.html#4 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004n.html#25 Shipwrecks
https://www.garlic.com/~lynn/2004n.html#34 RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#20 RISCs too close to hardware?
https://www.garlic.com/~lynn/93.html#23 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#25 MTS & LLMPS?
https://www.garlic.com/~lynn/93.html#26 MTS & LLMPS?
https://www.garlic.com/~lynn/98.html#15 S/360 operating systems geneaology

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Sun, 02 Jan 2005 19:17:57 -0700
Anne & Lynn Wheeler writes:
for specific models, you could also get the functional specification manual. Some number of the 360 functional characteristics manuals have been scanned and made available online:
http://www.bitsavers.org/pdf/ibm/360/


oh and this is the 370 principles of operation ... after virtual storage was announced ... and it includes a little more detailed description of the storage key-based protection mechanism ... as well as stuff added to the storage keys for virtual memory operation (pg. 38) ... and is nominally defined to be upward compatible with 360.
http://www.bitsavers.org/pdf/ibm/370/princOps/GA22-7000-4_370_Principles_Of_Operation_Sep75.pdf

The psw storage protection key is part of the privileged status ... it can have a value between 0 & 15 (zero meaning it matches everything). If the value in the psw is non-zero and doesn't match the 4bit value in the "storage key" (one for each 2k block of real memory) then there is a storage protection exception.

for 370 there are 7 bits (of the logical 8bit byte) defined (see pg. 38 in the above referenced document):

bit(s) 0-3   protection key value
bit    4     fetch protection flag
bit    5     storage reference bit
bit    6     storage change bit
bit    7     unused

bits 5&6 are status bits added for 370 so that the hardware can record if the associated 2k block of memory has ever been referenced and/or changed. normal programming operation is to test the bits and then reset the storage key value with zeros for bit 5 (and possibly bit 6).

standard 360 only had the (privileged mode) insert storage key (ISK) and set storage key (SSK) operations (which is how the page replacement algorithm on the 360/67 interrogated the reference bit). for 370, a new instruction, reset reference bit (RRB), was added ... the instruction condition code indicated the reference bit value before it was cleared to zero.

standard 360 (& 370) store protect was implied by a non-zero key in the psw and a mismatch between the PSW key and the storage key for a 2k block of storage. there was no separate bit that specified store protection. Setting the fetch protect flag resulted in both fetches and stores being checked for a mismatch between the 4bit PSW key and the 4kbit storage protection key (for 2k range of storage). the description on pg. 38 in the above referenced principles of operation specifically says that with fetch protection flag is set ... fetch protection is applied to all storage fetches ... both instruction and instruction operands (w/o regard to the type of fetch).

there is no provision (at least in the standard 360/370 architecture implementation) for execute-only fetch operation (i.e. allow instruction fetches but disallow instruction operand fetches) because there is only a single bit used .... the fetch protection bit

• when the fetch protect bit is zero there is only store protect checks when there is a non-zero PSW key and a mismatch between the PSW key and the storage key for the 2k block of storage.

• when the fetch protect bit is one there are both store protect checks and fetch protect checks when there is a non-zero PSW key and a mismatch between the PSW key and the storage key for the 2k block of storage (and as specified, fetch protection is applied equally to instruction fetches as well as instruction operand fetches).

In most of the 360s & 370s .. the storage key array could have an 8bit byte for each 2k block of real memory. In the vanilla 360 case, that might result in there being 3 unused bits for every storage key (although in the 360 time-frame, it might just as likely have implemented only the five bits per 2k of real memory ... and if the feature wasn't installed the storage key array wouldn't exist at all on the machine ... and ISK/SSK instructions would just be null operations).

The 360/67 used the 7bit storage key array defined in 370 ... aka a 4bit protection key value, a fetch protection flag, a storage reference bit and a storage change bit.

I have no knowledge of any possible "under the covers" support in the storage protection feature implementation on the 360/40 that could have used an extra bit in the storage key array to be able to specify execute-only (aka allow instruction storage fetches ... but preclude instruction operand storage fetches). I never tripped across any 360 or 370 document that happened to mention it.

Given the description of separate fetch and store protection "features" on 360 ... it is possible that if the fetch protection feature wasn't installed ... there would only be a 4bit key for each 2k storage block in the storage key array ... and the only supported check would be for storage protection violations on stores (with no hardware or microcode added to check for storage protection violations on fetches).

note that in the 360 principles of operation manual (referenced in the previous posting), the description that starts on pg. 17 and carries over to pg. 18 ... specifically describes setting store protect (i.e. fetch protect bit is zero) or setting "fetch-and-store" protect (i.e. setting the fetch protect bit to one). There is no additional flag bit defined (at least in the 360 principles of operation) ... supporting a 3rd state/condition ... allow instruction fetch but disallow instruction operand fetch.
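
pulling the above together, a minimal sketch of the key-match rule (my own model of the description above, not from any manual; names are made up) ... PSW key zero matches everything, a key mismatch always blocks stores, and blocks fetches (instruction and operand alike) only when the block's fetch-protect flag is set:

#include <stdbool.h>
#include <stdint.h>

typedef enum { ACCESS_FETCH, ACCESS_STORE } access_t;

/* hypothetical model of the storage key for one 2k block of real storage */
struct storage_key {
    uint8_t key;          /* bits 0-3: 4-bit protection key value */
    bool fetch_protect;   /* bit 4: fetch protection flag */
    bool reference;       /* bit 5: set by hardware on any reference (370) */
    bool change;          /* bit 6: set by hardware on any store (370) */
};

/* returns true if the access raises a protection exception */
bool protection_exception(uint8_t psw_key, const struct storage_key *sk,
                          access_t access)
{
    if (psw_key == 0)                  /* PSW key zero matches everything */
        return false;
    if ((psw_key & 0xf) == sk->key)    /* matching keys: access allowed */
        return false;
    if (access == ACCESS_STORE)        /* store always checked on a key mismatch */
        return true;
    return sk->fetch_protect;          /* fetch blocked only if fetch protect is set */
}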

for some topic drift ... a couple recent posts on cache replacement algorithms:
https://www.garlic.com/~lynn/2004q.html#18 PR/SM Dynamic Time Slice calculation
https://www.garlic.com/~lynn/2004q.html#58 CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
https://www.garlic.com/~lynn/2004q.html#72 IUCV in VM/CMS
https://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#76 Athlon cache question
https://www.garlic.com/~lynn/2004q.html#77 Athlon cache question

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

How do you say "gnus"?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: How do you say "gnus"?
Newsgroups: gnu.emacs.gnus,alt.folklore.computers
Date: Sun, 02 Jan 2005 19:51:54 -0700
David Sumbler writes:
Now for the trivial question: how do you say "Gnus"? Is it pronounced like "News", or do the cognoscienti pronounce it in some other way?

i always pronounced it ga-news ... similarly to having pronounced gnosis as ga-nosis ... and later gnu as ga-new (g wasn't silent)
http://www.gnu.org/

gnosis was/is "great new operating system in (the) sky" (capability-based operating system) done by tymshare in the late 70s and early 80s ... before M/D bought them and spun-off tymnet to british telecom and gnosis to a startup called key logic and renamed keykos. in that time-frame, i tended to have regularly monthly meetings with some of the tymshare people (in part because they had a vm/370-based time-sharing service) ... and was brought in by M/D to do gnosis technical audit for the spin-off.

keykos/gnosis was somewhat reborn as eros on intel architecture ... and is being touted as an operating system designed to get a CC/EAL7 evaluation:
http://www.eros-os.org/

and keykos
http://cap-lore.com/CapTheory/upenn/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[Lit.] Buffer overruns

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [Lit.] Buffer overruns
Newsgroups: sci.crypt,alt.folklore.computers
Date: Tue, 04 Jan 2005 17:30:29 -0700
and for even more on microcode, languages, etc lore

multics was on the 5th floor of 545 tech sq ... and was implemented in PLI ... a relatively recent study claimed that there had been no instances of buffer overruns in multics.

the science center was on the 4th floor
https://www.garlic.com/~lynn/subtopic.html#545tech

and the boston programming center was on the 3rd floor. the boston programming center was responsible for something called CPS (conversational programming system) ... which supported interactive PLI and BASIC ... running under various flavors of os/360. BPC had done a microcode package for the 360/50 that allowed CPS to run significantly faster than with the standard 360 instruction set (I don't have details on what was in the microcode package, but I believe it got most of its speedup from specialized hardware string processing instructions).

when the development group split off from the science center ... they eventually expanded and took over the 3rd flr and much of the BPC people. A couple people that they didn't pick up, like Nat Rochester and Jean Sammet ... eventually moved up to the 4th floor science center.

you could use a search engine on "programming languages" and "Sammet".

in the early '70s, the science center ported apl\360 to cms\apl ... from a 16k-32k byte real memory environment to an 8m-16mbyte virtual memory environment. one of the things that had to be done as part of that effort was to redo the storage manager for a virtual memory environment. apl & lisp share some of the same characteristics in that pointers and storage allocation are underneath the covers (somewhat analogous to java).

when i transferred from the science center to SJR in the late '70s ... i got an office about 8-10 doors down from John Backus. a search engine could be used on "backus" and "fortran".

at that time, sjr was still running a 370/195 with mvt (one of the last before the conversion of the batch operating system to virtual memory). Numerous people were complaining that the job stream backlog on the 370/195 was 3 months. the palo alto science center eventually added checkpointing to one of their jobs that was getting 3-month turn-around and ran it in the background and offshift on their 370/145. They claimed to have gotten slightly better than 3-month turn-around using that mechanism.

another job that was getting long turn-around was the air bearing simulation work. work was going on with regard to "flying" disk heads and the air bearing simulation was a critical part of finding out how to keep the heads off the surface.

we had done some operating system assurance work in bldgs. 14&15 that enabled them to do their disk development testing in an operating system environment, including being able to concurrently test multiple devices (they had been doing stand-alone testing with one device at a time on a scheduled basis .... because running with an operating system had meant 15 minutes MTBF). Because of the critical nature of disk testing, the engineering &/or product test labs would frequently get engineering model #3 (cpu engineers got the first two engineering cpus and disk engineering got the 3rd).
https://www.garlic.com/~lynn/subtopic.html#disk

In any case, in that time-frame, bldg. 15 got something like engineering model #3 of the 3033. For very specialized codes, you could get sustained peak thruput on a 370/195, but most stuff ran at about half of peak (about the same as a 3033). bldg. 15 was happy that they were able to test 4-6 engineering devices concurrently ... w/o having to resort to scheduling dedicated, stand-alone machine time ... even if it only used 2-3 percent of the 3033 cpu. So, in general the machine was essentially idle (from a CPU standpoint) and we managed to move the air bearing simulator from the 370/195 in bldg. 28 (with backlogs listed in weeks/months) to the 3033 in bldg. 15 (with effectively immediate access ... but only for our good friends).

random drift ... other stuff at sjr was original rdbms work ... random past posts
https://www.garlic.com/~lynn/submain.html#systemr

random past posts mentioning Jean Sammet:
https://www.garlic.com/~lynn/2000d.html#37 S/360 development burnout?
https://www.garlic.com/~lynn/2000f.html#66 360 Architecture, Multics, ... was (Re: X86 ultimate CISC? No.)
https://www.garlic.com/~lynn/2001m.html#47 TSS/360
https://www.garlic.com/~lynn/2002h.html#59 history of CMS
https://www.garlic.com/~lynn/2002j.html#17 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2002o.html#76 (old) list of (old) books
https://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
https://www.garlic.com/~lynn/2003c.html#0 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003c.html#1 Wanted: Weird Programming Language
https://www.garlic.com/~lynn/2003k.html#55 S/360 IPL from 7 track tape
https://www.garlic.com/~lynn/2004d.html#42 REXX still going strong after 25 years
https://www.garlic.com/~lynn/2004.html#20 BASIC Language History?
https://www.garlic.com/~lynn/2004m.html#54 Shipwrecks

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

OSI - TCP/IP Relationships

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OSI - TCP/IP Relationships
Newsgroups: comp.protocols.tcp-ip
Date: Wed, 05 Jan 2005 13:35:49 -0700
firadat writes:
Firstly - "Does TCP/IP inherit anything from OSI" and the next question is "Is it valid to say that OSI was the basis of TCP/IP existance"

OSI has a very point-to-point copper wire network orientation ... many claimed it reflects the viewpoint of telco people.

in the late 80s, there was an attempt to get a work item for HSP (high speed protocol) introduced in x3s3.3 (the iso chartered body for network & transport layer standards). ISO had compounded the problem of the severe OSI limitations by directives that no standards work could go on for anything that didn't conform to the OSI model.

HSP would go directly from the transport layer to the MAC interface (with internetworking support) and therefore violated the OSI model because

  1. it bypassed the layer 3/4 interface, violating the OSI model
  2. it supported internetworking (aka IP, which doesn't exist in the OSI model) and therefore violated the OSI model
  3. it interfaced to the LAN/MAC interface. the LAN/MAC interface violates the OSI model ... being an interface that sits somewhere in the middle of layer3/networking .... and therefore anything interfacing to the LAN/MAC interface also violates the OSI model.
random other posts on the subject:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Soul of Barb's New Machine

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Soul of Barb's New Machine
Newsgroups: alt.folklore.computers
Date: Wed, 05 Jan 2005 14:19:25 -0700
"David Wade" writes:
Personally I think IBM has killed VM in its current incarnation, I thought SP6 and fullscreen CMS was dire, but with all the gunge on a modern system it seems like "bloatware" to me!

there was frequently an ongoing battle between architecture purity and consistency and people from a traditional operating system background who were in the habit of adding anything to the system without any thought to the architecture consequences.

for a long time it seemed trivially easy to add stuff because there had been so much work on preserving architecture simplicity and purity ... but as more and more people crafted traditional operating system warts onto the infrastructure with little or no regard to KISS & architecture ... it started to lose some of the very characteristics that made it so easy to modify (they modified it because it was so easy to modify ... and with enuf modifications .... it can totally corrupt its original architecture consistency ... and at some point it starts to look like just another operating system).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

CAS and LL/SC

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: CAS and LL/SC
Newsgroups: comp.arch
Date: Wed, 05 Jan 2005 14:33:09 -0700
Del Cecchi writes:
Just to throw some gasoline on the fire, now that it is the new year, isn't a parallel Systplex of Z series e-servers (formerly known as 390) a cluster? Does it use message passing?

my wife did her stint in POK in charge of loosely coupled architecture (she and the person responsible for tightly coupled architecture reported to the same person). she specified a lot of the architecture at that time .... but almost all of the focus was on building bigger and faster SMPs.

she claimed that at the time, one of the few operations that paid any attention was the group doing IMS hot-standby. she lost some number of battles, like trying to turn trotter into a really efficient message passing operation. there was a story from SJR of using a modified trotter with a broadcast protocol to synchronize eight loosely-coupled complexes taking subsecond time ... but when forced to a half-duplex LU6.2 protocol, it took on the order of a minute or two to achieve the same synchronization of the complex.

random past post related to my wife's stint in POK and her Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

we subsequently did a project that turned out ha/cmp ... lots of past references
https://www.garlic.com/~lynn/subtopic.html#hacmp

a specific reference from the project
https://www.garlic.com/~lynn/95.html#13

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

The Soul of Barb's New Machine

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Soul of Barb's New Machine
Newsgroups: alt.folklore.computers
Date: Wed, 05 Jan 2005 14:50:33 -0700
there is some claim, some place ... that KISS can be much, much harder than complex.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Amusing acronym

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Amusing acronym
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 05 Jan 2005 16:52:35 -0700
patrick.okeefe@ibm-main.lst (Patrick O'Keefe) writes:
Sarcasm. "I've been moved", like almost all other "meanings" does not reflect well on IBM. It's something IBM would not want associated with its initials. To make matters worse, it was a true attribute.

during the '60s & '70s there was a period of very rapid growth for IBM ... and experienced people in sales, marketing, & field support were sometimes spread pretty thin. sometimes as soon as anybody gained any experience they were rotated to a new location. when i was an undergraduate ... there was a period where it seemed the university got a new system engineer or two every six months ... and as soon as we started to get them trained ... they were rotated someplace else and we got new system engineers.

i was helping with HONE during the 70s and into the 80s. It was the online infrastructure for sales and marketing (and sales/marketing support). In the late '70s, HONE US was consolidated at a single site in Cal. ... and it was pushing 40,000 userids (a couple years later as we got loosely-coupled working better, it was replicated with a second site in Dallas and then a third site in Boulder for availability .... basically triple redundant on the possibility that something like an earthquake might take out the cal. site).

In the early 70s when EMEA hdqtrs was moved from the US to La Defense (outside of paris) ... i handled the cloning of HONE to the new site. Eventually there were a large number of clones of the HONE US site (of various sizes and shapes) around the world (some that I got to handle).

i think that there was much less of a moving issue with people in research, development, manufacturing, etc .... it was the sales, marketing and other field people who were constantly being moved around to try and get customer coverage.

misc. past HONE posts
https://www.garlic.com/~lynn/subtopic.html#hone

slightly related post from another n.g.
https://www.garlic.com/~lynn/2005.html#11 CAS and LL/SC

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Using smart cards for signing and authorization in applets

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Using smart cards for signing and authorization in applets
Newsgroups: alt.technology.smartcards
Date: Thu, 06 Jan 2005 08:28:29 -0700
mac@theoffice.net (Mac) writes:
First my apologies if my question has been already replied.

I'm new to smart cards and I need some help to understand this technology.

What we´re planning to do is: - let our clients to fill html forms, sign data with a smart card and send back signed data. - let clients authenticate via Web using smart cards.

For the time being this is only a plan, but we'd like to have an idea of how difficult it is to build this on java.

Is there any java sample code that makes this tasks? which card reader would fit best our needs?

Thanks a lot for your help, and sorry if my questions are to general.


basically, from the 3-factor authentication paradigm:
  • something you know (like pin/password)
  • something you have (like smartcard or hardware token)
  • something you are (aka biometrics)

w/o any additional infrastructure ... a smartcard represents something you have authentication ... the private key is certified as only existing in a specific hardware token, so a relying party, when validating a digital signature with the corresponding public key, can assume that it originated from a specific hardware token (aka something you have authentication)

w/o additional infrastructure ... something you have authentication doesn't directly represent "signing" or "authorization" as in "human signing" ... which includes the demonstration of some sort of intent: that the person agrees, approves, and/or authorizes.

there has possibly been some confusion generated by the use of the term "digital signature" ... with public/private key technology that is used for authentication and message integrity. Encrypting a hash of a message with a private key (for later validation with the corresponding public key) is orthogonal and independent of the things referred to as digital certificates ... various postings on certificate-less public/private key operation
https://www.garlic.com/~lynn/subpubkey.html#certless

... or for that matter, independent of human signatures that imply intent, consent, approval, etc. It is possible to use digital signatures (with or w/o digital certificates) as part of an infrastructure to establish human intent ... but a digital signature by itself only establishes authentication and message integrity.

There is even the possibility of dual-use (digital signature) compromise ... where the same public/private key pair is used for both authentication infrastructures and authorization infrastructures.

Numerous public key authentication infrastructures (use of the private key to create a digital signature) are implemented as challenge/response systems; the server transmits a random challenge, the private key is used to digitally sign the random data and the digital signature is returned.

A human signature infrastructure implies that the human has read, understood, agrees, approves, and/or authorizes the content before it is digitally signed (and to repeat, digital signing can occur in a certificate-based or a certificate-less-based infrastructure). The issue with dual-use compromise is whether an attacker can ever transmit supposedly random data in a challenge/response authentication-only scenario ... where the random data actually looks like a valid transaction (of the kind that might be found in an authorization paradigm). The key owner signs something that they believe to be random data (as part of an authentication infrastructure) which turns out to be some sort of transaction.

The problem is that an authorization infrastructure may have absolutely no control over the additional uses that somebody puts their public/private key pair to ... and in theory, may be defenseless against a dual-use attack. The key owner may choose to defend themselves against a dual-use attack by
always modifying a random challenge message before signing, adding some additional disclaimer that reads something to the effect that the associated digital signature in no way implies any kind of intent, consent, agreement, approval, and/or authorization of the contents of what is being digitally signed.

The issue is that if the key owner isn't meticulous about the use of their public/private key ... it puts any authorization/approval infrastructure at risk ... and the risk occurs outside of anything that the authorization/approval infrastructure may have control over.
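
purely as an illustration (and not from any actual product) ... a minimal python sketch, using the pyca "cryptography" package, of the disclaimer defense mentioned above ... the key owner never signs the challenge bytes as received, but always prefixes a fixed disclaimer before signing, so the resulting digital signature can't later be passed off as a signature on a transaction (the names and disclaimer text are invented for illustration):

  # hypothetical sketch of the dual-use defense; disclaimer text is invented
  from cryptography.hazmat.primitives.asymmetric import ec
  from cryptography.hazmat.primitives import hashes
  from cryptography.exceptions import InvalidSignature

  DISCLAIMER = b"authentication only -- no intent, consent or authorization implied: "

  private_key = ec.generate_private_key(ec.SECP256R1())   # stand-in for a key in a hardware token
  public_key = private_key.public_key()

  def sign_challenge(challenge: bytes) -> bytes:
      # never sign the server's bytes as-is ... prepend the disclaimer first
      return private_key.sign(DISCLAIMER + challenge, ec.ECDSA(hashes.SHA256()))

  def verify_challenge(challenge: bytes, signature: bytes) -> bool:
      try:
          public_key.verify(signature, DISCLAIMER + challenge, ec.ECDSA(hashes.SHA256()))
          return True
      except InvalidSignature:
          return False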

misc. past posts related to dual-use compromise:
https://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
https://www.garlic.com/~lynn/aadsm17.htm#50 authentication and authorization (was: Question on the state of the security industry)
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#57 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#1 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#2 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#3 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#4 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#6 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#12 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#13 dual-use digital signature vulnerability
https://www.garlic.com/~lynn/aadsm18.htm#17 should you trust CAs? (Re: dual-use digital signature vulnerability)
https://www.garlic.com/~lynn/aadsm18.htm#32 EMV cards as identity cards

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Amusing acronym

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: 7 Jan 2005 11:36:45 -0800
Subject: Re: Amusing acronym
Dave Hansen wrote:
As a freshman, my college roommate and I were kicking around some ideas for a base-3 computer (who wasn't? kind of a fun exercise). We called it a "trinary" system, and the digits "trits". Unfortuately, "trinary" isn't really a word...

mildly related ... 3-value logic posts
https://www.garlic.com/~lynn/2003g.html#40 How to cope with missing values - NULLS?
https://www.garlic.com/~lynn/2004f.html#2 Quote of the Week
https://www.garlic.com/~lynn/2004l.html#75 NULL

Amusing acronym

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: 8 Jan 2005 09:03:32 -0800
Subject: Re: Amusing acronym
Joe Morris wrote:
...and the original name survived as the infix of the product number for PL/I under OS/360: 360S-NL-511. I never had a reason to poke around in the source for the compiler (yes, IBM distributed the full source to its systems back then -- and didn't even bother to copyright it!) to see if there were any comments referencing the original name.

i have vague memories of ibm coming by the university and demonstrating a new (not yet available) programming language that was going to be called something (pli). they loaded the libraries on our system and ran some number of the demos. at the end of the week they scratched all the files.

they came back later to investigate the datacenter backup process and the probability that the drive with the temporarily loaded libraries had been backed up during the period. there was some issue about who had rights to any possible backup tapes (if there happened to be backup tapes).

Amusing acronym

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main, alt.folklore.computers
Date: 8 Jan 2005 14:05:12 -0800
Subject: Re: Amusing acronym
Lee Witten wrote:
Anyhow, I was an IBM employee back in 1990 or so, and I was bemused at the celebration in the KGN cafeteria. Linen tablecloths, prime rib being served for free, a live ragtime band in the corner. I wondered what the big fuss was all about. I was told that it had to do with two instructions being added to the mainframe instruction set, something to do with block move of data to/from expanded store using a single instruction. Thus were the joys of an IBM employee, before the first ever IBM layoff!

Also, in an earlier incarnation, I met both Lynn and Anne Wheeler!


expanded store move would be early 80s for trout/3090.

what was explained to me was that given the packaging ... and the amount of electronic memory ... the physical distance from the processor exceeded some latency limitation. so the memory was repackaged as near memory on the expected processor memory bus .... and far memory (called expanded store) on a special, wide bus. they analyzed that they could do a synchronous bus move from far memory to near memory in many fewer cycles than any asynchronous i/o paradigm would take (say using any sort of electronic device I/O paradigm). you could either look at it as sort of a software managed cache mechanism .... or an electronic paging device using synchronous transfer operation.

later the expanded store bus was used to bolt HiPPI support onto the side of the 3090. 800mbits/sec was too high an instantaneous transfer rate for the regular i/o interface. HiPPI commands were done by moving stuff to a custom address on the expanded store bus (sort of a poke paradigm).

note that regular i/o could only be done to/from regular memory. if there was something in expanded store that needed to be used in regular i/o ... it first had to be moved to regular memory.

... for some topic drift ... one of the big annual corp. conferences was once held at some place in the peachtree plaza in atlanta (shortly after it was built). there was a big dinner in the ballroom .... a very large number of big round tables (seating for 10-12/table, white table cloths, etc). The only drink on the tables was water glasses. At least one table slipped a waiter some money and had their water glasses filled with vodka.

... ok, and for some even more drift. At one of the Asilomar SIGOPS meetings, one of the dinners had a special treat ... provided by the people doing Ridge(?) wineries ... their first year of wine production. Also round tables with 8-10 people/table (also big white table cloths). The waiters were instructed(?) to bring a bottle to each table and then bring another bottle when that bottle was empty ... and keep it up until the table had three empty bottles. One table figured out the algorithm and kept putting the empty bottles (except the 1st) under the table. Apparently when dinner was over and people had left ... there were a dozen empty bottles under that table.

IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 8 Jan 2005 17:25:55 -0800
Subject: Re: IBM, UNIVAC/SPERRY, BURROUGHS, and friends. Compare?
John Savard wrote:
JCL is absolutely appalling, from what I've heard.

Basically, the problem is that to run a program takes multiple command lines; what, in UNIX, would be

program < file1 > file2

takes one line for the "< file1", another line for the "> file2", and then another line for "program" - in which you state how much memory you wish to reserve for the program. This can be mitigated through something called "compiled procedures"; basically, a batch file can be set up for each program one wishes to make it not insanely inconvenient to use.

The environment I am accustomed to in the mainframe world is IBM hardware with the *Michigan Terminal System* running on it.


some of the JCL issue has to do w/interactive vis-a-vis batch ... where the batch assumption for the executable package was that there was no human/knowledgable resource available during the (extremely expensive) actual execution. lots of upfront resource specification in some detail (rather than simply guessing and, what the heck, if it's wrong ... no problem, just run it again). Furthermore, individual executable packages could be expected to use nearly all available resources on a regular basis (real storage, cpu, disk space, etc).

some production shell scripts get quite complex trying to address problems that might show up at runtime ... with little expectation that any humans actually present are likely to provide much help.

shell scripts can get quite verbose and wordy trying to address the automated handling of various possible anticipated problems.

another part of the JCL issue could be considered trying to pack a whole lot of specification into a relatively compact form (preferably less than 72 chars)

Changing the design point to interactive (responsible human present, execution is inexpensive and typically a small percentage of available resources) .... then there is a lot more latitude in letting the human handle issues in real time (if they come up at all) rather than trying to anticipate all possible issues.

Both CP67/CMS and MTS were built in the 60s for, and ran on mainframe 360/67. Both had human interactive design point. Both heavily utilized major software packages from the os/360 batch infrastructure and both provided os360-emulation software to enable execution of these packages. Both provided wrapper software that defaulted lots of expected os360 batch oriented-options ... to avoid forcing the end-user to repeatedly specify such default information.

one possible batch design point was that weekly payroll had to complete successfully week-after-week.

in the early 90s, I had a weekly application running in a unix environment (which was similar to many business &/or payroll applications ... but w/o nearly the business critical requirements). it did a sort on a people database, did some processing on each person, and then output some information on a per-person basis.

one week, the sort working file filled the available disk space. but the disk-full condition didn't percolate up to the sort application. when the sort completed, it spit out about 15 percent of the total people ... again with no error indication. Processing continued and eventually the final per-person report was completed ... again with absolutely no error indication.

if you have tens (or possibly hundreds) of thousands of people expecting their check every week ... and such problems frequently happened and went undetected (at least until the irate calls started coming in) .... there would start to be some higher level expectations.
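
as a minimal sketch (python, with hypothetical file names) of the kind of paranoid checking that the batch design point pushes you toward ... check the sort's exit status and verify that the record counts match before going on, rather than silently processing whatever came out:

  # hypothetical sketch -- file names and pipeline invented for illustration
  import subprocess, sys

  def count_lines(path):
      with open(path, "rb") as f:
          return sum(1 for _ in f)

  result = subprocess.run(["sort", "people.dat", "-o", "people.sorted"])
  if result.returncode != 0:
      sys.exit("sort failed rc=%d ... stop the run rather than produce a partial report" % result.returncode)

  # a disk-full condition that silently truncates the output still gets caught here
  if count_lines("people.sorted") != count_lines("people.dat"):
      sys.exit("sorted record count doesn't match input ... aborting the weekly run")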

The Soul of Barb's New Machine (was Re: creat)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 8 Jan 2005 17:36:43 -0800
Subject: Re: The Soul of Barb's New Machine (was Re: creat)
Walter Bushell wrote:
Isn't that exactly one of the innovations in the new Intel chips, multiple simultaneous threads on one cpu?

technology innovations ... or change for new Intel chips.

in the mid-70s (30+ years ago), there was a dual i-stream project, taking a single processor 370/195 and adding just the hardware for a second instruction-stream and a second set of registers (w/o adding any additional execution units).

past dual i-stream 370/195 posts:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns

I told you ... everybody is going to Dalian,China

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.politics.org.cia, sci.crypt, alt.2600, alt.security, alt.folklore.computers
Date: 9 Jan 2005 11:16:51 -0800
Subject: Re: I told you ... everybody is going to Dalian,China ....
SecQrilious wrote:
I told you ... everybody is going to Dalian,China .... .... High costs threaten valley's competitive edge The state budget crisis could make Silicon Valley an even more expensive place to do business as taxes rise and services are cut -- jeopardizing any economic recovery -- according to members of a business and government regional group. By David A. Sylvester / Mercury News

some ten-plus years ago we were making a number of visits to the far east, marketing ha/cmp ... and I remember reading an article in a hong kong paper about india being much more competitive at high tech outsourcing (compared to what was aggressively being pushed in the province across from hong kong). supposedly the skill base and price/wage were comparable .... but india had a much more reliable and dependable services infrastructure (phones, communication, water, electricity, etc) ... also in the indian comparison, it didn't require months & months of delay and numerous payments to get even basic new services installed.

all the outsourcing stuff was far advanced over ten years ago. towards the end of the 90s the pace of outsourcing picked up because of huge demands created by the internet bubble coupled with the significant resources needed to address y2k remediation efforts .... a lot of people may not have paid much attention at the time ... possibly because of the total amount of churn going on.

the completion of the y2k remediation efforts happened about the same time as the bursting of the internet bubble ... however, all the business relationships and connections that had been built up during the 90s weren't going to simply then evaporate.

about the same time as the mentioned hong kong article (ten-plus years ago), cal. papers were carrying articles that at least half of the high-tech advanced degree graduates from cal. univ. were foreigners. There was some speculation that the long term, huge influx of foreigners into the high-tech industries was what helped keep them going. There was some speculation tho about what conditions might result in the heavy foreign makeup of the US hightech workforce returning to their home countries (aka not just limited to straightforward outsourcing, but actually shifting where fundamental hightech work went on). again, this is all from over ten years ago.

The Soul of Barb's New Machine (was Re: creat)

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 10 Jan 2005 13:06:01 -0800
Subject: Re: The Soul of Barb's New Machine (was Re: creat)
Bernd Felsche wrote:
And it still requires arbitration of some sort to resolve contention in case of coincident requests to attach a particular resource to two or more threads. The easiest way to do that is via a hardware "test-and-set" instruction operating on memory; with cache-coherence ensuring that all processors have an identical view of the arbitration space. The cache-coherence hardware must directly support test-and-set (or similar) as a special case because it has to happen in a single machine cycle. --

oops, not exactly.

360/67 had test&set instruction and no caches.

charlie, while working on fine-grain locking in the cp/67 smp kernel at the science center, invented compare&swap. The selection of the compare&swap mnemonic was a slight issue ... because the objective was coming up with something that matched charlie's initials, CAS. neither test&set nor compare&swap are likely to ever be a single machine cycle operation since they involve a fetch followed by (at least) a (conditional) store operation. what they do have to do is serialize all processors so that the instruction is atomic (given the combined fetch and store characteristic, it is likely to be multiple machine cycles ... during which all processors may be serialized).

compare&swap was added to 370 ... with a little work. The 370 architecture owners (padegs & smith) came back that it was unlikely to be possible to justify an SMP-only instruction. in order to get it justified for 370, a non-SMP justification was (also) needed. This was where the use description for multithreaded (enabled for interrupts) application code originated. Multithreaded application code that was enabled for interrupts could safely update certain kinds of structures (whether running in a uniprocessor or multiprocessor environment) because the instruction was atomic (the application could otherwise be interrupted in the middle of an update and execution possibly resumed in a different thread). As part of the inclusion of the compare&swap instruction in 370, it was expanded to be both single-word and double-word compare&swap instructions.

originally, the description of the possible uses appeared as "programming notes" in the 370 principles of operation ... packaged as part of the instruction description. In some later version of principles of operation, the description was moved to the appendix.
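
purely as an illustration of the semantics (not of any hardware implementation) ... a minimal python sketch where a lock stands in for the hardware serialization; compare&swap atomically compares an expected old value against storage and only stores the new value if they still match, which is what makes the lock-free multithreaded counter update safe whether or not the code gets interrupted in the middle:

  # software sketch of compare&swap semantics; the lock stands in for processor serialization
  import threading

  class Word:
      def __init__(self, value=0):
          self.value = value
          self._lock = threading.Lock()

      def compare_and_swap(self, old, new):
          # atomically: if storage still equals 'old', store 'new'; return (success, current value)
          with self._lock:
              current = self.value
              if current == old:
                  self.value = new
                  return True, new
              return False, current

  counter = Word(0)

  def add_one(word):
      old = word.value                      # ordinary fetch of the current value
      while True:
          ok, current = word.compare_and_swap(old, old + 1)
          if ok:
              return
          old = current                     # retry with the value compare&swap returned

  threads = [threading.Thread(target=add_one, args=(counter,)) for _ in range(8)]
  for t in threads: t.start()
  for t in threads: t.join()
  assert counter.value == 8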

misc. science center, 545 tech sq.
https://www.garlic.com/~lynn/subtopic.html#545tech
misc. smp, compare&swap, etc
https://www.garlic.com/~lynn/subtopic.html#smp

compare&swap instruction from esa/390 principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.22?SHELF=EZ2HW125&DT=19970613131822

compare double and swap instruction from esa/390 principles of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.23?SHELF=EZ2HW125&DT=19970613131822

appendix a.6 from esa/390 principles of operation; multiprogramming (aka multithreading) and multiprocessor examples:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6?SHELF=EZ2HW125&DT=19970613131822

The Soul of Barb's New Machine (was Re: creat)

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 10 Jan 2005 17:04:01 -0800
Subject: Re: The Soul of Barb's New Machine (was Re: creat)
glen herrmannsfeldt wrote:
And, since CAS runs on systems with cache it has to properly account for them. When CAS finishes with CC=1, it is defined as having loaded the memory operand into the appropriate register.

There is a story of someone using CAS in a loop, and when CC=1 reloading the appropriate register before reexecuting CAS. It seems that it can load the wrong value on a machine with a cache in that case. Maybe it loads from cache where CAS loads from memory? I am not sure on that one, but it seems that it fails.


with the original cas, 370 cache machines had a very strong memory model and store-thru caches. stores invalidated all other caches in the complex. the fetch for the compare basically did the invalidate and held serialization until the store completed (or the compare failed).

other memory model implementations and store-in/store-thru cache issues were supposed to still preserve the CAS semantics.

a two-processor 370 smp cache machine would slow the uniprocessor machine cycle down by 10 percent to allow for the cache processing time associated with sending out the invalidates .... the base hardware of a two-processor smp was 2 x 0.9 = 1.8 times the hardware performance of a uniprocessor (as a starting point; machine cycles for actually processing invalidates and any cache thrashing would further degrade hardware thruput).

the 3081 was announced as a "dyadic" ... a two-processor smp ... but not in the 360 & 370 sense where the machine could be partitioned and operated as multiple independent uniprocessors. the 3081 was never intended to have a uniprocessor version.

there was some issue with ACP/TPF (the operating system for airline res systems and some number of high-performance financial transactions) which had cluster support (for scalability and availability) but didn't have SMP support. Upgrading TPF to the newer 3081 processor resulted in the 2nd processor being idle (other than a large number of installations that ran vm/370 on 3081s and two copies of TPF ... each one with affinity to one of the 3081 processors). A lot of the TPF customers were looking for flat-out raw performance ... and since TPF didn't have SMP support ... eventually a uniprocessor 3083 was announced (which was never planned for in the original 308x products). The 3083 processor ran an almost 15 percent faster machine cycle (compared to the 3081 processor machine cycle) because it didn't need the 10% cross-cache invalidate slow-down.

Later mainframes (especially those with higher numbers of processors) ... started to run the cache machine cycle at a much higher rate than the processor cycles (to help mask the cross-cache invalidate overhead).

Other processor architectures might have weaker memory consistency models and other cache consistency protocols .... but the hardware implementations of CAS should still support the CAS semantics.

Network databases

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 11 Jan 2005 08:14:14 -0800
Subject: Re: Network databases
Mark D Powell wrote:
If my memory is correct a networked database could only follow predefined paths to the data. Nodes (chunk of data) were chained to other nodes using pointers and these prebuilt pointer chains were the only paths through the data.

A relational database on the other hand supports accessing data in a more flexible manner and does not use predefined pointers to the data elements. In theory any column can be used to join to any other column in another table. Obviously you would only do this when the data in question was actually the same data values and for performance reasons you would probably add indexes on these columns. But the point being a user can declare any relation that makes business sense at run time rather than at object definition time.

I hope this is clean enough. I believe that CA still supports a legacy network database product. The methods being discussed in the OO world may include "fixes" for some of the problems on network db designs of the past. For certain types of applications a network design would be very fast as only one IO would be required for each node.


as i've mentioned before ... during the 70s the hierarchical/network side (bldg. 90) was pitted against the relational side (bldg. 28) ... original relational
https://www.garlic.com/~lynn/submain.html#systemr

it wasn't so much about the information organization ... but about the physical implementation. the hierarchical/network implementations from the 50s/60s used direct physical pointers ... while relational used indexes. The direct physical pointers increased the human administrative workload ... but were faster for the standard case ... being able to go directly to the desired data. relational reduced the human administrative effort but increased the overhead to get to the desired data and typically doubled the physical storage requirements (both the space and the processing overhead introduced by the indexes).

the physical pointer implementation vis-a-vis indexes can be independent of the information organization (aka it is possible to do physical implementation using indexes for both hierarchical and network information organizations).

so for the NLM at the NIH ... they did the card catalog using BDAM. The "card" was identified by its BDAM pointer (an implementation from the 60s). They built indexes of cards by listing the BDAM pointers. So for a specific author .... there was a list of all BDAM pointers (that corresponded to the papers/books/etc they had authored). a specific keyword had a list of all the corresponding BDAM pointers. They had 80-some categories that entries were indexed by (author name, keyword, address, subject, etc) ... aka categories of things for which lists of BDAM pointers were built.

The claim was that sometime in the early '80s they hit the query problem .... a query response involving 5-8 qualifiers could be quite bi-modal ... a trivial change making the difference between 100,000 responses and zero responses.

The query strategy became returning the number of responses .... rather than the actual responses ... and experimentally trying queries until the response count was a reasonable number (greater than zero ... but less than a thousand).

Basically the various BDAM pointer lists were treated as sets of pointers ... and queries with ANDs, ORs, NOT, ... became and'ing, or'ing, etc the BDAM pointer sets, and then counting the members of the resulting set.

Having arrived at a reasonable set size ... one could then retrieve the list of set members (and eventually the corresponding "cards").

In this case, the BDAM pointers are both the physical pointer as well as analogous to a globally unique primary key in relational. So the information organization might be considered relational? Do the various lists of BDAM pointers correspond to normalization? It would appear that the implementation would also work where the globally unique identifier was something other than a direct physical pointer.
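
a minimal python sketch of the set-based query strategy described above (the categories, values and pointer numbers are invented for illustration) ... each index category maps a value to a set of "BDAM pointers", a query ANDs/ORs the sets, and the member count is returned first so the query can be refined before any "cards" are actually fetched:

  # hypothetical sketch of the pointer-set query strategy
  author_index  = {"smith": {101, 205, 309, 477},
                   "jones": {205, 512}}
  keyword_index = {"catalog": {101, 205, 624},
                   "medline": {309, 477, 512, 624}}

  def query(*qualifier_sets, exclude=frozenset()):
      # AND the qualifier sets together, then drop any NOT'ed pointers
      result = set.intersection(*qualifier_sets) - exclude
      return len(result), result

  count, hits = query(author_index["smith"], keyword_index["medline"])
  print(count)             # inspect the count first ...
  if 0 < count < 1000:     # ... and only fetch "cards" when the set size is reasonable
      for pointer in sorted(hits):
          pass             # here one would read the record at this pointer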

Network databases

Refed: **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 12 Jan 2005 08:42:44 -0800
Subject: Re: Network databases
Alfredo Novoa wrote:
And it is possible to do physical implementation using pointers with relational databases.

In a short: with relational databases we can get the same or better performance but working a lot less.


the argument from the 70s was that the original relational database used indexes to effectively automate some amount of the system administrative overhead (associated with physical pointers) at a cost in disk space, real memory space, and processing time (trading off computer & processing resources against people time) ... this was somewhat independent of the information organization issue ... it was about the physical implementation.

the original relational database work was somewhat targeted at bank accounts ... a single table, the bank account number as primary index, and the rest of the information associated with the bank account number. there was a very good match between the information organization and use and the relational row/column organization.

there are other types of information where relatively trivial normalization can result in several hundred tables and there is little difference in the query complexity facing a user with respect to complex joins vis-a-vis network navigation (very low information uniformity, very large information uniqueness).

Network databases

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 13 Jan 2005 08:23:43 -0800
Subject: Re: Network databases
Alfredo Novoa wrote:
I can't disagree more!

It is exactly the contrary. There is little difference if you have two tables or less, but the difference increases exponentially when the number of tables grows.

It is very easy to manage many tables at the same time using updateable views. And you might create views that use views.


hum, yes, well; there was this bldg. in san jose referred to as sjr or bldg. 28. I had an office on the 1st floor, backus had an office down the hall and codd had an office above on the second floor. there was this project going on in sjr to implement something called system/r and sequel, random system/r past posts
https://www.garlic.com/~lynn/submain.html#systemr

i've joked that SQL was part of competition between san jose research and yorktown research (where query-by-example was going on) for best TLA .. aka QBE vis-a-vis SQL. some random qbe past posts
https://www.garlic.com/~lynn/2002e.html#44 SQL wildcard origins?
https://www.garlic.com/~lynn/2002o.html#70 Pismronunciation
https://www.garlic.com/~lynn/2003n.html#11 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2003n.html#18 Dreaming About Redesigning SQL
https://www.garlic.com/~lynn/2004l.html#44 Shipwrecks

about 10 miles south there was this other bldg ... santa teresa lab or bldg. 90. It had opened the same week as the Smithsonian air & space museum. it had access methods, databases, and language products. I would work some number of days in bldg. 90 ... riding my bike. south silicon valley/coyote valley has this interesting weather pattern where I would have a strong head wind riding south in the morning and a strong head wind riding north in the late afternoon.

so the physical databases of the 50s and 60s were developed when there was limited real disk space and limited real memory. direct physical pointers conserved the limited amount of scarce resources. they even had structures like isam ... where you could write I/O programs that could pick up physical pointers and follow them outboard in the i/o subsystem w/o bothering the processor. some amount of past postings about the changing constrained physical resources in the 70s:
https://www.garlic.com/~lynn/submain.html#dasd

I had started writing some stuff about how, over a 10-15 year period, the relative disk system performance had declined by an order of magnitude (memory, disk capacity, and processor had increased by factors of 50, while disk access thruput had only increased by 3-5 times). this annoyed the disk division and the disk division performance group was assigned to refute my statements. after a couple months they came back and said that i had slightly understated the issue.

in any case, during the 70s, real storage, disk space and processor were increasing dramatically, the cost of hardware was declining and the cost of people was increasing. also with the increase in disk space sizes, the amount of data that had to be manually managed was increasing significantly.

the arguments about system/r doubling the disk space and having a layered index in between ... were becoming less of an issue because hardware costs were dropping and the relative amount of disk space was increasing. it was also now possible to start caching lots of the index structure in the increasing amounts of real storage available (instead of incrementally threading thru the index structure because there was no excess real storage to keep any cached information around).

All of this was being traded off against savings in people time (becoming scarcer and more expensive), with people having to deal with the increasing amount of data to be managed (driven by the relative increase in disk space sizes).

So with some amount of resistance continuing from bldg. 90 and the database product organization ... the system/r tech transfer went from bldg. 28 to endicott to become sql/ds. later there was a sort of tech transfer from endicott to bldg. 90 to become db2.

so somewhat in parallel with some of this ... there was a small contingent in bldg. 90 looking at doing a "modern" network database implementation ... doing a lot of abstracting so that the database users were separated from a lot of the low-level physical database gorp ... in much the same way that system/r had abstracted a lot of those details in relational. Some amount of the higher level abstraction work was also influenced by Sowa. So they came up with a query language paradigm that removed the physical pointer and lots of the network navigation characteristics from the interface (analogous to what SQL accomplished). eventually they wanted to do a side-by-side comparison with db2 on a level playing field.

somewhat west of bldg. 28 about 10 miles was bldg. 29 or the los gatos lab. I'm not sure of all its history; it was built in the 60s and housed ASDD for a time (possibly even the advanced system development division hdqtrs). It seemed that ASDD sort of evaporated with the death of FS ... random FS past postings
https://www.garlic.com/~lynn/submain.html#futuresys

They had done AM1/AM0 there ... which had eventually morphed into VSAM and became the responsibility of the product group in bldg. 90.

At the time of the side-by-side comparison, most of bldg. 29 was occupied by a VLSI chip design group. For the comparison they chose an extremely network-oriented structure, a large CPU chip ... all the circuits that go into the chip (and pretty non-uniform ... not like what you might find in something like a memory chip). On the same machine with the same system and operations ... load the chip specification into the database. The comparison would be elapsed time from the start of the initial query until the chip was drawn on the screen ... no tuning and no optimization.

The SQL query statements were on the order of 3-5 times larger and more complex ... and it quickly became clear that on a level playing field, in a side-by-side comparison of untuned and unoptimized implementations, DB2 was ten times slower. So to make it a little more fair to DB2, the whole thing was given to some DB2 performance gurus for a couple weeks ... they were allowed to use every DB2 trick in the book, trace the query to death and re-org it every way possible. They were eventually able to get a totally optimized DB2 to the point where it was only three times slower than the untuned and unoptimized network comparison.

Now, it was easy to show that DB2 was possibly ten times faster than this "modern" network implementation for a single large bank-account-oriented table .... however for anything that was large, complex, and non-uniform ... DB2 couldn't touch it .... either in the (lack of) complexity of the query statements or in thruput/performance. The abstraction of how the paradigm was presented also made it much simpler to change and update the organization (in addition to simply adding/deleting data) for complex organizations.

Along the way, I got to write code for both implementations ... and help with things like the tech transfer of system/r from bldg. 28 to endicott for sql/ds, etc.

for some topic drift ... "sequel"
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-System.html#Index111
... from above ...
Don Chamberlin: So what this language group wanted to do when we first got organized: we had started from this background of SQUARE, but we weren't very satisfied with it for several reasons. First of all, you couldn't type it on a keyboard because it had a lot of funny subscripts in it. So we began saying we'll adapt the SQUARE ideas to a more English keyword approach which is easier to type, because it was based on English structures. We called it Structured English Query Language and used the acronym SEQUEL for it. And we got to working on building a SEQUEL prototype on top of Raymond Lorie's access method called XRM.

... snip ...

Lorie and I (and a couple others) transferred from the science center to the west coast about the same time
https://www.garlic.com/~lynn/subtopic.html#545tech

now there is this other stuff out there that goes somewhat GML->SGML->HTML->XML, etc (somewhat analogous to the transition from SEQUEL->SQL) where GML was invented at the science center and the letters "G", "M", and "L" stand for the initials of the people that invented it ... and the same Lorie (in the above) is the "L" in all those ML things floating around out there.
https://www.garlic.com/~lynn/submain.html#sgml

Network databases

Refed: **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 13 Jan 2005 10:23:17 -0800
Subject: Re: Network databases
frosty wrote:
So you are saying that "HTML" is _not_ an acronym for HyperText Markup Language?

so gml was invented at the science center ... misc. past references
https://www.garlic.com/~lynn/submain.html#sgml

by "G", "M", and "L" ... and the objective was come up with name that corresponded to their initials ... aka GML ... and eventually came up with Generalized Markup Language (where the "l" in "language" corresponds to the first initial of the last name of one of the inventors).

this was eventually standardized in iso as SGML ... standard generalized markup language ... where the "l" in language still corresponds with the initial of the last name of one of the inventors.

HTML then was an outgrowth of SGML ... and XML is an outgrowth of a combination of HTML and SGML ... where the "l" in language still corresponds to the first letter of the last name of one of the inventors.

this is somewhat akin to compare&swap instruction, also invented by one of the people at the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

however, there was only one inventor in this case, whose initials are CAS. The task was coming up with a mnemonic that corresponded to the initials of his name .... and compare&swap is what was eventually chosen. random posts about compare&swap (and other multiprocessor related stuff)
https://www.garlic.com/~lynn/subtopic.html#smp

some slightly more specific sgml, html, science center, etc posts:
https://www.garlic.com/~lynn/2002b.html#46 ... the need for a Museum of Computer Software
https://www.garlic.com/~lynn/2003o.html#32 who invented the "popup" ?
https://www.garlic.com/~lynn/2004l.html#72 Specifying all biz rules in relational data

Network databases

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 13 Jan 2005 12:24:54 -0800
Subject: Re: Network databases
an html history page
http://infomesh.net/html/history/early/

mentioned in thread that ran last year in this newsgroup.

a couple of references from the "G" in gml/sgml
https://web.archive.org/web/20231001185033/http://www.sgmlsource.com/history/roots.htm
https://web.archive.org/web/20220422232222/http://www.sgmlsource.com/index.htm

minor comment on xml history from w3c
http://www.w3.org/XML/

a couple other xml history pages
http://www.users.cloud9.net/~bradmcc/xmlstuff.html
http://www.icaen.uiowa.edu/~bli/xml_proj/final-1.html

misc. postings on the subject from threads in this newsgroup last year
https://www.garlic.com/~lynn/2004l.html#51 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#52 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#53 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#58 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#72 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#73 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004l.html#74 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#3 Specifying all biz rules in relational data

Smart cards and use the private key

From: lynn@garlic.com
Newsgroups: microsoft.public.platformsdk.security,sci.crypt
Date: 13 Jan 2005 11:34:23 -0800
Subject: Re: Smart cards and use the private key
jordics wrote:
Basically my problem is that I know the theory about the certificates, cryptography and also the part of smart cards but I don't know very well with which tools I have to work. I've read about CryptoAPI, PKCS11, ... but I don't know how to "contact" with the card for example to do the mentioned encryptation of my message.

basically certificates are almost totally unrelated to asymmetric cryptography, public/private keys, digital signature, etc.

they were developed effectively as a mechanism for (public) key distribution between parties that have never before interacted and have absolutely no recourse to any other resources for determining who the other party is.

the original certificate model was offline email from the early 80s, where people dialed up their (electronic email) post-office, exchanged email files, and hung up. they processed the incoming email in a totally offline environment. The scenario is that you have received email from some totally anonymous source that you have never interacted with ... and you need some way of validating the sender w/o resorting to any other resources.

so the solution was akin to the letters of credit from the sailing ship days. you would go to some institution and get some credential attesting to some characteristic about yourself ... that you could carry off into the wilds and deal with total strangers that you've never met before, they've never met you, and there is absolutely no kind of infrastructure available where they can check on you.

now in scenarios where you don't have total strangers dealing with each other, they may have had prior dealings ... and/or there may be total strangers, but there are resources where it is possible to check on the other entity (trivial example is real-time credit scoring from the various credit check services) ... there have been quite a few certificate-less solutions (aka you don't need a brand new stranger-oriented, offline "letter of credit" style introduction for each of the possibly thousands of interactions between two parties ... and/or when they have recourse to other resources).

trivial example is the PGP genre of public/private email operations. two people that have some interaction exchange and record each other's keys. processes are executed that validate the integrity of the respective keys w/o having to resort to certificates. the keys are stored in repositories maintained by the individuals w/o recourse to certificates. misc. past certificate-less postings
https://www.garlic.com/~lynn/subpubkey.html#certless

there is some analogy to these public key repositories (like the kind PGP uses) even in the certificate-based environments. These are repositories of trusted public keys. In the certificate-based environments ... these repositories of trusted public keys tend to contain the public keys of the certification authorities themselves. These trusted public keys are used to validate the certificates (in much the same way that public keys from a trusted email public key repository are used to directly validate email). Some number of certificate-oriented infrastructures will ship a trusted public key repository as part of the product ... containing possibly scores of supposedly trusted public keys ... w/o the end-user possibly even being really cognizant of their existence.

Network databases

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 13 Jan 2005 16:35:56 -0800
Subject: Re: Network databases
lynn wrote:
a couple other xml history pages
http://www.users.cloud9.net/~bradmcc/xmlstuff.html
http://www.icaen.uiowa.edu/~bli/xml_proj/final-1.html


and for even more drift ... in section 6 (eulogy) of the above xmlstuff reference there is some amount of comparison with ISO OSI.

in the late 80s and early 90s, some number of govs. had mandated the elimination of internetworking and a complete transition to OSI (little things like GOSIP by the us federal gov).

about the same time we were involved w/HSP (high speed protocol) in the ISO-chartered ansi x3s3.3. there were a number of problems ... primarily ISO had an edict that no standardization would occur for anything that violated the OSI model (and, in effect, the OSI model couldn't be changed). HSP had several problems because it would go directly from level4/transport to the LAN/MAC interface ... along the way supporting IP; some ISO complications (because of the edict that no standardization could occur for anything that violated the OSI model) were:

  1. HSP bypassed the level3/level4 interface ... violating the OSI model
  2. HSP supported internetworking (IP) ... IP doesn't exist at all in the OSI model, IP is a violation of the OSI model, so supporting IP is also a violation of the OSI model
  3. the LAN/MAC interface is somewhere in the middle of layer3/networking ... LANs/MACs are a violation of the OSI model ... so anything interfacing to LANs/MACs is also a violation of the OSI model.

past postings
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

Network databases

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 14 Jan 2005 06:35:05 -0800
Subject: Re: Network databases
some more from
http://www.mcjones.org/System_R/SQL_Reunion_95/sqlr95-System.html
Don Chamberlin: I think it's going to need both of us to do this. I'll give it a start.

This shouldn't be a monologue; please stand up and help me out here. As Irv said, there was a long period after Frank arrived in California when we had a lot of meetings and a lot of discussions and task forces and tried to organize an approach to take to this business. Interestingly enough, Ted Codd didn't participate in that as much as you might expect. He got off into natural language processing and wrote a very large APL program called Rendezvous[24], [25]. He really didn't get involved in the nuts and bolts of System R very much. I think he may have wanted to maintain a certain distance from it in case we didn't get it right. Which I think he would probably say we didn't.

Mike Blasgen: Oh, he has said that, many times.

Don Chamberlin: What came out of this was we got organized into two groups, a higher-level group which ultimately was called the RDS[26] and which was interested mainly in language issues, and a lower-level group called the Research Storage System, which was interested more in physical data management issues. I can talk mainly about what was happening in the top half of the project in those days and I'm hoping that Irv and maybe some of the rest of you - Jim - will talk about what was happening in the bottom half.

What really happened in the early days was Irv's group began developing a new data management interface, with support for indexes, locking, logging, concurrency and transactions, and all those kinds of things. Meanwhile the language folks wanted to build a prototype of their language and they needed a base to build it on, and the RSS wasn't ready. The only thing we could get our hands on was something that Raymond Lorie had built at the Cambridge Scientific Center called XRM. So we built a prototype of our language on top of XRM in the early days; we called it Phase Zero[27]. Brad has a wonderful tape which many of you saw last night that represents a complete working prototype of SEQUEL in 1976 I believe, complete with integrity assertions, which have just now made it into the product twenty years later. [laughter] And we demonstrated that, or at least showed the tape, at the SIGMOD conference in, was it 1976?


... snip ...

home page for above ..
http://www.mcjones.org/System_R/

going way back ... some of the ctss people went to the 4th floor, 545 tech sq, to the science center ... and others went to the 5th floor, 545 tech sq, to do multics (aka gml/sgml was invented at the science center, along with lots of other stuff, on the 4th floor ... and on the 5th floor was multics and a bunch of other stuff).

the earliest released relational DBMS was on multics

multics reference page
http://www.multicians.org/

reference to multics relational data store, released in 1976
http://www.mcjones.org/System_R/mrds.html

from above:
Multics Relational Data Store (MRDS)

The Multics Relational Data Store (MRDS) was first released in June 1976. This is believed to be the first relational database management system offered by a major computer vendor, namely Honeywell Information Systems, Incorporated. The designers were familiar with, and influenced by, the work of Codd, the Ingres project at U.C. Berkeley, and the System R project at IBM San Jose.

MRDS provided a command-level interface for defining databases and views (called data submodels), and a call-level interface for queries and data manipulation. A separate Logical Inquiry and Update System (LINUS) provided an online query and update interface. The MRDS query language was similar to SEQUEL (as SQL was first called), with -range, -select, and -where clauses corresponding approximately to the FROM, SELECT, and WHERE clauses of SQL. Explicit set operations (intersection, union, and difference) were provided; there was no direct sorting support. A query was passed as a character string to the MRDS at runtime; there was no precompilation mechanism. Concurrent access to a database by multiple processes was supported; each process was required to explicitly declare the type of access (retrieval or update) and, for update, the scope (set of relations) of the update. The database could be quiesced and backed up in its entirety. A transaction mechanism for atomically committing multiple updates was added in a later release.

As its name implies, MRDS ran on the Multics operating system, and its implementation took advantage of Multics mechanisms for security and virtual memory-based storage. MRDS was written in PL/1.

When MRDS was released in June 1976, it was actually marketed as one of two components of a package called the Multics Data Base Manager (MDBM). The other component was the Multics Integrated Data Store (MIDS), which was a CODASYL database implemented as a layer on top of MRDS.

MRDS was designed by James A. Weeldreyer and Oris D. Friesen <oris@orisfriesen.com>; Roger D. Lackey and Richard G. Luebke contributed to the implementation.

References James A. Weeldreyer and Oris D. Friesen. "Multics Relational Data Store: An Implementation of A Relational Data Base Manager" Proceedings of the Eleventh Hawaii International Conference on Systems Sciences Volume 1, (January 1978), pages 52-66.

Oris D. Friesen and James A. Weeldreyer. "Multics Integrated Data Store: An Implementation of a Network Data Base Manager Utilizing Relational Data Base Methodology". Proceedings of the Eleventh Hawaii International Conference on System Sciences, Volume 1 (January 1978), pages 67-84.

Oris D. Friesen, N.S. Davids, and Rickie E. Brinegar. "MRDS/LINUS: System Evaluation" in J. W. Schmidt and M. L. Brodie, editors. Relational Database Systems: Analysis and Comparison. Berlin, Springer-Verlag (1983), pages 182-220.

Honeywell Information Systems. Series 60 (Level 68). Multics Relational Data Store (MRDS) Reference Manual, Order Number AW53, 1980.

Honeywell Information Systems. Series 60 (Level 68). Logical Inquiry and Update System (Linus) Reference Manual, Order Number AZ49, 1980.

"Honeywell Introduces Multics Data Base Management" Software Digest 8, 35 (September 2, 1976), pages 2-3.

Don Leavitt. "'MDBM' Backs Network, Relational Approaches" ComputerWorld 10?, 35? (September 6, 1976), page 11.

"Honeywell Introduces Data Base Management for Multics 68" Electronic News 21, 1096 (September 6, 1976), page 28.


Do I need a certificat?

From: lynn@garlic.com
Newsgroups: microsoft.public.platformsdk.security,sci.crypt
Date: 14 Jan 2005 07:18:38 -0800
Subject: Re: Do I need a certificat?
jordics wrote:
I want to develop a system to allow people to sign "messages" (not necessarily e-mails). I've been thinking of providing them with a Smart Card in which they will have their key-pair to encrypt and decrypt their messages, and which will also be their system of authentication for using the application (instead of using login and password).

I know all the people who have to work with this system, so any person that is not a known user won't be provided with the mentioned card. My question is whether it is always necessary to do the authentication (through a CA and the corresponding certificate) to check if the person is "who is supposed to be". In other words, is it enough to store the key-pair in the smart card to do the communication without any certificate, or is this system not secure? (I'm not sure if a smart card (i've never worked with them yet) stores a certificate or can hold only the two keys)


bunch of my past postings on certificate-less public key operation
https://www.garlic.com/~lynn/subpubkey.html#certless

kerberos was originally developed in the 80s at MIT in Project Athena. It was a password-based infrastructure .... and has since found its way into core authentication infrastructures like windows, etc. The original pkinit draft for extending kerberos for public key operation was purely certificate-less operation. random kerberos related posts
https://www.garlic.com/~lynn/subpubkey.html#kerberos

the other major authentication infrastructure used in the internet environment is radius ... which also started as a password-based infrastructure ... but it is also possible to extend it in a straightforward way to (certificate-less) public key operation.
https://www.garlic.com/~lynn/subpubkey.html#radius

at a fundamental level ... if you have a trusted repository for public keys ... which is required for certificate-based operation ... because that is where the trusted public keys for the certification authorities go ... then in principle ... it is possible to register public keys for direct authentication and use, w/o having to go thru the levels of indirection and business process complexity created by the use of certification authorities.

8086 memory space [was: The Soul of Barb's New Machine]

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 14 Jan 2005 08:31:23 -0800
Subject: Re: 8086 memory space [was: The Soul of Barb's New Machine]
Bernd Felsche wrote:
The segment/page on other architectures should also be marked as exec-able for security reasons to distinguish it from other read-only memory spaces.

A bus error should be generated if the PC points to a non-execable page. That's how things *should* be done. Especially in general purpose computers, one of the traditional methods of attack is a buffer/stack overflow, causing the PC to execute arbitrary code inserted into the overflow region.


in a thread in sci.crypt about buffer overflow related exploits ... i found some references to execute-only storage going back possibly to the 50s.

the current genre of hardware for buffer/stack overflow is somewhat the inverse ... it is marking (data) areas as explicitly non-executable
https://www.garlic.com/~lynn/2005.html#1 buffer overflow

...

more recent article
http://www.theregister.co.uk/2004/12/24/amd_dutch_ads/
about AMD chip hardware and support by Windows XP service pack 2

other kinds of descriptions of no-execute hardware for various kinds of buffer overflow issues:
http://gary.burd.info/space/Entry81.html

some RDBMS history (x-over from comp.databases.theory)

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 14 Jan 2005 08:35:38 -0800
Subject: some RDBMS history (x-over from comp.databases.theory)
misc. postings related to dbms history; x-over from comp.databases.theory thread ... with some topic drifts to gml/sgml/html/xml and OSI
https://www.garlic.com/~lynn/2005.html#23 network databases
https://www.garlic.com/~lynn/2005.html#24 network databases
https://www.garlic.com/~lynn/2005.html#25 network databases
https://www.garlic.com/~lynn/2005.html#26 network databases
https://www.garlic.com/~lynn/2005.html#27 network databases
https://www.garlic.com/~lynn/2005.html#29 network databases
https://www.garlic.com/~lynn/2005.html#30 network databases

increasing addressable memory via paged memory?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Date: 14 Jan 2005 09:32:04 -0800
Subject: Re: increasing addressable memory via paged memory?
josh.cu...@gmail.com wrote:
I remember learning in my computer architecture course about a memory model by which you could address more memory than using a simple linear addressing scheme. I forget the exact name of this memory system but from what I remember you used some of the most significant bits in an address to choose between a number of segment tables and the rest of the bits were used to index into that segment table. The prof. said that this would allow you to address more memory than if you just translated the address bits directly into a memory address without any translation.

I never understood at the time how this mechanism allowed you to address more memory and I am still curious. Can someone tell me the name of that memory model and explain how it allows you to address more memory?


maybe not what you were referring to, but the 3033 announced support for 32mbytes of real storage ... even tho it was limited to 24bit/16mbyte addressing.

one could claim that a cluster of six (single processor) 4341s ... was about the same price as a 3033, with an aggregate mip rate almost 50 percent more than a single 3033, and each machine could have 16mbytes of real storage.

a two-processor 3033 SMP fared even worse ... because it was nominally limited to 16mbytes of real storage.

so the gimmick was that the standard 370 page table entry was 16bits .... with 12bits used for specifying up to 4096 4096-byte pages (i.e. 12+12 == 24bits/16mbytes). the low-order 4bits had 2bits defined and 2bits reserved/undefined. the two reserved/undefined bits could be redefined and used to specify up to 16384 4096-byte (real) pages ... instruction addressing was still limited to 24bits ... but the page tables and TLB could map a 24-bit virtual address 4k page into a 26-bit effective real address 4k page.
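purely as illustration, here is a minimal sketch of that arithmetic in plain C (not actual 3033 microcode or control program source; the exact positions of the two reserved bits are an assumption, only the 12+2 bit idea is from the description above):

    #include <stdint.h>
    #include <stdio.h>

    /* illustrative 16-bit page table entry layout (assumed, not the
       documented 3033 format): high-order 12 bits are the 4k page frame
       number (12+12 = 24-bit real), the low-order 4 bits have 2 defined
       and 2 reserved ... the hack redefines the 2 reserved bits as two
       more high-order frame bits (14+12 = 26-bit real) */
    static uint32_t real_addr_26bit(uint16_t pte, uint32_t vaddr24)
    {
        uint32_t frame12 = (pte >> 4) & 0x0fff;       /* original 12 frame bits   */
        uint32_t ext2    =  pte       & 0x0003;       /* 2 formerly reserved bits */
        uint32_t frame14 = (ext2 << 12) | frame12;
        return (frame14 << 12) | (vaddr24 & 0x0fff);  /* 26-bit real address      */
    }

    int main(void)
    {
        /* a 24-bit virtual address mapped to a frame above the 16mbyte line */
        printf("%08x\n", real_addr_26bit(0xabc1, 0x123456)); /* 01abc456 */
        return 0;
    }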

there were games played with some pages involving certain kinds of structures that had to be below the 16mbyte line .... so there was a move-below-the-line function.

fortunately the original 370 architecture had defined an IDAL (indirect data address list) for doing i/o transfers ... which happened to be a full-word field (the base 360 & 370 i/o infrastructure only supported 24-bit real addressing). This allowed page in/out to be done directly above the 16mbyte line ... w/o a whole lot of copying above/below the line.

Do I need a certificat?

Refed: **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: microsoft.public.platformsdk.security,sci.crypt
Date: 14 Jan 2005 13:54:16 -0800
Subject: Re: Do I need a certificat?
one of the trivial examples of certificate/certificate-less operation in a wide-open system is the SSL domain name server certificate. some minor references to domain name server certificates and having to audit the process with respect to payments and electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

other comments about the ssl domain name server certificate infrastructure
https://www.garlic.com/~lynn/subpubkey.html#sslcert

so the issue is that some entity owns a domain name ... registered with the domain name infrastructure; some perceived weaknesses in the domain name infrastructure .... were a driving factor in the motivation for SSL domain name server certificates (is the server I think i'm talking to, really the server I'm talking to).

well, some entity applies for an SSL domain name server certificate with some recognized certification authority (preferably one that has their public key already loaded into the trusted public key repository of a large number of client applications). the applicant provides some amount of identity information which the certification authority attempts to use to certify that the owner of the domain name (as registered with the domain name infrastructure) is the same entity that is applying for the SSL domain name server certificate. This is a costly and error prone process.

So, somewhat backed by the certification authority industry, there is a proposal that when people register with the domain name infrastructure ... they also register a public key (certificate-less ... it just goes on file in the domain name infrastructure along with the other information ... somewhat analogous to when people register for a bank account, etc). Future correspondence between the domain name owner and the domain name infrastructure can be digitally signed ... and the domain name infrastructure can verify the signature with the public key on file for the domain name owner. This addresses some number of the perceived integrity weaknesses in the domain name infrastructure. Furthermore, applications for SSL domain name server certificates can now be digitally signed. This allows the certification authority industry to change an expensive and error prone identification process into a simple authentication operation (by retrieving the registered public key from the domain name infrastructure). However, there is now something of a catch-22:

1) by eliminating some of the integrity issues with the domain name infrastructure ... some of the justification for SSL domain name certificates (to compensate for integrity problems) is eliminated

2) if the certification authority industry can retrieve certificate-less public keys directly from the domain name infrastructure ... for authentication purposes; it is possible that other entities will discover that they also could retrieve certificate-less public keys directly from the domain name infrastructure for authentication purposes (further reducing need for SSL domain name server certificates).

Note that the issue of certificate/certificate-less public key operation is totally orthogonal to the issue of whether it is an open or closed infrastructure. The issue of open/closed has to do with whether general parties have access to the authentication material. It is possible to make certificate-based infrastructures either open or closed ... and it is also possible to make certificate-less infrastructures either open or closed ... i.e. few would claim that either the internet domain name infrastructure or the SSL domain name infrastructure is closed ... however, it is relatively straightforward to add naked public keys to the existing "open" domain name infrastructure and still have it totally open

Network databases

From: lynn@garlic.com
Newsgroups: comp.databases.theory
Date: 14 Jan 2005 14:25:08 -0800
Subject: Re: Network databases
Alfredo Novoa wrote:
But DB2 is not a relational DBMS.

here is one reference: Are SQL Server, DB2 and Oracle really relational?
http://www.handels.gu.se/epc/archive/00002948/

note that the mainframe DB2 was a relatively straight descendant of system/r. another RDBMS was written from scratch in the late 80s ... originally for OS2 and AIX ... but targeted at open platforms and also called DB2. While they are totally different implementations, they have attempted to maintain some degree of compatibility.

random/misc other references:
http://www.techweb.com/wire/26804747
http://expertanswercenter.techtarget.com/eac/knowledgebaseAnswer/0,295199,sid63_gci976464,00.html
https://en.wikipedia.org/wiki/RDBMS
http://doc.adviser.com/doc/12292
http://www4.gartner.com/press_releases/asset_86529_11.html
http://www.eweek.com/article2/0,1759,1099419,00.asp
http://www.informationweek.com/815/database.htm
http://publib.boulder.ibm.com/infocenter/txen/topic/com.ibm.txseries510.doc/atshak0039.htm
http://membres.lycos.fr/db2usa/eliste.htm
http://infoworld.com/article/04/05/26/HNgartnerdbreport_1.html
http://www-306.ibm.com/software/data/db2/
http://www.db2mag.com/

[OT?] FBI Virtual Case File is even possible?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch
Date: 14 Jan 2005 13:28:56 -0800
Subject: Re: [OT?] FBI Virtual Case File is even possible?
there have been quite a few federal dataprocessing modernization projects ... starting at least in the late 80s ... with many running into extreme difficulties. in a number of situations it appeared as if new system integrators were brought in with the attitude that new technology would automagically solve all problems; in many situations it would appear that they failed to appreciate how devilishly innovative and complex the original mainframe solutions from the 60s actually were (and possibly that new technology all by itself doesn't actually, automagically solve all problems).

even in cases where some of the original system integrators were being used ... it is likely they weren't actually the exact "same" (people) as in the '60s.

something like a CTC on a PC

Refed: **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main
Date: 14 Jan 2005 15:17:26 -0800
Subject: Re: something like a CTC on a PC
John F. Regus wrote:
Is there anything like a CTC on a PC? TIA.

CTC ... channel-to-channel was typically 1mbyte/sec ... but because of the traditional half-duplex channel programming model, effective thruput was much less. basically the box looked like a controller that allowed attachment to multiple channels. there have been quite a few cards for PCs that emulated the controller interface and allowed attachment to channels. there have been many fewer cards built that emulate a mainframe channel and allow attachment of "controller" devices (some number of them built internally for use in internal regression testing of controllers).

in another lifetime, my wife did a stint in POK in charge of loosely-coupled architecture ... and did a fairly advanced architecture ... but lost some number of the issues over high-speed mainframe cluster interconnect (at least at the time; some number of them have since been revisited)
https://www.garlic.com/~lynn/submain.html#shareddata

slightly related, random reference
https://www.garlic.com/~lynn/95.html#13

starting in the mid to late 80s ... you started to see 16bit, "high-performance" 10mbit enet cards that had a full-duplex programming api and got very close to media thruput (starting to see higher effective thruput than most CTC programming). In the very late 80s, there were even some high-performance FDDI (100mbit) cards and fiber-channel standard (FCS) full-duplex 1gbit cards (i.e. 1gbit in each direction) .... all getting higher effective thruput than CTC.

these days there is a new generation of high-performance 1gbit and 10gbit enet cards. there are also more advanced technology, high-thruput, very low latency interconnects for clusters and GRID infrastructures.

some simple search engine use:
http://www.gridcomputingplanet.com/news/article.php/3301391
http://www.dolphinics.com/news/2003/6_9.html
http://www.intel.com/update/contents/sv08031.htm
http://www.hoise.com/primeur/04/articles/monthly/AE-PR-04-04-6.html
http://www.gridcomputingplanet.com/news/article.php/3414081
http://www.chelsio.com/news/pr_052404.htm
http://grail.sdsc.edu/cluster2004/poster_abstracts.html
http://www.linuxhpc.org/stories.php?story=04/10/25/7652530
http://enterthegrid.com/vmp/articles/EnterTheGrid/AE-ETG-profile-293.html
http://www.myri.com/
http://www.nwfusion.com/newsletters/servers/2004/0705server2.html

CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)

From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers,alt.os.multics
Date: 15 Jan 2005 05:21:31 -0800
Subject: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE)
Jeff Jonas wrote:
That's why the first IBM 360 instruction was test and SET: reading from core set it to all ONES; only the zeroes needed to be written back.

I recall 'half-read' and 'half-write' core instructions in the Basic-4 machine to allow read-modify-write instructions.

Before I learned about semaphores, I pondered using flip-flops for hardware test-and-set.


360 had other non-interruptable instructions that did fetch/modify/store ... however they weren't defined with multiprocessor semantics ... they were usable for some things in a uniprocessor environment ... mostly modifying flags .... in code that was possibly multi-threaded/multiprogrammed where the application was enabled for interrupts. they were the immediate instructions: and-immediate, or-immediate, and exclusive-or-immediate.

however, these instructions weren't defined for atomic multi-processor semantics (i.e. another processor could be tracking right behind and do a fetch between the time the first machine did a fetch and stored the result back).

the 360 instruction test-and-set was the only instruction defined with atomic multi-processor semantics.
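as a trivial sketch of the difference (plain C11 atomics, not 360 assembler ... just to illustrate the semantics, not how the 360 implemented them): a read-modify-write done as a separate fetch and store can lose another processor's update, while a compare&swap style loop only stores if the word still holds the value that was fetched:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_uint flags;   /* word of flag bits shared by processors */

    /* analogous to or-immediate: separate fetch and store ... another
       processor's update between the two can be lost */
    void set_flag_racy(unsigned bit)
    {
        unsigned old = atomic_load(&flags);
        atomic_store(&flags, old | bit);
    }

    /* compare&swap style: the store only succeeds if the word is unchanged,
       otherwise refetch and retry */
    void set_flag_cas(unsigned bit)
    {
        unsigned old = atomic_load(&flags);
        while (!atomic_compare_exchange_weak(&flags, &old, old | bit))
            ;   /* old was refreshed with the current value ... retry */
    }

    int main(void)
    {
        set_flag_cas(0x01);
        set_flag_racy(0x02);
        printf("flags = %02x\n", (unsigned)atomic_load(&flags)); /* 03 */
        return 0;
    }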

the following is from the principles-of-operation, discussing the "immediate"-instruction failure mode that can work in a uniprocessor, multi-threaded environment but fails in multiprocessor mode:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6.1?SHELF=EZ2HW125&DT=19970613131822

appendix discussing use of general instructions (including AND, OR, and EXCLUSIVE-OR)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.3?SHELF=EZ2HW125&DT=19970613131822

clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: comp.arch,alt.folklore.computers
Date: 15 Jan 2005 10:43:15 -0800
Subject: Re: clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
Jonathan Thornburg wrote:
Most of the supercomputers my colleagues and I use for simulating black-hole collisions are *not* shared-memory systems. There might be a few SGI Origin systems around, and some IBM Regatta's that are small enough to be shared-memory, but the vast majority of the systems we use are now clusters of the sort which used to be called "Beowulfs".

Sure, shared-memory is nicer to program... but clusters are so much cheaper that they've won out. Science codes usually seem to use MPI these days, though HPF is also seen. Our codes are all built on a high-level "application framework" (Cactus, http://www.cactuscode.org) which makes the parallelism pretty close to transparent, so clusters aren't a programming problem for us.


i had done a lot of work with Charlie on SMP changes for cp/67 (where charlie invented the compare&swap instruction ... mnemonic chosen to match his initials) and then later a lot more work on smp kernel support for vm/370 ...
https://www.garlic.com/~lynn/subtopic.html#smp

when my wife and I started work with the romp/rios organization ... there was a strong orientation in the romp/rios chip designs to provide absolutely no support for cache coherency ... as a result, about the only scale-up scenario left was the cluster approach. we started ha/cmp as both availability and scale-up ... minor reference
https://www.garlic.com/~lynn/95.html#13

and a lot of related posts
https://www.garlic.com/~lynn/subtopic.html#hacmp

we also spent some time with the sci people .... and scalable shared memory strategies .... and then when the executive we reported directly to became head of somerset, they had an effort that included trying to adapt 801 to cache-coherent designs.

in the ha/cmp work ... i did the initial design and implementation for the distributed lock manager .... working initially with the ingres people who had a vms vax/cluster database implementation. some amount of the design of the dlm was based on suggestions from the ingres people about "improvements" they would recommend to the vax distributed locking infrastructure. We spent quite a bit of time with ingres, oracle, informix and sybase on various ways to use a distributed lock manager in a distributed cluster. The informix and sybase implementations were somewhat more oriented towards fall-over ... while the oracle and ingres implementations tended somewhat more towards parallel operation (in addition to fall-over).

One of the issues was that i had worked on a mechanism for piggybacking database cache records in the same payload with lock migration. The existing mechanism was that if the lock & processing for a record were to migrate to a different processor/cache ... the record first had to be written to disk by the processor giving up control ... and then (re)read from disk by the processor/cache taking control (instead of just doing a straight cache-to-cache transfer piggybacked on the same transmission that passed lock control).

The problem wasn't actually with doing the direct cache-to-cache transfers (w/o first passing the record out to disk and reading it back in) ... it was with some of the recovery scenarios involving the distributed logs ... and correctly ordering commit records in the different distributed logs. Not being able to take advantage of direct cache-to-cache transfers could reduce the effective thruput of fully integrated cluster operation to little better than partitioned, distributed database operation.
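purely illustrative sketch (invented names and message layout in C ... not the actual ha/cmp dlm code or wire protocol) of the idea of carrying the current copy of the record in the same message that migrates the lock, instead of forcing it out to disk and re-reading it:

    #include <stdio.h>
    #include <string.h>

    #define RECLEN 4096

    /* hypothetical lock-migration message: the lock grant and, optionally,
       the granting node's cached copy of the record travel together */
    struct lock_grant {
        unsigned long resource;     /* which record/block the lock covers  */
        unsigned long version;      /* lets the new owner trust the copy   */
        int           has_record;   /* 1 = record piggybacked on the grant */
        char          record[RECLEN];
    };

    /* old style: write record to disk, send grant, new owner re-reads disk.
       piggyback style: copy the cached record straight into the grant. */
    void build_grant(struct lock_grant *g, unsigned long res,
                     unsigned long ver, const char *cached)
    {
        g->resource   = res;
        g->version    = ver;
        g->has_record = (cached != NULL);
        if (cached)
            memcpy(g->record, cached, RECLEN);  /* cache-to-cache, no disk i/o */
    }

    int main(void)
    {
        static char cache[RECLEN] = "some dirty database record";
        static struct lock_grant g;
        build_grant(&g, 42, 7, cache);
        printf("grant resource %lu version %lu piggyback=%d\n",
               g.resource, g.version, g.has_record);
        return 0;
    }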

random past dlm postings:
https://www.garlic.com/~lynn/2001e.html#4 Block oriented I/O over IP
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002e.html#71 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002f.html#5 Blade architectures
https://www.garlic.com/~lynn/2002f.html#6 Blade architectures
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#8 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004q.html#71 will there every be another commerically signficant new ISA?

and for some archeological database topic drift:
https://www.garlic.com/~lynn/2005.html#23 Network databases
https://www.garlic.com/~lynn/2005.html#24 Network databases
https://www.garlic.com/~lynn/2005.html#25 Network databases
https://www.garlic.com/~lynn/2005.html#26 Network databases
https://www.garlic.com/~lynn/2005.html#27 Network databases
https://www.garlic.com/~lynn/2005.html#29 Network databases
https://www.garlic.com/~lynn/2005.html#30 Network databases
https://www.garlic.com/~lynn/2005.html#36 Network databases

something like a CTC on a PC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: lynn@garlic.com
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: 15 Jan 2005 11:24:14 -0800
Subject: Re: something like a CTC on a PC
ref:
https://www.garlic.com/~lynn/2005.html#38 something like a CTC on a PC

for other topic drift ... there is recent thread in comp.arch on cluster vis-a-vis shared-memory
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory

possibly the "largest" single-system image cluster mainframe systems from the late '70s and early 80s (using ctca and shaerd disk) was the internal US HONE system. this supported all us field, sales, marketing people in the US. In the late '70s, all the US HONE locations were consolidated into a single location in Cal. This was built out to handle all US field, sales, marketing people at one point starting to push 40,000 defined userids. Pieces of it were then replicated, first in Dallas and then a 3rd site in Boulder (for availability).

The HONE system was also cloned and had operations in quite a few locations around the world (supporting field, sales, and marketing people in other parts of the world). One of the issues was that starting with the 115/125, all mainframe orders first had to be passed through a HONE application before the salesman could submit the order.

HONE was a large VM370/CMS infrastructure (having grown up from cp67 pilots) with a large number of sales-support applications; many of them implemented in APL (initially CMS\APL ... and then migrated thru various follow-on versions: APL\CMS, APL\VS, APL2, etc). lots of past HONE &/or APL posts
https://www.garlic.com/~lynn/subtopic.html#hone

In the mid to late 80s ... there were some bigger mainframe clusters ... many of them using HYPERchannel as the interconnect in place of CTCA .... you would find them at some number of internal corporate locations and in the really large airline res systems and some of the really large online library query systems. misc. past hsdt posts ... some including HYPERchannel references:
https://www.garlic.com/~lynn/subnetwork.html#hsdt

Higher Education places still use mainframes?

From: lynn@garlic.com
Newsgroups: alt.folklore.computers
Date: 15 Jan 2005 12:09:29 -0800
Subject: Re: Higher Education places still use mainframes?
Lawrence Greenwald wrote:
Any higher education places (including places like DeVry) still use mainframes (IBM, Unisys, even an old CDC) for any academic instruction and/or research?

I'm sure some still have them for administrative purposes (payroll, accounting, etc).


no idea if it means anything ... listserv.uark.edu operates the vmesa-l mailing list (aka the mainframe virtual machine product ... originated at the cambridge science center as cp/67 back in the mid-60s) ... a quick grep on edu domain names from recent vmesa-l postings yields
cornell.edu emporia.edu maine.edu marist.edu msu.edu nacollege.edu nps.edu osu.edu rk.edu sru.edu texarkanacollege.edu uark.edu uc.edu

increasing addressable memory via paged memory?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: increasing addressable memory via paged memory?
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 16 Jan 2005 06:54:08 -0700
another was the original description for romp/801
https://www.garlic.com/~lynn/subtopic.html#801

801 hardware was originally described as a hardware/software complexity trade-off ... making the hardware simpler so it could fit in a single chip (or a small number of chips, and/or so instructions operated in a single cycle ... depending on who's telling the story).

hardware virtual address segmentation was simplified by just having 16 segments (in a 32-bit virtual address space), inverted tables, and no protection domains in the hardware. the compiler was responsible for generating correct code ... and the program loader was responsible for making sure that loaded executables came from a certified compiler.

with only 16 segments ... the ability to address memory objects was somewhat limited ... however the claim was that in-line application code could change a segment register value as easily as it could change a general purpose register and/or address register (w/o having to cross a kernel/protection domain boundary). thus an application had potentially as much addressing as there were total possible segments (as opposed to 32bit addressing).

so for romp, there was 32bit virtual addressing ... with 4bits used to select a segment register and 28bits (256mbytes) used to address within a segment. a romp segment register held a 12bit segment-id value ... and if you combined the 12bits of segment-id with the 28bit segment displacement ... you came up with 40bits. A lot of the early pc/rt documentation described romp as having 40bit virtual addressing.
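a minimal sketch of that arithmetic (plain C, purely illustrative ... not actual romp hardware definitions): the top 4 bits of a 32-bit effective address pick one of 16 segment registers, the register supplies a 12-bit segment-id, and segment-id plus 28-bit displacement give the "40-bit" number quoted in the early pc/rt documentation (widen the segment-id to 24 bits and you get the rios "52-bit" number):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t segreg[16];     /* each holds a 12-bit segment-id */

    static uint64_t virt40(uint32_t ea32)
    {
        uint32_t sr    = ea32 >> 28;            /* 4 bits: segment register  */
        uint32_t disp  = ea32 & 0x0fffffff;     /* 28 bits: 256mbyte segment */
        uint64_t segid = segreg[sr] & 0x0fff;   /* 12-bit segment-id         */
        return (segid << 28) | disp;            /* 12 + 28 = 40 bits         */
    }

    int main(void)
    {
        segreg[3] = 0x5a5;          /* in-line code loading a segment register */
        printf("%010llx\n",
               (unsigned long long)virt40(0x31234567)); /* 5a51234567 */
        return 0;
    }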

romp was originally targeted as an office products displaywriter replacement using the PL.8 programming language and a proprietary, closed CPr operating system. when the displaywriter project was canceled, it was decided to retarget ROMP to the unix workstation market. This involved subcontracting a Unix port to the company that had done the PC/IX port ... and moving to a more traditional kernel protection domain model ... including adding protected/kernel mode & unprotected/non-kernel mode domains to the hardware. This in turn resulted in eliminating application programs doing inline code changing segment register values (and using a more traditional kernel call mechanism to manipulate segment register values).

the romp follow-on, RIOS/power/6000, doubled the number of bits in the segment-id from 12 to 24 ... and you find some of the early rios/power/6000 documentation describing virtual addressing as 52bits (24bit segment-id plus 28bit segment displacement).

having more traditional kernel/non-kernel protection calls for manipulating segment register values then made the 12/24-bit segment-ids of the romp/rios inverted table design more analogous to the total number of unique page tables that a system could support across all possible virtual address spaces ... say, the amount of total real storage devoted to page tables ... although i had developed & shipped "pageable" page table support as part of the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

so you could have systems that actually had more defined segments/page-tables than there was real storage to contain them at any single moment.

previous post talking about 3033 page table hack allowing addressing 32mbyte real memory with only 24bit/16mbyte addressing:
https://www.garlic.com/~lynn/2005.html#34 increasing addressable memory via paged memory?

misc. past posts mentioning 3033 page table hack for 32mbyte real:
https://www.garlic.com/~lynn/2001m.html#15 departmental servers
https://www.garlic.com/~lynn/2002d.html#51 Hardest Mistake in Comp Arch to Fix
https://www.garlic.com/~lynn/2004n.html#50 Integer types for 128-bit addressing

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

John Titor was right? IBM 5100

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John Titor was right? IBM 5100
Newsgroups: alt.folklore.computers
Date: Sun, 16 Jan 2005 17:45:36 -0700
jsavard@excxn.aNOSPAMb.cdn.invalid (John Savard) writes:
I recall reading somewhere that the S/360 layer was not the machine's real native code, but an emulator, used only for APL, not for BASIC. And it was supposed to be a limited subset of the 360 as well.

note that this came out of the palo alto science center ... cambridge had done the original apl\360 port to cms for cms\apl. the palo alto science center did a follow-on called apl\cms ... as well as the apl microcode assist for the 370/145 (lots of apl\cms on a 370/145 w/assist ran as fast as apl\cms on a 370/168 w/o assist).

HONE was across the back parking lot from PASC ... and since they were a heavy APL operation ... they were interested in most things APL ...
https://www.garlic.com/~lynn/subtopic.html#hone

palm reference (has pointers to lots of other references):
http://www.brouhaha.com/~eric/retrocomputing/ibm/5100/

pasc reference:
http://www.svec.org/hof/1994.html#friedl

past palm (Put All Logic in Microcode) postings ....
https://www.garlic.com/~lynn/2000.html#69 APL on PalmOS ???
https://www.garlic.com/~lynn/2000.html#70 APL on PalmOS ???
https://www.garlic.com/~lynn/2001d.html#54 VM & VSE news
https://www.garlic.com/~lynn/2002b.html#39 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002b.html#43 IBM 5100 [Was: First DESKTOP Unix Box?]
https://www.garlic.com/~lynn/2002h.html#60 Java, C++ (was Re: Is HTML dead?)
https://www.garlic.com/~lynn/2002.html#52 Microcode?
https://www.garlic.com/~lynn/2003b.html#5 Card Columns
https://www.garlic.com/~lynn/2003i.html#79 IBM 5100
https://www.garlic.com/~lynn/2003i.html#84 IBM 5100
https://www.garlic.com/~lynn/2004c.html#8 IBM operating systems and APL

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

OSI model and SSH, TCP, etc

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: OSI model and SSH, TCP, etc
Newsgroups: alt.computer.security
Date: Mon, 17 Jan 2005 14:03:04 -0700
"Joe Plowman" writes:
I am studying for the CISSP cert. and am having trouble with the OSI model. Does TCP reside at the application level or the network level? How about SSH and SSL? Are they at the data layer?

I have seen conflicting information. Some places say SSH and SSL are data layer protocols and other say they are network layer. Any info and explanations are appreciated.


one of the problems was that the OSI model was done independently of TCP/IP ... the claim is by a lot of copper-wire, telco-oriented people. ISO compounded the problem with edicts that ISO and ISO-chartered standards organizations couldn't standardize stuff that violated the OSI model.

circa 1990, we were involved in trying to get HSP (high-speed protocol) into (ISO-chartered) ANSI x3s3.3 for standardization (this was in the era when numerous govs. had mandated the elimination of tcp/ip and migration of everything to osi ... things like federal gosip, etc). HSP went directly from the transport layer to the LAN/MAC interface, supporting the internetworking protocol (i.e. IP) along the way.

because of the ISO edict about conformance with the OSI model ... HSP couldn't be considered because:

1) it went directly from the transport interface to the LAN/MAC interface, bypassing the layer3/layer4 network/transport interface ... violating the OSI model.

2) it supported IP. IP doesn't exist in the OSI model (it is an internetworking layer ... somewhat sitting in a non-existent layer between the bottom of transport (layer4) and the top of network (layer3)) ... IP violates the OSI model and therefore anything supporting IP also violates the OSI model.

3) it went to the LAN/MAC interface .... the LAN/MAC interface sits somewhere in the middle of layer3/network ... and therefore violates the OSI model ... and therefore anything that supports LANs also violates the OSI model.

misc. past osi and hsp posts:
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

to the extent that TCP is a transport type of protocol ... it somewhat corresponds to layer4/transport ... however, TCP (now) interfaces to the internetworking protocol (IP) ... which doesn't exist in the OSI model.

SSH and SSL are transport-like protocols ... although implemented at the application layer (and they tend to utilize TCP as the lower level transport mechanism).

TCP predates the 1/1/83 great changeover to the internetworking protocol, having been originally implemented on the old arpanet ... which somewhat more closely corresponded to the OSI model ... lacking an internetworking layer.

I've frequently claimed that the major reason the internal network had a larger number of nodes than the arpanet/internet from just about the start up thru about mid-85 was that the majority of the internal network nodes had gateway-like capability built in (which the arpanet/internet didn't get until the great 1/1/83 switchover to the internetworking protocol).

misc. past internal network posts:
https://www.garlic.com/~lynn/subnetwork.html#internalnet

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

8086 memory space

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 memory space
Newsgroups: alt.folklore.computers
Date: Tue, 18 Jan 2005 07:52:46 -0700
the_big_bin writes:
That is amazing. I have been called out thousands of times to many customers when our hardware started acting flaky. After a few minutes I came to the usual realization that the software itself was bad. <cry>

i think somebody at tandem in the early 80s did a failure-mode study and found that hardware had become the minority of problems.

in the late 90s ... one of the large financial transaction operations claimed that the two primary factors contributing to them having 100 percent availability for the previous six years were

1) ims hotstandby (operations replicated in three physical locations)

2) automated operation (people mistakes had become a major failure-mode)

my wife had done her stint in POK in charge of loosely-coupled architecture and was responsible for Peer-Coupled Shared Data architecture
https://www.garlic.com/~lynn/submain.html#shareddata

one of the few operations back then that did some serious work on it was ims hotstandby

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

creat

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: creat
Newsgroups: alt.folklore.computers
Date: Tue, 18 Jan 2005 08:05:59 -0700
Greg Menke <gregm-news@toadmail.com> writes:
But its best if you can get the "ergonomic" ones with the split down the middle and the two halves tilted at some bizarre angle, and a divided spacebar that does backspace when you hit the left half of it. To make it perfect, all you'd need is a built-in mousepad that moves the mouse around whenever your right hand gets too close to it. Modern! Equals! Good!

back circa 1980, the san jose human factors group built left & right hand chord "keyboards" ... something like a large mouse with depressions that the fingertips fit into, with small rocker-like switches. the claim was that w/o a whole lot of practice, most people could hit 80 words/minute. today you could add optical motion sensors to such a device ... to achieve mouse/cursor operation also.

the inventor of the mouse also did augment ... which had a much simpler chord keyboard (somewhat more like four piano keys).

minor past chord keyboard refs:
https://www.garlic.com/~lynn/2000g.html#31 stupid user stories
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2004q.html#55 creat

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

[OT?] FBI Virtual Case File is even possible?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: [OT?] FBI Virtual Case File is even possible?
Newsgroups: comp.arch
Date: Tue, 18 Jan 2005 08:13:21 -0700
Robert Myers writes:
It's not the Federal Government, but Lockheed is bragging about how much it has reduced paperwork in its contract submissions by putting everything (including drawings in editable format) on computers. Come to think of it, the Aerospace Corporation probably does have some experience with a system with many of the capabilities the FBI wants, including the capacity to handle sensitive documents. Wonder how well it really works?

i believe one of the augment success stories was that lockheed had used it for very large (fed. gov) contract submissions (aka hundreds of thousands of pages, millions?) ... and that it was dutifully all x-ref'ed with hypertext links. augment had moved from sri to tymshare ... and when m/d bought tymshare, i don't think augment was one of the things that survived.

random past augment posts:
https://www.garlic.com/~lynn/95.html#00 old mainframes & text processing
https://www.garlic.com/~lynn/2000g.html#22 No more innovation? Get serious
https://www.garlic.com/~lynn/2000g.html#26 Who Owns the HyperLink?
https://www.garlic.com/~lynn/2000g.html#31 stupid user stories
https://www.garlic.com/~lynn/2001n.html#70 CM-5 Thinking Machines, Supercomputers
https://www.garlic.com/~lynn/2002b.html#25 Question about root CA authorities
https://www.garlic.com/~lynn/2002d.html#4 IBM Mainframe at home
https://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: learning how to use a computer)
https://www.garlic.com/~lynn/2002h.html#6 Biometric authentication for intranet websites?
https://www.garlic.com/~lynn/2002o.html#48 XML, AI, Cyc, psych, and literature
https://www.garlic.com/~lynn/2002q.html#8 Sci Fi again was: THIS WEEKEND: VINTAGE
https://www.garlic.com/~lynn/2003g.html#44 Rewrite TCP/IP
https://www.garlic.com/~lynn/2004j.html#39 Methods of payment
https://www.garlic.com/~lynn/2004k.html#39 August 23, 1957
https://www.garlic.com/~lynn/2004q.html#55 creat

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

increasing addressable memory via paged memory?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: increasing addressable memory via paged memory?
Newsgroups: comp.arch
Date: Tue, 18 Jan 2005 08:15:57 -0700
"Stephen Sprunk" writes:
I've read of people trying to implement things like bounded pointers via segmentation, but the limitation of 8192 objects quickly stymies that approach in practice.

and i thot i was raising an issue with only 16 (concurrent) objects in 801/romp/rios. 801 post collection
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

something like a CTC on a PC

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: something like a CTC on a PC
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 18 Jan 2005 08:46:31 -0700
R.Skorupka@ibm-main.lst (R.S.) writes:
Usually network connections give you bigger overhead in terms of CPU usage (network protocol), while "channel-like" ones give less overhead. That's why SAN backup is more efficient than LAN backup, even with similar speeds of medium (bandwidth).

however, the issue with CTC was that it was a half-duplex protocol that required interrupts and changing the programming on every message in every direction. the disk/dasd scenario was somewhat simplified by having a master/slave relationship between the processor and the device ... although there have been lots of latency efficiencies gained by going to full-duplex even for disks. The issue with LAN/SAN is that there tends to be a lot more functionality (and pathlength) associated with normal network protocol stacks ... however, that is somewhat orthogonal to the underlying SAN/LAN hardware operation.

9333, which turned into ssa, was somewhat packetized scsi commands running over a pair of one-way (copper) serial links (aka dual-simplex emulating full-duplex) ... which provided much higher thruput than old-style half-duplex scsi programming. minor reference
https://www.garlic.com/~lynn/95.html#13

one of the big contentions that went on in the fiber channel standards (FCS) organization was certain POK interests trying to map traditional mainframe half-duplex operation on top of a native full-duplex (or at least dual-simplex emulating full-duplex) operation.

slightly recent post in comp.arch
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory

Part of the issue is that half-duplex bus operation starts to run into latency issues with end-to-end coordination.

I claim that possibly the original SAN implementation was done at NCAR using HYPERchannel. HYPERchannel was done in a spin-off by one of the people responsible for the 6600. It was a high-end, mainframe-oriented LAN. It was used in a number of places as a channel extender and generalized interconnect. NCAR had an ibm mainframe that managed the disk farm; the disk farm was connected via HYPERchannel to the ibm computer as well as to some number of other machines (supercomputers, crays, etc). A supercomputer would send a message to the ibm computer over HYPERchannel (operation more like CTC) ... the ibm computer would do some setup for the data in the HYPERchannel network and respond. The supercomputer would then use HYPERchannel to transfer the data directly off disk.

In the early 80s, there were a number of internal operations that used HYPERchannel to remote "local channel attached" devices over a T1 link (a large number of local channel devices were actually 10-20 miles away from the central processor). misc HYPERchannel, HSDT, etc posts
https://www.garlic.com/~lynn/subnetwork.html#hsdt

There were three projects somewhat started/backed at gov. labs

1) LANL was standardizing the cray channel ... half-duplex copper, 100mbyte/sec ... as HiPPI
2) LLNL was standardizing a serial copper switched interconnect, upgraded to use fiber technology, as FCS (fiber channel standard) ... dual-simplex, originally with full 1gbit/sec in each direction
3) SLAC was standardizing a number of protocols ... that had tended to be half-duplex bus-like operation ... as packetized, asynchronous full-duplex operation over point-to-point dual-simplex ... very analogous to the 9333/ssa effort ... but applied to all kinds of serialized bus operations ... memory bus, disk i/o, etc. This was the SCI effort. Circa the early 90s ... there were (at least) three companies that built large scale computers using the Scalable Coherent Interface for memory bus operation .... Sequent, Data General and Convex. Data General went out of business; Convex was bought by HP and Sequent was bought by IBM.

random past SCI posts:
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect speeds )
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc

random past FCS posts:
https://www.garlic.com/~lynn/97.html#5 360/44 (was Re: IBM 1130 (was Re: IBM 7090--used for business or
https://www.garlic.com/~lynn/98.html#30 Drive letters
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/98.html#49 Edsger Dijkstra: the blackest week of his professional life
https://www.garlic.com/~lynn/99.html#54 Fault Tolerance
https://www.garlic.com/~lynn/2000c.html#22 Cache coherence [was Re: TF-1]
https://www.garlic.com/~lynn/2000c.html#56 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#59 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000c.html#68 Does the word "mainframe" still have a meaning?
https://www.garlic.com/~lynn/2000d.html#14 FW: RS6000 vs IBM Mainframe
https://www.garlic.com/~lynn/2001c.html#69 Wheeler and Wheeler
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#22 ESCON Channel Limits
https://www.garlic.com/~lynn/2001m.html#25 ESCON Data Transfer Rate
https://www.garlic.com/~lynn/2002e.html#32 What goes into a 3090?
https://www.garlic.com/~lynn/2002j.html#78 Future interconnects
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003h.html#0 Escon vs Ficon Cost
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
https://www.garlic.com/~lynn/2005.html#38 something like a CTC on a PC

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

something like a CTC on a PC

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: something like a CTC on a PC
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 18 Jan 2005 11:10:50 -0700
re:
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC

minor topic drift ... the original TCP/IP for mainframe was implemented in vs/pascal. because of a number of factors, it would consume a whole 3090 processor getting 44kbytes/sec.

i added rfc 1044 support ... and in some testing at cray research ... it was getting mbyte/sec (channel interface speed) between a cray and a 4341-clone ... using only a very modest amount of the 4341 processor.

total random comment on the testing at cray research ... we were scheduled to leave on flight for Minneapolis ... the flight was 20 minutes late leaving SFO ... five minutes after wheels up ... the quake hit.

misc. 1044 posts
https://www.garlic.com/~lynn/subnetwork.html#1044

rfc 1044 summary ... from my rfc index:
https://www.garlic.com/~lynn/rfcidx3.htm#1044

note: clicking on the ".txt=xxxx" field in the rfc summary fetches the actual RFC.

and even more drift ... recent post about trying to get HSP (high-speed protocol) for standardization
https://www.garlic.com/~lynn/2005.html#45 OSI model and SSH, TCP, etc

and past posts on HSP (and even OSI)
https://www.garlic.com/~lynn/subnetwork.html#xtphsp

and even more topic drift ... there have been some recent threads on buffer overflow ... especially in c language and networking environments ... i know of no buffer overflow exploit in the vs/pascal implemented stack ... collection of buffer overflow posts:
https://www.garlic.com/~lynn/subintegrity.html#overflow

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

8086 memory space

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 memory space
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2005 10:26:08 -0700
Morten Reistad writes:
That issue with separate location is much forgotten these days.

When I worked for a large phone company there were standards for this; a minimum separation distance of 3 kilometers for really critical systems.

It is not just the hardware. The environment contributes just as much with modern, redundant hardware. Like electricity failures in -20 cold spells where diesels have starting problems; flooding, break-ins and other violent acts by man and nature.


we looked at a number of studies about backup/recovery sites. One of the numbers was more like 40km separation ... although you had to also check that they weren't subject to identical failure modes (aka the same river overflowing its banks).

when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

I coined the terms disaster survivability and geographic survivability to distinguish from disaster/recovery.

the transmission costs for supporting concurrent operations have come down quite a bit ... and i believe there may actually be a greater number of business continuity operations with multiple physical sites ... it is just that the total number of data processing installations (say small to medium size) has increased so much that the business continuity operations represent a smaller percentage.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

8086 memory space

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 memory space
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2005 10:36:00 -0700
jmfbahciv writes:
Our operators had to be a bit more adept because they only took care of development systems. Allowing operators to make reasonable decisions about the system and helping them to not commit a fumbled fingered act was not trivial. Our best efforts produced OPR. This probably would have not been a good program for IBM-flavored production systems; by this, I mean Real Data Processing in batch mode.

some of the automated operator stuff started at least by the early '70s ... by the middle '70s, vm had automated processes and programmatic capture techniques for all messages that would otherwise be delivered to a physical terminal/screen ... along with some number of programmatic interfaces that analyzed the captured messages and executed "operator commands/responses" under program control.

I had done the autolog facility originally for automating benchmarks ... kill the system, reboot, run a specific benchmark, kill the system, reboot, run the next benchmark.
https://www.garlic.com/~lynn/submain.html#bench

the autolog facility was picked up in the standard product and was used for automating some number of operations that would normally expect a human to do something.

later in the 80s ... there were some number of similar operational facilities developed using PC 3270 emulation, screen scraping, HLLAPI type programs, etc.

basically you captured as many of the recognized operator messages as possible and developed automagical, programmatic scripts for each case ... eventually including stuff like sending off beeper messages to system support staff.
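
a minimal sketch (the message patterns and actions are invented, not taken from any particular product) of that style of automation ... every message that would have gone to the operator console gets matched against a table of patterns, and each match drives a scripted response or a page to the support staff.

    import re

    def page_support(msg):
        print("PAGE on-call:", msg)

    handlers = [
        (re.compile(r"TAPE (\S+) MOUNT PENDING"), lambda m: print("reply: MOUNT", m.group(1))),
        (re.compile(r"SPOOL .* PERCENT FULL"),    lambda m: print("cmd: purge oldest spool files")),
        (re.compile(r"DASD .* I/O ERROR"),        lambda m: page_support(m.group(0))),
    ]

    def automated_operator(console_stream):
        for line in console_stream:
            for pattern, action in handlers:
                m = pattern.search(line)
                if m:
                    action(m)
                    break
            else:
                print("log only:", line)          # unrecognized messages just get logged

    automated_operator([
        "TAPE 0A81 MOUNT PENDING",
        "SPOOL 92 PERCENT FULL",
        "DASD 0321 I/O ERROR",
        "USER FRED LOGGED ON",
    ])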

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

creat

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: creat
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2005 10:52:59 -0700
CBFalconer writes:
No. The only contact I ever had with those systems was via TYMSHARE (I think) and I recall things being disgustingly slow. 5 minutes to echo an input char, etc. with access via 110 baud TTYs.

tymshare had a unit called tymnet that had their own backbone and lots of local dial-up numbers ... i believe not only in the US but also in europe. besides tymnet being the interface for accessing tymshare time-sharing services ... they also sold the service to other organizations that provided online services ... possibly an early genesis of today's local-access number services (for instance, periodically looking at hostnames for various dialup IP addresses ... i see stuff like Level3 as part of the dialup hostname ... even tho Level3 obviously isn't the customer's ISP).

when m/d (McDonnell Douglas) bought Tymshare ... Tymnet was eventually spun off to BT (the Tymnet bldg. on 1st st. got a BT logo).

random old service bureau time-sharing posts
https://www.garlic.com/~lynn/submain.html#timeshare

tymshare started providing the vmshare online computer conferencing in the '70s to the ibm user group organization, share
http://www.share.org/

and lots of people used local tymnet numbers as their access.

vmshare archive
http://vm.marist.edu/~vmshare/

in the early 80s, i set up a process with tymshare (& share.org) where i got a regular shadow copy of all the vmshare files and put them up on the HONE system as well as at a number of other internal sites.

HONE was the internal online infrastructure that supported all the sales, marketing and field people world-wide. misc. past HONE posts:
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Foreign key in Oracle Sql

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Foreign key in Oracle Sql
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Wed, 19 Jan 2005 14:29:48 -0700
DA Morgan writes:
Given that those other products didn't exist when Oracle was created. And given that those that have worked with Oracle for almost 20 years have a large volume of working code they don't want broken. And given that it really doesn't matter ... what's your point other than whining?

there is this from another thread in this n.g. (circa 1976, MDS)
https://www.garlic.com/~lynn/2005.html#30 Network databases

the folklore is that this fed gov. agency funded a company (had large bldg. just west of 101 in burlingame) to do something called Oracle for pdp11 and vm370 platforms (both system/r and ingres were already in progress ... system/r also having been done on vm370 platform).

later some of the people founded a company called SDL (1977) changing to RSI (1979) to commercialize Oracle ... first releasing the pdp11 and then the vm370 versions. They later changed the company name to be the same as the product name.

In the late 80s, they were running into cash flow problems and there were several press releases about selling a big portion of the company to a large far east steel company. Shortly after the announcements, they announced a corporate-wide license with a large international oil company ... and were able to back out of the deal with the steel company (we subsequently visited the oil company's corporate hdqtrs in europe and they commented that a corporate-wide license can sometimes be not such a good thing ... once the check is written, you can lose the interest of the local marketing and support people).

minor reference with some historical details:
https://en.wikipedia.org/wiki/Oracle_database

more recent posting on some of the technology (distributed lock manager work, from comp.arch)
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory

old reference to ha/cmp activity w/oracle:
https://www.garlic.com/~lynn/95.html#13

one of the people in the referenced meeting said that they had handled the majority of the code transfer from endicott/sqlds to stl for db2.

two of the other people in the referenced meeting later showed up in a small startup where they were responsible for something called a commerce server

some topic drift with tie-in between oracle and electronic commerce
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

there used to be this joke about there actually only being 200 people in the industry ... the same people just kept showing up in different places.

long ago and far away email from spring 1980
https://www.garlic.com/~lynn/2004o.html#email800329
in
https://www.garlic.com/~lynn/2004o.html#40

referencing oracle announcement for vm370

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

8086 memory space

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 memory space
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2005 14:33:51 -0700
Joe Pfeiffer writes:
I don't know of specific examples, but I was told there were several companies that replicated their data in the other tower at the World Trade Center....

there was a large financial transaction processing center in NJ that, during a heavy snow storm in the early 90s, had the roof fall in ... not too long after their disaster/recovery site in the WTC had been taken out by a bombing. It took them a couple extra days to become operational again.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Foreign key in Oracle Sql

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Foreign key in Oracle Sql
Newsgroups: comp.databases.theory,alt.folklore.computers
Date: Wed, 19 Jan 2005 14:45:42 -0700
Anne & Lynn Wheeler writes:
the folklore is that this fed gov. agency funded a company (had large bldg. just west of 101 in burlingame) to do something called Oracle for pdp11 and vm370 platforms (both system/r and ingres were already in progress ... system/r also having been done on vm370 platform).

... and based on this federal agency's participation at user group meetings
http://www.share.org/

during the 70s and 80s ... their primary dataprocessing platforms were mostly vm370.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Foreign key in Oracle Sql

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Foreign key in Oracle Sql
Newsgroups: comp.databases.theory
Date: Wed, 19 Jan 2005 16:15:41 -0700
DA Morgan writes:
Not at all. But the code bases for DB2 in its various host environments did not.

the mainframe db2 code base was a direct evolution from system/r to sql/ds to db2. there might be a case made that ibm took so long shipping an rdbms product because of internal politics from its other dbms product groups. in the late 80s, there was a (new) open-system implementation done, originally for os2 and aix ... that was announced with the same name, db2.

this is from vmshare archive ... posting 2/27/83 discussing details of the sql/ds release 2 announcement:
http://vm.marist.edu/~vmshare/browse.cgi?fn=SQLDS&ft=MEMO

which was originally announced in 1981 ... although DB2 wasn't announced until 1983.


http://www.colderfusion.com/presentations/smartsql/tsld003.htm

from above history page:


History of SQL

• pre-70 hierarchical and network databases
• 1970 E.F. Codd defines relational model
• 1974 IBM System/R project, inc. SEQUEL lang.
• 1978 System/R customer tests
• 1979 Oracle introduces commercial RDBMS
• 1981 IBM introduces SQL/DS
• 1983 IBM introduces DB2
• 1986 ANSI SQL1 standard ratified
• 1992 ANSI SQL2 standard ratified

.. snip ...

modulo first commercial RDBMS:
• 1976 Multics Data Store

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

8086 memory space

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 memory space
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2005 16:27:06 -0700
Brian Inglis writes:
What was the name of the VM standard automated operator service machine?

the default was called "autolog1" ... near the end of the boot procedure, the system would call the autolog command processor, passing the command "autolog autolog1". It was expected that the installation would have installed scripts for the autolog1 service machine to perform various operating system startup functions ... including issuing autolog commands for other service virtual machines.
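
a minimal sketch (the names of the service machines other than AUTOLOG1 are just illustrative) of that chain: system bring-up autologs AUTOLOG1, and AUTOLOG1's installation-supplied startup script in turn autologs the other service virtual machines.

    startup_scripts = {                 # userid -> commands in its startup script
        "AUTOLOG1": ["AUTOLOG VTAM", "AUTOLOG RSCS", "AUTOLOG OPERATOR"],
        "VTAM": [], "RSCS": [], "OPERATOR": [],
    }

    def autolog(userid):
        print("autologged", userid)
        for command in startup_scripts.get(userid, []):
            verb, target = command.split()
            if verb == "AUTOLOG":
                autolog(target)

    autolog("AUTOLOG1")                 # the last step of system initialization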

Later the process was modified to have AUTOLOG1 and STARTER0.

from long ago and far away ... email concerning somebody doing cleanup work on various privilege settings ...

Date: 12/28/78 16:45:13
From: wheeler

Directory changes strikes again!!!!!
----
The 'C=' privileges have been removed from STARTER0. making it unable to run its AUTORUN exec
----
AUTOLOG1 and STARTER0 share R/W the same 191 disk, depending on how the system is brought up, either AUTOLOG1 or STARTER0 is autolog to perform system initializations. If the system is brought up COLD or WARM then AUTOLOG1 is autologged. However if the system is brought up CKPT or FORCE then STARTER0 is autologged. Not allowing STARTER0 to perform its function leads to all sorts of complications as we can know when we went thru this same problem a couple of months ago.
----
<< Lynn W., K03/281, San Jose Res., 408-256-1783 (8-276) >>
............. SEASON GREETINGS


... snip ... top of post, old email index

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

8086 memory space

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 8086 memory space
Newsgroups: alt.folklore.computers
Date: Wed, 19 Jan 2005 16:34:25 -0700
Brian Inglis writes:
Ouch! Cheap and short-sighted. Like the ISPs with intended disparate routing who found out that their fibres from different telcos passed thru the same conduit at some point, after the backhoe dug it up! IME "will never happen" takes from six hours to six months to show up in production systems, so "never" <= six months. It's nice in those circumstances to be able to say the equivalent of: just rebuild from source with option -DNEVER_HAPPEN and rerun.

the folklore was that the backhoe was in conn ... and caught the fiber that carried all the northeast internet traffic.

the problem was that they had actually, quite carefully, laid out something like nine different 56kbit lines during arpanet days with diverse physical routing (none of the nine shared the same physical wire). over the years ... the telcos had made various changes, upgraded technology and rerouted stuff ... while nobody was keeping track of what was going on ... and all nine lines were eventually consolidated onto the same fiber-optic cable coming out of the northeast.

this is the scenario where disaster plans need a complete walk-thru on a regular basis ... you can't just set them up and then go off and forget about them.
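
a minimal sketch (the circuit and segment data are invented) of the kind of periodic check that walk-thru implies: take the physical routing the telco currently reports for each supposedly diverse circuit and flag any segment that more than one circuit now shares.

    from collections import defaultdict

    circuit_routes = {                   # circuit -> physical segments it traverses
        "line-1": ["cambridge-co", "hartford-conduit-7", "nyc-pop"],
        "line-2": ["boston-co", "hartford-conduit-7", "nyc-pop"],
        "line-3": ["providence-co", "hartford-conduit-7", "newark-pop"],
    }

    segments = defaultdict(set)
    for circuit, route in circuit_routes.items():
        for segment in route:
            segments[segment].add(circuit)

    for segment, circuits in sorted(segments.items()):
        if len(circuits) > 1:
            print("shared physical path:", segment, "->", sorted(circuits))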

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

previous, next, index - home