List of Archived Posts

2006 Newsgroup Postings (02/12 - 02/26)

Multiple address spaces
Multiple address spaces
Multiple address spaces
Architectural support for programming languages
Obsolete phrases
Lynn at garlic.com
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
Mainframe Jobs Going Away
X.509 and ssh
Mainframe Jobs Going Away
X.509 and ssh
X.509 and ssh
Program execution speed
Obsolete phrases
X.509 and ssh
IBM 610 workstation computer
Change in computers as a hobbiest
virtualization?
Military Time?
Military Time?
Military Time?
Military Time?
Harvard Vs Von Neumann architecture
Change in computers as a hobbiest
Change in computers as a hobbiest
Mount DASD as read-only
Mount DASD as read-only
IBM 610 workstation computer
IBM 610 workstation computer
Worried about your online privacy?
A Historical Look at the VAX
Military Time?
X.509 and ssh
X.509 and ssh
Secure web page?
X.509 and ssh
X.509 and ssh
X.509 and ssh
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
IBM 610 workstation computer
Hercules 3.04 announcement
IBM 610 workstation computer

Multiple address spaces

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 12 Feb 2006 15:26:56 -0700
Gerhard Adam wrote:
If there isn't enough space available within the address space to start a unit of work, that work will fail with a GETMAIN-type of failure. This is not an uncommon problem with environments where there is a heavy 24-bit requirement in subsystems like CICS. In those cases, an installation is usually quite sensitive to changes in the boundary alignment of CSA and can experience problems if that changes. The solution, of course, is to convert the resident programs to 31-bit mode so they don't have the dependency.

old email (recently posted) mentioning a mvs/370 system problem that the burlington vt chip fab was having with a production chip design fortran application that exceeded 7mbytes (i.e. the installation had an 8mbyte mvs kernel and a 1mbyte common area in the 16mbyte address space ... leaving a max. of 7mbytes for the application).
https://www.garlic.com/~lynn/2006b.html#39 another blast from the past

Multiple address spaces

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 13 Feb 2006 10:24:20 -0700
Shmuel Metz, Seymour J. wrote:
ES came in with the 3081, where it was physically distinct. As I recall the ES on the 3090 was from the same pool but configured as ES in the LPAR definition.

expanded store was introduced in the 3090 ... the memory was the same as regular storage ... the problem was physically packaging that amount of memory. the amount of memory they could package couldn't all be put on the same bus within the instruction execution latency requirements ... so they went to a two-level design ... that was software managed (as opposed to sci and numa designs, which are hardware implementations). the expanded store bus had longer latency and was wider. it was something akin to an electronic paging drum ... but using synchronous instructions instead of asynchronous i/o. later machines didn't have the physical memory packaging problem ... but the construct lingered for other reasons.
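
as an aside, the synchronous/asynchronous distinction can be sketched with a little cost model (python; purely illustrative, the numbers and names are invented, not actual 3090 values):

    # toy model: moving a 4k page to/from expanded store is a synchronous
    # "instruction" -- the processor just waits out the (longer) bus
    # latency -- versus paging i/o, which pays channel program setup and
    # interrupt-handling pathlength on every operation.
    SYNC_MOVE_USECS = 75        # long-latency, wide expanded-store bus (invented)
    IO_SETUP_USECS = 300        # build/start channel program (invented)
    IO_INTERRUPT_USECS = 200    # interrupt handling + redispatch (invented)

    expanded_store, main_store = {}, {}

    def page_in_sync(block, frame):
        # synchronous move: no channel program, no interrupt
        main_store[frame] = expanded_store.pop(block)
        return SYNC_MOVE_USECS                # cpu busy the whole time

    def async_page_io_cpu_cost():
        # device access overlaps other work, but the cpu pays channel
        # program setup plus interrupt handling on every paging i/o
        return IO_SETUP_USECS + IO_INTERRUPT_USECS

    expanded_store[9] = "page contents"
    print(page_in_sync(block=9, frame=3))     # 75
    print(async_page_io_cpu_cost())           # 500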

note also, when kingston went to attach hippi to the 3090 ... they cut into the side of the expanded store bus ... since it was the only available interface that could support 800mbit/sec hippi i/o. however, the expanded store bus didn't directly have any channel program processor ... so they went to a peek/poke architecture for controlling hippi i/o operations.

recent thread discussing the expanded storage construct lingering on:
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006b.html#15 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#16 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#17 {SPAM?} Re: Expanded Storage
https://www.garlic.com/~lynn/2006b.html#18 {SPAM?} Re: Expanded Storage

circa 1980, several hundred electronic paging drums that emulated the 2305 were acquired from another vendor for internal datacenter use ... they were called the 1655. misc. past posts mentioning the 1655
https://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
https://www.garlic.com/~lynn/2001l.html#53 mainframe question
https://www.garlic.com/~lynn/2002.html#31 index searching
https://www.garlic.com/~lynn/2002i.html#17 AS/400 and MVS - clarification please
https://www.garlic.com/~lynn/2002l.html#40 Do any architectures use instruction count instead of timer
https://www.garlic.com/~lynn/2003b.html#15 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#17 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003c.html#55 HASP assembly: What the heck is an MVT ABEND 422?
https://www.garlic.com/~lynn/2003m.html#39 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2004d.html#73 DASD Architecture of the future
https://www.garlic.com/~lynn/2004e.html#3 Expanded Storage
https://www.garlic.com/~lynn/2005e.html#5 He Who Thought He Knew Something About DASD
https://www.garlic.com/~lynn/2005r.html#51 winscape?
https://www.garlic.com/~lynn/2006.html#38 Is VIO mandatory?

past posts mentioning sci, numa (non-uniform memory architecture), hippi, etc
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2000e.html#8 Is a VAX a mainframe?
https://www.garlic.com/~lynn/2001.html#12 Small IBM shops
https://www.garlic.com/~lynn/2001.html#46 Small IBM shops
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001d.html#54 VM & VSE news
https://www.garlic.com/~lynn/2001d.html#55 VM & VSE news
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2001l.html#16 Disappointed
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2002p.html#8 Sci Fi again
https://www.garlic.com/~lynn/2002p.html#30 Sci Fi again
https://www.garlic.com/~lynn/2002q.html#6 Sci Fi again was: THIS WEEKEND: VINTAGE
https://www.garlic.com/~lynn/2002q.html#8 Sci Fi again was: THIS WEEKEND: VINTAGE
https://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect speeds )
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#39 Flex Question
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003j.html#65 Cost of Message Passing ?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2003p.html#16 Star Trek, time travel, and such
https://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
https://www.garlic.com/~lynn/2004.html#0 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004c.html#37 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005.html#48 [OT?] FBI Virtual Case File is even possible?
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#17 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#26 IBM Plugs Big Iron to the College Crowd
https://www.garlic.com/~lynn/2005k.html#28 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005n.html#4 54 Processors?
https://www.garlic.com/~lynn/2005n.html#6 Cache coherency protocols: Write-update versus write-invalidate
https://www.garlic.com/~lynn/2005n.html#37 What was new&important in computer architecture 10 years ago ?
https://www.garlic.com/~lynn/2005n.html#38 What was new&important in computer architecture 10 years ago ?
https://www.garlic.com/~lynn/2005r.html#43 Numa-Q Information
https://www.garlic.com/~lynn/2005r.html#46 Numa-Q Information
https://www.garlic.com/~lynn/2005s.html#38 MVCIN instruction
https://www.garlic.com/~lynn/2005s.html#40 Filemode 7-9?
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006.html#32 UMA vs SMP? Clarification of terminology
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage

Multiple address spaces

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Multiple address spaces
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 13 Feb 2006 10:47:26 -0700
Shmuel Metz, Seymour J. wrote:
No. The B5000, dating back to 1959, is an example of a system without paging that supported virtual memory larger than physical memory. OS/2 1.x is another, more recent, example.

ref:
https://www.garlic.com/~lynn/2006b.html#26 Multiple address spaces

the semantics of the statement didn't preclude systems w/o paging but with defined virtual memory that was larger than physical memory. the semantics of the statement were merely observing that there had been "some" systems that had defined virtual memory that wasn't larger than physical memory ... and discussed an example.

it gave an example of boeing having modified an os/360 mvt release 13 to have virtual memory cobbled on the side, running on a 360/67 (which supported virtual memory hardware). the modifications didn't include support for paging, the amount of defined virtual memory was the same as physical memory ... and the use was to manage contiguous storage fragmentation for long-running (2250 graphics) applications.

the semantics of the statement also weren't intended to apply to the architected amount of virtual addressing vis-a-vis the amount of real addressing. the 360/67 supported (normal) 360 24-bit real addressing and 24-bit virtual addressing (as well as 32-bit, not 31-bit, virtual addressing). however, the 360/67 had a 1mbyte max. real storage (the 360/67 two-processor smp supported 2mbytes combined real storage ... 1mbyte from each processor). the semantics of the statement were intended to merely point out that there had been some systems configured such that the amount of configured virtual storage and the amount of real storage were identical ... but that doing so could still serve some useful purpose.

a different variation of this was done for vm/vs1 "handshaking". VS1 had a single virtual memory table (similar to vs2/svs). For vm/vs1 handshaking ... you might define a 4mbyte VS1 virtual machine ... VS1 would then define a (single) 4mbyte virtual address space ... effectively having a one-for-one mapping between the VS1 virtual address space and its perceived "real" (virtual machine) address space. When handshaking was enabled, VM could present "virtual" page faults to the VS1 virtual machine ... VM would perform the actual page (replacement) operation ... but it would allow the VS1 supervisor an opportunity to perform a task switch to a different task. VM would also present a pseudo I/O interrupt to VS1 when the paging operation had finished. This eliminated double-paging overhead (various performance penalties from having both the virtual machine and VM doing paging operations). In addition, the VM paging implementation was more efficient than the VS1 implementation. I had drastically reduced the paging pathlength when I rewrote it for cp67 (in the 60s when i was an undergraduate). There was also a significant reduction in paging pathlength, as well as improved accuracy of the page replacement algorithm, in what I released in the resource manager for vm370.
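
the flow described above might be sketched like this (python pseudo-model; purely illustrative, not actual vm or vs1 logic):

    # vm/vs1 handshaking sketch: vm reflects a "pseudo" page fault to the
    # vs1 virtual machine so the vs1 supervisor can task-switch; vm does
    # the single, real paging i/o and then presents a pseudo i/o
    # interrupt, making the faulting task dispatchable again.
    from collections import deque

    class VS1Guest:
        def __init__(self, tasks):
            self.ready = deque(tasks)
            self.waiting = {}                    # page -> blocked task

        def pseudo_page_fault(self, page):
            self.waiting[page] = self.ready.popleft()      # current task blocks
            return self.ready[0] if self.ready else None   # task-switch

        def pseudo_io_interrupt(self, page):
            self.ready.append(self.waiting.pop(page))      # dispatchable again

    class VMHost:
        def __init__(self, guest):
            self.guest = guest

        def guest_page_fault(self, page):
            nxt = self.guest.pseudo_page_fault(page)   # guest keeps running
            # ... vm performs the one real page read here ...
            self.guest.pseudo_io_interrupt(page)       # on i/o completion
            return nxt

    guest = VS1Guest(["taskA", "taskB"])
    print(VMHost(guest).guest_page_fault(page=42))     # -> taskB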

misc. collected posts about scheduling features in the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare

misc. collected posts about paging and page replacement algorithms ... some of which also appeared in the resource manager release
https://www.garlic.com/~lynn/subtopic.html#wsclock

Architectural support for programming languages

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Architectural support for programming languages
Newsgroups: comp.arch
Date: Mon, 13 Feb 2006 11:51:20 -0700
torbenm@app-4.diku.dk (Torben Ægidius Mogensen) writes:
In the 70's and 80's many processors were designed to run specific languages, e.g., LISP, Occam, Forth, Pascal or Smalltalk. These came out of fashion in the 90's because it was found that better compilation techniques could get the same or better speed on stock hardware, which due to economies of scale was typically fabricated on better processes. Additionally, these specialised processors were typically not so good at running other languages like C or Fortran. You could say that the LISP machines etc. were the real victims of the RISC "revolution", since RISC obviously didn't manage to oust the more traditional CISC microprocessors (x86 being the case in point).

801 was a 1970s risc definition ... i've claimed that it was sort of a swing to the opposite extreme because of the debacle with FS (future system, extremely complex hardware). misc. collected postings mentioning FS
https://www.garlic.com/~lynn/submain.html#futuresys

it also went to the opposite extreme over cache consistency ... again possibly in reaction to the heavy overhead that the mainframe was paying to maintain a very strong memory model and cache consistency (in smp systems). misc collected 801, risc, romp, rios, etc postings
https://www.garlic.com/~lynn/subtopic.html#801

from an old post that dredged up several even older references:
https://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics

one of the items included in the above (see above for mention of the difference between *801* and an *8100*).

Date: 79/07/11 11:00:03
To: wheeler

i heard a funny story: seems the MIT LISP machine people proposed that IBM furnish them with an 801 to be the engine for their prototype. B.O. Evans considered their request, and turned them down.. offered them an 8100 instead! (I hope they told him properly what they thought of that)


... snip ... top of post, old email index

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Obsolete phrases

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Obsolete phrases
Newsgroups: alt.folklore.computers
Date: Mon, 13 Feb 2006 14:09:36 -0700
David Scheidt writes:
Mail readers that tell recipients that someone else got a bcc are broken -- it's not blind then. The recipient doesn't know, and has no way of knowing, whether the message was sent to other people not on the address list.

most of the time i've seen that message was when there was neither a TO: nor a CC: ... and so it seemed that the mail reader assumed that all the recipients were undisclosed (aka bcc:).
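
fwiw, a trivial sketch of the heuristic such a mail reader might be applying (python; illustrative only):

    # if a delivered message carries neither To: nor Cc:, some mail
    # readers conclude that every recipient must have been bcc'd
    from email import message_from_string

    def looks_all_bcc(raw_message):
        msg = message_from_string(raw_message)
        return msg.get("To") is None and msg.get("Cc") is None

    print(looks_all_bcc("From: a@example.com\nSubject: hi\n\nbody"))  # True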

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Lynn at garlic.com

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Lynn at garlic.com
Newsgroups: bit.listserv.ibm-main
Date: Mon, 13 Feb 2006 14:39:35 -0700
Terry Sambrooks wrote:
Hi,

This is off topic, but I keep getting e-mails from a lynn at garlic.com, and wondered if this was the same Anne and Lynn Wheeler.

If it is, please note that the only part of the e-mail I am receiving is from my ISP telling me that a virus has been deleted.

I apologise for using the list in this way but could not think of another way to reply, as I did not wish to respond to the e-mail in error in case it was spoofed.

Kind regards - Terry


since i don't have your email address (the ibm-main mailing list mangles it) ... it is probably somebody impersonating "lynn@garlic.com" (who possibly harvested your email address from somewhere else) sending to you. i thought ibm-main would have had a virus stripper (if somebody was trying to impersonate to the mailing list with an attached virus).

my email address shows in the clear in usenet newsgroup postings (but not in ibm-main mailing list stuff) ... and is readily available from numerous web pages.

viewing the complete headers on the mail may give you some idea of where the email originates. some number of email impersonations will dummy up the originating "Received:" header information (many don't bother since most people never look) ... but typically the next "Received:" header is some ISP ... which will give both its "name" and ip-address ... as well as the ip-address of where it got the email from. of course, it is possible for an impersonation to dummy up some sequence of "Received:" headers ... but eventually the email should get to some ISP where the "Received:" header information can be reasonably trusted.
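
a minimal sketch of walking that chain (python; illustrative ... remember that only the hops added by servers you trust mean anything, anything below them could have been forged by the sender):

    # print the Received: chain, newest hop first; the top entries were
    # added by your own ISP/mail servers and are the trustworthy ones
    from email import message_from_string

    def received_chain(raw_message):
        msg = message_from_string(raw_message)
        for i, hop in enumerate(msg.get_all("Received", [])):
            print(f"hop {i}: {' '.join(hop.split())}")

    received_chain("Received: from mx.example.net (10.0.0.1)\n"
                   "Received: from forged.example (1.2.3.4)\n"
                   "From: lynn@garlic.com\n\nbody")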

It's not totally impossible that a (zombie) virus email could originate from this machine ... but it is highly unlikely (I know of no reported instances of email zombie/virus for this particular machine's configuration).

at times in the past, i've had episodes where i've gotten tens of thousands of email bounce messages ... somebody impersonating "lynn@garlic.com" using a (purchased?) mailing list that possibly involved hundreds of thousands of names (and happened to contain tens of thousands of invalid email addresses).

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Tue, 14 Feb 2006 10:54:15 -0700
jmfbahciv writes:
Anything that has a scheduler. Lynn understands what I mean and I think he has some documentation about it.

this is not so much scheduler ... this is fastpath pathlength optimization and a mis-match in processing speeds. i had done a lot of pathlength optimization in addition to lots of scheduler work ... like the resource manager
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

the disk controller for 3330s was the 3830 ... it was a horizontal microcode engine and relatively fast. going to 3380 disks (floating heads, air-bearing, etc), the controller was the 3880. the 3880 used the jib-prime "vertical" microprocessor and was doing a lot more function. data transfer was 3mbytes/sec for 3380s vis-a-vis .8mbytes/sec for 3330s. the jib-prime was a lot slower than the 3830 processor ... so part of the implementation was to take the jib-prime out of the (3mbyte/sec) dataflow and only use it for the control path.

so part of product announce included performance QA vis-a-vis prior products. the test for the 3880 controller was done at the santa teresa lab using a "two pack" VS1 system (i.e. a VS1 system using only two disk drives). there were some issues with the 3880 taking too long to process ending i/o status. the 3880 controller was modified to present ending status "early" ... and then proceed with final cleanup of the i/o operation in parallel with the processor handling the i/o interrupt. this got the 3880 performance QA qualification within requirements.

so up in the product test lab ... bldg. 15 (which does mostly electronic, functional and environmental tests ... they have a huge pressure/humidity/temperature chamber that you can even roll big boxes into and then lock the door) ... they decided to install a 3880, replacing a 3830.

the disk engineering lab (bldg. 14) and the product test lab (bldg. 15) had been doing all their regression testing using dedicated mainframes with a custom stand-alone monitor. they had tried doing this with a standard operating system (allowing, among other things, multiple tests to run concurrently on different boxes). however, they found that, at the time, the standard mvs operating system had a 15 minute MTBF in the testcell environment.

i had undertaken to rewrite the i/o supervisor so that it was completely bullet proof and would never fail ... allowing it to be installed on the mainframes used for testing in bldg. 14 & bldg. 15. misc. past posts.
https://www.garlic.com/~lynn/subtopic.html#disk

so, bldg. 15 had this enhanced operating system running on a brand new 3033 (bldg. 14 & bldg. 15 tended to get one of the first machines ... nominally the processor engineers got the first couple machines, and then the next one went out to the west coast for disk i/o testing). they could do concurrent testing of all available boxes and most of the processor was still idle (close to 99 percent wait state) ... so they put up an additional string of disks and ran their own internal interactive timesharing service.

i've posted before that this internal service on the 3033 was set up to run things like the air-bearing simulations for the new generation of 3380 floating heads (which had previously been getting several-week turnaround on the 370/195 across the street in bldg. 28/sjr).

in any case, one weekend they upgraded the internal disk string/controller with a 3880. monday morning they called me all agitated and asked me what i had done to the operating system over the weekend ... their internal service performance on the 3033 had degraded enormously. of course, i had done nothing. after some detailed analysis the problem was localized to the "new" 3880.

in the VS1 performance qualification tests, the 3880 early interrupts were taking some time for VS1 to process, while the 3880 continued its house-cleaning in parallel. however, my operating system on the 3033 was coming back almost immediately and hitting the 3880 controller with a new operation (before it had time to finish its cleanup of the previous operation). This forced the 3880 to present control unit busy (SM+BUSY) status ... and do a whole bunch of additional cleanup. Where the 3830 control unit was immediately able to accept the next operation ... the 3880 control unit had a significant delay ... and hitting it with a new operation before it was ready made the delay even worse. The 3033 disk i/o thruput appeared to drop by 80-90 percent using the "new" 3880 controller (compared to the 3830 controller).

A combination of the 3033 processor speed and the highly optimized i/o pathlength (which i did as part of making the i/o supervisor bullet-proof) was too much for the 3880 controller to handle.
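
the timing interaction can be seen with a back-of-envelope model (python; the numbers are invented for illustration, not measured 3880 values):

    # the controller presents ending status "early" and needs CLEANUP
    # more usecs before it can accept another operation.  a host that
    # redrives sooner than that eats control-unit-busy (SM+BUSY) plus
    # extra cleanup, instead of overlapping the cleanup with its own
    # interrupt-handling pathlength.
    CLEANUP = 2000        # usecs of post-status housecleaning (invented)
    BUSY_PENALTY = 3000   # extra usecs when hit while still busy (invented)

    def per_op_delay(host_redrive_usecs):
        if host_redrive_usecs >= CLEANUP:
            return 0                      # cleanup fully overlapped (vs1 case)
        return (CLEANUP - host_redrive_usecs) + BUSY_PENALTY

    print(per_op_delay(2500))   # slow host: 0 extra usecs per operation
    print(per_op_delay(100))    # fast 3033 + optimized pathlength: 4900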

Fortunately the incident was six months before 3880 first customer ship and there was time to patch up the worst of the problems in the 3880.

mentioning the 370/195 ... there is a completely different "processor too fast" issue with it. the 195 had various functional units and a 64-instruction pipeline and would do a lot of asynchronous, overlapped instruction execution. however, most codes ran at around half of 195 peak thruput because of various issues in keeping the pipeline full.

there was a project (that never shipped) to add "hyperthreading" to the 195 (aka analogous to the recent next new thing, "HT" technology ... the next-new-thing before the most recent next-new-thing, dual-core). this basically simulated a two-processor machine ... adding a second instruction stream, additional registers, etc to an otherwise unmodified 195. it still had the same pipeline and same functional units. instructions in the pipeline were identified with a one-bit flag as to the instruction stream each belonged to ... and each register also had a similar flag.

this was to address the problem that the processor hardware and functional units were faster than the memory technology feeding the beast. a recent post in a related thread in comp.arch
https://www.garlic.com/~lynn/2006b.html#22 Would multi-core replace SMPs?
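
a toy model of why the second i-stream helps (python; invented numbers ... a single stream leaves the pipeline idle whenever it stalls, a second independent stream can issue into those dead cycles):

    # round-robin issue from n instruction streams into one pipeline;
    # each "instruction" is (stream_id, stall_cycles).
    def utilization(streams):
        pending = [list(s) for s in streams]
        stalled_until = [0] * len(streams)
        busy = t = 0
        while any(pending):
            for i, stream in enumerate(pending):
                if stream and t >= stalled_until[i]:
                    _, stall_cycles = stream.pop(0)
                    stalled_until[i] = t + stall_cycles
                    busy += 1
                    break
            t += 1
        return busy / t

    work = [(0, 3)] * 100                       # every instruction stalls 3 cycles
    print(utilization([work]))                  # one i-stream: ~0.33
    print(utilization([work, [(1, 3)] * 100]))  # two i-streams: ~0.67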

misc. past posts about the 370/195 dual i-stream work in the early 70s
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#19 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#14 Multicores

misc. past posts mention the 3380 thin-film, air-bearing head simulation:
https://www.garlic.com/~lynn/2001n.html#39 195 was: Computer Typesetting Was: Movies with source code
https://www.garlic.com/~lynn/2002j.html#30 Weird
https://www.garlic.com/~lynn/2002n.html#63 Help me find pics of a UNIVAC please
https://www.garlic.com/~lynn/2002o.html#74 They Got Mail: Not-So-Fond Farewells
https://www.garlic.com/~lynn/2003b.html#51 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003b.html#52 Disk drives as commodities. Was Re: Yamhill
https://www.garlic.com/~lynn/2003j.html#69 Multics Concepts For the Contemporary Computing World
https://www.garlic.com/~lynn/2003m.html#20 360 Microde Floating Point Fix
https://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
https://www.garlic.com/~lynn/2004.html#21 40th anniversary of IBM System/360 on 7 Apr 2004
https://www.garlic.com/~lynn/2004b.html#15 harddisk in space
https://www.garlic.com/~lynn/2004o.html#15 360 longevity, was RISCs too close to hardware?
https://www.garlic.com/~lynn/2004o.html#25 CKD Disks?
https://www.garlic.com/~lynn/2005.html#8 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005f.html#4 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005f.html#5 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005o.html#44 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2006.html#29 IBM microwave application--early data communications

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Tue, 14 Feb 2006 11:42:48 -0700
Andrew Swallow writes:
How? CPUs do not output L1 cache address. You would have to wait for the data to be written to ram before the second CPU knows it to be dirty. If the location is a lock variable CPU1 has to wait until the ram has been updated before it is safe to do anything else. CPU2 has to read that ram location before it can enter the locked memory.

We basically have a race condition between CPU1, CPU2 and the ram. Not the normal part of a clock cycle but one lasting tens of clock cycles. These races can really mess the software up.


lots of cache implementations have significant (direct) cross-cache chatter about what cache lines they have (cache coherency protocols).

multi-level L1, L2, L3 cache architectures were discussed from at least the early 70s. a more recent convention was L1 on-chip ... and L2 off-chip. then, as chips started getting larger numbers of circuits ... they were able to move L2 on-chip also.

801 from the 70s was strongly against cache coherency. the I and D caches were separate and non-coherent ... and the early 801s had no provisions for cache-coherent smp/multiprocessing.

i've often claimed that this was in strong reaction to the performance penalty that the mainframes paid for a very strong memory model and multiprocessor cache coherency. however, here is a reference to a mainframe with separate I and D caches ... where there was hardware cache-coherence between the two caches:
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

one of the issues in the 801 with separate, non-coherent I and D caches has to do with "loaders" ... the function that loads programs into memory, preparing them for execution. loaders may treat some amount of the program as data ... resulting in modified areas of the program being resident in the D (data) cache. in such schemes, the loader needs to have a (software) instruction to force modified D cache-lines to RAM ... before starting program execution.
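
the hazard can be illustrated with a toy model (python; not the actual 801 mechanism, just the shape of the problem):

    # separate, non-coherent I and D caches: the loader's stores land in
    # the d-cache, while instruction fetch goes through the i-cache,
    # which may hold stale lines.  before branching to freshly loaded
    # code, the loader must force dirty d-cache lines to ram and
    # invalidate the corresponding i-cache lines.
    ram, dcache, icache = {}, {}, {}

    def store(line, value):                  # loader writes program-as-data
        dcache[line] = (value, True)         # (contents, dirty)

    def ifetch(line):                        # instruction fetch via i-cache
        if line not in icache:
            icache[line] = ram.get(line)
        return icache[line]

    def sync_caches(lines):                  # what the loader must do
        for line in lines:
            value, dirty = dcache[line]
            if dirty:
                ram[line] = value
                dcache[line] = (value, False)
            icache.pop(line, None)           # force refetch from ram

    store(0x100, "new code")
    sync_caches([0x100])                     # without this, ifetch sees stale ram
    assert ifetch(0x100) == "new code"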

eventually the somerset project (austin, motorola, apple, et al) basically undertook to redo 801/rios/power for lower power, single chip, cache coherency, and misc. other stuff (i've somewhat characterized it as taking the multiprocessor motorola 88k stuff and some basic 801/power) ... to create the power/pc. the executive that we had been reporting to when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

moved over to head up somerset. misc. postings on 801, romp, rios, power, power/pc, etc
https://www.garlic.com/~lynn/subtopic.html#801

other approaches were snoopy cache protocols ... like sequent used in the 80s. also, the IEEE sci standard used a directory-lookup cache protocol to get some more scalability (and numa ... non-uniform memory architecture). sci web pages:
http://www.scizzl.com/

for other topic drift, lots of posts on multiprocessing, compare&swap instruction, caches, etc
https://www.garlic.com/~lynn/subtopic.html#smp

misc. past postings on numa, sci, etc
https://www.garlic.com/~lynn/96.html#8 Why Do Mainframes Exist ???
https://www.garlic.com/~lynn/96.html#25 SGI O2 and Origin system announcements
https://www.garlic.com/~lynn/98.html#40 Comparison Cluster vs SMP?
https://www.garlic.com/~lynn/2001b.html#39 John Mashey's greatest hits
https://www.garlic.com/~lynn/2001b.html#85 what makes a cpu fast
https://www.garlic.com/~lynn/2001f.html#11 Climate, US, Japan & supers query
https://www.garlic.com/~lynn/2001j.html#12 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001j.html#17 I hate Compaq
https://www.garlic.com/~lynn/2002g.html#10 "Soul of a New Machine" Computer?
https://www.garlic.com/~lynn/2002h.html#78 Q: Is there any interest for vintage Byte Magazines from 1983
https://www.garlic.com/~lynn/2002i.html#83 HONE
https://www.garlic.com/~lynn/2002j.html#45 M$ SMP and old time IBM's LCMP
https://www.garlic.com/~lynn/2002l.html#52 Itanium2 performance data from SGI
https://www.garlic.com/~lynn/2003.html#0 Clustering ( was Re: Interconnect speeds )
https://www.garlic.com/~lynn/2003.html#6 vax6k.openecs.org rebirth
https://www.garlic.com/~lynn/2003.html#39 Flex Question
https://www.garlic.com/~lynn/2003d.html#57 Another light on the map going out
https://www.garlic.com/~lynn/2003j.html#65 Cost of Message Passing ?
https://www.garlic.com/~lynn/2003p.html#30 Not A Survey Question
https://www.garlic.com/~lynn/2004.html#0 comp.arch classic: the 10-bit byte
https://www.garlic.com/~lynn/2004.html#1 Saturation Design Point
https://www.garlic.com/~lynn/2004d.html#6 Memory Affinity
https://www.garlic.com/~lynn/2004d.html#68 bits, bytes, half-duplex, dual-simplex, etc
https://www.garlic.com/~lynn/2004e.html#2 Expanded Storage
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005.html#50 something like a CTC on a PC
https://www.garlic.com/~lynn/2005d.html#20 shared memory programming on distributed memory model?
https://www.garlic.com/~lynn/2005e.html#12 Device and channel
https://www.garlic.com/~lynn/2005e.html#19 Device and channel
https://www.garlic.com/~lynn/2005f.html#18 Is Supercomputing Possible?
https://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005j.html#16 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#28 IBM/Watson autobiography--thoughts on?
https://www.garlic.com/~lynn/2005m.html#46 IBM's mini computers--lack thereof
https://www.garlic.com/~lynn/2005m.html#55 54 Processors?
https://www.garlic.com/~lynn/2005n.html#4 54 Processors?
https://www.garlic.com/~lynn/2005n.html#6 Cache coherency protocols: Write-update versus write-invalidate
https://www.garlic.com/~lynn/2005n.html#37 What was new&important in computer architecture 10 years ago ?
https://www.garlic.com/~lynn/2005n.html#38 What was new&important in computer architecture 10 years ago ?
https://www.garlic.com/~lynn/2005r.html#43 Numa-Q Information
https://www.garlic.com/~lynn/2005r.html#46 Numa-Q Information
https://www.garlic.com/~lynn/2005v.html#0 DMV systems?
https://www.garlic.com/~lynn/2006.html#16 Would multi-core replace SMPs?
https://www.garlic.com/~lynn/2006.html#32 UMA vs SMP? Clarification of terminology
https://www.garlic.com/~lynn/2006b.html#14 Expanded Storage
https://www.garlic.com/~lynn/2006c.html#1 Multiple address spaces

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Tue, 14 Feb 2006 17:14:16 -0700
Andrew Swallow writes:
If you are not processing a shared memory area the on chip ram does not need flushing (until the chip runs out of ram).

Virtual storage systems normally support more than one flush - flush and empty vs flush and keep. With ram, unlike fifos, flush and keep is the default.

Once a read only area has been backed up it does not need writing again and can stay on chip until the ram needs recycling. Partial flushing at the page level is normal practice.

The management software for virtual storage normally contains sophisticated code to decide which areas to keep and which to discard.

(Although KISS can work, write everything that has changed to disk and discard everything - then read back only what you need when you need it.)


there is what i call the dup/no-dup issue ... with multi-level "cache" architectures ... it typically is aggravated when two adjacent levels are of similar sizes.

i guess i ran into it originally in the 70s with three levels of storage hierarchy: main memory, 2305 fixed-head paging disk, and 3330 paging disk. i had done page migration ... moving inactive pages from (fast) 2305 paging to 3330 disk paging.

the issue was that as main storage sizes were growing ... they were starting to be comparable in size to the available 2305 page capacity; say 16mbytes of real memory and a couple of 12-mbyte 2305s.

up until then, the strategy was that when fetching a page from 2305 to main storage ... the copy on the 2305 remained allocated. this was to optimize the case where the page in real storage needed replacement ... if it hadn't been modified during its most recent stay in memory ... then it could simply be discarded w/o having to write it out (the "duplicate" copy on the 2305 was still valid). the problem was that the 2305s could be nearly completely occupied with pages that were also in real storage. when 2305 capacity was being stressed ... it was possible to get better thruput by going to a "no-dup" strategy: when a page was read from 2305 into memory, the copy on the 2305 was deallocated and made available. this always cost a write when a page was selected for replacement ... but the total number of "high-performance" pages might be doubled (the total number of distinct pages either in memory or on 2305).

this showed up later, in the early 80s, with the introduction of the ironwood/3880-11 disk controller page cache. it was only 8mbytes. you might have a system with 32mbytes of real storage and possibly four 3880-11 controllers (each w/8mbytes, for 32mbytes total).

you could either do 1) a normal page read ... in which case the page was both in memory and in the disk controller cache, or 2) a "destructive" read, in which case any cached copy was discarded. when you replaced and wrote out a page, it went to the cache.

the "dup" strategy issue was that the 32mbytes of disk controller cache was occupied a pretty much duplicates of pages in real storage ... and therefor there would never be a case to fetch them (since any request for a page would be satisfied by the copy in real memory). it was only when a non-dup strategy was used (destructive reads) that you would have a high percentage of pages in the disk controller that were also not in real memory... and therefore might represent something useful ... since only when there was call for a page not in real storage would there be a page i/o request that could go thru the cache controller. If the majority of the disk controller cache was "polluted" with pages also in real storage (duplicates) ... then when there was a page read .. there would be very small probability that the requested page was in the disk cache (since there were relatively few pages in the cache that weren't already in real storage).

something similar came up a few years ago with some linux configurations ... machines with real memory of 512mbytes, 1gbyte, and greater ... with 9gbyte disk drives. if you allocated a 1gbyte paging area with 1gbyte of real storage ... and a dup strategy ... then the max. total virtual pages would be around 1gbyte (modulo any lazy allocation strategy for virtual pages). however, a no-dup strategy would support up to 2gbytes of virtual pages (1gbyte on disk and 1gbyte in memory ... and no duplicates between what was in memory and what was on disk). the trade-off was that the no-dup strategy required more writes (on every page replacement, whether the page had been altered or not) while potentially allowing significantly more total virtual pages.
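
the capacity side of the trade-off is just arithmetic (python sketch; illustrative):

    # dup vs no-dup accounting: with "dup", every in-memory page also
    # keeps its backing-store slot, so the distinct page population is
    # bounded by the backing store; with "no-dup", memory and backing
    # store hold different pages, but every replacement costs a write,
    # modified or not.
    def distinct_pages(memory_pages, backing_pages, strategy):
        if strategy == "dup":
            return backing_pages                 # memory pages duplicated there
        return memory_pages + backing_pages      # no duplicates anywhere

    MEM = DISK = 256 * 1024   # 1gbyte each, in 4k pages
    print(distinct_pages(MEM, DISK, "dup"))      # ~1gbyte of virtual pages
    print(distinct_pages(MEM, DISK, "no-dup"))   # ~2gbytes of virtual pages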

.... and for another cache drift ... i had done the original distributed lock manager (DLM) for hacmp product
https://www.garlic.com/~lynn/subtopic.html#hacmp

it supported distributed/cluster operation and supported semantics similar to the vax/cluster dlm (in fact, some of the easiest ports to ha/cmp were dbms systems that had developed support for running on vax/cluster). one of the first such dbms (adapting their vax/cluster operation to ha/cmp) was ingres (which subsequently went thru a number of corporate owners and relatively recently was being spun off as open source). in fact, some of the ingres people contributed their list of top things that vax/cluster could have done better (in hindsight), which i was able to use in the ha/cmp dlm.

now, some number of these database systems have their own main-memory cache and do things that they may call fast commit and "lazy writes" ... i.e. as soon as the modified records are written to the log, they are considered committed ... even tho the modified records remain in the real-storage cache and haven't been written to their (home) database record position. accesses to the records would always get the real-storage, modified, cached version. in the event of a failure, the recovery code would read the committed changes from the log, read the original records from the dbms, apply the changes, and write out the updated records.

so in distributed mode, dbms processes needed to obtain the specific record lock from the DLM. i had worked out a scheme where, if there was an associated record in some other processor's cache, i would transmit the lock grant together with the modified record (a direct cache-to-cache transfer), avoiding a disk i/o. at the time, the dbms vendors were very sceptical. the problem wasn't the transfer but the recovery. they had adopted an implementation where, in a distributed environment, when a lock/record moved to a different processor (cache), rather than directly transmitting it, it would first be forced to its database (home) record position on disk ... and the dbms on the other processor would then retrieve it from disk (this is somewhat akin to forcing it from cpu cache to ram before it can be loaded into another cpu cache). the problem was that there were all sorts of recovery scenarios with distributed logs and distributed caches ... if multiple versions of fast-commit records were laying around in different distributed logs. the issue during recovery was determining the order of possibly multiple changes, in different logs, for the same record. potentially none of these modifications might yet appear in the physical database record ... aka what was the recovery application order of modifications from different logs for the same record.

at the time, it was deemed too complex a problem to deal with ... so they went with the safer approach ... have at most one outstanding fast-commit modification, and any time a record moved from one distributed dbms cache to another dbms cache ... first force it to disk. however, in the past couple years, i've had some vendor people come back and say that they were now interested in doing such an implementation.
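
in outline, the two transfer policies look something like this (python sketch; the interfaces are hypothetical stand-ins, not the ha/cmp dlm code):

    # moving a locked, fast-committed record between two dbms caches:
    # "force" writes it to its home position and rereads it (2 disk
    # i/os per transfer); "direct" piggybacks the modified record on the
    # lock grant (0 disk i/os) -- at the price of recovery having to
    # order fast-commit versions across multiple distributed logs.
    from collections import namedtuple

    Record = namedtuple("Record", "key home data")

    class Disk(dict):
        def write(self, home, data): self[home] = data
        def read(self, home): return self[home]

    def grant_lock(key, payload=None):
        pass   # placeholder for the dlm lock-grant message

    def transfer_force(record, disk, remote_cache):
        disk.write(record.home, record.data)          # force to home position
        grant_lock(record.key)
        remote_cache[record.key] = disk.read(record.home)

    def transfer_direct(record, remote_cache):
        grant_lock(record.key, payload=record.data)   # record rides the grant
        remote_cache[record.key] = record.data

    cache_b = {}
    rec = Record(key="acct42", home=1234, data="balance=100")
    transfer_force(rec, Disk(), cache_b)   # two disk i/os
    transfer_direct(rec, cache_b)          # zero disk i/os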

some specific posts mentioning dlm:
https://www.garlic.com/~lynn/2000.html#64 distributed locking patents
https://www.garlic.com/~lynn/2000g.html#32 Multitasking and resource sharing
https://www.garlic.com/~lynn/2001.html#40 Disk drive behavior
https://www.garlic.com/~lynn/2001c.html#66 KI-10 vs. IBM at Rutgers
https://www.garlic.com/~lynn/2001e.html#2 Block oriented I/O over IP
https://www.garlic.com/~lynn/2001j.html#47 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2001k.html#5 OT - Internet Explorer V6.0
https://www.garlic.com/~lynn/2002e.html#67 Blade architectures
https://www.garlic.com/~lynn/2002f.html#1 Blade architectures
https://www.garlic.com/~lynn/2002f.html#4 Blade architectures
https://www.garlic.com/~lynn/2002k.html#8 Avoiding JCL Space Abends
https://www.garlic.com/~lynn/2003i.html#70 A few Z990 Gee-Wiz stats
https://www.garlic.com/~lynn/2004i.html#1 Hard disk architecture: are outer cylinders still faster than inner cylinders?
https://www.garlic.com/~lynn/2004i.html#2 New Method for Authenticated Public Key Exchange without Digital Certificates
https://www.garlic.com/~lynn/2004m.html#0 Specifying all biz rules in relational data
https://www.garlic.com/~lynn/2004m.html#5 Tera
https://www.garlic.com/~lynn/2004q.html#10 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2004q.html#70 CAS and LL/SC
https://www.garlic.com/~lynn/2005.html#40 clusters vs shared-memory (was: Re: CAS and LL/SC (was Re: High Level Assembler for MVS & VM & VSE))
https://www.garlic.com/~lynn/2005.html#55 Foreign key in Oracle Sql
https://www.garlic.com/~lynn/2005h.html#26 Crash detection by OS
https://www.garlic.com/~lynn/2005h.html#28 Crash detection by OS
https://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
https://www.garlic.com/~lynn/2005r.html#23 OS's with loadable filesystem support?
https://www.garlic.com/~lynn/2005u.html#38 Mainframe Applications and Records Keeping?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe Jobs Going Away

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Jobs Going Away
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 15 Feb 2006 02:43:50 -0700
David.Speake wrote:
This stimulates. Why should they not be able to run UNIX/LINUX/AS400/Alpha-VMS or even Windows on Z chips without Z/OS or VM. I have no idea what the instruction set burned into the metal is like nor how I/O is really done at the hardware level. The ONLY metal instruction I know of is SIE and I saw that one here less than a month ago. Does only the milli/micro/nano code have to change for it to pretend to be anything desired? Does this level resemble the S/360 descendants' POP instructions at all? I saw some S/370 micro code listings about 30 years ago, but... For all I know the Z chips have the same "metal" instruction set as the Pentium X/Y/Z whatever. Pointers desired and appreciated.

recent posting on macrocode, which Amdahl used to implement the original hypervisor ... the eventual response was pr/sm on the 3090.
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

the low-end and mid-range 370s tended to be "vertical" microcode ... i.e. the microcode instructions look a lot like machine code (for a long time, the rough rule of thumb was an avg. of 10 "microcode" instructions for every 370 instruction). the high-end 370s tended to be "horizontal" microcode (something more akin to itanium than pentium).

at one point there was a large project to converge the large number of (different) internal microprocessors to 801/risc, and the follow-on to the 4341 (the 4381) was going to use an 801/risc processor. that was eventually killed when it was shown that it was becoming possible to implement a large fraction of the 370 instructions directly in silicon. misc. 801, risc, romp, rios, power, power/pc, fort knox and somerset postings
https://www.garlic.com/~lynn/subtopic.html#801

for some topic drift a somewhat related post in comp.arch
https://www.garlic.com/~lynn/2006c.html#3 Architectural support for programming languages

lots of past postings about 360 and 370 microcode
https://www.garlic.com/~lynn/submain.html#mcode

a recent posting touching on the horizontal microcode processor in the 3830 disk controller being replaced with the jib-prime vertical microcode processor in the 3880
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer

X.509 and ssh

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Wed, 15 Feb 2006 10:36:07 -0700
Chuck writes:
Couldn't something be built into the protocol to distribute known_hosts? For example I connect to server X, its fingerprint matches the one I have on file, therefore I trust it and it pushes down a list of all the hosts it trusts? If there are any new ones, I get them the next time I connect to the server.

basically, all the public key oriented authentication schemes are dependent on having a local trusted repository of public keys and what they are associated with. this is true even for PKI x.509 infrastructures ... although in the PKI case, the repository may be significantly obfuscated and almost cloaked in mysticism.

the PKI case may have all sorts of obstructions involved with dealing with any local trusted public key repository and/or adding new entries to the repository ... where an attempt is made, in both software and policy, that only certain privileged entities known as "certification authorities" have their trusted public keys in your repository.

another variation is secure DNS. basically, almost any kind of information can be registered with the domain name infrastructure and distributed ... including lots of infrastructure for local caching of that information. basically, secure DNS would do what you describe ... if public keys were registered in the domain name infrastructure.

one of the issues that i've observed with SSL domain name server certificates is that the PKI certification authority industry has somewhat sponsored secure DNS and the registration of public keys.

the issue is that frequently somebody requests an SSL domain name server certificate from a certification authority ... and must supply a bunch of identification information. the certification authority then must validate that the supplied identification information corresponds to the identification information onfile with the domain name infrastructure (for that domain name owner). this is a time-consuming, expensive and error-prone process.

note, however, that there have been various vulnerabilities with the domain name infrastructure, like domain name hijacking ... which puts at risk the certification process done by the PKI certification authorities. so, somewhat to improve the integrity of the domain name infrastructure (and therefore the certification done by PKI certification authorities), the proposal is that domain name owners register their public key when they register a domain name.

this also offers an opportunity for the PKI certification authorities: they can now require that SSL domain name certificate applications be digitally signed by the applicant. then the PKI certification authority can replace their time-consuming, error-prone and expensive identification process with a much simpler, more reliable, and less expensive authentication process ... doing a real-time retrieval of the domain name owner's onfile public key and verifying the digital signature attached to the SSL domain name certificate application.

This significantly enhances the business process of the PKI certification authorities as well as the integrity of the domain name infrastructure (which the PKI certification authorities are dependent on for their certifications). The catch-22 or downside is that if the PKI certification authorities can do real-time retrieval of onfile public keys for verifying digital signatures ... then it is possible that the rest of the world could start doing it also ... which in turn could make SSL domain name certificates redundant and superfluous.
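
the authentication step would be straightforward (python sketch; dns_lookup_public_key is a hypothetical stand-in for retrieving the onfile key, and the pyca "cryptography" package is assumed for signature verification):

    # CA authenticates a certificate application by verifying its digital
    # signature against the public key the domain name owner registered
    # with the domain name infrastructure -- replacing the error-prone
    # identification/paperwork cross-check.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def dns_lookup_public_key(domain) -> Ed25519PublicKey:
        raise NotImplementedError   # hypothetical: real-time onfile key retrieval

    def authenticate_application(domain, application_bytes, signature):
        key = dns_lookup_public_key(domain)
        try:
            key.verify(signature, application_bytes)
            return True             # applicant controls the registered key
        except InvalidSignature:
            return False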

Furthermore, one could imagine that such a secure DNS implementation (real-time retrieval of onfile, trusted public keys) could have its local caching implementation slightly enhanced to interoperate with any local repository of trusted public keys ... and be integrated with something like a PGP-like user interface allowing users to provide optional trust ratings for any public keys in their local repository of trusted public keys.

misc. past posts on SSL domain name server certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcert

one suggestion for an enhancement to the SSL protocol is to allow a public key and associated crypto-protocol options to be piggybacked on the domain name infrastructure response to the request for a server's ip-address. the client then has the server's public key in the same response in which it acquires the server's ip-address. the client could then generate the session key, encrypt it with the server's public key, and transmit it in the initial encrypted session setup ... eliminating all the upfront SSL session protocol chatter. one could even imagine it being done for a single round-trip transaction protocol ... say using something like UDP.
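
the client side might look something like this (python sketch; resolve_with_key, wrap_key and encrypt are hypothetical stand-ins for the dns lookup and the crypto primitives ... the point is just that key distribution rides the existing dns response):

    # one-round-trip transaction: the dns response delivers the server's
    # public key along with its ip-address, so the client can wrap a
    # fresh session key and send the first encrypted request in a single
    # udp datagram -- no certificate, no upfront handshake chatter.
    import os
    import socket

    def resolve_with_key(name):
        raise NotImplementedError   # hypothetical: (ip, server_public_key, options)

    def wrap_key(server_public_key, session_key):
        raise NotImplementedError   # hypothetical public-key encryption

    def encrypt(session_key, data):
        raise NotImplementedError   # hypothetical symmetric cipher

    def transact(name, request, port=4433):
        ip, server_key, _options = resolve_with_key(name)
        session_key = os.urandom(32)
        datagram = wrap_key(server_key, session_key) + encrypt(session_key, request)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(datagram, (ip, port))
        reply, _ = sock.recvfrom(65535)
        return reply                # decrypted with session_key in a fuller sketch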

the client currently is able to cache the ip-address response from the domain name infrastructure ... so one could imagine that any public key responses could be cached similarly. with some enhancements, such public key responses could even be given longer cached lifetimes (say, in the local trusted public key repository).

basically ... use the existing domain name infrastructure's real-time information distribution and caching mechanism ... and eliminate the need for the stale, static, redundant and superfluous PKI (x.509 or other) certificate-based operations.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mainframe Jobs Going Away

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mainframe Jobs Going Away
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 15 Feb 2006 12:48:17 -0700
Shmuel Metz, Seymour J. wrote:
1. AIX/370 and IX/370 ran on older S/370 processors; it wouldn't take much to get them up on zSeries if the source code were still around and somebody cared.

aix/370 was a port of ucla's locus (a different heritage than the power aix, which was an at&t unix ... originally done by the company that had been contracted to do pc/ix). it was a package with both aix/370 and aix/ps2.

the acis organization in palo alto (adjacent to the palo alto science center) was working with a number of west coast universities.

one of the earlier ports they started on was bsd for the 370. however, that got retargeted to the pc/rt (as an alternative to austin's aix) and called aos. numerous posts on 801, aix, romp, rios, power, power/pc, somerset, fort knox, etc
https://www.garlic.com/~lynn/subtopic.html#801

they (acis) were also working with ucla's locus and had early ports to the series/1 and some motorola 68k machines before starting on the 370 (and ps2) ports that became aix/370 and aix/ps2. in some sense this was the academic/university flavor of the more mainstream SAA ... aka, in much of its efforts, getting PC applications running on the backend mainframe and attempting to stuff the client/server genie back in the bottle.

locus, in addition to providing a distributed network filesystem ... ala nfs and afs ... provided local file caching (ala afs) ... but afs only provided "full" file caching ... while locus also supported partial file caching. locus also provided process migration ... both between machines of the same architecture ... and between machines of different architectures (modulo some caveats about equivalent executable binaries being available for the different architectures).

in some of the early OSF and DCE meetings ... you saw CMU andrew people and UCLA locus people represented (in addition to MIT project athena people ... for things like kerberos). for some random topic drift, misc. posts concerning kerberos (and pk-init draft)
https://www.garlic.com/~lynn/subpubkey.html#kerberos

misc. past postings with some SAA reference ... especially when we had created the 3-tier architecture and were pitching it in customer executive briefings (and were taking some amount of grief from the SAA as well as the t/r factions)
https://www.garlic.com/~lynn/subnetwork.html#3tier

also some postings about relationship between SAA and trying to return to more of the thin-client, terminal emulation paradigm
https://www.garlic.com/~lynn/subnetwork.html#emulation

at the time we were taking all of the hits from the SAA forces for doing the 3-tier architecture stuff ... the guy promoted to head up the SAA effort was somebody i had worked with in Endicott on the ecps microcode assist (originally for the 370/148). a few misc. past posts discussing the ecps vm microcode assist
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/2003f.html#43 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#47 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#52 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#54 ECPS:VM DISPx instructions
https://www.garlic.com/~lynn/2003f.html#56 ECPS:VM DISPx instructions

X.509 and ssh

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Wed, 15 Feb 2006 12:55:46 -0700
Chuck writes:
I like it. I really like it. Is this just at the proposal stage or is it actually being implemented somewhere?

ref:
https://www.garlic.com/~lynn/2006c.html#10 X.509 and ssh

i think that the PKI/CA forces are quite ambivalent about it ... on the one hand they need the public key registration as part of overall improving the integrity of the domain name infrastructure (which they are dependent on when they go thru the business process of certifying the domain name owner, before issuing the digital certificate representing that they had done the appropriate certification).

They can also use real-time retrieval of the onfile, registered public key to move from the error-prone, time-consuming, and expensive identification process to a much simpler, less expensive, and more reliable authentication process.

however, opening up real-time retrieval of onfile, registered public keys is a pandora's box for them, since it leads down the trail of obsoleting the stale, static, redundant and superfluous certificates.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Wed, 15 Feb 2006 14:30:56 -0700
Chuck writes:
If I were a CA I would NOT be pushing for this at all. What good would it be to me to speed up the certification process if it eliminates me in the end? Or perhaps they will simply get into the domain registration business. :)

ref:
https://www.garlic.com/~lynn/2006b.html#37 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#10 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#12 X.509 and ssh

the part about making it cheaper, more reliable, and less time-consuming by verifying a digital signature on a certificate application (by doing a real-time retrieval of onfile, registered public key) ... is somewhat incidental to the real objective of improving the overall integrity of the domain name infrastructure.

by having a public key registered as part of domain name registration ... then all future communication between the domain name owner and the domain name infrastructure can be digitally signed (and validated) ... which helps to reduce things like domain name hijacking.
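a sketch of that signed communication (python, using ed25519 from the pyca cryptography package purely for illustration ... the registration flow itself is hypothetical, not any registrar's actual interface):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# at registration, the domain name owner generates a key pair and files
# the public key with the domain name infrastructure
owner_private = Ed25519PrivateKey.generate()
onfile_public = owner_private.public_key()

# all later communication (e.g. a change request) is digitally signed
request = b"update name servers for example.com"
signature = owner_private.sign(request)

# the infrastructure validates against the onfile public key before
# acting ... a hijacker without the private key can't forge the signature
try:
    onfile_public.verify(signature, request)
    print("request accepted")
except InvalidSignature:
    print("request rejected")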

fundamentally, a digital certificate (or any other physical license, credential, etc) is a stale, static, locally cached representation of some certification process. in the case of ssl domain name certificates, the certification includes certifying that the entity requesting the ssl domain name certificate is the domain name owner.

if there are integrity issues with the domain name infrastructure with regard to the true owner of a domain name (from exploits like domain name hijacking), then the trust root for the whole ssl domain name pki/certificate infrastructure (the information on file as to the identity of the true domain name owner) is at risk ... aka an attacker creates a dummy front company, hijacks the domain name, and then applies for an ssl domain name digital certificate ... and is successfully issued an ssl domain name digital certificate ... because the attacker is, in fact, the current, onfile, registered owner of the domain name. for more information, use one of the search engines to find a lot more about (even recent) domain name hijacking exploits.

the monkey wrench in this whole scheme is the domain name infrastructure being fundamentally a real-time trusted information distribution implementation. if there are registered public keys in the domain name infrastructure ... then it is possible for the domain name infrastructure to also provide trusted, real-time distribution of any and all information on file.

i've frequently asserted that real-time distribution of trusted information is orders of magnitude preferred to using stale, static, redundant and superfluous digital certificates.

in effect, digital certificates are a paradigm analogous to the letters of credit/introduction from the sailing ship days ... where they substituted for the relying party having either their own information (about the subject in question) and/or direct communication with some recognized authority. over the centuries, credentials/licenses/certificates have acted as substitutes for direct knowledge and/or direct communication. however, as infrastructures migrate to online, all-the-time operations ... direct communication is available ... and the credential/license/certificate substitutes are becoming more and more redundant and superfluous.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Program execution speed

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Program execution speed
Newsgroups: alt.folklore.computers
Date: Thu, 16 Feb 2006 11:54:34 -0700
blmblm writes:
To bring things vaguely back on topic for the newsgroup ....

I remember a somewhat similar functionality (multiple "virtual consoles") on Amdahl mainframes in the early 1980s. It was a neat feature then, and it's a neat feature now, but at some point I get confused, and I think four would probably be past that point for me. "YMMV", obviously.


in the 70s, parasite/PVM gave that capability on the internal network (either on the local machine or on networked machines). misc. postings mentioning the internal network (the internal network was larger than the arpanet/internet from just about the beginning until sometime mid-85):
https://www.garlic.com/~lynn/subnetwork.html#internalnet

also, parasite w/story gave hllapi-type functionality (developed later as part of PC 3270 emulation and screen-scraping) for programmatic capability.

misc. past postings mentioning story/parasite
https://www.garlic.com/~lynn/2001k.html#35 Newbie TOPS-10 7.03 question
https://www.garlic.com/~lynn/2003i.html#73 Computer resources, past, present, and future
https://www.garlic.com/~lynn/2003j.html#24 Red Phosphor Terminal?
https://www.garlic.com/~lynn/2004e.html#14 were dumb terminals actually so dumb???
https://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
https://www.garlic.com/~lynn/2006.html#3 PVM protocol documentation found

3290 had multiple session support (it was a large screen and you could partition it in various ways, like into quadrants with four sessions/screens being displayed simultaneously). later, PCs with 3270 emulation could take advantage of the multiple 3270 session capability.

here is an old story ... one that created a new terminal session (on the local vm machine) and then connected to PVM to do terminal emulation over the internal network to the field engineering ("RETAIN") system and looked at the database of software/hardware reports and any problems/fixes/changes/updates (retrieving and saving specific information).


*
* BUCKET -- Automatic PUT Bucket Retriever
*
          ID    '< wait >'
          Wait  Until 'VM/370'
          Send  CLEAR
          Wait  Until 'CP READ'
          When  Ever( Case ScrFull ) 'MORE...'  Send CLEAR
          When  Ever( Case Holding ) 'HOLDING'  Send CLEAR
          Send  ENTER 5B60 SBA=5B60 'DIAL PVM'
          Wait  Until 'SPECIFIC NODE ID'
          Send  ENTER 4166 SBA=4166 'RETAIN'
check1    When  'FIELD ENGINEERING' Goto go1
          When  'SIGNED OFF' Goto go1
          When  'PORT NOT AVILABLE' Goto quit
          Wait  Until 'SIGNED OFF'
          Goto  check1
go1       Send  ENTER 5B6D
          Wait  Until 'ENTER EMPLOYEE NUMBER/PASSWORD'
          Send  ENTER 4C66 SBA=4C66 &PASSWD
          Wait  Until 'ENTER UPGRADE/SUBSET IDS'
          Send  ENTER 406B SBA=406B &SSID
          Wait  Until 'CHG/INDEX'
          Send  PF11 C450 SBA=4150 'Y' SBA=4160 'Y' SBA=C450 'Y'
          Wait  Until 'OUTPUT QUED'
          Send  ENTER
          Wait  Until 'UPGRADE:'
          Send  Control 'C '
          Send  ENTER 5952
          Wait  Until 'UPGRADE:'
          Send  Control 'C '
          When  Ever( Case Wrap ) 'PG 001' Goto done
next      Send  ENTER 5952
          Wait  Until 'UPGRADE:'
          Send  Control 'C '
          Goto  next
done      Send  ENTER 5952 SBA=5952 'SIGNOFF'
          Wait  Until 'TERMINAL'
          Send  ENTER 4F4F SBA=4F4F '####'
          Wait  Until 'SPECIFIC NODE ID'
quit      Send  PA1
          Stop

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Obsolete phrases

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Obsolete phrases
Newsgroups: alt.folklore.computers
Date: Fri, 17 Feb 2006 10:26:08 -0700
KR Williams writes:
More likely they were put there to reduce customer complaints about receiving incomplete documentation. Rather, substituting that for questions such as "why would one bother labeling blank pages?" ...when they're no longer blank. ;-)

i've seen instances of blank pages in books because of some printing or possibly binding fault ... where it wasn't intended that the page be blank.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Sat, 18 Feb 2006 12:51:04 -0700
"JKV" <jkvbe@N O S P A M y a h o o . c o m> writes:
I would like to use it the other way around. All users presenting an X.509 certificate issued by a trusted party can access the server. Then I only need to install the root certificate of the trusted party on the server, and the user management doesn't need to be done on that server but can be done independently.

but do you also need to install a userid and/or any user-specific permissions on the server?

can any certificate from any trusted party be used to access the server ... or can only specific entities with specific kinds of certificates from specific trusted parties be allowed to access the server?

i've frequently asserted that the stale, static certificates are redundant and superfluous if

1) the server has to have some sort of table or repository of allowed users ... i.e., out of all possible entities getting certificates from some set of trusted parties, which specific parties are actually allowed to access the system and/or which permissions specific parties are allowed. this is the kerberos type of protocol (also used in windows platforms). one of the counter scenarios is that anybody with a certificate issued by an allowed trusted party can have access to your system (regardless of who they are). of course, you may then have a table of just those entities that can have access to your system; however, if you have a table of which specific entities (and potentially their permissions), then the certificates are again redundant and superfluous (see the sketch after this list). misc. kerberos posts
https://www.garlic.com/~lynn/subpubkey.html#kerberos

2) the server can have access to the same table or repository maintained by the trusted party (since typically a trusted party issuing certificates needs some sort of table or repository that provides some information about the entities for which they are issuing certificates). one trivial example of this is the RADIUS protocol, originally developed for giving modem servers/concentrators (router type devices) access to the repository of entities, the associated authentication, allowed permissions, and possible accounting information (authentication, authorization, accounting). misc. RADIUS posts:
https://www.garlic.com/~lynn/subpubkey.html#radius
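a sketch of the point in 1) above (python; the userids and permissions are invented): once the server holds its own table of allowed entities and their permissions, a certificate naming the same entity carries nothing the table doesn't already have:

# the server's own repository of allowed entities ... userid mapped to
# permissions; any certificate naming the same userid is redundant
allowed = {
    "jkv":  {"login", "read"},
    "lynn": {"login", "read", "write"},
}

def authorize(userid, wanted):
    perms = allowed.get(userid)
    if perms is None:
        return False          # not in the table ... any certificate is irrelevant
    return wanted <= perms    # subset test against the onfile permissions

print(authorize("jkv", {"login"}))        # True
print(authorize("jkv", {"write"}))        # False
print(authorize("stranger", {"login"}))   # False ... certified or not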

the trivial scenario for certificates is where you don't care about distinguishing between individuals ... where you treat all individuals exactly alike, don't actually need to know who they are, and just need to know that they are members of the set of allowed individuals (this is the offline physical door badge access system) ... and where you are only dealing with a specific trusted party that will only be issuing certificates for the set of your allowed individuals.

the original offline door badge systems were only a slight step up from physical keying (i.e. the badges and the keys both represented something you have authentication as well as the means of permissions/authorizations ... aka access). however, these early badges, while somewhat harder to counterfeit than physical keys ... still provided no individual differentiation.

in nearly the same time-frame as permissions were starting to be added to badges (allowing different badges to have potentially unique door access permissions), online door badge systems were also appearing. the value infrastructures fairly quickly migrated to online operation ... the badge was solely something you have authentication ... and specific entity permissions (for specific door access) migrated to online information (rather than being implicit in the badge). misc. past posts regarding 3-factor authentication
https://www.garlic.com/~lynn/subintegrity.html#3factor

the offline badge systems quickly became relegated to no-value infrastructures (as value infrastructures migrated to online badge systems that were rapidly decreasing in cost).

the x.509 identity certificates somewhat saw a resurgence of the offline door badge access paradigm from this earlier era (for no-value operations). you also saw some organizations pushing x.509v3 extensions for encoding permissions ... a return to the brief period where door access permissions were encoded in the badge, before the advent of online access systems took over for infrastructures with value operations (and the badge became purely authentication, with all permissions and authorization characteristics encoded separately in online infrastructures).

this is the scenario where certificates become redundant and superfluous for operations of value. the ability to generate a digital signature implies the possession of the corresponding private key (as something you have authentication). the verification of the digital signature is done with a public key stored with the permissions and authorizations for the entity.
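a minimal sketch of that flow (python with ed25519, purely for illustration; the table layout is invented) ... the registered public key lives in the server's table next to the permissions, and verifying a digital signature against it is the something you have authentication:

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# entity side: possession of the private key
private = Ed25519PrivateKey.generate()

# server side: onfile public key stored with the entity's permissions
accounts = {"lynn": {"public_key": private.public_key(),
                     "permissions": {"login", "read", "write"}}}

def authenticate(userid, challenge, signature):
    # a valid signature demonstrates possession of the corresponding
    # private key ... no certificate involved anywhere
    account = accounts.get(userid)
    if account is None:
        return None
    try:
        account["public_key"].verify(signature, challenge)
    except InvalidSignature:
        return None
    return account["permissions"]

challenge = b"server nonce 42"
print(authenticate("lynn", challenge, private.sign(challenge)))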

misc past posts characterizing public key systems as something you have authentication ... aka an entity uniquely possesses a specific private key
https://www.garlic.com/~lynn/aadsm18.htm#23 public-key: the wrong model for email?
https://www.garlic.com/~lynn/aadsm22.htm#5 long-term GPG signing key
https://www.garlic.com/~lynn/2000f.html#14 Why trust root CAs ?
https://www.garlic.com/~lynn/2005g.html#0 What is a Certificate?
https://www.garlic.com/~lynn/2005i.html#26 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005i.html#36 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005j.html#0 private key encryption - doubts
https://www.garlic.com/~lynn/2005l.html#22 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005l.html#25 PKI Crypto and VSAM RLS
https://www.garlic.com/~lynn/2005l.html#35 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005m.html#1 Creating certs for others (without their private keys)
https://www.garlic.com/~lynn/2005m.html#15 Course 2821; how this will help for CISSP exam ?
https://www.garlic.com/~lynn/2005m.html#18 S/MIME Certificates from External CA
https://www.garlic.com/~lynn/2005m.html#27 how do i encrypt outgoing email
https://www.garlic.com/~lynn/2005m.html#37 public key authentication
https://www.garlic.com/~lynn/2005m.html#45 Digital ID
https://www.garlic.com/~lynn/2005n.html#33 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005n.html#39 Uploading to Asimov
https://www.garlic.com/~lynn/2005o.html#6 X509 digital certificate for offline solution
https://www.garlic.com/~lynn/2005o.html#9 Need a HOW TO create a client certificate for partner access
https://www.garlic.com/~lynn/2005o.html#17 Smart Cards?
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
https://www.garlic.com/~lynn/2005p.html#32 PKI Certificate question
https://www.garlic.com/~lynn/2005p.html#33 Digital Singatures question
https://www.garlic.com/~lynn/2005q.html#13 IPSEC with non-domain Server
https://www.garlic.com/~lynn/2005r.html#54 NEW USA FFIES Guidance
https://www.garlic.com/~lynn/2005s.html#42 feasibility of certificate based login (PKI) w/o real smart card
https://www.garlic.com/~lynn/2005s.html#43 P2P Authentication
https://www.garlic.com/~lynn/2005t.html#32 RSA SecurID product
https://www.garlic.com/~lynn/2005v.html#5 famous literature

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 19 Feb 2006 10:30:00 -0700
"David Wade" writes:
Because the glyph was for a BCD 0-8-2 punch and the 029 was an EBCDIC punch, and EBCDIC machines don't understand record mark? Did EBCDIC machines use 0-8-2? If so, what for?

there was no symbol for 0-2-8 in ebcdic (punch holes listed in order from top to bottom of column; 12-11-0-1-2-3-4-5-6-7-8-9). 2540 reader would read punch column combinations that represented standard hex (0-2-8 was read as hex 'E0'). punch combinations that didn't have a hex representation resulted in (hardware) error on read ... unless you read the card in column binary (instead of ebcdic). regular ebcdic read one column into one (8bit) byte. column binary would read the 80 columns of 12 punch holes into 160 (8-bit) byte locations (as opposed to reading the 80 columns with a subset of 12 valid punch hole combinations into 80 bytes).
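a sketch of the column binary read (python; the exact placement of the six row bits within each byte is my assumption ... the point is 12 row bits per column split across two bytes):

def read_column_binary(card):
    # card: 80 columns, each a 12-bit integer with rows ordered
    # 12-11-0-1-2-3-4-5-6-7-8-9 from high bit to low bit
    image = bytearray()
    for column in card:
        image.append((column >> 6) & 0x3F)  # rows 12-11-0-1-2-3 (placement assumed)
        image.append(column & 0x3F)         # rows 4-5-6-7-8-9
    return bytes(image)                     # 80 columns -> 160 bytes

# the 0-2-8 punch: rows 0, 2 and 8 in the above ordering
col = (1 << 9) | (1 << 7) | (1 << 1)
print(len(read_column_binary([col] * 80)))  # 160 ... vs 80 bytes in ebcdic mode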

trusty green card (ibm system/360 reference data, GX20-1703-7) has a multi-panel table that gives decimal 0-255, the corresponding hex value, the mnemonic (if the hex was valid 360 instruction), the "graphic & control symbols" (for both bcdic and ebcdic), the 7-track tape bcdic, the punch card code, and the 360 8bit code.

for punch card code 0-8-2, it gives hex "E0", no (instruction) mnemonic, a bcdic symbol that looks like equal sign with vertical line thru it, no ebcdic symbol, and 7-track code of A-8-2.

for a (q&d) html representation of subset of green card info (and not the above referenced table) ... see
https://www.garlic.com/~lynn/gcard.html

my frequently used non-printing punch code (and there wasn't a key on the 026/029 for it, so you had to "multi-punch" the column) was 12-2-9, which was used in column one of punched output from compilers and assemblers ... aka "12-2-9" ESD, TXT, RLD, REP, END, etc. cards. past posts mentioning 12-2-9:
https://www.garlic.com/~lynn/93.html#17 unit record & other controllers
https://www.garlic.com/~lynn/95.html#4 1401 overlap instructions
https://www.garlic.com/~lynn/2001.html#8 finding object decks with multiple entry points
https://www.garlic.com/~lynn/2001.html#14 IBM Model Numbers (was: First video terminal?)
https://www.garlic.com/~lynn/2001.html#60 Text (was: Review of Steve McConnell's AFTER THE GOLD RUSH)
https://www.garlic.com/~lynn/2001m.html#45 Commenting style (was: Call for folklore)
https://www.garlic.com/~lynn/2002h.html#1 DISK PL/I Program
https://www.garlic.com/~lynn/2004h.html#17 Google loves "e"
https://www.garlic.com/~lynn/2004p.html#24 Systems software versus applications software definitions
https://www.garlic.com/~lynn/2005c.html#54 12-2-9 REP & 47F0
https://www.garlic.com/~lynn/2005t.html#47 What is written on the keys of an ICL Hand Card Punch?

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Change in computers as a hobbiest

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Change in computers as a hobbiest...
Newsgroups: alt.folklore.computers
Date: Sun, 19 Feb 2006 14:42:50 -0700
jmfbahciv writes:
Anybody who had a TTY could access any mainframe that had a phone line hooked into its frontends. I was calling up TOPS-10 in 1969. IIRC, some grad students were calling up the IBM 360 at UofMich in 1968.

UofMich was one of several universities that were convinced to order the 360/67 on the promise of tss/360. when tss/360 floundered, many just used the 67 as a 360/65 in batch mode with os/360 (ignoring the virtual memory hardware). UofM wrote the Michigan Terminal System (MTS) for the 360/67, and the science center wrote the virtual machine system CP67 (having originally done cp40 on a 360/40 with special hardware modifications for virtual memory). lots of past posts mentioning the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

melinda's virtual machine history talks about early project mac, 360/67, tss/360 days.
http://www.leeandmelindavarian.com/Melinda/
http://www.leeandmelindavarian.com/Melinda#VMHist

a few MTS references from around the web:
http://www.umich.edu/news/?Releases/2004/Sep04/r090704b
http://www.umich.edu/~bhl/bhl/exhibits/mtstime/mtstime.htm
http://www.eecis.udel.edu/~mills/gallery/gallery8.html
http://tecumseh.srv.ualberta.ca/hyperdispatch/HyperDispatch19/1971-84.html
http://tecumseh.srv.ualberta.ca/hyperdispatch/HyperDispatch19/1985-90.html
http://www.msu.edu/~mrr/mycomp/mts/mts.htm
http://michigan-terminal-system.brainsip.com/
http://www.clock.org/~jss/work/mts/timeline.html
https://web.archive.org/web/20030922143644/www.cs.ncl.ac.uk/events/anniversaries/40th/webbook/transition/

when a couple of people came out and installed cp67 in jan68, it had 1052 and 2741 terminal support ... so i had to add the TTY/ascii support. I did a hack involving one-byte length fields ... which later was the cause of a system failure when somebody modified the code to support ascii devices with more than 255 byte lengths. reference (which also mentions mts)
http://www.multicians.org/thvv/360-67.html
other stories from this same site:
http://www.multicians.org/thvv/tvv-home.html#stories

typical system configuration was to predefine in software the terminal characteristics connected to each "port" (address). for dial-up, specific ranges of telephone numbers were typically reserved for each terminal type (i.e. fixed mapping between telephone numbers and computers port/addresses) so that the operating system would know ahead of time what kind of terminal it would be talking to.

the guys at cambridge had implemented their terminal support so that it would dynamically determine whether it was talking to a 2741 or 1052 terminal ... and establish the hardware line scanner on the front-end telecommunications controller appropriately (so that the operating system didn't have to be pre-configured for terminal type and you could use a common set of phone numbers for both 2741 and 1052).

looking at the specs for the terminal controller ... i decided that i could add tty/ascii support similarly ... dynamically being able to distinguish between 2741, 1052 and tty/ascii (which would allow a common pool of numbers for all terminals, not requiring any differentiation) ... the telephone box could then be configured with a single telephone dial-in number and a common list of telephone pool numbers (for all terminals) that it would roll over to, looking for a non-busy number (on an incoming call).

I had overlooked a minor point in the terminal controller box ... while it was possible to do dynamic terminal type identification and re-assign any line-scanner to any port address ... they had taken short cuts and hard-wired the line-speed oscillator (determining baud rate) to each port. this wasn't a problem with 2741 and 1052 since they operated at the same baud rate. however, it was a problem with tty terminals since they operated at a different baud rate. for hard-wired terminals it wasn't a problem ... but it presented a problem for dial-in lines. the restriction meant that you had to have a different pool of phone numbers (with fixed connection to specific ports) for tty/ascii terminals than for 2741/1052 terminals (you couldn't publish a single dial-in phone number for all terminals; you had to have one number for tty/ascii terminals and a different number for 2741/1052 terminals).

this somewhat prompted the univ. to start a project to build a front-end terminal controller that supported both dynamic terminal type identification as well as dynamic baud rate determination. somewhere there was a write-up blaming us for kicking off the plug-compatible controller business. the 360 mainframe channel was reverse engineered and a channel interface card was built for an interdata/3 that was programmed to simulate the mainframe front-end terminal controller. misc. past posts referencing this activity
https://www.garlic.com/~lynn/submain.html#360pcm

the 360 pcm activity was supposedly a major motivating factor for the future system project ... drastically increasing the integration between the main processor and external controller boxes (significantly raising the bar for producing a plug compatible controller). misc. past posts mentioning future system project
https://www.garlic.com/~lynn/submain.html#futuresys

somewhat in the same time-frame, "unbundling" was announced on 6/23/69 ... which included charging separately for application software (as opposed to all software being free) ... somewhat in response to various fed. gov. litigation. kernel software continued to be bundled (free) with the justification that it was required to operate the hardware. misc. past postings mentioning free software and unbundling:
https://www.garlic.com/~lynn/submain.html#unbundle

When virtual memory on 370 became available ... TSS/360, MTS, and CP67 were moved to 370. TSS/360 became TSS/370 (although hardly anybody continued to run it, at least until the stripped down version done for AT&T that had unix running on top), MTS stayed MTS, and CP67 became vm370.

in the 70s, you started to see the emergence of clone 370 mainframe processors ... among them Amdahl. part of this was the size of the 360/370 customer market, some reduction in the cost of designing and manufacturing processors, and the availability of free kernel software (i.e. to bring a processor to market you only needed to create a processor ... you didn't have to invest heavily in software also). In the early 70s, Amdahl gave a seminar at MIT talking about the formation of his (clone 370) computer company. one of the students asked him how he talked the (VC) money people into funding the company. he replied that customers had already spent a couple hundred billion developing 360/370 application software (some indication of the 360/370 customer market size), and even if ibm were to totally walk away from 370 (possibly could be construed as a veiled reference to the FS project), there was enuf customer software to keep him in business until the end of the century (which was nearly 30 years away at the time). the other advantage for clone manufacturers in the 70s was that most of the R&D had been diverted to FS for several years, and when FS was finally killed (w/o even being announced or customers being aware of the effort), there was a dearth of 370 enhancements coming out.

however, even with free kernel software ... the mainstream vendor batch operating system still required a lot of vendor hand-holding and support ... which wasn't exactly forthcoming for customers of clone mainframes. as a result, one of the big classes of early adopters was universities running MTS. however, several universities also would be running vm370 (on both vendor processors as well as clone processors).

the advent of clone processors eventually contributed to changing the policy and the introduction of charging for kernel software. my resource manager was chosen to be the guinea pig, and i got to spend several months on and off with business people working on the policies for kernel software charging (aka unbundling by any other name). the resource manager is coming up on its 30th year in a couple more months (although an earlier version that i had done as an undergraduate had been available in cp67)
https://www.garlic.com/~lynn/subtopic.html#fairshare
https://www.garlic.com/~lynn/subtopic.html#wsclock

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

virtualization?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: virtualization?
Newsgroups: comp.arch
Date: Sun, 19 Feb 2006 15:00:39 -0700
Wes Felter writes:
This is called single-system-image clustering, so you can search on that keyword. The latest work in this area seems to be OpenMOSIX, OpenSSI, and Virtual Iron.

you may also look at the (older) PVM (parallel virtual machine) stuff; a couple of sample websites (found via search engine):
http://www.csm.ornl.gov/pvm/
http://www.netlib.org/pvm3/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Military Time?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Military Time?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 19 Feb 2006 16:47:43 -0700
John D. Slayton wrote:
Do ALL Mainframe systems have the Military (24-hour) format?

Please advise...thanks


360s had a 32bit, binary timer ... located at location 80 (hex '50') in real storage. it had about a 15hr period ... and most machines updated it about every 3 milliseconds. some machines had a high-performance timer option which updated the low bit approx. every 13 microseconds.

conversions to various other time representations were handled by software.

370 introduced a 64bit hardware clock ... the hardware spec called for machines to update the timer on approx. the same period as instruction execution time ... but as if bit 51 represented one microsecond (bit 63 is 1/4096 of a microsecond) ... which made bit 31 tic every 2**20 microseconds (approx. 1.05 seconds), i.e. the top 32 bits are approx. a seconds timer giving somewhat over 4 billion seconds.

standard called for "zero" time to be the first second of the century. it has cycle period of approx. 143 years. again software provides converting from the hardware clock to various other time representations.

the memory location 80 timer was eventually dropped ... part of the issue was the excessive memory bus traffic generated by the constant clock location updating.

detailed discussion of TOD clock
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1?SHELF=EZ2HW125&DT=19970613131822

format description of tod clock
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1.1?SHELF=EZ2HW125&DT=19970613131822&CASE=

setting and inspecting the tod clock
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/4.6.1.4?SHELF=EZ2HW125&DT=19970613131822&CASE=

more recently an extended tod clock is defined with 104 bits ... format description
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/4.6.1.1?SHELF=DZ9ZBK03&DT=20040504121320

however the extended hardware clock has 8 additional bits defined as zero prefixing the tod value and 16bits postfixing the value (128bits total)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/4.6.1.4?SHELF=DZ9ZBK03&DT=20040504121320

Military Time?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Military Time?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 19 Feb 2006 18:45:39 -0700
Gene Cash wrote:
I never understood the reasoning behind this implementation. So it had to go across the bus to increment the clock? It wasn't just a hardware counter with an increment line tied to an oscillator?

originally, why i don't know.

360/67 had a high-resolution timer option .... a 13-some microsecond version for use in accounting and time-slicing.

cp67 would stuff a value corresponding to something like 50 milliseconds at location x'54' and then do an overlapping mvc instruction for 8 bytes ... i.e.
mvc x'4c'(8),x'50'

the current value of the timer would be moved to location x'4c' and the value in the location x'50' timer reset to the value at location x'54'. the elapsed period was then calculated from the difference between the value at location x'4c' and the original value loaded into x'50' (the fairly consistent value kept at x'54').
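a sketch simulating the gimmick (python; the byte values are invented): MVC moves strictly one byte at a time, left to right, so with the overlapping 8-byte move the first four bytes capture the running timer into x'4c' and the last four re-load x'50' from the preset at x'54':

def mvc(mem, dst, src, length):
    # 360 MVC semantics: one byte at a time, left to right, which
    # makes the overlapping case well defined
    for i in range(length):
        mem[dst + i] = mem[src + i]

mem = bytearray(256)
mem[0x50:0x54] = (12345).to_bytes(4, "big")  # running timer (counts down)
mem[0x54:0x58] = (50000).to_bytes(4, "big")  # the ~50ms preset value

mvc(mem, 0x4C, 0x50, 8)                      # mvc x'4c'(8),x'50'

captured = int.from_bytes(mem[0x4C:0x50], "big")  # old timer, now at x'4c'
elapsed = 50000 - captured                   # interval timer decrements
print(captured, elapsed)                     # 12345 37655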

there was an oscillator and a timer "tic" ... i.e. the timer would tic and then attempt to update location x'50' (waiting for access to the memory bus). if the hardware timer tic'ed again while there was still a pending update to location x'50' from a previous timer tic ... the machine would "red light" (i.e. machine check).

we ran into this with a project mentioned in a recent post
https://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest

the project at the university, when i was an undergraduate, was building a clone controller; aka reverse engineering the channel interface and building a channel interface board for a clone terminal controller. in one of the early tests ... we, in fact, caused the 360/67 to machine check ... because the channel interface board held the memory bus for too long a period (blocking the timer tic update of location x'50').

there was an article someplace blaming four of us for the plug-compatible (clone) controller business. misc. other posts mentioning it
https://www.garlic.com/~lynn/submain.html#360pcm

Military Time?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Military Time?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 20 Feb 2006 08:19:35 -0700
Gene Cash wrote:
I never understood the reasoning behind this implementation. So it had to go across the bus to increment the clock? It wasn't just a hardware counter with an increment line tied to an oscillator?

note that the 360 just had the cpu timer (at location 80) ... and everything else was done in software.

370 introduced the tod clock, the clock comparator, and provided a new cpu timer that wasn't in memory (and over the years, the location 80 cpu timer was eliminated).

the clock comparator held a tod clock value ... when the running tod clock matched the comparator value ... an interrupt was generated.

one of the properties of the location 80 timer was that you wouldn't lose clock tics with the overlapping MVC gimmick ...
https://www.garlic.com/~lynn/2006c.html#21 Military Time?

the clock either tic'ed the old value or the new value ... but a tic wasn't lost.

with the new cpu timer ... there were separate instructions to store the current value and set a new value ... and the timer could "tic" between a store and a set (which would lose a tic). cp67 had been fairly scrupulous about accounting for all timer tics.

then there was also the cpu meter ... which was used for processor billing ... from the days when processors were leased. there was typically a 4-shift billing schedule (i.e. 3 shifts weekdays, and the weekend) ... with 3rd and 4th shift being billed (under leases) at possibly 1/10th of 1st shift billing.

one of the "great" cp67 hacks was the use of prepare ccw on terminal i/o. normally the cpu meter ran when either the processor was running and/or there was "active" i/o.

cp67 was being pushed as a time-sharing service ... and some number of commercial operations had spun off to use it as the base for commercial time-sharing services
https://www.garlic.com/~lynn/submain.html#timeshare

one of the issues was leaving the system up and operational 24x7 ... however, off-shift use could be relatively sporadic ... with accounted-for use not covering the lease billing (based on the cpu meter running constantly, even when the cpu wasn't active ... just waiting for terminal i/o). you could save off-shift costs by running a dark room with no operators present ... but even at 1/10th billing, it could be difficult to recover sufficient accounting to cover the billing costs.

the "great" cp67 hack using prepare ccw on terminal i/o ... was that prepare ccw would suspend waiting for terminal character to be received ... but not be treated as an active i/o by the cpu meter. this change significantly reduced unrecoverable costs of providing off-shift, always up, 7x24 time-sharing service (system sitting idle and available waiting for somebody to dial-in, or people were dialed-in ... but thinking at the moment).

going into the 370 time-frame, you were starting to see more and more purchased systems ... so unaccounted-for (lease) billing was becoming less and less of an issue. however, another characteristic of the cpu meter was that it would "coast" for 400 milliseconds after the last event that caused it to run.

for trivia question ... which mainframe operating system had a system event that woke-up every 400 milliseconds?

Military Time?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Military Time?
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 20 Feb 2006 08:38:40 -0700
Robert A. Rosenberg wrote:
IOW: January 1, 1900 (Remember that a century begins in year XX01 not XX00 - One of the classic misunderstandings about the year 2000, which was the last year of the 20th-Century/2nd-Millennium). I have the impression that early implementations for the TOD Clock had a 1960 epoch in some operating systems (I think DOS/370 and DOS/VS) while others used the 1900 epoch, which led to having to reset the TOD clock when you switched between them by booting.

OTOH, by checking if the TOD value was positive or negative you could determine which epoch was used, since the 1900 epoch values would always be negative (ie: The first bit was 1).


i remember getting to spend something like 3 months on a 370 timer taskforce ... the original spec called for the epoch to be the first second of the century ... so many installations got it wrong and set it to 1900 (instead of 1901) that the spec was changed (that is, when they weren't setting it to 1970). another issue that consumed a lot of taskforce time was what to do about leap-seconds.

in any case, most of this is also discussed in some detail in the various principles of operation URLs given in the previous posting
https://www.garlic.com/~lynn/2006c.html#20 Military Time

Harvard Vs Von Neumann architecture

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Harvard Vs  Von Neumann architecture
Newsgroups: comp.arch
Date: Mon, 20 Feb 2006 08:51:46 -0700
"Del Cecchi" writes:
Harvard architecture uses separate I and D caches. The IBM System/32 is a good example of Harvard architecture.

801 used separate I and D caches ... that weren't hardware consistent. recent posting that contains 25yr old email discussing hardware announcement that had separate I and D cache (although hardware consistent).
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

misc. posts mentioning 801
https://www.garlic.com/~lynn/subtopic.html#801

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Change in computers as a hobbiest

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Change in computers as a hobbiest...
Newsgroups: alt.folklore.computers
Date: Mon, 20 Feb 2006 09:23:14 -0700
ref:
https://www.garlic.com/~lynn/2006c.html#18 Change in computers as a hobbiest

slightly related concerning billing and off-shift dialin
https://www.garlic.com/~lynn/2006c.html#23 Military Time?

i didn't get a home terminal until mar70; it was initially a "portable" 2741 ... two 40lb green suitcases ... shortly replaced with a real 2741.

picture of top of 2741
http://www.columbia.edu/cu/computinghistory/2741.html

note that it didn't provide any sort of work surface (only about 4in on the sides and back of the typewriter case).

science center had 3/8in(?) white formica covered plywood ... that rested on the surface surrounding the 2741 typewriter case. the work surface was about 24in wide on one side and 6in on the other side and back (with cut-out fitting around the typewriter case). it could be flipped so the work surface was either to the left or right of the typewriter.

a multics article mentioning home terminals
http://www.multicians.org/terminals.html

the science center occupied part of 4th floor, 545tech sq ... and the machine room (360/67) occupied part of the 2nd floor. multics group was on the 5th floor.

the 2741 i had at home and the 2741 at work were the same ... so there was little difference between using the computer at work or home ... except it was a little faster picking up printed output (from the 1403 high-speed printer in the machine room) when at work ... (just run down two flights of stairs).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Change in computers as a hobbiest

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Change in computers as a hobbiest...
Newsgroups: alt.folklore.computers
Date: Mon, 20 Feb 2006 09:54:02 -0700
Anne & Lynn Wheeler writes:
science center had 3/8in(?) white formica covered plywood ... that rested on the surface surrounding the 2741 typewriter case. the work surface was about 24in wide on one side and 6in on the other side and back (with cut-out fitting around the typewriter case). it could be flipped so the work surface was either to the left or right of the typewriter.

oops that was about 24in wide on one side and back ... and 6in on the other side (with cut-out that allowed it to fit around the typewriter case) ... and it could flip so that the work surface was positioned to either the left or right.

on the rear work surface, you could place a tin metal two-tier tray ... sort of like a two-tier in/out basket. it was about 16in wide by 12in (big enuf to fit 14x11 fan-fold printer paper). there was about 4in between the bottom and the top. you could split off a 3in stack of fan-fold paper, put it on the bottom tier sitting on the work surface, feed it thru the 2741 rollers and have it feed out to the top tray.

the other option was that you could have a whole box of fan-fold paper behind the 2741, feed it thru the gap between the bottom and top of the basket, thru the 2741 rollers and onto the top tray.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Mount DASD as read-only

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mount DASD as read-only
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 20 Feb 2006 10:16:18 -0700
Jon Brock wrote:
Not the first time a buffer problem has caused a system error.

Jon

<snip> About the same time we had a floor buffer plugged into the service port on the control unit. Ummm, no mystery there. "We weren't doing anything, it just died!" </snip>


lots of postings mentioning various buffer overflow, overruns vulnerabilities and exploits
https://www.garlic.com/~lynn/subintegrity.html#overflow

and for some real topic drift ... my wife remembers, as a little girl, living with her mother (for 3 months) on the repose in tsingtao harbor (waiting for her sister to be born) after being evacuated by air from nanking. there were several other pregnant women on the ship waiting to give birth, but my wife was the only child. one of the pastimes was getting the sailors to let her sit/ride the big floor buffers as they buff'ed the ship floors.

recent thread drift, computers, dc3 and repose:
https://www.garlic.com/~lynn/2006b.html#27 IBM 610 workstation computer

another repose drift ... including postmark from the repose (in tsingtao harbor) from letter announcing birth of my wife's sister
https://www.garlic.com/~lynn/2006b.html#33 IBM 610 workstation computer

Mount DASD as read-only

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Mount DASD as read-only
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Mon, 20 Feb 2006 11:06:07 -0700
Efinnell15@ibm-main.lst wrote:
Yeah, but it was years before C/C++ and didn't cause an integrity exposure..just a big honkin' outage

a classic buffer overflow story involving outage (27 system crashes in a single day).
http://www.multicians.org/thvv/360-67.html

the problem was that, as an undergraduate, i had added tty/ascii terminal support to cp67 and had used a hack involving one-byte lengths. later, somebody modified the code to support an ascii device that allowed something like 1200-byte lengths ... but didn't catch the one-byte hack. the result was very large length moves clobbering all sorts of storage.
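a sketch of the general failure mode (python; not the actual cp67 code, just the classic one-byte truncation pattern):

incoming = 1200                      # new ascii device line length
stored = incoming & 0xFF             # one-byte length field keeps only 176

# later arithmetic that trusts the one-byte field goes negative, and
# treated as an unsigned length it becomes an enormous move
remaining = stored - incoming        # -1024
as_unsigned = remaining & 0xFFFF     # 64512 ... clobbering all sorts of storage
print(stored, remaining, as_unsigned)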

part of this story is that the fast, automatic reboot in cp67 (compared to multics, which could take an hr?) was one of the things that led to the new storage system for multics.

the science center
https://www.garlic.com/~lynn/subtopic.html#545tech

that did cp67 (virtual machines, internal network, gml, interactive computing, a bunch of other stuff) occupied part of the 4th floor. the multics group was on the 5th floor. the mit urban systems lab 360/67 cp67 system (mentioned in the referenced story) i remember being across the courtyard (525? tech sq). at the time, tech sq had three tall office buildings and one two-story building (on the street, which had land's office; we had offices overlooking land's balcony, and one story was about catching glimpses of the sx-70 prototype being tested ... this was before it had been made public).

... again, lots of buffer overflow, overrun, vulnerability and exploit postings
https://www.garlic.com/~lynn/subintegrity.html#overflow

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Mon, 20 Feb 2006 12:58:52 -0700
Andrew Swallow writes:
Bus clock speed is limited by the speed of light down a "longish" wire, so the on-chip ram would be much faster than the bus. There are no line drivers, so the ram and decode logic would be running under the CPU's clock, giving a speed approximately equal to the L1 cache.

By reading in the entire page, localised access is very fast; the next (say) 511 bytes are already on chip.

What is slow is processing widely dispersed locations that are used once; the new pages have to be read in from either main memory or disk.


there was a paper maybe ten years ago ... describing how c++ (and possibly object oriented programming in general) totally destroys instruction and data locality resulting in significantly worse cache utilization and thruput.

one of the approaches to keeping cpu functional units fed with work has been hyperthreading ... worked on for the 370/195 30+ years ago.

a similar but different approach has been tera's massive threading (they renamed themselves cray after buying various cray assets from sgi). they make do w/o any cache (which would otherwise be there to reduce effective memory latency) ... and instead utilize a massive number of threads to try to have pending work during the significant memory latencies seen by any specific thread.

misc. URLs turned up from search engine
http://kerneltrap.org/node/391/969
http://www.cs.washington.edu/research/smt/
http://www.xlnt.com/technology/computing/htt/index.htm
http://www.taborcommunications.com/archives/103610.html

misc. past 370/195 dual i-stream posts:
https://www.garlic.com/~lynn/94.html#38 IBM 370/195
https://www.garlic.com/~lynn/99.html#73 The Chronology
https://www.garlic.com/~lynn/99.html#97 Power4 = 2 cpu's on die?
https://www.garlic.com/~lynn/2000g.html#15 360/370 instruction cycle time
https://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
https://www.garlic.com/~lynn/2001j.html#27 Pentium 4 SMT "Hyperthreading"
https://www.garlic.com/~lynn/2001n.html#63 Hyper-Threading Technology - Intel information.
https://www.garlic.com/~lynn/2002g.html#70 Pipelining in the past
https://www.garlic.com/~lynn/2002g.html#76 Pipelining in the past
https://www.garlic.com/~lynn/2003l.html#48 IBM Manuals from the 1940's and 1950's
https://www.garlic.com/~lynn/2003m.html#60 S/360 undocumented instructions?
https://www.garlic.com/~lynn/2003p.html#3 Hyperthreading vs. SMP
https://www.garlic.com/~lynn/2004.html#27 dual processors: not just for breakfast anymore?
https://www.garlic.com/~lynn/2004e.html#1 A POX on you, Dennis Ritchie!!!
https://www.garlic.com/~lynn/2004o.html#18 Integer types for 128-bit addressing
https://www.garlic.com/~lynn/2005.html#5 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005.html#19 The Soul of Barb's New Machine (was Re: creat)
https://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
https://www.garlic.com/~lynn/2005p.html#1 Intel engineer discusses their dual-core design
https://www.garlic.com/~lynn/2005p.html#14 Multicores
https://www.garlic.com/~lynn/2006c.html#6 IBM 610 workstation computer

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Tue, 21 Feb 2006 11:17:49 -0700
glen herrmannsfeldt writes:
Hercules (an IBM S/360, S/370, etc. emulator) had that problem with some OS. The OS would do a timing loop like that to find out how fast a machine it was running on, not expecting zero time units to pass. Hercules is much faster than any S/360 or S/370 machine.

the base vm370 system had a table of processor identifiers and set various values based on the published numbers for those processors. for the resource manager, i removed the table of allowed processor types and replaced it with a timing loop ... trying to make the system dynamically adaptive ... rather than restricted to just the processors in the table. it seemed to handle (dynamically adapt across) at least a two-orders-of-magnitude processor range (from slowest to fastest).
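a sketch of the idea (python; the loop body and derived figure are arbitrary): time a fixed amount of work at initialization and scale from the measurement, rather than looking the processor model up in a table:

import time

def calibrate(iterations=5_000_000):
    # run a fixed, known amount of work and see how long it takes
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i
    elapsed = time.perf_counter() - start
    return iterations / elapsed   # work units per second

# scheduler/resource constants can then be scaled to the measured
# speed ... any processor in a wide range adapts automatically
print(f"{calibrate():,.0f} units/sec")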

resource manager coming up on 30yrs fcs in a couple months.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Worried about your online privacy?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Worried about your online privacy?
Newsgroups: alt.folklore.computers
Date: Tue, 21 Feb 2006 11:35:20 -0700
abuse@dopiaza.cabal.org.uk (Peter Corlett) writes:
Ooh, MD5 *encryption*? Presumably it also provides compression of all my files to just 16 octets each as a bonus?

i had gotten an email from somebody at crypto 2004 in real time (marvels of wireless internet) during the session on md5 collisions ... asking if my rfc index provided a cross-reference to all RFCs that referenced MD5 ... it didn't, but i whipped one up and added it to the index.
https://www.garlic.com/~lynn/rfcietff.htm

it isn't automatically updated after each new RFC ... I have to remember to periodically run the md5 update process to catch additional/new RFCs that continue to reference MD5
https://www.garlic.com/~lynn/rfcmd5.htm
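the update pass is conceptually just a scan over the rfc texts; a sketch (python; the directory layout and filenames are hypothetical):

import os

def rfcs_mentioning(term, rfc_dir="rfcs"):
    # walk a directory of rfc text files and collect the ones that
    # mention the term ... the raw material for a cross-reference page
    hits = []
    for name in sorted(os.listdir(rfc_dir)):
        with open(os.path.join(rfc_dir, name), errors="ignore") as f:
            if term.lower() in f.read().lower():
                hits.append(name)
    return hits

print(rfcs_mentioning("MD5"))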

for some topic drift ... i was one of the co-authors of the financial industry PIA standard ... and during the work, created a "privacy" subset merged glossary and taxonomy ... misc. ref.
https://www.garlic.com/~lynn/index.html#glosnote

over the past year or so there have been discussions about SSL being used to protect personal information ... and the various ways that phishing websites ... even with SSL ... can collect personal information.

various past posts mentioning phishing (as well as the "chinese" md5 attack paper given at crypto 2004):
https://www.garlic.com/~lynn/aadsm14.htm#51 Feds, industry warn of spike in ID theft scams
https://www.garlic.com/~lynn/aadsm16.htm#2 Electronic Safety and Soundness: Securing Finance in a New Age
https://www.garlic.com/~lynn/aadsm16.htm#7 The Digital Insider: Backdoor Trojans ... fyi
https://www.garlic.com/~lynn/aadsm17.htm#10 fraud and phishing attacks soar
https://www.garlic.com/~lynn/aadsm17.htm#13 A combined EMV and ID card
https://www.garlic.com/~lynn/aadsm17.htm#20 PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#21 Identity (was PKI International Consortium)
https://www.garlic.com/~lynn/aadsm17.htm#25 Single Identity. Was: PKI International Consortium
https://www.garlic.com/~lynn/aadsm17.htm#46 authentication and authorization (was: Question on the state of the security industry)
https://www.garlic.com/~lynn/aadsm17.htm#47 authentication and authorization ... addenda
https://www.garlic.com/~lynn/aadsm17.htm#53 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#54 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#55 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#58 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#5 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#7 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm18.htm#18 Any TLS server key compromises?
https://www.garlic.com/~lynn/aadsm18.htm#36 Phishing losses total $500 million - Nacha
https://www.garlic.com/~lynn/aadsm18.htm#45 Banks Test ID Device for Online Security
https://www.garlic.com/~lynn/aadsm18.htm#56 two-factor authentication problems
https://www.garlic.com/~lynn/aadsm19.htm#1 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#5 Do You Need a Digital ID?
https://www.garlic.com/~lynn/aadsm19.htm#20 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm19.htm#21 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm19.htm#27 Citibank discloses private information to improve security
https://www.garlic.com/~lynn/aadsm20.htm#23 Online ID Thieves Exploit Lax ATM Security
https://www.garlic.com/~lynn/aadsm20.htm#38 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm20.htm#40 Another entry in the internet security hall of shame
https://www.garlic.com/~lynn/aadsm21.htm#16 PKI too confusing to prevent phishing, part 28
https://www.garlic.com/~lynn/aadsm21.htm#18 'Virtual Card' Offers Online Security Blanket
https://www.garlic.com/~lynn/aadsm21.htm#23 Broken SSL domain name trust model
https://www.garlic.com/~lynn/aadsm21.htm#31 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm21.htm#34 X.509 / PKI, PGP, and IBE Secure Email Technologies
https://www.garlic.com/~lynn/aadsm21.htm#42 Phishers now targetting SSL
https://www.garlic.com/~lynn/aadsm22.htm#0 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#1 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#2 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#3 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/aadsm22.htm#4 GP4.3 - Growth and Fraud - Case #3 - Phishing
https://www.garlic.com/~lynn/2003o.html#9 Bank security question (newbie question)
https://www.garlic.com/~lynn/2003o.html#19 More -Fake- Earthlink Inquiries
https://www.garlic.com/~lynn/2003o.html#35 Humans
https://www.garlic.com/~lynn/2003o.html#50 Pub/priv key security
https://www.garlic.com/~lynn/2003o.html#57 Pub/priv key security
https://www.garlic.com/~lynn/2004e.html#20 Soft signatures
https://www.garlic.com/~lynn/2004f.html#8 racf
https://www.garlic.com/~lynn/2004f.html#31 MITM attacks
https://www.garlic.com/~lynn/2004l.html#41 "Perfect" or "Provable" security both crypto and non-crypto?
https://www.garlic.com/~lynn/2005e.html#31 Public/Private key pair protection on Windows
https://www.garlic.com/~lynn/2005i.html#0 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005i.html#1 Brit banks introduce delays on interbank xfers due to phishing boom
https://www.garlic.com/~lynn/2005i.html#8 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005i.html#9 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005i.html#14 The Worth of Verisign's Brand
https://www.garlic.com/~lynn/2005i.html#38 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005j.html#3 Request for comments - anti-phishing approach
https://www.garlic.com/~lynn/2005j.html#10 Request for comments - anti-phishing approach
https://www.garlic.com/~lynn/2005k.html#29 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#31 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#32 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#33 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#34 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#35 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#36 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#37 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005l.html#42 More Phishing scams, still no SSL being used
https://www.garlic.com/~lynn/2005o.html#1 The Chinese MD5 attack
https://www.garlic.com/~lynn/2005p.html#25 Hi-tech no panacea for ID theft woes
https://www.garlic.com/~lynn/2005s.html#49 phishing web sites using self-signed certs
https://www.garlic.com/~lynn/2005s.html#51 phishing web sites using self-signed certs
https://www.garlic.com/~lynn/2005t.html#6 phishing web sites using self-signed certs
https://www.garlic.com/~lynn/2005t.html#9 phishing web sites using self-signed certs
https://www.garlic.com/~lynn/2005u.html#9 PGP Lame question

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

A Historical Look at the VAX

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: A Historical Look at the VAX
Newsgroups: comp.arch
Date: Tue, 21 Feb 2006 11:39:55 -0700
Grumble writes:
I've stumbled across

http://arstechnica.com/news.ars/post/20060218-6215.html

which points to Part I of John Mashey's series.

http://www.realworldtech.com/page.cfm?ArticleID=RWT012406203308

(I imagine most of you have already seen it, but in case some were asleep like me...)


ship numbers sliced and diced by model, year, domestic, world-wide, etc (as well as a few misc. other refs)
https://www.garlic.com/~lynn/2002f.html#0 Computers in Science Fiction
https://www.garlic.com/~lynn/2002f.html#5 Blade architectures
https://www.garlic.com/~lynn/2002i.html#30 CDC6600 - just how powerful a machine was it?
https://www.garlic.com/~lynn/2005f.html#37 Where should the type information be: in tags and descriptors

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Military Time?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Military Time?
Newsgroups: bit.listserv.ibm-main
Date: Tue, 21 Feb 2006 11:52:17 -0700
timothy.sipples@ibm-main.lst (Timothy Sipples) writes:
Slightly off topic: in Japan some of the bars list their hours as, for example, "1100 to 2800" (11:00 a.m. to 4:00 a.m.)

I have no idea what that factoid has to do with anything, but I'm learning. :-)


in a thread leading up to y2k ... i had dredged up a posting on the subject made in the early 80s. it mentioned an issue with the shuttle master timing unit allowing for 400 days in a year.
https://www.garlic.com/~lynn/99.html#24 BA Solves Y2k

part of the issue was that the MTU handled yyyyddd out to ddd=400 but the software emulator only handled dates out to ddd=399 (and raised an issue about possible five-week plus missions that launch on new year's eve).

... aka, has all code been checked for handling the difference between two time/date calculations when the period wraps.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Wed, 22 Feb 2006 11:21:13 -0700
Dimitri Maziuk writes:
Add DS record (like MX, for ldap directory server) to DNS. Include host keys in host's ldap record.

We could do SPF the same way, too.

The problems here are opening ldap server to the world, replacing verisign with ldap server's administrator, standardizing use of encryption which may be illegal in some jurisdictions, etfc. Never gonna happen.


previous posts:
https://www.garlic.com/~lynn/2006b.html#37 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#10 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#12 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#13 X.509 and ssh
https://www.garlic.com/~lynn/2006c.html#16 X.509 and ssh

you don't need to do encryption ... all you need to do is public key and digital signature as part of authentication.

one of the issues when working on the x9.59 financial standard (the x9a10 working group was given the requirement to preserve the integrity of the financial infrastructure for all retail payment transactions) was being able to do simple authentication w/o requiring encryption and/or other heavy duty operations.
https://www.garlic.com/~lynn/x959.html#x959
https://www.garlic.com/~lynn/subpubkey.html#x959

this resulted in a simple digital signature on retail payment transactions ... and the signing entity having a public key on file with their financial infrastructure.
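
as a rough illustration of that on-file public key model ... a minimal sketch in C using the openssl 1.1.1+ EVP interface (the ed25519 choice and the transaction string are purely illustrative assumptions, x9.59 doesn't mandate either ... the point is that the relying party verifies against the key it already has on file, no digital certificate involved):

    #include <stdio.h>
    #include <openssl/evp.h>

    /* sketch: sign a retail transaction; verify it with the public key
       the financial infrastructure already has on file.  compile with
       -lcrypto (openssl 1.1.1 or later). */
    int main(void)
    {
        /* stand-in for key-pair generation at registration time; in
           practice the verifier would hold just the public half */
        EVP_PKEY *key = NULL;
        EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_ED25519, NULL);
        EVP_PKEY_keygen_init(kctx);
        EVP_PKEY_keygen(kctx, &key);

        unsigned char txn[] = "acct=1234567890 amount=100.00"; /* illustrative */
        unsigned char sig[64];
        size_t siglen = sizeof sig;

        /* signer: digital signature on the transaction */
        EVP_MD_CTX *s = EVP_MD_CTX_new();
        EVP_DigestSignInit(s, NULL, NULL, NULL, key);
        EVP_DigestSign(s, sig, &siglen, txn, sizeof txn - 1);

        /* relying party: verify against the on-file public key */
        EVP_MD_CTX *v = EVP_MD_CTX_new();
        EVP_DigestVerifyInit(v, NULL, NULL, NULL, key);
        printf("signature %s\n",
               EVP_DigestVerify(v, sig, siglen, txn, sizeof txn - 1) == 1
                   ? "verifies" : "rejected");

        EVP_MD_CTX_free(s);
        EVP_MD_CTX_free(v);
        EVP_PKEY_free(key);
        EVP_PKEY_CTX_free(kctx);
        return 0;
    }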

one of the other issues has been that a major exploit of retail payment transactions is skimming the account number and using it in fraudulent transactions (see all the references to data breaches and account fraud ... being able to do fraudulent transactions with harvested account numbers)
https://www.garlic.com/~lynn/subintegrity.html#fraud

so the other part of x9.59 was a business rule that account numbers used in x9.59 transactions were not valid in non-authenticated transactions. the issue is that account numbers are used in a large number of business processes (besides the basic transaction) and even if you were to bury the world under miles of encryption ... you still couldn't eliminate account number leakage ... slightly related post on security proportional to risk
https://www.garlic.com/~lynn/2001h.html#61

in the x9.59 scenario ... it is possible for the related account numbers to leak all over the place ... and crooks still can't use them in fraudulent financial transactions (that haven't been digitally signed with the correct private key).

so some of the historical lore is that the original x.500 dap was supposed to have lots of individual personal information. however, having enormous amounts of personal information in one place and publicly available is an enormous privacy issue. so along comes the x.509 identity certificate ... which can also contain enormous amounts of personal information ... but at least it isn't all located in one place. however, by the mid-90s, you started finding institutions realizing that even x.509 identity certificates with enormous amounts of personal information were a significant privacy threat. so you saw some retrenchment to relying-party-only certificates
https://www.garlic.com/~lynn/subpubkey.html#rpo

which basically only contained some type of account number and a public key ... where all the personal information was kept by the institution in the account record ... and not generally available. however, it is trivial to demonstrate that such relying-party-only certificates are redundant and superfluous ... if you have to access the account record anyway (which might only contain trivial amounts of personal information ... like a userid's permissions or some such thing).

the issue in domain name infrastructure ... is that it already provides a mapping between a domain name and an ip-address ... and that it provides real-time access and distribution of the same information. including the public key along with the ip-address doesn't increase the exposure. the domain name infrastructure has gone thru a spate where the registered email addresses were being harvested for spamming purposes ... and eventually got redacted (there have also been some recent news articles that possibly 1/3rd of the information on file with the domain name infrastructure is incorrect ... possibly increasing the vulnerability to domain name hijacking).

publishing public keys isn't any more of a threat than blanketing the world under digital certificates with the same public keys. it isn't as if public keys are shared-secret authentication
https://www.garlic.com/~lynn/subintegrity.html#secret

where the same value is used for origination and authentication (i.e. divulging a shared-secret creates threat of impersonation). public keys are purely used for authentication and aren't designed to be usable for origination and impersonation.

recent posting on related aspects
https://www.garlic.com/~lynn/aadsm22.htm#17 Major Browsers and CAS announce balkenisation of Internet Security

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Wed, 22 Feb 2006 14:55:58 -0700
Dimitri Maziuk writes:
The problems here are opening ldap server to the world, replacing verisign with ldap server's administrator, standardizing use of encryption which may be illegal in some jurisdictions, etfc. Never gonna happen.

part of this can also be looked at from the security PAIN taxonomy
P - privacy (sometimes CAIN & confidentiality)
A - authentication
I - integrity
N - non-repudiation

encryption is frequently associated with P/privacy ... but can also be used for integrity (i.e. at least being able to recognize modifications in transit).

digital signatures can also be used as integrity countermeasure to modifications as well as being used for A/authentication.

i've frequently claimed that the vast majority of encryption sessions on the internet have been SSL associated with electronic commerce and supposedly hiding (encrypting) an account number.

the threat issue is that leakage of a standard account number enables fraudulent transactions ... and is frequently lumped with identity theft/fraud ... but there has been activity in the past couple years attempting to strongly differentiate account fraud from identity fraud (although both involve leakage of sensitive information for fraudulent purposes).

the SSL activity around e-commerce and the original payment gateway
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

was supposedly both to authenticate the communicating website and hide the account numbers. however, there has been lots of stuff attacking the infrastructure at points other than the digital certificate part of the operation. part of this might be attributed to the certification authority industry embellishing the role of the digital certificate as the cornerstone of the security process ... as opposed to one specific mechanism that could be used to implement one small piece of an end-to-end secure business process ... i.e. ignore the digital certificate and attack some less well defended part of the infrastructure. one of my analogies has been installing a specific kind of bank vault door in an open field w/o walls, ceilings, floors ... and then trying to convince everybody that the only way of getting at what was behind the door was to attack the vault door.

now back to the early SSL activity
https://www.garlic.com/~lynn/subpubkey.html#sslcert

and attempting to hide (encrypt) account numbers during transmission. from long before the internet, the major attack on account numbers has been harvesting point-of-sale and/or backroom repositories (that needed the account numbers for other parts of the business process) ... frequently (some 70 percent) by insiders. the encryption during transmission was just to not create additional avenues for harvesting account numbers. however, it ignored that harvesting the backroom repositories (that were in use by several backroom business processes) was still the major threat ... and just the fact that there was internet connectivity created numerous additional threat avenues for the major vulnerability (that the SSL encryption did nothing to address) related to repository harvesting.
https://www.garlic.com/~lynn/subintegrity.html#harvest

that gave rise to my observation that even if you buried the world under miles of encryption (and digital certificates), it wasn't going to address account number leakage and account fraud.
https://www.garlic.com/~lynn/subintegrity.html#fraud

what x9.59 financial standard ... mentioned in the previous post
https://www.garlic.com/~lynn/2006c.html#34 X.509 and ssh

tried to do was to also eliminate account number leakage as a vulnerability ... rather than trying to use constantly increasing amounts of encryption in an attempt to prevent account number leakage ... change the paradigm and create an environment where a leaked account number is no longer a threat/vulnerability for fraudulent transactions.

one of the issues in the mid-90s was that there were some PKI-oriented financial transaction protocol definitions; they would use a digital signature for authentication/integrity along with an appended relying-party-only digital certificate. however, they failed to create a business rule that made account numbers used in digitally signed transactions invalid in transactions that weren't digitally signed. this meant that account number leakage still presented a serious account fraud threat and so also required that the account number continue to be encrypted (in addition to using it in a digitally signed transaction) ... aka this is the scenario where encryption has to be used for things like shared-secrets
https://www.garlic.com/~lynn/subintegrity.html#secret

where divulging the information can result in fraud. the claim is that if you change the paradigm and the items can no longer be used fraudulently, then the need for encryption is drastically reduced (similarly, encryption isn't the only solution if only integrity needs to be addressed).

the other part was that since it was a relying-party-only certificate,
https://www.garlic.com/~lynn/subpubkey.html#rpo

the whole thing still had to be sent to the financial institution that was responsible for the certificate (and the account). the financial institution pulled the account number from the transaction and retrieved the account record ... which had all the financial information as well as the originally registered public key. the financial institution then validated the digital signature with the public key in the account record. as a result the attached digital certificate was totally redundant and superfluous.

it is actually slightly worse than totally redundant and superfluous. the typical retail financial transaction size is on the order of 60-80 bytes and the mid-90s attached, relying-party-only certificate overhead ran 4k to 12k bytes. not only was attaching the relying-party-only digital certificate redundant and superfluous, it also resulted in payload bloat of two orders of magnitude (100 times).

somewhat in response, the x9 financial standards group did start a work item for compressed certificates ... hoping to get the size down to on the order of 300 bytes (so it is only a five times payload bloat instead of 100 times payload bloat for unnecessary, redundant and superfluous certificates). one of the approaches was to establish that all fields that were common to the relying-party-only certificates could be eliminated (leaving only fields that were unique to a specific digital certificate) on the assumption that the relying party already had copies of all the common fields. I pointed out that if you eliminated all digital certificate fields that the relying-party already had copies of, digital certificates could be reduced to zero fields and zero bytes. rather than saying that it was redundant and superfluous to attach digital certificates ... it was perfectly valid to attach zero-byte, compressed digital certificates.

note that the original observation is that the domain name infrastructure is both

1) a technology; providing trusted, real-time, online distribution of information

and

2) a business; providing trusted, real-time, online distribution of information related to domain names.

it turns out that as a business, a public key can also be a perfectly valid domain-name-associated piece of information and be distributed in real-time (rather than requiring stale, static, redundant and superfluous digital certificates to provide an association between a domain name and a public key).
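
to make the piggy-back idea concrete ... a minimal sketch in C (libresolv, link with -lresolv) that pulls TXT records over the same real-time lookup path already used for the name-to-ip-address mapping. the "publish the public key in a TXT record" convention here is a made-up illustration, not an existing standard record type:

    #include <stdio.h>
    #include <resolv.h>
    #include <arpa/nameser.h>

    /* sketch: fetch TXT records for a domain ... the same real-time,
       online flow that already distributes the ip-address could carry
       a public key the same way */
    int print_txt_records(const char *domain)
    {
        unsigned char answer[NS_PACKETSZ];
        ns_msg msg;
        ns_rr rr;

        int len = res_query(domain, ns_c_in, ns_t_txt, answer, sizeof answer);
        if (len < 0 || ns_initparse(answer, len, &msg) < 0)
            return -1;
        for (int i = 0; i < ns_msg_count(msg, ns_s_an); i++) {
            if (ns_parserr(&msg, ns_s_an, i, &rr) < 0)
                continue;
            /* TXT rdata is one or more length-prefixed strings */
            const unsigned char *rd = ns_rr_rdata(rr);
            printf("%.*s\n", rd[0], rd + 1);
        }
        return 0;
    }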

the issue with LDAP isn't so much that real-time, online distribution of information isn't a valid implementation design (one can point to search engines, in addition to the domain name infrastructure, as another example of a real-time, online information distribution mechanism) ... it is just that the LDAP designers possibly didn't give a lot of thot in the protocol to attacks, threats, vulnerabilities, and countermeasures.

for instance, it is possible for both LDAP and RADIUS to have authentication, authorization, and accounting information in a backend RDBMS database ... potentially an LDAP and a RADIUS deployment could even share the same backend RDBMS database. there are RADIUS implementations/deployments that support digital signature verification as the authentication mechanism
https://www.garlic.com/~lynn/subpubkey.html#radius

where the public key used to verify the digital signature is onfile in the backend database (in lieu of current common deployments that use the RADIUS shared-secret, password mechanisms, where the shared-secret is in the backend database).
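
a minimal sketch of what such a shared backend account record might look like (field names and sizes are purely illustrative, not any real RADIUS/LDAP schema) ... the point being that the shared-secret has to be kept confidential because divulging it enables impersonation, while the onfile public key only verifies and is safe to divulge:

    /* sketch: per-account record in the shared backend database */
    struct account_record {
        char userid[64];
        enum { AUTH_SHARED_SECRET, AUTH_PUBLIC_KEY } method;
        union {
            char shared_secret[64];        /* divulging => impersonation */
            unsigned char public_key[32];  /* verification only; safe to leak */
        } verifier;
        unsigned int permissions;          /* authorization info alongside */
    };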

I would assert that the problem isn't in the actual backend database that might be used by LDAP (in many deployments, frequently some RDBMS) ... since there are examples of the RADIUS protocol being able to utilize essentially the same backend database.

Furthermore, domain name infrastructure protocol isn't tied to a specific backend database implementation ... I would assert that it is equally possible to create a domain name infrastructure deployment that used a backend RDBMS database in similar ways to the way that many LDAP deployments make use of backend RDBMS database.

and to complete the topic drift ... lots of posts about the original relational/sql project
https://www.garlic.com/~lynn/submain.html#systemr

Secure web page?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Secure web page?
Newsgroups: alt.computer.security
Date: Wed, 22 Feb 2006 16:40:57 -0700
comphelp@toddh.net (Todd H.) writes:
For what it's worth this is a common fallacy and doesn't tell the whole truth.

All SSL ensures is that the transport of data between your web browser and the server is securely encrypted and safe from man in the middle eavesdropping (assuming the certificate you accept is valid, and issued by a trusted authority to the website you think you're connected to, blah blah blah).


the original SSL for web commerce and the payment gateway
https://www.garlic.com/~lynn/aadsm5.htm#asrn2
https://www.garlic.com/~lynn/aadsm5.htm#asrn3

had the browser checking that the URL domain name typed in by the user matched the domain name in the domain name digital certificate ... after otherwise validating the digital certificate as valid. some of the exploits might be considered partially a result of certification authorities continually stressing the integrity and value of these digital certificates (at the expense of recognizing that digital certificates were a very small part of an overall end-to-end process, as well as not the only possible implementation/solution).

one vulnerability that opened up was that e-commerce websites found that SSL encryption introduced an 80-90 percent overhead (i.e. they could handle 5-10 times as much web activity with the same equipment if they didn't use SSL). as a result, the majority of SSL e-commerce use was moved from the initial webserver connection (from the URL that the user entered as part of connecting to the website) ... to just handling the payment process (in the overall webserver experience).

what you saw was the user getting into a purchase screen and being asked to click on a "payment" (or check-out) button. this button supplied the URL to the browser for doing payment SSL operation.

the threat is that SSL is no longer being used to validate the initial URL domain name connection to the webserver that the user entered ... it is only being used to validate the domain name connection to a payment webpage ... using a URL and domain name supplied by the remote webserver. now, if the user had originally connected to a fraudulent website (because SSL is no longer being used to validate the original connection, which the original use of SSL called for), then any fraudulent website will probably provide a URL and domain name for which the crook actually has a valid certificate ... i.e. the attacker registers some valid domain name and then obtains a valid certificate for that domain name. they then design a payment button that supplies a domain name URL for which they have a matching digital certificate.

this exploit can even be implemented as a man-in-the-middle attack ... the fraudulent webserver (that the user is directly talking to) is simulating a second/shadow session with the real website (so the user is actually seeing real-time information coming off the real website).

misc. past posts on MITM-attacks
https://www.garlic.com/~lynn/subintegrity.html#mitmattack

misc. past posts on general subject of SSL certificates
https://www.garlic.com/~lynn/subpubkey.html#sslcerts

recent posting discussing what SSL encryption is addressing by hiding account numbers for transactions transmitted over the internet
https://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Wed, 22 Feb 2006 22:22:39 -0700
"Richard E. Silverman" writes:
I've been meaning to ask you for a while: is the period key on your keyboard stuck?

there is a stanford phd thesis from the early 80s ... joint between language and computer ai ... on my email, posting, keyboard use, etc habits. a researcher was hired who sat in the back of my office for 9 months to investigate and analyze how i communicated, taking notes, going with me to meetings, etc. they also got copies of all my incoming & outgoing email, all my postings, and logs of all my incoming and outgoing instant messages. detailed analysis was done on all my face-to-face, verbal, and computer communication (including typing idiosyncrasies). the research report also became the phd thesis.

besides the phd thesis, the material was also used in subsequent books and papers.

misc. collected postings mentioning computer mediated communication
https://www.garlic.com/~lynn/subnetwork.html#cmc

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Thu, 23 Feb 2006 09:39:29 -0700
Dimitri Maziuk writes:
Exactly. There's also information that's intentionally omitted: e.g. my workstation does not have a hostname visible to the Internet, yet it's the only machine in our domain that accepts ssh connections from the Internet. Add DNS spoofing and domain name hijacking -- would you really want to build a reliable authentication infrastructure on top of that?

the issue is that the domain name infrastructure is the authoritative agency for domain name ownership. the domain name infrastructure is the "trust root" for the PKI certification authority domain name SSL certificates. the PKI CA has to cross-check that the entity applying for an SSL domain name certificate is the entity registered with the domain name infrastructure as owning that domain.

if the domain name infrastructure information is compromised, then it puts all PKI CA ssl domain name certificates at risk. the issue isn't whether to trust the domain name infrastructure ... as opposed to trusting a PKI CA ssl domain name certificate ... the issue is that the domain name infrastructure is the trust root for all of it ... including the PKI CA ssl domain name certificates.

if there are integrity issues with the domain name infrastructure, then those integrity issues propagate to all PKI CA ssl domain name certificates (i.e. a security infrastructure is only as strong as its weakest link). integrity issues putting the domain name infrastructure at risk ... also put PKI CA ssl domain name certificates at risk ... because the domain name infrastructure is the authoritative agency for domain name ownership ... and therefore the trust root for all PKI CA ssl domain name certificates.

that is why my oft repeated observation that integrity improvements in the domain name infrastructure have been backed by the PKI CA industry ... since they are dependent on the integrity of the domain name infrastructure (just like everybody else). part of those integrity improvements is to have domain name owners register a public key when they register the domain name. then all future communication with the domain name infrastructure is digitally signed, as one of the countermeasures for domain name hijacking.

domain name hijacking not only puts everybody directly dependent on the domain name infrastructure at risk ... but it also puts the PKI CA industry at risk ... because the hijacker can now apply for a SSL domain name certificate and get it ... since the hijacker is now listed as the owner of the domain.

that in turn leads to the observation that if the domain name infrastructure relies on digital signature verification (using the onfile public key), then so can the CA PKI industry for SSL domain name certificate applications (in part because the digital signature verification is validating the true owner of the domain name; if this is vulnerable ... then you still have domain name hijacking; if you can still have domain name hijacking ... then the hijacker can still obtain a perfectly valid SSL domain name certificate ... i.e. the onfile public key is now the trust root for the SSL domain name certificates).

the catch-22 is that if the CA PKI industry starts accepting such onfile public keys as the trust root for SSL domain name certificates (as a countermeasure to things like domain name hijacking) ... then why can't the rest of the world ... eliminating the PKI CA operators as a redundant and superfluous intermediary.

recent post also mentioning the catch-22 dilemma facing the CA PKI ssl domain name certificate industry
https://www.garlic.com/~lynn/aadsm22.htm#22 Major Browsers and CAS announce balkanisation of Internet Security

the above also contains a long list of prior posts mentioning the catch-22 dilemma.

misc. past posts mentioning domain name hijacking
https://www.garlic.com/~lynn/aadsm8.htm#softpki2 Software for PKI
https://www.garlic.com/~lynn/aadsm8.htm#softpki16 DNSSEC (RE: Software for PKI)
https://www.garlic.com/~lynn/aadsm9.htm#cfppki5 CFP: PKI research workshop
https://www.garlic.com/~lynn/aadsm10.htm#cfppki20 CFP: PKI research workshop
https://www.garlic.com/~lynn/aepay11.htm#37 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm14.htm#1 Who's afraid of Mallory Wolf?
https://www.garlic.com/~lynn/aadsm15.htm#28 SSL, client certs, and MITM (was WYTM?)
https://www.garlic.com/~lynn/aadsm17.htm#60 Using crypto against Phishing, Spoofing and Spamming
https://www.garlic.com/~lynn/aadsm19.htm#13 What happened with the session fixation bug?
https://www.garlic.com/~lynn/aadsm19.htm#17 What happened with the session fixation bug?
https://www.garlic.com/~lynn/2000e.html#40 Why trust root CAs ?
https://www.garlic.com/~lynn/2000e.html#47 Why trust root CAs ?
https://www.garlic.com/~lynn/2001d.html#41 solicit advice on purchase of digital certificate
https://www.garlic.com/~lynn/2001e.html#40 Can I create my own SSL key?
https://www.garlic.com/~lynn/2001g.html#19 Root certificates
https://www.garlic.com/~lynn/2001l.html#26 voice encryption box (STU-III for the masses)
https://www.garlic.com/~lynn/2001n.html#73 A PKI question and an answer
https://www.garlic.com/~lynn/2004h.html#28 Convince me that SSL certificates are not a big scam
https://www.garlic.com/~lynn/2005c.html#51 [Lit.] Buffer overruns
https://www.garlic.com/~lynn/2005g.html#1 What is a Certificate?
https://www.garlic.com/~lynn/2005i.html#7 Improving Authentication on the Internet
https://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

X.509 and ssh

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: X.509 and ssh
Newsgroups: comp.security.ssh
Date: Thu, 23 Feb 2006 11:18:05 -0700
Dimitri Maziuk writes:
As far as the chain of trust goes, if you've paranoid security requrements you'll probably want to obtain the key out of band, directly from the issuer, regardless of how many digital signatures there are on somebody's DNS, LDAP, SSH, etc. servers.

issuer? is that issuer of digital certificate? or issuer of public key? or issuer of something else?

nominally somebody is the key owner ... except in scenarios like key recovery ... where an institution issues both the private and public key to the individual.

there is the authoritative agency for some piece of information ... like the domain name infrastructure is the authoritative agency for domain name ownership.

certification authorities ... typically are business processes that certify some information ... for situations where the relying parties don't directly have the information themselves and lack any means of directly contacting any authoritative agency responsible for the information.

finally, certification authorities might manufacture certificates ... as a representation of the certification business process. this is for environments where the relying parties are dependent on the certification process (as above, where they don't directly have the information themselves and/or lack access to the authoritative agency responsible for the information). this certificate/license/credential paradigm has served the offline world for centuries. however, the certificate/license/credential paradigm is rapidly becoming obsolete as the world moves online ... and relying parties are given direct access to the authoritative agencies responsible for the information and/or direct access to certification authorities.

some of the certificate/license/credential operations have attempted to find market niches in no-value operations ... where the infrastructure can't justify the expense of direct online operations for relying parties. however, even this no-value market niche is rapidly shrinking as the cost of performing online operations rapidly declines.

one of the situations is that the domain name infrastructure is the authoritative agency for domain name ownership and relying parties are all already doing online operations with the domain name infrastructure (it would be straight-forward to piggy-back additional information along with the current online information flow).

part of the issue that somewhat obfuscates the fact that the trust root for SSL domain name digital certificates resides with the domain name infrastructure is the misdirection about the cryptographic integrity of the digital certificates ... which is almost a minor nit in the overall end-to-end business operation of the authoritative agency (aka the domain name infrastructure) maintaining correct and accurate information ... and the certification business operations performed by certification authorities ... independent of the actual manufacture of any specific digital certificates.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Fri, 24 Feb 2006 11:21:03 -0700
rpw3@rpw3.org (Rob Warnock) writes:
Again, the only difference a user program sees between a "true SMP" and a "ccNUMA" system is that there's more variance in the cache-line miss delays on the ccNUMA system [at the same number of CPUs], and that ccNUMA systems can scale up to *far* more CPUs than anyone has even gotten "true SMP" to ever work.

I've oft claimed that the non-cache coherency for 801 (from the mid-70s) between the I and D caches for the same processor ... in addition to no provisions for coherency between caches associated with different processors ... was a reaction to the enormous overhead paid by 370s with their store-thru caches and very strong memory consistency model.
https://www.garlic.com/~lynn/subtopic.html#801

of course, i've also oft claimed that the basic 801/risc extreme hardware simplification was a reaction to the failed effort of FS (from the early 70s) with its significant hardware complexity
https://www.garlic.com/~lynn/submain.html#futuresys

there was this internal advanced technology symposium in POK, around the spring of 77. we were presenting a 16-way smp design with significantly simplified cache operation ... and the 801 group was presenting their stuff. our presentation included how we were going to modify a standard kernel to operate the 16-way smp machine. one of the 801 group criticized the presentation, claiming that they had examined the standard kernel code and couldn't figure out how it could possibly support 16-way smp operation ... our reply was that we were going to modify the code ... i estimated that i could get the initial operation up and running with possibly 6k of code changes and corresponding data structure modifications. people in the 801 group expressed some skepticism that we could make such software code modifications ... even tho we had done it before ... for example, misc. postings about VAMPS, a 5-way smp project
https://www.garlic.com/~lynn/submain.html#bounce
and more general smp postings
https://www.garlic.com/~lynn/subtopic.html#smp

the 801 group then presented the 801 hardware and CP.r software design. somewhat as a result of their earlier remarks, I made some number of critical remarks about the significantly simplified 801 hardware. their response was that the 801 design represented hardware/software trade-offs ... that had migrated a lot of hardware features into software ... at the cost of software complexity. for instance, 801 had provisions for a limited number of memory sharing objects (16) and no kernel protection mechanism. the response was that only correct code would be generated by the compilers and that the cp.r system loader had special provisions for loading only correctly generated code. as a result application programs would have full access to all hardware features, including the virtual memory hardware operations ... and could change virtual memory hardware control as easily as applications could change general registers on more familiar hardware designs.

with respect to 370 (as well as separate I & D cache operation), there were other ways of implementing cache consistency ... for instance the reference in this recent post to a 370 clone that implemented separate I & D caches and other improved cache consistency efficiencies
https://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode

going into the late 80s ... you had LANL taking a COTS version of the CRAY high-speed parallel copper hardware channel as the hippi standard, LLNL taking a serial copper non-blocking switch in a fiber form as the FCS (fibre channel) standard, and the guy from SLAC doing a cots/standard version of fiber for latency compensation (including memory and smp cache consistency) as the SCI (scalable coherent interface) standard.
http://www.scizzl.com/

we were fortunate enuf to participate in all of the activities.

SCI/cots numa memory consistency was a 64-port design, and the fundamental SCI issue was moving from synchronous bus operation (for memory as well as other operations) to asynchronous bus operation ... somewhat leveraging the underlying fiber interconnect that involved dual-simplex operation (i.e. pairs of fiber, with a dedicated fiber cable for signals in each direction). since the underlying fiber technology implementation involved dedicated hardware for communication in each direction ... take advantage of the technology that decoupled signals going in one direction from signals going in the other direction to move to an asynchronous protocol ... and also use the hardware and protocol change to break from synchronous bus operation, addressing the ever increasing latency delay with synchronous bus operations.

in the early 90s, you saw convex using a HP board with two HP SMP processors in an SCI (64-port) implementation to do the 128-way exemplar. you also had both sequent and data general taking a four intel SMP processor board in an SCI (64-port) implementation to do 256-way numa implementations (we got to do some consulting with both convex and sequent). these were essentially cots implementations with essentially commodity components. sequent had been using their modified dynix (unix) system to support 32-way smp ... using a more conventional snoopy bus. part of the SCI limitation to a 64-port design was being able to have a really cots/commodity standard and off-the-shelf chips. there were a number of efforts in the valley in the mid-90s that looked at custom numa implementations that involved thousands of processor chips. part of the business issue in the 90s was trading off the size of the market for systems involving thousands of processors (needing custom designed proprietary chips), against the much larger market for systems with tens or hundreds of processors (and being able to use off-the-shelf chips).

when we were doing ha/cmp
https://www.garlic.com/~lynn/subtopic.html#hacmp

the executive we reported to moved over to head up somerset ... the joint ibm, motorola, apple, et al effort that i've somewhat characterized as taking 801/rios (with absolutely no provisions for cache consistency), redoing the whole cache operation (as well as creating a single chip version) ... and marrying it with motorola's 88k smp memory bus operation. a little after we left to do other things ... he also left somerset to be president of MIPS (this was after SGI had already bought MIPS) and we did some work with him.

this is somewhat the long-standing joke in the valley about there actually only being 200 people in the industry ... it is just that the same people keep moving around.

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Fri, 24 Feb 2006 11:42:30 -0700
Anne & Lynn Wheeler writes:
conventional snoopy bus. part of the SCI limitation to a 64-port design was being able to have a really cots/commodity standard and off-the-shelf chips. there were a number of efforts in the valley in the mid-90s that looked at custom numa implementations that involved thousands of processor chips. part of the business issue in the 90s was trading off the size of the market for systems involving thousands of processors (needing custom designed proprietary chips), against the much larger market for systems with tens or hundreds of processors (and being able to use off-the-shelf chips).

ref:
https://www.garlic.com/~lynn/2006c.html#40

slightly related news article on the subject from this morning:
http://news.com.com/A+1%2C000-processor+computer+for+100K/2100-1010_3-6042849.html?tag=nefd.top

one of the reasons that we were doing ha/cmp for scale-up was that the rios chips had no provisions for cache coherency and therefore there was no way of building cache-consistent smp systems ... so we looked at using FCS for building scale-up clusters instead. minor reference
https://www.garlic.com/~lynn/95.html#13

part of this was doing a scalable distributed lock manager design ... recent posting on that subject in this thread:
https://www.garlic.com/~lynn/2006c.html#8

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Fri, 24 Feb 2006 15:12:17 -0700
jmfbahciv writes:
Sure. Now go talk to your software people. The performance penalty the code has implement to ensure coherency is worse. I don't think that can be changed; it's one of those sticklers of reality.

there are two types of software support ... one is actual hardware cache coherency and the other is smp processing coherency ... frequently referred to as serialization (aka coherency/consistency is maintained by enforcing specific serialization).

the 801/risc separate I(nstruction) & D(ata) caches from the 70s weren't hardware coherent. for the most part this didn't surface at the software level, although it precluded any of the self-modifying instruction operations that were found in 360/370. it did surface for program loaders, which tended to treat sequences of instructions as data ... which might result in modified cache lines (containing instructions) appearing in the (store "into") data cache. loaders then had a special instruction that forced changed cache lines from the D-cache back to memory ... so that when the I-cache went to pick up the cache line from memory, it would have any changes done by the loader.
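
the same loader problem survives on current split-cache machines ... a minimal sketch in C of the modern equivalent (assumes gcc/clang on a POSIX system; the compiler builtin stands in for the 801 loader's special instruction, and note some systems forbid writable+executable mappings):

    #include <string.h>
    #include <sys/mman.h>

    /* sketch: code written through the data side (D-cache) has to be
       pushed out before the instruction side (I-cache) fetches it */
    void load_and_run(const unsigned char *code, size_t len)
    {
        unsigned char *buf = mmap(NULL, len,
                                  PROT_READ | PROT_WRITE | PROT_EXEC,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        memcpy(buf, code, len);     /* instructions treated as data */
        __builtin___clear_cache((char *)buf, (char *)buf + len);
        ((void (*)(void))buf)();    /* only now safe to execute */
    }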

801/rios had no cache coherency (somerset was the project to do 801-like smp cache coherency and a single-chip implementation for the power/pc). however, there was an early 4-way "smp" 801/rios using a chip that was called ".9" (or single-chip) rios ... which didn't have cache coherency. what was done was that an additional flag was added to the segment registers ... this flag controlled whether data operations used standard d-cache operations or completely bypassed the d-cache and always used values directly from memory. software then had to be sensitized to what stuff would be cache'able (and fast) and what stuff would be non-cache'able (slow, but consistent).

one of the things that was done for exemplar was a mach (from cmu) kernel implementation where the smp processor complex could be configured (partitioned) as multiple clusters with a subset of the processors. between the clusters, message passing i/o paradigm could be simulated with memory-to-memory operations.

there was also software stuff about whether memory from (slower accessed) numa banks was used directly or if it was first copied to closest/fastest numa memory bank (for the specific processor).

this somewhat harks back to the 360 days (in the 60s) with 8mbyte/8mic LCS. you found these boxes installed on 360/50s (main memory, 2byte, 2mic) and 360/65s (main memory, 8byte, 750nsec). you found some configurations using the slower 8mbyte LCS memory bank as an extension of standard storage ... and other configurations that used the 8mbytes more like fast disk ... copying data &/or programs between LCS and standard, faster memory. you found some installations mixing the two modes of operation ... sometimes using LCS as a straight-forward extension of standard memory and sometimes using it as fast electronic disk ... copying data/programs between LCS and regular memory.

for some topic drift, connection between ampex and oracle:
https://www.garlic.com/~lynn/2000e.html#49 How did Oracle get started?

misc. random other web refs courtesy of search engine
https://en.wikipedia.org/wiki/Oracle_database
http://www.answers.com/topic/oracle-database
http://nndb.com/people/439/000022373/
http://www.cs.iupui.edu/~ateal/n311/history.htm
http://www.webenglish.com.tw/encyclopedia/en/wikipedia/o/or/oracle_database.html

and for further relational drift, collected posts on original relational/sql implementation
https://www.garlic.com/~lynn/submain.html#systemr

ampex was one of the manufacturers of LCS ... they had this building west of 101, south of sanfran. some past posts mentioning ampex:
https://www.garlic.com/~lynn/2000.html#7 "OEM"?
https://www.garlic.com/~lynn/2000e.html#2 Ridiculous
https://www.garlic.com/~lynn/2000e.html#3 Ridiculous
https://www.garlic.com/~lynn/2001f.html#51 Logo (was Re: 5-player Spacewar?)
https://www.garlic.com/~lynn/2001j.html#15 Parity - why even or odd (was Re: Load Locked (was: IA64 running out of steam))
https://www.garlic.com/~lynn/2003c.html#62 Re : OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2003c.html#63 Re : OT: One for the historians - 360/91
https://www.garlic.com/~lynn/2003d.html#28 Why only 24 bits on S/360?
https://www.garlic.com/~lynn/2003g.html#49 Lisp Machines
https://www.garlic.com/~lynn/2004c.html#39 Memory Affinity
https://www.garlic.com/~lynn/2005j.html#13 Performance and Capacity Planning
https://www.garlic.com/~lynn/2005k.html#19 The 8008 (was: Blinky lights WAS: The SR-71 Blackbird was designed ENTIRELYwith slide rules)

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Fri, 24 Feb 2006 16:43:56 -0700
Anne & Lynn Wheeler writes:
one of the things that was done for exemplar was a mach (from cmu) kernel implementation where the smp processor complex could be configured (partitioned) as multiple clusters with a subset of the processors. between the clusters, message passing i/o paradigm could be simulated with memory-to-memory operations.

recent posts in the thread
https://www.garlic.com/~lynn/2006c.html#29 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#30 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#40 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#41 IBM 610 workstation computer
https://www.garlic.com/~lynn/2006c.html#42 IBM 610 workstation computer

note that VM had been doing something similar for years ... using memory-to-memory moves simulating various kinds of processor-to-processor I/O transfers between virtual machines that happen to be running on the same real processor complex ... these were clusters of virtual machines sharing the same processor complex (including memory).

the later cluster partitioning in numa processor complexes ... had groups of real processors that all have access to the same real memory.

here is a more recent example ... "Server-to-server communication with near-zero latency"
http://www-03.ibm.com/servers/eserver/zseries/networking/hipersockets.html

misc. other examples found by search engine
http://www.findarticles.com/p/articles/mi_qa3649/is_200308/ai_n9253733
http://sinenomine.net/node/44

the real original processor-to-processor mainframe interconnect from the 60s was the CTCA ... which was virtualized by VM by at least the 70s (using memory copies to simulate i/o message passing):
http://204.146.134.18/devpages/MUSSEL/vctc/vctc.html

an enhanced version was trotter/3088, which was being developed in the time-frame that my wife served her stint in POK in charge of "loosely-coupled" architecture (i.e. mainframe speak for clusters). she had developed Peer-Coupled Shared Data architecture and seemed to be in constant battles with the "communication" division ... which wanted all message movement paradigms to be based on their terminal control message infrastructure (which was hierarchical master/slave and extremely poorly suited for peer-to-peer operation). misc. past references:
https://www.garlic.com/~lynn/submain.html#shareddata

and for complete topic drift ... reference to article in eserver magazine last year:
https://web.archive.org/web/20190524015712/http://www.ibmsystemsmag.com/mainframe/stoprun/Stop-Run/Making-History/

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 10:43:52 -0700
jmfbahciv writes:
Was this the thingie you guys called "background batch" for the job mixes? I think my momentary brilliance of understanding and remembering all this stuff has passed. Thus, if this is stupid question, you may say so :-).

background batch was software scheduling of workload.

LCS was memory hardware. They are totally independent constructs. A typical 360/50 might have 256kbytes of 2mic, 2byte storage. A typical 360/65 might have 512kbytes of 750ns, 8byte storage (i.e. fetches 8bytes at a time in 750ns). You could add 8mbytes of 8mic LCS memory ... which ran slower than regular memory ... sort of an early numa (non-uniform memory architecture).

the straight-forward approach with numa is to just treat it as normal memory and ignore the difference in thruput/performance. A little more sophisticated was to carefully allocate frequently used stuff in higher-speed memory and less frequently used stuff in LCS ... but allow it to execute as normal instructions/data when it was used. Alternatively, there were some strategies that would have things "staged" in LCS (typically small pieces of system code) that would be copied from LCS to higher-speed memory before actual use/execution. another strategy was to use LCS as a sort of electronic disk ... data that was normally located and used on disk ... would be staged into LCS ... and treated as if it was still resident on disk.

so for topic drift ... a story about extreme background batch. I've mentioned this before in connection with the 370/195 in bldg. 28 (sjr). PASC had this job that would run a couple hrs on the 370/195 ... however, the backlog of work on the 370/195 was such that PASC's application had a 3 month turnaround (i.e. from the time it was submitted to the time it was run was 3 months).

PASC had a 370/145, which had about 1/30th the thruput of peak 370/195 thruput ... which they used for normal vm/cms interactive workload. This is also the machine they developed apl\cms and the apl 370/145 microcode assist on. So they configured this machine to run the PASC application in background ... along with doing periodic check-pointing (so it could be restarted after any system reboot). It would typically get little or no thruput during 1st shift ... but typically would get all of the 145 during 3rd shift. It might take a month or more elapsed time to run on the 370/145 ... but that was still better turn-around than the 3 months they were getting from the 370/195 at sjr (typically background batch got resources to run when there was nothing else using the resources).

misc. other topic drift, the original port of apl\360 to cms\apl was done at the cambridge science center
https://www.garlic.com/~lynn/subtopic.html#545tech

then PASC did the morph from cms\apl to apl\cms (as well as the 370/145 apl microcode assist). misc. posts mentioning apl ... including cms\apl, apl\cms, apl use at HONE (after the consolidation of all the US HONE datacenters, the US hone datacenter was across the back parking lot from PASC)
https://www.garlic.com/~lynn/subtopic.html#hone

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 11:26:31 -0700
jmfbahciv writes:
Sure. Now go talk to your software people. The performance penalty the code has implement to ensure coherency is worse. I don't think that can be changed; it's one of those sticklers of reality.

one of the issues may be that some state involves synchronous updating of multiple memory locations ... sort of like an update in databases where there is a transfer from one account to another: the account record that is debited and the account record that is credited both need to be updated as a consistent commit.

cache coherency will make sure that an update of any single piece of data consistently occurs across all caches in the infrastructure (regardless of whether main memory is numa or not).

however, kernel stuff in uniprocessor operation ... is frequently dependent on straight-thru code execution with no interruptions to achieve consistent status updates involving multi-location state.

in multiprocessor operation with multiple processors potentially executing the same code ... not only do you have to prevent interruptions (in the kernel) to get consistent state update ... but you also have to make sure that any other processors aren't concurrently updating the same set of locations in an inconsistent manner.

360 had the test&set instruction that could be used for implementing locking protocols. a single storage location was defined as the lock, and the test&set instruction was a special instruction that was allowed to fetch, conditionally update, and store the result in a single atomic operation. this is in contrast to most fetch/update/store instruction sequences, which allowed multi-processor race conditions ... i.e. multiple processors could fetch, update, and store the same location concurrently; the resulting location value was a race on whichever processor was last ... the last value would be consistent across all processors ... but the value itself wouldn't be correct. This is the bank account scenario ... where you have an account with $1000 and three concurrent debits happening, each for $100. each debit concurrently fetched $1000, subtracted $100, and stored back $900 ... when in fact, you needed them to be individually serialized so that the final resulting value was $700 (not $900).

test&set was a pretty limited instruction, mostly used for things like spin-locks. it tested a location for zero and set it to non-zero, with the condition code indicating whether the location had already been non-zero ... all in a single, atomic operation. simplest code was something like

   TS    location
   BNZ   *-8
serialized code sequence
....
...
   MVI   location,0

a processor would "get the lock" by changing "location" from zero to non-zero. when it was done, it would reset the value back to zero. any other processors attempting to get the same lock would "spin" (loop) on the non-zero branch until the "location" returned to zero.
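
the TS/BNZ pattern maps directly onto the C11 atomic_flag operations ... a minimal sketch (illustrative only, not the 360 instruction itself):

    #include <stdatomic.h>

    /* sketch: atomic_flag_test_and_set has the same test-and-set
       semantics as the 360 TS instruction */
    atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void)
    {
        while (atomic_flag_test_and_set(&lock))   /* TS + BNZ *-8 */
            ;                                     /* spin until cleared */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);                 /* MVI location,0 */
    }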

the instruction is still available:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.92?SHELF=EZ2HW125&DT=19970613131822

as mentioned in the above, you can now use compare&swap. when charlie was working at the science center on fine-grained locking for cp67 (running on 360/67), he observed that a large number of test&set usages involved serializing very short sequences of simple, single location storage updates ... like push/pop of the top element on a threaded list. he invented what was to be called compare&swap (we had to come up with a mnemonic that matched his initials, CAS). the first pass at trying to get compare&swap added to 370 was met with rejection, the architecture red book people in POK saying that 370 didn't need any additional multiprocessor specific instructions (other than the test&set from 360). if compare&swap was to be added to 370, it had to have a non-multiprocessor specific use. thus we came up with the scenarios for using the compare&swap instruction in multi-threaded application code (enabled for interrupts, and equally applicable whether running on a single-processor or multi-processor configuration).
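
to make the multi-threaded application scenario concrete ... the bank-account debit example above, serialized with compare&swap semantics; a minimal C11 sketch (stdatomic/pthreads as modern stand-ins, not the 370 instruction itself):

    #include <stdio.h>
    #include <stdatomic.h>
    #include <pthread.h>

    /* sketch: three concurrent $100 debits against a $1000 balance */
    _Atomic long balance = 1000;

    void *debit100(void *arg)
    {
        long old = atomic_load(&balance);
        /* the store only succeeds if nobody changed balance in between;
           on failure, old is refreshed and the loop retries */
        while (!atomic_compare_exchange_weak(&balance, &old, old - 100))
            ;
        return NULL;
    }

    int main(void)
    {
        pthread_t t[3];
        for (int i = 0; i < 3; i++)
            pthread_create(&t[i], NULL, debit100, NULL);
        for (int i = 0; i < 3; i++)
            pthread_join(t[i], NULL);
        printf("balance = %ld\n", atomic_load(&balance)); /* always 700 */
        return 0;
    }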

more recent description of compare&swap
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.22?SHELF=EZ2HW125&DT=19970613131822

for inclusion into 370, it was expanded to two instructions, single word atomic operation and double word operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/7.5.23?SHELF=EZ2HW125&DT=19970613131822

what used to be the compare&swap "programming notes" has since been moved to a section in the appendix on "multiprogramming and multiprocessing" examples ("multiprogramming" is mainframe-speak for multi-threaded, from at least the 60s)
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/A.6?SHELF=EZ2HW125&DT=19970613131822&CASE=

lots of past posts on smp operation as well as compare&swap
https://www.garlic.com/~lynn/subtopic.html#smp

801/RIOS (i.e. power, etc) had no provisions for cache consistency and/or atomic synchronization instructions. however, they found that some number of applications, like large database operations, had adopted the compare&swap paradigm for multi-threaded operation (even on single processors). eventually AIX had to provide a simulation of the compare&swap instruction. a mnemonic was created for compare&swap that executed a supervisor call into the kernel, with a highly optimized pathlength in the supervisor call interrupt handler (it simulated the compare&swap semantics inside the supervisor call interrupt handler and immediately returned to the invoking application).
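
why that simulation works: on a single processor, atomicity only requires that the sequence not be interrupted ... a minimal sketch of the idea (disable_interrupts/enable_interrupts are hypothetical kernel primitives here, not the actual AIX interface):

    /* sketch: kernel-side compare&swap emulation for a uniprocessor;
       atomicity comes from running uninterrupted, not from hardware */
    long emulate_compare_and_swap(volatile long *addr, long expected, long new_val)
    {
        long ok;
        disable_interrupts();      /* hypothetical primitive */
        ok = (*addr == expected);
        if (ok)
            *addr = new_val;
        enable_interrupts();       /* hypothetical primitive */
        return ok;                 /* 1 if swapped, 0 otherwise */
    }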

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/

Hercules 3.04 announcement

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Hercules 3.04 announcement
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Sun, 26 Feb 2006 11:37:39 -0700
for some drift, maybe somebody out there could also help with this code name table (that i started before the 3390s)

           2301       fixed-head/track (2303 but 4 r/w heads at a time)
           2303       fixed-head/track r/w 1-head (1/4th rate of 2301)
Corinth    2305-1     fixed-head/track
Zeus       2305-2     fixed-head/track
           2311
           2314
           2321       data-cell "washing machine"
Piccolo    3310       FBA
Merlin     3330-1
Iceberg    3330-11
Winchester 3340-35
           3340-70
           3344       (3350 physical drive simulating multiple 3340s)
Madrid     3350
NFP        3370       FBA
Florence   3375       3370 supporting CKD
Coronado   3380 A04, AA4, B04
EvergreenD 3380 AD4, BD4
EvergreenE 3380 AE4, BE4
           3830       disk controller, horizontal microcode engine
Cybernet   3850       MSS (also Comanche & Oak)
Cutter     3880       disk controller, jib-prime (vertical) mcode engine
Ironwood   3880-11    (4kbyte/page block 8mbyte cache)
Sheriff    3880-13    (full track 8mbyte cache)
Sahara     3880-21    (larger cache for "11")
??         3880-23    (larger cache for "13")

IBM 610 workstation computer

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: IBM 610 workstation computer
Newsgroups: alt.folklore.computers
Date: Sun, 26 Feb 2006 12:54:25 -0700
Anne & Lynn Wheeler writes:
test&set was a pretty limited instruction, mostly used for things like spin-locks. it tested a location for zero and set it to non-zero, setting the condition code to show whether the location had been zero (i.e. whether this processor obtained the lock) ... all in a single, atomic operation. the simplest code was something like

TS    location
   BNZ   *-8
serialized code sequence
....
...
   MVI   location,0

ref:
https://www.garlic.com/~lynn/2006c.html#45 IBM 610 workstation computer

so the os/360 smp kernels had a single "spin-lock" around the whole kernel ... i.e. the first thing on entering a kernel interrupt handler was to do the TS/BNZ spin-lock loop, and then at exit from the kernel (into application code), clear the lock.

this could take a kernel developed for single-processor use and quickly and trivially adapt it for multi-processor use. the problem was that if the overall system spent any appreciable time executing in the kernel, a large percentage of multi-processor operation would be spent in the TS/BNZ spin-loop.
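the single-kernel-lock shape is just the TS/BNZ pattern applied kernel-wide; a sketch (my c rendering, added for illustration):

#include <stdatomic.h>

static atomic_flag kernel_lock = ATOMIC_FLAG_INIT;

void kernel_entry(void)
{
    /* every processor entering any interrupt handler funnels
       through here; all but one spin whenever the kernel is busy */
    while (atomic_flag_test_and_set(&kernel_lock))
        ;   /* TS/BNZ spin-loop */
    /* ... the entire kernel runs under this one lock ... */
}

void kernel_exit(void)
{
    atomic_flag_clear(&kernel_lock);   /* back to application code */
}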

charlie's work on fine-grain locking was to have lots of different locks and only little snippets of code between obtaining a specific lock and clearing the lock. in the single-kernel-lock scenario, there could be lots of "lock" contention ... processors spending huge amounts of time trying to obtain the single, kernel lock. having lots of unique locks around different little snippets of code, could hopefully reduce the amount of lock contention (and time processors spent spinning on locks).

there are also two different paradigms ... "locking" code and "locking" data structures. when "locking" code, there is a unique lock for a specific sequence of code. when "locking" data structures, there are unique locks for specific data structures ... whenever some code (regardless of its location) attempts to operate on a specific data structure, it needs to obtain the associated data structure lock. the use of a "code" lock will only appear in a single piece of code. the use of a "data structure" lock can appear in a large number of different code locations ... wherever there is activity involving a specific data structure. fine-grain locking tended to be data-structure specific ... rather than code specific.
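a minimal sketch of the data-structure style (my illustration): the lock lives with the data, and every code path that touches the structure takes that lock, wherever the code happens to be:

#include <pthread.h>
#include <stddef.h>

struct node { struct node *next; };

/* the lock is a field of the structure it protects */
struct queue {
    pthread_mutex_t lock;
    struct node    *head;
};

void enqueue(struct queue *q, struct node *n)
{
    pthread_mutex_lock(&q->lock);    /* per-structure, not per-code-path */
    n->next = q->head;
    q->head = n;
    pthread_mutex_unlock(&q->lock);
}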

so for VAMPS, I attempted a hack ... one that gained the majority of the benefits of fine-grain locking with something close to the minimal work of a single kernel spin-lock. the highest-use kernel code ... typically around the interrupt handlers ... was modified with locks for the fine-grain locking paradigm. this drew on some of the ecps analysis ... looking for the highest-use kernel pathlengths for migration into microcode. misc. past ecps postings:
https://www.garlic.com/~lynn/94.html#21 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#27 370 ECPS VM microcode assist
https://www.garlic.com/~lynn/94.html#28 370 ECPS VM microcode assist

then there was a single lock for the rest of the kernel; however, instead of spinning, a processor that couldn't get the lock would queue a light-weight kernel work request and go off to the dispatcher to look for other application code to execute. VAMPS was going to have up to five processors in a multiprocessor configuration. as long as the time a single processor spent in the single-lock, low-usage kernel code was well under 20 percent (of a single processor), there would be little time wasted in lock contention in five-way operation (five processors each spending 20 percent of their time under one lock is exactly enough to keep that lock constantly busy ... staying well under that keeps contention low). i referred to the VAMPS kernel lock as a bounce-lock ... rather than a spin-lock (a processor would bounce off the kernel lock if it couldn't obtain it and go off to look for other work ... rather than spinning; see the sketch below the link). misc. VAMPS postings
https://www.garlic.com/~lynn/submain.html#bounce
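a conceptual sketch of the bounce-lock (mine, not the VAMPS code; queue_kernel_work, run_queued_kernel_work, and dispatch_application_work are hypothetical helpers): a processor that fails to get the kernel lock doesn't spin ... it queues the request and goes back to dispatching application work:

#include <pthread.h>

struct request;   /* a light-weight kernel work request */

extern void queue_kernel_work(struct request *r);   /* hypothetical */
extern void run_queued_kernel_work(void);           /* hypothetical */
extern void dispatch_application_work(void);        /* hypothetical */

static pthread_mutex_t kernel_lock = PTHREAD_MUTEX_INITIALIZER;

void kernel_service(struct request *r)
{
    if (pthread_mutex_trylock(&kernel_lock) != 0) {
        queue_kernel_work(r);            /* leave it for the lock holder */
        dispatch_application_work();     /* "bounce" off, find other work */
        return;
    }
    run_queued_kernel_work();            /* drain anything queued while busy */
    /* ... perform this request's low-usage kernel function ... */
    pthread_mutex_unlock(&kernel_lock);
}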

the objective was to make the absolute minimum number of code changes to a single-processor kernel for the maximum possible thruput in a 5-way smp configuration.

in the original VAMPS design ... all of the high-use stuff was actually dropped into the microcode of the individual processors (dispatching, interrupt handling, and some number of other things, again some adapted from ecps). when execution required some low-usage kernel function (still in 370 code), the microcode would attempt to obtain the kernel lock and then exit into the 370 code. if the microcode couldn't obtain the kernel lock, it would queue a work request for the 370 kernel code and go off to the (microcode) dispatcher, looking for other work to do.

from a 370 kernel code standpoint ... the semantics look more like the smp "machine" dispatcher found later in the i432.

when VAMPS was killed, the design was adapted from a highly modified microcode design to a much more normal 370 kernel ... with a combination of fine-grain locks around little pieces of high-use kernel code ... typically the interrupt handlers ... and the majority of the (low-use) kernel handled by a single kernel bounce lock.

later, the development group began to refer to it as a "defer" lock ... representing the semantics associated with the unit of work ... as opposed to the semantics associated with a specific processor's execution (as i had originally referred to the lock).

--
Anne & Lynn Wheeler | https://www.garlic.com/~lynn/



