List of Archived Posts

2007 Newsgroup Postings (04/10 - 04/23)

The Perfect Computer - 36 bits?
21st Century ISA goals?
The Mainframe in 10 Years
21st Century ISA goals?
21st Century ISA goals?
21st Century ISA goals?
21st Century ISA goals?
The Mainframe in 10 Years
whiny question: Why won't z/OS support the HMC 3270 emulator
21st Century ISA goals?
The Perfect Computer - 36 bits?
The Perfect Computer - 36 bits?
The Perfect Computer - 36 bits?
Question on DASD Hardware
conformance
asymmetric cryptography + digital signature
conformance
MIPS and RISC
sizeof() was: The Perfect Computer - 36 bits?
Working while young
sizeof() was: The Perfect Computer - 36 bits?
asymmetric cryptography + digital signature
sizeof() was: The Perfect Computer - 36 bits?
MIPS and RISC
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
GA24-3639
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
Experts: Education key to U.S. competitiveness
sizeof() was: The Perfect Computer - 36 bits?
sizeof() was: The Perfect Computer - 36 bits?
ANN: Microsoft goes Open Source
ANN: Microsoft goes Open Source
ANN: Microsoft goes Open Source
Securing financial transactions a high priority for 2007
ANN: Microsoft goes Open Source
ANN: Microsoft goes Open Source
Securing financial transactions a high priority for 2007
ANN: Microsoft goes Open Source
John W. Backus, 82, Fortran developer, dies
ANN: Microsoft goes Open Source
ANN: Microsoft goes Open Source
T.J. Maxx data theft worse than first reported
ANN: Microsoft goes Open Source
T.J. Maxx data theft worse than first reported
ANN: Microsoft goes Open Source
Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
Fast and Safe C Strings: User friendly C macros to Declare and use C Strings
sizeof() was: The Perfect Computer - 36 bits?
T.J. Maxx data theft worse than first reported
sizeof() was: The Perfect Computer - 36 bits?
ANN: Microsoft goes Open Source
sizeof() was: The Perfect Computer - 36 bits?
SSL vs. SSL over tcp/ip
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
John W. Backus, 82, Fortran developer, dies
Linux: The Completely Fair Scheduler
John W. Backus, 82, Fortran developer, dies

The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 10 Apr 2007 15:24:52 -0600
eugene@cse.ucsc.edu (Eugene Miya) writes:
If that were true, then IBM would have stepped up to the plate and offered it. I suspect that Taylor, Kahn, Cerf and others would take issue with that.

re:
http://www.garlic.com/~lynn/2007g.html#84 The Perfect Computer - 36 bits?

I'm waiting ...

but while waiting for you to ask them to comment ... how about a sample of some RFC references (from my RFC index) to give some idea of the size of the early arpanet:

clicking on the ".txt=nnn" field in the RFC summary retrieves the actual RFC

http://www.garlic.com/~lynn/rfcidx0.htm#293
293 -
Network host status, Westheimer E., 1972/01/18 (3pp) (.txt=7639) (Obsoleted by 298) (Obsoletes 288) (Updates 288)


http://www.garlic.com/~lynn/rfcidx0.htm#235
235 -
Site status, Westheimer E., 1971/09/27 (4pp) (.txt=7994) (Obsoleted by 240)


... snip ...

based on the stats in the various referenced RFCs, the uptime of the early arpanet wasn't very good.

since some number of the machines mentioned in the above RFCs were ibm machines ... it should be obvious that SNA wasn't the only mechanism in use by ibm mainframes. In fact, SNA was primarily a master/slave terminal control infrastructure (aka "VTAM", virtual telecommunication access method ... somewhat a terminal control follow-on to TCAM ... telecommunication access method) ... not really suited for doing peer-to-peer networking operations. and, in fact, SNA wasn't even announced until 1974:
http://www-03.ibm.com/ibm/history/history/year_1974.html

before that, there were (at least) two early internal network activities, 1) one was sometimes referred to as "SUN" ... os/360 batch oriented systems based on HASP (in large part growing out of the HASP network updates from TUCC) and 2) the work at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

with cp67 based implementation (between cp67 machines). Neither of these involved SNA ... and in fact, their origins predate SNA.

The HASP-based network implementation actually suffered some of the same kind of limitations as the arpanet in terms of addressability and (mostly) requiring end-to-end homogeneous software operation of the network nodes; in the arpanet case, it was in the IMPs, while in the HASP case, all the support was in the mainframe, not outboard (the requirement for a separate, managed BBN box might even be considered one of the inhibitors to arpanet growth).

The CP67 origin stuff was much more flexible, having a kind of layered gateway architecture (more akin to later internetworking protocol) ... and when a "HASP" gateway driver was created for CP67 ... the two groups/collections of internal machines then were able to form a common network.

misc. past posts mentioning HASP, JES, HASP/JES networking implementation limitations, etc
http://www.garlic.com/~lynn/submain.html#hasp

eventually cp67/vm370 based infrastructure came to dominate the internal network (still not having anything to do with sna) ... and in fact was leveraged by the HASP/JES operations to provide format translations between different versions/releases (of HASP/JES ... such incompatibilities were known to crash the respective MVT/SVS/MVS operating system, i.e. an intermediate cp67/vm370 node could be required to even allow two different HASP systems to communicate).

misc. old email touching on the internal network
http://www.garlic.com/~lynn/lhwemail.html#vnet

for some completely random topic drift ... the primary person (associated with dataquest) doing the high-speed interconnect study for the ha/cmp scaleup activity, mentioned here:
http://www.garlic.com/~lynn/2007g.html#81 IBM to the PCM market

in relation to the work mentioned in these old emails
http://www.garlic.com/~lynn/lhwemail.html#medusa

had a decade earlier worked at Santa Teresa lab ... and a decade or so before that, as undergraduate at UCSB, had been hired to do network penetration testing (before the UCSB arpanet connection was activated)

misc. past threads/posts where you've made similar comments on the subject:
http://www.garlic.com/~lynn/2002b.html#53 Computer Naming Conventions
http://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
http://www.garlic.com/~lynn/2002b.html#56 Computer Naming Conventions
http://www.garlic.com/~lynn/2002b.html#57 Computer Naming Conventions
http://www.garlic.com/~lynn/2002b.html#58 ibm vnet : Computer Naming Conventions
http://www.garlic.com/~lynn/2002k.html#19 Vnet : Unbelievable
http://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
http://www.garlic.com/~lynn/2004g.html#26 network history
http://www.garlic.com/~lynn/2004g.html#31 network history
http://www.garlic.com/~lynn/2004g.html#32 network history
http://www.garlic.com/~lynn/2004g.html#33 network history
http://www.garlic.com/~lynn/2006j.html#45 Arpa address
http://www.garlic.com/~lynn/2006k.html#8 Arpa address
http://www.garlic.com/~lynn/2006k.html#9 Arpa address
http://www.garlic.com/~lynn/2006n.html#2 The System/360 Model 20 Wasn't As Bad As All That
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006t.html#43 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2006u.html#7 The Future of CPUs: What's After Multi-Core?

21st Century ISA goals?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Wed, 11 Apr 2007 09:52:42 -0600
Stephen Fuld <S.Fuld@PleaseRemove.att.net> writes:
Lets look at the way that IBM mainframes do I/O for an example. I chose this because it is pretty easy to explain, not that I think it is necessarily the best implementation. This is, of course, a somewhat simplified description.

just for the fun of it ... from a slightly different standpoint

the 370/158 engine had two sets of microcode that shared execution on the same processor ... the 370 microcode and the "integrated channel microcode" (that had support for up to six configured channels).

for the "next generation" (after 370), the 303x ... they took a 158 engine w/o the 370 microcode and packaged it as an independent box, the "channel director".

A 3031 was a repackaged 370/158 with only the 370 microcode (and w/o the integrated channel microcode) and an external "channel director" (could be considered a two-processor SMP but with the two engines running different microcode).

A 3032 was a repackaged 370/168 that could be configured with one to three external channel directors (for up to 16 channels).

A 3033 started out as 168 wiring diagram mapped to faster chip technology.

The Mainframe in 10 Years

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mainframe in 10 Years...
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 11 Apr 2007 10:42:04 -0600
abain@ibm-main.lst (Alan Bain) writes:
I have been asked by my management as well as a couple of clients to try to predict the future of the mainframe. Although I have done some research and have talked to many client companies, I thought I would ask a very open ended question to this list since it is you who have your fingers on the true pulse. I also have enjoyed your many and varied responses to questions over the years and feel that this exercise may be informative as well as entertaining. So here goes:

ref in post part of recent thread
http://www.garlic.com/~lynn/2007g.html#81 IBM to the PCM market

to older post containing summary of jun90 FORRESTER report "MAINFRAME R.I.P."
http://www.garlic.com/~lynn/2001n.html#79 a.f.c. history checkup

based on survey in mid-89 and some predictions out thru 99

doesn't sound like a lot has changed except possibly use of virtualization for server consolidation.

21st Century ISA goals?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 11 Apr 2007 13:42:18 -0600
ChrisQuayle <nospam@devnul.co.uk> writes:
Digging a bit further, one would assume that the channel commands and data / pointers are written to shared locations in memory by the driver or other os layer at some stage. That is, data and control must be setup somewhere prior to kicking off the i/o op. Then, the channel device uses dma or programmed io to get the data into the controller on exec of a channel instruction ?. If so, it's not too dissimilar in functionality to dec's (~ early 1980's) mscp disk protocol, where the os builds (from memory) linked lists of command/data descriptors in memory. The controller is told where to find the list head and the whole list is transferred by the controller using dma without further cpu intervention.

slightly related previous post:
http://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals?

there were some differences ... the '60s flavor allowed that the program and/or data descriptors could be changed on the fly ... so there was a strict requirement for no prefetching (somewhat akin to very strong memory consistency in multiprocessor operation)

the 60s had a lot more I/O capacity than there was real storage ... as a result there was a trade-off: keep lots of file infrastructure on disk and use the disk-based infrastructure to control the channel program.

a simple example was ISAM (indexed sequential access method) that would locate/search for a particular record on disk (less-than, equal, greater-than, etc) ... then read the record that is to be the argument to a subsequent locate/search command in the same channel program.

another example was long running channel programs ... a particular channel command word (CCW) could have the PCI flag set (program controlled interrupt) ... which would schedule an interrupt to the processor ... even tho the channel program hadn't completed. This gave processor code a chance to change subsequent parts of the channel program "on-the-fly" (either arguments and/or instructions).

ISAM (or other implementations) channel programs could get even more complex ... not only changing arguments of subsequent commands ... but also changing the commands themselves (having read something from the device).
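the no-prefetch property described above can be sketched as a toy simulation ... this is a minimal python model, not the real S/360 CCW encoding; the "READ"/"SEARCH" ops, the index record, and the memory layout are all invented for illustration:

```python
# toy model of a channel program where a READ fills a memory area that a
# later SEARCH uses as its argument -- the reason channels couldn't
# prefetch commands or arguments. illustrative only; all names invented.

def run_channel_program(ccws, memory, disk):
    """execute CCWs strictly in order, fetching each argument from
    'memory' only when that command is reached (no prefetch)"""
    found = None
    for op, addr, length in ccws:
        if op == "READ":
            # transfer a record from the device into processor memory
            memory[addr:addr + length] = disk["index_record"][:length]
        elif op == "SEARCH":
            # the argument is fetched from memory *now* -- so it sees
            # the bytes the earlier READ in this same program stored
            key = bytes(memory[addr:addr + length])
            found = disk["records"].get(key)
    return found

memory = bytearray(8)
disk = {
    "index_record": b"KEY00042",              # index points at wanted key
    "records": {b"KEY00042": b"payload"},
}
program = [
    ("READ",   0, 8),    # read index record into memory at offset 0
    ("SEARCH", 0, 8),    # search argument = whatever is at offset 0 *now*
]
print(run_channel_program(program, memory, disk))   # b'payload'
```

a prefetching channel would have grabbed the SEARCH argument before the READ stored it ... which is exactly what the architecture forbade.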

the requirement for no prefetching ... in support of possible "on-the-fly" modifications ... placed some distance limitations on operations ... especially later when people started looking at longer distance fiber extensions. for instance, even the record locate/search argument was refetched from (processor) memory as each record (on disk) was encountered ... so there were some latency issues related to disk rotation.

there was a big deal made in 1980 when disk farm max channel cable length restrictions were doubled from 200ft to 400ft.

sometime in the 80s ... larger clusters of processors sharing football-field-sized arrangements of disk farms ... would sometimes start going to 3-d configurations ... because of channel cable distance limitations (related to end-to-end latency restrictions). Start with processors located somewhat in the center of a disk farm expanse that possibly had 100yds radius ... and then go to a 3-d multiple floor configuration, with channel cable length restrictions starting to form an operational sphere.

the storage size vis-a-vis i/o capacity trade-off changed in the 70s ... but you still had customers with configurations that had multi-cylinder file structure information (well into the 80s ... and possibly some continuing today). A full-cylinder "search" could take 1/3 sec. elapsed time ... and kept the channel resources dedicated the whole time because of the requirement to refetch the search argument on each record compare. some past posts discussing effects of this characteristic and the change in resource trade-offs w/o changing the implementation paradigm
http://www.garlic.com/~lynn/submain.html#dasd
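a quick sanity check on the 1/3 sec figure ... assuming 3330-class geometry (3600 rpm, 19 tracks per cylinder); the geometry numbers are my assumptions, not from the post:

```python
# back-of-envelope check of the full-cylinder "search" elapsed time,
# assuming 3600 rpm and 19 tracks per cylinder (3330-class geometry):
# one full revolution per track searched, channel dedicated throughout

RPM = 3600
TRACKS_PER_CYLINDER = 19

ms_per_revolution = 60_000 / RPM                       # ~16.7 ms
full_cyl_search_s = TRACKS_PER_CYLINDER * ms_per_revolution / 1000
print(round(full_cyl_search_s, 3))                     # 0.317 -- about 1/3 sec
```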

the other thing that showed up in the 70s was 1) the increasing configuration size ... so a much higher probability of loaded systems and request queuing and 2) larger processors being built with processor caches.

The asynchronous i/o interrupts could wreak havoc with cache hit ratios. The operating system resource manager that I released in the mid-70s had a hack that would dynamically track the asynchronous i/o interrupt rate ... and at some threshold switch to dispatching tasks disabled for interrupts for short periods. This would slightly delay some i/o interrupts (and the associated processing, increasing i/o processing latency) ... but tended to improve application thruput and cache hit ratio. It also had some tendency to result in interrupt batching (several i/o interrupts processed in series). This in turn tended to improve the kernel interrupt processing cache hit ratio ... and could even result in the avg. interrupt processing latency declining.
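the threshold hack above might be sketched roughly like this ... the class name, rate numbers, and the fixed batch window are all invented; the real resource manager tracked the rate dynamically:

```python
# sketch of the resource-manager hack: when the i/o interrupt rate
# crosses a threshold, tasks run disabled for interrupts so pending
# interrupts accumulate and are then processed as a batch (better
# kernel cache hit ratio). names and thresholds are illustrative.

class Dispatcher:
    THRESHOLD = 100          # interrupts/sec that triggers batching
    WINDOW = 5               # interrupts allowed to accumulate per window

    def __init__(self):
        self.rate = 0        # observed asynchronous interrupt rate
        self.pending = []    # interrupts delayed while tasks run disabled
        self.batches = []    # record of how interrupts got processed

    def interrupt(self):
        if self.rate > self.THRESHOLD:
            self.pending.append("irq")       # delayed: task runs disabled
            if len(self.pending) >= self.WINDOW:
                self.batches.append(len(self.pending))   # batch processed
                self.pending.clear()
        else:
            self.batches.append(1)           # low rate: taken immediately

d = Dispatcher()
d.rate = 50
d.interrupt()                # low rate: handled one at a time
d.rate = 500
for _ in range(10):          # high rate: handled in batches of 5
    d.interrupt()
print(d.batches)             # [1, 5, 5]
```

the batch entries are the "several i/o interrupts processed in series" effect ... kernel interrupt code stays cache-hot across a batch.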

The other issue, with high probability of queued requests, was "re-drive" latency starting to become a measurable factor (i.e. latency between the time a pending i/o interrupt was processed and the next queued request was initiated) ... especially as the favorite son operating system became more and more bloated.

To somewhat address both issues, a queued initiation/termination interface was introduced in the early 80s with 370-XA (initially on 3081). Channel programs could be scheduled for initiation w/o the resource being available (the 360 SIO i/o initiation instruction was synchronized and interrogated availability of all resources ... clear out to the device ... prior to channel program initiation and proceeding with next instruction processing). The initiation could also specify that rather than an asynchronous interrupt on completion ... just update a defined control infrastructure.

Actually, 370 (1970) first introduced an intermediate i/o initiation between the 360 SIO and the 370-XA start subchannel ... which was SIOF (sio "fast"). The SIOF instruction would hand off the channel program to the channel but w/o waiting for the interrogation delay clear out to the device (eliminating the "stall" associated with the SIO instruction).

Actual 370-XA channel program execution still prevented prefetching and could still require constant access to processor memory ... however initiation and termination of the channel program no longer required synchronized processor execution (eliminating the redrive latency; this could be leveraged to minimize the detrimental effects of asynchronous interrupts on cache hit ratios).
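the synchronous-vs-queued difference can be sketched like this ... the method names and control-block shape are invented, standing in for the real start-subchannel machinery:

```python
# minimal sketch of 360 SIO (synchronous: the instruction interrogates
# the path out to the device, and a busy path stalls/fails the request)
# vs 370-XA start-subchannel style queued initiation (always accepted;
# completion just updates a control block). names invented.

from collections import deque

class Subchannel:
    def __init__(self):
        self.queue = deque()
        self.control_block = {"done": 0}

    def sio(self, prog, device_busy):
        # 360-style: interrogate the whole path before proceeding
        if device_busy:
            return "busy"            # processor must redrive later
        self.control_block["done"] += 1
        return "started"

    def start_subchannel(self, prog):
        # 370-XA-style: request is queued, processor moves on at once
        self.queue.append(prog)
        return "queued"

    def channel_runs(self):
        # channel drains the queue asynchronously, posting completions
        while self.queue:
            self.queue.popleft()
            self.control_block["done"] += 1

sc = Subchannel()
print(sc.sio("prog1", device_busy=True))        # busy
print(sc.start_subchannel("prog1"))             # queued
print(sc.start_subchannel("prog2"))             # queued
sc.channel_runs()
print(sc.control_block["done"])                 # 2
```

with the queued form there is no busy condition to handle and no redrive gap ... the next request is already sitting at the subchannel when the previous one completes.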

current description of operation
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/15.0?SHELF=DZ9ZBK03&DT=20040504121320

and since the operating system no longer was seeing exact operation initiation/termination ... some of the information gathered for dynamic resource management, reporting, and capacity planning was compromised ... so compensating processes and information gathering had to be added to the i/o subsystem
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/17.0?SHELF=DZ9ZBK03&DT=20040504121320

discussion of some of the make-over that changed from 370:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9AR004/F.3.2?SHELF=EZ2HW125&DT=19970613131822

I had first done something along these lines in 1975 when working on a 370 5-way smp machine (that was never announced/shipped). It had extensive microcode capability ... and I defined both a queued interface for disk i/o as well as a queued interface for dispatching/scheduling. Part of this was that it minimized some of the multiprocessor complexity ... no longer had to worry about serializing which processor was doing a SIO for any specific channel at any specific time. Treat it purely as a multiprocessor multi-access shared storage control metaphor ... adding stuff and removing stuff from queues (microcode could worry about whether the target processing unit was busy ... and would later get around to checking for additional work ... or was idle and had to be signaled to indicate arrival of new work).
http://www.garlic.com/~lynn/submain.html#bounce
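the "signal only if idle" queue metaphor might look something like this single-threaded sketch ... invented names; the real thing was microcode with processor-to-processor signaling:

```python
# sketch of queued dispatch: producers just add work to a shared queue;
# a wakeup signal is sent only if the target engine is idle, otherwise
# the busy engine picks the work up when it next checks the queue.
# deterministic single-threaded illustration, all names invented.

from collections import deque

class Engine:
    def __init__(self):
        self.work = deque()
        self.idle = True
        self.signals = 0     # count of wakeup signals actually sent
        self.done = 0        # count of work items processed

    def enqueue(self, item):
        self.work.append(item)
        if self.idle:
            self.signals += 1    # wake an idle engine ...
            self.run()           # (stands in for the engine waking up)

    def run(self):
        self.idle = False
        while self.work:         # drain everything queued meanwhile
            self.work.popleft()
            self.done += 1
        self.idle = True

e = Engine()
e.enqueue("a")           # engine idle -> one signal, item processed
e.idle = False           # pretend the engine is mid-run ...
e.enqueue("b")           # ... so no signal, work just queues
e.enqueue("c")
e.run()                  # busy engine drains the queue itself
print(e.signals, e.done)         # 1 3
```

no serialization of "who issues the SIO" is needed ... only the queue itself is a shared-storage object.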

21st Century ISA goals?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Wed, 11 Apr 2007 14:10:22 -0600
nmm1@cus.cam.ac.uk (Nick Maclaren) writes:
Indeed. And it caused a fair amount of trouble when IBM introduced caches nearly a decade later. Now, exactly what has changed since then?

somewhat long-winded
http://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#3 21st Century ISA goals?

as to processor caches ... did the channel i/o processing require full ("multiprocessor") cache consistency support ... for commands, arguments, and data.

i/o had to signal processor caches on all(?) storage alterations ... and (especially store-into) caches had to possibly be interrogated for all command fetches as well as all i/o argument fetches (in some cases there was little differentiation between what might be an i/o command control argument fetch ... and an i/o command data transfer fetch).

there have sometimes been strategies when it is purely data transfer fetches ... during the i/o operation, unmap the related storage for processor execution. However, if it is really stuff that can be dynamically updated by either i/o transfer or processor during the execution of the i/o operation ... then it requires much tighter synchronization.

21st Century ISA goals?

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Wed, 11 Apr 2007 14:28:00 -0600
"MitchAlsup" <MitchAlsup@aol.com> writes:
A) Design and implement a 486-class CPU with a modern x86-64 instruction set and embed it into the SouthBridge chip.
B) Extend the domain of cache coherence all the way out to the SouthBridge.
C) Tweak the OS so that it schedules the I/O processes only on these tiny-little CPUs.


re:
http://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#3 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals?

note with the 370-xa change over ... started using (independent) 801 processor chips to handle the extended channel control functions

current i/o overview
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/13.0?SHELF=DZ9ZBK03&DT=20040504121320

i/o instructions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/14.0?SHELF=DZ9ZBK03&DT=20040504121320

basic i/o functions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/15.0?SHELF=DZ9ZBK03&DT=20040504121320

i/o interruptions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/16.0?SHELF=DZ9ZBK03&DT=20040504121320

... including all the additional statistical information ... since the actual sequence of individual events was being masked by the more sophisticated queued interface.

i/o support functions
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DZ9ZR003/17.0?SHELF=DZ9ZBK03&DT=20040504121320

21st Century ISA goals?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch,alt.folklore.computers
Date: Wed, 11 Apr 2007 15:10:55 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
The other issue with high probability of queued requests would start showing up in "re-drive" latency becoming a measurable factor (i.e. latency between the time a pending i/o interrupt was processed and the next queued request was initiated) ... especially as the favorite son operating system became more and more bloated.

part of long winded post
http://www.garlic.com/~lynn/2007h.html#3 21st Century ISA goals?

other posts in thread:
http://www.garlic.com/~lynn/2007h.html#2 21st Century ISA goals
http://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#5 21st Century ISA goals?

so a little (long-winded) folklore about redrive latency and the (new) 3880 disk controller (late 70s). as before past posts getting to play in the disk engineering and product test labs
http://www.garlic.com/~lynn/subtopic.html#disk

the labs were doing all their testing with "stand-alone" machines ... i.e. dedicated, scheduled machine time with simple engineering test monitor software. In the past, they had attempted operation in an operating system environment but had experienced 15min MTBF (with the corporation's favorite son operating system). I undertook to rewrite an i/o subsystem to make it absolutely bullet-proof ... allowing them to do on-demand multiple concurrent testing of engineering hardware.

The disk labs tended to get the newest processors as they became available (processor developers would have the first engineering machine ... and the disk labs would get the 2nd or possibly 3rd engineering machine). As a result, the disk labs had significant processing power ... but it had been devoted to stand-alone testing. When I got the i/o subsystem half-way bullet-proof ... they found themselves with an operating system environment on their machines ... that had possibly 1-2% processor utilization (even with a half-dozen engineering devices being tested concurrently). With all that extra processing power ... they initiated their own online interactive service ... scavenging some spare controllers and disk drives.

The new generation disk controller under development was the 3880 ... it would have more features and also handle the enhanced synchronization (for the 400ft double length channel cables) and the ten times faster disk transfer coming with 3380 disk drives (3mbytes/sec, compared to the prior 3330 disk drives). The 3880 control processor was a vertical microcode cpu that was much slower than the horizontal microcode processor used in the previous generation 3830 disk controller. To somewhat compensate there was special hardware for data transfer. However, the control operations and command processing were significantly slower on the 3880 (compared to the 3830).

So there was a requirement to show that the 3880 product was within five percent of the performance of the previous 3830 product. The command processing overhead was making the overall operation take a much longer time (measured from what the processor saw). So to compensate ... they started doing some hacks ... counting on the redrive latency, they took to signaling the end-of-operation interrupt to the processor ... before the disk controller had actually finished doing all of the processing. At some point, somebody, somewhere ... ran a standard operating system product test suite against a 3880 controller and found test suite thruput to be within five percent of the 3830 controller.

Looks good?

So one Monday morning about 10am, i get an upset call from the product test lab asking what I had done to their system over the weekend ... because their interactive service response had gone all to <somewhere> that morning (and, of course, they hadn't done anything over the weekend).

so after some amount of investigation, i find that they had replaced a 3830 controller on a string of 3330s with a brand-new 3880 controller over the weekend. Turns out that my super enhanced i/o subsystem also had an extremely short i/o redrive pathlength ... and I was getting around to I/O redrive (after i/o interrupt processing) before the 3880 controller had actually finished completely processing the previous operation. As a result, my I/O redrive was hitting the controller while it was still busy ... which then reflected a busy condition back to the processor. Now the processor had to go into a whole lot of extra processing and requeue the operation until the controller signaled that it was finally not busy. The controller, having been hit with an additional operation while it was still busy, experienced a lot of extra processing ... including having to signal a new interrupt when it finally was really "free" (having been forced to signal that it was really busy ... even tho it had previously signaled that it "completed" the previous operation, it then had to signal when it really was free).

All this was fairly traumatic, effectively cutting disk i/o operations/sec thruput by at least half under moderate load. So now both the controller people and I have to see about work-arounds for the 3880 i/o redrive latency "problem" (they have to significantly cut their actual busy time that continues on after signaling finished with previous operation ... and/or i have to significantly delay how fast i get around to redriving operations after previous operation had signaled complete).
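a toy timing model of the problem ... all the millisecond numbers below are invented, chosen only so that a redrive landing inside the controller's hidden cleanup time costs about half the thruput (the rough magnitude described above):

```python
# toy model of the 3880 early-completion problem: the controller signals
# end-of-operation before it is really free, and a host with a short
# redrive path hits it while still busy (busy status, requeue, extra
# "now really free" interrupt). all numbers invented for illustration.

def ops_per_sec(redrive_ms, residual_ms, op_ms=10.0, requeue_ms=18.0):
    """per-op thruput; a redrive inside the controller's hidden
    cleanup window triggers the busy/requeue/extra-interrupt path"""
    if redrive_ms >= residual_ms:
        cycle = op_ms + redrive_ms               # clean redrive
    else:
        cycle = op_ms + residual_ms + requeue_ms # busy hit penalty
    return 1000.0 / cycle

fast_host = ops_per_sec(redrive_ms=1.0, residual_ms=8.0)  # hits busy
slow_host = ops_per_sec(redrive_ms=8.0, residual_ms=8.0)  # just misses it
print(round(slow_host / fast_host, 2))   # 2.0 -- fast redrive halves thruput
```

the perverse part is visible in the model: the *better* (shorter) the host's redrive pathlength, the *worse* the delivered thruput against this controller.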

The Mainframe in 10 Years

Refed: **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Mainframe in 10 Years...
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Wed, 11 Apr 2007 17:34:09 -0600
bdissen@ibm-main.lst (Binyamin Dissen) writes:
ISAM did not "die". It changed into KSDS.

Indexed-sequential access is used on almost every platform, some even now still called "ISAM".


recent x-over from comp.arch mentioning ISAM:
http://www.garlic.com/~lynn/2007h.html#3 21st Century ISA goals?

other possible posts of interest in the thread:
http://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#5 21st Century ISA goals?

some other recent posts mentioning ISAM and "self-modifying" channel programs ... and one of my first assignments after graduation was spending a week at customer site getting ISAM running in virtual machine under cp67 (and trying to get dynamic modifications reflected in the shadow channel program)
http://www.garlic.com/~lynn/2007e.html#14 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#19 Cycles per ASM instruction
http://www.garlic.com/~lynn/2007e.html#27 IBM S/360 series operating systems history
http://www.garlic.com/~lynn/2007f.html#4 ISAM and/or self-modifying channel programs
http://www.garlic.com/~lynn/2007f.html#34 Historical curiosity question

previous post in this thread:
http://www.garlic.com/~lynn/2007h.html#2 The Mainframe in 10 Years

whiny question: Why won't z/OS support the HMC 3270 emulator

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: whiny question: Why won't z/OS support the HMC 3270 emulator
Newsgroups: bit.listserv.ibm-main
Date: Fri, 13 Apr 2007 05:12:24 -0600
alan_altmark@ibm-main.lst (Alan Altmark) writes:
I tell you 3 times: Yes. And XEDIT is way better than ISPF, too. And VM had TCP/IP first. And Rexx. Nyah. }:-)

some old mainframe tcp/ip folklore ... the original implementation was done in vs/pascal ... but had some issues with its interface to external boxes. somewhat as a result, it would get about 40kbyte/sec aggregate thruput and could use a whole 3090 processor doing it. i did the rfc 1044 implementation and in some testing at cray research between a 4341 clone and a cray machine ... was getting 1mbyte/sec thruput using only a modest amount of the 4341 processor (i.e. about 25 times the aggregate thruput for about 1/20 the pathlength ... about 400-500 times difference in bytes/transferred per instruction executed). misc. past posts mentioning 1044 support
http://www.garlic.com/~lynn/subnetwork.html#1044
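the arithmetic in the comparison above multiplies out as quoted:

```python
# checking the rfc 1044 comparison arithmetic: ~25x the aggregate
# thruput at ~1/20th the pathlength works out to ~500x the bytes
# transferred per instruction executed (the quoted 400-500x range)

base_thruput = 40e3            # bytes/sec, original vs/pascal stack
rfc1044_thruput = 1e6          # bytes/sec, rfc 1044 path

thruput_ratio = rfc1044_thruput / base_thruput     # 25.0
pathlength_ratio = 20                              # ~1/20 the instructions
bytes_per_instr_ratio = thruput_ratio * pathlength_ratio
print(thruput_ratio, bytes_per_instr_ratio)        # 25.0 500.0
```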

this was then "ported" to MVS and made available as a product by doing a vm kernel "diagnose" emulation for MVS (i.e. diagnose instruction use in vm is somewhat analogous to svc instruction use in mvs).

some really old folklore ... later there was an outside subcontract to implement tcp/ip support in vtam. the initial implementation came back with tcp support significantly faster than lu6.2 support. they were told that everybody knows that lu6.2 is much more efficient than tcp ... and therefore the only way that the tcp implementation could be significantly faster than lu6.2 was if it was implemented incorrectly ... and the contract wouldn't be fulfilled unless there was a "correct" tcp implementation.

past post reference
http://www.garlic.com/~lynn/2003c.html#77 COMTEN- IBM networking boxes
http://www.garlic.com/~lynn/2006f.html#13 Barbaras (mini-)rant
http://www.garlic.com/~lynn/2006l.html#53 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)

21st Century ISA goals?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: 21st Century ISA goals?
Newsgroups: comp.arch
Date: Fri, 13 Apr 2007 06:18:58 -0600
"robertwessel2@yahoo.com" <robertwessel2@yahoo.com> writes:
On S/360 this evolved from parallel channels, (fiber) ESCON, and now FICON (basically Fibre Channel with a different top protocol layer), which mostly impacted only the physical connections. There was one major change in the I/O instructions themselves (which decoupled channels from device addressing - so instead of saying start I/O on channel 6, device 13, it's now start I/O on device #12345, and the system finds a channel with a path to that device that's not currently busy), but the channel programs have largely stayed unchanged (so a channel program for a 1964 2314 disk drive would work on a modern 3390 so long as you adapted it for the different track sizes, head counts, and cylinder counts - although there are certainly enhancements you could take advantage of on the newer drive which *would* require new channel programs). This parallels SCSI, which has been implemented on numerous physical interfaces (most obviously several versions of old parallel SCSI, FC and SAS, but also over things like USB and IDE connections for non-harddisk devices), but has retained the same command set within the packets sent over those interfaces.

some real dynamic pathing topic drift.

In the late 70s I had started commenting that the relative system thruput of disks had been declining significantly (i.e. processors and memory were getting bigger/faster at a greater rate than disks were getting faster). By the early 80s, I claimed that over a period of approx. 15 yrs, relative system disk thruput had declined by a factor of ten times.

This upset some in the disk division, and the organization's performance and modeling group was assigned to refute the claims. After a couple of weeks they came back and said that I had actually understated the situation (it was actually somewhat worse).

So part of the issue was that the whole channel/controller/disk infrastructure required a dedicated "connection" during most of channel program execution. A channel could theoretically execute multiple channel programs at a time ... but only if there was a "solid" channel connection. In 360, provisions were made for stand-alone "seeks" (i.e. disk arm movement) to disconnect from the channel as soon as the cylinder address had been transferred. This allowed multiple disks to be connected to the same channel and have concurrent arm motion going on.

There was still the issue of disk rotation where no data was actually being transferred ... but the channel/controller were reserved/dedicated. For 370 (3830 controllers and 3330 disk drives), "rotational position sensing" (RPS) was introduced along with the "set sector" channel command. This allowed a disk channel program to disconnect from the channel while the disk was rotating to the correct position for reading/writing a desired record (allowing other devices to utilize the channel). The problem was that when the rotation got into position, the disk had to "reconnect" ... if the channel was busy, the disk would rotate past the start of the record and would have to have a full, complete rotation to try again. This was called "RPS-miss". My 15 yr period included the transition from 360 to 370 and the introduction of "RPS" and "RPS-miss".

A configuration rule-of-thumb grew up that channel loading had to be kept to 30 percent or less ... in order to minimize RPS-miss (i.e. rotating disks trying to dynamically reconnect to the channel).
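
the 30 percent rule can be motivated with a toy probability model: if, at the instant the disk rotates into position, the channel is busy with probability u, and reconnect attempts are treated as independent, then the number of full revolutions lost to RPS-miss is geometric. a minimal sketch (the independence assumption is mine, for illustration; real channel busy periods are correlated, which makes things worse):

```python
# toy RPS-miss model: each reconnect attempt independently finds the
# channel busy with probability u (an assumption for illustration only)
def expected_extra_revs(u):
    # mean of a geometric distribution: expected misses before reconnect
    return u / (1.0 - u)

REV_MS = 16.7   # one revolution of a 3600 rpm drive (3330 class)

for u in (0.3, 0.5, 0.7):
    delay = expected_extra_revs(u) * REV_MS
    print(f"channel util {u:.0%}: ~{delay:.1f} ms avg extra rotational delay")
```

at 30 percent channel busy the average penalty is under half a revolution per I/O; by 70 percent it is over two revolutions, which is roughly the shape behind the rule-of-thumb.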

So we roll forward to 3880. Not only did I run into a problem with device "redrive" hitting the controller while it was busy
http://www.garlic.com/~lynn/2007h.html#6 21st Century ISA goals?

but I had also done a superfast "dynamic pathing" algorithm purely in software.

Disk controllers supported multiple channel connections ... which could be used to connect to multiple different processor complexes ("CEC") for loosely-coupled (cluster) operation ... and/or connect to multiple different channels for the same CEC (for availability/thruput).

So standard multiple path support (processor complex with multiple different channels to the same disk controller) tended to be implemented as a primary with one or more alternates. When I was doing I/O supervisor rewrite for the engineering and product test labs ... lots of past posts
http://www.garlic.com/~lynn/subtopic.html#disk

I also did a highly optimized implementation of dynamic pathing with load balancing (as opposed to primary/alternate). However, in the transition from 3830 controller to 3880 controller this ran into another kind of "busy" problem.

Turns out one of the other optimizations done in the 3880 controller microcode (to compensate for the slowness of the processor) was that a lot of status was cached regarding the channel interface in use. The 3880 thruput and busy was significantly better if operations came in thru a single (channel) interface. Starting to hit the 3880 randomly from lots of different channel interfaces ... blew its "caching" and significantly drove up the controller busy every time it had to switch from one interface to another. This additional overhead was so significant ... that the primary/alternate strategy had significantly better thruput than dynamic load-balancing across all (available) interfaces. misc. past posts mentioning experience redoing the multi-path support (in software)
http://www.garlic.com/~lynn/2004d.html#65 System/360 40 years old today
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2006v.html#16 Ranking of non-IBM mainframe builders?
http://www.garlic.com/~lynn/2007.html#44 vm/sp1
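
the interface-caching effect can be sketched with a toy model: charge each operation a fixed controller-busy time, plus a penalty whenever it arrives on a different channel interface than the previous operation. the service and penalty numbers below are invented for illustration, not 3880 measurements:

```python
import random

SERVICE_MS = 3.0   # assumed controller busy per operation, same interface
SWITCH_MS = 6.0    # assumed extra busy when the channel interface changes

def total_busy_ms(path_choices):
    # accumulate controller busy time over a sequence of per-op path choices
    busy, last = 0.0, None
    for path in path_choices:
        busy += SERVICE_MS + (SWITCH_MS if path != last else 0.0)
        last = path
    return busy

random.seed(1)
n = 10_000
primary_alternate = [0] * n                               # stick to one path
load_balanced = [random.randrange(4) for _ in range(n)]   # spread over 4 paths

print("primary/alternate:", total_busy_ms(primary_alternate), "ms")
print("load-balanced:   ", total_busy_ms(load_balanced), "ms")
```

with any significant per-switch penalty, random balancing across four interfaces switches on roughly three out of four operations, so primary/alternate wins ... matching the experience described above.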

So one of the other things that 370-xa i/o interface did was move the "dynamic pathing" under the covers ... into what was sometimes called "bump" processing/storage (i.e. new "hardware" function that sat between the kernel drivers and the previous 360/370 channel interface).

Separate from that is the whole continuing saga of the excessive 3880 controller busy overhead ... which spilled over into increased channel busy (since a lot of the increased 3880 processing occurred during dedicated controller/channel handshaking).

The 3090 was built using a small number of TCMs ... each TCM represented a significant part of the 3090 manufacturing cost. There was a lot of work on balanced configuration to maximize 3090 thruput ... this included having a sufficient number of disks and channels (at avg. of 30 percent busy or less ... harking back to the whole RPS-miss description). The early 3090 configuration specification was done effectively using 3830 disk controller characteristics. It eventually dawned that with the significant increase in channel busy when talking to 3880 (rather than 3830) ... the 3090 would require a lot more channels (in order to try and meet the 30 percent avg busy threshold requirement and minimize contention and problems like RPS-miss). It turns out that in order to add the additional channels, an additional TCM would have to be used in every 3090. There were some snide remarks that the "manufacturing cost" of an additional TCM in every 3090 should be billed against the 3880 disk controller organization and not the 3090 processor organization. misc. past posts mentioning 3880 busy resulting in having to increase the number of 3090 TCMs
http://www.garlic.com/~lynn/2000b.html#38 How to learn assembler language for OS/390 ?
http://www.garlic.com/~lynn/2002b.html#3 Microcode? (& index searching)
http://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
http://www.garlic.com/~lynn/2004n.html#22 Shipwrecks
http://www.garlic.com/~lynn/2006r.html#36 REAL memory column in SDSF

Now processor-side "dynamic pathing" only addressed half of the problem (at least for disks) ... which was finding any available channel path to the controller to initiate the operation (although with the 3880 disk controller, if the dynamic pathing got too fancy, as i found out with software, it could actually degrade thruput compared to a simpler primary/alternate strategy). However, once started, channel programs were bound to the initiating channel interface.

There was still the possibility of doing dynamic pathing (in the reverse direction) to try and help address the "RPS-miss" situation ... i.e. dynamic path from the controller to channel on "reconnect" when disk had rotated into position ... which would require a lot more processing smarts in the disk controller ... and also a way of indicating to the disk controller ... which of the channel paths were grouped to the same CEC. This was something for later, more efficient disk controller implementations (and more smarts on the processor side to realize that a channel program was reconnecting on a different channel). The channel program and channel commands can stay the same ... the "definition" of channel reconnect (for the controller) changes.

other posts in this thread:
http://www.garlic.com/~lynn/2007h.html#1 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#3 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#4 21st Century ISA goals?
http://www.garlic.com/~lynn/2007h.html#5 21st Century ISA goals?

misc. past posts mentioning RPS-miss
http://www.garlic.com/~lynn/96.html#5 360 "channels" and "multiplexers"?
http://www.garlic.com/~lynn/2001c.html#17 database (or b-tree) page sizes
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#46 MVS History (all parts)
http://www.garlic.com/~lynn/2002b.html#1 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002i.html#18 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2002o.html#46 Question about hard disk scheduling algorithms
http://www.garlic.com/~lynn/2004e.html#16 Paging query - progress
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s

old posts mentioning making claims about relative system disk thruput drastically declining over the years
http://www.garlic.com/~lynn/93.html#31 Big I/O or Kicking the Mainframe out the Door
http://www.garlic.com/~lynn/94.html#43 Bloat, elegance, simplicity and other irrelevant concepts
http://www.garlic.com/~lynn/94.html#55 How Do the Old Mainframes Compare to Today's Micros?
http://www.garlic.com/~lynn/95.html#10 Virtual Memory (A return to the past?)
http://www.garlic.com/~lynn/98.html#46 The god old days(???)
http://www.garlic.com/~lynn/99.html#4 IBM S/360
http://www.garlic.com/~lynn/2001b.html#38 Why SMP at all anymore?
http://www.garlic.com/~lynn/2001d.html#66 Pentium 4 Prefetch engine?
http://www.garlic.com/~lynn/2001f.html#62 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001f.html#68 Q: Merced a flop or not?
http://www.garlic.com/~lynn/2001l.html#40 MVS History (all parts)
http://www.garlic.com/~lynn/2001l.html#61 MVS History (all parts)
http://www.garlic.com/~lynn/2001m.html#23 Smallest Storage Capacity Hard Disk?
http://www.garlic.com/~lynn/2002.html#5 index searching
http://www.garlic.com/~lynn/2002b.html#11 Microcode? (& index searching)
http://www.garlic.com/~lynn/2002b.html#20 index searching
http://www.garlic.com/~lynn/2002e.html#8 What are some impressive page rates?
http://www.garlic.com/~lynn/2002e.html#9 What are some impressive page rates?
http://www.garlic.com/~lynn/2002i.html#16 AS/400 and MVS - clarification please
http://www.garlic.com/~lynn/2004n.html#15 360 longevity, was RISCs too close to hardware?
http://www.garlic.com/~lynn/2004p.html#39 100% CPU is not always bad
http://www.garlic.com/~lynn/2005f.html#55 What is the "name" of a system?
http://www.garlic.com/~lynn/2005h.html#13 Today's mainframe--anything to new?
http://www.garlic.com/~lynn/2005k.html#53 Performance and Capacity Planning
http://www.garlic.com/~lynn/2006m.html#32 Old Hashing Routine
http://www.garlic.com/~lynn/2006o.html#27 oops
http://www.garlic.com/~lynn/2006x.html#13 The Future of CPUs: What's After Multi-Core?

The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Fri, 13 Apr 2007 07:08:42 -0600
jmfbahciv writes:
A little history...size was The Issue back then. Everybody's fields had been defined to be too small. <snip>

I think the comment is: don't feed the troll.

I had included a simple statement that the internal network was larger than the arpanet/internet from just about the beginning until possibly mid-85.
http://www.garlic.com/~lynn/2007g.html#84 The Perfect Computer - 36 bits?

there was a post that seemed to imply that there was some question regarding the assertion about the relative sizes of the internal network and the arpanet ... "from just about the beginning" ... and there were some specific names that might possibly contest the assertion.

i then posted a reply with some references that gave an indication of the size of the early arpanet (as well as some of the dynamics driving the internal network implementation) ... of course, let's not reference RFCs with real specifics
http://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits

and then there is a post seeming to imply that there was a discussion about something other than size.

similar threads have happened before ... from the references to older posts related to the internal network subject (included in previous post) ... things like a reference to some comment about a specific period in time ... and then a response comes back that the discussion was really about some totally different subject, date, and/or place.

for other drift ... some of the ipv6 discussion was about the size increase in the address field. also a lot of y2k was about legacy applications and systems that were still around ... and had saved a few bits in the date by only using a two-digit year field. some old email about date processing
http://www.garlic.com/~lynn/99.html#233 Computer of the century
http://www.garlic.com/~lynn/99.html#24 BA Solves Y2K (Was: Re: Chinese Solve Y2K)
http://www.garlic.com/~lynn/2000.html#0 2000 = millennium?
http://www.garlic.com/~lynn/2000.html#94 Those who do not learn from history...
http://www.garlic.com/~lynn/2006r.html#16 Was FORTRAN buggy?

In the reference to some of the HASP networking implementation (i believe originating at TUCC) ... they had leveraged a one-byte field that had been used to define "pseudo" unit record devices. This was also used to support a lot of telecommunication unit record devices (card readers, printers, punches at remote locations). This telecommunication support was deployed and used in a large number of customer shops (i.e. lots of customer sites with a single processor supporting one or more remote sites over telecommunication lines).

The incremental enhancement was then to take that support and extend it to talk to other HASP systems. As a result, the single one-byte field was then being used to "address" pseudo-devices, remote telecommunication devices, as well as other hosts in the same network. This wasn't a particular problem for most customers, since they tended to only have a limited number of hosts ... and the issues of cross-domain (cross corporate) interconnect were still quite significant.

However, there was (at least) one company with hundreds and then thousands of mainframes installed for internal use ... where inter-corporate jurisdictional issues wouldn't inhibit interconnecting processors.

As previously mentioned, the HASP implementation had shortcomings where different versions of HASP (& then JES) couldn't interoperate ... and could require a VNET intermediate node to do format conversion (as a countermeasure to prevent format incompatibilities resulting in whole system crashes ... shudder to think what a hostile operational environment could do with things like denial of service attacks).

However, the other HASP implementation issue was that since all definitions had to be identified by that single byte ... 255 max possible, which included all pseudo devices ... a large HASP configuration could have 60-80 definitions and possibly several remote telecommunication devices ... the number of network node definitions might be as few as 150. This wasn't a problem for most closed corporate environments of the period ... but there was at least one where it was a significant problem. Also, it wasn't unusual for a corporation to keep all its HASP systems at the same version ... doing synchronized upgrades across a limited number of machines. However, synchronized upgrades don't scale well as the number of nodes increases significantly.
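
the one-byte arithmetic is easy to check (the 60-80 pseudo-device figure is from the post; the remote-device count is an assumed "several"):

```python
ADDRESS_SPACE = 255      # definitions addressable in a single byte
pseudo_devices = 80      # large HASP configuration (upper figure from the post)
remote_devices = 25      # assumed count of remote telecommunication devices

network_nodes = ADDRESS_SPACE - pseudo_devices - remote_devices
print(network_nodes)     # roughly the "as few as 150" node limit
```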

The saving grace was the implementation that originated at the science center for cp67.
http://www.garlic.com/~lynn/subtopic.html#545tech

Not only could it be used to handle the HASP/JES version interoperability problem ... but it didn't have the addressing limitation and could address the complete network. HASP/JES nodes then tended to migrate to boundaries ... with configuration definition that could only address some specific 100-200 node subset of the complete network.
http://www.garlic.com/~lynn/subnetwork.html#internalnet

The Perfect Computer - 36 bits?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Fri, 13 Apr 2007 08:07:00 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
However, the other HASP implementation issue was that since all definitions had to be identified by that single byte ... 255 max possible, which included all pseudo devices ... a large HASP configuration could have 60-80 definitions and possibly several remote telecommunication devices ... the number of network node definitions might be as few as 150. This wasn't a problem for most closed corporate environments of the period ... but there was at least one where it was a significant problem. Also, it wasn't unusual for a corporation to keep all its HASP systems at the same version ... doing synchronized upgrades across a limited number of machines. However, synchronized upgrades don't scale well as the number of nodes increases significantly.

re:
http://www.garlic.com/~lynn/2007g.html#84 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#10 The Perfect Computer - 36 bits?

this also appeared to be a similar issue with the arpanet BBN box implementation ... in some past posts I've referenced RFCs with box downtime schedules across the whole arpanet, where maintenance and software upgrades needed to be done; some of the software changes appeared to require coordinated maintenance across the whole infrastructure.

This is not only a network interoperability issue between different boxes ... for the arpanet, there were none of these kinds of interoperability issues since there was only one kind of box ... but also the interoperability issue of different software versions. If you keep all the networking software the same (both the boxes and the versions) ... then interoperability (homogeneous/heterogeneous) issues can be eliminated ... although you still can have significant scaling issues if you have to keep the software version of all the boxes coordinated.

Supporting interoperability and eliminating the coordinated, homogeneous infrastructure operations ... helps with scaling ... since you no longer have to worry about keeping all boxes in coordinated sync at all times.

From an operational standpoint ... different implementations from different organizations ... all being able to interoperate was something of a mid-80s happening for the arpanet/internet. The internal network faced it very early since

1) the cp67 and hasp implementations were totally different and came from totally distinct backgrounds and organizations (in fact, a lot of the early hasp networking base implementation even originated outside the company).

2) both cp67 and hasp implementations were part of the mainframe software (not a separate box). the individual datacenters around the world controlled the maintenance, support, and release/version transition schedule of the mainframe software in their datacenters ... and might have very little coordination with the rest of the world. As a result there was a wide variation in the release/version of the different software being run around the world (there wasn't the luxury of a separate box that could have centralized, coordinated support). the base interoperable orientation of the networking software that started at the science center eliminated needing to have coordinated, centralized support for world-wide operation (not only for the cp67/vm370 systems, but also for the hasp/jes systems). This was critical for the network size scaling.

re:
http://www.garlic.com/~lynn/subnetwork.html#internal

The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Fri, 13 Apr 2007 12:46:01 -0600
jmfbahciv writes:
In 1980, X25 was a moving target. Nobody could implement anything and ship it (our development cycles were about two years/major software project).

Perhaps you should get all the revisions of the specs that happened during those years.

The guy who was trying to implement X25 was ready to shoot France.


later in the '80s ... sort of between the time that we had been prevented from bidding on the nsfnet backbone (even tho we had backing/support from NSF)
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

and the time we were out pitching 3-tier architecture and middleware/middle layer
http://www.garlic.com/~lynn/subnetwork.html#3tier

my wife did a short stint as chief architect for Amadeus. she was backing x.25 for their world-wide operation ... but the SNA forces were instrumental in getting her replaced. it didn't do them much good, Amadeus went with x.25 anyway.

misc. past posts mentioning Amadeus
http://www.garlic.com/~lynn/2001g.html#49 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#50 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001h.html#76 Other oddball IBM System 360's ?
http://www.garlic.com/~lynn/2003d.html#67 unix
http://www.garlic.com/~lynn/2003n.html#47 What makes a mainframe a mainframe?
http://www.garlic.com/~lynn/2004b.html#6 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004b.html#7 Mainframe not a good architecture for interactive workloads
http://www.garlic.com/~lynn/2004m.html#27 Shipwrecks
http://www.garlic.com/~lynn/2004o.html#23 Demo: Things in Hierarchies (w/o RM/SQL)
http://www.garlic.com/~lynn/2004o.html#29 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005f.html#22 System/360; Hardwired vs. Microcoded
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2006o.html#4 How Many 360/195s and 370/195s were shipped?
http://www.garlic.com/~lynn/2006r.html#9 Was FORTRAN buggy?
http://www.garlic.com/~lynn/2006y.html#14 Why so little parallelism?
http://www.garlic.com/~lynn/2007d.html#19 Pennsylvania Railroad ticket fax service
http://www.garlic.com/~lynn/2007e.html#52 US Air computers delay psgrs

Question on DASD Hardware

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Question on DASD Hardware
Newsgroups: bit.listserv.ibm-main
Date: Fri, 13 Apr 2007 13:34:06 -0600
starsoul writes:
I have a general question.

Does anyone know where I can find information about how often a disk needs to be replaced, or how dynamic sparing works inside these newfangled DASD boxes from IBM, EMC, or Hitachi?

I am not talking about the mainframe dasd itself. But rather the disk in the box that the mainframe dasd is mapped (?) to?

I have heard that a CE has to replace at least 1 disk per month on some of these boxes. (DMX3000 - EMC, DS8000 - IBM, etc....)

So I guess I am looking at MTTF for the disk.


here are some MTBF numbers for some disks
http://www.digit-life.com/articles2/storage/maxtor15k2.html

in the million-plus hrs.

however, there has been some recent articles on how accurate published numbers might really be (and/or what the distribution actually works out to be)

Hard disk test 'surprises' Google
http://news.bbc.co.uk/2/hi/technology/6376021.stm
Google Releases Paper on Disk Reliability
http://hardware.slashdot.org/hardware/07/02/18/0420247.shtml
Failure Trends in a Large Disk Drive Population
http://labs.google.com/papers/disk_failures.pdf

there have been articles in the past about how disk MTBF can be highly skewed (some failures coming very early ... and then very late ... as opposed to any sort of even or random distribution).
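
for a rough sense of scale, a quoted MTBF converts to a naive annualized failure rate by dividing hours per year by MTBF; the Google paper linked above found field rates several times the naive figure, and strongly age-dependent rather than uniform:

```python
HOURS_PER_YEAR = 8760

def naive_afr(mtbf_hours):
    # annualized failure rate implied by a (uniform-failure) MTBF claim
    return HOURS_PER_YEAR / mtbf_hours

# a "million-plus hour" MTBF claim (1.4M hrs assumed here for illustration)
print(f"{naive_afr(1_400_000):.2%} per year")
```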

conformance

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: conformance
Newsgroups: alt.religion.kibology,alt.folklore.computers
Date: Fri, 13 Apr 2007 14:41:09 -0600
Glenn Knickerbocker <NotR@bestweb.net> writes:
Exactly. And the former can't exist in a CMS file.

cms implements (in os semantics) a "FIXED" record format file ... i.e. the file descriptor says that the file is recfm=F and gives the fixed record length of each record (LRECL=80 ... effectively card format) ... and a "VARIABLE" record format file ... i.e. the file descriptor says the file is recfm=V; each record is preceded by a half-byte, file infrastructure "metadata" giving the length of the record following (not seen by the application).

this has somewhat been discussed in some of the buffer overflow threads ... about using "in-band" NULL characters to indicate end-of-line (and therefore implicitly indicate line length) ... as opposed to recfm=V using an explicit (out-of-band infrastructure metadata) field for line length.
http://www.garlic.com/~lynn/subintegrity.html#overflow

the various terminal/wire characters CR/LF (carriage-return and/or line-feed) are terminal "control" constructs.

CMS deals with a "virtual" 1052-7 (old style 360 machine console) for line-mode terminals (with some special stuff for "full-screen" 3270). CR/LF characters then get mapped into 1052-7 equivalents ... and typically CMS would parse the "incoming" emulated terminal line/wire and strip out terminal control characters.

The recfm=F, lrecl=80 file format is obviously an inheritance from the physical card format. terminal/wire lines would typically/frequently get mapped into recfm=V file format.
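
the difference between out-of-band length metadata (recfm=V style, sketched here with a halfword/two-byte length prefix) and in-band delimiters can be shown in a few lines (toy code, not the actual CMS on-disk format):

```python
import struct

def pack_v(records):
    # each record preceded by a 2-byte length field (out-of-band metadata)
    return b"".join(struct.pack(">H", len(r)) + r for r in records)

def unpack_v(data):
    # walk the buffer using the length prefixes to recover record boundaries
    records, i = [], 0
    while i < len(data):
        (n,) = struct.unpack_from(">H", data, i)
        records.append(data[i + 2 : i + 2 + n])
        i += 2 + n
    return records

recs = [b"hello", b"", b"record with \x00 inside"]
assert unpack_v(pack_v(recs)) == recs   # round-trips; NULs and empty lines ok

# in-band delimiting cannot represent the delimiter inside a record
assert b"\x00".join(recs).split(b"\x00") != recs
```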

discussion of implementation of a POSIX compliant file system "known" as the BYTE FILE SYSTEM (BFS) for CMS
http://www.redbooks.ibm.com/abstracts/SG244747.html?Open

reference to CMS now having support for (traditional mainframe) RECFM=F, RECFM=V, RECFM=U, and also RECFM=D ("ascii variable length records")
http://www.vm.ibm.com/pubs/cms440/TVISECT.HTML

asymmetric cryptography + digital signature

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: asymmetric cryptography + digital signature
Newsgroups: sci.crypt
Date: Fri, 13 Apr 2007 15:12:54 -0600
"Giorgio" <nacci.giorgio@gmail.com> writes:
i need to send encrypted signed data and i'm having some architectural doubts.
The question is: does operation order affect security?

I saw a lot of schemas in which the process is described in the order:
1) Message -> signature (encrypted with private key) -> append signature to message -> encrypt with public key -> send.

Instead doing operations in this order:
2) Encrypt message with public key -> signature of encrypted message (encrypted with private key) -> send encrypted message and signature separately.

Mode 2) seems more efficient because the receiver doesn't need to decrypt the message if the signature isn't verified, but i don't know if there are security issues in doing so.


in general encryption is used to hide the information and digital signature is used for both 1) integrity of the message (i.e. it hasn't been modified) and 2) authentication/origin

in some cases, the cleartext is digitally signed (first) ... in an attempt to imply that the digital signature is also associated with the meaning of the cleartext (as opposed to simply providing integrity and authentication) ... and/or the cleartext already carries a digital signature as a means of integrity/authentication ... independent of whether it is going to be transmitted.

and in reality ... (for efficiency) many infrastructures actually generate a random symmetric key ... encrypt the message with the symmetric key and then encrypt the symmetric key with the recipient's public key. then, in the case of email to multiple recipients ... all you have to do is encrypt the symmetric key with the public key of each of the recipients (as opposed to having a separately encrypted copy of the full message for each recipient).
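
the hybrid pattern can be sketched with stand-in primitives (the xor "cipher" below is a toy hash-derived keystream, NOT real public-key crypto; the names and key values are invented for illustration):

```python
import os, hashlib

def toy_cipher(data, key):
    # xor with a hash-derived keystream; applying it twice decrypts (toy only)
    ks = (hashlib.sha256(key).digest() * (len(data) // 32 + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

message = b"quarterly results attached"
session_key = os.urandom(32)                    # fresh random symmetric key
ciphertext = toy_cipher(message, session_key)   # bulk message encrypted once

# one small "wrap" of the session key per recipient, instead of a full
# encrypted copy of the whole message per recipient
recipient_keys = [b"alice-key", b"bob-key", b"carol-key"]   # stand-ins
wrapped = {k: toy_cipher(session_key, k) for k in recipient_keys}

# each recipient unwraps the session key, then decrypts the shared ciphertext
for k in recipient_keys:
    sk = toy_cipher(wrapped[k], k)
    assert toy_cipher(ciphertext, sk) == message
```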

on the recipient's side ... if the digital signature is on the cleartext ... then it is possible for the recipient to keep the unencrypted/cleartext message along with the digital signature for longterm integrity/authentication.

If the digital signature is on the encrypted message ... and future/longterm authentication/integrity (of the content) is needed ... then the full encrypted message also has to be retained. Then, to have ongoing high assurance as to authentication/integrity, the digital signature (of the encrypted message) could need to be verified on each use ... followed by message decryption (compared to just having to reverify the digital signature of the cleartext message).
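
the retention tradeoff between the two orderings can be sketched with stand-ins (hmac as the "signature", a toy xor keystream as the "encryption"; NOT real asymmetric operations, and the key names are invented):

```python
import hashlib, hmac

SIGN_KEY = b"sender-signing-key"   # stands in for the sender's private key
ENC_KEY = b"recipient-key"         # stands in for the recipient's key

def sign(data):
    return hmac.new(SIGN_KEY, data, hashlib.sha256).digest()

def crypt(data):
    # toy xor-keystream "cipher"; applying it twice decrypts (toy only)
    ks = (hashlib.sha256(ENC_KEY).digest() * (len(data) // 32 + 1))[: len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"wire transfer: 100"

# order 1: sign the cleartext, then encrypt message+signature together;
# after decryption, the cleartext and its signature can be archived and
# re-verified later without keeping the ciphertext around
blob = crypt(msg + sign(msg))
plain, sig = crypt(blob)[:-32], crypt(blob)[-32:]
assert plain == msg and hmac.compare_digest(sig, sign(msg))

# order 2: encrypt first, then sign the ciphertext; the signature can be
# checked before decrypting, but it only vouches for the ciphertext, so the
# ciphertext itself must be retained for any later re-verification
ct = crypt(msg)
sig2 = sign(ct)
assert hmac.compare_digest(sig2, sign(ct))
```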

conformance

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: conformance
Newsgroups: alt.religion.kibology,alt.folklore.computers
Date: Fri, 13 Apr 2007 17:03:25 -0600
ArarghMail704NOSPAM writes:
half-byte? Makes for a rather short line. :-)

Probably meant half-word (16 bits); IIRC that's what recfm=V uses.


re:
http://www.garlic.com/~lynn/2007h.html#14 conformance

yep, oh well, brain check ... even when i had done the tty/ascii terminal support for cp67 in the 60s when i was an undergraduate ... the subsequent problem mentioned in these posts involved "one byte" length arithmetic.
http://www.garlic.com/~lynn/2007g.html#37 The Perfect Computer - 36 bits?
also mentioned in the stories here
http://www.multicians.org/thvv/360-67.html

MIPS and RISC

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MIPS and RISC
Newsgroups: comp.arch
Date: Fri, 13 Apr 2007 21:03:24 -0600
"MitchAlsup" <MitchAlsup@aol.com> writes:
RISC is the name given to the whole genre of this style of computers.

MIPS is an architecture from this genre that began as research at Stanford, and then spawned a company to design and manufacture industrial strength versions of the academic architecture.

The first generation (loosely) consisted of MIPS, SPARC, Motorola Mc88000, IBM ??, and Intergraph Clipper. The second generation added HP PA-RISC, ALPHA, IBM Power, and maybe a couple more.


all the 801 stuff ... lots of past posts mentioning 801, iliad, romp, rios, fort knox, somerset, etc ... lots of past posts
http://www.garlic.com/~lynn/subtopic.html#801

even some old email
http://www.garlic.com/~lynn/lhwemail.html#801

maybe 2nd generation was various iliad

note here on john:
http://domino.watson.ibm.com/comm/pr.nsf/pages/news.20020717_cocke.html

801 wiki page:
http://en.wikipedia.org/wiki/IBM_801

i've periodically claimed that (at least some) motivation behind 801 was to go to the opposite extreme from the extreme complexity of the (failed/canceled) Future System project
http://www.garlic.com/~lynn/submain.html#futuresys

somewhat after the 370 fort knox activity in endicott was killed, some number of engineers that had worked on 801 efforts showed up at other companies ... amd (29k), hp (snake). there is folklore that one of the prime people showing up on snake had given two weeks notice ... and then spent the last two weeks on blue iliad.

somewhat separate from the 801 iliad stuff was 801 romp ... which started out as a joint effort between research and office products for an 801-based (cp.r, pl.8, etc) displaywriter follow-on. when that was killed, they somewhat looked around and decided to turn it into a unix workstation ... which was released as the pc/rt. Then work started on the rios chipset (i've got a rios chipset paperweight that says 150 million ops, 60 million flops, 7 million transistors) ... which came out as "power" and rs/6000.

sizeof() was: The Perfect Computer - 36 bits?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sat, 14 Apr 2007 11:41:00 -0600
jmfbahciv writes:
We aren't using DECnet because it was too little too late. They took 10 f**king years to approve and implement functionality that TOPS-10 started shipping in 1976 or 1978. If the review process had passed and implemented Phase IV by 1980, TCP/IP would be in the folklore, not DECnet.

post from last year mentioning that wecker had worked on cp67 and virtual machines (i ran into him a number of times in this era)
http://www.garlic.com/~lynn/2006m.html#21 The very first text editor

post also mentions that by the end of '76, 16 percent of the burlington mall dev. group were working for DEC ... aka the result of POK getting approval to kill off the vm370/cms product, shut down the burlington location, and move all the people to POK to support MVS/XA development.

using a search engine for decnet and wecker turns up lots of references ... including some mentioning Wecker as originator of DECnet (need login for following)
http://ieeexplore.ieee.org/iel1/35/4759/x0321428.pdf

others just say he was one of the architects of DECnet.

misc. recent posts
http://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#10 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#11 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#12 The Perfect Computer - 36 bits?

for other drift ... by the early 80s, the descendant of the network support that originated at the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

was starting to just ship the jes2 "gateway" drivers ... and stopped shipping any of the native drivers ... among other things, the native drivers performed/operated significantly better than the jes2 driver ... corporate decision to minimize comparisons(?). this era saw the start of bitnet using the product ... lots of past posts mentioning bitnet (and/or earn, its european counterpart)
http://www.garlic.com/~lynn/subnetwork.html#bitnet

some old email from the person responsible for EARN in Europe
http://www.garlic.com/~lynn/2001h.html#email840320
http://www.garlic.com/~lynn/2006w.html#email850607

Also, slightly later in this era, the internal network
http://www.garlic.com/~lynn/subnetwork.html#internalnet

passed 1000 nodes (and the internal network continued to utilize "native" drivers)
http://www.garlic.com/~lynn/internet.htm#22
http://www.garlic.com/~lynn/99.html#112 OS/360 names and error codes (was: Humorous and/or Interesting Opcodes)
http://www.garlic.com/~lynn/2006k.html#3 Arpa address
http://www.garlic.com/~lynn/2006k.html#8 Arpa address

this was before the jes2 revamp to support up to 1000 nodes (its implementation was based on the HASP one-byte index table, which had to be shared with several other functions) ... and later the internal network passed 2000 nodes before the jes2 upgrade to support 1999 nodes. lots of past hasp/jes2 posts
http://www.garlic.com/~lynn/submain.html#hasp
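The arithmetic behind the one-byte limit can be sketched as follows; the number of index values reserved for other HASP functions is an assumed, purely illustrative figure:

```python
# A one-byte index table gives at most 256 entries; some values
# are taken by other HASP functions, leaving fewer slots for
# network nodes. The RESERVED figure below is hypothetical.

TOTAL_VALUES = 1 << 8                 # 256 values in one byte
RESERVED = 64                         # assumed share for other functions
node_slots = TOTAL_VALUES - RESERVED  # what's left for network nodes

network_size = 1000                   # internal network size of the era
unreachable = max(0, network_size - node_slots)

assert node_slots == 192
assert unreachable == 808             # nodes the table cannot address
```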

Working while young

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Working while young
Newsgroups: alt.folklore.computers
Date: Sat, 14 Apr 2007 12:00:13 -0600
Car keys could go the way of tail fins
http://news.com.com/Car+keys+could+go+the+way+of+tail+fins/2100-11389_3-6176121.html

i learned to drive on an old flatbed truck the summer i turned nine. it had a pedal on the floor used to engage the starter motor (and all shifting was double clutch)

http://www.garlic.com/~lynn/38yellow.jpg

38chevy?

past posts:
http://www.garlic.com/~lynn/2002i.html#59 wrt code first, document later
http://www.garlic.com/~lynn/2004c.html#41 If there had been no MS-DOS

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sat, 14 Apr 2007 14:20:02 -0600
Morten Reistad <first@last.name> writes:
The two applications are X.509 and IS-IS. They work because the Internet needed them.

i was at a sigmod conference in the early 90s and the question of what was all this x.50x stuff came up in a session ... and somebody explained it as networking engineers trying to re-invent 1960s database technology.

later when we were consulting for this small client/server startup that wanted to do payments on their server ... they had this technology called SSL that required digital certificates.
http://www.garlic.com/~lynn/subpubkey.html#sslcerts

we also coined the term "comfort certificates" ... described in some of the referenced posts (a lot of them existing to make you feel better).

We had to do a lot of auditing of various businesses associated with the digital certificate stuff ... somewhat as a result, we coined the term "certificate manufacturing" to differentiate it from the stuff called PKI that was normally found in the literature associated with x.509
http://www.garlic.com/~lynn/subpubkey.html#manufacture

the other issue was that by the mid-90s a lot of institutions were starting to realize that the earlier "identity" x.509 work (typically overloaded with a lot of personal information) represented significant privacy and liability issues. as a result there was quite a bit of retrenchment to what was called relying-party-only certificates
http://www.garlic.com/~lynn/subpubkey.html#rpo

however, it was normally trivial to demonstrate that rpo-certificates were redundant and superfluous ... nearly everything that could be done with rpo-certificates could be achieved with a much simpler infrastructure, still involving digital signatures ... but w/o the enormous expense and trouble of digital certificates
http://www.garlic.com/~lynn/subpubkey.html#certless

old email suggesting/describing a simple certificate-less public key operation
http://www.garlic.com/~lynn/2006w.html#email810515

some recent posts about vulnerabilities related to the existing SSL operation
http://www.garlic.com/~lynn/2007f.html#31 Is that secure : <form action="https" from a local HTML page ?
http://www.garlic.com/~lynn/aadsm26.htm#47 SSL MITM-attacks make the news
http://www.garlic.com/~lynn/aadsm26.htm#56 Threatwatch: MITB spotted: MITM over SSL from within the browser

asymmetric cryptography + digital signature

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: asymmetric cryptography + digital signature
Newsgroups: sci.crypt
Date: Sat, 14 Apr 2007 14:36:16 -0600
"Giorgio" <nacci.giorgio@gmail.com> writes:
What really worries me about the "encrypt then sign" scenario is the possibility to read the signature (you just need the public key) and the corresponding message together. I'm thinking about how easy (relatively; well, not so easy, but much easier than cracking the encryption function) finding a collision could be. Am I wrong? If I'm not, then integrity is lost, right?

if that were serious ... then the whole digital certificate infrastructure might collapse.

built into every browser is a whole lot of clear-text messages containing public keys along with digital signatures ... which happen to be called "digital certificates". these are the things nearly all SSL operates from. if there are significant vulnerabilities as you describe ... then the whole SSL infrastructure might come crashing down.
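A conceptual sketch of why the clear-text-plus-signature layout of a certificate is not, by itself, a problem: the signature covers the hash of the certificate body, so forging requires finding a second body with the same digest (the field names below are made up for illustration).

```python
import hashlib

# Conceptual sketch (not real RSA): the CA's signature binds the
# certificate body to its hash. Reading the signature next to the
# clear-text body reveals nothing exploitable unless the attacker
# can produce a second body with the same digest (a second-preimage
# attack on the hash function).

def digest(msg: bytes) -> bytes:
    return hashlib.sha256(msg).digest()

cert_body = b"subject=example.com, public-key=..."   # hypothetical fields
signed_digest = digest(cert_body)                    # what the signature covers

# Any change to the body changes the digest, so the signature
# no longer verifies:
tampered = b"subject=attacker.com, public-key=..."
assert digest(tampered) != signed_digest
```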

previous post:
http://www.garlic.com/~lynn/2007h.html#15 asymmetric cryptography + digital signature

unrelated recent message ... that happens to mention different kind of vulnerability with ssl infrastructure
http://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 09:46:08 -0600
Morten Reistad <first@last.name> writes:
Yep, X.509 has degraded to non-security. Not because of the standard itself, which is workable but overly complicated.

There is no auditing. Having a certificate proves absolutely nothing. So the whole house falls down.


re:
http://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?

the other issue is that digital certificates were an offline paradigm ... an electronic emulation of credentials, certificates, licenses, etc ... essentially analogous to the letters of credit/introduction from the sailing ship (and earlier) days. they are useful to relying parties that have no other information about the entity.

the x.509 identity digital certificates were billed as a security feature ... which requires quite a bit of funding to cover the costs of audit, compliance, etc. however, in an online world, real-time, online information is significantly more valuable to relying parties ... than stale, static certificates.

besides the privacy and liability issues with identity digital certificates grossly overloaded with personal information ... relying parties gaining growing access to (the much more valuable) online, real-time information ... starts to move digital certificates to the no-value market segment (i.e. applications that can't justify the cost of online, real-time information). The problem then becomes the difficulty of justifying high prices in the no-value market segment ... and w/o a lot of revenue flow ... it is difficult to cover the costs of stringent security features, audits, compliance, etc.

so another aspect is that the whole digital certificate paradigm was targeted at a rapidly disappearing market segment ... with expanding, online, ubiquitous connectivity.

we were called into consult with the small client/server startup that had ssl ... that wanted to do payment transaction on servers
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

what is now frequently referred to as electronic commerce. part of the issue is the ssl domain name digital certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert

are being paid for by merchants ... supposedly in support of this electronic commerce stuff. the value of the certificate is bounded by the fact that the merchants are already paying a significant amount on every transaction for the real-time infrastructure ... and, in fact, had been for a couple decades prior to the appearance of SSL.

In this period there were a number of x.509 advocates making statements about digital certificates being required to bring the payment infrastructure into the "modern" era. We would respond that the offline paradigm offered by digital certificates actually reverts the electronic payment infrastructure several decades. It was not too long after that that work started on OCSP (online certificate status protocol) ... another Rube Goldberg type hack ... (along with relying-party-only certificates) that attempts to demonstrate an online infrastructure while preserving the fiction of (stale, static, redundant and superfluous) certificates serving a useful purpose in an online world.

of course ... my oft reference old post about security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61

and more recent posts concerning "armored" payments and transactions ... i.e. providing end-to-end strong authentication and integrity
http://www.garlic.com/~lynn/x959.html#x959
... but w/o enormous payload and processing bloat
http://www.garlic.com/~lynn/subpubkey.html#bloat

i.e. the payment protocols from the period that demonstrated appending digital certificates to payment transactions ... were sending (stale, static) information back to the relying party, when the relying party already had the real-time copy of the information (i.e. redundant and superfluous) ... however, these stale, static, redundant and superfluous digital certificates were increasing the payload transaction size and processing by two orders of magnitude.

it was somewhat out of this experience that we did the certificate-less
http://www.garlic.com/~lynn/subpubkey.html#certless

digitally signed financial transaction standard
http://www.garlic.com/~lynn/x959.html#x959

i.e. can have end-to-end integrity and authentication without having the enormous bloat of stale, static, redundant and superfluous appended digital certificates.

MIPS and RISC

Refed: **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: MIPS and RISC
Newsgroups: comp.arch,alt.folklore.computers
Date: Sun, 15 Apr 2007 10:00:05 -0600
re:
http://www.garlic.com/~lynn/2007h.html#17 MIPS and RISC

for totally other topic drift, Varian was running early chip design applications on cp67/cms. Later you find some of the influence (and even engineers) at other places around the valley like AMD, LSI Logic, etc. ... and chip design applications still running on vm370/cms (cp67/cms followon) well thru the 80s.

so separate from regularly visiting various of these places in the 70s and 80s ... related to chip technology ... also had lots of interactions about their vm370 operations.

and old email that happens to mention amd 29k
Date: Wed, 28 Sep 1988 13:42:59 PDT
From: wheeler
.... ....
Our icharts show that the risc industry is doubling processing power every 18-24 months. Given that AMD introduced the 29000 in '87, then the window for a 2x 29000 opens sometime in early '89. The 29000 has been benchmarked at 40+k drystones (making a 2x 29000 85k-90k drystones). I believe that the current numbers are approx:


original rt:       4k drystones
135:              12k drystones
ps2m70            13k drystones
29000             40k+ drystones

... snip .... top of post, old email index

of course, CISC processors then started to move onto similar technology curve.
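The doubling-curve arithmetic quoted in the email can be sanity-checked; the 18-24 month doubling period and the dhrystone figures are taken from the post, the function itself is just an illustration:

```python
# Sketch of the growth curve implied by the quoted email:
# processing power doubling every 18-24 months.

def dhrystones(start, months, doubling_period_months):
    """Projected rating after `months` on an exponential curve."""
    return start * 2 ** (months / doubling_period_months)

# From the original RT's ~4k dhrystones, an 18-month doubling
# period yields roughly 10x (~40k, the 29000's rating) in 5 years:
assert dhrystones(4_000, 60, 18) > 40_000

# A part introduced in early '87 at ~40k reaches the 2x (80k)
# point by about 24 months later, i.e. early '89, consistent
# with the "window opens early '89" remark:
assert dhrystones(40_000, 24, 24) == 80_000
```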

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 10:34:40 -0600
jmfbahciv writes:
Wecker was one of the brilliant ones. The problem was the process of getting the work done. It took years to approve a dot in a spec. What people in PDP-10 land did was take the DECnet spec as it existed in mid-70s and implemented what later became ANF-10. In 1984, two of our developers attended a spec review meeting of DECnet. There was an agenda that listed the functionality that would be nice to put into DECnet. Our guys went down the list to see what ANF-10 did not have. It had implemented most of the items. So DECnet was not delayed because of non-existent hardware.

I think it was the process where all specs had 20 meetings minimum with 20 people attending and all specs had to have signatory approval of 20-25 managers. Nobody just went and did it. They waited for all approvals before typing MAKE DECNET.MAC


re:
http://www.garlic.com/~lynn/2007h.html#18 sizeof() was: The Perfect Computer - 36 bits?

all large corporations seemed to have their equivalent ... witness the future system stuff
http://www.garlic.com/~lynn/submain.html#futuresys

in the early 80s ... i had somewhat precipitated a new operating system rewrite project ... i had laid out a bunch of objectives ... programming technology, implementation language (some of the things being observed about the portability that unix was leveraging in the market), etc.

this quickly got adopted and ballooned into a fairly massive effort (with lots of people wanting to take advantage of the opportunity and get their favorite feature included ... a somewhat smaller scale repeat of what happened with the future system project). one of my original objectives ... a small, highly focused group of individuals doing the implementation ... got lost. just before the whole effort imploded ... there were something like a couple hundred people working on writing specifications.

recent post
http://www.garlic.com/~lynn/2007g.html#69 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007g.html#70 The Perfect Computer - 36 bits?

of course something similar could be said about the workings of a large bureaucratic organization in taking us out of the NSFNET backbone picture and attempting to substitute SNA (despite all the best efforts of NSF)
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

or our experience with MEDUSA, cluster-in-a-box
http://www.garlic.com/~lynn/lhwemail.html#medusa

and there used to be a joke that product announcements required nearly 500 different executive approvals.

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 11:28:28 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
and there used to be a joke that product announcements required nearly 500 different executive approvals.

re:
http://www.garlic.com/~lynn/2007h.html#24 sizeof() was: The Perfect Computer - 36 bits?

for other drift ... the corporation periodically would release comments about encouraging wild ducks ... however somebody once wrote a byline for one of the series: as long as they fly in formation. the other scenario was about encouraging people to self-select ... so they would have a list of people that needed to be dealt with.

post about being able to tell the people out in front by the arrows in their back
http://www.garlic.com/~lynn/2007f.html#41 time spent/day on a computer

and other related comments
http://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
http://www.garlic.com/~lynn/2007f.html#29 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007f.html#30 The Perfect Computer - 36 bits?

and past post mentioning wild ducks:
http://www.garlic.com/~lynn/2007b.html#38 'Innovation' and other crimes

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 15:52:15 -0600
Morten Reistad <first@last.name> writes:
It can be done a whole lot better than that. If I have a web page up that says www.ford.com (just to take an example, choosing one of the better operations), I want to be able to make a challenge to the system and make sure I am dealing with the Ford that is listed on NYSE as "F", or a company it has authorised.

I need a trusted authority to issue those "first order" certificates.

This is where Verisign has failed. The certificate is not audited, and is therefore just a stage prop. Potemkin security.


re:
http://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#22 sizeof() was: The Perfect Computer - 36 bits?

You don't necessarily need "first order" digital certificates ... which are a paradigm targeted at relying parties that have no other access to the information ... direct, online, real-time access to the authoritative agency responsible for the information is also a possible solution. This has been the process in place for electronic payments for decades ... the quality of the information for the merchant is significantly more worthwhile, since not only do they get a real-time response about the (supposed) validity of the consumer ... but also a real-time response about real-time aggregated information (like credit limit) ... which isn't possible in a stale, static, offline paradigm.

there were 3-4 separate scenarios ... all involving the authoritative agencies responsible for the information that certification authorities supposedly rely on when certifying the validity of the information ... the information then represented in the (stale, static, redundant and superfluous) digital certificates.

so with respect to certifications of merchant servers accepting payments, we initially suggested that the merchant financial institutions ... which already certify merchants and take financial responsibility for them as part of sponsoring their participation in electronic transactions ... should just issue digital certificates for the merchants that they already certify. but that turns out to be redundant and superfluous, since the merchant financial institutions have already been doing such operations for decades as part of real-time electronic payments.

so with respect to domain name certificates ... i.e. the certification that the applicant for the domain name certificate is really the owner of the domain name ... SSL domain name certificates were originally intended as 1) encryption/hiding of information in transit, and 2) a countermeasure to website impersonation, ip-address take-over, man-in-the-middle attacks, etc
http://www.garlic.com/~lynn/subpubkey.html#sslcert

as well as various things related to integrity issues with the domain name infrastructure. however, the process relies on the domain name infrastructure as the authoritative agency for the information that all the certification authorities use to establish the real domain name owner (certification authorities have to check with the domain name infrastructure as to the true owner of the domain name when processing an application for a domain name certificate). Now there have been some proposals that improve the integrity of the domain name infrastructure ... even backed by the certification authority industry (since the validity of domain name digital certificates tracks back to the integrity of the domain name infrastructure as the source). However, this represents something of a catch-22 for the certification authority industry, since a major original justification for domain name digital certificates was the domain name infrastructure's integrity issues. Fixing those integrity issues reduces the justification for domain name digital certificates ... lots of past posts discussing this issue
http://www.garlic.com/~lynn/subpubkey.html#catch22

Part of the improvements involve having the domain name owner register a public key when they register their domain name (minimizing various forms of domain name hijacking and other vulnerabilities by requiring that communication from the domain name owner be digitally signed and then verified with the on-file public key ... note: certificate-less operation).

Now there is additional opportunity for the certification authority industry. The current process has a domain name digital certificate applicant supply a lot of identification information with the application. Then the certification authority has an expensive, time-consuming, and error-prone process of matching the supplied identification information with the identification information on file with the domain name infrastructure.

The certification authority can start (also) requiring that domain name digital certificate applications also be digitally signed by the domain name owner. Then the certification authority can replace an expensive, time-consuming and error-prone identification process with much less expensive, simpler, and much more reliable authentication ... by doing a real-time retrieval of the on-file public key (from the domain name infrastructure) to validate the digital signature on the application.

The additional catch-22 for the certification authority industry (in addition to eliminating a lot of the original reason for their existence) ... is that if they can do (certificate-less) real-time retrievals of public keys ... then the possibility exists that the rest of the world could also. Rather than having all the digital-certificate-originated protocol chatter as part of SSL session setup ... the client can get the (valid) public key piggybacked from the domain name infrastructure in the response that maps the domain name to an ip-address. The client then just generates a random (symmetric) session encryption key, encrypts the message, encrypts the session key with the returned public key ... and sends the whole thing off to the server in a single message transmission. It is then theoretically possible to have an SSL exchange in a single transmission round trip.
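The single-round-trip flow described above can be sketched as follows; the XOR keystream "cipher" and the random 32-byte "public key" are stand-ins purely to show the message structure, not real cryptography:

```python
import hashlib
import os

# Toy sketch of the single-round-trip idea: the DNS answer
# piggybacks the server's key, the client picks a random session
# key, wraps it, and sends everything in one message. The
# keystream-XOR "cipher" below is for illustration only.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (toy cipher)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# 1. client receives the server "public key" with the DNS answer
server_key = os.urandom(32)        # stand-in for the on-file public key

# 2. client generates a random symmetric session key
session_key = os.urandom(32)

# 3. one message: session key wrapped under the server key,
#    request encrypted under the session key
request = b"GET / HTTP/1.0"
wrapped_key = keystream_xor(server_key, session_key)
ciphertext = keystream_xor(session_key, request)

# server side: unwrap the session key, then decrypt the request
assert keystream_xor(server_key, wrapped_key) == session_key
assert keystream_xor(session_key, ciphertext) == request
```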

In an online world ... it is theoretically possible to have direct real-time "first order" information from the authoritative agency and/or financially responsible institution ... information that is significantly more valuable than what you could get from a stale, static, redundant and superfluous digital certificate.

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 16:42:38 -0600
Morten Reistad <first@last.name> writes:
The first problem was that the banking world was a gentleman's club of trust between the participants. Opening this up to electronic security requires real authentication.

re:
http://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#22 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#26 sizeof() was: The Perfect Computer - 36 bits?

part of the issue was that much of the digital certificate stuff side-tracked the whole process with extremely complex and costly processing (while providing little fundamental added value).

fundamentally, digital signatures and asymmetrical key technology can provide for end-to-end integrity and authentication.

the digital certificate stuff is an electronic analog of credentials, certificates, licenses, etc ... similar to the letters of credit/introduction from the sailing ship days (and earlier) ... aka a mechanism for trusted distribution of information to relying parties that otherwise had no access to the information (the two parties were anonymous strangers, having no prior interaction with each other ... and no recourse to direct interaction with an authoritative agency regarding information about the other party).

I've mentioned before that in the mid-90s, the certification authority industry appeared to take to wallstreet the prospect of a $20b/annum business ... i.e. the financial institutions would underwrite the cost of a $100/person/annum digital certificate for their customers (approx. 200m people).

There was one scenario where a large financial institution was told that they could transmit their customer masterfile and the certification authority would reformat the bits and return them something called a digital certificate (for every record in the institution's customer masterfile) ... and the price would only be $100/account ... and oh, by the way, this would have to be repeated every year.

The financial institution could then distribute these digital certificates to their customers ... and then for all future electronic communication/transactions, the customer would digitally sign the communication/transaction, and send off the communication/transaction, the digital signature, and the appended digital certificate to the financial institution. The financial institution would retrieve the associated account record ... and use the on-file public key to verify the digital signature. Since the financial institution had the original realtime version of the information, there was never a situation where it would be necessary to refer to the stale, static, redundant and superfluous digital certificate. So the financial institution, which had between 10m-20m such accounts ... came to the realization that there was no justification for a $1b-$2b annual transfer of wealth to the certification authority.
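A toy sketch of the certificate-less flow just described, with HMAC standing in for the asymmetric signature operation (account numbers and keys are made up): the institution verifies against the on-file key in the account record, so an appended certificate would never be consulted.

```python
import hashlib
import hmac

# The institution already has the customer's key on file with the
# account record; HMAC here is a stand-in for real public-key
# digital signature verification.

accounts = {"12345": {"balance": 100, "onfile_key": b"customer-key"}}

def sign(key: bytes, txn: bytes) -> bytes:
    """Customer side: produce a 'signature' over the transaction."""
    return hmac.new(key, txn, hashlib.sha256).digest()

def verify(acct: str, txn: bytes, sig: bytes) -> bool:
    """Institution side: look up the on-file key in real time and
    verify -- no certificate is ever needed."""
    onfile = accounts[acct]["onfile_key"]
    return hmac.compare_digest(sign(onfile, txn), sig)

txn = b"acct=12345;pay=25"
sig = sign(b"customer-key", txn)     # customer signs
assert verify("12345", txn, sig)     # institution verifies via account record
```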

We dropped into the institution and visited the people (that had already spent $50m on a pilot) just after the board became aware of the $1b-$2b/annum ongoing transfer of wealth requirement (and the responsible people had been advised that they should start looking for opportunities elsewhere).

These sorts of realizations tanked the $20b/annum business case for the industry that had been floating around wallstreet.

misc. past posts mentioning the $20b/annum business case scenario
http://www.garlic.com/~lynn/aadsm7.htm#rhose4 Rubber hose attack
http://www.garlic.com/~lynn/aadsm14.htm#29 Maybe It's Snake Oil All the Way Down
http://www.garlic.com/~lynn/aadsm18.htm#52 A cool demo of how to spoof sites (also shows how TrustBar preventsthis...)
http://www.garlic.com/~lynn/aadsm23.htm#29 JIBC April 2006 - "Security Revisionism"
http://www.garlic.com/~lynn/2005i.html#36 Improving Authentication on the Internet
http://www.garlic.com/~lynn/2005j.html#32 IBM Plugs Big Iron to the College Crowd

so the next approach was attempting to get governments to pass legislation mandating digital certificates as part of all electronic signature operations. we ran into this when we were asked in to help wordsmith the cal. state electronic signature legislation (and later the federal electronic signature legislation). One of the big issues was that in attempting to equate digital signatures with human signatures ... the lawyers pointed out that they (the certification authority industry) had left out the part of human signatures related to intent as part of generating the signature ... i.e. that the person had read, understood, agrees, approves, and/or authorizes what is being signed. lots of past posts on the difference between digital/electronic signatures and the issue of showing intent
http://www.garlic.com/~lynn/subpubkey.html#signature

part of this was periodically attributed to attempts to take advantage of possible semantic confusion since both terms "digital signature" and "human signature", contain the word "signature" ... even tho, otherwise they are totally unrelated.

past posts mentioning x9.59 financial standard protocol providing end-to-end integrity and authentication
http://www.garlic.com/~lynn/x959.html#x959

w/o the enormous added payload and processing bloat of requiring appending digital certificates
http://www.garlic.com/~lynn/subpubkey.html#bloat

lots of related posts in this n.g. in the long running thread
http://www.garlic.com/~lynn/2006y.html#7 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2006y.html#8 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#0 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#5 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#6 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#27 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007.html#28 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#60 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#61 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#62 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007b.html#64 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#6 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#8 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#10 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#15 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#17 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#18 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#22 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#26 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#27 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#28 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#30 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#31 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#32 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#33 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#35 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#36 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#37 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#38 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#39 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#43 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#44 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#46 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#51 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#52 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#53 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#0 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#5 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#11 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#26 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#68 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007d.html#70 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#2 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#12 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#20 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#23 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#24 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#26 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#28 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#29 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#58 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#61 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#62 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007e.html#65 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007f.html#8 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007f.html#58 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007f.html#68 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007f.html#72 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007f.html#75 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007g.html#8 Securing financial transactions a high priority for 2007

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 17:22:35 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
so the next approach was attempting to get governments to pass legislation mandating digital certificates as part of all electronic signature operations. we ran into this when we were asked in to help word smith the cal. state electronic signature legislation (and later the federal electronic signature legislation). One of the big issues was that in attempting to equate digital signatures with human signatures ... the lawyers pointed out they (the certification authority industry) had left out the part of human signatures related to intent as part of generating the signature ... i.e. that the person had read, understood, agreed, approved, and/or authorized what is being signed. lots of past posts on the difference between digital/electronic signatures and the issue of showing intent
http://www.garlic.com/~lynn/subpubkey.html#signature

part of this was periodically attributed to attempts to take advantage of possible semantic confusion since both terms "digital signature" and "human signature", contain the word "signature" ... even tho, otherwise they are totally unrelated.


re:
http://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#22 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#26 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#27 sizeof() was: The Perfect Computer - 36 bits?

there was a similar but different foray in this period to try to get merchants to underwrite consumer (x.509 identity) digital certificates ... since the financial institutions had declined the privilege, and it looked like getting the gov. to mandate it (that consumers would have to pay for their own) might not succeed.

now the merchants are already paying quite a bit on a per transaction basis ... and this would have further increased those payments. The offer basically implied that transactions that were consumer digitally signed and carrying the consumer's digital certificate ... might/could reverse the burden of proof. the current scenario in a merchant/consumer dispute ... puts the burden of proof on the merchant. If the burden of proof were reversed ... that meant that in a merchant/consumer dispute ... the burden of proof would be on the consumer (enormous savings for merchants in disputes).

(unfortunately?) A few raised the question that if this actually came to pass ... why would a consumer ever voluntarily want to digitally sign anything?

This is possibly similar, but different, to some recent comments about changes in the UK:
Card victims told 'don't call police'
http://www.thisismoney.co.uk/credit-and-loans/idfraud/article.html?in_article_id=418947&in_page_id=159
Concern over new fraud reporting
http://news.bbc.co.uk/1/hi/programmes/moneybox/6513835.stm
New rules to report fraud announced
http://www.moneyexpert.com/News/Credit-Card/18106248/New-rules-to-report-fraud-announced.aspx
Apacs: Report credit card fraud direct to bank
http://www.fairinvestment.co.uk/credit_cards-news-Apacs:-Report-credit-card-fraud-direct-to-bank-18107160.html
Anger at card fraud reporting changes - Law & Policy
http://management.silicon.com/government/0,39024677,39166633,00.htm
Banks charging to the top of the hate parade
http://edinburghnews.scotsman.com/opinion.cfm?id=508912007
Warning Over Purge On Credit Card Fraud
http://www.eveningtimes.co.uk/news/display.var.1303206.0.warning_over_purge_on_credit_card_fraud.php
Anger at card fraud reporting changes
http://www.silicon.com/financialservices/0,3800010322,39166633,00.htm
Financial institutions to report on card fraud
http://www.gaapweb.com/news/135-Financial-institutions-to-report-on-card-fraud.html
UK Tells Consumers To Report Financial Fraud to Their Banks
http://www.paymentsnews.com/2007/04/uk_tells_consum.html
Financial institutions to be first point of contact for reporting banking crime
http://www.cbronline.com/article_news.asp?guid=DE47801B-AE60-4073-8314-26AC46AC7C03
Card Fraud Changes 'Will Not Adversely Affect Police Response'

http://www.fool.co.uk/news/your-money/credit-cards/2007/04/11/card_fraud_changes_will_not_adversely_affect_polic.aspx

and related blog entry:
http://www.lightbluetouchpaper.org/2007/02/08/financial-ombudsman-on-chip-pin-infallibility/

and for other drift ... some past posts mentioning some possible vulnerabilities in various chip deployments
http://www.garlic.com/~lynn/subintegrity.html#yescard

including a quote of somebody quipping about having spent billions of dollars to prove that chips are less secure than magstripe.

for completely other drift ... a few past "interchange" fee references
http://www.garlic.com/~lynn/aadsm23.htm#37 3 of the big 4 - all doing payment systems
http://www.garlic.com/~lynn/aadsm26.htm#1 Extended Validation - setting the minimum liability, the CA trap, the market in browser governance
http://www.garlic.com/~lynn/aadsm26.htm#25 EV - what was the reason, again?
http://www.garlic.com/~lynn/aadsm26.htm#34 Failure of PKI in messaging
http://www.garlic.com/~lynn/aadsm7.htm#rhose3 Rubber hose attack
http://www.garlic.com/~lynn/2005u.html#16 AMD to leave x86 behind?
http://www.garlic.com/~lynn/2006k.html#23 Value of an old IBM PS/2 CL57 SX Laptop
http://www.garlic.com/~lynn/2007.html#27 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#38 Securing financial transactions a high priority for 2007

and misc. past posts mentioning burden of proof issue (in disputes):
http://www.garlic.com/~lynn/aadsm6.htm#nonreput Sender and receiver non-repudiation
http://www.garlic.com/~lynn/aadsm6.htm#terror7 [FYI] Did Encryption Empower These Terrorists?
http://www.garlic.com/~lynn/aepay10.htm#72 Invisible Ink, E-signatures slow to broadly catch on
http://www.garlic.com/~lynn/aadsm17.htm#59 dual-use digital signature vulnerability
http://www.garlic.com/~lynn/aadsm18.htm#0 dual-use digital signature vulnerability
http://www.garlic.com/~lynn/aadsm18.htm#55 MD5 collision in X509 certificates
http://www.garlic.com/~lynn/aadsm19.htm#33 Digital signatures have a big problem with meaning
http://www.garlic.com/~lynn/aadsm20.htm#0 the limits of crypto and authentication
http://www.garlic.com/~lynn/aadsm21.htm#35 [Clips] Banks Seek Better Online-Security Tools
http://www.garlic.com/~lynn/aadsm23.htm#14 Shifting the Burden - legal tactics from the contracts world
http://www.garlic.com/~lynn/aadsm23.htm#33 Chip-and-Pin terminals were replaced by "repairworkers"?
http://www.garlic.com/~lynn/2000.html#57 RealNames hacked. Firewall issues.
http://www.garlic.com/~lynn/2000g.html#34 does CA need the proof of acceptance of key binding ?
http://www.garlic.com/~lynn/2001g.html#59 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001l.html#52 Security standards for banks and other institution
http://www.garlic.com/~lynn/2002g.html#69 Digital signature
http://www.garlic.com/~lynn/2004i.html#17 New Method for Authenticated Public Key Exchange without Digital Certificates
http://www.garlic.com/~lynn/2005e.html#41 xml-security vs. native security
http://www.garlic.com/~lynn/2005m.html#6 Creating certs for others (without their private keys)
http://www.garlic.com/~lynn/2005m.html#11 Question about authentication protocols
http://www.garlic.com/~lynn/2005o.html#26 How good is TEA, REALLY?
http://www.garlic.com/~lynn/2005o.html#42 Catch22. If you cannot legally be forced to sign a document etc - Tax Declaration etc etc etc
http://www.garlic.com/~lynn/2006d.html#32 When *not* to sign an e-mail message?
http://www.garlic.com/~lynn/2006e.html#8 Beginner's Pubkey Crypto Question

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 20:50:05 -0600
Andrew Swallow <am.swallow@btopenworld.com> writes:
Empire building - bosses in big organisations with 100 men working for them are paid more than bosses with 25 men working for them. Lack of resources is an excuse that personnel departments and accountants think they understand.

re:
http://www.garlic.com/~lynn/2007h.html#24 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#25 sizeof() was: The Perfect Computer - 36 bits?

in past posts there have been passing references to how failures can spawn significant promotions and empire building ... there was a recent corollary mentioned in a risks digest article, nothing succeeds like failure:
http://catless.ncl.ac.uk/Risks/24.62.html

in the past, the reference was comparing the 12 people developing cp67 for the 360/67 and eventually something like 1200 people developing tss/360 for the 360/67; some comments were that as the larger group appeared unable to deal with one problem or another ... the solution was to significantly increase the size of the organization (with a sufficiently large organization any problem can be solved). so there is some significant incentive not to solve problems simply ... because there is always the chance that having difficulty solving a problem will result in significant empire building. old posts mentioning the difference between the 12 working on cp67 and the 1200 working on tss/360.
http://www.garlic.com/~lynn/2002d.html#23 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002d.html#36 Mainframers: Take back the light (spotlight, that is)
http://www.garlic.com/~lynn/2002n.html#62 PLX
http://www.garlic.com/~lynn/2003g.html#24 UltraSPARC-IIIi
http://www.garlic.com/~lynn/2004c.html#61 IBM 360 memory
http://www.garlic.com/~lynn/2004f.html#55 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2005c.html#18 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2006q.html#32 Very slow booting and running and brain-dead OS's?

another (datacenter) corollary is that it can be counterproductive to always anticipate all problems and make sure they are averted ... because it then promotes the belief that there haven't been hard problems and that the job of providing datacenter service wasn't really a difficult, hard problem.

as in other recent posts ... the Boyd scenario is that some of this originates as a result of the training given young officers in WW2 ... i.e. the assumption that there were massive numbers of inexperienced people and it therefore required a strongly enforced command & control infrastructure to leverage the very few people that knew what they were doing ... who directed the movements of massive numbers of inexperienced people. recent post
http://www.garlic.com/~lynn/2007e.html#45 time spent/day on a computer

somewhat bleed over from some other posts in this thread:
http://www.garlic.com/~lynn/2007h.html#27 sizeof() was: The Perfect Computer - 36 bits?

there is some line somewhere that KISS can actually be much harder/more difficult than an extremely complex solution ... or that it is done when there is nothing left to remove (as opposed to being done when there is nothing left to add).

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 21:43:36 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
there is some line somewhere that KISS can actually be much harder/more difficult than an extremely complex solution ... or that it is done when there is nothing left to remove (as opposed to being done when there is nothing left to add).

re:
http://www.garlic.com/~lynn/2007h.html#24 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#25 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?

and for a different kind of topic drift
http://www.garlic.com/~lynn/2007g.html#41 US Airways badmouths legacy system

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sun, 15 Apr 2007 23:55:02 -0600

http://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#22 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#26 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#27 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#28 sizeof() was: The Perfect Computer - 36 bits?

now there were some ... even after the detailed description of information flow and how businesses actually operated with account records and other stuff in the real business world ... who were totally unable to part with the digital certificate as a comfortable analogy to the physical world with credentials, certificates, licenses, etc. .... this whole certificate-less operation of digital signatures and public keys just seemed alien
http://www.garlic.com/~lynn/subpubkey.html#certless

so we came up with two certificate-based scenarios ... where we were able to avoid the enormous penalty of having attached certificates on every communication/transmission ... especially in the payment transaction scenario ... where it represented a two-orders-of-magnitude (factor of one hundred times) increase in both payload bloat and processing bloat
http://www.garlic.com/~lynn/subpubkey.html#bloat

so the scenario is that the consumer ... registers their public key with the registration authority ... in this case, their financial institution. the financial institution validates the public key and then generates the digital certificate. the digital certificate is then recorded in the institution's account records for business reasons.

1) caching

Now, normally at this point, a standard PKI process has the institution returning a copy of the digital certificate to the public key owner ... so that on all future communication the public key owner has with the institution ... they can digitally sign it ... and then transmit the communication, the digital signature and the copy of the digital certificate back to the institution. However, normal caching algorithms allow that if it is known that the intended recipient already has a cached copy of something, it is not necessary to repeatedly retransmit it.

In fact, normal, existing PKI implementations already allow relying parties such a strategy for digital certificates belonging to (intermediary) certification authorities (in a trust hierarchy), aka cache such digital certificates and avoid having to retransmit copies on every operation. This process just extends the caching strategy to all digital certificates ... and since the recipient of the communication is known to not only already have a copy of each digital certificate ... but in fact actually has all the originals (stashed away in account records) ... as opposed to the copy provided to the public key owner ... it then is obvious that repeatedly transmitting the copy on every communication ... back to the entity that has all the originals ... is both redundant and superfluous.
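the caching argument can be reduced to a few lines of illustrative code ... none of the names (RelyingParty, build_message, etc.) come from any standard or product, they are purely hypothetical ... the point is just that the certificate only travels on a cache miss, and the institution that issued the certificate never misses:

```python
class RelyingParty:
    """a recipient (e.g. a financial institution) that keeps the
    original certificate in its own account records"""
    def __init__(self):
        self.accounts = {}              # account id -> original certificate

    def register(self, account, certificate):
        self.accounts[account] = certificate

    def has_certificate(self, account):
        return account in self.accounts


def build_message(account, payload, certificate, recipient):
    """append the certificate only when the recipient is not known
    to already hold a copy -- the normal cache-hit optimization"""
    msg = {"account": account, "payload": payload}
    if not recipient.has_certificate(account):
        msg["certificate"] = certificate   # cache miss: must transmit
    return msg


bank = RelyingParty()
bank.register("alice", b"...alice's original certificate...")

msg = build_message("alice", "pay $10", b"copy of certificate", bank)
assert "certificate" not in msg   # redundant: the bank holds the original
```

the same cache-hit logic that existing PKI implementations already apply to intermediary certification authority certificates ... just applied uniformly.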

2) compression

the enormous payload and processing bloat of digital certificates in the financial sphere has been recognized for some time. The X9F standards committee has even had an x9.68 work item for digital certificate compression related to financial transactions. The objective was to get digital certificate payload size down into the 300 byte range. One of the approaches was to recognize that a lot of the fields in digital certificates for financial institutions are the same (across all digital certificates). Part of the compression effort was looking at eliminating all fields that were identical across all digital certificates. We went even further; we claimed that it was possible to do information compression by eliminating all digital certificate fields that were known to already be in the possession of the financial institutions. We were then able to show that, in fact, all digital certificate fields were already in the possession of the financial institutions and it was therefore possible to reduce the digital certificate size to zero bytes. Then we would absolutely mandate that all digitally signed communication always required the appending of the (compressed) zero byte digital certificates.

....
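the bloat arithmetic and the compression argument can both be sketched in a few lines ... the byte counts below are purely illustrative (not taken from any standard document), and the field names are made up:

```python
# illustrative byte counts only -- chosen to show the two-orders-of-
# magnitude ratio, not quoted from any specification
transaction_bytes = 80       # a bare payment transaction
certificate_bytes = 8000     # an appended digital certificate
assert certificate_bytes // transaction_bytes == 100   # factor of 100 bloat

# the compression argument taken to its logical conclusion: drop every
# certificate field the relying party already has on file; since the
# financial institution has all of them, nothing is left to transmit
certificate_fields = {"subject": "alice", "public_key": "...",
                      "issuer": "the bank", "validity": "..."}
on_file_at_institution = set(certificate_fields)   # it holds the originals
compressed = {k: v for k, v in certificate_fields.items()
              if k not in on_file_at_institution}
assert compressed == {}    # the zero-byte digital certificate
```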

various past posts going into this explanation of how to help reduce the enormous payload and processing bloat contributed by normal digital certificate processing (by either using caching techniques and/or extremely aggressive compression techniques to create zero byte digital certificates)
http://www.garlic.com/~lynn/aepay3.htm#aadsrel1 AADS related information
http://www.garlic.com/~lynn/aepay3.htm#aadsrel2 AADS related information ... summary
http://www.garlic.com/~lynn/aepay3.htm#x959discus X9.59 discussions at X9A & X9F
http://www.garlic.com/~lynn/aadsmore.htm#client4 Client-side revocation checking capability
http://www.garlic.com/~lynn/aadsm3.htm#cstech3 cardtech/securetech & CA PKI
http://www.garlic.com/~lynn/aadsm3.htm#cstech6 cardtech/securetech & CA PKI
http://www.garlic.com/~lynn/aadsm3.htm#kiss1 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
http://www.garlic.com/~lynn/aadsm3.htm#kiss6 KISS for PKIX. (Was: RE: ASN.1 vs XML (used to be RE: I-D ACTION :draft-ietf-pkix-scvp- 00.txt))
http://www.garlic.com/~lynn/aadsm4.htm#6 Public Key Infrastructure: An Artifact...
http://www.garlic.com/~lynn/aadsm4.htm#9 Thin PKI won - You lost
http://www.garlic.com/~lynn/aadsm5.htm#x959 X9.59 Electronic Payment Standard
http://www.garlic.com/~lynn/aadsm5.htm#shock revised Shocking Truth about Digital Signatures
http://www.garlic.com/~lynn/aadsm5.htm#spki2 Simple PKI
http://www.garlic.com/~lynn/aadsm8.htm#softpki8 Software for PKI
http://www.garlic.com/~lynn/aadsm9.htm#softpki23 Software for PKI
http://www.garlic.com/~lynn/aepay10.htm#76 Invisible Ink, E-signatures slow to broadly catch on (addenda)
http://www.garlic.com/~lynn/aepay11.htm#68 Confusing Authentication and Identiification?
http://www.garlic.com/~lynn/aadsm12.htm#28 Employee Certificates - Security Issues
http://www.garlic.com/~lynn/aadsm12.htm#64 Invisible Ink, E-signatures slow to broadly catch on (addenda)
http://www.garlic.com/~lynn/aadsm13.htm#20 surrogate/agent addenda (long)
http://www.garlic.com/~lynn/aadsm14.htm#30 Maybe It's Snake Oil All the Way Down
http://www.garlic.com/~lynn/aadsm14.htm#41 certificates & the alternative view
http://www.garlic.com/~lynn/aadsm15.htm#33 VS: On-line signature standards
http://www.garlic.com/~lynn/aadsm20.htm#11 the limits of crypto and authentication
http://www.garlic.com/~lynn/aadsm22.htm#4 GP4.3 - Growth and Fraud - Case #3 - Phishing
http://www.garlic.com/~lynn/aadsm23.htm#51 Status of opportunistic encryption
http://www.garlic.com/~lynn/aadsm24.htm#5 New ISO standard aims to ensure the security of financial transactions on the Internet
http://www.garlic.com/~lynn/2000b.html#93 Question regarding authentication implementation
http://www.garlic.com/~lynn/2000e.html#41 Why trust root CAs ?
http://www.garlic.com/~lynn/2000f.html#3 Why trust root CAs ?
http://www.garlic.com/~lynn/2001c.html#57 PKI and Non-repudiation practicalities
http://www.garlic.com/~lynn/2001c.html#58 PKI and Non-repudiation practicalities
http://www.garlic.com/~lynn/2001d.html#31 Very CISC Instuctions (Was: why the machine word size ...)
http://www.garlic.com/~lynn/2001e.html#35 Can I create my own SSL key?
http://www.garlic.com/~lynn/2001f.html#57 any 70's era supercomputers that ran as slow as today's supercomputers?
http://www.garlic.com/~lynn/2001f.html#79 FREE X.509 Certificates
http://www.garlic.com/~lynn/2001n.html#84 Buffer overflow
http://www.garlic.com/~lynn/2002j.html#9 "Clean" CISC (was Re: McKinley Cometh...)
http://www.garlic.com/~lynn/2003f.html#32 Alpha performance, why?
http://www.garlic.com/~lynn/2004d.html#7 Digital Signature Standards
http://www.garlic.com/~lynn/2005b.html#31 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005n.html#33 X509 digital certificate for offline solution
http://www.garlic.com/~lynn/2005o.html#31 Is symmetric key distribution equivalent to symmetric key generation?
http://www.garlic.com/~lynn/2005t.html#6 phishing web sites using self-signed certs
http://www.garlic.com/~lynn/2006b.html#37 X.509 and ssh
http://www.garlic.com/~lynn/2006c.html#35 X.509 and ssh
http://www.garlic.com/~lynn/2006f.html#29 X.509 and ssh
http://www.garlic.com/~lynn/2006h.html#28 confidence in CA
http://www.garlic.com/~lynn/2006i.html#13 Multi-layered PKI implementation

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Mon, 16 Apr 2007 08:23:19 -0600
jmfbahciv writes:
The news had a warning to people submitting their income taxes online. People have made web sites that are images of IRS.GOV. Taxpayers enter all the data; the intercept changes the bank account number where the refund is to be deposited, and then ships the e-forms to the real IRS.GOV.

How does one know that you've reached the real web site? There isn't any way to verify this from a user POV.


we had signoff on the server to payment gateway part of the stuff that came to be called electronic commerce ... but had no direct authority over the client/server (browser/server) portion.

in the server/gateway portion (which we've periodically claimed is the original SOA) ... the merchant server has a prior business relationship with the acquiring financial institution and must be registered ... and similarly the acquiring financial institution must be registered with the merchant server. as a result there is stable registered information by both parties about the other party.

as oft repeated, the digital certificate design point is something of an anarchy with no previous knowledge by either party of the other party (total strangers) ... trusted information distribution when there is no other mechanism that either party has for obtaining information about the other party. the digital certificate design point is much lower quality than direct knowledge of the relationship between both parties and/or established, registered information ... and therefore the digital certificate design point has significantly greater uncertainties than direct, online, real-time information. However, again, this is the digital certificate design point ... it fills a gap in the offline world paradigm ... some information is better than the alternative ... none.
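a minimal sketch of the registered-information pattern ... the relying party checks incoming traffic against verification data it registered itself through a prior business relationship, so nothing needs to be appended to each transaction. hmac stands in here for real asymmetric digital signature verification purely to keep the sketch dependency-free, and every name in it is made up for illustration:

```python
import hashlib
import hmac

registered_keys = {}            # account -> verification data on file

def register(account, key):
    """done once, through a prior business relationship --
    the 'stable registered information' in the text above"""
    registered_keys[account] = key

def verify_transaction(account, payload, signature):
    """verify against the key on file; no certificate accompanies
    the transaction, and strangers are simply rejected"""
    key = registered_keys.get(account)
    if key is None:
        return False            # no prior relationship: reject
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

register("alice", b"alice-verification-key")
sig = hmac.new(b"alice-verification-key", b"pay $10", hashlib.sha256).digest()
assert verify_transaction("alice", b"pay $10", sig)
assert not verify_transaction("mallory", b"pay $10", sig)   # total stranger
```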

so this is a recent post discussing the transition from pre-internet online banking to the current infrastructure. pre-internet online (dialup) banking had much less uncertainty because there was stable registered information about the banking entity being dealt with. The transition to an internet based online infrastructure was an enormous cost savings ... but the digital certificate based approach introduced enormous uncertainty (compared to the previous mechanism).
http://www.garlic.com/~lynn/aadsm26.htm#52 The One True Identity -- cracks being examined, filled, and rotted from the inside
http://www.garlic.com/~lynn/aadsm26.htm#53 The One True Identity -- cracks being examined, filled, and rotted from the inside

recent posts in this thread
http://www.garlic.com/~lynn/2007h.html#20 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#22 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#26 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#27 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#28 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#31 sizeof() was: The Perfect Computer - 36 bits?

for other topic drift ... i've also in the past drawn an analogy between the digital certificate paradigm for trusted distribution of information in an offline world ... and distributed database caching (or multiprocessor cache) algorithms, where the attributes of distributed information ... like stale, static, timely, consistent, etc. ... have been studied much more thoroughly and are better understood (by comparison, there tends to be a whole lot of hand waving most of the time with regard to the information theory characteristics of digital certificates).

sizeof() was: The Perfect Computer - 36 bits?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Mon, 16 Apr 2007 10:02:15 -0600
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Constant change is mandated by the marketing department. Therefore, if you've implemented the optimum solution, you have to break it in the next release. This either means re-introducing bugs, or adding so many bells and whistles that they get in the way. A phrase I often find myself using about a particular version of software is "It's a great improvement on its successors."

there was some observation in the mid-90s that office software (word processing, presentation, etc) had reached the point where it offered 99 percent of the features used by 99 percent of the people. Software applications for personal computing platforms from the early 80s were constantly coming out with new versions that had new features ... which people would snap up. However, a paradigm shift needed to occur starting in the mid-90s to convince people to continue to buy new versions ... even if they offered no new feature/function.

for decades, similar comments have been leveled against the automobile industry ... i.e. new versions were on a 7-8 yr cycle ... and the annual model refresh consisted primarily of superficial cosmetic changes.

there has been a recent TV advertisement somewhat poking fun at this characteristic ... by a new chewing gum product that positions itself as having much longer lasting taste (when chewed). The marketing department points out that if the taste lasts much longer ... people will be buying less.

For a time, I had an ongoing argument with the people responsible for some implementation feature that went into SVS/MVS ... but they insisted on doing it their way anyway. Some 6-7 yrs later, I got a call from somebody in POK mentioning that they had gotten a large corporate award for coming up with a new method of how the MVS kernel implemented the feature. He was wondering if he could make the same change in vm370 (for another large corporate award). Unfortunately, I had to point out to him that I had never, not done it that way ... going back to when I was an undergraduate implementing code in cp67 nearly 15yrs earlier. I then had some snide thought that instead of handing out a large corporate award for fixing something that shouldn't have needed fixing ... they should have retro-actively penalized the compensation of the people responsible for the earlier faulty implementation.

related posts in this thread
http://www.garlic.com/~lynn/2007h.html#29 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#30 sizeof() was: The Perfect Computer - 36 bits?

and for a little topic drift
http://www.garlic.com/~lynn/2007e.html#48 time spent/day on a computer
http://www.garlic.com/~lynn/2007h.html#25 sizeof() was: The Perfect Computer - 36 bits?

GA24-3639

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: GA24-3639
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 09:14:00 -0600
Quadibloc <jsavard@ecn.ab.ca> writes:
the 3838 Array Processor.

Some web searches have allowed me to discover that the 2938 was an RPQ (request price quotation) item, as I came across part numbers for the FC manuals for them. But the 3838, appearing in the 370 system summary, was an ordinary product. (Apparently, it could only be attached (through a block multiplexer channel) to the models 165 and 168; you couldn't hook one up to a 195.)

And there were even references to the functional characteristics manual for the 3838: part number GA24-3639. Which went through more than one edition.

The 2938 attached to models 44, 65, and 75 of System/360, and its design is credited, at
http://www.mssu.edu/seg-vm/bio_john_h__koonce.html

to John H. Koonce and Byron Gariepy. (However, if the 2938 is an update of the 2937, since the account implies the machine in question was designed from scratch, I am somewhat suspicious that either the 2938 was a _major_ update to the 2937, or it is actually an account of the origin of the 2937.)

The IBM Systems Journal article noted on the Corestore page also notes that the 2938 performs only *single-precision* arithmetic. It connected directly to the CPU, just as the channel controllers did. From the 370 system summary, the 3838 not only handled double- precision floats as well as single-precision, but it also handled 16- bit integers!

Here's hoping that Al Kossow someday manages to encounter a copy of GA24-3639 for his site! But that probably *is* an extreme rarity, like manuals specific to the 360/85 and a few other items.


part of the pre-data-streaming 3mbyte/sec channel feature for 370 was a set of operational restrictions, which included significantly reduced channel cable distance. i had thot that the only device taking advantage of 3mbyte/sec was the 2305-1.

the 2305 was a fixed-head disk ... the 2305-2 had about 11mbyte capacity, avg. rotational delay of 5millisec, and 1.5mbyte/sec transfer. the 2305-1 had about half the capacity, half the rotational delay, and twice the transfer rate. I never saw a 2305-1, but conjectured from the specs that they took two heads (offset 180 degrees) and operated them in parallel on the same track ... a record would then have its bytes interleaved at offsets on opposite sides of the track. reference here
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_2305.html

recent discussion of the 3mbyte channel feature for 370 ... before the data-streaming feature (which relaxed the end-to-end handshake on every byte transferred, both doubling channel cable length from 200ft to 400ft and increasing data transfer to 3mbyte from 1.5mbyte) ... but somebody mentioned that the 3838 array processor also attached to the channel at 3mbyte/sec (and was possibly only supported by the 165/168 external channel box)
http://www.garlic.com/~lynn/2007e.html#40 FBA rant
http://www.garlic.com/~lynn/2007e.html#59 FBA rant

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 09:33:39 -0600
jmfbahciv writes:
You are stating this based on the final DECnet spec which took 10 years to be approved. I am saying that, if DECnet IV had been shipped, by 1978, the spec would have evolved, based on usage and customer needs, to something very useful. The node field problem would have been solved in 1982...maybe 3.

Software evolution based on real running rather than eternal specification review meetings produces a robust, extensible, and adaptable code platform. Just look at any OS that was useful. It got that way by shipping the code and responding to the bitches and suggestions of the users. It is more like a biological system; the best stuff survives and adapts to current-(minus)1 day needs of real computing life.


as an aside ... when we were first out making executive presentations on 3tier network architecture and middle ware/layer
http://www.garlic.com/~lynn/subnetwork.html#3tier

we were taking lots of arrows from the "SAA" and token-ring forces ... i.e. the SAA effort has somewhat been described as trying to stuff the 2tier, client/server genie back into the bottle ... attempting to hold the bulwarks for the terminal emulation paradigm (and installed product base)
http://www.garlic.com/~lynn/subnetwork.html#emulation

and since we were pitching enet as much better than token-ring ... the token-ring crowd weren't very happy ... some recent posts:
http://www.garlic.com/~lynn/2007g.html#80 IBM to the PCM market(the sky is falling!!!the sky is falling!!)
http://www.garlic.com/~lynn/2007g.html#81 IBM to the PCM market
http://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits?

disclaimer ... my wife is co-inventor on early (both US and international) token passing ring patent ... done back when she had been con'ed into going to POK to be responsible for (mainframe) loosely-coupled (i.e. cluster) architecture ... where she was responsible for peer-coupled shared data architecture
http://www.garlic.com/~lynn/submain.html#shareddata

... and except for IMS hot-standby, didn't see a lot of uptake until the relatively recent SYSPLEX offerings.

This was after she had co-authored peer-to-peer networking architecture AWP39 ... that was done in the same timeframe and somewhat competitive with SNA. One of the issues with peer-to-peer networking architecture was that it basically was a processor-to-processor infrastructure ... which saw relatively little commercial customer market at the time ... while SNA was basically oriented around controlling large numbers of terminals ... and there were a huge number of commercial accounts with large terminal (or other device, like ATM machine) populations (i.e. tens of thousands or more, not infrequently).

The other author of AWP39 was the person who (much later) forwarded a collection of email about what the SNA forces were up to (after we had been taken out of the NSFNET effort)
http://www.garlic.com/~lynn/2006w.html#email870109
in this post
http://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET

recent post also mentioning AWP39
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 09:46:16 -0600
jmfbahciv writes:
The security problem with this method is the outsourcing of the keypunching work. It gets done on somebody's laptop outside the hallowed halls of the IRS which can then be matchsticked and distributed. I spent about four weeks thinking time this year trying to figure out the most failsafe method of submitting the forms and exchanging monies.

My conclusion is that, if you do that e-form stuff, you have to always owe money. You always pay using a disposable credit card. Thus you have to get a "new" credit card for each payment you make to the outside world. You also have to be very careful never to do a transaction that has both your money account number and the credit card number. This last one is very difficult to do in the USA.


re:
http://www.garlic.com/~lynn/2007h.html#22 sizeof() was: The Perfect Computer - 36 bits?

so after having done the consulting with the small client/server startup that had this technology called SSL and wanted to handle payment transactions on their server
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

we spent some time in the x9a10 financial standard working group coming up with the x9.59 standard
http://www.garlic.com/~lynn/x959.html#x959

in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. as part of this we had to do detailed end-to-end vulnerability and threat analysis of a variety of different operational environments ... and perfect a solution that had countermeasures for all identified vulnerabilities and threats. By comparison, the other activities (from the period) tended to come up with simple "point" countermeasures ... frequently leaving lots of vulnerabilities unaddressed (and/or even creating new vulnerabilities).

a different, partial, interim solution that has been tried numerous times by different institutions over the past decade or so ... has been one-time account numbers ... i.e. the consumer is provided with a list of different account numbers ... that they can use with their account. each account number is only good for one use ... and as they use an account number, the consumer crosses it off the list. this places a lot of the burden on the consumer ... but is a countermeasure to the current skimming/eavesdropping/harvesting vulnerabilities
http://www.garlic.com/~lynn/subintegrity.html#harvest

that then use the harvested information in replay attacks for fraudulent transactions.

misc. posts mentioning one-time (use) account numbers
http://www.garlic.com/~lynn/aadsm17.htm#42 Article on passwords in Wired News
http://www.garlic.com/~lynn/2003n.html#0 public key vs passwd authentication?
http://www.garlic.com/~lynn/2007c.html#6 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007c.html#15 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007g.html#19 T.J. Maxx data theft worse than first reported

sizeof() was: The Perfect Computer - 36 bits?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 10:30:36 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
so after having done the consulting with the small client/server startup that had this technology called SSL and wanted to handle payment transactions on their server
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

we spent some time in the x9a10 financial standard working group coming up with the x9.59 standard
http://www.garlic.com/~lynn/x959.html#x959

in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for all retail payments. as part of this we had to do detailed end-to-end vulnerability and threat analysis of a variety of different operational environments ... and perfect a solution that had countermeasures for all identified vulnerabilities and threats. By comparison, the other (competing) activities tended to be simple "point" countermeasures ... frequently leaving lots of vulnerabilities unaddressed (and/or even creating new vulnerabilities).


re:
http://www.garlic.com/~lynn/2007h.html#36 sizeof() was: The Perfect Computer - 36 bits?

part of what contributed to taking so long to get the x9.59 standard passed was frequently encountering a kneejerk response from some of those involved who appeared to be indoctrinated that digital certificates were equivalent to security.

as part of the stuff currently commonly referred to as electronic commerce ... we had to work out the full end-to-end operation of applying digital certificates to normal business processes ... including all the backroom stuff where all the real work is done. it slowly began to dawn on us that digital certificates were really targeted at the situation involving two parties with absolutely no prior relationship (and/or no recourse for directly contacting a trusted 3rd party with regard to the other party).

in the internet retail payment scenario ... it turns out that the majority of transactions between consumer and merchant are repeat business .... the "digital certificate" design point possibly only applies to maybe 5-15 percent of actual transactions. furthermore, even in the situation where the consumer and merchant are strangers ... for decades the business process has actually had a timely transaction going between the merchant and the consumer's financial institution ... where there is a prior relationship.

as more and more end-to-end business processes were worked out in detail, it became more and more apparent that digital certificates were redundant and superfluous ... except in the case involving two complete strangers ... where neither party has timely access to a 3rd party.

as a result, getting thru the x9.59 standards process would frequently require taking participants thru the actual end-to-end business processes, along with all the implications and operations (and various things that we had learned having done the work on the details of the existing implementation)

recent posts on the subject:
http://www.garlic.com/~lynn/2007h.html#28 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#31 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#32 sizeof() was: The Perfect Computer - 36 bits?

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 12:14:22 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
This was after she had co-authored peer-to-peer networking architecture ... that was done in the same timeframe and somewhat competitive with SNA. One of the issues with peer-to-peer networking architecture was that it basically was a processor to processor infrastructure ... which saw relative little commercial customer market at the time ... while SNA was basically oriented around controlling large number of terminals ... and there were huge number of commercial accounts with large terminal (or other devices like ATM machines) populations (i.e. tens of thousands or more, not infrequent).

re:
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?

as i've mentioned before ... the SNA crowd seemed to constantly view non-SNA networking operation as competitive with what they were doing ... even when it was peer-to-peer computer networking ... while they were focused on controlling large numbers of dumb terminals/devices.

Now, in the early part of this period ... there was an enormous commercial requirement for dumb terminal/device operation ... most of the things out there had little or no local (microprocessor) smarts at the time ... so there was a huge commercial market for such dumb terminal/device support ... which SNA filled.

By comparison ... in that period there was significantly less commercial requirement for direct processor-to-processor operation ... but the SNA forces would still view such offerings as competitive with what they were doing (at least from a corporate politics standpoint ... even if there was no overlap technically).

part of the reason that my wife didn't last long in POK (in charge of loosely-coupled architecture) ... was ongoing battles with SNA. I've mentioned before that the battle was (temporarily) resolved by allowing her to define processor-to-processor peer-to-peer operation ... as long as it was confined to a datacenter machine room boundary walls ... if it crossed the boundary walls of the datacenter machine room ... then it became the responsibility of SNA. past posts/references
http://www.garlic.com/~lynn/2005r.html#8 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#9 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#14 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#16 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005u.html#23 Channel Distances
http://www.garlic.com/~lynn/2006o.html#38 hardware virtualization slower than software?
http://www.garlic.com/~lynn/2006t.html#36 The Future of CPUs: What's After Multi-Core?

Part of the later transition was when everything started to acquire its own microprocessing smarts and could support full peer-to-peer operation.

i've often commented that the internal network was larger than the arpanet/internet from just about the beginning until possibly mid-85
http://www.garlic.com/~lynn/subnetwork.html#internalnet

and one of the reasons that the size of the internet exceeded the size of the internal network was the proliferation of workstations and personal computers as network nodes ... and it was in this period that the SNA forces were in the middle of the battle to maintain distributed personal computer operation via the dumb terminal emulation paradigm
http://www.garlic.com/~lynn/subnetwork.html#emulation

while dumb terminal emulation paradigm contributed to the early uptake of personal computers ... i.e. a business that already had (financial) justification for tens (or hundreds) of thousands of (dumb) 3270 terminals ... could get a PC for about the same price as a 3270 terminal ... and in a single desktop footprint have it serve the function of both a 3270 terminal as well as provide some local processing (i.e. origin of the term desktop computing).

It was the increasing capability of such desktop computing where it could move into full peer-to-peer network (2tier ... and then the 3tier, middle ware/layer that we were out pitching) ... and out of the strictly dumb terminal operation that contributed to a lot of the heartburn in the SNA crowd.

The internal network technology was somewhat a thorn in their side ... but as long as pushing it as commercial product was kept under control ... it was viewed more of an irritant than a real "threat". Even our early involvement in the early NSF networking stuff was viewed much more as an irritant than a real threat ... various past posts
http://www.garlic.com/~lynn/subnetwork.html#nsfnet
and old email
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

it wasn't until the NSFNET backbone (i've claimed is the real operational precursor to the modern internet) was going to become highly visible that they took steps to preclude our doing the implementation ... old email about large organizational meeting (which got canceled and not allowed to happen)
http://www.garlic.com/~lynn/2005d.html#email860501
http://www.garlic.com/~lynn/2006u.html#email860505
and this post
http://www.garlic.com/~lynn/2006u.html#56 Ranking of non-IBM mainframe builders?

and then, with us taken out of the way ... it was (theoretically) possible for the SNA proponents to move in and supply (SNA) solutions for NSFNET ... old email
http://www.garlic.com/~lynn/2006w.html#email870109
in this post
http://www.garlic.com/~lynn/2006w.html#21 SNA/VTAM for NSFNET

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 13:32:33 -0600
Frank McCoy <mccoyf@millcomm.com> writes:
There really was NO WAY to run SNA on a "terminal" without having considerable computing power in the processor handling the task. I must say though, that Data-100's "terminal", when properly programmed, and because of its rather formidable I/O processing machinery, could pretty much outperform any other "terminal" on the market that I know of, including most-especially those supplied by IBM.

re:
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?

one of the things we ran into in the transition from 3272/3277 controller/terminal to 3274/327x controller/terminal ... was that they moved some amount of the ("simple") electronics from the terminal head back into the controller. Some of the electronic hacks that we had done to the 3277 terminal (to make it somewhat more human friendly) were no longer possible with the 3274/327x combo ... this was separate from the issue that moving electronics back into the 3274 controller (besides appreciably reducing manufacturing costs) increased the latency/processing time of operations (when we complained about all this ... the business response was that the terminal market was primarily data entry ... and not interactive computing ... possibly 3-4 orders of magnitude more terminals sold for data entry than for interactive computing).

it was the topaz/3101 glass teletype coming out of japan where component and manufacturing costs had come down enuf (providing corresponding profit margin) that you saw local microprocessing and the possibility of local programmability ... misc. past posts about getting an early topaz/3101 and looking at burning new PROMs
http://www.garlic.com/~lynn/2006y.html#0 Why so little parallelism?
http://www.garlic.com/~lynn/2007e.html#15 The Genealogy of the IBM PC

I've mentioned before the SNA joke ... not being a system, not being a network and not being an architecture. The first glimmer of something approaching networking support was APPN (mid-80s, AWP164). At the time, the person responsible for APPN and I reported to the same executive. We used to rib him to stop wasting time trying to craft networking into SNA and to work on real (internet) networking instead ... since they were never going to appreciate what he was doing. In fact, when it came time to announce APPN, the SNA organization even non-concurred ... and it took 6-8 weeks of escalation to be able to get the APPN product announcement out the door (which included carefully rewording the announcement "blue letter" to avoid implying that there was any relationship at all between APPN and SNA). Other recent mention of AWP164 (as well as AWP39)
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?
http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?

note that another aspect of SNA ... is that it can be viewed as still trying to achieve the objectives set forth in (failed/canceled) future system project
http://www.garlic.com/~lynn/submain.html#futuresys

a quote about future system objectives here:
http://www.garlic.com/~lynn/2000f.html#16 [OT] FS - IBM Future System

with the pu4/pu5 (3705/vtam) interface.

the future system objectives, in turn, trace to the clone controller business ... and an article blaming (at least in part) four of us .... for having done a clone telecommunication controller as undergraduates in the 60s (started out using an interdata/3 with our own channel adapter board, programmed to emulate a 2702/2703)
http://www.garlic.com/~lynn/subtopic.html#360pcm

misc. past posts mentioning 3272/3274 controller issues
http://www.garlic.com/~lynn/94.html#23 CP spooling & programming technology
http://www.garlic.com/~lynn/99.html#193 Back to the original mainframe model?
http://www.garlic.com/~lynn/2000c.html#63 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#66 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2000c.html#67 Does the word "mainframe" still have a meaning?
http://www.garlic.com/~lynn/2001k.html#30 3270 protocol
http://www.garlic.com/~lynn/2001m.html#19 3270 protocol
http://www.garlic.com/~lynn/2002k.html#2 IBM 327x terminals and controllers (was Re: Itanium2 power
http://www.garlic.com/~lynn/2002k.html#6 IBM 327x terminals and controllers (was Re: Itanium2 power
http://www.garlic.com/~lynn/2003c.html#69 OT: One for the historians - 360/91
http://www.garlic.com/~lynn/2003e.html#43 IBM 3174
http://www.garlic.com/~lynn/2003h.html#15 Mainframe Tape Drive Usage Metrics
http://www.garlic.com/~lynn/2003i.html#30 A Dark Day
http://www.garlic.com/~lynn/2003k.html#20 What is timesharing, anyway?
http://www.garlic.com/~lynn/2003o.html#14 When nerds were nerds
http://www.garlic.com/~lynn/2005r.html#12 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005r.html#28 Intel strikes back with a parallel x86 design
http://www.garlic.com/~lynn/2005u.html#22 Channel Distances
http://www.garlic.com/~lynn/2006q.html#10 what's the difference between LF(Line Fee) and NL (New line) ?
http://www.garlic.com/~lynn/2006s.html#42 Ranking of non-IBM mainframe builders?

misc. past posts mentioning APPN
http://www.garlic.com/~lynn/2000.html#51 APPC vs TCP/IP
http://www.garlic.com/~lynn/2000.html#53 APPC vs TCP/IP
http://www.garlic.com/~lynn/2000b.html#89 "Database" term ok for plain files?
http://www.garlic.com/~lynn/2000c.html#54 WHAT IS A MAINFRAME???
http://www.garlic.com/~lynn/2001i.html#31 3745 and SNI
http://www.garlic.com/~lynn/2002.html#28 Buffer overflow
http://www.garlic.com/~lynn/2002b.html#54 Computer Naming Conventions
http://www.garlic.com/~lynn/2002c.html#43 Beginning of the end for SNA?
http://www.garlic.com/~lynn/2002g.html#48 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#12 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002h.html#48 Why did OSI fail compared with TCP-IP?
http://www.garlic.com/~lynn/2002k.html#20 Vnet : Unbelievable
http://www.garlic.com/~lynn/2003d.html#49 unix
http://www.garlic.com/~lynn/2003h.html#9 Why did TCP become popular ?
http://www.garlic.com/~lynn/2003o.html#55 History of Computer Network Industry
http://www.garlic.com/~lynn/2003p.html#2 History of Computer Network Industry
http://www.garlic.com/~lynn/2003p.html#39 Mainframe Emulation Solutions
http://www.garlic.com/~lynn/2004g.html#12 network history
http://www.garlic.com/~lynn/2004p.html#31 IBM 3705 and UC.5
http://www.garlic.com/~lynn/2005p.html#8 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#9 EBCDIC to 6-bit and back
http://www.garlic.com/~lynn/2005p.html#15 DUMP Datasets and SMS
http://www.garlic.com/~lynn/2005q.html#20 Ethernet, Aloha and CSMA/CD -
http://www.garlic.com/~lynn/2006h.html#52 Need Help defining an AS400 with an IP address to the mainframe
http://www.garlic.com/~lynn/2006k.html#21 Sending CONSOLE/SYSLOG To Off-Mainframe Server
http://www.garlic.com/~lynn/2006l.html#45 Mainframe Linux Mythbusting (Was: Using Java in batch on z/OS?)
http://www.garlic.com/~lynn/2007b.html#48 6400 impact printer
http://www.garlic.com/~lynn/2007b.html#49 6400 impact printer
http://www.garlic.com/~lynn/2007d.html#55 Is computer history taugh now?

sizeof() was: The Perfect Computer - 36 bits?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 15:33:58 -0600
Frank McCoy <mccoyf@millcomm.com> writes:
Yep ... mainly a protocol for passing data to remote terminals. Try using SNA to pass data from one node to another without going through the mainframe.


http://www.garlic.com/~lynn/2007h.html#35 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#39 sizeof() was: The Perfect Computer - 36 bits?

i got involved with trying to put out as a product some work that one of the RBOCs had implemented on series/1 .... that emulated pu4/3705 (with a channel attach card) and pu5/sscp/vtam cross-domain ... with a peer-to-peer network implementation ... and would tell/fake the pu5/vtam that the "resource" was owned by some other mainframe (when in fact, the "resource" was "owned" by the network).

it was actually a two-phase business plan ... putting out the initial implementation on series/1 ... while in parallel porting the implementation to rios (used for rs/6000).

this resulted in all sorts of corporate turmoil with the SNA organization ... which would have included impacting numerous parts of their revenue sources.
http://www.garlic.com/~lynn/99.html#63 System/1 ?
http://www.garlic.com/~lynn/99.html#66 System/1 ?
http://www.garlic.com/~lynn/99.html#67 System/1 ?
http://www.garlic.com/~lynn/99.html#70 Series/1 as NCP (was: Re: System/1 ?)

in the above posts are portions of a presentation that i gave at a quarterly SNA architecture review board (SNA/ARB) ... which was like rubbing their nose in it (after my talk, the executive running the ARB wanted to know who had approved my giving a talk to them ... implying a desire to head off any sort of repeat performance in the future).

... the possibility that we might do the NSFNET backbone w/o any SNA content wasn't the only thing giving them heartburn
http://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?

Fast and Safe C Strings: User friendly C macros to Declare and use C Strings

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings.
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Tue, 17 Apr 2007 16:44:05 -0600
shmuel+ibm-main@ibm-main.lst (Shmuel Metz , Seymour J.) writes:
The use of trailing zero as a delimiter, like the confusion between arrays and pointers, is omnipresent in the C world. I see no practical way of getting rid of it without ditching most of the software written in C. The best that you could do would be to introduce a new data type and hope that people started using it, and I doubt that even that will happen.

>So, total revamp,

Switch to PL/I?


the original mainframe tcp/ip support had been done in vs/pascal ... and had none of the buffer overflow issues typically associated with a lot of c-language based networking implementations ... recent reference:
http://www.garlic.com/~lynn/2007h.html#8 whiny question: Why won't z/OS support the HMC 3270 emulator

lots of past posts mentioning various buffer overflow related problems with c language implementations
http://www.garlic.com/~lynn/subintegrity.html#overflow

the pascal language implementation had originally been done by two people at the los gatos vlsi lab ... as part of a lot of work on tools supporting chip design. the compiler was eventually released as a product ... first as an IUP and then as a program product. the implementation was eventually also ported from the mainframe to (workstation) aix.

much later, as part of corporate strategy moving to (COTS) off-the-shelf tools ... some number of the tools/applications were ported to other vendor workstations and then turned over to external (chip tool) vendor.

in this exercise i was given the opportunity to port one such 60k line (vs/pascal based) application to another workstation platform. Unfortunately, the pascal implementation for that platform appeared never to have moved past the stage of being used for student educational purposes ... plus they had outsourced the implementation to an organization on the opposite side of the globe (which really complicated resolving compiler and runtime issues).

total topic drift in this indirect reference using such tool skills for redoing airline res ROUTES application
http://www.garlic.com/~lynn/2007g.html#22 Bidirectional Binary Self-Joins
http://www.garlic.com/~lynn/2007g.html#41 US Airways badmouths legacy system

for totally other folklore ... one of the two original people responsible for pascal at the los gatos vlsi lab shows up later as vp of software development at MIPS and then (later still) general manager of the business unit that has responsibility for the original JAVA product.

Experts: Education key to U.S. competitiveness

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Experts: Education key to U.S. competitiveness
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 21:45:54 -0600
Experts: Education key to U.S. competitiveness
http://news.zdnet.com/2100-9595_22-6176967.html


from above:
Innovation and U.S. competitiveness will suffer if kids don't get a better education, a panel of experts said Tuesday. In particular, science, technology, engineering and math education in kindergarten through 12th grade needs a boost, according to panelists speaking at an event here that's part of a National Governors Association initiative. K-through-12 education has traditionally been a focus of governors because much of a state's budget is spent there.

... snip ...

the above has probably been almost continuously repeated for the last 50yrs?

recent thread:
http://www.garlic.com/~lynn/2007g.html#6 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#7 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#34 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#35 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#52 U.S. Cedes Top Spot in Global IT Competitiveness
http://www.garlic.com/~lynn/2007g.html#68 U.S. Cedes Top Spot in Global IT Competitiveness

======

and some similar threads from last year:
http://www.garlic.com/~lynn/2006l.html#61 DEC's Hudson fab
http://www.garlic.com/~lynn/2006l.html#63 DEC's Hudson fab
http://www.garlic.com/~lynn/2006p.html#21 SAT Reading and Math Scores Show Decline
http://www.garlic.com/~lynn/2006p.html#23 SAT Reading and Math Scores Show Decline
http://www.garlic.com/~lynn/2006p.html#24 SAT Reading and Math Scores Show Decline
http://www.garlic.com/~lynn/2006p.html#25 SAT Reading and Math Scores Show Decline
http://www.garlic.com/~lynn/2006p.html#33 SAT Reading and Math Scores Show Decline
http://www.garlic.com/~lynn/2006q.html#6 SAT Reading and Math Scores Show Decline

sizeof() was: The Perfect Computer - 36 bits?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Tue, 17 Apr 2007 22:18:27 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
different institutions over the past decade or so ... has been one-time account numbers ... i.e. the consumer is provided with a list of different account numbers ... that they can use with their account. each account number is only good for one use ... and as they use an account number, the consumer crosses it off the list. this places a lot of the burden on the consumer ... but is a countermeasure to the current skimming/evesdropping/harvesting vulnerabilities
http://www.garlic.com/~lynn/subintegrity.html#harvest


re:
http://www.garlic.com/~lynn/2007h.html#36 sizeof() was: The Perfect Computer - 36 bits?

and recent item with another kind of one-time transaction code
The Clearing House Prepares for Consumer Use of Payment Codes
http://www.digitaltransactions.net/newsstory.cfm?newsid=1314


from above:
Electronic transactions using unique numerical identifiers to mask account and routing data are rising fast, and now the company behind the technology expects it will be commercially available for consumer payments in about a year. The Clearing House Payments Co. LLC says corporate users made 80,459 transactions in 2006 using its 5-year-old Universal Payment Identification Code, up from 9,696 in 2005. UPIC payments totaled $4.7 billion, up from $571 million.

... snip ...

sizeof() was: The Perfect Computer - 36 bits?

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Wed, 18 Apr 2007 10:14:00 -0600
Andrew Swallow <am.swallow@btopenworld.com> writes:
Variable length fields are easy and cheap in software - you simply start the field with an 8 bit or 16 bit size count.

Subroutines to copy and compare variable length fields are simple to write and can be called from all over the program. The fields have to be handled serially but that is not a problem, computer instruction sets only allow the programmer to handle one variable at a time.

Fixed length fields are not fixed, they are simply modified by redesigning the program. Recompiling simply automates some of the changes.


variable length fields are much more expensive for hardware processing ... one of the performance improvements for risc was fixed-length instructions ... random drift ... lots of past 801, risc, romp, rios, power, fort knox, somerset, power/pc, etc posts
http://www.garlic.com/~lynn/subtopic.html#801

high-speed networking in the 70s and 80s ... was constantly running into processing bottlenecks in hosts ... and looking at moving the processing outboard into external boxes and hardware. fixed-length fields would simplify processing in outboard boxes. also for the 70s and into the 80s ... memory was expensive ... so if you had offloaded processing in outboard boxes ... it was an advantage to be able to pipeline processing of the data moving thru ... w/o needing a lot of local, intermediate storage to stage information.

simple examples are the purely hardware implementations like ethernet and ATM (not the cash machine atm).

in the late 80s, we were on the XTP technical advisory board ... looking at fielding a high-speed protocol ... and overcoming a lot of the processing inefficiencies related to ip and tcp ... target paradigm was pipelined offload hardware processing.

misc. past posts about XTP ... and HSP ... trying to get high-speed protocol work item in x3s3.3 (ISO chartered us network standards body)
http://www.garlic.com/~lynn/subnetwork.html#xtphsp

as i've commented before ... one of the issues in x3s3.3 was that ISO had a requirement that no standards work be done on stuff that violated OSI. XTP/HSP violated OSI in several areas (and therefore was rejected)

1) XTP/HSP supported the LAN/MAC interface. LAN/MAC sits in the middle of the OSI network layer (rather than at an interface/boundary) ... LAN/MAC violated OSI ... and therefore anything that supported LAN/MAC violated OSI

2) XTP/HSP went directly from transport to LAN/MAC ... bypassing transport/network interface. bypassing transport/network interface violated OSI.

3) XTP/HSP supported internetworking. internetworking (IP) sits between the bottom of transport and the top of network ... which doesn't exist in OSI. support for internetworking (that doesn't exist in OSI) violated OSI.

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Wed, 18 Apr 2007 10:35:18 -0600
Frank McCoy <mccoyf@millcomm.com> writes:
Um ... The 8088 didn't come out until considerably later than the 8086. As I recall, there was even a second generation (before 80286) that came out before the 8088. My mind goes blank as to the designation though.

Old-Timer's disease; that's what it is.

Does somebody have a timeline for 8086 variants?


are you referring to 80186? ... there were some number of 186 PCs that shipped ... but they didn't last long ... the 286s pretty well trounced them.

a passing past reference to 186 (post is mainly misc. old email from '82)
http://www.garlic.com/~lynn/2006p.html#15 "25th Anniversary of the Personal Computer"

somebody in the thread last yr must have had a URL reference to detailed timeline.

of course, wiki can be your friend
http://en.wikipedia.org/wiki/Intel_8086
http://en.wikipedia.org/wiki/Intel_8088
http://en.wikipedia.org/wiki/80188
http://en.wikipedia.org/wiki/80186
http://en.wikipedia.org/wiki/80286

which has pointers to things like:
http://www.intel.com/design/intarch/intel186/
http://en.wikipedia.org/wiki/NEC_V20

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Wed, 18 Apr 2007 10:59:21 -0600
Frank McCoy <mccoyf@millcomm.com> writes:
Like you say though: The 80286 with its expanded memory, instructions, and greater speed pretty much killed the 8086/8088 types. They hung on for a long time though; simply because of the IBM PC needing to still be supported in software.

re:
http://www.garlic.com/~lynn/2007h.html#45 ANN: Microsoft goes Open Source

they hung in as somewhat embedded chips in other form factors. the point-of-sale cardswipe terminals for a couple decades were "PC/XT" ... radically different form factor and solid state memory in lieu of hard disk.

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Wed, 18 Apr 2007 12:32:10 -0600
krw <krw@att.bizzzz> writes:
They hung on because the '286 was expensive. The '286 hung on because, in turn, the '386 was expensive. Backwards compatibility issues forced the PC down a common path.

late summer '88 when the pacific rim clone makers started really bulking up on 286 clones ... anticipating the holiday shopping frenzy ... and then the price for the 386sx dropped to about the same as the 286 ... and the bottom fell out of the 286 market ... and there were some great fire sales (to try and clear the inventory)

some past posts with references
http://www.garlic.com/~lynn/2003g.html#61 IBM zSeries in HPC
http://www.garlic.com/~lynn/2004b.html#1 The BASIC Variations
http://www.garlic.com/~lynn/2005q.html#33 Intel strikes back with a parallel x86 design

let's see if I have some old posts with prices of the era (although i think these are a yr or two later)
http://www.garlic.com/~lynn/2001n.html#79 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2001n.html#81 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)
http://www.garlic.com/~lynn/2001n.html#82 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

and oblique recent reference
http://www.garlic.com/~lynn/2007g.html#81 IBM to the PCM market
http://www.garlic.com/~lynn/2007h.html#0 The Perfect Computer - 36 bits?

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Wed, 18 Apr 2007 12:57:53 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
re:
http://www.garlic.com/~lynn/2007e.html#29 Securing financial transactions a high priority for 2007

data aggregation which at the same time can show both reduction and sharp rise


re:
http://www.garlic.com/~lynn/2007e.html#62 Securing financial transactions a high priority for 2007

and more recent article from yesterday:
Banks must come clean on ID theft
http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2007/04/17/EDGEBOS87H1.DTL


from above:
Two separate studies recently reached conflicting conclusions: While one found that identity theft is on the rise significantly, the other reported that it is on the decline.

So which is it?


... snip ...

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Wed, 18 Apr 2007 19:59:37 -0600
krw <krw@att.bizzzz> writes:
Very few had the need for the FPU. It wasn't until Quake that the masses really cared much.

<snip stuff not referenced>


re:
http://www.garlic.com/~lynn/2007h.html#45 ANN: Microsoft goes Open Source
http://www.garlic.com/~lynn/2007h.html#46 ANN: Microsoft goes Open Source
http://www.garlic.com/~lynn/2007h.html#47 ANN: Microsoft goes Open Source

several of the snips have references to articles comparing the various processors, announce dates, transistor counts, etc ... and other articles from the 80s

this specific one has summary of pieces from a longer article (in addition to 86 stuff ... also 68k stuff)
http://www.garlic.com/~lynn/2001n.html#80 a.f.c history checkup... (was What specifications will the standard year 2001 PC have?)

a little use of a search engine turns up this slightly more detailed (& more recent ... 2006 instead of 1989) article

Microprocessor Types and Specifications
http://www.quepublishing.com/articles/article.asp?p=481859&seqNum=14&rl=1

the description of the 386sx gives it somewhat the same relationship to the 386dx as the 8088 had to the 8086 ... internally it was a 386dx ... but the external interfaces/buses were compatible with the 286 (allowing the 386sx to be put into systems that had been designed for the 286).

and the above reference for 486 chapter
http://www.quepublishing.com/articles/article.asp?p=481859&seqNum=15&rl=1

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Wed, 18 Apr 2007 23:28:59 -0600
Joe Pfeiffer <pfeiffer@cs.nmsu.edu> writes:
Huh? How about a 32 bit word size for both arithmetic and addresses? Made it reasonable to use integers that were big enough to avoid cramps and segments big enough to pretend it was a flat address space?

The original 386 didn't have an onboard FPU anyway -- that took a 387. The big crippling of the SX was an external data bus. SX:DX :: 8088:8086

Was it even *possible* to use an 8087 with a 386? 8087 went with 8086; 387 went with 386.


re:
http://www.garlic.com/~lynn/2007h.html#45 ANN: Microsoft goes Open Source
http://www.garlic.com/~lynn/2007h.html#46 ANN: Microsoft goes Open Source
http://www.garlic.com/~lynn/2007h.html#47 ANN: Microsoft goes Open Source
http://www.garlic.com/~lynn/2007h.html#49 ANN: Microsoft goes Open Source

one of the refs in previous post
http://www.quepublishing.com/articles/article.asp?p=481859&seqNum=14&rl=1

discusses 80387 coprocessor ... and mentions that 80287 coprocessor was merely 8087 with different pins ... and ... "because intel lagged in developing 387 coprocessor, some early 386 systems were designed with socket for 287" (aka 8087 with different pins)

Securing financial transactions a high priority for 2007

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Securing financial transactions a high priority for 2007
Newsgroups: alt.folklore.computers
Date: Thu, 19 Apr 2007 07:45:20 -0600
jmfbahciv writes:
Both. My guess without reading them is each looked at a different statistic and then proceeded to bias the report.

re:
http://www.garlic.com/~lynn/2007h.html#48 Securing financial transactions a high priority for 2007
and
http://www.garlic.com/~lynn/aadsm26.htm#58 Our security sucks, Why can't we change? What's wrong with us?

a blog here commenting on the article (combined with comments in other ongoing threads):

On cleaning up the security mess: escaping the self-perpetuating trap of Fraud?
https://financialcryptography.com/mt/archives/000895.html

and long-winded collection of posts on the subject of Naked Transaction Metaphor
http://www.garlic.com/~lynn/subintegrity.html#payments

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Thu, 19 Apr 2007 09:38:47 -0600
jmfbahciv writes:
A mainframe user would expect the tape to be processed at the maximum speed of the tape drive. VMS on a VAX could not do that. It was almost faster (from a user POV) to keypunch cards.

my first (student) programming job was porting 1401 MPIO to 360/30 ... i.e. unit-record <-> tape front end for 709.

the 360/30 could run in 1401 hardware emulation mode ... but there was desire to get experience transitioning to 360.

original MPIO did either card->tape or tape->printer/punch.

i got to design my own monitor, interrupt handlers, device drivers, storage manager, etc.

i got my implementation to the point that it could handle two parallel tape streams ... one doing card->tape and the other tape->printer/punch ... and allocate available memory for I/O buffers. tape->printer/punch would spin at full tape speed until the buffers filled ... and then slow down to printer speed (1403N1, 1100 lines/minute). of course these were only 7track, 200bpi tapes ... as supported by the 709.

4341 and vax/780 were relatively comparable in processing power ... and the 360/30 was somewhere around 1-5 percent that of a 4341.

although the table here
http://www.jcmit.com/cpu-performance.htm

uses 780 performance as "1" (VUP?) ... and lists the 4341 as .6 and the 360/30 as 0.01165 (just slightly over one percent the processing power of a 780)

John W. Backus, 82, Fortran developer, dies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Thu, 19 Apr 2007 10:37:54 -0600
Charlton Wilbur <cwilbur@chromatico.net> writes:
Exactly! It's a cost-benefit analysis. Complex troubleshooting and repair are worth it if the system going down inconveniences multiple people, or going down has a high financial cost; they're not worth it if you don't lose anything by rebooting.

when we were doing ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

we ran into (at least one) operation ... located in a large skyscraper in a metropolitan area ... in each overnight operation, it cleared more money than a year's lease on the whole skyscraper plus the aggregate of a year's compensation for everybody that worked in the building.

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Thu, 19 Apr 2007 11:24:24 -0600
Andrew Swallow <am.swallow@btopenworld.com> writes:
True. Real mainframe OS are batch only.

lots of people considered cp67 and later vm370 real mainframe OSs ... and they were used for interactive timesharing offerings ... including several commercial timesharing services
http://www.garlic.com/~lynn/submain.html#timeshare

as i've referenced in the past ... cp67/vm370 wouldn't have been considered less or more "real" than the os/360 varieties ... it was just in the period that the customers installed an order of magnitude or more of the batch variety.

part of the heritage of the commercial batch infrastructure was that a large amount of the commercial dataprocessing workload was on behalf of the business ... not doing tasks on behalf of an individual. this was the nuts and bolts of running a commercial enterprise ... not related to any individual ... but on behalf of the business. things like recurring payroll being run w/o regard to any individual or group of individuals.

some of the webservers have been slowly growing (up) into such environments ... i.e. the webserver runs 24x7 ... whether or not people are present. the web server application is somewhat analogous to the "batch" applications supporting large numbers of ATMs (as in teller, not transfer) that have been around since at least the 70s.

once computing price/performance dropped below some threshold ... it could find more and more use in less profitable areas ... like email.

for other drift, past posts about rewriting i/o supervisor in the late 70s, so it could be used in disk engineering/development and product test labs (since the standard "batch" mvs operating system had MTBF of 15 minutes in that environment)
http://www.garlic.com/~lynn/subtopic.html#disk

recent post with references to cp67 & vm370 being just as real as os/360 varieties ... it was just that up into the early 80s ... that business oriented (as opposed to individual/personal) applications dominated the market place i.e. customer batch installations far exceeded customer cp67/vm370 installations, which far exceeded internal cp67/vm370 installation, which far exceeded the max. number of internal cp67/vm370 that I directly supported at one point, which was about the same number of the total multics installations in its whole lifetime
http://www.garlic.com/~lynn/2007g.html#75 The Perfect Computer - 36 bits?

this is somewhat analogous to recent post discussing various aspects of the transition from desktop terminals to desktop computing
http://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Thu, 19 Apr 2007 16:09:02 -0600
l_cole writes:
Just out of curiosity, Mr. Wheeler, did you laugh or groan when you read Mr. Swallow's post?

I didn't do either, but it's possible that my butt cheeks may have gripped my seat cushion a little tighter.


somewhat smiled ... gave me opportunity for some typing entertainment while working on this post
http://www.garlic.com/~lynn/aadsm26.htm#59 On cleaning up the security mess: escaping the self-perpetuating trap of Fraud?

which somewhat precipitated this new item

We pluck the lemons; you get the plums: the Lemon Maligned, in Wikipedia as in the security literature
https://financialcryptography.com/mt/archives/000896.html

which makes reference to this nobel prize work

Markets with Asymmetric Information
http://nobelprize.org/nobel_prizes/economics/laureates/2001/public.html

which then allows for topic drift and this hot-off-the-press URL

'Freakonomics' writer talks monkey business
http://news.com.com/Freakonomics+writer+talks+monkey+business/2100-1026_3-6177655.html

T.J. Maxx data theft worse than first reported

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: T.J. Maxx data theft worse than first reported
Newsgroups: bit.listserv.ibm-main
Date: Thu, 19 Apr 2007 17:25:22 -0600
Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
The first paragraph that I posted (above) makes it sound like it might have been a man-in-the-middle attack (which can be done to/with z/OS as Stu Henderson's SHARE presentation in Tampa demonstrated, per the proceedings that I read earlier today). The second paragraph supports Ed's assertion that it was on a POS (in-store Point Of Sale) system attack.

some merchants have each POS terminal doing the modem 1-800 dialup ... however, larger merchants will tend to have a store concentrator (all POS terminals going to the store concentrator, which then goes into the financial network) ... and numerous larger merchants will have a single POS concentrator ... where all transactions for the merchant go thru.

one of the scenarios where this would result in problems is where the merchant had an online webstore as well as lots of brick & mortar. software in typical e-commerce will usually emulate the transaction of a traditional POS terminal ... and the merchant would drive all their transactions thru their single concentrator.

at issue is that the interchange fee tends to be quite a bit different for webservers ... and much of the fee determination/billing is driven off merchant and/or location code. having everything coming in thru a single interface has resulted in situations where the web transactions were obfuscated.

old post about security proportional to risk
http://www.garlic.com/~lynn/2001h.html#61

and related observation that attackers may be able to outspend defender by as much as 100:1
http://www.garlic.com/~lynn/2007e.html#26 Securing financial transactions a high priority for 2007
http://www.garlic.com/~lynn/2007g.html#20 T.J. Maxx data theft worse than first reported

for a little topic drift ... past posts discussing the naked transaction metaphor
http://www.garlic.com/~lynn/subintegrity.html#payments

lots of past posts on eavesdropping, skimming, harvesting, etc that can be used for replay attacks
http://www.garlic.com/~lynn/subintegrity.html#harvest

and numerous posts discussing man-in-the-middle attacks (as opposed to simple eavesdropping and replay attacks)
http://www.garlic.com/~lynn/subintegrity.html#mitm

and posts on general subject of fraud, vulnerabilities, threats, exploits and risks
http://www.garlic.com/~lynn/subintegrity.html#fraud

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Thu, 19 Apr 2007 21:35:42 -0600
Morten Reistad <first@last.name> writes:
The friday after the week of May 17th, i.e. May 27th 1983 we went out and had a few beers, and we made some bets about when we would each have a machine as capable as a dec20 on our desks for personal use, and what it would cost.

... 27may83 ...
Date: 27 May 1983, 14:31:52 PDT
To: wheeler
Subject: ZM Project Proposal

I've finally gotten my hands on a PC for awhile, I think. I'll be taking it home with me over this three day weekend. I've dumped the ZMPT* files to a floppy (since the plant will be shutdown) and plan to work at the overhaul. How far I get will be determined by the difficulty encountered, of course. However, I've allocated about 1.5 total days to work on editing it.

How was Boyd the 2nd time around?

... snip ... top of post, old email index

the operating system rewrite started out being called VM/PT (PT for programming technology). the first cross-organization meeting was scheduled for large conference room at a east coast location ... the bldg. person responsible for scheduling the conference room ... got the "title" wrong ... so when we all arrived ... all the signs said "ZM" instead of "VM"; from then on the project was referred to as the "ZM" project.

recent post about operating system rewrite
http://www.garlic.com/~lynn/2007h.html#24 sizeof() was: The Perfect Computer - 36 bits?

and other reference was to the 2nd symposium sponsored for John Boyd ... misc. past posts
http://www.garlic.com/~lynn/subboyd.html#boyd
as well of URLs from around the web
http://www.garlic.com/~lynn/subboyd.html#boyd2

T.J. Maxx data theft worse than first reported

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: T.J. Maxx data theft worse than first reported
Newsgroups: bit.listserv.ibm-main
Date: Fri, 20 Apr 2007 08:20:38 -0600
Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
I once heard a former CIA spook say that any POS system can be hacked from a truck parked at the curb, if the price/value is right. (Speaking from a previous lifetime in marketing research.) Maybe somebody built a proof-of- concept device??? (Think: TEMPEST)

re:
http://www.garlic.com/~lynn/2007h.html#56 T.J. Maxx data theft worse than first reported

... don't think individual POS terminals sitting on the counter ... think corporate POS concentrator ... where all POS transactions for the whole corporation passes thru on the way to the financial network.

this is slightly analogous to the internet payment gateway (we periodically claim is the original SOA)

long ago, and far away, we were called in to consult with this small client/server startup that had this technology called SSL and wanted to do payment transactions on their server.
http://www.garlic.com/~lynn/aadsm5.htm#asrn2
http://www.garlic.com/~lynn/aadsm5.htm#asrn3

a "payment gateway" was developed and deployed ... lots of past posts
http://www.garlic.com/~lynn/subnetwork.html#gateway

it is somewhat analogous to a corporate POS concentrator ... but can be used by lots of different (small) webservers any place on the web (as opposed to webservers in large corporation that frequently just aggregate into a corporate POS concentrator).

as before ... there are all kinds of eavesdropping technology (some may or may not require some sort of physical operation) ... and then the harvested information is used for fraudulent transactions in various kinds of replay attacks (being able to use information harvested from previous transactions ... in new fraudulent transactions)
http://www.garlic.com/~lynn/subintegrity.html#harvest

as an aside ... it isn't too unusual to see such trucks parked all over the place around silicon valley ... they are brought in for regular audits for leaking/stray emissions. they typically don't bother to disguise external antennas

for some topic drift ... posts about trade secret litigation and some question about whether the security was proportional to the risk (i.e. had to demonstrate security procedures that were proportional to the claimed value of the stuff at risk):
http://www.garlic.com/~lynn/2001d.html#42 IBM was/is: Imitation...
http://www.garlic.com/~lynn/2002d.html#8 Security Proportional to Risk (was: IBM Mainframe at home)
http://www.garlic.com/~lynn/2003i.html#62 Wireless security
http://www.garlic.com/~lynn/2005r.html#7 DDJ Article on "Secure" Dongle
http://www.garlic.com/~lynn/2006r.html#29 Intel abandons USEnet news
http://www.garlic.com/~lynn/2007e.html#9 The Genealogy of the IBM PC
http://www.garlic.com/~lynn/2007f.html#45 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007f.html#46 The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007f.html#57 Is computer history taught now?

part of the web case ... was that the existing infrastructure is extremely vulnerable to replay attacks.

from the security acronym PAIN

P - privacy (sometimes CAIN, confidential)
A - authentication
I - integrity
N - non-repudiation

in the case of the payment gateway, SSL was used for privacy/confidentiality of the transaction transmission thru the internet ... i.e. achieving "security" with encryption as a countermeasure to eavesdropping (as part of replay attacks). However, as we've frequently noted, most of the harvesting exploits appear to happen at the end-points ... as opposed to while the transaction is actually being transmitted.

now, in the mid-90s, the x9a10 financial standard working group had been given the requirement to preserve the integrity of the financial infrastructure for ALL retail payments. the result was the x9.59 financial transaction standard
http://www.garlic.com/~lynn/x959.html#x959

in effect, the x9.59 financial standard substituted end-to-end "authentication" and "integrity" (for privacy, confidentiality, encryption) to achieve "security". providing end-to-end "authentication" and "integrity" eliminated eavesdropping as a risk or compromise ... since information from existing transactions could no longer be used for fraudulent transactions in replay attacks, i.e. x9.59 transactions aren't vulnerable to eavesdropping, skimming, or harvesting exploits ... whether "at-rest" or "in-transit".

we've claimed that the largest use of SSL has been the e-commerce stuff that we previously worked on ... as part of hiding transactions during transmission. x9.59 eliminates the requirement for hiding transactions (and therefore eliminates one of the major uses for SSL).

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Fri, 20 Apr 2007 10:10:51 -0600
greymaus writes:
Over the years, we have gone decimal, then to a really Irish currency, and now to the Euro. Every time caused a rise in prices. Some people use the different currencies as an excuse; with the rapid rise in house prices, one guy offered his house and large garden for a 'million pounds', the buyer agreed to buy, thinking he meant 'million euros', but no, he meant a 'million pounds UK' (40% extra)[1].. The local agricultural prices are largely quoted in 'punts' (Irish pounds) at selling point (Inputs are quoted in euros). To create real confusion, officially quoted prices are given 'per Kg', (which nobody really uses), and areas are sometimes quoted in 'Hectares' rather than 'statute acres'. (There was an old measure, Irish acres, 40% bigger than the statute acre, but the people who used that have largely died out).

we were on a trip to germany right after the conversion to the euro; several people commented that the shop owners left the number the same and just changed the currency from marks to euros.

for one of the trips ... we had to do walk thru of new fab in dresden (including all the bunny suit stuff) ... somewhat in support of
http://www.garlic.com/~lynn/aadssummary.htm

Fast and Safe C Strings: User friendly C macros to Declare and use C Strings

Refed: **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings.
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
Date: Fri, 20 Apr 2007 10:50:50 -0600
JimKeo <jimkeo@multi-platforms.com> writes:
I remember with great fondness working with VS/Pascal when on contract to IBM working on VM and MVS implementation of TCPIP.

Later IBM came out with a C version of much of the TCPIP suite and one of my duties was to address serious performance issues with TCPIP stack but mostly with the C FTP Server (and later C FTP Client).

At some point performance had finally been improved enough to where C FTP was competitive with the earlier VS/Pascal offering. When I suggested I could make some of the same improvements to the VS/Pascal version some folks were positively apoplectic. {smile}

They, understandably but regrettably, wanted the "old" Pascal FTP buried and replaced by the "new" C FTP and knew renewed performance issues (Pascal faster than C) would just cause grief to some.

Hmmm. It's been almost a decade but is anyone able to ascertain whether IBM FTP Server or client still has some of my assembler CSECTs with names like WRTFBA** or WRTVB** linked/bound somewhere? Just curious.


re:
http://www.garlic.com/~lynn/2007h.html#41 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings

the initial implementation would get about 44kbytes/sec while consuming most of a 3090 processor. i then added the support for rfc1044, and in some tuning tests at cray research, was getting channel speed (1mbyte/sec) between a 4341 clone and a cray machine ... using only a modest amount of the 4341 processor ... i.e. about 25 times the aggregate thruput for about 1/20th the pathlength ... about a 400-500 times difference in bytes transferred per instruction executed.
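
the quoted ratios hang together arithmetically ... a quick back-of-the-envelope check (numbers rounded the same way as in the text):

```python
# rough, order-of-magnitude arithmetic behind the quoted ratios
base_rate = 44_000          # bytes/sec, original stack on the 3090
rfc1044_rate = 1_000_000    # bytes/sec, channel speed on the 4341 clone
pathlength_ratio = 20       # rfc1044 path used ~1/20th the instructions

throughput_ratio = rfc1044_rate / base_rate   # ~22.7, i.e. "about 25x"

# higher thruput on fewer instructions compounds:
# bytes-per-instruction improves by roughly 25 x 20
bytes_per_instr_ratio = throughput_ratio * pathlength_ratio  # ~455

assert 20 < throughput_ratio < 25
assert 400 < bytes_per_instr_ratio < 500
```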

re:
http://www.garlic.com/~lynn/2007h.html#8 whiny question: Why won't z/OS support the HMC 3270 emulator

misc. posts mentioning various compromises, vulnerabilities, exploits, etc., related to the C language
http://www.garlic.com/~lynn/subintegrity.html#overflow
and misc. past posts mentioning having done rfc1044 support
http://www.garlic.com/~lynn/subnetwork.html#1044

for other topic drift ... we had an internal high-speed backbone ... part of our hsdt (high-speed data transport project)
http://www.garlic.com/~lynn/subnetwork.html#hsdt

and we were working with various organizations and NSF on applying it to NSFNET related operations ... various old email from the period
http://www.garlic.com/~lynn/lhwemail.html#nsfnet

in various posts
http://www.garlic.com/~lynn/subnetwork.html#nsfnet

Fast and Safe C Strings: User friendly C macros to Declare and use C Strings

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: Fast and Safe C Strings: User friendly C macros to Declare and use C Strings.
Newsgroups: bit.listserv.ibm-main,alt.folklore.computers
To: <ibm-main@bama.ua.edu>
Date: Fri, 20 Apr 2007 12:15:35 -0600
re:
http://www.garlic.com/~lynn/2007h.html#41 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings.
http://www.garlic.com/~lynn/2007h.html#60 Fast and Safe C Strings: User friendly C macros to Declare and use C Strings.

besides vs/pascal and a lot of chip design applications, the los gatos vlsi lab had also done the LSM ... the original name was los gatos state machine, but changed to logic simulation machine for some external publications ... it ran chip logic simulation at something like 50,000 times the speed of a software application running on a 3033. it was somewhat original in that it could take time into account (allowing it to handle asynchronous clock chips as well as digital chips with analog circuits). The later machines, like EVE (endicott verification engine), assumed chips with synchronous clocks. recent post mentioning LSM (with several LSM, YSE, and EVE references):
http://www.garlic.com/~lynn/2007f.html#73 Is computer history taught now?

one of the HSDT high-speed links
http://www.garlic.com/~lynn/subnetwork.html#hsdt

was between austin and los gatos ... and there was a fair amount of chip design traffic over the link from austin to los gatos; in fact it was claimed that the availability helped bring in the RIOS (i.e. rs/6000) chipset a year early.

The Los Gatos lab also did a high-performance experimental database in conjunction with some people from STL ... somewhat concurrent with system/r ... the original sql/relational implementation
http://www.garlic.com/~lynn/submain.html#systemr

it shared some of the characteristics of relational ... but while the system/r implementation assumed fairly regular information organization implemented in tables ... the los gatos implementation (also originally done in vs/pascal) was targeted at chip design ... both logical and physical layout ... with possibly extremely anomalous and non-uniform data (not well suited to a table structure).

i had worked on some of the system/r stuff ... recent post
http://www.garlic.com/~lynn/2007.html#1 The Elements of Programming Style
with some old email
http://www.garlic.com/~lynn/2007.html#email801006
http://www.garlic.com/~lynn/2007.html#email801016

as well as worked on some of the implementation that los gatos was doing.

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Fri, 20 Apr 2007 15:52:51 -0600
"Charlie Gibbs" <cgibbs@kltpzyxm.invalid> writes:
Some people got really silly about this, writing 6-line subroutines half of whose code was calls to the next level down. Spaghetti code indeed!

a somewhat similar problem was happening in some of the APL SEQUOIA and AIDS applications on HONE. HONE was the vm370-based online timesharing system supporting worldwide marketing, sales and field people ... with most of the applications implemented in APL. It was extremely processor intensive ... and some investigation found a lot of APL one-liners ... not only was the APL interpreted ... but sometimes the majority of processing power was going just to the APL function call processing (for these one-liners).
http://www.garlic.com/~lynn/subtopic.html#hone
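
the one-liner overhead effect isn't APL-specific ... a hypothetical sketch in python (all names invented) showing how stacking tiny interpreted function calls multiplies the per-call overhead relative to the useful work:

```python
# decomposing trivial work into one-liner functions means the interpreter
# pays full call overhead for every tiny step -- the call machinery can
# come to dominate the actual computation (the HONE/APL symptom)
calls = {"n": 0}

def bump(x):
    # a "one-liner": one trivial operation per interpreted call
    calls["n"] += 1
    return x + 1

def pipeline(data):
    # two stacked one-liner calls per element: 2 calls per 2 additions
    return [bump(bump(x)) for x in data]

result = pipeline(range(1000))
# 2000 interpreted function invocations to perform 2000 additions;
# inlining the work as (x + 2) would need zero call overhead
```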

HONE started out with a clone of the (cambridge) science center's cp67 system and cms\apl (the science center had ported apl\360 to cms).
http://www.garlic.com/~lynn/subtopic.html#545tech

HONE migrated to vm370 and apl\cms ... the apl\cms work was done at the palo alto science center, which included the apl microcode assist for the 370/145 (able to get about the same thruput out of a 145 as apl on a 168 w/o the microcode assist ... i.e. about a 10:1 improvement in pure processing).

However, the issue for HONE was that a lot of their applications were also i/o and memory hungry ... which meant they actually had to have 370/168 (or better) machines (w/o apl m'code assist).

T.J. Maxx data theft worse than first reported

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: T.J. Maxx data theft worse than first reported
Newsgroups: bit.listserv.ibm-main
Date: Fri, 20 Apr 2007 16:28:14 -0600
Tom.Schmidt@ibm-main.lst (Tom Schmidt) writes:
I once heard a former CIA spook say that any POS system can be hacked from a truck parked at the curb, if the price/value is right. (Speaking from a previous lifetime in marketing research.) Maybe somebody built a proof-of-concept device??? (Think: TEMPEST)

re:
http://www.garlic.com/~lynn/2007h.html#56 T.J. Maxx data theft worse than first reported
http://www.garlic.com/~lynn/2007h.html#58 T.J. Maxx data theft worse than first reported

and for more topic drift, the latest news, hot off the press today
Laptops And Flat Panels Now Vulnerable to Van Eck Methods
http://hardware.slashdot.org/hardware/07/04/20/2048258.shtml
Seeing through walls
http://www.newscientist.com/blog/technology/2007/04/seeing-through-walls.html


from above:
Back in 1985, Wim Van Eck proved it was possible to tune into the radio emissions produced by electromagnetic coils in a CRT display and then reconstruct the image. The practice became known as Van Eck Phreaking, and NATO spent a fortune making its systems invulnerable to it. It was a major part of Neal Stephenson's novel Cryptonomicon.

... snip ...

so as previously noted, there are several countermeasures to eavesdropping and replay attacks ... 1) make sure the attacker can't get the information, 2) scramble/encrypt, so the information is unintelligible, 3) change the paradigm (ala x9.59) so the eavesdropped/harvested information is useless for replay attacks.
http://www.garlic.com/~lynn/x959.html#x959

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 03:07:12 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
while dumb terminal emulation paradigm contributed to the early uptake of personal computers ... i.e. a business that already had (financial) justification for tens (or hundreds) of thousands of (dumb) 3270 terminals ... could get a PC for about the same price as a 3270 terminal ... and in a single desktop footprint have it serve the function of both a 3270 terminal as well as provide some local processing (i.e. origin of the term desktop computing).

re:
http://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?

slight drift:

Ballmer: Citigroup to upgrade 500,000 PCs to Vista in next year
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9017204

now 25yrs ago, for a lot of these commercial entities, it would be the case of upgrading from desktop terminals to desktop pcs/computing.

although back then, this particular institution also had an issue with variable rate mortgages ... long winded post mentioning a variety of different subjects ...including variable rate mortgages
http://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

collected posts mentioning some aspect of terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation

ANN: Microsoft goes Open Source

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: ANN: Microsoft goes Open Source
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 03:19:23 -0600
"Robert" <sabu77@comcast.net> writes:
Minor annoyance? I grew up in Montana. Snow doesn't cause much damage, but really ruins your day.

quite a bit of difference between the western/rockies and the eastern flat plains ... "snow fences" were common along highways (even 50yrs ago) ... helping mitigate drifts from blowing snow.

sizeof() was: The Perfect Computer - 36 bits?

Refed: **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: sizeof() was: The Perfect Computer - 36 bits?
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 04:42:53 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
slight drift:

Ballmer: Citigroup to upgrade 500,000 PCs to Vista in next year
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9017204

now 25yrs ago, for a lot of these commercial entities, it would be the case of upgrading from desktop terminals to desktop pcs/computing.

although back then, this particular institution also had an issue with variable rate mortgages ... long winded post mentioning a variety of different subjects ...including variable rate mortgages
http://www.garlic.com/~lynn/aepay3.htm#riskm The Thread Between Risk Management and Information Security

collected posts mentioning some aspect of terminal emulation
http://www.garlic.com/~lynn/subnetwork.html#emulation


re:
http://www.garlic.com/~lynn/2007h.html#38 sizeof() was: The Perfect Computer - 36 bits?
http://www.garlic.com/~lynn/2007h.html#64 sizeof() was: The Perfect Computer - 36 bits?

oh, almost forgot ... while t/r might not have been the optimal LAN ... even 16mbit compared to 10mbit enet ... recent reference:
http://www.garlic.com/~lynn/2007g.html#80 IBM to the PCM market

20yrs ago it could be used to address a slightly different aspect.

20yrs ago, there were starting to be bldg. flr loading problems with the 3270 coax cable trays (long runs of 3270 coax from every office back to the bldg datacenter, stressing the bldg. flr loading limits). the other problem was that fire inspectors were finding that a lot of the 3270 coax insulation was flammable, and the coax cable trays (with large bundles of flammable material) permeated bldgs. they were mandating that all that coax cable had to be replaced with cable that had non-flammable insulation.

so along comes t/r ... the customer could replace their desktop PC 3270 emulation cards with t/r cards ... and still run their dumb terminal emulation. the cat5 cable was significantly lighter than 3270 coax ... and cat5 cable runs tended to be significantly shorter ... to a local t/r MAU ... rather than all the way back to the bldg. datacenter. All those heavy 3270 coax cable trays permeating the bldg with massive amounts of flammable material could just disappear.

SSL vs. SSL over tcp/ip

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: SSL vs. SSL over tcp/ip
Newsgroups: comp.security.misc
Date: Sat, 21 Apr 2007 10:20:56 -0600
"xpyttl" <xpyttl_NOSPAM@earthling.net> writes:
Firstly we're talking about SSL, not TLS. Secondly, we have a free layer 5, where we can make unreliable transport protocols reliable.

Sounds like you totally miss the meaning of "reliable" in the context of the standard.


we were called in to consult with this small client/server startup that wanted to do payment transactions on their server ... and they had this technology called SSL they wanted to use. we had to do a technology end-to-end audit ... as well as an end-to-end business audit ... i.e. all the businesses that might be associated with webservers, things called digital certificates (and how these things called certification authorities actually did their business), etc. misc. past posts on ssl domain name digital certificates
http://www.garlic.com/~lynn/subpubkey.html#sslcert

we did have approval/sign-off on the webserver to payment gateway part of the implementation ... misc. past posts about payment gateway
http://www.garlic.com/~lynn/subnetwork.html#gateway

and had to specify some additional operations for SSL ... like mutual authentication ... which didn't exist at the time. both http and https were implemented running over TCP ... supposedly a "reliable" protocol.

in part because we had done ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

we had set up the payment gateway as a continuous operation with multiple different physical connections into multiple different carefully selected ISPs ... that were at different places in the internet backbone (and we have also since claimed it was the original SOA).

in the midst of doing all this ... internet governance transitioned from anarchy routing (i.e. ROUTED and such advertisements) to hierarchical routing (part of the problem was that anarchy routing was exceeding available memory space in backbone routers ... and transitioning to hierarchical routing saved significant memory and processing). As a result, it was no longer possible to dynamically advertise routes as a countermeasure to various failures and/or partitioning in the internet (or even an ISP taking hardware down on sundays for maintenance). So the only alternative left was (domain name) multiple A-record support, with the "client" (setting up the connection) running thru all the advertised A-records until it found one that got thru.

At the time, we got several howls from individuals claiming that straight-forward TCP connects provided "reliable transport" ... but that was only true once the connection was made. If there wasn't a specific path that was up-and-running ... it was still possible for the initial TCP connection to fail (we came up with the observation that if it wasn't in stevens' tcp/ip book used in the undergraduate course, they didn't know it existed). Once the TCP connection was made to the appropriate port ... then http/https could start their part of the process.

The explanation of multiple A-records was then met with the response that it was "way too advanced" (even when presented with telnet example code from 4.3 Tahoe) ... however, since we had sign-off/approval for the server to payment gateway implementation ... it had to be implemented. However, it took another year to get the client/browser to implement it for the client to server operation.
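
the multiple A-record client logic amounts to a simple loop over the advertised addresses ... a sketch under modern socket APIs (the function name is hypothetical, and this is not the period telnet code):

```python
import socket

def connect_any(host, port, timeout=5.0):
    """Try each advertised address (A-record) in turn until one connects."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)     # first reachable path wins
            return s
        except OSError as err:
            last_err = err      # this path is down ... try the next record
            s.close()
    # only fails when every advertised path is unreachable
    raise last_err if last_err else OSError("no addresses for " + host)
```

TCP's "reliable transport" only kicks in after one of these connect() calls succeeds; the loop is what survives an individual path being down.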

An example was an early, large web merchant that would advertise products in commercials that ran in sunday afternoon football. However, one of their ISPs had standard operations that they took routers down on sunday for maintenance. As a result ... all browsers that were routing in through that ISP path would be unable to connect during half-time (which was prime activity for them) ... even though there were alternate paths that were available for connection.

This was in the period that ipsec was attempting to take over the world with end-to-end (network) encryption. I've claimed that both SSL and VPN got a lot of uptake in that period because of the difficulty that ipsec was having in getting new kernel space tcp/ip stack code deployed. About the same time that SSL was starting to be used ... a friend of ours that we had worked with since the late 70s ... introduced (router/gateway based) VPN in the gateway committee meeting at the IETF san jose meeting. My observation was that it ruffled some feathers in the ipsec operation ... until they came up with the label "lightweight ipsec" for VPNs (which meant that everybody else could call what they were doing "heavyweight ipsec").

A corporation with hundreds/thousands of machines containing their own kernels and tcp/ip protocol stacks ... didn't have to update them. Individuals could just load a new browser application ... and voila ... all of a sudden they had "end-to-end" (application layer) encryption (it was similarly simple for end-users/consumers with their own home machine ... where the kernel had been preloaded by the PC vendor).

from our rfc index
http://www.garlic.com/~lynn/rfcietff.htm

click on Term (term->RFC#) in RFCs listed by section. Then click on "TLS" in the Acronym fastpath section ... i.e.
transport layer security (TLS )
see also encryption , security
4785 4762 4681 4680 4642 4572 4513 4507 4492 4366 4347 4346 4279 4261 4217 4162 4132 3943 3788 3749 3734 3546 3436 3268 3207 2847 2830 2818 2817 2716 2712 2595 2487 2246


clicking on the RFC number brings up the RFC summary in the lower frame; clicking on the ".txt=nnn" field in the RFC summary retrieves the actual RFC.

similarly:
transmission control protocol (TCP )
see also connection network protocol
4828 4808 4782 4727 4654 4653 4614 4413 4404 4342 4341 4278 4163 4145 4138 4022 4015 3822 3821 3782 3742 3734 3708 3649 3562 3522 3517 3481 3465 3449 3448 3430 3390 3360 3293 3081 3042 2988 2923 2883 2873 2861 2760 2582 2581 2525 2488 2452 2416 2415 2414 2398 2385 2151 2147 2140 2126 2018 2012 2001 1859 1792 1791 1739 1693 1644 1613 1470 1379 1347 1337 1323 1273 1263 1213 1195 1185 1180 1158 1156 1155 1147 1146 1145 1144 1110 1106 1095 1086 1085 1078 1072 1066 1065 1025 1006 1002 1001 983 964 962 896 889 879 872 848 846 845 843 842 839 838 837 836 835 834 833 832 817 816 814 813 801 794 793 773 761 721 700 675


for other topic drift ... while tcp/ip was the technology basis for the modern internet ... we claim that the NSFNET backbone was the operational basis for the modern internet ... misc. past posts
http://www.garlic.com/~lynn/subnetwork.html#nsfnet

John W. Backus, 82, Fortran developer, dies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 16:33:54 -0600
krw <krw@att.bizzzz> writes:
I don't follow. Fixed disks have been used in mainframes for decades (indeed they *all* are now). Physics/engineering/economics sorta demands it these days. I would expect a removable drive mechanism with the tolerances needed for a 100GB 3in platter would be rather expensive.

2311/2314/3330 were removable disk packs ... i.e. the platters and their central hub came off (with surfaces exposed to the room environment). the 3340 was also a removable pack ... but it removed as a totally enclosed infrastructure ... including the read/write heads. the "drive" rotating the pack and the mechanism moving the read/write heads weren't in the removable, enclosed environment.

as tolerances became more & more refined ... the mating of the drive and arm mechanism to the 3340-like enclosed infrastructure became less practical ... instead it all became one unit, without the problems related to providing a decoupling infrastructure.

picture of a string of 3340 drives with a 3340 removable pack (which is a completely enclosed infrastructure):
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3340.html
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3340b.html

it sort of reminds me of an analogy with Boyd's comment about the F111 moveable wing ... the mechanical infrastructure & weight to support the moveable wing cost more in performance and operation ... than the advantages gained from having a moveable wing (it is part of the reason that you don't see a moveable wing in his work on the f15, f16, f18, etc). f111 reference:
http://www.airpower.maxwell.af.mil/airchronicles/aureview/1972/nov-dec/holder.html

misc. posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
misc. URLs from around the web mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

John W. Backus, 82, Fortran developer, dies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 17:43:48 -0600
krw <krw@att.bizzzz> writes:
But the F14 got away with it, for what 30 years? Carrier based too (there was supposed to be a carrier version of the F111, as well).

re:
http://www.garlic.com/~lynn/2007h.html#66 John W. Backus, 82, Fortran developer, dies

the reference (in the above)
http://www.airpower.maxwell.af.mil/airchronicles/aureview/1972/nov-dec/holder.html

has some reference to designs being able to be improved by eliminating the swing-wing (i.e. it doesn't mean that a swing-wing is bad ... it is just that a swing-wing can "cost" more than it benefits).

i.e. the F14 was designed before Boyd's E-M theory of maneuverability (B-52s were also ... and there are still some of them flying).

boyd ran the lightweight fighter (LWF) office at the pentagon ... doing work on the f15, f16, f18 ... here is an article on the F18 ... which was to be the F14 follow-on.
http://en.wikipedia.org/wiki/F/A-18_Hornet

Boyd had lots of fights taking huge amount of weight out of the F15 and improving its performance ... and also doing the F16 design. The above article makes some reference to it

wiki page for LWF (also mentions Boyd and Boyd's E-M theory of maneuverability)
http://en.wikipedia.org/wiki/Light_Weight_Fighter

above page talks about how much of the LWF was based on Boyd's work ... but it doesn't actually mention that Boyd ran the LWF office in the pentagon for a period.

search engine turns up this comparison of F14 and F18
http://www.geocities.com/CapeCanaveral/8629/showdown.htm

the contrast between the F14 and F18 in the above ... is similar to some of the arguments about the F15 vis-a-vis the F16 ... the F15 can carry a heavier (missile) payload and therefore has more capability to fight at a distance (acting more like a missile platform than a fighter).

and of course ... collected post mentioning Boyd:
http://www.garlic.com/~lynn/subboyd.html#boyd
misc. URLs from around the web mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

John W. Backus, 82, Fortran developer, dies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 19:48:28 -0600
Bernd Felsche <bernie@innovative.iinet.net.au> writes:
This is about aircraft roles as well. The 111 was, IIRC, designed to come in low and very fast to drop nukes carried internally.

The F111 also has greater range and payload capacity than the others mentioned; one of the reasons why the RAAF is hanging onto them.

The European MRCA was/is also swing-wing.


re:
http://www.garlic.com/~lynn/2007h.html#68 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#69 John W. Backus, 82, Fortran developer, dies

while the referenced swing-wing article is from 72
http://www.airpower.maxwell.af.mil/airchronicles/aureview/1972/nov-dec/holder.html

... it includes references to
the Messerschmitt P-1101, the Bell X-5, the Grumman XF10F, the Convair F-111, the B-1 strategic bomber (North American Rockwell), the Grumman F-14 Tomcat, the Mirage G8, the Panavia 200, Boeing's initial SST swingwing, and (in space) the Lockheed FDL-5

John W. Backus, 82, Fortran developer, dies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 20:05:37 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
above page talks about how much of the LWF was based on Boyd's work ... but it doesn't actually mention that Boyd ran the LWF office in the pentagon for a period.

re:
http://www.garlic.com/~lynn/2007h.html#68 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#69 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#70 John W. Backus, 82, Fortran developer, dies

a couple old posts retelling one of Boyd's folklore tales from when he was running LWF office in the pentagon
http://www.garlic.com/~lynn/2004b.html#13 The BASIC Variations
http://www.garlic.com/~lynn/2005n.html#14 Why? (Was: US Military Dead during Iraq War

lots of past posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
and various URLs from around the web mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

John W. Backus, 82, Fortran developer, dies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 21:05:05 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
the contrast between the F14 and F18 in the above ... is similar to some of the arguments about the F15 vis-a-vis the F16 ... the F15 can carry a heavier (missile) payload and therefore has more capability to fight at a distance (acting more like a missile platform than a fighter).

re:
http://www.garlic.com/~lynn/2007h.html#68 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#69 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#70 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#71 John W. Backus, 82, Fortran developer, dies

and recent F18 item from today

Pilot Killed in S.C. Blue Angel Crash
http://apnews.myway.com/article/20070421/D8OLA5GO0.html

and Blue Angels FAQ ... with other comments about F18 vis-a-vis F14
http://www.blueangels.com/faq.shtml and
http://www.navy.com/about/navylife/onduty/blueangels/faq/

and F14 details:
http://www.fas.org/man/dod-101/sys/ac/f-14.htm
and F18
http://www.fas.org/man/dod-101/sys/ac/f-18.htm

and with reference to comment about B-52s still flying (as opposed to F-14s):

Navy retires F-14, the coolest of cold warriors
http://www.usatoday.com/news/nation/2006-09-22-F14-tomcat_x.htm

and B52
http://www.centennialofflight.gov/essay/Air_Power/B52/AP37.htm

from above:
An engineering study in the year 2001 predicted that the B-52 would be flying for the air force into the year 2045, almost a century after its development began. It has outlived not only its predecessors but also many of its successors such as the Convair B-58, Rockwell B-70 and B-1A, and perhaps even the B-1B. A USAF general called it a plane that is "not getting older, just getting better." Of the 744 B-52s built, fewer than 100 remain in service, all H-models. The Boeing engineers had built a plane that was strong enough to last and basic enough to be adaptable to the changing technology of air war.

... snip ...

B-52 STRATOFORTRESS
http://www.af.mil/factsheets/factsheet.asp?fsID=83

Atlantic Strike V begins in Avon Park (4/18/2007)
http://www.af.mil/news/story.asp?storyID=123049204

from above:
Joint air assets participating in the training include F-16 Fighting Falcons, A-10 Thunderbolt IIs, B-52 Stratofortress, E-8 Joint STARS, Navy F/A-18 Hornets, E-2 Hawkeyes, and P-3C Orions. Coalition observers include military members from the United Kingdom, Germany and the Netherlands.

... snip ...

John W. Backus, 82, Fortran developer, dies

Refed: **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 21:19:17 -0600
krw <krw@att.bizzzz> writes:
A friend (a retired crew chief in the VTANG) calls the F15 a "F4 on stilts" He hates the thing.

re:
http://www.garlic.com/~lynn/2007h.html#68 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#69 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#70 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#71 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#72 John W. Backus, 82, Fortran developer, dies

while Boyd vastly improved the F-15 ... the F15 didn't originate from his design principles.

so a little more detail from here

USAF Col John Boyd
http://www.sci.fi/~fta/JohnBoyd.htm

from above ...
In the mean time the U.S. media had focused in the huge price tag that went with the F-15 and the poor performance of the F-14 Tomcat. The Nixon government urged Secretary of Defense Melvin Laird to put the military procurement system on track. Laird gave the mission to his assistant David Packard, who approved the lightweight fighter project.

... snip ... and ...
Lightweight fighter studies showed that the aircraft would have better performance than the F-15 Eagle, but this information had to be kept secret because the USAF didn't want even the prototype to be better than the F-15.

... snip ...

Boyd's stories of what went on are more colorful.

misc. posts mentioning Boyd:
http://www.garlic.com/~lynn/subboyd.html#boyd
and various Boyd URLs from around the web
http://www.garlic.com/~lynn/subboyd.html#boyd2

John W. Backus, 82, Fortran developer, dies

Refed: **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **, - **
From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sat, 21 Apr 2007 21:48:38 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
while Boyd vastly improved the F-15 ... the F15 didn't originate from his design principles.

so a little more detail from here

USAF Col John Boyd
http://www.sci.fi/~fta/JohnBoyd.htm


re:
http://www.garlic.com/~lynn/2007h.html#68 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#69 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#70 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#71 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#72 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#73 John W. Backus, 82, Fortran developer, dies

so even more from:
http://www.sci.fi/~fta/JohnBoyd.htm

from above:
In late 1962 Boyd met the General Dynamics project engineer for the F-111, Harry Hillaker, at the Eglin O'Club. Boyd complained to Hillaker that the F-111 was underpowered and the swing-wing mechanism was too complicated to be used fast enough to sweep the wings during flight and would get fatigue and stress cracks. Boyd had already done some E-M calculations on the F-111 and knew that the Air Force was about to make a mistake if it procured the F-111. Swing-wing technology would ultimately ruin two generations of airplanes: the Navy's underpowered F-14 and the Air Force's B-1 bomber. Boyd and Hillaker agreed that they would like to develop a small maneuverable fighter.

... snip ...

a Boyd reference from earlier this year
http://www.garlic.com/~lynn/2007.html#20 MS to world: Stop sending money, we have enough - was Re: Most ... can't run Vista

with a reference to a Boyd quote here
http://www.belisarius.com/
http://web.archive.org/web/20010722050327/http://www.belisarius.com/

from above:
"There are two career paths in front of you, and you have to choose which path you will follow. One path leads to promotions, titles, and positions of distinction.... The other path leads to doing things that are truly significant for the Air Force, but the rewards will quite often be a kick in the stomach because you may have to cross swords with the party line on occasion. You can't go down both paths, you have to choose. Do you want to be a man of distinction or do you want to do things that really influence the shape of the Air Force? To be or to do, that is the question." Colonel John R. Boyd, USAF 1927-1997

From the dedication of Boyd Hall, United States Air Force Weapons School, Nellis Air Force Base, Nevada. 17 September 1999


... snip ...

past posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
various URLs from around the web mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

John W. Backus, 82, Fortran developer, dies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sun, 22 Apr 2007 08:05:02 -0600
Anne & Lynn Wheeler <lynn@garlic.com> writes:
Lightweight fighter studies showed that the aircraft would have better performance than the F-15 Eagle, but this information had to be kept secret because the USAF didn't want even the prototype to be better than the F-15.

... snip ...

Boyd's stories of what went on are more colorful.


re:
http://www.garlic.com/~lynn/2007h.html#68 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#69 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#70 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#71 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#72 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#73 John W. Backus, 82, Fortran developer, dies
http://www.garlic.com/~lynn/2007h.html#74 John W. Backus, 82, Fortran developer, dies

so to slightly return to a computer bent ... they eventually realized that Boyd was working on the design of what was to become the F16 ... and tried all sorts of means to stop him. One was that they realized he had to be using significant amounts of (gov.) supercomputer time for the design (programs written in Fortran) ... and they were going to find records that would allow him to be charged with theft of millions in gov. resources, put in Leavenworth with the key thrown away. There were extensive audits of all gov. supercomputers ... but they never found any records of Boyd's use.

past posting of story about trying to get Boyd thrown into Leavenworth
http://www.garlic.com/~lynn/99.html#120 atomic History
http://www.garlic.com/~lynn/2005t.html#13 Dangerous Hardware

past posts mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd
various URLs from around the web mentioning Boyd
http://www.garlic.com/~lynn/subboyd.html#boyd2

John W. Backus, 82, Fortran developer, dies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Sun, 22 Apr 2007 08:31:25 -0600
jmfbahciv writes:
The banking system used to be the few that needed that. I'm stating that now the people who do banking are going to need it. This has never been done before.

or at least some of the financial transaction networks ... within the last decade we were talking to the person running one of the larger financial transaction networks ... who mentioned that they had had 100 percent uptime for several yrs and attributed it to
• IMS hot-standby
• automated operator


... i.e. humans sometimes make mistakes ... automating many of their operations eliminates the chance for them to make mistakes doing those particular things.

the other is that there was a study in the early 80s, attributed to Jim when he was at Tandem ... showing that hardware was becoming a dwindling percentage of the cause of service outages.

IMS hot-standby was configured with replicated locations at geographically separated datacenters.
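the hot-standby idea can be sketched generically ... a minimal illustration of heartbeat-driven takeover, not IMS's actual protocol (the heartbeat interval, the miss threshold, and the Standby class are all invented for the example):

```python
# Generic sketch of hot-standby failover (illustrative only; not the actual
# IMS hot-standby mechanism). The standby counts heartbeat intervals that
# pass without hearing from the primary and takes over past a threshold.

class Standby:
    def __init__(self, max_missed=3):
        self.max_missed = max_missed
        self.missed = 0
        self.active = False   # standby starts passive

    def heartbeat(self):
        # primary reported in; reset the failure counter
        self.missed = 0

    def tick(self):
        # called once per heartbeat interval by a timer
        if self.active:
            return
        self.missed += 1
        if self.missed >= self.max_missed:
            self.active = True   # take over the primary's workload

s = Standby()
s.heartbeat()       # primary healthy
s.tick(); s.tick()  # two intervals with no heartbeat: still waiting
s.tick()            # third consecutive miss: standby takes over
print(s.active)
```

the point of the automated-operator piece is visible here too: takeover happens on a fixed rule rather than a human judgment call, which removes one class of operator mistakes.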

of course, past posts mentioning my wife being con'ed into going to POK to be in charge of loosely-coupled architecture (i.e. mainframe for cluster) ... and being responsible for peer-coupled shared data architecture ... and that there was very little uptake at the time ... except for the work on IMS hot-standby (until sysplex work some decades later)
http://www.garlic.com/~lynn/submain.html#shareddata

of course later we did ha/cmp product
http://www.garlic.com/~lynn/subtopic.html#hacmp

where we coined the terms disaster survivability and geographic survivability (to differentiate from disaster recovery)
http://www.garlic.com/~lynn/submain.html#available

for other drift ... other recent posts mentioning Jim:
http://www.garlic.com/~lynn/2007.html#1 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007.html#13 "The Elements of Programming Style"
http://www.garlic.com/~lynn/2007d.html#4 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#6 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#8 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#17 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007d.html#33 Jim Gray Is Missing
http://www.garlic.com/~lynn/2007e.html#4 The Genealogy of the IBM PC
http://www.garlic.com/~lynn/2007f.html#12 FBA rant
http://www.garlic.com/~lynn/2007g.html#28 Jim Gray Is Missing

Linux: The Completely Fair Scheduler

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Linux: The Completely Fair Scheduler
Newsgroups: alt.folklore.computers
Date: Sun, 22 Apr 2007 11:55:24 -0600
Linux: The Completely Fair Scheduler
http://kerneltrap.org/node/8059

from above

... snip ...

deja vu from '68 ... nearly 40 yrs ago, as an undergraduate, doing the resource manager for cp67, where the default policy was fair share ... misc. posts about the resource manager and fair share scheduling policy
http://www.garlic.com/~lynn/subtopic.html#fairshare
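the fair-share policy can be sketched in a few lines ... this is an illustration of the general idea (run whoever is furthest behind relative to their entitlement), not the cp67 resource manager code and not the actual Linux CFS implementation; the Task class and share weights are invented for the example:

```python
# Minimal sketch of fair-share CPU scheduling (illustrative only). Each
# task accumulates "virtual time" scaled by the inverse of its share; the
# scheduler always runs the task with the lowest virtual time, so actual
# CPU usage converges toward each task's share of the total.

class Task:
    def __init__(self, name, share):
        self.name = name
        self.share = share   # relative entitlement (1.0 = one fair share)
        self.vtime = 0.0     # virtual time consumed so far
        self.cpu = 0.0       # actual CPU time consumed (ms)

def pick_next(tasks):
    # fair-share policy: run whoever is furthest behind in virtual time
    return min(tasks, key=lambda t: t.vtime)

def run(tasks, slice_ms, n_slices):
    for _ in range(n_slices):
        t = pick_next(tasks)
        t.cpu += slice_ms
        t.vtime += slice_ms / t.share  # small share -> vtime grows faster

tasks = [Task("A", 1.0), Task("B", 1.0), Task("C", 2.0)]
run(tasks, slice_ms=10, n_slices=400)
for t in tasks:
    # C holds twice the share, so it ends up with about half the CPU
    print(t.name, t.cpu)
```

with shares 1:1:2 over 4000ms, A and B each get 1000ms and C gets 2000ms ... the same deficit idea works whether the entitlements are per-user (as in the cp67 fair-share default) or per-task.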

John W. Backus, 82, Fortran developer, dies

From: Anne & Lynn Wheeler <lynn@garlic.com>
Subject: Re: John W. Backus, 82, Fortran developer, dies
Newsgroups: alt.folklore.computers
Date: Mon, 23 Apr 2007 06:08:46 -0600
jmfbahciv writes:
The banking system used to be the few that needed that. I'm stating that now the people who do banking are going to need it. This has never been done before.

re:
http://www.garlic.com/~lynn/2007h.html#76 John W. Backus, 82, Fortran developer, dies

note ... part of this is also cost/benefit analysis ... does the net added benefit of high availability justify the incremental cost ... especially as some of the techniques for providing higher availability have come down in cost ... i.e. earlier post
http://www.garlic.com/~lynn/2007h.html#53 John W. Backus, 82, Fortran developer, dies

for slightly different discussion related to this topic ... posting in different n.g.
http://www.garlic.com/~lynn/2007h.html#67 SSL vs. SSL over tcp/ip

about several compensating processes/techniques for improving the availability of the original payment gateway
http://www.garlic.com/~lynn/subnetwork.html#gateway

other related past posts about payment gateway as original SOA (service oriented architecture) and possibly taking 4-10 times the effort to turn a well-tested application into a "service"
http://www.garlic.com/~lynn/2001f.html#75 Test and Set (TS) vs Compare and Swap (CS)
http://www.garlic.com/~lynn/2001n.html#91 Buffer overflow
http://www.garlic.com/~lynn/2001n.html#93 Buffer overflow
http://www.garlic.com/~lynn/2002n.html#11 Wanted: the SOUNDS of classic computing
http://www.garlic.com/~lynn/2003g.html#62 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003j.html#15 A Dark Day
http://www.garlic.com/~lynn/2003p.html#37 The BASIC Variations
http://www.garlic.com/~lynn/2004b.html#8 Mars Rover Not Responding
http://www.garlic.com/~lynn/2004b.html#48 Automating secure transactions
http://www.garlic.com/~lynn/2004k.html#20 Vintage computers are better than modern crap !
http://www.garlic.com/~lynn/2004l.html#49 "Perfect" or "Provable" security both crypto and non-crypto?
http://www.garlic.com/~lynn/2004p.html#23 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2004p.html#63 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2004p.html#64 Systems software versus applications software definitions
http://www.garlic.com/~lynn/2005b.html#40 [Lit.] Buffer overruns
http://www.garlic.com/~lynn/2005i.html#42 Development as Configuration
http://www.garlic.com/~lynn/2005n.html#26 Data communications over telegraph circuits
http://www.garlic.com/~lynn/2006n.html#20 The System/360 Model 20 Wasn't As Bad As All That
http://www.garlic.com/~lynn/2007f.html#37 Is computer history taught now?
http://www.garlic.com/~lynn/2007g.html#51 IBM to the PCM market(the sky is falling!!!the sky is falling!!)


