Subject: Re: High speed io on VS3100's, (Re: VS3100 SCSI)
To: Brian Chase <bdc@world.std.com>
From: Lord Isildur <mrfusion@umbar.vaxpower.org>
List: port-vax
Date: 05/30/2001 07:52:31
what's this talk of using a disk in the middle? 
not only do you have the loss of speed from going through a disk, and the 
inherent halving of throughput since everything crosses the bus twice, once 
to the disk and then again from it, but you also have the much more serious 
issue of maintaining serialization and integrity of the data on the disk. 
you need to lock and unlock the shared regions before and after writing, 
you need atomic locking primitives, etc. This gets ugly. 
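just to illustrate the problem, here's roughly what the naive approach 
looks like. read_block()/write_block() and LOCK_BLOCK are made up for the 
example; assume they do raw i/o on a sector both hosts can see:

    /* hypothetical raw-sector helpers, for illustration only */
    extern void read_block(int blkno, void *buf);
    extern void write_block(int blkno, const void *buf);
    #define LOCK_BLOCK 0

    struct lockrec {
        unsigned long owner;        /* 0 = free, else the holder's host id */
    };

    /* naive lock acquisition -- BROKEN, which is exactly the point */
    int try_lock(unsigned long myid)
    {
        struct lockrec r;

        read_block(LOCK_BLOCK, &r);     /* the other host can run      */
        if (r.owner != 0)               /* between this read ...       */
            return 0;
        r.owner = myid;                 /* ... and this write, and     */
        write_block(LOCK_BLOCK, &r);    /* then both hosts "hold" it   */
        return 1;
    }

the read and the write can't be made atomic across two initiators without 
help from the target, which is why you end up wanting something like scsi 
RESERVE/RELEASE underneath, or a real lock manager on top. 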
One very effective implementation of something like this, incidentally, 
is the VMS distributed lock manager, which is one of the big pieces of 
magic behind clusters... *grin*
however, i would instead go directly to host-adapter-to-host-adapter 
communication. a scsi device can send a packet to another scsi device, so 
the communication can be direct, and then things get both simpler and faster. 
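for reference, the scsi-2 processor device type has SEND (opcode 0x0a) and 
RECEIVE (0x08) commands for exactly this kind of thing. building the cdb 
is the easy part; how you hand it to your particular hba driver is left as 
an exercise:

    #include <string.h>

    /* build a scsi-2 SEND cdb for a processor-type target.
     * 6-byte cdb: opcode, reserved, 24-bit transfer length, control. */
    void build_send_cdb(unsigned char cdb[6], unsigned long len)
    {
        memset(cdb, 0, 6);
        cdb[0] = 0x0a;                  /* SEND */
        cdb[2] = (len >> 16) & 0xff;    /* transfer length, msb first */
        cdb[3] = (len >>  8) & 0xff;
        cdb[4] =  len        & 0xff;
    }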

isildur

On Tue, 29 May 2001, Brian Chase wrote:

> On Wed, 30 May 2001, Stephen Bell wrote:
> 
> > Although this would work, given the speed of the vs3100 SCSI would it
> > end up any faster than the 10baseT?  Does the data need to be physically
> > committed to disk, or would the disk cache operate as a "shared memory"
> > interface?
> 
> With the intermediate disk, it might not be any faster than 10Mbit
> ethernet.  It really depends on the disk, but I definitely think you could
> size the buffers so that both fit within the disk cache, which would help
> the performance.
> 
> Even with older, slower SCSI drives, you might be able to overcome the
> disk bottleneck by using multiple SCSI drives.  You'd be able to come
> closer to saturating the 5MB/s or 10MB/s SCSI bus.  It's perhaps a
> questionable use of good SCSI drives given the associated space being
> wasted.  I guess nothing prevents one from putting a filesystem on the
> rest of the disk.  But only a single host would be able to mount it, and
> any filesystem I/O would reduce the benefits of leveraging the disk cache
> for IP-over-SCSI use.
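> Round-robin striping would probably be the simple version of that;
> something like the following, untested, where fd[] are assumed to be
> open raw disk devices:
> 
>     #include <sys/types.h>
>     #include <unistd.h>
> 
>     #define BLKSZ 8192
> 
>     /* spread consecutive blocks across the drives round-robin so the
>      * transfers can overlap; untested sketch */
>     ssize_t striped_write(int *fd, int ndrives, const char *buf,
>                           int nblocks)
>     {
>         ssize_t total = 0;
>         int i;
> 
>         for (i = 0; i < nblocks; i++)
>             total += pwrite(fd[i % ndrives], buf + (size_t)i * BLKSZ,
>                             BLKSZ, (off_t)(i / ndrives) * BLKSZ);
>         return total;
>     }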
> 
> Direct host adapter to host adapter communication would obviously be best.
> It's time to start wiring a couple of these guys together to see what
> happens.
> 
> > I've been working on an FPGA design for a 16-bit bus link between
> > unibus/qbus <-> ISA bus (the early prototype in discrete logic managed
> > something like "hheelloooooo wwwrrllllddd").  The FPGA design allows
> > some extra complexity to provide synchronisation between the two
> > machines using hardware interrupts, avoiding the need for this to be
> > handled in software.  It sits between a DR11C on the DEC side and a
> > 16-bit io card on the PC side.
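> > The software end is basically just a strobe/ack handshake on the CSR;
> > roughly like this, where the register offsets and bit positions are
> > from memory and purely illustrative -- check the DR11C prints:
> >
> >     /* illustrative offsets/bits only, not gospel */
> >     #define DRCSR   0       /* csr; assume bit 7 = peer-ready (REQ A) */
> >     #define DROUTB  1       /* output data buffer register (word idx) */
> >
> >     /* push one 16-bit word to the far side, polled version; the FPGA
> >      * raises the ready bit once the peer has taken the last word */
> >     void dr11c_putword(volatile unsigned short *base, unsigned short w)
> >     {
> >         while ((base[DRCSR] & 0x0080) == 0)
> >             ;                   /* spin until peer ready */
> >         base[DROUTB] = w;       /* the write strobes NEW DATA READY */
> >     }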
> 
> Ahhh, nice.  At least with Q-bus and Unibus systems you have the
> relatively inexpensive option of adding multiple ethernet modules to get
> more I/O between hosts.
> 
> > Have 2 VS3100's and would be quite keen to look into anything that
> > gets max I/O between a VS3100 & the world.
> >
> > Was thinking more along the lines of finding a rom/eprom socket that I
> > can steal some address space from, preferably something common to
> > several models. Any ideas?? Presumably most eproms use less than 100%
> > of the address space allocated to them.
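> > If the window pans out, it could be treated as a dumb one-way mailbox.
> > A sketch, with the layout and sizes entirely made up for illustration:
> >
> >     /* one-way mailbox living in the shared address window */
> >     struct mailbox {
> >         volatile unsigned long seq;     /* sender bumps when msg valid */
> >         volatile unsigned long len;
> >         volatile unsigned char data[2040];
> >     };
> >
> >     void mbox_send(struct mailbox *mb, const void *msg,
> >                    unsigned long len)
> >     {
> >         const unsigned char *p = msg;
> >         unsigned long i;
> >
> >         for (i = 0; i < len; i++)       /* copy the payload first ... */
> >             mb->data[i] = p[i];
> >         mb->len = len;
> >         mb->seq++;                      /* ... then publish; receiver
> >                                          * polls seq for a change */
> >     }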
> 
> This is a cool idea.  Most of the original VS3100s had one SCSI
> controller and one MFM controller.  I wouldn't mind tossing the MFM
> capabilities altogether if you can use the rom socket for zapping data
> between the VAXen at high speed.
> 
> -brian.
> 
>