Port-sparc archive
Re: Installation from a tape
> I created a bootable tape as follows:
> # dd if=tapefile1 of=/dev/nrst0 bs=4k conv=sync
> # dd if=tapefile2 of=/dev/nrst0 bs=4k conv=sync
> Then I booted my SS20 from the tape. The kernel loaded successfully
> and asked for the name of the tape drive, the location of tapefile2,
> and the block size. Then I got the following errors:
> [198.2459840] st0: 4096-byte tape record too big for 4-byte user buffer
> [198.2609645] st0(esp0:0:4:0): Sense Key 0x00, info = -4092 (decimal), data = 00 00 00 00 00 00 00 00 00 00 00 00
> gzip: can't read stdin: Input/output error
> tar: End of archive volume 1 reached
> tar: Sorry, unable to determine archive format.
That looks to me like a bug in tar: when running gzip, it apparently
treats the input like an octet stream instead of a block stream,
breaking use on real tapes. (At a guess, based on the above, it does
this by simply passing stdin to a forked gunzipper process.)
The _right_ fix of course is to fix that tar variant, so that it reads
the input itself when forking gunzip, so as to preserve the
stream-of-blocks paradigm. But that's likely more work than you're
interested in getting into for this.
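To make that concrete, the external equivalent of what a fixed tar
would do internally looks something like the following, with dd doing
the blocked reads from the tape so that gunzip only ever sees an
ordinary octet stream on the pipe. This is just a sketch, untested,
reusing the device, block size, and ramdisk path from your transcript:
# dd if=/dev/nrst0 bs=4k | gunzip | /usr/obj/distrib/sparc/ramdisk/ramdiskbin tar xf -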
The next thing that comes to mind is to gunzip the file before writing
it to the tape, assuming of course that the uncompressed file fits on
the tape:
# gunzip < tapefile2 | dd obs=4k of=/dev/nrst0
Then drop the "gunzip the input" flag from tar (z? I don't know that
tar version; I normally use my own variant) when reading the tape.
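(Assuming z really is that flag in the ramdisk tar, the read side then
becomes simply the following, untested on my end:
# /usr/obj/distrib/sparc/ramdisk/ramdiskbin tar xf /dev/nrst0
)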
> # /usr/obj/distrib/sparc/ramdisk/ramdiskbin tar zxf /dev/nrst0
> This causes errors similar to those shown above. Also,
> # /usr/obj/distrib/sparc/ramdisk/ramdiskbin pax -rzvf /dev/nrst0
> gives the same result.
I would guess that "dd if=/dev/nrst0 bs=4k | .../ramdiskbin tar zxf -"
works fine?
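(And presumably the same trick works for pax, letting it read the pipe
on standard input by dropping the -f argument; again a guess, I
haven't tried it against that ramdisk:
# dd if=/dev/nrst0 bs=4k | /usr/obj/distrib/sparc/ramdisk/ramdiskbin pax -rzv
)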
> This result seems to be caused by the difference between the block
> device and the raw (character) device.
Does it work any better if you use the block device instead?
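(Concretely, something like the following, assuming the ramdisk's /dev
has the usual NetBSD block tape nodes next to the raw ones, nst0 being
the no-rewind block node corresponding to nrst0; I haven't checked
what that ramdisk actually populates:
# /usr/obj/distrib/sparc/ramdisk/ramdiskbin tar zxf /dev/nst0
)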
My guess would be that the actual culprit is the difference between the
stream-of-octets paradigm that most of Unix is built around and the
stream-of-blocks API to tapes, combined with whoever wrote the tar
variant in question not having tested compressed archives on real
tapes.
> Creating a tape with bs=4 (not 4K; adjusted for the user-land buffer
> size) for tapefile2 itself succeeded; however, loading and extracting
> it failed:
Ouch. On some tapes, that wouldn't be possible at all; some tapes are
stream-of-512-octet-blocks devices even at the bits-on-the-medium
level. Even if it worked, I would expect it to be _incredibly_
inefficient, in both processor time and tape usage. (Any tape format
with inter-record gaps is likely to use almost all of the tape for IRGs
instead of data when written that way.)
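(To put rough numbers on it: on a classic 9-track drive at 6250 bpi
with 0.3-inch inter-record gaps, a 4-byte record takes well under a
thousandth of an inch of tape next to each 0.3-inch gap, so over 99%
of the medium goes to gaps. Those figures are for half-inch reel tape;
whatever SCSI drive is on that SS20 will have different numbers, but
I'd expect the ratio to be comparably awful.)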
/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML mouse%rodents-montreal.org@localhost
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B