On Jun 16, 9:13am, Michael van Elst wrote:
} On Sun, Jan 24, 2010 at 11:16:05PM +1030, Brett Lymn wrote:
} > On Fri, Jan 22, 2010 at 01:09:10PM +0100, Michael van Elst wrote:
} > >
} > > Keeping DEV_SIZE at 512 bytes avoids lots of changes.
A quote, often attributed to Einstein, is, "Everything should be
made as simple as possible, but no simpler." I can't help but feel
that this is making things simpler than they should be. This may be a
good first implementation, but I keep getting the feeling that
DEV_BSIZE should go away, and we should be using the real blocksize.
However, since I'm not overly familiar with this area of the system
and I'm unlikely to be doing the work, I can't tell the people who
are how to do it.
} > Won't that mean there is a chance there will be a lot of
} > read/modify/write going on if the driver is pretending to have 512byte
} > sectors?
} No, the driver will not support writes of single 512byte sectors
} if the underlying hardware does not provide 512byte sectors.
How do you communicate the real blocksize up the stack? If you're
doing writes from userland through the raw device, how do you find out
the real blocksize?
} We are only talking about the API and what units are used to
} specify disk addresses and block counts. So on a disk with
} 1K sectors you will address blocks 0,2,4,6,... and you can
} only transfer an even number of blocks.
Other than possibly simplifying things beyond the point they
should be, what is the point of keeping DEV_BSIZE when you are going to
force everything to use the real blocksize?
} N.B. So far I have MSDOSFS and FFS running on a disk with 1K sectors
} and I learned that the block size translation is already done
} in our block drivers, so there is no need to funnel I/O through dk.
It is certainly good to have a proof of concept.
}-- End of excerpt from Michael van Elst