tech-kern archive


Re: Maxphys on -current?



On Fri, Aug 4, 2023 at 17:27, Jason Thorpe <thorpej%me.com@localhost> wrote:
> If someone does pick this up, I think it would be a good idea to start from scratch, because MAXPHYS, as it stands, is used for multiple things.  Thankfully, I think it would be relatively straightforward to do the work that I am suggesting incrementally.
>

I believe I was the last one to look at the tls-maxphys branch and
at least keep it updated.

I agree that it's likely more useful to restart from scratch and
proceed incrementally: certainly physio before the block interface,
and also by controller type, e.g. ATA first, then SCSI or USB.

What I particularly disliked about the branch was that quite a few of
the changes looked either unrelated or avoidable.
One of the first steps I had planned was to reduce the diff against
HEAD, but I haven't gotten to it yet.

On Fri, Aug 4, 2023 at 08:04, Brian Buhrow <buhrow%nfbcal.org@localhost> wrote:
> speed of the transfers on either system.  Interestingly enough, however, the FreeBSD
> performance is markedly worse on this test.
> ...
> NetBSD-99.77/amd64 with SATA3 disk
> # dd if=/dev/rwd0a of=/dev/null bs=1m count=50000
> 52428800000 bytes transferred in 292.067 secs (179509496 bytes/sec)
>
> FreeBSD-13.1/AMD64 with SATA3 disk
> # dd if=/dev/da4 of=/dev/null bs=1m count=50000
> 52428800000 bytes transferred in 322.433936 secs (162603232 bytes/sec)

Interesting. FreeBSD da(4) is a character device since FreeBSD has no
block devices anymore, so it's not a raw-vs-block device difference.
Is the hardware really similar enough to be a fair comparison?

Taking a broader view, I doubt there is any practical reason to
support transfer sizes bigger than 64KiB at all.

For HDDs over SATA, maybe - bigger blocks mean potentially more
sequential I/O and hence higher total throughput.
You can also queue more I/O and thus avoid seeks - the current NetBSD
maximum on SATA is 32 x 64KiB = 2048KiB of queued I/O.
For SCSI, even with 64KiB blocks you can queue far more I/O than the
usual disk cache can hold, so a bigger block size is not very
important.
Still, I doubt >64KiB blocks on HDDs would gain more than a couple
of percent over 64KiB ones.

For SSDs over SATA, there is no seek to worry about, but command
latency is a concern. Even so, according to Linux benchmarks, the
total transfer rate already tops out at 64KiB blocks.

For NVMe, command latency is close to irrelevant - it has a very
low-latency command interface and very deep command queues. I don't
see bigger blocks helping much, if at all.

Jaromir

