
Re: Network very very slow... was iSCSI and jumbo frames



RVP wrote:
> On Thu, 4 Feb 2021, BERTRAND Joël wrote:
> 
>> Michael van Elst wrote:
>>> Over a real network with higher latencies than loopback the effect
>>> is much larger, especially for reading,
>>
>>     Sure. But since the Linux initiator can reach more than 100 MB/s, NetBSD
>> should obtain a throughput greater than 10 MB/s with the same target...
> 
> Apart from reading the /dev/sd0 case--which Michael explained--both
> my tests and his show that the standard iSCSI initiator should be able
> to saturate a 1Gbps link. And, this was with the standard config. files
> ie. no special customization. You could do the same test yourself on one
> of the partitions on `legendre' using the istgt target.

	OK. I have istgt installed on legendre (as I get exactly the same poor
performance with iscsi-target). My istgt installation runs fine and
provides swap volumes for a lot of diskless workstations. It exports
the ccd0 device.
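
	For context, each exported volume is declared in istgt.conf with a
[LogicalUnit] stanza along these lines (a sketch with placeholder unit
number, target name, group names and backing store, not my real
configuration; istgt prepends the NodeBase from the [Global] section
to the TargetName):

[LogicalUnit1]
  TargetName swap-ws1
  Mapping PortalGroup1 InitiatorGroup1
  AuthMethod Auto
  UseDigest Auto
  UnitType Disk
  LUN0 Storage /dev/rdk1 Auto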

	I have created a new dk6 wedge on ccd0 and attached it with iscsictl:
legendre# iscsictl list_targets
     1: iqn.2004-04.com.qnap:ts-431p2:iscsi.euclide.3b96e9 (QNAP Target)
        2: 192.168.12.2:3260,1
     3: iqn.2020-02.fr.systella.legendre.istgt:hilbert
        4: 192.168.10.128:3260,1
     5: iqn.2020-02.fr.systella.legendre.istgt:abel
        6: 192.168.10.128:3260,1
     7: iqn.2020-02.fr.systella.legendre.istgt:schwarz
        8: 192.168.10.128:3260,1
     9: iqn.2020-02.fr.systella.legendre.istgt:pythagore
       10: 192.168.10.128:3260,1
    11: iqn.2020-02.fr.systella.legendre.istgt:test
       12: 192.168.10.128:3260,1
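
	For reference, attaching the new target was just the usual sequence (a
sketch; the portal number is the one listed for the test target above,
see iscsictl(8)):

legendre# iscsictl add_send_target -a 192.168.10.128
legendre# iscsictl refresh_targets
legendre# iscsictl login -P 12

The new LUN then shows up as sd1.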

This new volume is mounted as FFSv2 on /mnt:
legendre# df -h
Filesystem         Size       Used      Avail %Cap Mounted on
/dev/raid0a         31G       3.3G        26G  11% /
/dev/raid0e         62G        25G        34G  42% /usr
/dev/raid0f         31G        23G       6.0G  79% /var
/dev/raid0g        252G        48G       191G  20% /usr/src
/dev/raid0h        523G       209G       288G  42% /srv
/dev/dk0           3.6T       2.4T       1.0T  70% /home
kernfs             1.0K       1.0K         0B 100% /kern
ptyfs              1.0K       1.0K         0B 100% /dev/pts
procfs             4.0K       4.0K         0B 100% /proc
tmpfs              4.0G        28K       4.0G   0% /var/shm
/dev/dk5            11T       8.9T       1.1T  89% /opt
/dev/sd1a           16G       2.0K        15G   0% /mnt
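
	The file system on the new LUN was created and mounted in the usual
way, something like this (a sketch; -O 2 selects FFSv2, see newfs(8)):

legendre# newfs -O 2 /dev/rsd1a
legendre# mount /dev/sd1a /mnt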

legendre# dd if=/dev/zero of=test bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 4.697 secs (223243772 bytes/sec)
legendre# dd if=/dev/zero of=test bs=1m count=5000
5000+0 records in
5000+0 records out
5242880000 bytes transferred in 247.150 secs (21213352 bytes/sec)
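
	Note that the 223 MB/s of the first run is more than a 1 Gb/s link can
carry (roughly 110-120 MB/s of payload), so it mostly measures the buffer
cache. A sync-bounded variant such as this (a sketch) gives a number
closer to the wire:

legendre# time sh -c 'dd if=/dev/zero of=/mnt/test bs=1m count=1000 && sync'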

	

After 3 GB written on sd1a:

At first, istgt takes a lot of CPU time (60 to 70%), then this CPU
usage drops:
load averages:  1.00,  1.19,  0.88;               up 4+03:43:07  14:32:30

  PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
 1257 root      43    0   784M  527M parked/0   8:13  7.76%  7.76% istgt
 7831 root      84    0    20M 2456K biowai/6   0:09  6.57%  6.54% dd
    0 root       0    0     0K   18M CPU/7    290:27  0.00%  1.76% [system]
 1141 root      85    0   117M 2736K nfsd/3   206:14  0.00%  0.00% nfsd
14626 root      85    0    68M 6204K select/5  28:43  0.00%  0.00% bacula-fd
15556 bacula-s  85    0    64M 6516K select/7  18:05  0.00%  0.00% bacula-sd
...

After 4.2 GB:
load averages:  0.63,  1.03,  0.84;               up 4+03:44:34  14:33:57
71 processes: 69 sleeping, 2 on CPU
CPU states:  0.0% user,  0.0% nice,  0.2% system,  0.9% interrupt, 98.7% idle
Memory: 5310M Act, 160M Inact, 16M Wired, 94M Exec, 4295M File, 5782M Free
Swap: 16G Total, 16G Free

  PID USERNAME PRI NICE   SIZE   RES STATE      TIME   WCPU    CPU COMMAND
 1257 root      43    0   784M  527M parked/7   8:18  4.69%  4.69% istgt
    0 root       0    0     0K   18M CPU/7    290:34  0.00%  0.83% [system]
 1141 root      85    0   117M 2736K nfsd/3   206:15  0.00%  0.00% nfsd
14626 root      85    0    68M 6204K select/5  28:43  0.00%  0.00% bacula-fd
15556 bacula-s  43    0    64M 6516K parked/1  18:05  0.00%  0.00% bacula-sd
  467 root      95  -20    35M 4852K select/0   6:48  0.00%  0.00% openvpn
 1339 root      85    0    23M 1816K select/0   5:21  0.00%  0.00% rpc.lockd
  494 root      85    0    36M 2352K kqueue/3   3:29  0.00%  0.00% syslogd
   98 root      43    0   162M  117M parked/3   3:22  0.00%  0.00% squid
15657 root      43    0   243M   72M parked/3   1:44  0.00%  0.00% named
 2840 mysql     43    0   590M  198M parked/0   1:12  0.00%  0.00% mysqld
  474 root      95  -20    35M 4904K select/0   0:25  0.00%  0.00% openvpn
 1949 root      85    0    30M   16M pause/0    0:23  0.00%  0.00% ntpd
 2922 root      85    0    53M   12M select/4   0:21  0.00%  0.00% perl
  697 root      85    0    20M 1768K select/0   0:20  0.00%  0.00% rpcbind
  722 root      85    0    21M 1880K select/0   0:19  0.00%  0.00% ypserv
 2355 root      85    0    17M 1360K nanosl/6   0:19  0.00%  0.00% estd
 2462 pgsql     85    0    76M 4404K select/0   0:15  0.00%  0.00% postgres
 2775 root      85    0   219M   14M select/1   0:13  0.00%  0.00% httpd
  652 root      85    0    21M 1484K select/6   0:12  0.00%  0.00% ypbind
 7831 root      85    0    20M 2456K biowai/1   0:11  0.00%  0.00% dd
 1380 pgsql     85    0   218M 8880K select/2   0:10  0.00%  0.00% postgres

	I have tried to write 10 GB and the throughput falls to 9 MB/s.

/mnt is now unmounted.

legendre# dd  if=/dev/sd1a of=/dev/null bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 110.914 secs (9453955 bytes/sec)
legendre# dd  if=/dev/rsd1a of=/dev/null bs=1m count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 3.503 secs (299336568 bytes/sec)
legendre# dd  if=/dev/rsd1a of=/dev/null bs=1m count=10000
10000+0 records in
10000+0 records out
10485760000 bytes transferred in 407.568 secs (25727633 bytes/sec)

	The raw device is faster than the block one, but the results are not
very different for large transfers.

> The low bandwith is something specific to your setup, I think.
> Do you have any firewalling/filtering being done on legendre on wm0?

	Nope. This server is built with ALTQ and NPF, but I have not installed
any special setup on wm0.
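
	To rule NPF out completely for a re-test, something like this should be
enough (a sketch, see npfctl(8)):

legendre# npfctl show          # look for rules matching wm0
legendre# npfctl stop          # temporarily disable filtering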

> And I would have said this was a bit odd too, but, since the Linux
> client is able to pull 100MBPS through legendre, I guess it is OK:

	I suppose so, too.

> legendre:[~] > ifconfig wm0
> wm0: flags=0x8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 9000
>         capabilities=7ff80<TSO4,IP4CSUM_Rx,IP4CSUM_Tx,TCP4CSUM_Rx>
>         capabilities=7ff80<TCP4CSUM_Tx,UDP4CSUM_Rx,UDP4CSUM_Tx,TCP6CSUM_Rx>
>         capabilities=7ff80<TCP6CSUM_Tx,UDP6CSUM_Rx,UDP6CSUM_Tx,TSO6>
>         enabled=0
>         ec_capabilities=17<VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,EEE>
>         ec_enabled=2<VLAN_HWTAGGING>
>         address: b4:96:91:92:77:6e
>         media: Ethernet autoselect (1000baseT full-duplex)
>         status: active
>         inet 192.168.12.1/24 broadcast 192.168.12.255 flags 0x0
>         inet6 fe80::b696:91ff:fe92:776e%wm0/64 flags 0x0 scopeid 0x1
> 
> Look at the `enabled' and `ec_enabled' capabilities. Most of them are
> off. Not sure if they make much of a difference...
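
	If it turns out to matter, those offload capabilities can be toggled
with ifconfig (a sketch; see ifconfig(8) and wm(4) for the exact capability
keywords, and /etc/ifconfig.wm0 to make it persistent):

legendre# ifconfig wm0 ip4csum tcp4csum udp4csum tso4
legendre# ifconfig wm0 | grep enabled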

	JB

