tech-kern archive


Re: Very slow transfers to/from micro SD card on a RPi B+





2015-08-18 13:06 GMT+02:00 J. Hannken-Illjes <hannken%eis.cs.tu-bs.de@localhost>:
On 18 Aug 2015, at 12:44, Stephan <stephanwib%googlemail.com@localhost> wrote:

> 2015-08-17 21:30 GMT+02:00 Michael van Elst <mlelstv%serpens.de@localhost>:
> stephanwib%googlemail.com@localhost (Stephan) writes:
>
> >I have just rebooted with WAPBL enabled. Some quick notes:
>
> >-Sequential write speed is a little lower, around 5.4 MB/s.
>
>
> WAPBL is rather slow on SD cards because SD cards are very slow
> when writing small chunks. So even when WAPBL excels, like unpacking
> lots of files or removing a directory tree, it is slow because the
> sequential journal is written in small blocks.

The journal is written in chunks of MAXPHYS (64k) bytes.
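
As a rough illustration of that small-chunk penalty, here is a sketch (not from the original thread; the scratch-file path is an assumption and should point at something on the SD card) that compares synchronous write throughput at a few chunk sizes:

------8<---------------------------------------------------
/* blkbench.c - rough sketch: compare O_SYNC write throughput at
 * different chunk sizes, to show why small journal writes hurt on SD.
 * Compile: cc -o blkbench blkbench.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define TOTAL (4 * 1024 * 1024)   /* write 4 MB per chunk size */

static double now(void)
{
  struct timeval tv;
  gettimeofday(&tv, NULL);
  return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(int argc, char **argv)
{
  const size_t sizes[] = { 4096, 16384, 65536 };  /* 4k, 16k, MAXPHYS */
  char *buf;
  int fd, i;

  if (argc < 2) {
    fprintf(stderr, "Usage: %s <scratch-file-on-sd>\n", argv[0]);
    return 1;
  }

  buf = malloc(65536);
  if (buf == NULL)
    return 1;
  memset(buf, 0xa5, 65536);

  for (i = 0; i < 3; i++) {
    size_t chunk = sizes[i], done = 0;
    double t0, t1;

    /* O_SYNC so every write really hits the card, like journal writes */
    fd = open(argv[1], O_WRONLY | O_CREAT | O_TRUNC | O_SYNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    t0 = now();
    while (done < TOTAL) {
      if (write(fd, buf, chunk) != (ssize_t)chunk) { perror("write"); return 1; }
      done += chunk;
    }
    t1 = now();
    close(fd);

    printf("%6zu-byte chunks: %.2f MB/s\n", chunk, (TOTAL / 1e6) / (t1 - t0));
  }

  free(buf);
  return 0;
}
=============================

On a card that handles small writes poorly, the 4k figure should come out well below the 64k one.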

> That might be all right. However, creating many files gets slower the more files have already been created. I see that on all kinds of devices.
>
> This is from an amd64 server box with an aac raid controller.
>
> /root/test/files> time seq 1 10000|xargs touch
>     3.10s real     0.01s user     3.07s system
> /root/test/files> rm *
> /root/test/files> time seq 1 20000|xargs touch
>     9.88s real     0.01s user     8.51s system
> /root/test/files> rm *
> /root/test/files> time seq 1 30000|xargs touch
>    23.45s real     0.04s user    20.41s system
> /root/test/files> time seq 1 40000|xargs touch
>    43.35s real     0.05s user    38.32s system
>
> That is clearly not linear: going from 10000 to 40000 files makes the run roughly 14 times slower, close to the 16x a quadratic cost would predict.

I'm quite sure this is the memcmp in ufs_lookup.c:390.

For every file we have to compare its name to all names in the
directory so far leading to 0.5*n*(n-1) calls to memcmp.

And our memcmp is sloooow ...
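
To get a feel for how that scales, a toy userland model of the same linear name scan (just a sketch, not the actual kernel code from ufs_lookup.c) can time the comparisons on their own:

------8<---------------------------------------------------
/* scanmodel.c - toy model of the linear directory scan: before
 * "creating" entry i, compare its name against all i existing names,
 * as ufs_lookup() does when checking that the name is not yet present.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
  int n = (argc > 1) ? atoi(argv[1]) : 40000;
  char (*names)[16];
  struct timeval t0, t1;
  unsigned long matches = 0;
  int i, j;

  names = malloc((size_t)n * sizeof(*names));
  if (names == NULL)
    return 1;

  gettimeofday(&t0, NULL);
  for (i = 0; i < n; i++) {
    char name[16] = { 0 };
    snprintf(name, sizeof(name), "%d", i);

    /* linear scan: compare the new name against every existing entry */
    for (j = 0; j < i; j++)
      if (memcmp(names[j], name, sizeof(name)) == 0)
        matches++;

    memcpy(names[i], name, sizeof(name));
  }
  gettimeofday(&t1, NULL);

  printf("%d creates, %lu duplicates, %.2f s spent scanning names\n",
         n, matches,
         (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
  free(names);
  return 0;
}
=============================

For n = 40000 that is about 8 * 10^8 calls to memcmp, which is consistent with the jump from roughly 3 s at 10000 files to 43 s at 40000 files in the touch runs above.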

Good hint anyway. I wrote this hacky tool to "measure" the performance of the call to open(..., O_CREAT):

------8<---------------------------------------------------
#include <fcntl.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv) {
  struct timeval tv_start, tv_finish;
  long duration;
  int i;
  char filename[16];

  if (argc < 2) { printf("Usage: %s <count>\n", argv[0]); exit(1); }
  int howmany = atoi(argv[1]);

  for (i = 0; i < howmany; i++)
  {
    snprintf(filename, sizeof(filename), "%d", i);

    /* time only the open(2) with O_CREAT, not the close(2) */
    gettimeofday(&tv_start, NULL);
    int fd = open(filename, O_CREAT, 0644);
    gettimeofday(&tv_finish, NULL);
    close(fd);

    /* elapsed time in microseconds */
    duration = (tv_finish.tv_sec - tv_start.tv_sec) * 1000000L
             + (tv_finish.tv_usec - tv_start.tv_usec);
    printf("Duration: %ld\n", duration);
  }

  return 0;
}

=============================


Creating some files in an almost empty directory is very quick:

/root/test> ./test 5
Duration: 35
Duration: 14
Duration: 15
Duration: 12
Duration: 11

Doing that in a directory that already contains many files is much slower:

/root/test/files> ls | wc -w
   59995
/root/test/files> ../test 5
Duration: 787
Duration: 715
Duration: 711
Duration: 715
Duration: 711

Reopening files, however, is always fast:

/root/test/files> ../test 5
Duration: 5
Duration: 4
Duration: 2
Duration: 3
Duration: 2


I also checked this on a Solaris box, which doesn't show this kind of issue.



--
J. Hannken-Illjes - hannken%eis.cs.tu-bs.de@localhost - TU Braunschweig (Germany)



