Subject: about untar performance on -current
To: None <tech-kern@netbsd.org>
From: enami tsugutomo <enami@sm.sony.co.jp>
List: tech-kern
Date: 04/12/2001 19:12:27
One reason tar spends more time extracting files these days is that
it ends up waiting for the last page to be written to disk when a
fragment is relocated.  Look at the appended uvm history dump:
uvn_findpage, called inside ubc_fault, waits for the page 0xc05f9b80,
which was written async in ufs_balloc_range.  (In the dump,
uvn_findpage#42027 is called at .412049 but only reports the page
found at .413175, after uvm_aio_aiodone#865 unbusies it.)
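
The stall is the busy-page check in uvn_findpage.  Roughly, from
memory (so take the exact flag and wmesg names with a grain of salt),
the path that blocks looks like:

	/* sys/uvm/uvm_vnode.c, uvn_findpage(); paraphrase, not verbatim */
	if ((pg->flags & PG_BUSY) != 0 && (flags & UFP_NOWAIT) == 0) {
		/*
		 * Page is still busy with the async write queued by
		 * ufs_balloc_range; sleep until uvm_aio_aiodone()
		 * unbusies it and does the wakeup.
		 */
		pg->flags |= PG_WANTED;
		UVM_UNLOCK_AND_WAIT(pg, &uobj->vmobjlock, 0,
		    "uvn_fp2", 0);
		simple_lock(&uobj->vmobjlock);
		goto again;		/* retry the lookup */
	}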

At least for async I/O, we don't need to flush pages in
ufs_balloc_range, do we?

With the appended patch, the elapsed time to extract the mozilla
source is no longer so slow compared to 1.5, as listed below.

enami.

987040683.411236 ffs_write#1098: called!
987040683.411242 ffs_write#1098: vp 0xc951fb60 off 0xf800 len 0x1000
987040683.411256 ufs_balloc_range#1371: called!
987040683.411264 ufs_balloc_range#1371: vp 0xc951fb60 off 0xf800 len 0x800 u_size 0xf800
987040683.411281 genfs_getpages#9125: called!
987040683.411288 genfs_getpages#9125: vp 0xc951fb60 off 0x0/e000 count 2
987040683.411302 uvn_findpage#42025: called!
987040683.411308 uvn_findpage#42025: vp 0xc951fb60 off 0xe000
987040683.411316 uvn_findpage#42025: found
987040683.411322 uvn_findpage#42026: called!
987040683.411328 uvn_findpage#42026: vp 0xc951fb60 off 0xf000
987040683.411334 uvn_findpage#42026: found
987040683.411341 genfs_getpages#9125: returning cached pages
987040683.411350 genfs_getpages#9126: called!
987040683.411356 genfs_getpages#9126: vp 0xc951fb60 off 0x0/10000 count 16
987040683.411361 genfs_getpages#9126: off 0x10000 count 16 goes past EOF 0xf800
987040683.411370 genfs_getpages#9127: called!
987040683.411375 genfs_getpages#9127: vp 0xc951fb60 off 0x0/20000 count 16
987040683.411381 genfs_getpages#9127: off 0x20000 count 16 goes past EOF 0xf800
987040683.411388 genfs_getpages#9125: succeeding, npages 2
987040683.411395 genfs_getpages#9125: examining pg 0xc05f9b40 flags 0x1d
987040683.411405 genfs_getpages#9125: examining pg 0xc05f9b80 flags 0x1d
987040683.411412 ufs_balloc_range#1371: got pgs1[0] 0xc05f9b40
987040683.411419 ufs_balloc_range#1371: got pgs1[1] 0xc05f9b80
987040683.411436 ffs_balloc#1387: called!
987040683.411445 ffs_balloc#1387: vp 0xc951fb60 lbn 0x7 size 0x2000
987040683.411523 uvm_vnp_setsize#3349: called!
987040683.411529 uvm_vnp_setsize#3349: old 0xf800 new 0x10000
987040683.411539 uvm_page_unbusy#2678: called!
987040683.411546 uvm_page_unbusy#2678: unbusying pg 0xc05f9b40
987040683.411553 uvm_page_unbusy#2678: unbusying pg 0xc05f9b80
987040683.411600 uvm_pager_put#828: called!
987040683.411629 genfs_putpages#828: called!
987040683.411635 genfs_putpages#828: vp 0xc951fb60 offset 0xe000 count 2
987040683.411759 genfs_putpages#828: vp 0xc951fb60 mbp 0xc0796068 num now 2 bytes 0x2000
987040683.411777 genfs_putpages#828: vp 0xc951fb60 offset 0xe000 bcount 0x2000 blkno 0x11f60
987040683.411862 genfs_putpages#828: returning 0 (async)
987040683.411869 uvm_pager_put#828: put -> 0
987040683.411891 ubc_alloc#5071: called!
987040683.411898 ubc_alloc#5071: uobj 0xc951fb60 offset 0xf800 len 0x800 filesize 0x10000
987040683.411910 ubc_alloc#5071: umap 0xc06fa1bc refs 1 va 0xc9230000
987040683.411990 ubc_fault#1637: called!
987040683.411997 ubc_fault#1637: va 0xc9231000 ubc_offset 0x75b000 at 3
987040683.412005 ubc_fault#1637: slot_offset 0x1000 writeoff 0x1800 writelen 0x800 u_size 0x10000
987040683.412012 ubc_fault#1637: NOT setting PGO_OVERWRITE
987040683.412019 ubc_fault#1637: getpages vp 0xc951fb60 offset 0xf000 npages 1
987040683.412026 genfs_getpages#9128: called!
987040683.412032 genfs_getpages#9128: vp 0xc951fb60 off 0x0/f000 count 1
987040683.412042 uvn_findpage#42027: called!
987040683.412049 uvn_findpage#42027: vp 0xc951fb60 off 0xf000
987040683.412977 uvm_aio_aiodone#865: called!
987040683.412984 uvm_aio_aiodone#865: bp 0xc0796068
987040683.413002 uvm_aio_aiodone#865: pgs[0] = 0xc05f9b40
987040683.413009 uvm_aio_aiodone#865: pgs[1] = 0xc05f9b80
987040683.413131 uvm_page_unbusy#2679: called!
987040683.413138 uvm_page_unbusy#2679: unbusying pg 0xc05f9b40
987040683.413144 uvm_page_unbusy#2679: unbusying pg 0xc05f9b80
987040683.413175 uvn_findpage#42027: found
987040683.413181 genfs_getpages#9128: returning cached pages
987040683.413187 genfs_getpages#9128: succeeding, npages 2
987040683.413193 genfs_getpages#9128: examining pg 0xc05f9b80 flags 0x1d
987040683.413201 ubc_fault#1637: getpages error 0 npages 1
987040683.413207 ubc_fault#1637: va 0xc9231000 eva 0xc9232000
987040683.413214 ubc_fault#1637: pgs[0] = 0xc05f9b80
987040683.413242 ubc_release#5071: called!
987040683.413249 ubc_release#5071: va 0xc9231800
987040683.413256 ubc_release#5071: umap 0xc06fa1bc refs 0
987040683.413297 uvm_pager_put#829: called!
987040683.413319 genfs_putpages#829: called!
987040683.413325 genfs_putpages#829: vp 0xc951fb60 offset 0x1000 count 1
987040683.413422 genfs_putpages#829: vp 0xc951fb60 mbp 0xc070be38 num now 2 bytes 0x1000
987040683.413432 genfs_putpages#829: vp 0xc951fb60 offset 0x1000 bcount 0x1000 blkno 0x11ef8
987040683.413471 genfs_putpages#829: returning 0 (async)
987040683.413478 uvm_pager_put#829: put -> 0
987040683.413489 uvm_pager_put#830: called!
987040683.413512 genfs_putpages#830: called!
987040683.413518 genfs_putpages#830: vp 0xc951fb60 offset 0x3000 count 1
987040683.413619 genfs_putpages#830: vp 0xc951fb60 mbp 0xc070b340 num now 4 bytes 0x1000
987040683.413629 genfs_putpages#830: vp 0xc951fb60 offset 0x3000 bcount 0x1000 blkno 0x11f08
987040683.413639 genfs_putpages#830: returning 0 (async)
987040683.413645 uvm_pager_put#830: put -> 0
987040683.413657 uvm_pager_put#831: called!
987040683.413678 genfs_putpages#831: called!
987040683.413684 genfs_putpages#831: vp 0xc951fb60 offset 0x5000 count 1
987040683.413783 genfs_putpages#831: vp 0xc951fb60 mbp 0xc070b270 num now 6 bytes 0x1000


Index: ufs_inode.c
===================================================================
RCS file: /cvsroot/syssrc/sys/ufs/ufs/ufs_inode.c,v
retrieving revision 1.22
diff -u -r1.22 ufs_inode.c
--- ufs_inode.c	2001/02/27 02:55:40	1.22
+++ ufs_inode.c	2001/04/12 09:29:40
@@ -286,11 +286,15 @@
 		/*
 		 * The data in the frag might be moving to a new disk location.
 		 * We need to flush pages to the new disk locations.
+		 *
+		 * XXX.  For now, we write the whole filesystem block.
+		 * See the comment in {ffs,lfs}_write in
+		 * ufs_readwrite.c.
 		 */
-
-		(uobj->pgops->pgo_flush)(uobj, oldeof & ~(bsize - 1),
-		    MIN((oldeof + bsize) & ~(bsize - 1), neweof),
-		    PGO_CLEANIT | ((flags & B_SYNC) ? PGO_SYNCIO : 0));
+		if ((flags & B_SYNC) != 0)
+			(*uobj->pgops->pgo_flush)(uobj, oldeof & ~(bsize - 1),
+			    (oldeof + bsize) & ~(bsize - 1),
+			    PGO_CLEANIT | PGO_SYNCIO);
 	}
 	if (pgs2[0] != NULL) {
 		uvm_page_unbusy(pgs2, npages2);
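
To see what that flush covers with the numbers from the dump above:
the old EOF is 0xf800 and, judging from the ffs_balloc#1387 trace,
the filesystem block size is 0x2000, so the rounding yields exactly
the range genfs_putpages#828 reports.  A standalone sketch of the
arithmetic (not kernel code):

	#include <stdio.h>

	int
	main(void)
	{
		unsigned long oldeof = 0xf800;	/* u_size before the write */
		unsigned long bsize = 0x2000;	/* fs block size; assumed
						   from ffs_balloc#1387 */
		unsigned long start, end;

		start = oldeof & ~(bsize - 1);		/* rounds down: 0xe000 */
		end = (oldeof + bsize) & ~(bsize - 1);	/* rounds up: 0x10000 */

		/* Matches genfs_putpages#828: offset 0xe000, bytes 0x2000. */
		printf("flush range [%#lx, %#lx)\n", start, end);
		return 0;
	}

The page at offset 0xf000 (pg 0xc05f9b80) falls inside that range,
which is why the ubc_fault above has to wait for it when the flush
is issued async.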


Some measurements.  All files are on the same NFS server.

[ppro200, 64M, 1.5]
enami@backup% csh -c 'time tar xzf /ccd0/NetBSD/pkgsrc/distfiles/mozilla-source-0.8.1.tar.bz2'
129.1u 33.2s 9:12.87 29.3% 0+0k 12553+59507io 65pf+0w

[ppro200, 96M, -current with the patch]
enami@annex-2f-floor-217% csh -c 'time tar xzf /b/pkgsrc/distfiles/mozilla-source-0.8.1.tar.bz2'
124.2u 72.9s 11:26.30 28.7% 0+0k 2724+18843io 33pf+0w

[p-iii 1g, 384M, -current]
enami@enxio% csh -c 'time tar xzf /sink/NetBSD/pkgsrc/distfiles/mozilla-source-0.8.1.tar.bz2'
36.5u 10.3s 10:46.81 7.2% 0+0k 1541+20597io 36pf+0w

[p-iii 1g, 384M, -current with the patch]
enami@enxio% csh -c 'time tar xzf /sink/NetBSD/pkgsrc/distfiles/mozilla-source-0.8.1.tar.bz2'
37.4u 8.4s 6:09.65 12.4% 0+0k 1552+16224io 36pf+0w