NetBSD-Bugs archive


Re: kern/54818: 9.0_RC1 pagedaemon spins



>>   buf16k 16384 1411 0 1270 241 205 36 93 1 1 0
>>   buf1k 1024 2 0 2 1 0 1 1 1 1 1
>>   buf2k 2048 9 0 9 5 4 1 5 1 1 1
>>   buf32k 32768 223292 0 193503 92993 73624 19369 37065 1 1 0
>>   buf4k 4096 491370 0 391560 491371 391560 99811 179873 1 1 1
>>   buf64k 65536 4 0 0 5 0 5 5 1 1 1
>>   buf8k 8192 1865 0 1613 160 128 32 63 1 1 0
>>   bufpl 288 210502 0 80506 15026 0 15026 15026 0 inf 111
>
>>>>> these are very interesting:
>
> These are the quantum caches for allocating kernel virtual address space.
>
> No 4k allocations, as the direct map is used (that's expected) and most
> pools have a pool page size of 4k,
> but there are a lot of 64k allocations with a backing pool page size of 256k.
>
> That is 64k * 63924 = 4091136 kB worth of allocations
> (15981 pool pages of 256k each),
> and no releases at all, which looks like some leak to me.
>
> Does that happen when starting X?

No.  It typically happens after a few days of running.
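
For what it's worth, the arithmetic above does check out; here is a
standalone sanity-check sketch (plain user-space C, figures taken from the
kva-65536 line of the pool output, not kernel code):

#include <stdio.h>

int
main(void)
{
    /* kva-65536 line: 63924 requests, 0 releases, 15981 pool pages. */
    unsigned long allocs = 63924;   /* 64 kB KVA allocations, never freed */
    unsigned long pages  = 15981;   /* backing pool pages of 256 kB each  */

    printf("allocated: %lu kB\n", allocs * 64);   /* 4091136 kB */
    printf("backing:   %lu kB\n", pages * 256);   /* 4091136 kB */
    return 0;
}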

> Seems to be an Intel drmkms, judging from the list of pools.

Correct:

i915drmkms0 at pci0 dev 2 function 0: vendor 8086 product 0412 (rev. 0x06)
drm: Memory usable by graphics device = 2048M
drm: Supports vblank timestamp caching Rev 2 (21.10.2013).
drm: Driver supports precise vblank timestamp query.
i915drmkms0: interrupting at ioapic0 pin 16 (i915)
intelfb0 at i915drmkms0
i915drmkms0: info: registered panic notifier
i915drmkms0: More than 8 outputs detected via ACPI
intelfb0: framebuffer at 0xffff80013bc7d000, size 1920x1200, depth 32, stride 7680
wsdisplay0 at intelfb0 kbdmux 1: console (default, vt100 emulation)
wsmux1: connecting to wsdisplay0

> The kmem arena is most likely a bit more than the ~4g mentioned above, as the
> machine seems to have 16GB?

Physical RAM is 16GB, yes.

> It should be the second entry of the output of "pmap 0".

Currently, the first three lines of output from "pmap 0" are:

FFFF800000000000 473672K read/write/exec     [ anon ]
FFFF80001CE92000 4166220K read/write/exec     [ anon ]
FFFF80011B325000 524288K read/write/exec     [ pager_map ]
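
Assuming that second entry really is the kmem arena, it is only slightly
larger than the ~4 GB apparently sitting in the 64k quantum cache.  A
trivial comparison, again just a user-space sketch with the figures taken
from the outputs above:

#include <stdio.h>

int
main(void)
{
    unsigned long arena_kb = 4166220;        /* second "pmap 0" entry      */
    unsigned long kva64_kb = 63924UL * 64;   /* kva-65536: 63924 * 64 kB   */

    printf("kmem arena %lu kB, 64k quantum cache %lu kB, headroom %lu kB\n",
        arena_kb, kva64_kb, arena_kb - kva64_kb);
    return 0;
}

That leaves only about 75000 kB of headroom, which would be consistent with
the arena being close to exhausted by the time the pagedaemon starts
spinning.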

>>   kva-12288 12288 35 0 0 2 0 2 2 0 inf 0
>>   kva-16384 16384 17 0 0 2 0 2 2 0 inf 0
>>   kva-20480 20480 84 0 0 7 0 7 7 0 inf 0
>>   kva-24576 24576 9 0 0 1 0 1 1 0 inf 0
>>   kva-28672 28672 3 0 0 1 0 1 1 0 inf 0
>>   kva-32768 32768 1 0 0 1 0 1 1 0 inf 0
>>   kva-36864 36864 3 0 0 1 0 1 1 0 inf 0
>>   kva-40960 40960 108 0 0 18 0 18 18 0 inf 0
>>   kva-49152 49152 1 0 0 1 0 1 1 0 inf 0
>>   kva-65536 65536 63924 0 0 15981 0 15981 15981 0 inf 0
>>   kva-8192 8192 52 0 0 2 0 2 2 0 inf 0
>
> ...
> I'm not aware of any pool that allocates from the 64k quantum cache, so
> it doesn't surprise me that the pagedaemon/pool_drain
> isn't able to free anything.

Hm.  I know too little about the mechanism, but having something
allocating from it and nothing releasing it when there's memory pressure
looks like a recipe for disaster.
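
To make that concrete, here is a toy user-space analogy (not the actual
vmem(9)/pool(9) code paths): if 64 kB slots are handed out from 256 kB
pages and nothing is ever returned, every page keeps live objects and a
drain pass finds nothing it can give back:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_KB 256                 /* backing pool page size */
#define SLOT_KB 64                  /* quantum cache object size */
#define SLOTS   (PAGE_KB / SLOT_KB)

int
main(void)
{
    size_t npages = 0, live = 0, reclaimable = 0, i;
    unsigned *used = NULL;          /* live slots per page */

    /* Simulate 63924 allocations that are never freed. */
    for (i = 0; i < 63924; i++) {
        if (npages == 0 || used[npages - 1] == SLOTS) {
            used = realloc(used, (npages + 1) * sizeof(*used));
            if (used == NULL)
                return 1;
            used[npages++] = 0;
        }
        used[npages - 1]++;
        live++;
    }

    /* "Drain": only pages with zero live slots could be returned. */
    for (i = 0; i < npages; i++)
        if (used[i] == 0)
            reclaimable++;

    printf("%zu pages in use, %zu live objects, %zu reclaimable pages\n",
        npages, live, reclaimable);
    free(used);
    return 0;
}

With the figures from above, that ends up as 15981 pages in use and 0
reclaimable, which is exactly what the kva-65536 line shows.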

Regards,

- Havard

