Port-xen archive


Re: Xen balloon driver rewrite



Hi,

On 7 April 2011 18:32, Jean-Yves Migeon <jeanyves.migeon%free.fr@localhost> wrote:
> On Thu, 7 Apr 2011 12:34:00 +0200, Manuel Bouyer <bouyer%antioche.eu.org@localhost> wrote:
>>
>> On Wed, Apr 06, 2011 at 10:21:11AM +0100, Jean-Yves Migeon wrote:
>>>
>>> Hi list,
>>>
>>> So, in an attempt to add most of the missing stuff to the current Xen
>>> balloon driver, I ended up rewriting most of the logic behind it. It's
>>> not yet finished, but really close to it (FYI, I am attaching a
>>> patch). Only the workqueue part remains to be done, which is ~ one or
>>> two hours of coding, then testing. The balloon will be enabled by
>>> default for -6.
>>>
>>> The old design used a dedicated thread to queue balloon operations
>>> and handle inflating/deflating. The "new" driver will instead be
>>> workqueue(9) based, as it simplifies the locking and the handling of
>>> errors from ballooning, especially error feedback from
>>> balloon_thread. On error, I will now simply log it and terminate the
>>> worker.
>>
>> I'm not sure how this is easier than just logging an error
>> and going back to the thread's idle loop.
>
> It's still at the paper-design stage. ATM, I'm still not sure whether
> workqueue(9) is necessary: at the locking level, both will check the same
> "target" variable. The only difference is that workqueue(9) will spawn a
> thread context when necessary, compared to a thread that stays "sleepy"
> most of the time (possibly with a timeout, but that cost rapidly gets
> amortized).
>
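
For concreteness, the workqueue(9) shape being debated would look
roughly like this (a sketch only; balloon_wq, balloon_worker and
balloon_post are names I made up, not taken from the patch):

#include <sys/param.h>
#include <sys/intr.h>
#include <sys/workqueue.h>

static struct workqueue *balloon_wq;
static struct work balloon_wk;

/* Runs in thread context each time a new target is posted. */
static void
balloon_worker(struct work *wk, void *arg)
{
        /* ... inflate/deflate toward the current target here ... */
}

static int
balloon_wq_init(void)
{
        /* IPL_NONE: the handler only ever runs in thread context. */
        return workqueue_create(&balloon_wq, "xenballoon",
            balloon_worker, NULL, PRI_NONE, IPL_NONE, 0);
}

/* Called from the xenstore watcher when the target changes. */
static void
balloon_post(void)
{
        /*
         * Note: workqueue(9) forbids re-enqueueing a struct work that
         * is still pending, so a real driver has to track pending-ness
         * itself.
         */
        workqueue_enqueue(balloon_wq, &balloon_wk, NULL);
}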

If I understand the design correctly, this effectively uses a queue to
serialise inflate/deflate requests. Is this really required? For
example: if an alloc request, say "inflate by 256M", is followed by
"deflate by 256M", would both the inflate and the deflate occur in
series (which would make the domU uvm thrash swap unnecessarily)?

I'm not sure that a single kernel thread context is a lot of overhead.
Part of my design motivation was to be gentle with the VM system,
i.e., to minimise the rate of "spikes" in memory alloc/de-alloc.
I'm also a little concerned that rapid-fire balloon change requests
could overflow the workqueue. Have you checked for this case?
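
FWIW, one way to sidestep both the serialisation and the overflow
question is to never queue deltas at all: keep a single absolute target
that the xenstore watcher overwrites and the worker chases. A rough
sketch (all names here, e.g. balloon_reached() and
balloon_adjust_toward(), are illustrative, not from your patch):

#include <sys/types.h>
#include <sys/mutex.h>
#include <sys/condvar.h>

static kmutex_t balloon_lock;
static kcondvar_t balloon_cv;
static uint64_t balloon_target;         /* pages this domain should own */

/* Hypothetical helpers: current page count, and the actual inflate/deflate. */
static uint64_t balloon_reached(void);
static void balloon_adjust_toward(uint64_t);

/* Watcher path: overwrite the target; older requests are superseded. */
static void
balloon_set_target(uint64_t newtarget)
{
        mutex_enter(&balloon_lock);
        balloon_target = newtarget;
        cv_signal(&balloon_cv);
        mutex_exit(&balloon_lock);
}

/* Worker: one pass per wakeup, always toward the most recent target. */
static void
balloon_thread(void *arg)
{
        uint64_t target;

        for (;;) {
                mutex_enter(&balloon_lock);
                while (balloon_target == balloon_reached())
                        cv_wait(&balloon_cv, &balloon_lock);
                target = balloon_target;
                mutex_exit(&balloon_lock);

                /*
                 * "inflate by 256M" immediately followed by "deflate by
                 * 256M" leaves target == reached, so neither pass runs
                 * (unless the worker woke up in between).
                 */
                balloon_adjust_toward(target);
        }
}

With a single target there is nothing to overflow, and a request that
is cancelled before the worker wakes up never touches uvm at all.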


> The workqueue(9) has one advantage: I can simply return from the worker if
> it fails to allocate memory, while for a thread I have to implement two
> levels of locking: one to protect the ``target'' variable from being
> overwritten concurrently between the feedback operation (in balloon_thread)
> and xenstore_watcher(), and one to wake up balloon_thread.
>
> At a lower level though, I wonder what would happen in such a scenario
> with memoryallocators(9):
> - suppose I want to inflate the balloon, which will decrease the domain's
> available memory. It might eventually start sleeping for certain
> allocations (see kmem_alloc() in reserve_pages() [1])
> - the balloon_thread() sleeps
> - now I want to deflate the balloon, and give memory back to the domain
> => how am I supposed to wake up the balloon_thread, which is currently
> sleeping? Are kmem_alloc() calls interruptible when using KM_SLEEP?
>

You're right, reserve_pages() shouldn't sleep.

The most obvious option that comes to mind is to use a pool. The
allocation is quite small, so it shouldn't add much overhead.
OTOH, if an allocation that small is failing, memory pressure is
already severe, so I think KM_NOSLEEP would be the more apt design,
and the driver should refuse to inflate.
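
Roughly along these lines (a sketch only; npages and the mfn_list
bookkeeping are made up for illustration):

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/kmem.h>

static int
reserve_pages(size_t npages)
{
        unsigned long *mfn_list;

        /* KM_NOSLEEP: never block; under pressure, refuse to inflate. */
        mfn_list = kmem_alloc(npages * sizeof(*mfn_list), KM_NOSLEEP);
        if (mfn_list == NULL)
                return ENOMEM;  /* caller logs it, target stays unmet */

        /* ... hand the underlying pages back to the hypervisor ... */

        kmem_free(mfn_list, npages * sizeof(*mfn_list));
        return 0;
}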

Thanks for working on this!

Cheers,
-- 
~Cherry

