Re: [suspend/resume] memory_op hypercall failure: XENMEM_maximum_gpfn
Jean-Yves Migeon wrote:
I investigated this matter a bit further: it looks like the architecture-
dependent part of Xen's shared_info struct should be "updated" by the
guest upon start up, by filling in the
shared_info->arch.pfn_to_mfn_frame_list_list element with a list of
frames that itself points to the list of frames making up the entire p2m
table (a two-level structure; I would say it mimics the PD => PT mechanism).
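If that understanding is right, the "update" would look roughly like the
sketch below. This is only an illustration of my reading, not NetBSD code:
p2m[], nr_pages, virt_to_mfn() and publish_p2m() are names made up for the
example, and the include path is only indicative.

/*
 * Minimal sketch, assuming:
 *  - p2m[] is the guest's pfn -> mfn array (nr_pages entries),
 *  - virt_to_mfn() returns the machine frame backing a virtual address,
 *  - HYPERVISOR_shared_info points at the mapped shared_info page.
 */
#include <xen/xen.h>	/* shared_info_t; include path illustrative */

#ifndef PAGE_SIZE
#define PAGE_SIZE	4096
#endif
#define FPP	(PAGE_SIZE / sizeof(unsigned long))	/* frame entries per page */

extern unsigned long *p2m;			/* pfn -> mfn table */
extern unsigned long nr_pages;			/* number of pseudo-physical pages */
extern shared_info_t *HYPERVISOR_shared_info;
extern unsigned long virt_to_mfn(void *);	/* assumed helper */

/* Level 1: machine frames of the pages holding the p2m table
 * (one page of entries is enough for this sketch). */
static unsigned long p2m_frame_list[FPP] __attribute__((aligned(PAGE_SIZE)));
/* Level 2: machine frames of the pages holding the level-1 list. */
static unsigned long p2m_frame_list_list[FPP] __attribute__((aligned(PAGE_SIZE)));

void
publish_p2m(void)
{
	unsigned long pfn, i, j;

	for (pfn = 0, i = 0; pfn < nr_pages; pfn += FPP, i++)
		p2m_frame_list[i] = virt_to_mfn(&p2m[pfn]);

	for (j = 0; j * FPP < i; j++)
		p2m_frame_list_list[j] = virt_to_mfn(&p2m_frame_list[j * FPP]);

	/* Publish the top of the two-level structure, PD => PT style. */
	HYPERVISOR_shared_info->arch.pfn_to_mfn_frame_list_list =
	    virt_to_mfn(p2m_frame_list_list);
	HYPERVISOR_shared_info->arch.max_pfn = nr_pages;
}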
# xm dump-core 2 /root/core
Error: Failed to dump core: (1, 'Internal error', 'p2m_size < nr_pages
-1 (0 < 1fff)')
- the hypercall APIs speak of a "-ve errcode" on failure, but I cannot
manage to find which error codes they are referring to. Are they the same
as the ones given in the mini-os from xentools
(extras/mini-os/include/errno-base.h)? If yes, -1 indicates an EPERM
error, which is weird for dom0.
- does the XENMEM_maximum_gpfn memory operation require some cooperation
from the guest to obtain the proper value? I would say no; the hypervisor
should be able to compute it by itself. Am I missing something here?
(See the small sketch after this list for how I read the return value.)
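For reference, here is how I currently read the hypercall and its "-ve
errcode" convention. This is only a sketch of my interpretation: it assumes
a mini-os-style HYPERVISOR_memory_op() wrapper and the errno values from
errno-base.h, and max_gpfn_of() is just an example name.

/*
 * Sketch: the argument of XENMEM_maximum_gpfn is the target domid, and
 * the return value is either the highest guest pfn or minus an errno
 * code ("-ve errcode").
 */
#include <xen/xen.h>	/* XENMEM_maximum_gpfn, domid_t; path illustrative */
#include <errno.h>	/* EPERM, ESRCH, ... */

extern long HYPERVISOR_memory_op(unsigned int cmd, void *arg);	/* assumed wrapper */

long
max_gpfn_of(domid_t domid)	/* example helper name */
{
	long ret = HYPERVISOR_memory_op(XENMEM_maximum_gpfn, &domid);

	/*
	 * On failure, -ret is the errno code: with the errno-base.h values
	 * a return of -1 reads as -EPERM.  On success, ret is the highest
	 * guest pfn, and the tools would presumably derive p2m_size as
	 * ret + 1, which is how a failing call could end up reported as
	 * "p2m_size < nr_pages -1 (0 < 1fff)".
	 */
	return ret;
}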
I did not find any part of the NetBSD kernel which uses this kind of
translation table (the relevant parts of pmap that deal with machine
frames use MMU hypercall operations), and the rest just uses
pseudo-physical frame numbers anyway (much like virtual addresses).
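To make clearer what I mean by "MMU hypercall operations": a PTE update is
handed to Xen with machine addresses, so the pfn/mfn translation only
matters at that boundary. A rough sketch, assuming an xpmap_ptom()-style
pa -> machine-address translation (the prototypes and set_pte_machine()
are illustrative, not the exact NetBSD code):

/*
 * Sketch of a PTE update through the MMU hypercall, using machine
 * addresses on both sides of the request.
 */
#include <xen/xen.h>	/* mmu_update_t, MMU_NORMAL_PT_UPDATE, DOMID_SELF */

extern unsigned long xpmap_ptom(unsigned long pa);	/* assumed pa -> machine addr */
extern long HYPERVISOR_mmu_update(mmu_update_t *, unsigned int,
    unsigned int *, domid_t);				/* prototype assumed */

int
set_pte_machine(unsigned long pte_pa, unsigned long page_pa, unsigned long flags)
{
	mmu_update_t req;
	unsigned int done;

	/* ptr: machine address of the PTE to modify (command in low bits). */
	req.ptr = xpmap_ptom(pte_pa) | MMU_NORMAL_PT_UPDATE;
	/* val: machine address of the target page plus protection bits. */
	req.val = xpmap_ptom(page_pa) | flags;

	/* One batched MMU update, applied to our own domain. */
	return HYPERVISOR_mmu_update(&req, 1, &done, DOMID_SELF);
}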
What troubles me is the errno returned by the hypercall I was talking
about in my previous mail (EPERM); IMHO, it has nothing to do with a real
permission problem. Anyway, it is used by dump-core/save/restore, so I am
heading for some pmap work. Does my reasoning look plausible to you or not?