pkgsrc-Changes-HG archive
[pkgsrc/trunk]: pkgsrc/sysutils/xenkernel411 Apply available security patches...
details: https://anonhg.NetBSD.org/pkgsrc/rev/dbb4c408003c
branches: trunk
changeset: 387950:dbb4c408003c
user: bouyer <bouyer%pkgsrc.org@localhost>
date: Wed Nov 28 14:00:49 2018 +0000
description:
Apply available security patches relevant for Xen 4.11, up to XSA282.
Bump PKGREVISION
diffstat:
sysutils/xenkernel411/Makefile | 4 +-
sysutils/xenkernel411/distinfo | 14 +-
sysutils/xenkernel411/patches/patch-XSA269 | 114 +++++++++
sysutils/xenkernel411/patches/patch-XSA275-1 | 106 ++++++++
sysutils/xenkernel411/patches/patch-XSA275-2 | 70 +++++
sysutils/xenkernel411/patches/patch-XSA276-1 | 122 ++++++++++
sysutils/xenkernel411/patches/patch-XSA276-2 | 85 ++++++
sysutils/xenkernel411/patches/patch-XSA277 | 49 ++++
sysutils/xenkernel411/patches/patch-XSA278 | 328 +++++++++++++++++++++++++++
sysutils/xenkernel411/patches/patch-XSA279 | 39 +++
sysutils/xenkernel411/patches/patch-XSA280-1 | 118 +++++++++
sysutils/xenkernel411/patches/patch-XSA280-2 | 143 +++++++++++
sysutils/xenkernel411/patches/patch-XSA282-1 | 149 ++++++++++++
sysutils/xenkernel411/patches/patch-XSA282-2 | 44 +++
14 files changed, 1382 insertions(+), 3 deletions(-)
diffs (truncated from 1455 to 300 lines):
diff -r e8d57dd7e7ad -r dbb4c408003c sysutils/xenkernel411/Makefile
--- a/sysutils/xenkernel411/Makefile Wed Nov 28 12:08:03 2018 +0000
+++ b/sysutils/xenkernel411/Makefile Wed Nov 28 14:00:49 2018 +0000
@@ -1,7 +1,7 @@
-# $NetBSD: Makefile,v 1.2 2018/07/24 17:29:09 maya Exp $
+# $NetBSD: Makefile,v 1.3 2018/11/28 14:00:49 bouyer Exp $
VERSION= 4.11.0
-#PKGREVISION= 4
+PKGREVISION= 1
DISTNAME= xen-${VERSION}
PKGNAME= xenkernel411-${VERSION}
CATEGORIES= sysutils
diff -r e8d57dd7e7ad -r dbb4c408003c sysutils/xenkernel411/distinfo
--- a/sysutils/xenkernel411/distinfo Wed Nov 28 12:08:03 2018 +0000
+++ b/sysutils/xenkernel411/distinfo Wed Nov 28 14:00:49 2018 +0000
@@ -1,10 +1,22 @@
-$NetBSD: distinfo,v 1.1 2018/07/24 13:40:11 bouyer Exp $
+$NetBSD: distinfo,v 1.2 2018/11/28 14:00:49 bouyer Exp $
SHA1 (xen411/xen-4.11.0.tar.gz) = 32b0657002bcd1992dcb6b7437dd777463f3b59a
RMD160 (xen411/xen-4.11.0.tar.gz) = a2195b67ffd4bc1e6fc36bfc580ee9efe1ae708c
SHA512 (xen411/xen-4.11.0.tar.gz) = 33d431c194f10d5ee767558404a1f80a66b3df019012b0bbd587fcbc9524e1bba7ea04269020ce891fe9d211d2f81c63bf78abedcdbe1595aee26251c803a50a
Size (xen411/xen-4.11.0.tar.gz) = 25131533 bytes
SHA1 (patch-Config.mk) = 9372a09efd05c9fbdbc06f8121e411fcb7c7ba65
+SHA1 (patch-XSA269) = baf135f05bbd82fea426a807877ddb1796545c5c
+SHA1 (patch-XSA275-1) = 7097ee5e1c073a0029494ed9ccf8c786d6c4034f
+SHA1 (patch-XSA275-2) = e286286a751c878f5138e3793835c61a11cf4742
+SHA1 (patch-XSA276-1) = 0b1e4b7620bb64f3a82671a172810c12bad91154
+SHA1 (patch-XSA276-2) = ef0e94925f1a281471b066719674bba5ecca8a61
+SHA1 (patch-XSA277) = 845afbe1f1cfdad5da44029f2f3073e1d45ef259
+SHA1 (patch-XSA278) = f344db46772536bb914ed32f2529424342cb81b0
+SHA1 (patch-XSA279) = 6bc022aba315431d916b2d9f6ccd92942e74818a
+SHA1 (patch-XSA280-1) = 401627a7cc80d77c4ab4fd9654a89731467b0bdf
+SHA1 (patch-XSA280-2) = 8317f7d8664fe32a938470a225ebb33a78edfdc6
+SHA1 (patch-XSA282-1) = e790657be970c71ee7c301b7f16bd4e4d282586a
+SHA1 (patch-XSA282-2) = 8919314eadca7e5a16104db1c2101dc702a67f91
SHA1 (patch-xen_Makefile) = 465388d80de414ca3bb84faefa0f52d817e423a6
SHA1 (patch-xen_Rules.mk) = c743dc63f51fc280d529a7d9e08650292c171dac
SHA1 (patch-xen_arch_x86_Rules.mk) = 0bedfc53a128a87b6a249ae04fbdf6a053bfb70b
diff -r e8d57dd7e7ad -r dbb4c408003c sysutils/xenkernel411/patches/patch-XSA269
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel411/patches/patch-XSA269 Wed Nov 28 14:00:49 2018 +0000
@@ -0,0 +1,114 @@
+$NetBSD: patch-XSA269,v 1.1 2018/11/28 14:00:49 bouyer Exp $
+
+From: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+Subject: x86/vtx: Fix the checking for unknown/invalid MSR_DEBUGCTL bits
+
+The VPMU_MODE_OFF early-exit in vpmu_do_wrmsr() introduced by c/s
+11fe998e56 bypasses all reserved bit checking in the general case. As a
+result, a guest can enable BTS when it shouldn't be permitted to, and
+lock up the entire host.
+
+With vPMU active (not a security supported configuration, but useful for
+debugging), the reserved bit checking is broken, caused by the original
+BTS changeset 1a8aa75ed.
+
+From a correctness standpoint, it is not possible to have two different
+pieces of code responsible for different parts of value checking, if
+there isn't an accumulation of bits which have been checked. A
+practical upshot of this is that a guest can set any value it
+wishes (usually resulting in a vmentry failure for bad guest state).
+
+Therefore, fix this by implementing all the reserved bit checking in the
+main MSR_DEBUGCTL block, and removing all handling of DEBUGCTL from the
+vPMU MSR logic.
+
+This is XSA-269
+
+Signed-off-by: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+Reviewed-by: Jan Beulich <jbeulich%suse.com@localhost>
+
+diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
+index 207e2e7..d4444f0 100644
+--- xen/arch/x86/cpu/vpmu_intel.c.orig
++++ xen/arch/x86/cpu/vpmu_intel.c
+@@ -535,27 +535,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content,
+ uint64_t *enabled_cntrs;
+
+ if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+- {
+- /* Special handling for BTS */
+- if ( msr == MSR_IA32_DEBUGCTLMSR )
+- {
+- supported |= IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+- IA32_DEBUGCTLMSR_BTINT;
+-
+- if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+- supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+- IA32_DEBUGCTLMSR_BTS_OFF_USR;
+- if ( !(msr_content & ~supported) &&
+- vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+- return 0;
+- if ( (msr_content & supported) &&
+- !vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+- printk(XENLOG_G_WARNING
+- "%pv: Debug Store unsupported on this CPU\n",
+- current);
+- }
+ return -EINVAL;
+- }
+
+ ASSERT(!supported);
+
+diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
+index 9707514..ae028dd 100644
+--- xen/arch/x86/hvm/vmx/vmx.c.orig
++++ xen/arch/x86/hvm/vmx/vmx.c
+@@ -3032,11 +3032,14 @@ void vmx_vlapic_msr_changed(struct vcpu *v)
+ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
+ {
+ struct vcpu *v = current;
++ const struct cpuid_policy *cp = v->domain->arch.cpuid;
+
+ HVM_DBG_LOG(DBG_LEVEL_MSR, "ecx=%#x, msr_value=%#"PRIx64, msr, msr_content);
+
+ switch ( msr )
+ {
++ uint64_t rsvd;
++
+ case MSR_IA32_SYSENTER_CS:
+ __vmwrite(GUEST_SYSENTER_CS, msr_content);
+ break;
+@@ -3091,16 +3094,26 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
+
+ case MSR_IA32_DEBUGCTLMSR: {
+ int i, rc = 0;
+- uint64_t supported = IA32_DEBUGCTLMSR_LBR | IA32_DEBUGCTLMSR_BTF;
+
+- if ( boot_cpu_has(X86_FEATURE_RTM) )
+- supported |= IA32_DEBUGCTLMSR_RTM;
+- if ( msr_content & ~supported )
++ rsvd = ~(IA32_DEBUGCTLMSR_LBR | IA32_DEBUGCTLMSR_BTF);
++
++ /* TODO: Wire vPMU settings properly through the CPUID policy */
++ if ( vpmu_is_set(vcpu_vpmu(v), VPMU_CPU_HAS_BTS) )
+ {
+- /* Perhaps some other bits are supported in vpmu. */
+- if ( vpmu_do_wrmsr(msr, msr_content, supported) )
+- break;
++ rsvd &= ~(IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
++ IA32_DEBUGCTLMSR_BTINT);
++
++ if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
++ rsvd &= ~(IA32_DEBUGCTLMSR_BTS_OFF_OS |
++ IA32_DEBUGCTLMSR_BTS_OFF_USR);
+ }
++
++ if ( cp->feat.rtm )
++ rsvd &= ~IA32_DEBUGCTLMSR_RTM;
++
++ if ( msr_content & rsvd )
++ goto gp_fault;
++
+ if ( msr_content & IA32_DEBUGCTLMSR_LBR )
+ {
+ const struct lbr_info *lbr = last_branch_msr_get();
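The core of the XSA-269 change above is the "reserved mask" idiom: start from a
mask in which every MSR_DEBUGCTL bit is reserved except the always-valid ones,
un-reserve bits only for features the guest is actually allowed to use, and
reject the write with a single test against that mask. The following is a
minimal standalone C sketch of that idiom, not the Xen code itself: the
constants and the helpers guest_has_bts() and guest_has_rtm() are simplified
stand-ins for Xen's vpmu_is_set() and CPUID policy checks.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEBUGCTL_LBR   (1ULL << 0)   /* last-branch recording, always valid */
#define DEBUGCTL_BTF   (1ULL << 1)   /* single-step on branches, always valid */
#define DEBUGCTL_TR    (1ULL << 6)   /* branch trace messages */
#define DEBUGCTL_BTS   (1ULL << 7)   /* branch trace store */
#define DEBUGCTL_BTINT (1ULL << 8)   /* BTS interrupt */
#define DEBUGCTL_RTM   (1ULL << 15)  /* RTM debugging */

/* Simplified stand-ins for the real feature checks. */
static bool guest_has_bts(void) { return false; }
static bool guest_has_rtm(void) { return true; }

/* Return 0 to accept the write, -1 to inject #GP. */
static int write_debugctl(uint64_t val)
{
    /* Everything is reserved except the always-valid bits... */
    uint64_t rsvd = ~(DEBUGCTL_LBR | DEBUGCTL_BTF);

    /* ...and bits are un-reserved only for features the guest really has. */
    if ( guest_has_bts() )
        rsvd &= ~(DEBUGCTL_TR | DEBUGCTL_BTS | DEBUGCTL_BTINT);
    if ( guest_has_rtm() )
        rsvd &= ~DEBUGCTL_RTM;

    /* One test now covers every bit; no second checker can disagree. */
    if ( val & rsvd )
        return -1;

    /* The actual MSR write / VMCS update would happen here. */
    return 0;
}

int main(void)
{
    printf("LBR write: %d\n", write_debugctl(DEBUGCTL_LBR)); /* accepted */
    printf("BTS write: %d\n", write_debugctl(DEBUGCTL_BTS)); /* rejected */
    return 0;
}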
diff -r e8d57dd7e7ad -r dbb4c408003c sysutils/xenkernel411/patches/patch-XSA275-1
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel411/patches/patch-XSA275-1 Wed Nov 28 14:00:49 2018 +0000
@@ -0,0 +1,106 @@
+$NetBSD: patch-XSA275-1,v 1.1 2018/11/28 14:00:49 bouyer Exp $
+
+From: Roger Pau Monné <roger.pau%citrix.com@localhost>
+Subject: amd/iommu: fix flush checks
+
+Flush checking for AMD IOMMU didn't check whether the previous entry
+was present, or whether the flags (writable/readable) changed in order
+to decide whether a flush should be executed.
+
+Fix this by taking the writable/readable/next-level fields into account,
+together with the present bit.
+
+Along these lines the flushing in amd_iommu_map_page() must not be
+omitted for PV domains. The comment there was simply wrong: Mappings may
+very well change, both their addresses and their permissions. Ultimately
+this should honor iommu_dont_flush_iotlb, but to achieve this
+amd_iommu_ops first needs to gain an .iotlb_flush hook.
+
+Also make clear_iommu_pte_present() static, to demonstrate there's no
+caller omitting the (subsequent) flush.
+
+This is part of XSA-275.
+
+Reported-by: Paul Durrant <paul.durrant%citrix.com@localhost>
+Signed-off-by: Roger Pau Monné <roger.pau%citrix.com@localhost>
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+
+--- xen/drivers/passthrough/amd/iommu_map.c.orig
++++ xen/drivers/passthrough/amd/iommu_map.c
+@@ -35,7 +35,7 @@ static unsigned int pfn_to_pde_idx(unsig
+ return idx;
+ }
+
+-void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long gfn)
++static void clear_iommu_pte_present(unsigned long l1_mfn, unsigned long gfn)
+ {
+ u64 *table, *pte;
+
+@@ -49,23 +49,42 @@ static bool_t set_iommu_pde_present(u32
+ unsigned int next_level,
+ bool_t iw, bool_t ir)
+ {
+- u64 addr_lo, addr_hi, maddr_old, maddr_next;
++ uint64_t addr_lo, addr_hi, maddr_next;
+ u32 entry;
+- bool_t need_flush = 0;
++ bool need_flush = false, old_present;
+
+ maddr_next = (u64)next_mfn << PAGE_SHIFT;
+
+- addr_hi = get_field_from_reg_u32(pde[1],
+- IOMMU_PTE_ADDR_HIGH_MASK,
+- IOMMU_PTE_ADDR_HIGH_SHIFT);
+- addr_lo = get_field_from_reg_u32(pde[0],
+- IOMMU_PTE_ADDR_LOW_MASK,
+- IOMMU_PTE_ADDR_LOW_SHIFT);
+-
+- maddr_old = (addr_hi << 32) | (addr_lo << PAGE_SHIFT);
+-
+- if ( maddr_old != maddr_next )
+- need_flush = 1;
++ old_present = get_field_from_reg_u32(pde[0], IOMMU_PTE_PRESENT_MASK,
++ IOMMU_PTE_PRESENT_SHIFT);
++ if ( old_present )
++ {
++ bool old_r, old_w;
++ unsigned int old_level;
++ uint64_t maddr_old;
++
++ addr_hi = get_field_from_reg_u32(pde[1],
++ IOMMU_PTE_ADDR_HIGH_MASK,
++ IOMMU_PTE_ADDR_HIGH_SHIFT);
++ addr_lo = get_field_from_reg_u32(pde[0],
++ IOMMU_PTE_ADDR_LOW_MASK,
++ IOMMU_PTE_ADDR_LOW_SHIFT);
++ old_level = get_field_from_reg_u32(pde[0],
++ IOMMU_PDE_NEXT_LEVEL_MASK,
++ IOMMU_PDE_NEXT_LEVEL_SHIFT);
++ old_w = get_field_from_reg_u32(pde[1],
++ IOMMU_PTE_IO_WRITE_PERMISSION_MASK,
++ IOMMU_PTE_IO_WRITE_PERMISSION_SHIFT);
++ old_r = get_field_from_reg_u32(pde[1],
++ IOMMU_PTE_IO_READ_PERMISSION_MASK,
++ IOMMU_PTE_IO_READ_PERMISSION_SHIFT);
++
++ maddr_old = (addr_hi << 32) | (addr_lo << PAGE_SHIFT);
++
++ if ( maddr_old != maddr_next || iw != old_w || ir != old_r ||
++ old_level != next_level )
++ need_flush = true;
++ }
+
+ addr_lo = maddr_next & DMA_32BIT_MASK;
+ addr_hi = maddr_next >> 32;
+@@ -687,10 +706,7 @@ int amd_iommu_map_page(struct domain *d,
+ if ( !need_flush )
+ goto out;
+
+- /* 4K mapping for PV guests never changes,
+- * no need to flush if we trust non-present bits */
+- if ( is_hvm_domain(d) )
+- amd_iommu_flush_pages(d, gfn, 0);
++ amd_iommu_flush_pages(d, gfn, 0);
+
+ for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
+ merge_level <= hd->arch.paging_mode; merge_level++ )
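The flush decision introduced by patch-XSA275-1 boils down to: a flush is
needed only when an entry that was previously present changes in any attribute
the IOTLB may have cached, i.e. address, read/write permission, or next level.
A standalone C sketch of that decision follows; the struct layout and field
names are invented for illustration, whereas Xen packs these fields into two
32-bit PTE words and extracts them with get_field_from_reg_u32().

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pte {
    bool     present;
    bool     readable;
    bool     writable;
    unsigned next_level;
    uint64_t maddr;       /* machine address the entry points at */
};

/* Does installing 'next' over 'old' require an IOTLB flush? */
static bool pte_needs_flush(const struct pte *old, const struct pte *next)
{
    /* A non-present entry cannot have been cached, so nothing to flush. */
    if ( !old->present )
        return false;

    /* Any change to what the IOTLB may have cached forces a flush. */
    return old->maddr      != next->maddr    ||
           old->readable   != next->readable ||
           old->writable   != next->writable ||
           old->next_level != next->next_level;
}

int main(void)
{
    struct pte old  = { .present = true, .readable = true, .writable = false,
                        .next_level = 0, .maddr = 0x1000 };
    struct pte next = old;

    next.writable = true;   /* permission change on a present entry */
    printf("flush needed: %d\n", pte_needs_flush(&old, &next));   /* 1 */
    return 0;
}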
diff -r e8d57dd7e7ad -r dbb4c408003c sysutils/xenkernel411/patches/patch-XSA275-2
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel411/patches/patch-XSA275-2 Wed Nov 28 14:00:49 2018 +0000
@@ -0,0 +1,70 @@
+$NetBSD: patch-XSA275-2,v 1.1 2018/11/28 14:00:49 bouyer Exp $
+
+From: Jan Beulich <jbeulich%suse.com@localhost>
+Subject: AMD/IOMMU: suppress PTE merging after initial table creation
+
+The logic is not fit for this purpose, so simply disable its use until
+it can be fixed / replaced. Note that this re-enables merging for the
+table creation case, which was disabled as a (perhaps unintended) side
+effect of the earlier "amd/iommu: fix flush checks". It relies on no
+page getting mapped more than once (with different properties) in this
+process, as that would still be beyond what the merging logic can cope
+with. But arch_iommu_populate_page_table() guarantees this afaict.
+
+This is part of XSA-275.
+
+Reported-by: Paul Durrant <paul.durrant%citrix.com@localhost>
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+
+--- xen/drivers/passthrough/amd/iommu_map.c.orig
++++ xen/drivers/passthrough/amd/iommu_map.c
+@@ -702,11 +702,24 @@ int amd_iommu_map_page(struct domain *d,
+ !!(flags & IOMMUF_writable),
+ !!(flags & IOMMUF_readable));
+
+- /* Do not increase pde count if io mapping has not been changed */
+- if ( !need_flush )
+- goto out;
++ if ( need_flush )
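The truncated hunk above shows the shape of the XSA275-2 change: the early
"goto out" that previously tied "no flush needed" to "skip merging" is replaced
by a plain "if ( need_flush )" guard, so merging can then be suppressed by its
own condition (per the patch description, merging stays enabled only for the
initial table-creation case). Below is a rough standalone C sketch of that
restructuring; the function names and the creation_finished flag are
paraphrased from the patch description, not taken from the part of the diff
that is not shown here.

#include <stdbool.h>
#include <stdio.h>

static void flush_pages(void)      { printf("flush\n"); }
static void try_merge_levels(void) { printf("merge\n"); }

/* Before: skipping the flush also skipped the merge logic. */
static void map_page_before(bool need_flush)
{
    if ( !need_flush )
        return;              /* "goto out" in the original */

    flush_pages();
    try_merge_levels();
}

/* After: flushing and merging are gated independently. */
static void map_page_after(bool need_flush, bool creation_finished)
{
    if ( need_flush )
        flush_pages();

    /* Merging is only trusted while the tables are first being populated. */
    if ( creation_finished )
        return;

    try_merge_levels();
}

int main(void)
{
    map_page_before(true);
    map_page_after(true, true);   /* flushes, but no merging attempted */
    return 0;
}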