pkgsrc-Changes-HG archive
[pkgsrc/trunk]: pkgsrc/sysutils Apply patches from upstream, fixing security ...
details: https://anonhg.NetBSD.org/pkgsrc/rev/eedc17a29967
branches: trunk
changeset: 372754:eedc17a29967
user: bouyer <bouyer%pkgsrc.org@localhost>
date: Fri Dec 15 14:00:44 2017 +0000
description:
Apply patches from upstream, fixing security issues XSA246 up to XSA251.
Also update patch-XSA240 from upstream, fixing issues in linear page table
handling introduced by the original XSA240 patch.
Bump PKGREVISION
diffstat:
sysutils/xenkernel46/Makefile | 4 +-
sysutils/xenkernel46/distinfo | 12 +-
sysutils/xenkernel46/patches/patch-XSA240 | 98 +++++++++-
sysutils/xenkernel46/patches/patch-XSA241 | 28 +--
sysutils/xenkernel46/patches/patch-XSA246 | 76 +++++++
sysutils/xenkernel46/patches/patch-XSA247 | 286 +++++++++++++++++++++++++++++
sysutils/xenkernel46/patches/patch-XSA248 | 164 +++++++++++++++++
sysutils/xenkernel46/patches/patch-XSA249 | 44 ++++
sysutils/xenkernel46/patches/patch-XSA250 | 69 +++++++
sysutils/xenkernel46/patches/patch-XSA251 | 23 ++
sysutils/xenkernel48/Makefile | 4 +-
sysutils/xenkernel48/distinfo | 14 +-
sysutils/xenkernel48/patches/patch-XSA240 | 97 +++++++++-
sysutils/xenkernel48/patches/patch-XSA241 | 32 +--
sysutils/xenkernel48/patches/patch-XSA242 | 10 +-
sysutils/xenkernel48/patches/patch-XSA246 | 76 +++++++
sysutils/xenkernel48/patches/patch-XSA247 | 287 ++++++++++++++++++++++++++++++
sysutils/xenkernel48/patches/patch-XSA248 | 164 +++++++++++++++++
sysutils/xenkernel48/patches/patch-XSA249 | 44 ++++
sysutils/xenkernel48/patches/patch-XSA250 | 69 +++++++
sysutils/xenkernel48/patches/patch-XSA251 | 23 ++
21 files changed, 1550 insertions(+), 74 deletions(-)
diffs (truncated from 1840 to 300 lines):
diff -r 4e5574e7b158 -r eedc17a29967 sysutils/xenkernel46/Makefile
--- a/sysutils/xenkernel46/Makefile Fri Dec 15 11:38:26 2017 +0000
+++ b/sysutils/xenkernel46/Makefile Fri Dec 15 14:00:44 2017 +0000
@@ -1,9 +1,9 @@
-# $NetBSD: Makefile,v 1.16 2017/10/17 11:10:35 bouyer Exp $
+# $NetBSD: Makefile,v 1.17 2017/12/15 14:00:44 bouyer Exp $
VERSION= 4.6.6
DISTNAME= xen-${VERSION}
PKGNAME= xenkernel46-${VERSION}
-PKGREVISION= 1
+PKGREVISION= 2
CATEGORIES= sysutils
MASTER_SITES= https://downloads.xenproject.org/release/xen/${VERSION}/
diff -r 4e5574e7b158 -r eedc17a29967 sysutils/xenkernel46/distinfo
--- a/sysutils/xenkernel46/distinfo Fri Dec 15 11:38:26 2017 +0000
+++ b/sysutils/xenkernel46/distinfo Fri Dec 15 14:00:44 2017 +0000
@@ -1,4 +1,4 @@
-$NetBSD: distinfo,v 1.10 2017/10/17 10:57:34 bouyer Exp $
+$NetBSD: distinfo,v 1.11 2017/12/15 14:00:44 bouyer Exp $
SHA1 (xen-4.6.6.tar.gz) = 82f39ef4bf754ffd679ab5d15709bc34a98fccb7
RMD160 (xen-4.6.6.tar.gz) = 6412f75183647172d72597e8779235b60e1c00f3
@@ -15,11 +15,17 @@
SHA1 (patch-XSA237) = 2a5cd048a04b8cadc67905b9001689b1221edd3e
SHA1 (patch-XSA238) = e2059991d12f31740650136ec59c62da20c79633
SHA1 (patch-XSA239) = 10619718e8a1536a7f52eb3838cdb490e6ba8c97
-SHA1 (patch-XSA240) = af3d204e9873fe79b23c714d60dfa91fcbe46ec5
-SHA1 (patch-XSA241) = b506425ca7382190435df6f96800cb0a24aff23e
+SHA1 (patch-XSA240) = 9677ebc1ee535b11ae1248325ad63ea213677561
+SHA1 (patch-XSA241) = bf9a488d2da40be0e4aed5270e25c64a9c673ca4
SHA1 (patch-XSA242) = afff314771d78ee2482aec3b7693c12bfe00e0ec
SHA1 (patch-XSA243) = ffe83e9e443a2582047f1d17673d39d6746f4b75
SHA1 (patch-XSA244) = 95077513502c26f8d6dae7964a0e422556be322a
+SHA1 (patch-XSA246) = a7eb9365cad042f5b1aa3112df6adf8421a3a6e4
+SHA1 (patch-XSA247) = 5a03a8ef20db5cd55fa39314a15f80175be78b94
+SHA1 (patch-XSA248) = d5787fa7fc48449ca90200811b66cb6278c750aa
+SHA1 (patch-XSA249) = 7037a35f37eb866f16fe90482e66d0eca95944c4
+SHA1 (patch-XSA250) = 25ab2e8c67ebe2b40cf073197c17f1625f5581f6
+SHA1 (patch-XSA251) = dc0786c85bcfbdd3f7a1c97a3af32c10deea8276
SHA1 (patch-tools_xentrace_xenalyze.c) = ab973cb7090dc90867dcddf9ab8965f8f2f36c46
SHA1 (patch-xen_Makefile) = be3f4577a205b23187b91319f91c50720919f70b
SHA1 (patch-xen_arch_arm_xen.lds.S) = df0e4a13b9b3ae863448172bea28b1b92296327b
diff -r 4e5574e7b158 -r eedc17a29967 sysutils/xenkernel46/patches/patch-XSA240
--- a/sysutils/xenkernel46/patches/patch-XSA240 Fri Dec 15 11:38:26 2017 +0000
+++ b/sysutils/xenkernel46/patches/patch-XSA240 Fri Dec 15 14:00:44 2017 +0000
@@ -1,4 +1,4 @@
-$NetBSD: patch-XSA240,v 1.1 2017/10/17 10:57:34 bouyer Exp $
+$NetBSD: patch-XSA240,v 1.2 2017/12/15 14:00:44 bouyer Exp $
From ce31198dd811479da34dfb66315f399dc4b98055 Mon Sep 17 00:00:00 2001
From: Jan Beulich <jbeulich%suse.com@localhost>
@@ -532,7 +532,7 @@
+### pv-linear-pt
+> `= <boolean>`
+
-+> Default: `true`
++> Default: `false`
+
+Allow PV guests to have pagetable entries pointing to other pagetables
+of the same level (i.e., allowing L2 PTEs to point to other L2 pages).
@@ -540,9 +540,9 @@
+used to allow operating systems a simple way to consistently map the
+current process's pagetables into its own virtual address space.
+
-+None of the most common PV operating systems (Linux, MiniOS)
-+use this technique, but NetBSD in PV mode, and maybe custom operating
-+systems do.
++None of the most common PV operating systems (Linux, NetBSD, MiniOS)
++use this technique, but there may be custom operating systems which
++do.
### reboot
> `= t[riple] | k[bd] | a[cpi] | p[ci] | P[ower] | e[fi] | n[o] [, [w]arm | [c]old]`
@@ -576,3 +576,91 @@
--
2.14.1
+From: Jan Beulich <jbeulich%suse.com@localhost>
+Subject: x86: don't wrongly trigger linear page table assertion
+
+_put_page_type() may do multiple iterations until its cmpxchg()
+succeeds. It invokes set_tlbflush_timestamp() on the first
+iteration, however. Code inside the function takes care of this, but
+- the assertion in _put_final_page_type() would trigger on the second
+ iteration if time stamps in a debug build are permitted to be
+ sufficiently much wider than the default 6 bits (see WRAP_MASK in
+ flushtlb.c),
+- it returning -EINTR (for a continuation to be scheduled) would leave
+ the page in an inconsistent state (until the re-invocation completes).
+Make the set_tlbflush_timestamp() invocation conditional, bypassing it
+(for now) only in the case we really can't tolerate the stamp to be
+stored.
+
+This is part of XSA-240.
+
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+Reviewed-by: George Dunlap <george.dunlap%citrix.com@localhost>
+
+--- xen/arch/x86/mm.c.orig
++++ xen/arch/x86/mm.c
+--- xen/arch/x86/mm.c.orig 2017-12-15 10:18:25.000000000 +0100
++++ xen/arch/x86/mm.c 2017-12-15 10:20:53.000000000 +0100
+@@ -2494,29 +2494,20 @@
+ break;
+ }
+
+- if ( ptpg && PGT_type_equal(x, ptpg->u.inuse.type_info) )
+- {
+- /*
+- * page_set_tlbflush_timestamp() accesses the same union
+- * linear_pt_count lives in. Unvalidated page table pages,
+- * however, should occur during domain destruction only
+- * anyway. Updating of linear_pt_count luckily is not
+- * necessary anymore for a dying domain.
+- */
+- ASSERT(page_get_owner(page)->is_dying);
+- ASSERT(page->linear_pt_count < 0);
+- ASSERT(ptpg->linear_pt_count > 0);
+- ptpg = NULL;
+- }
+-
+ /*
+ * Record TLB information for flush later. We do not stamp page
+ * tables when running in shadow mode:
+ * 1. Pointless, since it's the shadow pt's which must be tracked.
+ * 2. Shadow mode reuses this field for shadowed page tables to
+ * store flags info -- we don't want to conflict with that.
++ * Also page_set_tlbflush_timestamp() accesses the same union
++ * linear_pt_count lives in. Pages (including page table ones),
++ * however, don't need their flush time stamp set except when
++ * the last reference is being dropped. For page table pages
++ * this happens in _put_final_page_type().
+ */
+- if ( !(shadow_mode_enabled(page_get_owner(page)) &&
++ if ( (!ptpg || !PGT_type_equal(x, ptpg->u.inuse.type_info)) &&
++ !(shadow_mode_enabled(page_get_owner(page)) &&
+ (page->count_info & PGC_page_table)) )
+ page->tlbflush_timestamp = tlbflush_current_time();
+ }
+From: Jan Beulich <jbeulich%suse.com@localhost>
+Subject: x86: don't wrongly trigger linear page table assertion (2)
+
+_put_final_page_type(), when free_page_type() has exited early to allow
+for preemption, should not update the time stamp, as the page continues
+to retain the type which is in the process of being unvalidated. I can't
+see why the time stamp update was put on that path in the first place
+(albeit it may well have been me who had put it there years ago).
+
+This is part of XSA-240.
+
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+Reviewed-by: George Dunlap <george.dunlap%citrix.com@localhost>
+
+--- xen/arch/x86/mm.c.orig 2017-12-15 10:20:53.000000000 +0100
++++ xen/arch/x86/mm.c 2017-12-15 10:25:32.000000000 +0100
+@@ -2441,9 +2441,6 @@
+ {
+ ASSERT((page->u.inuse.type_info &
+ (PGT_count_mask|PGT_validated|PGT_partial)) == 1);
+- if ( !(shadow_mode_enabled(page_get_owner(page)) &&
+- (page->count_info & PGC_page_table)) )
+- page->tlbflush_timestamp = tlbflush_current_time();
+ wmb();
+ page->u.inuse.type_info |= PGT_validated;
+ }
diff -r 4e5574e7b158 -r eedc17a29967 sysutils/xenkernel46/patches/patch-XSA241
--- a/sysutils/xenkernel46/patches/patch-XSA241 Fri Dec 15 11:38:26 2017 +0000
+++ b/sysutils/xenkernel46/patches/patch-XSA241 Fri Dec 15 14:00:44 2017 +0000
@@ -1,4 +1,4 @@
-$NetBSD: patch-XSA241,v 1.1 2017/10/17 10:57:34 bouyer Exp $
+$NetBSD: patch-XSA241,v 1.2 2017/12/15 14:00:44 bouyer Exp $
x86: don't store possibly stale TLB flush time stamp
@@ -25,7 +25,7 @@
#include <asm/cpregs.h>
--- xen/arch/x86/mm.c.orig
+++ xen/arch/x86/mm.c
-@@ -2524,7 +2524,7 @@ static int _put_final_page_type(struct p
+@@ -2440,7 +2440,7 @@ static int _put_final_page_type(struct p
*/
if ( !(shadow_mode_enabled(page_get_owner(page)) &&
(page->count_info & PGC_page_table)) )
@@ -34,27 +34,9 @@
wmb();
page->u.inuse.type_info--;
}
-@@ -2534,7 +2534,7 @@ static int _put_final_page_type(struct p
- (PGT_count_mask|PGT_validated|PGT_partial)) == 1);
- if ( !(shadow_mode_enabled(page_get_owner(page)) &&
- (page->count_info & PGC_page_table)) )
-- page->tlbflush_timestamp = tlbflush_current_time();
-+ page_set_tlbflush_timestamp(page);
- wmb();
- page->u.inuse.type_info |= PGT_validated;
- }
-@@ -2588,7 +2588,7 @@ static int _put_page_type(struct page_in
- if ( ptpg && PGT_type_equal(x, ptpg->u.inuse.type_info) )
- {
- /*
-- * page_set_tlbflush_timestamp() accesses the same union
-+ * set_tlbflush_timestamp() accesses the same union
- * linear_pt_count lives in. Unvalidated page table pages,
- * however, should occur during domain destruction only
- * anyway. Updating of linear_pt_count luckily is not
-@@ -2609,7 +2609,7 @@ static int _put_page_type(struct page_in
- */
- if ( !(shadow_mode_enabled(page_get_owner(page)) &&
+@@ -2510,7 +2510,7 @@
+ if ( (!ptpg || !PGT_type_equal(x, ptpg->u.inuse.type_info)) &&
+ !(shadow_mode_enabled(page_get_owner(page)) &&
(page->count_info & PGC_page_table)) )
- page->tlbflush_timestamp = tlbflush_current_time();
+ page_set_tlbflush_timestamp(page);
diff -r 4e5574e7b158 -r eedc17a29967 sysutils/xenkernel46/patches/patch-XSA246
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel46/patches/patch-XSA246 Fri Dec 15 14:00:44 2017 +0000
@@ -0,0 +1,76 @@
+$NetBSD: patch-XSA246,v 1.1 2017/12/15 14:00:44 bouyer Exp $
+
+From: Julien Grall <julien.grall%linaro.org@localhost>
+Subject: x86/pod: prevent infinite loop when shattering large pages
+
+When populating pages, the PoD may need to split large ones using
+p2m_set_entry and request the caller to retry (see ept_get_entry for
+instance).
+
+p2m_set_entry may fail to shatter if it is not possible to allocate
+memory for the new page table. However, the error is not propagated,
+resulting in the callers retrying the PoD operation indefinitely.
+
+Prevent the infinite loop by returning false when it is not possible to
+shatter the large mapping.
+
+This is XSA-246.
+
+Signed-off-by: Julien Grall <julien.grall%linaro.org@localhost>
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+Reviewed-by: George Dunlap <george.dunlap%citrix.com@localhost>
+
+--- xen/arch/x86/mm/p2m-pod.c.orig
++++ xen/arch/x86/mm/p2m-pod.c
+@@ -1073,9 +1073,8 @@ p2m_pod_demand_populate(struct p2m_domai
+ * NOTE: In a fine-grained p2m locking scenario this operation
+ * may need to promote its locking from gfn->1g superpage
+ */
+- p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
+- p2m_populate_on_demand, p2m->default_access);
+- return 0;
++ return p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_2M,
++ p2m_populate_on_demand, p2m->default_access);
+ }
+
+ /* Only reclaim if we're in actual need of more cache. */
+@@ -1106,8 +1105,12 @@ p2m_pod_demand_populate(struct p2m_domai
+
+ gfn_aligned = (gfn >> order) << order;
+
+- p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
+- p2m->default_access);
++ if ( p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
++ p2m->default_access) )
++ {
++ p2m_pod_cache_add(p2m, p, order);
++ goto out_fail;
++ }
+
+ for( i = 0; i < (1UL << order); i++ )
+ {
+@@ -1152,13 +1155,18 @@ remap_and_retry:
+ BUG_ON(order != PAGE_ORDER_2M);
+ pod_unlock(p2m);
+
+- /* Remap this 2-meg region in singleton chunks */
+- /* NOTE: In a p2m fine-grained lock scenario this might
+- * need promoting the gfn lock from gfn->2M superpage */
++ /*
++ * Remap this 2-meg region in singleton chunks. See the comment on the
++ * 1G page splitting path above for why a single call suffices.
++ *
++ * NOTE: In a p2m fine-grained lock scenario this might
++ * need promoting the gfn lock from gfn->2M superpage.
++ */
+ gfn_aligned = (gfn>>order)<<order;
+- for(i=0; i<(1<<order); i++)
+- p2m_set_entry(p2m, gfn_aligned + i, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+- p2m_populate_on_demand, p2m->default_access);
++ if ( p2m_set_entry(p2m, gfn_aligned, _mfn(INVALID_MFN), PAGE_ORDER_4K,
++ p2m_populate_on_demand, p2m->default_access) )
++ return -1;
++
+ if ( tb_init_done )
+ {
+ struct {
diff -r 4e5574e7b158 -r eedc17a29967 sysutils/xenkernel46/patches/patch-XSA247
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel46/patches/patch-XSA247 Fri Dec 15 14:00:44 2017 +0000
@@ -0,0 +1,286 @@