pkgsrc-Changes-HG archive


[pkgsrc/pkgsrc-2019Q3]: pkgsrc/sysutils/xenkernel411 Pullup ticket #6086 - re...



details:   https://anonhg.NetBSD.org/pkgsrc/rev/4119e0011ae9
branches:  pkgsrc-2019Q3
changeset: 344157:4119e0011ae9
user:      bsiegert <bsiegert%pkgsrc.org@localhost>
date:      Sat Nov 16 22:10:06 2019 +0000

description:
Pullup ticket #6086 - requested by bouyer
sysutils/xenkernel411: security fix

Revisions pulled up:
- sysutils/xenkernel411/Makefile                                1.9-1.10
- sysutils/xenkernel411/distinfo                                1.6-1.7
- sysutils/xenkernel411/patches/patch-XSA298                    1.1-1.2
- sysutils/xenkernel411/patches/patch-XSA299                    1.1
- sysutils/xenkernel411/patches/patch-XSA302                    1.1-1.2
- sysutils/xenkernel411/patches/patch-XSA304                    1.1-1.2
- sysutils/xenkernel411/patches/patch-XSA305                    1.1-1.2

---
   Module Name:    pkgsrc
   Committed By:   bouyer
   Date:           Wed Nov 13 13:36:11 UTC 2019

   Modified Files:
           pkgsrc/sysutils/xenkernel411: Makefile distinfo
   Added Files:
           pkgsrc/sysutils/xenkernel411/patches: patch-XSA298 patch-XSA302
               patch-XSA304 patch-XSA305

   Log Message:
   Add patches for relevant Xen security advisories up to XSA305 (everything
   up to XSA297 is already fixed upstream).
   Bump PKGREVISION

---
   Module Name:    pkgsrc
   Committed By:   bouyer
   Date:           Wed Nov 13 15:00:06 UTC 2019

   Modified Files:
           pkgsrc/sysutils/xenkernel411: Makefile distinfo
           pkgsrc/sysutils/xenkernel411/patches: patch-XSA298 patch-XSA302
               patch-XSA304 patch-XSA305
   Added Files:
           pkgsrc/sysutils/xenkernel411/patches: patch-XSA299

   Log Message:
   Apply patch fixing XSA299.
   Bump PKGREVISION

diffstat:

 sysutils/xenkernel411/Makefile             |     4 +-
 sysutils/xenkernel411/distinfo             |     7 +-
 sysutils/xenkernel411/patches/patch-XSA298 |    89 +
 sysutils/xenkernel411/patches/patch-XSA299 |  2413 ++++++++++++++++++++++++++++
 sysutils/xenkernel411/patches/patch-XSA302 |   537 ++++++
 sysutils/xenkernel411/patches/patch-XSA304 |   481 +++++
 sysutils/xenkernel411/patches/patch-XSA305 |   482 +++++
 7 files changed, 4010 insertions(+), 3 deletions(-)

diffs (truncated from 4055 to 300 lines):

diff -r 791123747a16 -r 4119e0011ae9 sysutils/xenkernel411/Makefile
--- a/sysutils/xenkernel411/Makefile    Sat Nov 16 22:09:58 2019 +0000
+++ b/sysutils/xenkernel411/Makefile    Sat Nov 16 22:10:06 2019 +0000
@@ -1,7 +1,7 @@
-# $NetBSD: Makefile,v 1.8 2019/08/30 13:16:27 bouyer Exp $
+# $NetBSD: Makefile,v 1.8.2.1 2019/11/16 22:10:06 bsiegert Exp $
 
 VERSION=       4.11.2
-#PKGREVISION=  0
+PKGREVISION=   2
 DISTNAME=      xen-${VERSION}
 PKGNAME=       xenkernel411-${VERSION}
 CATEGORIES=    sysutils
diff -r 791123747a16 -r 4119e0011ae9 sysutils/xenkernel411/distinfo
--- a/sysutils/xenkernel411/distinfo    Sat Nov 16 22:09:58 2019 +0000
+++ b/sysutils/xenkernel411/distinfo    Sat Nov 16 22:10:06 2019 +0000
@@ -1,10 +1,15 @@
-$NetBSD: distinfo,v 1.5 2019/08/30 13:16:27 bouyer Exp $
+$NetBSD: distinfo,v 1.5.2.1 2019/11/16 22:10:06 bsiegert Exp $
 
 SHA1 (xen411/xen-4.11.2.tar.gz) = 82766db0eca7ce65962732af8a31bb5cce1eb7ce
 RMD160 (xen411/xen-4.11.2.tar.gz) = 6dcb1ac3e72381474912607b30b59fa55d87d38b
 SHA512 (xen411/xen-4.11.2.tar.gz) = 48d3d926d35eb56c79c06d0abc6e6be2564fadb43367cc7f46881c669a75016707672179c2cca1c4cfb14af2cefd46e2e7f99470cddf7df2886d8435a2de814e
 Size (xen411/xen-4.11.2.tar.gz) = 25164925 bytes
 SHA1 (patch-Config.mk) = 9372a09efd05c9fbdbc06f8121e411fcb7c7ba65
+SHA1 (patch-XSA298) = 63e0f96ce3b945b16b98b51b423bafec14cf2be6
+SHA1 (patch-XSA299) = beb7ba1a8f9e0adda161c0da725ff053e674067e
+SHA1 (patch-XSA302) = 12fbb7dfea27f53c70c8115487a2e30595549c2b
+SHA1 (patch-XSA304) = f2c22732227e11a3e77c630f0264a689eed53399
+SHA1 (patch-XSA305) = eb5e0096cbf501fcbd7a5c5f9d1f932b557636b6
 SHA1 (patch-xen_Makefile) = 465388d80de414ca3bb84faefa0f52d817e423a6
 SHA1 (patch-xen_Rules.mk) = c743dc63f51fc280d529a7d9e08650292c171dac
 SHA1 (patch-xen_arch_x86_Rules.mk) = 0bedfc53a128a87b6a249ae04fbdf6a053bfb70b
diff -r 791123747a16 -r 4119e0011ae9 sysutils/xenkernel411/patches/patch-XSA298
--- /dev/null   Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel411/patches/patch-XSA298        Sat Nov 16 22:10:06 2019 +0000
@@ -0,0 +1,89 @@
+$NetBSD: patch-XSA298,v 1.2.2.2 2019/11/16 22:10:07 bsiegert Exp $
+
+From: Jan Beulich <jbeulich%suse.com@localhost>
+Subject: x86/PV: check GDT/LDT limits during emulation
+
+Accesses beyond the LDT limit originating from emulation would trigger
+the ASSERT() in pv_map_ldt_shadow_page(). On production builds such
+accesses would cause an attempt to promote the touched page (offset from
+the present LDT base address) to a segment descriptor one. If this
+happens to succeed, guest user mode would be able to elevate its
+privileges to that of the guest kernel. This is particularly easy when
+there's no LDT at all, in which case the LDT base stored internally to
+Xen is simply zero.
+
+Also adjust the ASSERT() that was triggering: It was off by one to
+begin with, and for production builds we also better use
+ASSERT_UNREACHABLE() instead with suitable recovery code afterwards.
+
+This is XSA-298.
+
+Reported-by: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+Signed-off-by: Jan Beulich <jbeulich%suse.com@localhost>
+Reviewed-by: Andrew Cooper <andrew.cooper3%citrix.com@localhost>
+
+--- xen/arch/x86/pv/emul-gate-op.c.orig
++++ xen/arch/x86/pv/emul-gate-op.c
+@@ -51,7 +51,13 @@ static int read_gate_descriptor(unsigned
+     const struct desc_struct *pdesc = gdt_ldt_desc_ptr(gate_sel);
+ 
+     if ( (gate_sel < 4) ||
+-         ((gate_sel >= FIRST_RESERVED_GDT_BYTE) && !(gate_sel & 4)) ||
++         /*
++          * We're interested in call gates only, which occupy a single
++          * seg_desc_t for 32-bit and a consecutive pair of them for 64-bit.
++          */
++         ((gate_sel >> 3) + !is_pv_32bit_vcpu(v) >=
++          (gate_sel & 4 ? v->arch.pv_vcpu.ldt_ents
++                        : v->arch.pv_vcpu.gdt_ents)) ||
+          __get_user(desc, pdesc) )
+         return 0;
+ 
+@@ -70,7 +76,7 @@ static int read_gate_descriptor(unsigned
+     if ( !is_pv_32bit_vcpu(v) )
+     {
+         if ( (*ar & 0x1f00) != 0x0c00 ||
+-             (gate_sel >= FIRST_RESERVED_GDT_BYTE - 8 && !(gate_sel & 4)) ||
++             /* Limit check done above already. */
+              __get_user(desc, pdesc + 1) ||
+              (desc.b & 0x1f00) )
+             return 0;
+--- xen/arch/x86/pv/emulate.c.orig
++++ xen/arch/x86/pv/emulate.c
+@@ -31,7 +31,14 @@ int pv_emul_read_descriptor(unsigned int
+ {
+     struct desc_struct desc;
+ 
+-    if ( sel < 4)
++    if ( sel < 4 ||
++         /*
++          * Don't apply the GDT limit here, as the selector may be a Xen
++          * provided one. __get_user() will fail (without taking further
++          * action) for ones falling in the gap between guest populated
++          * and Xen ones.
++          */
++         ((sel & 4) && (sel >> 3) >= v->arch.pv_vcpu.ldt_ents) )
+         desc.b = desc.a = 0;
+     else if ( __get_user(desc, gdt_ldt_desc_ptr(sel)) )
+         return 0;
+--- xen/arch/x86/pv/mm.c.orig
++++ xen/arch/x86/pv/mm.c
+@@ -92,12 +92,16 @@ bool pv_map_ldt_shadow_page(unsigned int
+     BUG_ON(unlikely(in_irq()));
+ 
+     /*
+-     * Hardware limit checking should guarantee this property.  NB. This is
++     * Prior limit checking should guarantee this property.  NB. This is
+      * safe as updates to the LDT can only be made by MMUEXT_SET_LDT to the
+      * current vcpu, and vcpu_reset() will block until this vcpu has been
+      * descheduled before continuing.
+      */
+-    ASSERT((offset >> 3) <= curr->arch.pv_vcpu.ldt_ents);
++    if ( unlikely((offset >> 3) >= curr->arch.pv_vcpu.ldt_ents) )
++    {
++        ASSERT_UNREACHABLE();
++        return false;
++    }
+ 
+     if ( is_pv_32bit_domain(currd) )
+         linear = (uint32_t)linear;
diff -r 791123747a16 -r 4119e0011ae9 sysutils/xenkernel411/patches/patch-XSA299
--- /dev/null   Thu Jan 01 00:00:00 1970 +0000
+++ b/sysutils/xenkernel411/patches/patch-XSA299        Sat Nov 16 22:10:06 2019 +0000
@@ -0,0 +1,2413 @@
+$NetBSD: patch-XSA299,v 1.1.2.2 2019/11/16 22:10:07 bsiegert Exp $
+
+From 852df269d247e177d5f2e9b8f3a4301a6fdd76bd Mon Sep 17 00:00:00 2001
+From: George Dunlap <george.dunlap%citrix.com@localhost>
+Date: Thu, 10 Oct 2019 17:57:49 +0100
+Subject: [PATCH 01/11] x86/mm: L1TF checks don't leave a partial entry
+
+On detection of a potential L1TF issue, most validation code returns
+-ERESTART to allow the switch to shadow mode to happen and cause the
+original operation to be restarted.
+
+However, in the validation code, the return value -ERESTART has been
+repurposed to indicate 1) the function has partially completed
+something which needs to be undone, and 2) calling put_page_type()
+should cleanly undo it.  This causes problems in several places.
+
+For L1 tables, on receiving an -ERESTART return from alloc_l1_table(),
+alloc_page_type() will set PGT_partial on the page.  If for some
+reason the original operation never restarts, then on domain
+destruction, relinquish_memory() will call free_page_type() on the
+page.
+
+Unfortunately, alloc_ and free_l1_table() aren't set up to deal with
+PGT_partial.  When returning a failure, alloc_l1_table() always
+de-validates whatever it's validated so far, and free_l1_table()
+always devalidates the whole page.  This means that if
+relinquish_memory() calls free_page_type() on an L1 that didn't
+complete due to an L1TF, it will call put_page_from_l1e() on "page
+entries" that have never been validated.
+
+For L2+ tables, setting rc to ERESTART causes the rest of the
+alloc_lN_table() function to *think* that the entry in question will
+have PGT_partial set.  This will cause it to set partial_pte = 1.  If
+relinquish_memory() then calls free_page_type() on one of those pages,
+then free_lN_table() will call put_page_from_lNe() on the entry when
+it shouldn't.
+
+Rather than indicating -ERESTART, indicate -EINTR.  This is the code
+to indicate that nothing has changed from when you started the call
+(which is effectively how alloc_l1_table() handles errors).
+
+mod_lN_entry() shouldn't have any of these types of problems, so leave
+potential changes there for a clean-up patch later.
+
+This is part of XSA-299.
+
+Reported-by: George Dunlap <george.dunlap%citrix.com@localhost>
+Signed-off-by: George Dunlap <george.dunlap%citrix.com@localhost>
+Reviewed-by: Jan Beulich <jbeulich%suse.com@localhost>
+---
+ xen/arch/x86/mm.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
+index e6a4cb28f8..8ced185b49 100644
+--- xen/arch/x86/mm.c.orig
++++ xen/arch/x86/mm.c
+@@ -1110,7 +1110,7 @@ get_page_from_l2e(
+     int rc;
+ 
+     if ( !(l2e_get_flags(l2e) & _PAGE_PRESENT) )
+-        return pv_l1tf_check_l2e(d, l2e) ? -ERESTART : 1;
++        return pv_l1tf_check_l2e(d, l2e) ? -EINTR : 1;
+ 
+     if ( unlikely((l2e_get_flags(l2e) & L2_DISALLOW_MASK)) )
+     {
+@@ -1142,7 +1142,7 @@ get_page_from_l3e(
+     int rc;
+ 
+     if ( !(l3e_get_flags(l3e) & _PAGE_PRESENT) )
+-        return pv_l1tf_check_l3e(d, l3e) ? -ERESTART : 1;
++        return pv_l1tf_check_l3e(d, l3e) ? -EINTR : 1;
+ 
+     if ( unlikely((l3e_get_flags(l3e) & l3_disallow_mask(d))) )
+     {
+@@ -1175,7 +1175,7 @@ get_page_from_l4e(
+     int rc;
+ 
+     if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+-        return pv_l1tf_check_l4e(d, l4e) ? -ERESTART : 1;
++        return pv_l1tf_check_l4e(d, l4e) ? -EINTR : 1;
+ 
+     if ( unlikely((l4e_get_flags(l4e) & L4_DISALLOW_MASK)) )
+     {
+@@ -1404,7 +1404,7 @@ static int alloc_l1_table(struct page_info *page)
+     {
+         if ( !(l1e_get_flags(pl1e[i]) & _PAGE_PRESENT) )
+         {
+-            ret = pv_l1tf_check_l1e(d, pl1e[i]) ? -ERESTART : 0;
++            ret = pv_l1tf_check_l1e(d, pl1e[i]) ? -EINTR : 0;
+             if ( ret )
+                 goto out;
+         }
+-- 
+2.23.0
+
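+[Editorial note: the following is not part of the committed patch. It is a
+hedged sketch of the error-code contract that patch 01/11 restores: -EINTR
+means "nothing has changed since the call started, just restart", while
+-ERESTART means "partially done, partial-teardown (PGT_partial) cleanup is
+required". The struct, values, and validate_table() logic are simplified
+stand-ins, not Xen code.]
+
+```c
+#include <assert.h>
+
+#define EINTR    4
+#define ERESTART 85
+
+struct table { int validated_entries; };
+
+/* Validate entries one by one; on an L1TF-style check failure, undo the
+ * work already done and report -EINTR, so the caller need not arrange
+ * partial teardown.  (The bug XSA-299 fixes was returning -ERESTART
+ * here, which made alloc_page_type() wrongly set PGT_partial.) */
+static int validate_table(struct table *t, int nr, int bad_entry)
+{
+    for (int i = 0; i < nr; i++) {
+        if (i == bad_entry) {
+            t->validated_entries = 0;   /* de-validate what we did */
+            return -EINTR;              /* back to the initial state */
+        }
+        t->validated_entries++;
+    }
+    return 0;
+}
+
+int main(void)
+{
+    struct table t = { 0 };
+    assert(validate_table(&t, 8, 5) == -EINTR);
+    assert(t.validated_entries == 0);   /* -EINTR implies no residue */
+    assert(validate_table(&t, 8, -1) == 0);
+    assert(t.validated_entries == 8);
+    return 0;
+}
+```
+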
+From 6bdddd7980eac0cc883945d823986f24682ca47a Mon Sep 17 00:00:00 2001
+From: George Dunlap <george.dunlap%citrix.com@localhost>
+Date: Thu, 10 Oct 2019 17:57:49 +0100
+Subject: [PATCH 02/11] x86/mm: Don't re-set PGT_pinned on a partially
+ de-validated page
+
+When unpinning pagetables, if an operation is interrupted,
+relinquish_memory() re-sets PGT_pinned so that the un-pin will be
+picked up again when the hypercall restarts.
+
+This is appropriate when put_page_and_type_preemptible() returns
+-EINTR, which indicates that the page is back in its initial state
+(i.e., completely validated).  However, for -ERESTART, this leads to a
+state where a page has both PGT_pinned and PGT_partial set.
+
+This happens to work at the moment, although it's not really a
+"canonical" state; but in subsequent patches, where we need to make a
+distinction in handling between PGT_validated and PGT_partial pages,
+this causes issues.
+
+Move to a "canonical" state by:
+- Only re-setting PGT_pinned on -EINTR
+- Re-dropping the refcount held by PGT_pinned on -ERESTART
+
+In the latter case, the PGT_partial bit will be cleared further down
+with the rest of the other PGT_partial pages.
+
+While here, clean up some trailing whitespace.
+
+This is part of XSA-299.
+
+Reported-by: George Dunlap <george.dunlap%citrix.com@localhost>
+Signed-off-by: George Dunlap <george.dunlap%citrix.com@localhost>
+Reviewed-by: Jan Beulich <jbeulich%suse.com@localhost>
+---
+ xen/arch/x86/domain.c | 31 ++++++++++++++++++++++++++++---
+ 1 file changed, 28 insertions(+), 3 deletions(-)
+
+diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
+index 29f892c04c..8fbecbb169 100644
+--- xen/arch/x86/domain.c.orig
++++ xen/arch/x86/domain.c
+@@ -112,7 +112,7 @@ static void play_dead(void)
+      * this case, heap corruption or #PF can occur (when heap debugging is
+      * enabled). For example, even printk() can involve tasklet scheduling,
+      * which touches per-cpu vars.
+-     * 
++     *
+      * Consider very carefully when adding code to *dead_idle. Most hypervisor
+      * subsystems are unsafe to call.
+      */
+@@ -1838,9 +1838,34 @@ static int relinquish_memory(
+             break;
+         case -ERESTART:
+         case -EINTR:
++            /*
++             * -EINTR means PGT_validated has been re-set; re-set
++             * PGT_pinned again so that it gets picked up next time
++             * around.
++             *
++             * -ERESTART, OTOH, means PGT_partial is set instead.  Put
++             * it back on the list, but don't set PGT_pinned; the
++             * section below will finish off de-validation.  But we do
++             * need to drop the general ref associated with
++             * PGT_pinned, since put_page_and_type_preemptible()
++             * didn't do it.
++             *
++             * NB we can do an ASSERT for PGT_validated, since we
++             * "own" the type ref; but theoretically, the PGT_partial
++             * could be cleared by someone else.
++             */
++            if ( ret == -EINTR )
++            {
++                ASSERT(page->u.inuse.type_info & PGT_validated);


