Source-Changes-HG archive


[src/trunk]: src/sys Add support for Kernel Memory Sanitizer (kMSan). It dete...



details:   https://anonhg.NetBSD.org/src/rev/58f11351c731
branches:  trunk
changeset: 461070:58f11351c731
user:      maxv <maxv%NetBSD.org@localhost>
date:      Thu Nov 14 16:23:52 2019 +0000

description:
Add support for Kernel Memory Sanitizer (kMSan). It detects uninitialized
memory used by the kernel at run time, and just like kASan and kCSan, it
is an excellent feature. It has already detected 38 uninitialized variables
in the kernel during my testing, which I have since discreetly fixed.

We use two shadows:
 - "shad", to track uninitialized memory with a bit granularity (1:1).
   Each bit set to 1 in the shad corresponds to one uninitialized bit of
   real kernel memory.
 - "orig", to track the origin of the memory with a 4-byte granularity
   (1:1). Each uint32_t cell in the orig indicates the origin of the
   associated uint32_t of real kernel memory.
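The "shad" bookkeeping can be modeled in a few lines. This is a hypothetical user-space sketch, not the kernel implementation: a 1:1 shadow in which every set bit marks a corresponding uninitialized bit of tracked memory; the names and layout are illustrative.

```c
#include <stdint.h>
#include <string.h>

/* Toy 1:1 "shad" model: one shadow byte per tracked byte, each set
 * bit marking the corresponding bit as uninitialized. Illustrative
 * only, not NetBSD's implementation. */
#define MEM_SIZE 64
static uint8_t shad[MEM_SIZE];

/* Mark a byte range uninitialized: set every shadow bit. */
static void shad_mark_uninit(size_t off, size_t len)
{
	memset(&shad[off], 0xFF, len);
}

/* Mark a byte range initialized: clear its shadow bits. */
static void shad_mark_init(size_t off, size_t len)
{
	memset(&shad[off], 0x00, len);
}

/* True if any bit in the range is still uninitialized. */
static int shad_is_uninit(size_t off, size_t len)
{
	for (size_t i = 0; i < len; i++)
		if (shad[off + i] != 0)
			return 1;
	return 0;
}
```

Allocation would mark a fresh buffer uninitialized, and every store would clear the shadow bits it covers.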

The memory consumption of these shadows is substantial, so at least 4GB of
RAM is recommended to run kMSan.

The compiler inserts calls to specific __msan_* functions on each memory
access, to manage both the shad and the orig and detect uninitialized
memory accesses that change the execution flow (like an "if" on an
uninitialized variable).
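A rough model of what the instrumentation does at such a branch, as a hypothetical user-space sketch (the helper name and warning counter are illustrative; the real __msan_* ABI differs):

```c
#include <stdint.h>

/* Toy model of a compiler-inserted check: before branching on a value,
 * the instrumentation consults that value's shadow; a nonzero shadow
 * means the branch depends on uninitialized bits, so a warning is
 * recorded. Illustrative only, not the real __msan_* ABI. */
static unsigned long msan_warnings;

static int msan_branch(uint32_t value_shadow, int cond)
{
	if (value_shadow != 0)	/* uninitialized bits feed this branch */
		msan_warnings++;
	return cond;
}

/* What the compiler conceptually turns "if (x > 0)" into: */
static int example(int x, uint32_t shadow_of_x)
{
	if (msan_branch(shadow_of_x, x > 0))
		return 1;
	return 0;
}
```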

We mark several types of memory buffers as uninitialized (stack, pools,
kmem, malloc, uvm_km), and check each buffer passed to copyout, copyoutstr,
bwrite, if_transmit_lock and DMA operations, to detect uninitialized memory
that leaves the system. This allows us to detect kernel info leaks in a way
that is more efficient and more user-friendly than KLEAK.
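Such boundary checks can be pictured as a scan of the outgoing buffer's shadow before the data crosses the kernel boundary. A hypothetical sketch with illustrative names:

```c
#include <stddef.h>

/* Toy copyout-style gate: given the 1:1 shadow of an outgoing buffer,
 * return the offset of the first uninitialized byte, or -1 if the
 * whole buffer is initialized. Illustrative, not kernel code. */
static long first_uninit(const unsigned char *shadow, size_t len)
{
	for (size_t i = 0; i < len; i++)
		if (shadow[i] != 0)
			return (long)i;
	return -1;
}
```

A nonnegative result would be reported as an info leak, pointing at the exact offset of the uninitialized bytes about to leave the system.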

Unlike kASan, kMSan requires comprehensive coverage, i.e., we cannot
tolerate even one non-instrumented function, because that could cause
false positives. kMSan cannot instrument ASM functions, so I converted
most of them to __asm__ inlines, which kMSan is able to instrument. Those
that remain receive special treatment.
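Converting an out-of-line .S routine into a C function with an __asm__ inline lets the compiler instrument all the surrounding loads and stores, leaving only the asm statement itself opaque. A portable toy sketch (the function is hypothetical, not one of the converted NetBSD routines):

```c
/* Hypothetical example: instead of an out-of-line assembly routine,
 * the operation becomes a C function containing a tiny __asm__
 * inline. The empty asm is a barrier the optimizer cannot see
 * through, but everything around it is ordinary C that kMSan can
 * instrument. */
static inline int add_opaque(int a, int b)
{
	int r = a + b;
	__asm__ __volatile__("" : "+r"(r));	/* opaque to the optimizer */
	return r;
}
```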

Unlike kASan again, kMSan uses thread-local storage (TLS), so we must
context-switch this TLS during interrupts. We use different contexts
depending on the interrupt level.
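The TLS switching can be modeled as a small stack of per-level contexts, selected on interrupt entry and restored on exit. A hypothetical sketch; the fields, depth, and names are illustrative, not NetBSD's layout:

```c
#include <stdint.h>

/* Toy model of per-interrupt-level MSan TLS: entering an interrupt
 * switches to a fresh context, leaving restores the previous one.
 * Illustrative only. */
#define KMSAN_NCTX 4
struct msan_tls {
	uint32_t param_shadow[8];	/* shadows of function arguments */
	uint32_t retval_shadow;		/* shadow of the return value */
};
static struct msan_tls msan_ctx[KMSAN_NCTX];
static int msan_level;

static void kmsan_enter(void)
{
	if (msan_level + 1 < KMSAN_NCTX)
		msan_level++;
}

static void kmsan_leave(void)
{
	if (msan_level > 0)
		msan_level--;
}

static struct msan_tls *kmsan_tls(void)
{
	return &msan_ctx[msan_level];
}
```

Without this switch, an interrupt handler would clobber the parameter and return-value shadows of the code it preempted.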

The orig tracks precisely the origin of a buffer. We use a special encoding
for the orig values, and pack together in each uint32_t cell of the orig:
 - a code designating the type of memory (Stack, Pool, etc), and
 - a compressed pointer, which points either (1) to a string containing
   the name of the variable associated with the cell, or (2) to an area
   in the kernel .text section which we resolve to a symbol name + offset.

This encoding avoids consuming extra memory to associate information with
each cell, and produces precise output that can report, for example, the
name of an uninitialized variable on the stack, the function in which it
was pushed on the stack, and the function where we accessed this
uninitialized variable.
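The packing described above can be sketched as follows. The field widths, type codes, and 4-byte pointer compression here are illustrative assumptions, not NetBSD's actual encoding:

```c
#include <stdint.h>

/* Toy orig-cell encoding: a memory-type code in the top 4 bits and a
 * compressed (4-byte-aligned, base-relative) pointer in the low 28
 * bits of a uint32_t. Widths and codes are illustrative. */
#define ORIG_TYPE_SHIFT	28
#define ORIG_PTR_MASK	0x0fffffffu

enum { ORIG_STACK = 1, ORIG_POOL = 2, ORIG_MALLOC = 3 };

static uint32_t orig_encode(unsigned type, uint32_t byte_off)
{
	return ((uint32_t)type << ORIG_TYPE_SHIFT) |
	    ((byte_off >> 2) & ORIG_PTR_MASK);
}

static unsigned orig_type(uint32_t cell)
{
	return cell >> ORIG_TYPE_SHIFT;
}

static uint32_t orig_byte_off(uint32_t cell)
{
	return (cell & ORIG_PTR_MASK) << 2;
}
```

On a report, the type code selects the interpretation of the compressed pointer: a variable-name string, or a .text address resolved to symbol + offset.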

kMSan is available with LLVM, but not with GCC.

The code is organized in a way that is similar to kASan and kCSan, so
architectures other than amd64 can be supported.

diffstat:

 sys/arch/amd64/amd64/amd64_trap.S   |     6 +-
 sys/arch/amd64/amd64/busfunc.S      |    10 +-
 sys/arch/amd64/amd64/cpu_in_cksum.S |     4 +-
 sys/arch/amd64/amd64/cpufunc.S      |    20 +-
 sys/arch/amd64/amd64/lock_stubs.S   |     3 +-
 sys/arch/amd64/amd64/locore.S       |    11 +-
 sys/arch/amd64/amd64/machdep.c      |    13 +-
 sys/arch/amd64/amd64/mptramp.S      |     5 +-
 sys/arch/amd64/amd64/spl.S          |    17 +-
 sys/arch/amd64/conf/GENERIC         |    15 +-
 sys/arch/amd64/conf/Makefile.amd64  |    10 +-
 sys/arch/amd64/include/cpu.h        |    20 +-
 sys/arch/amd64/include/frameasm.h   |    69 +-
 sys/arch/amd64/include/msan.h       |   241 ++++++
 sys/arch/amd64/include/param.h      |     9 +-
 sys/arch/amd64/include/pmap.h       |    14 +-
 sys/arch/amd64/include/types.h      |     5 +-
 sys/arch/x86/include/bus_defs.h     |     5 +-
 sys/arch/x86/include/pmap.h         |     7 +-
 sys/arch/x86/x86/bus_dma.c          |    10 +-
 sys/arch/x86/x86/pmap.c             |    13 +-
 sys/conf/files                      |     4 +-
 sys/kern/files.kern                 |     3 +-
 sys/kern/kern_lwp.c                 |     7 +-
 sys/kern/kern_malloc.c              |    17 +-
 sys/kern/subr_kmem.c                |    10 +-
 sys/kern/subr_msan.c                |  1356 +++++++++++++++++++++++++++++++++++
 sys/kern/subr_pool.c                |    55 +-
 sys/lib/libkern/libkern.h           |    19 +-
 sys/net/if.c                        |     7 +-
 sys/sys/atomic.h                    |    86 ++-
 sys/sys/bus_proto.h                 |    55 +-
 sys/sys/cdefs.h                     |     8 +-
 sys/sys/lwp.h                       |     9 +-
 sys/sys/msan.h                      |    85 ++
 sys/sys/systm.h                     |    20 +-
 sys/uvm/uvm_km.c                    |    11 +-
 37 files changed, 2198 insertions(+), 61 deletions(-)

diffs (truncated from 3389 to 300 lines):

diff -r 9f92e6c7def6 -r 58f11351c731 sys/arch/amd64/amd64/amd64_trap.S
--- a/sys/arch/amd64/amd64/amd64_trap.S Thu Nov 14 13:58:22 2019 +0000
+++ b/sys/arch/amd64/amd64/amd64_trap.S Thu Nov 14 16:23:52 2019 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: amd64_trap.S,v 1.49 2019/10/12 06:31:03 maxv Exp $     */
+/*     $NetBSD: amd64_trap.S,v 1.50 2019/11/14 16:23:52 maxv Exp $     */
 
 /*
  * Copyright (c) 1998, 2007, 2008, 2017 The NetBSD Foundation, Inc.
@@ -224,6 +224,7 @@
        cld
        SMAP_ENABLE
        IBRS_ENTER
+       KMSAN_ENTER
        movw    %gs,TF_GS(%rsp)
        movw    %fs,TF_FS(%rsp)
        movw    %es,TF_ES(%rsp)
@@ -267,6 +268,7 @@
        movw    %ds,TF_DS(%rsp)
 
        SVS_ENTER_NMI
+       KMSAN_ENTER
 
        movl    $MSR_GSBASE,%ecx
        rdmsr
@@ -292,6 +294,7 @@
        IBRS_LEAVE
 1:
 
+       KMSAN_LEAVE
        SVS_LEAVE_NMI
        INTR_RESTORE_GPRS
        addq    $TF_REGSIZE+16,%rsp
@@ -668,6 +671,7 @@
        movl    $T_ASTFLT,TF_TRAPNO(%rsp)
        movq    %rsp,%rdi
        incq    CPUVAR(NTRAP)
+       KMSAN_INIT_ARG(8)
        call    _C_LABEL(trap)
        jmp     .Lalltraps_checkast     /* re-check ASTs */
 3:     CHECK_DEFERRED_SWITCH
diff -r 9f92e6c7def6 -r 58f11351c731 sys/arch/amd64/amd64/busfunc.S
--- a/sys/arch/amd64/amd64/busfunc.S    Thu Nov 14 13:58:22 2019 +0000
+++ b/sys/arch/amd64/amd64/busfunc.S    Thu Nov 14 16:23:52 2019 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: busfunc.S,v 1.11 2013/06/22 05:20:57 uebayasi Exp $    */
+/*     $NetBSD: busfunc.S,v 1.12 2019/11/14 16:23:52 maxv Exp $        */
 
 /*-
  * Copyright (c) 2007, 2008 The NetBSD Foundation, Inc.
@@ -30,6 +30,7 @@
  */
 
 #include <machine/asm.h>
+#include <machine/frameasm.h>
 
 #include "assym.h"
 
@@ -47,10 +48,12 @@
        cmpl    $X86_BUS_SPACE_IO, BST_TYPE(%rdi)
        je      1f
        movzbl  (%rdx), %eax
+       KMSAN_INIT_RET(1)
        ret
 1:
        xorl    %eax, %eax
        inb     %dx, %al
+       KMSAN_INIT_RET(1)
        ret
 END(bus_space_read_1)
 
@@ -63,10 +66,12 @@
        cmpl    $X86_BUS_SPACE_IO, BST_TYPE(%rdi)
        je      1f
        movzwl  (%rdx), %eax
+       KMSAN_INIT_RET(2)
        ret
 1:
        xorl    %eax, %eax
        inw     %dx, %ax
+       KMSAN_INIT_RET(2)
        ret
 END(bus_space_read_2)
 
@@ -79,9 +84,11 @@
        cmpl    $X86_BUS_SPACE_IO, BST_TYPE(%rdi)
        je      1f
        movl    (%rdx), %eax
+       KMSAN_INIT_RET(4)
        ret
 1:
        inl     %dx, %eax
+       KMSAN_INIT_RET(4)
        ret
 END(bus_space_read_4)
 
@@ -94,6 +101,7 @@
        cmpl    $X86_BUS_SPACE_IO, BST_TYPE(%rdi)
        je      .Ldopanic
        movq    (%rdx), %rax
+       KMSAN_INIT_RET(8)
        ret
 END(bus_space_read_8)
 
diff -r 9f92e6c7def6 -r 58f11351c731 sys/arch/amd64/amd64/cpu_in_cksum.S
--- a/sys/arch/amd64/amd64/cpu_in_cksum.S       Thu Nov 14 13:58:22 2019 +0000
+++ b/sys/arch/amd64/amd64/cpu_in_cksum.S       Thu Nov 14 16:23:52 2019 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: cpu_in_cksum.S,v 1.3 2015/06/30 21:08:24 christos Exp $ */
+/* $NetBSD: cpu_in_cksum.S,v 1.4 2019/11/14 16:23:52 maxv Exp $ */
 
 /*-
  * Copyright (c) 2008 Joerg Sonnenberger <joerg%NetBSD.org@localhost>.
@@ -30,6 +30,7 @@
  */
 
 #include <machine/asm.h>
+#include <machine/frameasm.h>
 #include "assym.h"
 
 ENTRY(cpu_in_cksum)
@@ -282,6 +283,7 @@
 .Mreturn:
        popq    %rbx
        popq    %rbp
+       KMSAN_INIT_RET(4)
        ret
 
 .Mout_of_mbufs:
diff -r 9f92e6c7def6 -r 58f11351c731 sys/arch/amd64/amd64/cpufunc.S
--- a/sys/arch/amd64/amd64/cpufunc.S    Thu Nov 14 13:58:22 2019 +0000
+++ b/sys/arch/amd64/amd64/cpufunc.S    Thu Nov 14 16:23:52 2019 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: cpufunc.S,v 1.46 2019/10/30 17:06:57 maxv Exp $        */
+/*     $NetBSD: cpufunc.S,v 1.47 2019/11/14 16:23:52 maxv Exp $        */
 
 /*
  * Copyright (c) 1998, 2007, 2008 The NetBSD Foundation, Inc.
@@ -153,6 +153,7 @@
 ENTRY(x86_read_flags)
        pushfq
        popq    %rax
+       KMSAN_INIT_RET(8)
        ret
 END(x86_read_flags)
 
@@ -174,6 +175,7 @@
        addl    CPUVAR(CC_SKEW), %eax
        cmpq    %rdi, L_NCSW(%rcx)
        jne     2f
+       KMSAN_INIT_RET(4)
        ret
 2:
        jmp     1b
@@ -194,6 +196,13 @@
 
        xorq    %rax, %rax
        movq    %rax, PCB_ONFAULT(%r8)
+#ifdef KMSAN
+       movq    %rsi,%rdi
+       movq    $8,%rsi
+       xorq    %rdx,%rdx
+       callq   _C_LABEL(kmsan_mark)
+#endif
+       KMSAN_INIT_RET(4)
        ret
 END(rdmsr_safe)
 
@@ -211,12 +220,14 @@
        shlq    $32, %rdx
        orq     %rdx, %rax
        addq    CPUVAR(CC_SKEW), %rax
+       KMSAN_INIT_RET(8)
        ret
 END(cpu_counter)
 
 ENTRY(cpu_counter32)
        rdtsc
        addl    CPUVAR(CC_SKEW), %eax
+       KMSAN_INIT_RET(4)
        ret
 END(cpu_counter32)
 
@@ -230,11 +241,13 @@
 
 ENTRY(x86_curcpu)
        movq    %gs:(CPU_INFO_SELF), %rax
+       KMSAN_INIT_RET(8)
        ret
 END(x86_curcpu)
 
 ENTRY(x86_curlwp)
        movq    %gs:(CPU_INFO_CURLWP), %rax
+       KMSAN_INIT_RET(8)
        ret
 END(x86_curlwp)
 
@@ -246,12 +259,14 @@
 ENTRY(__byte_swap_u32_variable)
        movl    %edi, %eax
        bswapl  %eax
+       KMSAN_INIT_RET(4)
        ret
 END(__byte_swap_u32_variable)
 
 ENTRY(__byte_swap_u16_variable)
        movl    %edi, %eax
        xchgb   %al, %ah
+       KMSAN_INIT_RET(2)
        ret
 END(__byte_swap_u16_variable)
 
@@ -330,6 +345,7 @@
        movq    %rdi, %rdx
        xorq    %rax, %rax
        inb     %dx, %al
+       KMSAN_INIT_RET(1)
        ret
 END(inb)
 
@@ -346,6 +362,7 @@
        movq    %rdi, %rdx
        xorq    %rax, %rax
        inw     %dx, %ax
+       KMSAN_INIT_RET(2)
        ret
 END(inw)
 
@@ -362,6 +379,7 @@
        movq    %rdi, %rdx
        xorq    %rax, %rax
        inl     %dx, %eax
+       KMSAN_INIT_RET(4)
        ret
 END(inl)
 
diff -r 9f92e6c7def6 -r 58f11351c731 sys/arch/amd64/amd64/lock_stubs.S
--- a/sys/arch/amd64/amd64/lock_stubs.S Thu Nov 14 13:58:22 2019 +0000
+++ b/sys/arch/amd64/amd64/lock_stubs.S Thu Nov 14 16:23:52 2019 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: lock_stubs.S,v 1.32 2019/09/05 12:57:30 maxv Exp $     */
+/*     $NetBSD: lock_stubs.S,v 1.33 2019/11/14 16:23:52 maxv Exp $     */
 
 /*
  * Copyright (c) 2006, 2007, 2008, 2009 The NetBSD Foundation, Inc.
@@ -343,6 +343,7 @@
        cmpxchgb %ah, (%rdi)
        movl    $0, %eax
        setz    %al
+       KMSAN_INIT_RET(4)
        RET
 END(__cpu_simple_lock_try)
 
diff -r 9f92e6c7def6 -r 58f11351c731 sys/arch/amd64/amd64/locore.S
--- a/sys/arch/amd64/amd64/locore.S     Thu Nov 14 13:58:22 2019 +0000
+++ b/sys/arch/amd64/amd64/locore.S     Thu Nov 14 16:23:52 2019 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: locore.S,v 1.189 2019/10/12 06:31:03 maxv Exp $        */
+/*     $NetBSD: locore.S,v 1.190 2019/11/14 16:23:52 maxv Exp $        */
 
 /*
  * Copyright-o-rama!
@@ -1235,6 +1235,7 @@
 
 .Lswitch_return:
        /* Return to the new LWP, returning 'oldlwp' in %rax. */
+       KMSAN_INIT_RET(8)
        movq    %r13,%rax
        popq    %r15
        popq    %r14
@@ -1321,6 +1322,7 @@
        STI(si)
        /* Pushed T_ASTFLT into tf_trapno on entry. */
        movq    %rsp,%rdi
+       KMSAN_INIT_ARG(8)
        call    _C_LABEL(trap)
        jmp     .Lsyscall_checkast      /* re-check ASTs */
 END(handle_syscall)
@@ -1336,8 +1338,10 @@
        movq    %rbp,%r14       /* for .Lsyscall_checkast */
        movq    %rax,%rdi
        xorq    %rbp,%rbp
+       KMSAN_INIT_ARG(16)
        call    _C_LABEL(lwp_startup)
        movq    %r13,%rdi
+       KMSAN_INIT_ARG(8)
        call    *%r12
        jmp     .Lsyscall_checkast
 END(lwp_trampoline)
@@ -1410,6 +1414,7 @@
        .if     \is_svs
                SVS_ENTER
        .endif
+       KMSAN_ENTER
        jmp     handle_syscall
 IDTVEC_END(\name)
 .endm
@@ -1453,6 +1458,7 @@
        TEXT_USER_BEGIN
        _ALIGN_TEXT


