NetBSD-Bugs archive


port-xen/47193: xenwatch handlers are added twice to the list of pending watches



>Number:         47193
>Category:       port-xen
>Synopsis:       xenwatch handlers are added twice to the list of pending watches
>Confidential:   no
>Severity:       critical
>Priority:       high
>Responsible:    port-xen-maintainer
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Wed Nov 14 17:05:00 +0000 2012
>Originator:     Roger Pau Monné
>Release:        6.99.14
>Organization:
Citrix Systems R&D
>Environment:
NetBSD test 6.99.14 NetBSD 6.99.14 (XEN3_DOM0) #11: Wed Nov 14 17:29:21 CET 2012  royger@mac:/Users/royger/obj/sys/arch/amd64/compile/XEN3_DOM0 amd64
>Description:
When a xenstore watch triggers, the event is processed in process_msg() and, if a 
valid handle is found, the handler is queued for execution on the queue of pending 
xenwatch events (watch_events).

This becomes a problem if a xenwatch triggers several times and the device is then 
disconnected. If several xenwatch events are added to the watch_events queue before 
the device is disconnected, the first event processed disconnects the device, 
removes the watch and frees all of its resources. If further events for that device 
are still pending, already moved to the local queue of xenwatch_thread(), the next 
event with the same handler then runs against a device that has already been 
disconnected and whose resources have been freed, and the kernel panics:

xenbus_watch: 0xffffa0000b7cd1d0
xbw_callback: 0xffffffff80755dd4
otherend_changed: backend/vif/1/0
backend/vif/1/0/state 6
backend/vif/1/0 -> Closed
backend/vif/1/0 -> backend_device, b_detach: 0xffffffff8075a2bf
xenbus_watch: 0xffffa0000b7cd1d0
xbw_callback: 0xfc5ec02183e547a8
fatal protection fault in supervisor mode
trap type 4 code 0 rip ffffffff80756596 cs e030 rflags 10246 cr2 7f7ff7b4c020 ilevel 0 rsp ffffa000e6d82c50
curlwp 0xffffa0000a72d580 pid 0 lid 36 lowest kstack 0xffffa000e6d7f000
kernel: protection fault trap, code=0
Stopped in pid 0.36 (system) at netbsd:xenwatch_thread+0xc7:    call    *10(%rax)
xenwatch_thread() at netbsd:xenwatch_thread+0xc7
ds          f
es          5987
fs          2c40
gs          1460
rdi         ffffa0000b7cd1d0
rsi         ffffa0000a5477f0
rbp         ffffa000e6d82c70
rbx         ffffa0000b7c14c0
rdx         2
rcx         f
rax         ffffa0000b7cd1d0
r8          78
r9          ffffffef
r10         deadbeef
r11         1
r12         ffffa000e6d82c50
r13         ffffa0000a72d580
r14         ffffa0000a72d580
r15         0
rip         ffffffff80756596    xenwatch_thread+0xc7
cs          e030
rflags      10246
rsp         ffffa000e6d82c50
ss          e02b
netbsd:xenwatch_thread+0xc7:    call    *10(%rax)
db>
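
To make the failure mode concrete, here is a minimal user-space sketch of the
queueing pattern (hypothetical fake_* names and a toy callback, not the actual
xenbus code; it only assumes NetBSD's <sys/queue.h> SIMPLEQ macros). Nothing
prevents two events referencing the same watch handle from sitting in the queue
at once, so the callback runs once per queued event; in the kernel the first run
may detach the device and free the watch, and the next indirect call through the
stale handle is the "call *10(%rax)" that faults above.

/*
 * Toy illustration only: a producer queues events per watch handle and a
 * consumer invokes the handle's callback once per event.  Nothing stops
 * the same handle from being queued twice.
 */
#include <sys/queue.h>
#include <err.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_watch {
	const char *node;
	void (*cb)(struct fake_watch *);
};

struct fake_event {
	SIMPLEQ_ENTRY(fake_event) ev_next;
	struct fake_watch *handle;
};

static SIMPLEQ_HEAD(, fake_event) fake_events =
    SIMPLEQ_HEAD_INITIALIZER(fake_events);

static void
fake_otherend_changed(struct fake_watch *w)
{
	/* The real handler may detach the device and free 'w' here. */
	printf("callback for %s, watch %p\n", w->node, (void *)w);
}

int
main(void)
{
	struct fake_watch w = { "backend/vif/1/0/state", fake_otherend_changed };
	struct fake_event *ev;
	int i;

	/* Producer (like process_msg()): queue two events for one handle. */
	for (i = 0; i < 2; i++) {
		if ((ev = malloc(sizeof(*ev))) == NULL)
			err(1, "malloc");
		ev->handle = &w;
		SIMPLEQ_INSERT_TAIL(&fake_events, ev, ev_next);
	}

	/* Consumer (like xenwatch_thread()): one callback per event. */
	while ((ev = SIMPLEQ_FIRST(&fake_events)) != NULL) {
		SIMPLEQ_REMOVE_HEAD(&fake_events, ev_next);
		ev->handle->cb(ev->handle);
		free(ev);
	}
	return 0;
}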

This is a security problem, because a DomU can effectively crash the Dom0, so I 
think this PR should be marked as confidential until a fix is committed.
>How-To-Repeat:

>Fix:
The fix is straightforward: check whether the watch handler has already been added 
to the queue of pending events before adding it again. Here is a proposed patch.

---
diff --git a/sys/arch/xen/xenbus/xenbus_xs.c b/sys/arch/xen/xenbus/xenbus_xs.c
index c09e7c4..8cef74a 100644
--- a/sys/arch/xen/xenbus/xenbus_xs.c
+++ b/sys/arch/xen/xenbus/xenbus_xs.c
@@ -751,7 +751,7 @@ xenwatch_thread(void *unused)
 static int
 process_msg(void)
 {
-       struct xs_stored_msg *msg;
+       struct xs_stored_msg *msg, *s_msg;
        char *body;
        int err;
 
@@ -782,7 +782,7 @@ process_msg(void)
        body[msg->hdr.len] = '\0';
 
        if (msg->hdr.type == XS_WATCH_EVENT) {
-               bool found;
+               bool found, repeated;
 
                DPRINTK("process_msg: XS_WATCH_EVENT");
                msg->u.watch.vec = split(body, msg->hdr.len,
@@ -797,13 +797,20 @@ process_msg(void)
                    msg->u.watch.vec[XS_WATCH_TOKEN]);
                found = (msg->u.watch.handle != NULL);
                if (found) {
+                       repeated = false;
+                       SIMPLEQ_FOREACH(s_msg, &watch_events, msg_next) {
+                       if (s_msg->u.watch.handle == msg->u.watch.handle)
+                                       repeated = true;
+                       }
+               }
+               if (found && !repeated) {
                        mutex_enter(&watch_events_lock);
                        SIMPLEQ_INSERT_TAIL(&watch_events, msg, msg_next);
                        cv_broadcast(&watch_cv);
                        mutex_exit(&watch_events_lock);
                }
                mutex_exit(&watches_lock);
-               if (!found) {
+               if (!found || repeated) {
                        free(msg->u.watch.vec, M_DEVBUF);
                        free(msg, M_DEVBUF);
                }
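
For reference, the same check applied to the toy queue from the sketch above
(again hypothetical fake_* names, not the kernel code): before queueing a new
event, the producer scans the pending queue for an entry with the same handle
and drops the duplicate, just as the patched process_msg() frees the message
instead of inserting it. Coalescing duplicates this way should be safe because
a watch callback reads the current state of the node from xenstore when it
runs, so one pending event per handle is enough.

/*
 * Duplicate check for the toy queue: true when an event for 'handle' is
 * already pending, mirroring the SIMPLEQ_FOREACH scan added by the patch.
 */
static int
fake_handle_queued(struct fake_watch *handle)
{
	struct fake_event *ev;

	SIMPLEQ_FOREACH(ev, &fake_events, ev_next) {
		if (ev->handle == handle)
			return 1;
	}
	return 0;
}

/* In main(), the producer body then queues only non-duplicates: */
	if (!fake_handle_queued(&w)) {
		if ((ev = malloc(sizeof(*ev))) == NULL)
			err(1, "malloc");
		ev->handle = &w;
		SIMPLEQ_INSERT_TAIL(&fake_events, ev, ev_next);
	}
	/* else: drop the event, this handle already has one queued. */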


