remote kernel debugging over a network
Hello, kernel hackers!
I am the student doing the "Remote kernel debugging over Ethernet"
Google Summer of Code project.
I have to make a decision about the protocol that will be used for
remote kernel debugging.
The environment for running any protocol, when the kernel is stopped for
inspection, is very limited. Normal interrupt delivery is disabled and
we can't rely on any kernel facilities.
My question is: Should I go with a custom protocol (+ a daemon that
proxies to TCP), or should I go with TCP directly?
(TCP is a necessary part in either case, because gdb has built-in support
for speaking its remote debug protocol over TCP.)
With a custom protocol, we can:
1) have packet-based flow control, e.g. if we have 3 free RX descriptors
in the NIC receive ring, we can communicate that to the peer, so that it
doesn't send more than that (a rough header sketch is below).
2) pass the responsibility for detecting packet loss and doing
retransmissions to the more capable peer (the one on the developer's
workstation).
3) do anything else that comes to mind.
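To make 1) and 2) a bit more concrete, here is a rough sketch of what a
per-frame header for such a custom protocol could look like. Nothing like
this exists yet; the structure name, field names and sizes are only
illustrative:

    #include <stdint.h>

    /* Hypothetical header, carried at the start of each Ethernet payload. */
    struct debug_hdr {
            uint16_t seq;        /* sequence number of this frame */
            uint16_t ack;        /* highest frame received in order */
            uint8_t  rx_credits; /* free RX descriptors left on the target */
            uint8_t  flags;      /* e.g. "please retransmit from 'ack'" */
            uint16_t len;        /* length of the gdb payload that follows */
    } __attribute__((__packed__));

The rx_credits field would give us 1), and since the target only ever acks
what it has actually seen, all the retransmission machinery can live in the
proxy daemon on the workstation, which is 2).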
If we use TCP:
1) we don't need to re-invent or rewrite what we already have. TCP has
flow control and retransmission. There are small TCP implementations
available.
2) we don't need a proxy daemon. Gdb will connect directly to the kernel
being debugged (see the example below).
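With the TCP approach, the gdb side would be nothing more than the usual
remote target command (host name and port here are just placeholders):

    (gdb) target remote crashed-box:1234

With the custom protocol, the same command would instead point at the TCP
port of the proxy daemon running on the workstation.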
Using TCP directly requires retransmission timers. We have at least
these ways to emulate timers:
1) Use delay(len) to sleep for one "tick" of length 'len', counting the
ticks and thus measuring time (sketched below).
2) Continuously poll the Ethernet NIC for arriving packets, with an
external daemon sending us a tick packet every 'len' microseconds. Each
tick packet counts as one tick; count the ticks and measure time that way.
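For option 1), the retransmission timer would boil down to something like
the sketch below. delay() is the usual busy-wait primitive that keeps
working with interrupts off; the helper names and the constants are made up
here for illustration:

    #define TICK_US        1000U   /* length of one tick, in microseconds */
    #define RETRANS_TICKS  200U    /* ~200 ms retransmission timeout */

    void delay(unsigned int);               /* the kernel's busy-wait */

    /* hypothetical helpers provided elsewhere in the debugger stub */
    static int  debug_poll_rx(void);        /* poll the NIC RX ring once */
    static void debug_retransmit(void);     /* resend the unacked segment */

    static void
    debug_wait_ack(void)
    {
            unsigned int ticks = 0;

            for (;;) {
                    if (debug_poll_rx())    /* ack (or new data) arrived */
                            return;
                    delay(TICK_US);         /* one tick passes */
                    if (++ticks >= RETRANS_TICKS) {
                            debug_retransmit();     /* timeout: resend */
                            ticks = 0;
                    }
            }
    }

Option 2) would look the same, except that instead of calling delay() the
loop would count the tick packets arriving from the external daemon.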
So, what do you think? Custom protocol with a proxy daemon or TCP directly?
--
Jordan Gordeev