pkgsrc-Bugs archive


pkg/59009: pbulk-build SIGSEGV on null pointer dereference



>Number:         59009
>Category:       pkg
>Synopsis:       pbulk-build SIGSEGV on null pointer dereference
>Confidential:   no
>Severity:       serious
>Priority:       medium
>Responsible:    pkg-manager
>State:          open
>Class:          sw-bug
>Submitter-Id:   net
>Arrival-Date:   Sun Jan 19 10:40:00 +0000 2025
>Originator:     Taylor R Campbell
>Release:        2024Q4
>Organization:
The NetBulkbuilD Foundacrash
>Environment:
>Description:
Resolving...
Building...
Initialisation complete.
[1245/28443] Starting build of  cwrappers-20220403
pbulk-build: Premature end of stream while reading path from socket
[1]   Segmentation fault (core dumped) ${pbuild} -r ${loc}/pbuild -I ${pbuild_start_s...

$ gdb /pbulk/2024Q4/pkg/bin/pbulk-build ./pbulk-build.core
...
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000000001d60389c in send_build_info (arg=0x7b40d8c92270) at master.c:128
128     master.c: No such file or directory.
(gdb) info locals
peer = 0x7b40d8c92270
(gdb) print peer->job
$1 = (struct build_job *) 0x0
(gdb) print *peer
$2 = {peer_link = {le_next = 0x0, le_prev = 0x1d80f328 <inactive_peers>}, job = 0x0, fd = 9, tmp_buf = "\000\000\021h", buf = 0x0}
(gdb) print active_peers
$3 = {lh_first = 0x7b40d8c92270}
(gdb) print *active_peers->lh_first
$4 = {peer_link = {le_next = 0x0, le_prev = 0x1d80f328 <inactive_peers>}, job = 0x0, fd = 9, tmp_buf = "\000\000\021h", buf = 0x0}
(gdb) print inactive_peers
$5 = {lh_first = 0x7b40d8c92270}
(gdb) print active_peers.lh_first == inactive_peers.lh_first
$6 = 1
(gdb) print unassigned_peers 
$7 = {lh_first = 0x0}
(gdb) print clients_started
$8 = 1
(gdb) print child_event
$9 = {sig_link = {le_next = 0x0, le_prev = 0x1d80f3c8 <all_signals>}, sig_id = 20, sig_received = 0, sig_handler = 0x1d603be1 <child_handler>}
(gdb) print child_pid
$10 = 278

master.c:

   128          deferred_write(peer->fd, peer->job->begin, peer->job->end - peer->job->begin, peer, recv_status,
   129              kill_peer);

It is curious that both active_peers and inactive_peers point to the same peer: a <sys/queue.h> LIST element can be on only one list at a time, so this looks like an insertion into a second list without a LIST_REMOVE from the first.  This seems suboptimal.

I have the core dump and the pbulk-build binary with debug data, and can print more info on request.
>How-To-Repeat:
run pbulk a lot
>Fix:
Yes, please!


