Hi,
Corosync can be compiled without any patches. You can pick version 1.4.4 from corosync.org and just compile it. As for cluster-glue and Pacemaker, I didn't make any patches yet; there were some minor issues to fix in order to make them compile. Also, I used a slightly older release of cluster-glue because the most recent one insists on libaio :(
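For reference, the Corosync part boils down to the usual autotools sequence (the exact tarball name and the use of gmake here are just my assumptions, not anything NetBSD-specific):

$ tar xzf corosync-1.4.4.tar.gz    # release tarball from corosync.org
$ cd corosync-1.4.4
$ ./configure
$ gmake                            # GNU make; plain BSD make may work as well
$ gmake install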
I just found that my build was already compiled with -g. How can I print the content of conn_info and analyze the cause of the crash?
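My rough idea so far is a gdb session along the following lines, assuming corosync leaves a core dump behind (the binary path, core file name and frame number are only guesses on my side):

$ gdb /usr/sbin/corosync corosync.core    # or wherever the binary and core file end up
(gdb) bt                                  # locate the ipc_thread_active() frame
(gdb) frame 2                             # select that frame (number taken from the bt output)
(gdb) print conn_info                     # NULL or an obviously bogus address?
(gdb) print *conn_info                    # dump the whole struct conn_info
(gdb) print conn_info->state              # compare against CONN_STATE_THREAD_ACTIVE

If conn_info itself turns out to be NULL or garbage, the mutex is not the real problem; the connection object handed to ipc_thread_active() is.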
Regards,
Stephan

2012/12/1 Christos Zoulas <christos%astron.com@localhost>:

In article <CABZpUSWKyWkJnWEsFuuHpAuj+YxuxZ3ny=kEFW2pzk6B3ZUaWQ%mail.gmail.com@localhost>,
Stephan <stephanwib%googlemail.com@localhost> wrote:
>Hi folks,
>
>some time ago I managed to get Corosync working on NetBSD - now the
>official 1.x builds can be compiled and run "out of the box". Recently, I
>made Pacemaker 1.1 and cluster_glue compile after making some cosmetic
>changes to the code.
>
>However, when I load the Pacemaker subsystem into Corosync, corosync
>crashes with SIGSEGV. It always does so in pthread_mutex_lock(), for example in
>the following code block:
>
>===========
>static int ipc_thread_active (void *conn)
>{
>        struct conn_info *conn_info = (struct conn_info *)conn;
>        int retval = 0;
>
>        pthread_mutex_lock (&conn_info->mutex);    <<<--- CRASH
>        if (conn_info->state == CONN_STATE_THREAD_ACTIVE) {
>                retval = 1;
>        }
>        pthread_mutex_unlock (&conn_info->mutex);
>        return (retval);
>}
>==================
>
>I am not sure how to track this down with gdb. Here are some findings:

If you compile with -g you should be able to print the contents of conn_info.

christos