Subject: ingres and atomicity
To: None <tech-userlevel@netbsd.org>
From: James K. Lowden <jklowden@schemamania.org>
List: tech-userlevel
Date: 06/12/2005 22:22:43
Back to the future: Ingres is free again.  If you google "ingres
site:netbsd.org" you get a lot of messages about the group, and some
ancient ones about the original RDBMS from Berkeley.  You may know CA
has released it as open source.  I thought I'd try my hand at porting it.
Since going commercial, Ingres has grown threads, which it seems I'll
have to learn about.

My first real difficulty has to do with "atomic clear".  The build
requires it:

ingres/src/cl/hdr/hdr_unix_win/csnormal.h:2621:11: 
	#error : must define an atomic clear

(followed closely by: 
ingres/src/cl/hdr/hdr_unix_win/csnormal.h:2705:3
: #error "BUILD ERROR: Need to provide Compare & Swap Routines"
)
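For the second error, the build apparently wants a compare-and-swap
primitive.  A hedged sketch of the semantics it seems to be asking for,
using C11 atomics -- the name and signature are my guesses, not the
Ingres CL interface:

```c
#include <stdatomic.h>

/* Sketch of a compare-and-swap: atomically, if *addr == oldval,
 * store newval and return 1 (success); otherwise return 0.
 * cs_cas_sketch is a hypothetical name, not from the Ingres source. */
static int cs_cas_sketch(atomic_long *addr, long oldval, long newval)
{
    return atomic_compare_exchange_strong(addr, &oldval, newval);
}
```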

It's defined variously by processor and OS.  Here's the axp version:

** CS_aclr(CS_ASET *lock_variable)
**
** Description:
**      Atomic clear of a quadword.
**
**      A pre-condition is that this function is called only by a process
**      holding the specified lock.
**
**      There is no need to use the ldq_l, stq_c pair here, if another
**      process is contending for the lock this atomic clear should have
**      priority.
**
**      The stq instruction here will clear the processor lock flag in
**      another process (if any) contending for the lock.
**
**      The initial mb (memory barrier) instruction prevents any read
**      prefetching or writes from being delayed past the clearing of
**      the lock.
...
        .text
        .align  4

        .set    noreorder
        .globl  CS_aclr
        .ent    CS_aclr

CS_aclr:
        ldgp    gp, 0(pv)
	.frame	sp, 0, ra
	.prologue 1
	mb				/* lock acquired, prevent race
					   conditions due to pipelining */
	stq	zero, 0(a0)		/* *lock_variable = 0 (atomic) */
	mov	1, v0			/* return value 1 (success) */
	ret
	.end	CS_aclr
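In C terms the Alpha routine is just a release-ordered store of zero:
barrier first, then a plain store.  A sketch of the same semantics in
C11 atomics (my paraphrase, not part of the port):

```c
#include <stdatomic.h>

/* C11 equivalent of the mb + stq sequence above: the release ordering
 * keeps earlier reads and writes from being delayed past the clear,
 * then the lock word is set to zero.  Returns 1 (success) like CS_aclr. */
static int cs_aclr_sketch(atomic_long *lock_variable)
{
    atomic_store_explicit(lock_variable, 0, memory_order_release);
    return 1;
}
```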

Linux i386 seems to define it as CS_relspin:

# define   CS_ACLR(a)    (CS_relspin(a))

implemented as:

/* CS_relspin(memloc)
 * CS_SPIN *memloc
 *  Release spin lock
 *  Note:  on Model A and Model D processors we must use an atomic clear 
 *  to ensure cache coherency  */
_CS_relspin:
/*  Set lock clear code */ 
	movb 	$0, %al
/* load the address of the lock into a register */
	movl	4(%esp), %ecx  
/* Atomically swap our clear value into the lock; the old (set)
 * value is discarded.  */
	xchgb   %al, (%ecx)	
endrelspin:
	ret
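The xchgb trick relies on a locked exchange being a full barrier on
x86.  In portable C that would be an atomic exchange of zero -- again
my sketch, not the Ingres code:

```c
#include <stdatomic.h>

/* Portable equivalent of the xchgb above: swap 0 into the lock byte
 * with full (seq_cst) ordering; the old value -- the set lock -- is
 * simply discarded.  cs_relspin_sketch is a hypothetical name. */
static void cs_relspin_sketch(atomic_uchar *memloc)
{
    (void)atomic_exchange(memloc, 0);
}
```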

Ugh.  I didn't expect to deal with machine-specific stuff.  Does our
kernel provide such a thing?  (I think it's a kernel question?)  Or am I
already in what W's Dad famously called "deep doo doo"?  

Thanks.

--jkl