tech-kern archive


Search space complexity -> Re: ARC model specified in spinroot/promela



Hi Again,

This email is a very long rabbit-hole into formal verification itself.
If you don't really care about that - just about QA and CI integration -
then you can skip this mail entirely; I will send an update on that in
a few days.

If you would like to help out with the verification run, then please try
the attached patch on a reasonably powerful machine (previous reports on
list seem to have made progress with about 64GB of RAM etc.), and please
send me the output offlist. TIA!

As usual, with $PATH suitably set for spin and/or modex, the commands
are:

# For hand written model verification
$ make clean spin-gen spin-run spin-trace
# For modex extracted model verification
$ make clean modex-gen spin-run spin-trace

If you care about the details and some minimal theory of how spin can be
best made to explore statespace, and how it is different from
hand-written test cases, then please read on, and also try the updated
patch below.

contd. - inline below.... please read on...

>>>>> Mathew, Cherry G * <c%bow.st@localhost> writes:


[...]


    > A few things remain WIP. Obviously, this is a "toy" model - it has
    > stack based "memory" management, and a directory buffer size of
    > 64. So that needs to be hammered on a bit more. Further, I'm keen
    > to now dip my toes into re-entrancy. If you noticed in the earlier
    > patches, there were model routines for
    > "mutex_enter()/mutex_exit()". I'll re-look these and play around a
    > little bit with concurrent entry of ARC() (ARC() is now strictly
    > entered sequentially in a loop - see arc.drv). Once things settle
    > down, I will look at making an actual block disk driver with an
    > ARC(9) managed cache using this code.

I will work on this next. But see below for FV details. 


    > Another area of further work would be the *.inv files, where the
    > invariant specifications are made.

I bring your attention to the last expression in arc/arc.inv

Here, there is a block marked as follows:

#if 1 /* Disable for easy demo. Enable to search harder */

This is a knob you can turn off to get a quick feel for the
description below. When I refer to the spin "trace", I am referring to
what the console barfs out when you run the above commands with the
#if negated (ie; s/#if 1/#if 0/).

    > I also noticed that the arc input trace makes the extracted and
    > hand-coded models exclude slightly different looking codepaths. I
    > need to review these to understand how/what is different. Finally,
    > need to update the input trace itself to exercise all possible
    > codepaths in both.


This problem went away on its own, because the current state space
exploration code exercises all code paths (confirmed on both the
hand-coded model and the modex-extracted model).

Right, so let's start with what we're trying to do, and how spin (and
modern Finite Automaton based verification) differs from the predicate
logic based testing (eg: assert()s) we currently rely on in our codebases.

I'm obviously not an FV expert, so please refer to all the very
excellent documentation out there - in particular, this spin Tutorial
helped me quite a bit [1]. If my description below is inconsistent or
has errors, please feel free to point them out, on list.

Chapter 1: Specification vs. Code

Given a "program" - in this case our promela model in arc/arc.pml - spin
views each statement as a potentially reachable "state" in a
Non-Deterministic Finite Automaton (NDFA). The first line will be the
"Start" state, and the end of the file is usually considered the "End"
state (unless one modifies that using "end:" labels - see the spin manual
for details). We will call a single instance of
{ "Start" -> "Execute subset of given statements" -> "End" } a "Run".
Why "Non-Deterministic"? Because there can be more than one possible
branch a statement can take to land on the next state during a
"Run". Which branch (formally called an "Edge", "Arrow" or "transition",
depending on which book you read) is taken can depend on the current
state of select variables one can specify. Spin provides the if/fi and
do/od constructs to facilitate this.

Aside:
Note that a single run is deterministic and can thus be viewed as a
constituent subset of the DFA equivalent to the specified NDFA.
(Recall that any NDFA can be "broken out" into an equivalent DFA.)

This is where things start to get interesting. When we normally program
in C, for eg:, the language semantics do not force us to consider all
possible execution paths.

As programmers, we thus usually mentally model only the functional case
at hand (eg: Add two numbers, add()) and relegate the alternative cases
(eg: the legal range of values that can be provided to the add()
function) to a "later" problem to be investigated during "testing".

For eg: if I were to write the following C code block:

state0: /* Start */
{
state1:
	int i = 0;
state2:        
	if (i == 0) {
state3:
	   printf("Expr true\n");
	}
state4: /* End */	           
}

we know the exact deterministic path of states the program will go
through during a single run,
ie; state0->state1->state2->state3->state4

As programmers, we normally don't need to worry about the case where
(i != 0), because we normally think of programs functionally, and in the
context of a *single run*. "All other cases are for the testers to worry
about", would be the typical lazy programmer perspective.

However, spin's modelling language, promela, has constructs to specify
the program (model) with every possible state transition based on the
state of specified variables in mind. In fact, spin forces you to
explicitly "hide" variables you don't want to contribute to the
set of Runs to be explored.

For eg: in the above scenario, the following fragment would seemingly be
the equivalent promela model:

state0: /* Start */
{
state1:
        int i = 0;

        if
state2:
        :: (i == 0) ->
state3:        
              printf("Expr true\n");
        fi
state4:
}

At first glance, this may look equivalent to the previous C code snippet
above, but spin views this snippet as a Non Deterministic Finite
Automaton (NDFA) which is to be decomposed into all possible constituent
Run DFAs which make up its equivalent DFA. Remember that a single Run
consists of a single sequence of precisely deterministic states - ie; a
Deterministic Finite Automaton (DFA). In contrast, we view the C code as
an NDFA that's left to the executing computer to explore the Run state
progression (DFA) *one* possible Run at a time.

To illustrate this, let's assume that in state1:, i = 1 in both snippets
above. In the case of C code, because the program state sequence is
the job of the executing computer during Run time, we know the state
transition precisely - ie; state0->state1->state2->state4 and don't
care about other possibilities eg: "What about state3 ?"

Spin however views this very differently. It views the

state1->state2->... 

transition as *ONE CASE* among all the several possible Run sequences
regardless of the currently considered value of i (remember, we assume
i = 1 in this scenario). Since 'i' is declared as type int, the
set of possible values is (INT_MIN <= i <= INT_MAX). The specific edge
traversed next in the NDFA is determined, in this case, by the precise
current value of i, which is expected to be enumerated case by case
via the if/fi construct.

Thus when it arrives at state2, spin will need to know what to do next
(ie; which edge to traverse for the next statement) based on the
current value of 'i', in all possible cases of 'i'. If these cases are
not specified as forward edges, spin views this as an underspecification
error and "block"s the Run (as there is no transition edge corresponding
to the current value of 'i' to choose). Thus, to fix this, we need to
provide the "in all other cases" edge, compressed into a keyword called
"else", as follows:

state0: /* Start */
{
state1:
        int i = 0;

        if
state2:
        :: (i == 0) ->
state3:        
              printf("Expr true\n");
state3.5a: /* In all other cases, including i == 1 */             
        :: else ->
state3.5b:
                skip;
        fi
state4:
}

For spin experts, please forgive my oversimplified explanation, which is
really about the "executability" of "state2: :: (i == 0 ..." vs.
"state3.5a: :: else ..." where "else" evaluates to true/executable in
the specific situation that no other alternative in the if/fi block is
executable.

A keen observer will now note that, if the current value of the variable
'i' were not determinable, one would still be able to specify in
promela what we intend for our program to do. In other words, if we
were able to do something like 'state1: i = rand() % i' (not available
in spin for very good reasons that are out of the scope of this
discussion), then even in that case the specification above would be
able to process the program behaviour in *ALL POSSIBLE* "Run" paths
between state0->...state4, whereas the C program can be satisfied by a
*SINGLE* Run path, at random. To do the functional equivalent of what
spin does in this case, we would have to write C code that exhaustively
loops the entire code block from state1: through all possible values of
the variable 'i' between INT_MIN and INT_MAX.

Recalling from undergrad computer science discourse, one can view the
promela model "program" in the context of spin attempting to decompose
the specification into all possible deterministic "Run"s - ie;
technically, the attempt is to decompose the specification program,
which is an NDFA, into all possible "Run"s, each Run being a constituent
subset of the equivalent DFA. This is the fundamental context in which
we need to view spin's promela specifications when we wish to verify
our specified model.

Chapter 2: Specifying Properties of models

"So you've got an iterator for the statespace of all variables in a given
program, so what? I can write that in a couple of C while(){} loops!",
one might argue. This is where spin sets itself apart from regular
programming.

Let's assume that one were to write a large set of exhaustive tests,
taking into account the entire state space of all variables in the
code. The core part of these tests would be what's called
"Propositional Logic". For eg: if you wanted to exhaustively test an
add() function, add(int a, int b), one could potentially write a loop
as follows:

...
test_add()
{
        long long a, b; /* 'int' counters would overflow at INT_MAX */

        for (a = INT_MIN; a <= INT_MAX; a++) {
            for (b = INT_MIN; b <= INT_MAX; b++) {
                assert(add((int)a, (int)b) == (int)(a + b));
            }
        }
}
        
Here, the core propositional logic is the '==' operator. We exhaustively
"walk the timeline" (ie; every single Run sequence) of a and b, but are
only able to ask the question "is proposition 'P' true *NOW*?" (here
proposition 'P' would be 'add(a,b) == (a + b)'). There is no mechanism
in propositional logic to ask questions about the behaviour of state
over a period of time (ie; a sequence of states during a Run) wrt the
current state. If we are interested in making generalised (technically
called "regular") statements about the behaviour of an automaton over a
period of time, we need a different set of logical tools, called
"Linear Temporal Logic" (LTL). LTL provides us with a superset of the
usual propositional logic operators that takes into account the
"timeline", or more precisely, the exact path taken by a given "Run" DFA.

So for eg: you could make LTL statements such as:

"always add(a, b) == (a + b)"

or

"eventually (a == b)"

The view is that of standing "outside" the for() loops above, and making
logical assertions about the state evolutions happening within the
loops, for all possible combination of values (states) of a, b and
add().

This is what we attempt to do in the file arc/arc.inv - in this case,
the logical assertions listed out in the ltl{} section are about the
properties of the ARC buffers over time. If you bring your attention
to the last assertion guarded by a #ifdef DISCOVER_STATEPATH/#endif
pair, and modify the #if 1 to a #if 0, you can quickly see how spin
operates on the assertion that
"always ! (lengthof(T1) == C && lengthof(B2) == C)"

Just as one might apply De Morgan's laws in propositional logic, LTL has
its own set of transformations, with which one can rewrite the above as:

"it is never true that the lengths of buffers T1 and B2 are both equal to C"

or, more intuitively, "T1 and B2 are never simultaneously full".

( always !P <==> never P )

When we make this assertion, spin tries to disprove this by exhaustively
searching the space of all states and state transitions, in order to
identify a case where the assertion is false. If it is able to find this
case, then the entire state transition trail is printed out on the
console, along with the final state at which the assertion was
falsified. This is what happens when you run

'$ make clean spin-gen spin-run spin-trace'

If spin is unable to find a counter example to disprove the ltl{} claims
we have made, then we must conclude that the claims are true - this
idea is formally called "Model Checking" [2] and borrows from rich
theoretical work that is way beyond my current scope of knowledge.

Conclusion:

I believe that spin is an excellent tool for applying these
rich theoretical ideas to a software methodology (that I've been calling
D3) in order to keep the BSD codebases high quality, easily
maintainable, and auto-documenting (via LTL claims).


Application:
If you look at arc/arc.inv you will find a '#if PROBLEM_INVARIANT'
clause that I lifted from the original ARC paper (Section III, A.3) -
it's a fairly strong invariant spec, which I was surprised to find fails
on the current model. There are a couple of potential reasons why this
is happening:

1) The assertion is logically untrue, and the authors got this wrong.

   This is unlikely, as ARC has been around long enough, and been
   applied hard enough, that an error would very likely have been
   discovered by now. But if not, then spin has found a serious
   inconsistency in the paper!

2) My implementation is incorrect:
   This is more likely the case, as I am new to specification in
   promela, and there are several likely points of error. It would be
   interesting to see how / what can be done to pinpoint where the bug
   is. I invite the community to attempt to find this logical bug.


Finally, regarding the "DISCOVER_STATEPATH" assertion: if you run
it long enough (my computing resources are insufficient to do this in a
meaningful time), the counter example provided should give a sequence of
inputs that can be used as a static input sequence to validate the
model without -D EXPLORE_STATESPACE. What this would do is bring in
very basic checking of a small subset of the model statespace, which can
be integrated into the testing/CI build infrastructure for continuous
coverage at low computational cost.

Final Note:
If you look at the fundamental source of search space complexity, it
will become apparent that the system is attempting to find a sequence of
inputs that can satisfy the invariants specified. See:
arc/arc.drv:init()

This search space is very large, as it involves the Kleene closure of
the set of all words that the model NDFA would match on - which is
theoretically infinite - but since we attempt to find the first counter
example using the model checking algorithm, we only look within a finite
matching language (my understanding - I may be wrong, since my
theoretical understanding of this stuff is very shaky).

What will become clear if I get help running this model on a larger,
more powerful machine, is how big the search space is (assuming the
program is unable to find a counter example to disprove the last clause
in the ltl{}).

In other words, please try the patch below, and let me know!

Many Thanks,


[1] https://spinroot.com/spin/Doc/Spin_tutorial_2004.pdf
[2] https://dl.acm.org/doi/10.1145/1592761.1592781

-- 
~cherry

diff -urN arc-null/arc.c arc/arc.c
--- arc-null/arc.c	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.c	2023-09-14 13:59:20.607107308 +0000
@@ -0,0 +1,173 @@
+/* C Implementation of the Adaptive Replacement Cache algorithm. Written by cherry */
+
+/*
+ * We implement the following algorithm from page 10, Figure 4.
+ * https://www.usenix.org/legacy/events/fast03/tech/full_papers/megiddo/megiddo.pdf
+ *
+ *
+ *  ARC(c)
+ *  
+ *  INPUT: The request stream x1,x2,....,xt,....
+ *  INITIALIZATION: Set p = 0 and set the LRU lists T1, B1, T2, and B2 to empty.
+ *  
+ *  For every t>=1 and any xt, one and only one of the following four cases must occur.
+ *  Case I: xt is in T1 or T2. A cache hit has occurred in ARC(c) and DBL(2c).
+ *       Move xt to MRU position in T2.
+ *  
+ *  Case II: xt is in B1. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = min { p + d1,c }
+ *  	 	     where d1 = { 1 if |B1| >= |B2|, |B2|/|B1| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B1 to the MRU position in T2 (also fetch xt to the cache).
+ *  
+ *  Case III: xt is in B2. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = max { p - d2,0 }
+ *  	 	     where d2 = { 1 if |B2| >= |B1|, |B1|/|B2| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B2 to the MRU position in T2 (also fetch xt to the cache).
+ *       
+ *  Case IV: xt is not in T1 U B1 U T2 U B2. A cache miss has occurred in ARC(c) and DBL(2c).
+ *       Case A: L1 = T1 U B1 has exactly c pages.
+ *       	  If (|T1| < c)
+ *  	     	     	Delete LRU page in B1. REPLACE(xt,p).
+ *  	  	  else
+ *			Here B1 is empty. Delete LRU page in T1 (also remove it from the cache).
+ *  	  	  endif
+ *       Case B: L1 = T1 U B1 has less than c pages.
+ *       	  If (|T1| + |T2| + |B1| + |B2| >= c)
+ *  	             Delete LRU page in B2, if (|T1| + |T2| + |B1| + |B2| = 2c).
+ *  		     REPLACE(xt, p).
+ *  	  	  endif
+ *  
+ *       Finally, fetch xt to the cache and move it to MRU position in T1.
+ *  
+ *  Subroutine REPLACE(xt,p)
+ *       If ( (|T1| is not empty) and ((|T1| exceeds the target p) or (xt is in B2 and |T1| = p)) )
+ *       	  Delete the LRU page in T1 (also remove it from the cache), and move it to MRU position in B1.
+ *       else
+ *		  Delete the LRU page in T2 (also remove it from the cache), and move it to MRU position in B2.
+ *       endif
+ */
+ 
+#include "arc_queue/arc.h"
+
+static void arc_list_init(struct arc_list *_arc_list)
+{
+	TAILQ_INIT(&_arc_list->qhead);
+	_arc_list->qcount = 0;
+	
+	int i;
+	for(i = 0; i < ARCLEN; i++) {
+		init_arc_item(&_arc_list->item_list[i], IID_INVAL, false);
+	};
+}
+
+int p, d1, d2;
+struct arc_list _B1, *B1 = &_B1, _B2, *B2 = &_B2, _T1, *T1 = &_T1, _T2, *T2 = &_T2;
+
+void arc_init(void)
+{
+	p = d1 = d2 = 0;
+
+	arc_list_init(B1);
+	arc_list_init(B2);
+	arc_list_init(T1);
+	arc_list_init(T2);
+}
+
+struct arc_item _LRUitem, *LRUitem = &_LRUitem;
+
+static void
+REPLACE(struct arc_item *x_t, int p)
+{
+
+
+	init_arc_item(LRUitem, IID_INVAL, false);
+
+	if ((lengthof(T1) != 0) &&
+	    ((lengthof(T1) >  p) ||
+	     (memberof(B2, x_t) && (lengthof(T1) == p)))) {
+		readLRU(T1, LRUitem);
+		delLRU(T1);
+		cacheremove(LRUitem);
+		addMRU(B1, LRUitem);
+	} else {
+		readLRU(T2, LRUitem);
+		delLRU(T2);
+		cacheremove(LRUitem);
+		addMRU(B2, LRUitem);
+	}
+}
+
+void
+ARC(struct arc_item *x_t)
+{
+	if (memberof(T1, x_t)) { /* Case I */
+		delitem(T1, x_t);
+		addMRU(T2, x_t);
+	}
+
+	if (memberof(T2, x_t)) { /* Case I */
+		delitem(T2, x_t);
+		addMRU(T2, x_t);
+	}
+
+	if (memberof(B1, x_t)) { /* Case II */
+		d1 = ((lengthof(B1) >= lengthof(B2)) ? 1 : (lengthof(B2)/lengthof(B1)));
+		p = min((p + d1), C);
+
+		REPLACE(x_t, p);
+
+		delitem(B1, x_t);
+		addMRU(T2, x_t);
+		cachefetch(x_t);
+	}
+
+	if (memberof(B2, x_t)) { /* Case III */
+		d2 = ((lengthof(B2) >= lengthof(B1)) ? 1 : (lengthof(B1)/lengthof(B2)));
+		p = max((p - d2), 0);
+
+		REPLACE(x_t, p);
+
+		delitem(B2, x_t);
+		addMRU(T2, x_t);
+		cachefetch(x_t);
+	}
+
+	if (!(memberof(T1, x_t) ||
+	      memberof(B1, x_t) ||
+	      memberof(T2, x_t) ||
+	      memberof(B2, x_t))) { /* Case IV */
+
+		if ((lengthof(T1) + lengthof(B1)) == C) { /* Case A */
+			if (lengthof(T1) < C) {
+				delLRU(B1);
+				REPLACE(x_t, p);
+			} else {
+				assert(lengthof(B1) == 0);
+				readLRU(T1, LRUitem);
+				delLRU(T1);
+				cacheremove(LRUitem);
+			}
+		}
+
+		if ((lengthof(T1) + lengthof(B1)) < C) {
+			if ((lengthof(T1) +
+			     lengthof(T2) +
+			     lengthof(B1) +
+			     lengthof(B2)) >= C) {
+				if ((lengthof(T1) +
+				     lengthof(T2) +
+				     lengthof(B1) +
+				     lengthof(B2)) == (2 * C)) {
+
+					delLRU(B2);
+				}
+				
+				REPLACE(x_t, p);
+			}
+		}
+		cachefetch(x_t);
+		addMRU(T1, x_t);
+	}
+}
diff -urN arc-null/arc.drv arc/arc.drv
--- arc-null/arc.drv	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.drv	2023-09-28 07:08:06.528728685 +0000
@@ -0,0 +1,148 @@
+/*
+ * Spin process model statespace driver for the
+ * Adaptive Replacement Cache algorithm.
+ * Written by "Mathew, Cherry G." <c%bow.st@localhost>
+ */
+
+/*
+ * Note: What we're attempting in this driver file, is to generate an
+ * input trace that would exercise all code-paths of the model specified
+ * in arc.pml
+ *
+ * Feeding a static trace to the algorithm in array _x[N_ITEMS] is an
+ * acceptable alternative. See -D DISCOVER_STATEPATH below.
+ */
+
+int _x_iid = 0; /* Input trace variable. A temporal record of this
+    	     	 * variable can serve as the input trace.
+		 */
+
+/*
+ * Explore as much of the program statespace as possible, in order to
+ * try to disprove LTL claims. (See: arc.inv for ltl{} section)
+ */
+
+#ifdef EXPLORE_STATESPACE
+
+#define DISCOVER_STATEPATH
+			   /*
+			    * Note: disabling this will force spin to
+ 			    * explore the *entire* state space. This
+			    * may or may not be bounded, and I'm not
+			    * sure if the question of a finite
+			    * boundary is a decidable problem.
+			    */
+
+
+/* Look for a good input trace ie; a specific linear sequence that
+ * _x_iid took (see below), which triggered disproving an invariant
+ * crafted for the purpose of exercising literally every single
+ * conditional path in the model. (See arc.inv)
+ *
+ * In spin parlance, this invariant is called a "never claim". We make
+ * this claim as specific as possible, in order to force spin to
+ * search as much of the statespace as possible to disprove it. What
+ * ensues in the process is twofold:
+ *
+ * 1) A program trace is discovered, where the never claim is
+ *     disproved.
+ * 2) In searching the statespace, a large number of other program
+ *    trace spaces are trialled, which effectively acts as verification
+ *    over the other invariants specified in LTL. (See: arc.inv)
+ *
+ * If we save the sequence of generated input values of _x_iid which
+ * led to disproving the never claim, and the claim itself were crafted
+ * prudently, then we can use this trace as a "static input" to
+ * ensure that the rest of the invariants hold, and this takes a
+ * fraction of the time it takes compared to the discovery
+ * effort for full verification. This can be thought of as closer to
+ * "unit testing" of the model, and can be used for a basic
+ * sanity-check in regular code builds, once the model, and its code
+ * implementation have matured enough.
+ *
+ * Note that this does not invalidate the need for actual unit testing
+ * of final code.
+ */
+
+#define X_MIN 0
+#define X_MAX (4 * C - 1) /* Four buffer lengths - start at 0 */
+
+#define N_ITEMS (X_MAX + 1) /* Number of distinct cache items to test with */
+
+#define REPEAT_MAX (X_MAX - X_MIN) /* How many times a repeat loop may go on for */
+hidden int _repeat = 0;
+
+hidden arc_item _x[N_ITEMS]; /* Input state is irrelevant from a verification PoV */
+
+init {
+     _x_iid = (X_MAX - X_MIN) / 2; /* Start halfway */
+
+     do
+     :: 
+     	  	init_arc_item(_x[_x_iid], _x_iid, false);
+		ARC(_x[_x_iid]);
+	   	if
+		:: (_x_iid > X_MIN) -> _x_iid--;
+		:: (_x_iid < X_MAX) -> _x_iid++;
+		:: (_repeat < REPEAT_MAX) -> _repeat++;
+		:: else break;
+		fi
+     od
+
+}
+
+#else		
+
+/*
+ * Not so prudent trace generator - this served as the first iteration
+ * while putting the basic model and invariants in place.
+ *
+ * Eventually, a static trace obtained from the statespace exploration
+ * described above can be used to drive the ARC() promela model, more
+ * like how a testing harness would be driven. This could be then in
+ * the default sanity-check path in CI build runs. This trace could
+ * also be used as input for testing the C-code implementation (See:
+ * arc_drv.c etc.
+ */
+
+#define N_ITEMS (N * C) /* Number of distinct cache items to test with */
+#define ITEM_REPS (C / 4) /* Max repeat item requests */
+#define N_ITERATIONS 3
+
+hidden arc_item _x[N_ITEMS]; /* Input state is irrelevant from a verification PoV */
+hidden int _item_rep = 0;
+hidden int _iterations = 0;
+
+/* Drive the procs */
+init {
+
+	atomic {
+	       do
+	       ::
+	       _iterations < N_ITERATIONS ->
+	       
+			_x_iid = 0;
+			do
+			:: _x_iid < N_ITEMS ->
+			   	   init_arc_item(_x[_x_iid], _x_iid, false);
+				   _item_rep = 0;
+				   do
+				   :: _item_rep < (_x_iid % ITEM_REPS) ->
+				      		ARC(_x[_x_iid]);
+						_item_rep++;
+				   :: _item_rep >= (_x_iid % ITEM_REPS) ->
+				      		break;
+				   od
+				   _x_iid++;
+			:: _x_iid >= N_ITEMS ->
+				break;
+			od
+			_iterations++;
+		::
+		_iterations >= N_ITERATIONS ->
+			    break;
+		od
+	}
+
+}
+#endif
\ No newline at end of file
diff -urN arc-null/arc_drv.c arc/arc_drv.c
--- arc-null/arc_drv.c	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_drv.c	2023-09-14 14:01:04.826170703 +0000
@@ -0,0 +1,35 @@
+/* See arc.drv for design details */
+
+#include "arc_queue/arc.h"
+
+#define N_ITEMS (2 * C) /* Number of distinct cache items to test with */
+#define ITEM_REPS (C / 4) /* Max repeat item requests */
+#define N_ITERATIONS 3
+
+static struct arc_item _x[N_ITEMS]; /* Input state is irrelevant from a verification PoV */
+static int _x_iid = 0;
+static int _item_rep = 0;
+static int _iterations = 0;
+
+/* Drive ARC() with a preset input trace */
+
+void
+main(void)
+{
+	arc_init(); /* Init module state */
+
+	while (_iterations < N_ITERATIONS) {
+		_x_iid = 0;
+		while (_x_iid < N_ITEMS) {
+			init_arc_item(&_x[_x_iid], _x_iid, false);
+			_item_rep = 0;
+			while(_item_rep < (_x_iid % ITEM_REPS) ) {
+				ARC(&_x[_x_iid]);
+				_item_rep++;
+			} 
+			_x_iid++;
+		}
+		_iterations++;
+	}
+}
+
diff -urN arc-null/arc.inv arc/arc.inv
--- arc-null/arc.inv	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.inv	2023-09-28 07:09:08.515935227 +0000
@@ -0,0 +1,65 @@
+/* $NetBSD$ */
+
+/* These are Linear Temporal Logic invariants (and constraints)
+ * applied over the statespace created by the promela
+ * specification. Correctness is implied by Logical consistency.
+ */
+ltl 
+{
+	/* c.f Section I. B, on page 3 of paper */
+	always ((lengthof(T1) +
+	         lengthof(B1) +
+	         lengthof(T2) +
+	         lengthof(B2)) <= (2 * C)) 
+
+	/* Reading together Section III. A., on page 7, and
+	 * Section III. B., on pages  7,8
+	 */
+	&& always ((lengthof(T1) + lengthof(B1)) <= C)
+	&& always ((lengthof(T2) + lengthof(B2)) <= (2 * C))
+
+	/* Section III. B, Remark III.1	*/
+	&& always ((lengthof(T1) + lengthof(T2)) <= C)
+
+	/* TODO: III B, A.1 */
+
+	/* III B, A.2 */
+	&& always (((lengthof(T1) +
+	          lengthof(B1) +
+	          lengthof(T2) +
+	          lengthof(B2)) < C)
+		 implies ((lengthof(B1) == 0) &&
+			   lengthof(B2) == 0))
+#if PROBLEM_INVARIANT
+	/* III B, A.3 */
+	&& always (((lengthof(T1) +
+	          lengthof(B1) +
+	          lengthof(T2) +
+	          lengthof(B2)) >= C)
+		 implies ((lengthof(T1) +
+		 	   lengthof(T2)) == C))
+#endif
+	/* TODO: III B, A.4 */
+
+	/* TODO: III B, A.5 */
+
+	/* IV A. */
+	&& always (p <= C)
+
+#ifdef DISCOVER_STATEPATH /* See arc.drv */
+	/*
+	 * Force spin to generate a "good" input trace (See: arc.drv)
+	 * The handwavy reasoning here is that an absolutely full ARC
+	 * would have had to exercise all codepaths to get there.
+	 */
+	&& always !(true /* Syntactic glue */
+	   	    && lengthof(T1) == C
+#if 1 /* Disable for easy demo. Enable to search harder */
+	            && lengthof(B1) == C
+	            && lengthof(T2) == C
+#endif		    
+	            && lengthof(B2) == C
+		   )
+
+#endif
+}
\ No newline at end of file
diff -urN arc-null/arc.pml arc/arc.pml
--- arc-null/arc.pml	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.pml	2023-09-28 06:47:31.494741428 +0000
@@ -0,0 +1,212 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+/*
+ * We implement the following algorithm from page 10, Figure 4.
+ * https://www.usenix.org/legacy/events/fast03/tech/full_papers/megiddo/megiddo.pdf
+ *
+ *
+ *  ARC(c)
+ *  
+ *  INPUT: The request stream x1,x2,....,xt,....
+ *  INITIALIZATION: Set p = 0 and set the LRU lists T1, B1, T2, and B2 to empty.
+ *  
+ *  For every t>=1 and any xt, one and only one of the following four cases must occur.
+ *  Case I: xt is in T1 or T2. A cache hit has occurred in ARC(c) and DBL(2c).
+ *       Move xt to MRU position in T2.
+ *  
+ *  Case II: xt is in B1. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = min { p + d1,c }
+ *  	 	     where d1 = { 1 if |B1| >= |B2|, |B2|/|B1| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B1 to the MRU position in T2 (also fetch xt to the cache).
+ *  
+ *  Case III: xt is in B2. A cache miss (resp. hit) has occurred in ARC(c) (resp. DBL(2c)).
+ *       	 ADAPTATION: Update p = max { p - d2,0 }
+ *  	 	     where d2 = { 1 if |B2| >= |B1|, |B1|/|B2| otherwise
+ *  
+ *       REPLACE(xt, p). Move xt from B2 to the MRU position in T2 (also fetch xt to the cache).
+ *       
+ *  Case IV: xt is not in T1 U B1 U T2 U B2. A cache miss has occurred in ARC(c) and DBL(2c).
+ *       Case A: L1 = T1 U B1 has exactly c pages.
+ *       	  If (|T1| < c)
+ *  	     	     	Delete LRU page in B1. REPLACE(xt,p).
+ *  	  	  else
+ *			Here B1 is empty. Delete LRU page in T1 (also remove it from the cache).
+ *  	  	  endif
+ *       Case B: L1 = T1 U B1 has less than c pages.
+ *       	  If (|T1| + |T2| + |B1| + |B2| >= c)
+ *  	             Delete LRU page in B2, if (|T1| + |T2| + |B1| + |B2| = 2c).
+ *  		     REPLACE(xt, p).
+ *  	  	  endif
+ *  
+ *       Finally, fetch xt to the cache and move it to MRU position in T1.
+ *  
+ *  Subroutine REPLACE(xt,p)
+ *       If ( (|T1| is not empty) and ((|T1| exceeds the target p) or (xt is in B2 and |T1| = p)) )
+ *       	  Delete the LRU page in T1 (also remove it from the cache), and move it to MRU position in B1.
+ *       else
+ *		  Delete the LRU page in T2 (also remove it from the cache), and move it to MRU position in B2.
+ *       endif
+ */
+ 
+/* Temp variable to hold LRU item */
+arc_item LRUitem;
+
+/* Adaptation "delta" variables */
+hidden int d1, d2;
+int p = 0;
+
+/* Declare arc lists - "shadow/ghost cache directories" */
+arc_list T1, T2, B1, B2;
+
+inline REPLACE(/* arc_item */ x_t, /* int */ p)
+{
+	/*
+	 * Since LRUitem is declared in scope p_ARC, we expect it to be only accessible from there and REPLACE()
+	 * as REPLACE() is only expected to be called from p_ARC.
+	 * XXX: May need to revisit due to Modex related limitations.
+	 */
+	init_arc_item(LRUitem, IID_INVAL, false);
+	
+	if
+		::
+		(lengthof(T1) != 0) &&
+		((lengthof(T1) > p) || (memberof(B2, x_t) && (lengthof(T1) == p)))
+		->
+		{
+		       readLRU(T1, LRUitem);
+		       delLRU(T1);
+		       cacheremove(LRUitem);
+		       addMRU(B1, LRUitem);
+		}
+
+		::
+		else
+		->
+		{
+		       readLRU(T2, LRUitem);
+		       delLRU(T2);
+		       cacheremove(LRUitem);
+		       addMRU(B2, LRUitem);
+		}
+	fi
+}
+
+inline ARC(/* arc_item */ x_t)
+{
+	if
+		:: /* Case I */
+		memberof(T1, x_t)
+		->
+		{
+		       delitem(T1, x_t);
+		       addMRU(T2, x_t);
+		}
+		:: /* Case I */
+		memberof(T2, x_t)
+		->
+		{
+		       delitem(T2, x_t);
+		       addMRU(T2, x_t);
+		}
+		:: /* Case II */
+		memberof(B1, x_t)
+		->
+		d1 = ((lengthof(B1) >= lengthof(B2)) -> 1 : (lengthof(B2)/lengthof(B1)));
+		p = min((p + d1), C);
+
+		REPLACE(x_t, p);
+		{
+		       delitem(B1, x_t);
+		       addMRU(T2, x_t);
+		       cachefetch(x_t);
+		}
+		:: /* Case III */
+		memberof(B2, x_t)
+		->
+		d2 = ((lengthof(B2) >= lengthof(B1)) -> 1 : (lengthof(B1)/lengthof(B2)));
+		p = max(p - d2, 0);
+		
+		REPLACE(x_t, p);
+		{
+		       delitem(B2, x_t);
+		       addMRU(T2, x_t);
+		       cachefetch(x_t);
+		}
+		:: /* Case IV */
+		!(memberof(T1, x_t) ||
+		  memberof(B1, x_t) ||
+		  memberof(T2, x_t) ||
+		  memberof(B2, x_t))
+		->
+		if
+			:: /* Case A */
+			((lengthof(T1) + lengthof(B1)) == C)
+			->
+			if
+				::
+				(lengthof(T1) < C)
+				->
+				delLRU(B1);
+				REPLACE(x_t, p);
+				::
+				else
+				->
+				assert(lengthof(B1) == 0);
+				{
+				       readLRU(T1, LRUitem);
+				       delLRU(T1);
+				       cacheremove(LRUitem);
+				}
+			fi
+			:: /* Case B */
+			((lengthof(T1) + lengthof(B1)) < C)
+			->
+			if
+				::
+				((lengthof(T1) +
+				  lengthof(T2) +
+				  lengthof(B1) +
+				  lengthof(B2)) >= C)
+				->
+				if
+					::
+					((lengthof(T1) +
+				  	  lengthof(T2) +
+				  	  lengthof(B1) +
+				  	  lengthof(B2)) == (2 * C))
+					->
+					delLRU(B2);
+					::
+					else
+					->
+					skip;
+				fi
+				REPLACE(x_t, p);
+				::
+				else
+				->
+				skip;
+			fi
+			::
+			else
+			->
+			skip;
+		fi
+		cachefetch(x_t);
+		addMRU(T1, x_t);
+	fi
+
+}
+
+#if 0 /* Resolve this after modex extract foo */
+proctype p_arc(arc_item x_t)
+{
+	/* Serialise entry */	
+	mutex_enter(sc_lock);
+
+	ARC(x_t);
+
+	mutex_exit(sc_lock);
+}
+#endif
diff -urN arc-null/arc.prx arc/arc.prx
--- arc-null/arc.prx	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc.prx	2023-09-14 07:06:38.900036315 +0000
@@ -0,0 +1,80 @@
+// Spin model extractor harness written by cherry
+//
+%F arc.c
+%X -n REPLACE
+%X -n ARC
+%H
+// Disable effects of all included files and try to implement a subset of the APIs they provide.
+#define _ARC_H_
+%%
+//%C  // c_code {}
+//%%
+//%D // c_cdecl {}
+//%%
+%L
+// We use spin primitives and data objects.
+// See %P Below
+NonState	hidden	_LRUitem
+NonState	hidden	LRUitem
+NonState	hidden	_B2
+NonState	hidden	B2
+NonState	hidden	_B1
+NonState	hidden	B1
+NonState	hidden	_T2
+NonState	hidden	T2
+NonState	hidden	_T1
+NonState	hidden	T1
+NonState	hidden	x_t
+
+
+
+assert(...		keep
+REPLACE(...		keep
+init_arc_item(...	keep
+lengthof(...		keep
+memberof(...		keep
+addMRU(...		keep
+readLRU(...		keep
+delLRU(...		keep
+delitem(...		keep
+cacheremove(...		keep
+cachefetch(...		keep
+
+
+Substitute		c_expr { ((lengthof(T1)!=0)&&((lengthof(T1)>now.p)||(memberof(B2,x_t)&&(lengthof(T1)==now.p)))) }	(lengthof(T1) != 0) && ((lengthof(T1) > p) || (memberof(B2, x_t) && (lengthof(T1) == p)))
+Substitute		c_code { now.d1=((lengthof(B1)>=lengthof(B2)) ? Int 1\n : (lengthof(B2)/lengthof(B1))); }	d1 = ((lengthof(B1) >= lengthof(B2)) -> 1 : (lengthof(B2)/lengthof(B1)))
+Substitute		c_code { now.p=min((now.p+now.d1),C); }	 p = min((p + d1), C)
+
+Substitute		c_code { now.d2=((lengthof(B2)>=lengthof(B1)) ? Int 1\n : (lengthof(B1)/lengthof(B2))); }	d2 = ((lengthof(B2) >= lengthof(B1)) -> 1 : (lengthof(B1)/lengthof(B2)));
+Substitute		c_code { now.p=max((now.p-now.d2),0); }		      	  p = max(p - d2, 0);
+Substitute		c_expr { (!(((memberof(T1,x_t)||memberof(B1,x_t))||memberof(T2,x_t))||memberof(B2,x_t))) }			!(memberof(T1, x_t) || memberof(B1, x_t) || memberof(T2, x_t) ||  memberof(B2, x_t))
+Substitute		c_expr { ((lengthof(T1)+lengthof(B1))==C) }	((lengthof(T1) + lengthof(B1)) == C)
+Substitute		c_expr { (lengthof(T1)<C) }	(lengthof(T1) < C)
+Substitute		c_expr { ((lengthof(T1)+lengthof(B1))<C) }	((lengthof(T1) + lengthof(B1)) < C)
+Substitute		c_expr { ((((lengthof(T1)+lengthof(T2))+lengthof(B1))+lengthof(B2))>=C) }	((lengthof(T1) + lengthof(T2) + lengthof(B1) + lengthof(B2)) >= C)
+Substitute		c_expr { ((((lengthof(T1)+lengthof(T2))+lengthof(B1))+lengthof(B2))==(2*C)) }	((lengthof(T1) + lengthof(T2) + lengthof(B1) + lengthof(B2)) == (2 * C))
+%%
+
+%P
+
+/* Temp variable to hold LRU item */
+hidden arc_item LRUitem;
+
+arc_list B1, B2, T1, T2;
+
+#define p_REPLACE(_arg1, _arg2) REPLACE(_arg1, _arg2) /* Demo arbitrary Cfunc->PMLproc transformation */
+inline p_REPLACE(/* arc_item */ x_t, /* int */ p)
+{
+
+#include "_modex_REPLACE.pml"
+
+}
+
+#define p_ARC(_arg1) ARC(_arg1)
+inline p_ARC(/* arc_item */ x_t)
+{
+
+#include "_modex_ARC.pml"
+
+}
+%%
\ No newline at end of file
diff -urN arc-null/arc_queue/arc.h arc/arc_queue/arc.h
--- arc-null/arc_queue/arc.h	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc.h	2023-09-26 15:04:45.316426496 +0000
@@ -0,0 +1,170 @@
+/*
+ * The objective of the header here is to provide a set of macros that
+ * reflect the interfaces designed in arc.pmh
+ */
+
+#ifndef _ARC_H_
+#define _ARC_H_
+
+#ifdef MODEX
+/* Glue for model extraction run */
+#else
+/* Defaults to POSIX */
+#include <assert.h>
+#include <stddef.h>
+#include <stdbool.h>
+#endif
+
+#include "queue.h" /* We use the NetBSD version as it has no
+		    * dependencies (except for -DNULL). */
+
+#define C 64
+
+#define ARCLEN (2 * C) /* c.f ghost cache directory length constraints in arc.inv */
+
+#define IID_INVAL -1
+
+struct arc_item {
+	TAILQ_ENTRY(arc_item) qlink;	
+	int iid;	/* Unique identifier for item */
+	bool cached;
+};
+
+struct arc_list {
+	TAILQ_HEAD(arc_qhead, arc_item) qhead;
+	int qcount;
+	struct arc_item item_list[ARCLEN]; /* We use static memory for demo purposes */
+};
+
+inline static struct arc_item * allocmember(struct arc_list *);
+inline static void freemember(struct arc_item *);
+inline static struct arc_item * findmember(struct arc_list *, struct arc_item *);
+
+#define init_arc_item(/* &struct arc_item [] */ _arc_item_addr,			\
+		      /* int */_iid, /*bool*/_cached)	do {			\
+		struct arc_item *_arc_item = _arc_item_addr;			\
+		assert(_arc_item != NULL);				\
+		_arc_item->iid = _iid;						\
+		_arc_item->cached = _cached;					\
+	} while (/*CONSTCOND*/0)
+
+#define lengthof(/* struct arc_list* */_arc_list) (_arc_list->qcount)
+#define memberof(/* struct arc_list* */_arc_list,				\
+		 /* struct arc_item* */_arc_item)				\
+	((findmember(_arc_list,							\
+		     _arc_item) != TAILQ_END(&_arc_list->qhead)) ?		\
+	 true : false)
+
+/*
+ * We follow spin's channel rx/tx semantics here: "send" means
+ * duplicate onto the queue ("_arc_list.item_list!_arc_item.iid"), and
+ * "receive" means duplicate from the queue but leave the source data
+ * on the queue ("_arc_list.item_list?<_arc_item.iid>").
+ *
+ * It is an error to addMRU() on a full queue. Likewise, it is an
+ * error to readLRU() on an empty queue. The verifier is expected to
+ * have covered any case where these happen. We use assert()s to
+ * indicate the error.
+ *
+ * Note: We use spin's channel mechanism in our design, only because
+ * it's the easiest. We could have chosen another
+ * mechanism/implementation, if the semantics were specified
+ * differently due to, e.g., convention, architectural or efficiency
+ * reasons.
+ */
+#define addMRU(/* struct arc_list* */_arc_list,					\
+	       /* struct arc_item* */_arc_item) do {				\
+		assert(_arc_list->qcount < ARCLEN);				\
+		struct arc_item *aitmp; aitmp = allocmember(_arc_list);		\
+		assert(aitmp != NULL);						\
+		*aitmp = *_arc_item;						\
+		TAILQ_INSERT_TAIL(&_arc_list->qhead, aitmp, qlink);		\
+		_arc_list->qcount++;						\
+	} while (/*CONSTCOND*/0)
+
+#define readLRU(/* struct arc_list* */_arc_list,				\
+		/* struct arc_item* */_arc_item) do {				\
+		assert(!TAILQ_EMPTY(&_arc_list->qhead));			\
+		assert(_arc_item != NULL);					\
+		*_arc_item = *(struct arc_item *)TAILQ_FIRST(&_arc_list->qhead);\
+	} while (/*CONSTCOND*/0)
+		
+#define delLRU(/* struct arc_list* */_arc_list)					\
+	if (!TAILQ_EMPTY(&_arc_list->qhead)) {					\
+		struct arc_item *aitmp; aitmp = TAILQ_FIRST(&_arc_list->qhead); \
+		TAILQ_REMOVE(&_arc_list->qhead, aitmp, qlink);			\
+		freemember(aitmp);						\
+		_arc_list->qcount--; assert(_arc_list->qcount >= 0);		\
+	} else assert(false)
+
+#define delitem(/* struct arc_list* */_arc_list,				\
+		/* struct arc_item* */_arc_item) do {				\
+	struct arc_item *aitmp;							\
+	aitmp = findmember(_arc_list, _arc_item);				\
+	if (aitmp != TAILQ_END(&_arc_list->qhead)) {				\
+		TAILQ_REMOVE(&_arc_list->qhead, aitmp, qlink);			\
+		freemember(aitmp);						\
+		_arc_list->qcount--; assert(_arc_list->qcount >= 0);		\
+	}									\
+	} while (/*CONSTCOND*/0)
+
+#define cachefetch(/* struct arc_item* */_arc_item) do {			\
+		_arc_item->cached = true; /* XXX:TODO */			\
+	} while (/*CONSTCOND*/0)
+
+#define cacheremove(/* struct arc_item* */_arc_item)  do {			\
+		_arc_item->cached = false;	/* XXX:TODO */			\
+	} while (/*CONSTCOND*/0)
+
+#define min(a, b) (((a) < (b)) ? (a) : (b))
+#define max(a, b) (((a) > (b)) ? (a) : (b))
+	
+/* These routines deal with our home-rolled mem management for the
+ * ghost cache directory memory embedded within statically defined
+ * struct arc_list buffers.
+ * Note that any pointers emerging from these should be treated as
+ * "opaque"/cookies - ie; they should not be assumed by other routines
+ * to have any specific properties (such as being part of any specific
+ * array etc.) They are solely for the consumption of these
+ * routines. Their contents however may be freely copied/written to.
+ */
+inline static struct arc_item *
+allocmember(struct arc_list *_arc_list)
+{
+	/* Search for the first unallocated item in given list */
+	struct arc_item *aitmp = NULL;
+	int i;
+	for (i = 0; i < ARCLEN; i++) {
+		if (_arc_list->item_list[i].iid == IID_INVAL) {
+			assert(_arc_list->item_list[i].cached == false);
+			aitmp = &_arc_list->item_list[i];
+			break;
+		}
+	}
+	return aitmp;
+}
+	
+inline static void
+freemember(struct arc_item *aip)
+{
+	assert(aip != NULL);
+	init_arc_item(aip, IID_INVAL, false);
+}	
+
+static inline struct arc_item *
+findmember(struct arc_list *_arc_list, struct arc_item *aikey)
+{
+	assert(_arc_list != NULL && aikey != NULL);
+	assert(aikey->iid != IID_INVAL);
+	struct arc_item *aitmp;
+	TAILQ_FOREACH(aitmp, &_arc_list->qhead, qlink) {
+			if (aitmp->iid == aikey->iid) {
+				return aitmp;
+			}
+	}
+	return aitmp; /* returns TAILQ_END() on non-membership */
+}
+
+void ARC(struct arc_item * /* x_t */);
+void arc_init(void);
+
+#endif /* _ARC_H_ */
diff -urN arc-null/arc_queue/arc.pmh arc/arc_queue/arc.pmh
--- arc-null/arc_queue/arc.pmh	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc.pmh	2023-09-28 05:13:29.106914626 +0000
@@ -0,0 +1,48 @@
+/* Spin process model for Adaptive Replacement Cache algorithm. Written by cherry */
+
+#ifndef _ARC_INC
+#define _ARC_INC
+
+#define EXPLORE_STATESPACE /* XXX: -D via Makefile ? */
+
+#ifdef EXPLORE_STATESPACE
+#define C 5 /* Cache size - use judiciously - adds to statespace */
+#else
+#define C 64 /* Static input run - we can use default size */
+#endif
+
+#define ARCLEN C
+
+#define IID_INVAL -1
+
+typedef arc_item {
+	int iid;    /* Unique identifier for item */
+	bool cached;
+};
+
+/* Note that we use the arc_item.iid as the member lookup handle to reduce state space */
+typedef arc_list {
+	chan item_list  = [ ARCLEN ] of { int }; /* A list of page items */
+};
+
+
+#define init_arc_item(_arc_item, _iid, _cached)		\
+	{		       			\
+		_arc_item.iid = _iid;	       		\
+		_arc_item.cached = _cached;		\
+	}
+
+#define lengthof(_arc_list) len(_arc_list.item_list)
+#define memberof(_arc_list, _arc_item) _arc_list.item_list??[eval(_arc_item.iid)]
+#define addMRU(_arc_list, _arc_item) _arc_list.item_list!_arc_item.iid
+#define readLRU(_arc_list, _arc_item) _arc_list.item_list?<_arc_item.iid>
+#define delLRU(_arc_list) _arc_list.item_list?_
+#define delitem(_arc_list, _arc_item) if :: lengthof(_arc_list) > 0; _arc_list.item_list??eval(_arc_item.iid) :: else; skip; fi
+
+#define cachefetch(_arc_item) _arc_item.cached = true
+#define cacheremove(_arc_item) _arc_item.cached = false
+
+#define min(a, b) (((a) < (b)) -> (a) : (b))
+#define max(a, b) (((a) > (b)) -> (a) : (b))
+	
+#endif /* _ARC_INC_ */
\ No newline at end of file
diff -urN arc-null/arc_queue/arc_queue.c arc/arc_queue/arc_queue.c
--- arc-null/arc_queue/arc_queue.c	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.c	2023-09-11 11:38:49.468321594 +0000
@@ -0,0 +1,26 @@
+/* Mostly to pull in macros into functions, so that modex can parse them */
+
+#include "arc.h"
+
+void arc_addMRU(struct arc_list *Q,
+		struct arc_item *I)
+{
+	addMRU(Q, I);
+}
+
+void arc_readLRU(struct arc_list *Q,
+		 struct arc_item *I)
+{
+	readLRU(Q, I);
+}
+
+void arc_delLRU(struct arc_list *Q)
+{
+	delLRU(Q);
+}
+
+void arc_delitem(struct arc_list *Q,
+		 struct arc_item *I)
+{
+	delitem(Q, I);
+}
diff -urN arc-null/arc_queue/arc_queue.drv arc/arc_queue/arc_queue.drv
--- arc-null/arc_queue/arc_queue.drv	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.drv	2023-09-14 14:09:51.189956593 +0000
@@ -0,0 +1,43 @@
+/* Drive the procs */
+
+arc_item _x;
+
+init {
+
+	atomic { /* Load up Q */
+		I.iid = 0;
+		do
+		:: I.iid < ARCLEN ->
+		   	  p_arc_addMRU( /* Q, I */ );
+			   I.iid++;
+		:: I.iid >= ARCLEN ->
+			break;
+		od
+	}
+
+	_x.iid = ARCLEN;
+
+	atomic { /* Read and remove from head */
+	       do
+	       :: _x.iid > (ARCLEN/2) ->
+	       		_x.iid--;
+	       	  	p_arc_readLRU( /* Q, I */ );
+			assert(I.iid == (ARCLEN - (_x.iid + 1)));
+			p_arc_delLRU( /* Q */);
+	       :: _x.iid <= (ARCLEN/2) ->
+	       	  	break;
+	       od
+	}
+
+	atomic { /* Remove from tail */
+	       do
+	       :: _x.iid >= 0 -> /* delitem() semantics allow attempt on empty list */
+	       		_x.iid--;
+			I.iid = _x.iid + ARCLEN/2;
+			p_arc_delitem( /* Q, I */);
+	       :: _x.iid < 0 ->
+	       	  	break;
+	       od
+	}
+
+}
diff -urN arc-null/arc_queue/arc_queue_drv.c arc/arc_queue/arc_queue_drv.c
--- arc-null/arc_queue/arc_queue_drv.c	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue_drv.c	2023-09-13 10:04:05.819212718 +0000
@@ -0,0 +1,52 @@
+#include "arc.h"
+#include "arc_queue.h"
+
+#include <stdio.h>
+
+static void arc_list_init(struct arc_list *_arc_list)
+{
+	TAILQ_INIT(&_arc_list->qhead);
+	_arc_list->qcount = 0;
+	
+	int i;
+	for(i = 0; i < ARCLEN; i++) {
+		init_arc_item(&_arc_list->item_list[i], IID_INVAL, false);
+	};
+}
+
+int main(void)
+{
+	struct arc_list Q;
+	struct arc_item I, _x;
+
+	arc_list_init(&Q);
+
+	I.iid = 0;
+
+	do {
+		printf("addMRU(): I.iid == %d\n", I.iid);
+		arc_addMRU(&Q, &I);
+		I.iid++;
+	} while(I.iid < ARCLEN);
+
+	_x.iid = ARCLEN;
+
+	do {
+		_x.iid--;
+		arc_readLRU(&Q, &I);
+		printf("readLRU(): I.iid == %d, _x.iid == %d\n", I.iid, _x.iid);		
+		assert(I.iid == (ARCLEN - (_x.iid + 1)));
+		arc_delLRU(&Q);
+	} while(_x.iid > (ARCLEN/2));
+
+
+	do { /* Remove from tail */
+		_x.iid--;
+		I.iid = _x.iid + ARCLEN/2;
+		arc_delitem( &Q, &I);
+		printf("delitem(): I.iid == %d, _x.iid == %d\n", I.iid, _x.iid);				
+	} while(_x.iid >= 0); /* delitem() semantics allow attempt on empty list */
+
+}
+
+
diff -urN arc-null/arc_queue/arc_queue.h arc/arc_queue/arc_queue.h
--- arc-null/arc_queue/arc_queue.h	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.h	2023-09-11 09:23:01.999474035 +0000
@@ -0,0 +1,16 @@
+#ifndef _ARC_QUEUE_H_
+#define _ARC_QUEUE_H_
+
+void arc_lengthof(struct arc_list *);
+
+void arc_memberof(struct arc_list *, struct arc_item *);
+
+void arc_addMRU(struct arc_list *, struct arc_item *);
+
+void arc_readLRU(struct arc_list *, struct arc_item *);
+
+void arc_delLRU(struct arc_list *);
+
+void arc_delitem(struct arc_list *, struct arc_item *);
+
+#endif /* _ARC_QUEUE_H_ */
diff -urN arc-null/arc_queue/arc_queue.inv arc/arc_queue/arc_queue.inv
--- arc-null/arc_queue/arc_queue.inv	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.inv	2023-09-14 08:51:59.057339095 +0000
@@ -0,0 +1,17 @@
+/* These are Linear Temporal Logic invariants (and constraints)
+ * applied over the statespace created by the promela
+ * specification. Correctness is implied by Logical consistency.
+ */
+ltl 
+{
+	/* Liveness - all threads, except control must finally exit */
+	eventually always (_nr_pr == 1) && 
+
+	eventually (len(Q.item_list) == ARCLEN) && /* We fill up Q first */
+
+	eventually always (len(Q.item_list) == 0) && /* We drain the Q in the end */
+	
+	true
+	
+
+}
\ No newline at end of file
diff -urN arc-null/arc_queue/arc_queue.pmh arc/arc_queue/arc_queue.pmh
--- arc-null/arc_queue/arc_queue.pmh	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.pmh	2023-09-13 08:02:28.647475795 +0000
@@ -0,0 +1,23 @@
+#define C 64
+
+#define ARCLEN (2 * C)
+
+#define IID_INVAL -1
+
+typedef arc_item {
+	int iid;
+}	
+
+/* Note that we use the arc_item.iid as the member lookup handle to reduce state space */
+typedef arc_list {
+	chan item_list  = [ ARCLEN ] of { int }; /* A list of page items */
+};
+
+#define TAILQ_INSERT_TAIL(_qh, _var, _ent) _qh ! _var.iid
+#define TAILQ_EMPTY(_qh) (len(_qh) == 0)
+#define TAILQ_REMOVE(_qh, _var, _ent) _qh ?? eval(_var.iid)
+#define TAILQ_FIRST(_qh, _var) _qh ? <_var.iid>
+#define TAILQ_END(_qh) IID_INVAL
+#define allocmember(_arc_list, _aitmp) skip; _aitmp.iid = IID_INVAL
+#define freemember(_arc_item) _arc_item.iid = IID_INVAL
+#define findmember(_arc_list, _arc_item) (TAILQ_EMPTY(_arc_list.item_list) -> TAILQ_END(_arc_list.item_list) : (_arc_list.item_list ?? [eval(_arc_item.iid)] -> _arc_item.iid : IID_INVAL))
diff -urN arc-null/arc_queue/arc_queue.pml arc/arc_queue/arc_queue.pml
--- arc-null/arc_queue/arc_queue.pml	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.pml	2023-09-14 07:01:26.549121004 +0000
@@ -0,0 +1,29 @@
+/* This model fronts the "equivalence" model which is exported.
+ * The idea here is to drive it identically, in such a way that
+ * the invariants hold for both the extracted and the
+ * hand-crafted model.
+ */
+
+int qcount;
+arc_item I;
+arc_list Q;
+
+inline p_arc_delitem()
+{
+	delitem(Q, I);
+}
+
+inline p_arc_delLRU()
+{
+	delLRU(Q);
+}
+
+inline p_arc_readLRU()
+{
+	readLRU(Q, I);
+}
+
+inline p_arc_addMRU()
+{
+	addMRU(Q, I);
+}
\ No newline at end of file
diff -urN arc-null/arc_queue/arc_queue.prx arc/arc_queue/arc_queue.prx
--- arc-null/arc_queue/arc_queue.prx	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/arc_queue.prx	2023-09-13 07:51:32.144705226 +0000
@@ -0,0 +1,51 @@
+// Spin model extractor harness written by cherry
+//
+%F arc_queue.c
+%X -i arc_addMRU
+%X -i arc_readLRU
+%X -i arc_delLRU
+%X -i arc_delitem
+%H
+// arc.h glue
+#define bool int
+#define false 0
+#define _SYS_QUEUE_H_ /* Don't expand queue.h macros, as we model them */
+
+%%
+//%C  // c_code {}
+//%%
+//%D // c_cdecl {}
+//%%
+%L
+// We use spin primitives and data objects.
+// See %P Below
+NonState	hidden	Q
+NonState	hidden	I
+NonState	hidden	aitmp
+
+
+Substitute		c_code [Q] { Q->qcount--; }	 qcount--
+Substitute		c_code [Q] { Q->qcount++; }	 qcount++
+Substitute		c_code [(Q->qcount>=0)] { ; }	 assert(qcount >= 0)
+Substitute		c_code { aitmp=findmember(Q,I); }	aitmp.iid=findmember(Q, I)
+Substitute		c_expr { (aitmp!=TAILQ_END((&(Q->qhead)))) }	(aitmp.iid != TAILQ_END(Q.item_list))
+Substitute		c_code [Q] { TAILQ_REMOVE((&(Q->qhead)),aitmp,qlink); }	TAILQ_REMOVE(Q.item_list, aitmp, _)
+Substitute		c_code { freemember(aitmp); }	freemember(aitmp)
+Substitute		c_expr { (!TAILQ_EMPTY((&(Q->qhead)))) }	(!TAILQ_EMPTY(Q.item_list))
+Substitute		c_code [(!TAILQ_EMPTY((&(Q->qhead))))] { ; }	assert((!TAILQ_EMPTY(Q.item_list)))
+Substitute		c_code [(I!=NULL)] { ; }	assert(I.iid != IID_INVAL)
+Substitute		c_code [Q && (struct arc_item *)TAILQ_FIRST((&(Q->qhead))) && I] { (*I)=(*((struct arc_item *)TAILQ_FIRST((&(Q->qhead))))); }	TAILQ_FIRST(Q.item_list, I)
+Substitute		c_code [(Q->qcount<(2*64))] { ; }	assert(qcount < ARCLEN)
+Substitute		c_code [(aitmp!=NULL)] { ; }	assert(aitmp.iid == IID_INVAL)
+Substitute		c_code [I && aitmp] { (*aitmp)=(*I); }	aitmp.iid = I.iid
+Substitute		c_code [Q] { TAILQ_INSERT_TAIL((&(Q->qhead)),aitmp,qlink); }	TAILQ_INSERT_TAIL(Q.item_list, aitmp, _); aitmp.iid = IID_INVAL
+Substitute		c_code { aitmp=allocmember(Q); }	allocmember(Q.item_list, aitmp)
+Substitute		c_code [Q] { aitmp=TAILQ_FIRST((&(Q->qhead))); }	TAILQ_FIRST(Q.item_list, aitmp)
+%%
+
+%P
+int qcount;
+hidden arc_item aitmp;
+arc_item I;
+arc_list Q;
+%%
\ No newline at end of file
diff -urN arc-null/arc_queue/Makefile arc/arc_queue/Makefile
--- arc-null/arc_queue/Makefile	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/Makefile	2023-09-14 14:07:27.103171180 +0000
@@ -0,0 +1,62 @@
+# Equivalence verification
+# We attempt to verify that the arc queue implementation in C is consistent with its model.
+# Note that the simplified model is in arc_queue/arc.pmh and the C
+# implementation equivalent model is in arc_queue/arc_queue.pmh
+#
+# Thus, spin-gen: uses arc.pmh as the model interface, whereas
+# modex-gen: uses arc_queue.pmh
+
+spin-gen: arc_queue.pml arc_queue.drv arc_queue.inv
+	cp arc_queue.pml model #mimic modex
+	cat arc.pmh model > spinmodel.pml;cat arc_queue.drv >> spinmodel.pml;cat arc_queue.inv >> spinmodel.pml;
+	spin -am spinmodel.pml
+
+spin-build: #Could be spin-gen or modex-gen
+	cc -DVECTORSZ=65536 -o pan pan.c
+
+all: spin-gen spin-build prog
+
+# Verification related targets.
+spin-run: spin-build
+	./pan -a #Generate spinmodel.pml.trail on error
+
+# You run the trace only if the spin run above failed and created a trail
+spin-trace: spinmodel.pml.trail
+	spin -t spinmodel.pml -p -g #  -p (statements) -g (globals) -l (locals) -s (send) -r (recv)
+	./pan -r spinmodel.pml.trail -g
+
+# Build the implementation
+prog: arc_queue.c arc.h
+	cc -g -o arc_queue arc_queue_drv.c arc_queue.c
+
+# Modex Extracts from C code to 'model' - see arc_queue.prx
+modex-gen: arc_queue.prx arc_queue.c
+	modex -v -w arc_queue.prx
+	cat arc_queue.pmh model > spinmodel.pml;cat arc_queue.drv >> spinmodel.pml;cat arc_queue.inv >> spinmodel.pml;
+	spin -a spinmodel.pml #Sanity check
+
+# Housekeeping
+modex-gen-clean:
+	rm -f spinmodel.pml # Our consolidated model file
+	rm -f _spin_nvr.tmp # Never claim file
+	rm -f model # modex generated intermediate "model" file
+	rm -f pan.* # Spin generated source files
+	rm -f _modex* # modex generated script files
+	rm -f  *.I *.M
+
+prog-clean:
+	rm -f arc_queue
+spin-run-clean:
+	rm -f spinmodel.pml.trail
+
+spin-build-clean:
+	rm -f pan
+
+spin-gen-clean:
+	rm -f spinmodel.pml # Our consolidated model file
+	rm -f _spin_nvr.tmp # Never claim file
+	rm -f model # Intermediate "model" file
+	rm -f pan.* # Spin generated source files
+
+clean: modex-gen-clean spin-gen-clean spin-build-clean spin-run-clean prog-clean
+	rm -f *~
diff -urN arc-null/arc_queue/queue.h arc/arc_queue/queue.h
--- arc-null/arc_queue/queue.h	1970-01-01 00:00:00.000000000 +0000
+++ arc/arc_queue/queue.h	2023-09-11 04:48:17.669520444 +0000
@@ -0,0 +1,655 @@
+/*	$NetBSD: queue.h,v 1.76 2021/01/16 23:51:51 chs Exp $	*/
+
+/*
+ * Copyright (c) 1991, 1993
+ *	The Regents of the University of California.  All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the University nor the names of its contributors
+ *    may be used to endorse or promote products derived from this software
+ *    without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ *
+ *	@(#)queue.h	8.5 (Berkeley) 8/20/94
+ */
+
+#ifndef	_SYS_QUEUE_H_
+#define	_SYS_QUEUE_H_
+
+/*
+ * This file defines five types of data structures: singly-linked lists,
+ * lists, simple queues, tail queues, and circular queues.
+ *
+ * A singly-linked list is headed by a single forward pointer. The
+ * elements are singly linked for minimum space and pointer manipulation
+ * overhead at the expense of O(n) removal for arbitrary elements. New
+ * elements can be added to the list after an existing element or at the
+ * head of the list.  Elements being removed from the head of the list
+ * should use the explicit macro for this purpose for optimum
+ * efficiency. A singly-linked list may only be traversed in the forward
+ * direction.  Singly-linked lists are ideal for applications with large
+ * datasets and few or no removals or for implementing a LIFO queue.
+ *
+ * A list is headed by a single forward pointer (or an array of forward
+ * pointers for a hash table header). The elements are doubly linked
+ * so that an arbitrary element can be removed without a need to
+ * traverse the list. New elements can be added to the list before
+ * or after an existing element or at the head of the list. A list
+ * may only be traversed in the forward direction.
+ *
+ * A simple queue is headed by a pair of pointers, one the head of the
+ * list and the other to the tail of the list. The elements are singly
+ * linked to save space, so elements can only be removed from the
+ * head of the list. New elements can be added to the list after
+ * an existing element, at the head of the list, or at the end of the
+ * list. A simple queue may only be traversed in the forward direction.
+ *
+ * A tail queue is headed by a pair of pointers, one to the head of the
+ * list and the other to the tail of the list. The elements are doubly
+ * linked so that an arbitrary element can be removed without a need to
+ * traverse the list. New elements can be added to the list before or
+ * after an existing element, at the head of the list, or at the end of
+ * the list. A tail queue may be traversed in either direction.
+ *
+ * For details on the use of these macros, see the queue(3) manual page.
+ */
+
+/*
+ * Include the definition of NULL only on NetBSD because sys/null.h
+ * is not available elsewhere.  This conditional makes the header
+ * portable and it can simply be dropped verbatim into any system.
+ * The caveat is that on other systems some other header
+ * must provide NULL before the macros can be used.
+ */
+#ifdef __NetBSD__
+#include <sys/null.h>
+#endif
+
+#if defined(_KERNEL) && defined(DIAGNOSTIC)
+#define QUEUEDEBUG	1
+#endif
+
+#if defined(QUEUEDEBUG)
+# if defined(_KERNEL)
+#  define QUEUEDEBUG_ABORT(...) panic(__VA_ARGS__)
+# else
+#  include <err.h>
+#  define QUEUEDEBUG_ABORT(...) err(1, __VA_ARGS__)
+# endif
+#endif
+
+/*
+ * Singly-linked List definitions.
+ */
+#define	SLIST_HEAD(name, type)						\
+struct name {								\
+	struct type *slh_first;	/* first element */			\
+}
+
+#define	SLIST_HEAD_INITIALIZER(head)					\
+	{ NULL }
+
+#define	SLIST_ENTRY(type)						\
+struct {								\
+	struct type *sle_next;	/* next element */			\
+}
+
+/*
+ * Singly-linked List access methods.
+ */
+#define	SLIST_FIRST(head)	((head)->slh_first)
+#define	SLIST_END(head)		NULL
+#define	SLIST_EMPTY(head)	((head)->slh_first == NULL)
+#define	SLIST_NEXT(elm, field)	((elm)->field.sle_next)
+
+#define	SLIST_FOREACH(var, head, field)					\
+	for((var) = (head)->slh_first;					\
+	    (var) != SLIST_END(head);					\
+	    (var) = (var)->field.sle_next)
+
+#define	SLIST_FOREACH_SAFE(var, head, field, tvar)			\
+	for ((var) = SLIST_FIRST((head));				\
+	    (var) != SLIST_END(head) &&					\
+	    ((tvar) = SLIST_NEXT((var), field), 1);			\
+	    (var) = (tvar))
+
+/*
+ * Singly-linked List functions.
+ */
+#define	SLIST_INIT(head) do {						\
+	(head)->slh_first = SLIST_END(head);				\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_INSERT_AFTER(slistelm, elm, field) do {			\
+	(elm)->field.sle_next = (slistelm)->field.sle_next;		\
+	(slistelm)->field.sle_next = (elm);				\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_INSERT_HEAD(head, elm, field) do {			\
+	(elm)->field.sle_next = (head)->slh_first;			\
+	(head)->slh_first = (elm);					\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_REMOVE_AFTER(slistelm, field) do {			\
+	(slistelm)->field.sle_next =					\
+	    SLIST_NEXT(SLIST_NEXT((slistelm), field), field);		\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_REMOVE_HEAD(head, field) do {				\
+	(head)->slh_first = (head)->slh_first->field.sle_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	SLIST_REMOVE(head, elm, type, field) do {			\
+	if ((head)->slh_first == (elm)) {				\
+		SLIST_REMOVE_HEAD((head), field);			\
+	}								\
+	else {								\
+		struct type *curelm = (head)->slh_first;		\
+		while(curelm->field.sle_next != (elm))			\
+			curelm = curelm->field.sle_next;		\
+		curelm->field.sle_next =				\
+		    curelm->field.sle_next->field.sle_next;		\
+	}								\
+} while (/*CONSTCOND*/0)
+
+
+/*
+ * List definitions.
+ */
+#define	LIST_HEAD(name, type)						\
+struct name {								\
+	struct type *lh_first;	/* first element */			\
+}
+
+#define	LIST_HEAD_INITIALIZER(head)					\
+	{ NULL }
+
+#define	LIST_ENTRY(type)						\
+struct {								\
+	struct type *le_next;	/* next element */			\
+	struct type **le_prev;	/* address of previous next element */	\
+}
+
+/*
+ * List access methods.
+ */
+#define	LIST_FIRST(head)		((head)->lh_first)
+#define	LIST_END(head)			NULL
+#define	LIST_EMPTY(head)		((head)->lh_first == LIST_END(head))
+#define	LIST_NEXT(elm, field)		((elm)->field.le_next)
+
+#define	LIST_FOREACH(var, head, field)					\
+	for ((var) = ((head)->lh_first);				\
+	    (var) != LIST_END(head);					\
+	    (var) = ((var)->field.le_next))
+
+#define	LIST_FOREACH_SAFE(var, head, field, tvar)			\
+	for ((var) = LIST_FIRST((head));				\
+	    (var) != LIST_END(head) &&					\
+	    ((tvar) = LIST_NEXT((var), field), 1);			\
+	    (var) = (tvar))
+
+#define	LIST_MOVE(head1, head2, field) do {				\
+	LIST_INIT((head2));						\
+	if (!LIST_EMPTY((head1))) {					\
+		(head2)->lh_first = (head1)->lh_first;			\
+		(head2)->lh_first->field.le_prev = &(head2)->lh_first;	\
+		LIST_INIT((head1));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+/*
+ * List functions.
+ */
+#if defined(QUEUEDEBUG)
+#define	QUEUEDEBUG_LIST_INSERT_HEAD(head, elm, field)			\
+	if ((head)->lh_first &&						\
+	    (head)->lh_first->field.le_prev != &(head)->lh_first)	\
+		QUEUEDEBUG_ABORT("LIST_INSERT_HEAD %p %s:%d", (head),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_LIST_OP(elm, field)					\
+	if ((elm)->field.le_next &&					\
+	    (elm)->field.le_next->field.le_prev !=			\
+	    &(elm)->field.le_next)					\
+		QUEUEDEBUG_ABORT("LIST_* forw %p %s:%d", (elm),		\
+		    __FILE__, __LINE__);				\
+	if (*(elm)->field.le_prev != (elm))				\
+		QUEUEDEBUG_ABORT("LIST_* back %p %s:%d", (elm),		\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_LIST_POSTREMOVE(elm, field)				\
+	(elm)->field.le_next = (void *)1L;				\
+	(elm)->field.le_prev = (void *)1L;
+#else
+#define	QUEUEDEBUG_LIST_INSERT_HEAD(head, elm, field)
+#define	QUEUEDEBUG_LIST_OP(elm, field)
+#define	QUEUEDEBUG_LIST_POSTREMOVE(elm, field)
+#endif
+
+#define	LIST_INIT(head) do {						\
+	(head)->lh_first = LIST_END(head);				\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_INSERT_AFTER(listelm, elm, field) do {			\
+	QUEUEDEBUG_LIST_OP((listelm), field)				\
+	if (((elm)->field.le_next = (listelm)->field.le_next) != 	\
+	    LIST_END(head))						\
+		(listelm)->field.le_next->field.le_prev =		\
+		    &(elm)->field.le_next;				\
+	(listelm)->field.le_next = (elm);				\
+	(elm)->field.le_prev = &(listelm)->field.le_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_INSERT_BEFORE(listelm, elm, field) do {			\
+	QUEUEDEBUG_LIST_OP((listelm), field)				\
+	(elm)->field.le_prev = (listelm)->field.le_prev;		\
+	(elm)->field.le_next = (listelm);				\
+	*(listelm)->field.le_prev = (elm);				\
+	(listelm)->field.le_prev = &(elm)->field.le_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_INSERT_HEAD(head, elm, field) do {				\
+	QUEUEDEBUG_LIST_INSERT_HEAD((head), (elm), field)		\
+	if (((elm)->field.le_next = (head)->lh_first) != LIST_END(head))\
+		(head)->lh_first->field.le_prev = &(elm)->field.le_next;\
+	(head)->lh_first = (elm);					\
+	(elm)->field.le_prev = &(head)->lh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define	LIST_REMOVE(elm, field) do {					\
+	QUEUEDEBUG_LIST_OP((elm), field)				\
+	if ((elm)->field.le_next != NULL)				\
+		(elm)->field.le_next->field.le_prev = 			\
+		    (elm)->field.le_prev;				\
+	*(elm)->field.le_prev = (elm)->field.le_next;			\
+	QUEUEDEBUG_LIST_POSTREMOVE((elm), field)			\
+} while (/*CONSTCOND*/0)
+
+#define LIST_REPLACE(elm, elm2, field) do {				\
+	if (((elm2)->field.le_next = (elm)->field.le_next) != NULL)	\
+		(elm2)->field.le_next->field.le_prev =			\
+		    &(elm2)->field.le_next;				\
+	(elm2)->field.le_prev = (elm)->field.le_prev;			\
+	*(elm2)->field.le_prev = (elm2);				\
+	QUEUEDEBUG_LIST_POSTREMOVE((elm), field)			\
+} while (/*CONSTCOND*/0)
+
+/*
+ * Simple queue definitions.
+ */
+#define	SIMPLEQ_HEAD(name, type)					\
+struct name {								\
+	struct type *sqh_first;	/* first element */			\
+	struct type **sqh_last;	/* addr of last next element */		\
+}
+
+#define	SIMPLEQ_HEAD_INITIALIZER(head)					\
+	{ NULL, &(head).sqh_first }
+
+#define	SIMPLEQ_ENTRY(type)						\
+struct {								\
+	struct type *sqe_next;	/* next element */			\
+}
+
+/*
+ * Simple queue access methods.
+ */
+#define	SIMPLEQ_FIRST(head)		((head)->sqh_first)
+#define	SIMPLEQ_END(head)		NULL
+#define	SIMPLEQ_EMPTY(head)		((head)->sqh_first == SIMPLEQ_END(head))
+#define	SIMPLEQ_NEXT(elm, field)	((elm)->field.sqe_next)
+
+#define	SIMPLEQ_FOREACH(var, head, field)				\
+	for ((var) = ((head)->sqh_first);				\
+	    (var) != SIMPLEQ_END(head);					\
+	    (var) = ((var)->field.sqe_next))
+
+#define	SIMPLEQ_FOREACH_SAFE(var, head, field, next)			\
+	for ((var) = ((head)->sqh_first);				\
+	    (var) != SIMPLEQ_END(head) &&				\
+	    ((next = ((var)->field.sqe_next)), 1);			\
+	    (var) = (next))
+
+/*
+ * Simple queue functions.
+ */
+#define	SIMPLEQ_INIT(head) do {						\
+	(head)->sqh_first = NULL;					\
+	(head)->sqh_last = &(head)->sqh_first;				\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_INSERT_HEAD(head, elm, field) do {			\
+	if (((elm)->field.sqe_next = (head)->sqh_first) == NULL)	\
+		(head)->sqh_last = &(elm)->field.sqe_next;		\
+	(head)->sqh_first = (elm);					\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_INSERT_TAIL(head, elm, field) do {			\
+	(elm)->field.sqe_next = NULL;					\
+	*(head)->sqh_last = (elm);					\
+	(head)->sqh_last = &(elm)->field.sqe_next;			\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_INSERT_AFTER(head, listelm, elm, field) do {		\
+	if (((elm)->field.sqe_next = (listelm)->field.sqe_next) == NULL)\
+		(head)->sqh_last = &(elm)->field.sqe_next;		\
+	(listelm)->field.sqe_next = (elm);				\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_REMOVE_HEAD(head, field) do {				\
+	if (((head)->sqh_first = (head)->sqh_first->field.sqe_next) == NULL) \
+		(head)->sqh_last = &(head)->sqh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define SIMPLEQ_REMOVE_AFTER(head, elm, field) do {			\
+	if (((elm)->field.sqe_next = (elm)->field.sqe_next->field.sqe_next) \
+	    == NULL)							\
+		(head)->sqh_last = &(elm)->field.sqe_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_REMOVE(head, elm, type, field) do {			\
+	if ((head)->sqh_first == (elm)) {				\
+		SIMPLEQ_REMOVE_HEAD((head), field);			\
+	} else {							\
+		struct type *curelm = (head)->sqh_first;		\
+		while (curelm->field.sqe_next != (elm))			\
+			curelm = curelm->field.sqe_next;		\
+		if ((curelm->field.sqe_next =				\
+			curelm->field.sqe_next->field.sqe_next) == NULL) \
+			    (head)->sqh_last = &(curelm)->field.sqe_next; \
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_CONCAT(head1, head2) do {				\
+	if (!SIMPLEQ_EMPTY((head2))) {					\
+		*(head1)->sqh_last = (head2)->sqh_first;		\
+		(head1)->sqh_last = (head2)->sqh_last;		\
+		SIMPLEQ_INIT((head2));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	SIMPLEQ_LAST(head, type, field)					\
+	(SIMPLEQ_EMPTY((head)) ?						\
+		NULL :							\
+	        ((struct type *)(void *)				\
+		((char *)((head)->sqh_last) - offsetof(struct type, field))))
+
+/*
+ * Tail queue definitions.
+ */
+#define	_TAILQ_HEAD(name, type, qual)					\
+struct name {								\
+	qual type *tqh_first;		/* first element */		\
+	qual type *qual *tqh_last;	/* addr of last next element */	\
+}
+#define TAILQ_HEAD(name, type)	_TAILQ_HEAD(name, struct type,)
+
+#define	TAILQ_HEAD_INITIALIZER(head)					\
+	{ TAILQ_END(head), &(head).tqh_first }
+
+#define	_TAILQ_ENTRY(type, qual)					\
+struct {								\
+	qual type *tqe_next;		/* next element */		\
+	qual type *qual *tqe_prev;	/* address of previous next element */\
+}
+#define TAILQ_ENTRY(type)	_TAILQ_ENTRY(struct type,)
+
+/*
+ * Tail queue access methods.
+ */
+#define	TAILQ_FIRST(head)		((head)->tqh_first)
+#define	TAILQ_END(head)			(NULL)
+#define	TAILQ_NEXT(elm, field)		((elm)->field.tqe_next)
+#define	TAILQ_LAST(head, headname) \
+	(*(((struct headname *)(void *)((head)->tqh_last))->tqh_last))
+#define	TAILQ_PREV(elm, headname, field) \
+	(*(((struct headname *)(void *)((elm)->field.tqe_prev))->tqh_last))
+#define	TAILQ_EMPTY(head)		(TAILQ_FIRST(head) == TAILQ_END(head))
+
+
+#define	TAILQ_FOREACH(var, head, field)					\
+	for ((var) = ((head)->tqh_first);				\
+	    (var) != TAILQ_END(head);					\
+	    (var) = ((var)->field.tqe_next))
+
+#define	TAILQ_FOREACH_SAFE(var, head, field, next)			\
+	for ((var) = ((head)->tqh_first);				\
+	    (var) != TAILQ_END(head) &&					\
+	    ((next) = TAILQ_NEXT(var, field), 1); (var) = (next))
+
+#define	TAILQ_FOREACH_REVERSE(var, head, headname, field)		\
+	for ((var) = TAILQ_LAST((head), headname);			\
+	    (var) != TAILQ_END(head);					\
+	    (var) = TAILQ_PREV((var), headname, field))
+
+#define	TAILQ_FOREACH_REVERSE_SAFE(var, head, headname, field, prev)	\
+	for ((var) = TAILQ_LAST((head), headname);			\
+	    (var) != TAILQ_END(head) && 				\
+	    ((prev) = TAILQ_PREV((var), headname, field), 1); (var) = (prev))
+
+/*
+ * Tail queue functions.
+ */
+#if defined(QUEUEDEBUG)
+#define	QUEUEDEBUG_TAILQ_INSERT_HEAD(head, elm, field)			\
+	if ((head)->tqh_first &&					\
+	    (head)->tqh_first->field.tqe_prev != &(head)->tqh_first)	\
+		QUEUEDEBUG_ABORT("TAILQ_INSERT_HEAD %p %s:%d", (head),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_INSERT_TAIL(head, elm, field)			\
+	if (*(head)->tqh_last != NULL)					\
+		QUEUEDEBUG_ABORT("TAILQ_INSERT_TAIL %p %s:%d", (head),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_OP(elm, field)					\
+	if ((elm)->field.tqe_next &&					\
+	    (elm)->field.tqe_next->field.tqe_prev !=			\
+	    &(elm)->field.tqe_next)					\
+		QUEUEDEBUG_ABORT("TAILQ_* forw %p %s:%d", (elm),	\
+		    __FILE__, __LINE__);				\
+	if (*(elm)->field.tqe_prev != (elm))				\
+		QUEUEDEBUG_ABORT("TAILQ_* back %p %s:%d", (elm),	\
+		    __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_PREREMOVE(head, elm, field)			\
+	if ((elm)->field.tqe_next == NULL &&				\
+	    (head)->tqh_last != &(elm)->field.tqe_next)			\
+		QUEUEDEBUG_ABORT("TAILQ_PREREMOVE head %p elm %p %s:%d",\
+		    (head), (elm), __FILE__, __LINE__);
+#define	QUEUEDEBUG_TAILQ_POSTREMOVE(elm, field)				\
+	(elm)->field.tqe_next = (void *)1L;				\
+	(elm)->field.tqe_prev = (void *)1L;
+#else
+#define	QUEUEDEBUG_TAILQ_INSERT_HEAD(head, elm, field)
+#define	QUEUEDEBUG_TAILQ_INSERT_TAIL(head, elm, field)
+#define	QUEUEDEBUG_TAILQ_OP(elm, field)
+#define	QUEUEDEBUG_TAILQ_PREREMOVE(head, elm, field)
+#define	QUEUEDEBUG_TAILQ_POSTREMOVE(elm, field)
+#endif
+
+#define	TAILQ_INIT(head) do {						\
+	(head)->tqh_first = TAILQ_END(head);				\
+	(head)->tqh_last = &(head)->tqh_first;				\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_HEAD(head, elm, field) do {			\
+	QUEUEDEBUG_TAILQ_INSERT_HEAD((head), (elm), field)		\
+	if (((elm)->field.tqe_next = (head)->tqh_first) != TAILQ_END(head))\
+		(head)->tqh_first->field.tqe_prev =			\
+		    &(elm)->field.tqe_next;				\
+	else								\
+		(head)->tqh_last = &(elm)->field.tqe_next;		\
+	(head)->tqh_first = (elm);					\
+	(elm)->field.tqe_prev = &(head)->tqh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_TAIL(head, elm, field) do {			\
+	QUEUEDEBUG_TAILQ_INSERT_TAIL((head), (elm), field)		\
+	(elm)->field.tqe_next = TAILQ_END(head);			\
+	(elm)->field.tqe_prev = (head)->tqh_last;			\
+	*(head)->tqh_last = (elm);					\
+	(head)->tqh_last = &(elm)->field.tqe_next;			\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_AFTER(head, listelm, elm, field) do {		\
+	QUEUEDEBUG_TAILQ_OP((listelm), field)				\
+	if (((elm)->field.tqe_next = (listelm)->field.tqe_next) != 	\
+	    TAILQ_END(head))						\
+		(elm)->field.tqe_next->field.tqe_prev = 		\
+		    &(elm)->field.tqe_next;				\
+	else								\
+		(head)->tqh_last = &(elm)->field.tqe_next;		\
+	(listelm)->field.tqe_next = (elm);				\
+	(elm)->field.tqe_prev = &(listelm)->field.tqe_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_INSERT_BEFORE(listelm, elm, field) do {			\
+	QUEUEDEBUG_TAILQ_OP((listelm), field)				\
+	(elm)->field.tqe_prev = (listelm)->field.tqe_prev;		\
+	(elm)->field.tqe_next = (listelm);				\
+	*(listelm)->field.tqe_prev = (elm);				\
+	(listelm)->field.tqe_prev = &(elm)->field.tqe_next;		\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_REMOVE(head, elm, field) do {				\
+	QUEUEDEBUG_TAILQ_PREREMOVE((head), (elm), field)		\
+	QUEUEDEBUG_TAILQ_OP((elm), field)				\
+	if (((elm)->field.tqe_next) != TAILQ_END(head))			\
+		(elm)->field.tqe_next->field.tqe_prev = 		\
+		    (elm)->field.tqe_prev;				\
+	else								\
+		(head)->tqh_last = (elm)->field.tqe_prev;		\
+	*(elm)->field.tqe_prev = (elm)->field.tqe_next;			\
+	QUEUEDEBUG_TAILQ_POSTREMOVE((elm), field);			\
+} while (/*CONSTCOND*/0)
+
+#define TAILQ_REPLACE(head, elm, elm2, field) do {			\
+        if (((elm2)->field.tqe_next = (elm)->field.tqe_next) != 	\
+	    TAILQ_END(head))   						\
+                (elm2)->field.tqe_next->field.tqe_prev =		\
+                    &(elm2)->field.tqe_next;				\
+        else								\
+                (head)->tqh_last = &(elm2)->field.tqe_next;		\
+        (elm2)->field.tqe_prev = (elm)->field.tqe_prev;			\
+        *(elm2)->field.tqe_prev = (elm2);				\
+	QUEUEDEBUG_TAILQ_POSTREMOVE((elm), field);			\
+} while (/*CONSTCOND*/0)
+
+#define	TAILQ_CONCAT(head1, head2, field) do {				\
+	if (!TAILQ_EMPTY(head2)) {					\
+		*(head1)->tqh_last = (head2)->tqh_first;		\
+		(head2)->tqh_first->field.tqe_prev = (head1)->tqh_last;	\
+		(head1)->tqh_last = (head2)->tqh_last;			\
+		TAILQ_INIT((head2));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+/*
+ * Singly-linked Tail queue declarations.
+ */
+#define	STAILQ_HEAD(name, type)						\
+struct name {								\
+	struct type *stqh_first;	/* first element */		\
+	struct type **stqh_last;	/* addr of last next element */	\
+}
+
+#define	STAILQ_HEAD_INITIALIZER(head)					\
+	{ NULL, &(head).stqh_first }
+
+#define	STAILQ_ENTRY(type)						\
+struct {								\
+	struct type *stqe_next;	/* next element */			\
+}
+
+/*
+ * Singly-linked Tail queue access methods.
+ */
+#define	STAILQ_FIRST(head)	((head)->stqh_first)
+#define	STAILQ_END(head)	NULL
+#define	STAILQ_NEXT(elm, field)	((elm)->field.stqe_next)
+#define	STAILQ_EMPTY(head)	(STAILQ_FIRST(head) == STAILQ_END(head))
+
+/*
+ * Singly-linked Tail queue functions.
+ */
+#define	STAILQ_INIT(head) do {						\
+	(head)->stqh_first = NULL;					\
+	(head)->stqh_last = &(head)->stqh_first;				\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_INSERT_HEAD(head, elm, field) do {			\
+	if (((elm)->field.stqe_next = (head)->stqh_first) == NULL)	\
+		(head)->stqh_last = &(elm)->field.stqe_next;		\
+	(head)->stqh_first = (elm);					\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_INSERT_TAIL(head, elm, field) do {			\
+	(elm)->field.stqe_next = NULL;					\
+	*(head)->stqh_last = (elm);					\
+	(head)->stqh_last = &(elm)->field.stqe_next;			\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_INSERT_AFTER(head, listelm, elm, field) do {		\
+	if (((elm)->field.stqe_next = (listelm)->field.stqe_next) == NULL)\
+		(head)->stqh_last = &(elm)->field.stqe_next;		\
+	(listelm)->field.stqe_next = (elm);				\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_REMOVE_HEAD(head, field) do {				\
+	if (((head)->stqh_first = (head)->stqh_first->field.stqe_next) == NULL) \
+		(head)->stqh_last = &(head)->stqh_first;			\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_REMOVE(head, elm, type, field) do {			\
+	if ((head)->stqh_first == (elm)) {				\
+		STAILQ_REMOVE_HEAD((head), field);			\
+	} else {							\
+		struct type *curelm = (head)->stqh_first;		\
+		while (curelm->field.stqe_next != (elm))			\
+			curelm = curelm->field.stqe_next;		\
+		if ((curelm->field.stqe_next =				\
+			curelm->field.stqe_next->field.stqe_next) == NULL) \
+			    (head)->stqh_last = &(curelm)->field.stqe_next; \
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_FOREACH(var, head, field)				\
+	for ((var) = ((head)->stqh_first);				\
+		(var);							\
+		(var) = ((var)->field.stqe_next))
+
+#define	STAILQ_FOREACH_SAFE(var, head, field, tvar)			\
+	for ((var) = STAILQ_FIRST((head));				\
+	    (var) && ((tvar) = STAILQ_NEXT((var), field), 1);		\
+	    (var) = (tvar))
+
+#define	STAILQ_CONCAT(head1, head2) do {				\
+	if (!STAILQ_EMPTY((head2))) {					\
+		*(head1)->stqh_last = (head2)->stqh_first;		\
+		(head1)->stqh_last = (head2)->stqh_last;		\
+		STAILQ_INIT((head2));					\
+	}								\
+} while (/*CONSTCOND*/0)
+
+#define	STAILQ_LAST(head, type, field)					\
+	(STAILQ_EMPTY((head)) ?						\
+		NULL :							\
+	        ((struct type *)(void *)				\
+		((char *)((head)->stqh_last) - offsetof(struct type, field))))
+
+#endif	/* !_SYS_QUEUE_H_ */
diff -urN arc-null/Makefile arc/Makefile
--- arc-null/Makefile	1970-01-01 00:00:00.000000000 +0000
+++ arc/Makefile	2023-09-23 07:06:54.452622678 +0000
@@ -0,0 +1,92 @@
+# This set of spinroot related files were written by cherry
+# <c%bow.st@localhost> in the Gregorian Calendar year AD.2023, in the month
+# of February that year.
+#
+# We have two specification files and a properties file (".inv")
+#
+# The properties file contains "constraint" sections
+# such as ltl or never claims (one or the other, not both).
+# The specification is divided into two files:
+# the file with suffix '.drv' is a "driver" which
+# instantiates processes that will ultimately "drive" the
+# models under test.
+# The file with the suffix '.pml' contains the process
+# model code, which is intended to be the formal specification
+# for the code we are interested in writing in C.
+#
+# We process these files in slightly different ways during
+# the dev cycle, but broadly speaking, the idea is to create
+# a file called 'spinmodel.pml' which contains the final
+# model file that is fed to spin.
+#
+# Note that when we use the model extractor tool "modex" to
+# extract the 'specification' from C code written to implement
+# the model defined above, we use a 'harness' file (see the
+# file with suffix '.prx' below).
+#
+# Once the harness has been run, spinmodel.pml should be
+# synthesised and processed as usual.
+# 
+# The broad idea is that software dev starts by writing the spec
+# first, validating the model, and then implementing the model in
+# C, after which we come back to extract the model from the C file
+# and cross check our implementation using spin.
+#
+# If things go well, the constraints specified in the '.inv' file
+# should hold exactly for both the handwritten model, and the
+# extracted one.
+
+spin-gen: arc.pml arc.drv arc.inv
+	cp arc.pml model #mimic modex
+	cat arc_queue/arc.pmh model > spinmodel.pml;cat arc.drv >> spinmodel.pml;cat arc.inv >> spinmodel.pml;
+	spin -am spinmodel.pml
+
+spin-build: #Could be spin-gen or modex-gen
+	cc -DVECTORSZ=65536 -o pan pan.c
+
+all: spin-gen spin-build prog
+
+# Verification related targets.
+spin-run: spin-build
+	./pan -a # Generates spinmodel.pml.trail on error
+
+# You run the trace only if the spin run above failed and created a trail
+spin-trace: spinmodel.pml.trail
+	spin -t spinmodel.pml -p -g -l #  -p (statements) -g (globals) -l (locals) -s (send) -r (recv)
+	./pan -r spinmodel.pml.trail -g
+
+# Build the implementation
+prog: arc.c arc_queue/arc.h
+	cc -g -o arc arc_drv.c arc.c
+
+# Modex Extracts from C code to 'model' - see arc.prx
+modex-gen: arc.prx arc.c
+	modex -v -w arc.prx
+	cat arc_queue/arc.pmh model > spinmodel.pml;cat arc.drv >> spinmodel.pml;cat arc.inv >> spinmodel.pml;
+	spin -a spinmodel.pml #Sanity check
+
+# Housekeeping
+modex-gen-clean:
+	rm -f spinmodel.pml # Our consolidated model file
+	rm -f _spin_nvr.tmp # Never claim file
+	rm -f model # modex generated intermediate "model" file
+	rm -f pan.* # Spin generated source files
+	rm -f _modex* # modex generated script files
+	rm -f  *.I *.M
+
+prog-clean:
+	rm -f arc
+spin-run-clean:
+	rm -f spinmodel.pml.trail
+
+spin-build-clean:
+	rm -f pan
+
+spin-gen-clean:
+	rm -f spinmodel.pml # Our consolidated model file
+	rm -f _spin_nvr.tmp # Never claim file
+	rm -f model # Intermediate "model" file
+	rm -f pan.* # Spin generated source files
+
+clean: modex-gen-clean spin-gen-clean spin-build-clean spin-run-clean prog-clean
+	rm -f *~


