Source-Changes-D archive


Re: CVS commit: src/sys/net



   Date: Mon, 15 Feb 2016 11:06:52 +0000
   From: Roy Marples <roy%marples.name@localhost>

   On 15/02/2016 10:32, Ryota Ozaki wrote:
   > I think we can fix by returning from if_link_state_change_si
   > if ifp->if_link_state == ifp->if_old_link_state. Is it correct?

   Then we're not doing what we should because the protocol actions may not
   fire.

   Can we not extend softint_schedule to take some user data? That way,
   each call (which I'm assuming would be sequential) can carry the link
   state change in the user data which can then be applied to the interface.

Changing softint is not really a good idea -- carrying per-call user
data would be a significant semantic change that most current users
don't need, because they already pair the softint with a queue
mechanism of their own, e.g. pktq.
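
The usual shape of that, with made-up names here (this is not the real
pktq API): a per-object queue plus a softint whose cookie is fixed at
softint_establish time, so the per-call data travels in the queue
rather than through softint_schedule:

struct foo_work {
	SIMPLEQ_ENTRY(foo_work)	fw_entry;
	int			fw_data;	/* per-call data */
};

struct foo_softc {
	SIMPLEQ_HEAD(, foo_work)	sc_workq;
	void				*sc_softint;
};

static void
foo_hand_off(struct foo_softc *sc, struct foo_work *fw)
{
	int s;

	s = splnet();
	SIMPLEQ_INSERT_TAIL(&sc->sc_workq, fw, fw_entry);
	splx(s);
	softint_schedule(sc->sc_softint);	/* cookie is sc, set once */
}

static void
foo_softint(void *cookie)
{
	struct foo_softc *sc = cookie;
	struct foo_work *fw;
	int s;

	s = splnet();
	while ((fw = SIMPLEQ_FIRST(&sc->sc_workq)) != NULL) {
		SIMPLEQ_REMOVE_HEAD(&sc->sc_workq, fw_entry);
		/* consume fw->fw_data */
	}
	splx(s);
}

The sketch below is the same idea applied to link state changes.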

If it is necessary to report every link state transition, then we need
to create a queueing mechanism -- and possibly drop some transitions
in the case of memory shortage, as we already do in route_enqueue.

Maybe something like this: a queue plus a link_state_pending field, to
avoid creating queue entries unnecessarily while still guaranteeing
that the most recent link state change takes effect, preceded by
LINK_STATE_DOWN if anything was lost:

/*
 * One queued link state transition, allocated from a pool cache so it
 * can be enqueued without sleeping.
 */
struct link_state_change {
	SIMPLEQ_ENTRY(link_state_change)	lsc_entry;
	int					lsc_state;	/* LINK_STATE_* to apply */
	bool					lsc_lost;	/* earlier transitions were dropped */
};

static pool_cache_t link_state_change_pc __read_mostly;
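
(This assumes struct ifnet grows a few new members matching the names
used below; the softint member holds whatever softint_establish(9)
returns:)

	SIMPLEQ_HEAD(, link_state_change)	if_link_state_changes;
	int					if_link_state_pending;
	bool					if_link_state_lost;
	void					*if_link_state_softint;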

void
if_link_state_change(struct ifnet *ifp, int link_state)
{
	int s;

	s = splnet();
	/* Nothing to do if this state is already the pending one. */
	if (ifp->if_link_state_pending == link_state)
		goto out;

	/*
	 * If an earlier change is still waiting to be applied, queue
	 * this one as well so the softint sees it; on allocation
	 * failure just remember that something was lost.
	 */
	if (ifp->if_link_state_pending != ifp->if_link_state) {
		struct link_state_change *lsc;

		lsc = pool_cache_get(link_state_change_pc, PR_NOWAIT);
		if (lsc == NULL) {
			ifp->if_link_state_lost = true;
			goto change;
		}

		lsc->lsc_state = link_state;
		lsc->lsc_lost = ifp->if_link_state_lost;
		SIMPLEQ_INSERT_TAIL(&ifp->if_link_state_changes, lsc,
		    lsc_entry);
	}

	ifp->if_link_state_lost = false;
change:	ifp->if_link_state_pending = link_state;
	softint_schedule(ifp->if_link_state_softint);
out:	splx(s);
}
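
Drivers would keep calling if_link_state_change() exactly as they do
today, e.g. from their link interrupt or MII status callback:

	if_link_state_change(ifp, LINK_STATE_UP);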

static void
if_link_state_change_softintr(void *cookie)
{
	struct ifnet *ifp = cookie;
	struct link_state_change *lsc;
	int s;

	s = splnet();
	/* Apply the queued transitions in order. */
	while ((lsc = SIMPLEQ_FIRST(&ifp->if_link_state_changes)) != NULL) {
		SIMPLEQ_REMOVE_HEAD(&ifp->if_link_state_changes, lsc_entry);
		if_link_state_change_1(ifp, lsc->lsc_state, lsc->lsc_lost);
		pool_cache_put(link_state_change_pc, lsc);
	}

	/* Apply the most recent change if it has not taken effect yet. */
	if (ifp->if_link_state_pending != ifp->if_link_state)
		if_link_state_change_1(ifp, ifp->if_link_state_pending,
		    ifp->if_link_state_lost);
	ifp->if_link_state_lost = false;
	splx(s);
}

static void
if_link_state_change_1(struct ifnet *ifp, int link_state, bool lost)
{

	if (lost ||
	    (ifp->if_link_state == LINK_STATE_UP &&
	    link_state == LINK_STATE_UNKNOWN)) {
		/*
		 * Pretend LINK_STATE_DOWN first, so that protocols get
		 * a chance to tear down state before the new state is
		 * announced.
		 */
	}

	ifp->if_link_state = link_state;
	/* call the domains' dom_if_link_state_change callbacks */
	/* call rt_ifmsg to announce the change */
}
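
Setup isn't shown above; roughly, something like this (the pool name
"lscpl" and the choice of IPL_NET/SOFTINT_NET are just what I'd expect,
untested):

/* Once, e.g. from ifinit(): */
link_state_change_pc = pool_cache_init(sizeof(struct link_state_change),
    0, 0, 0, "lscpl", NULL, IPL_NET, NULL, NULL, NULL);

/* Per interface, at attach time: */
ifp->if_link_state_pending = ifp->if_link_state;
ifp->if_link_state_lost = false;
SIMPLEQ_INIT(&ifp->if_link_state_changes);
ifp->if_link_state_softint = softint_establish(SOFTINT_NET,
    if_link_state_change_softintr, ifp);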

