Subject: Re: Implementation of POSIX message queue
To: Mindaugas R. <rmind@NetBSD.org>
From: Jason Thorpe <thorpej@shagadelic.org>
List: tech-kern
Date: 08/16/2007 08:41:44
On Aug 15, 2007, at 2:52 PM, Mindaugas R. wrote:
> - The implementation does not use the VFS subsystem and cannot be
> attached to it. AFAIK, Linux and FreeBSD implement this via VFS.
> After some estimation, it seems there is no necessity to implement it
> this way - it would be additional overhead, while POSIX does not
> require it. By the way, Solaris implements message queues in
> userland.
I agree -- no need to use the VFS for this.
> - For message queue descriptors, I would like to use the file
> descriptor allocator - fdalloc()/fdremove() (follow the #ifdefs in
> the code), but there is a problem: there is no need to allocate the
> file structure or allow open(), read(), close(), etc. calls, and
> fdalloc/fdremove does not work with a NULL fdp->fd_ofiles[fd].
> Possible solutions:
> a) Change fdalloc/fdremove to work correctly when
> fdp->fd_ofiles[fd] == NULL. But I am not sure if this would be a
> correct thing, i.e. could it cause some problems or violate the
> abstraction?
> b) Allocate a file structure and use it for a mqueue. It would be
> good for copying the descriptors on fork(); currently that looks
> quite weird to me (see mqueue_proc_fork(), FIXME mark). However,
> this would need some workaround for fileops and other calls - is it
> worth it?
Yes, I think (b) is the right choice.
> c) Write separate code for descriptors. But using some existing
> code would be better, though.
> d) Suggestions?
>
> - Currently, the proc::p_mqd list of descriptors is protected by the
> global mqueue_lock RW-lock. I am not sure if it is worth inventing a
> separate per-process lock, because only mq_open(), mq_close() and
> mq_unlink() would compete (they acquire the write lock). In the
> normal case, these should not be frequent calls in the system.
> Anyway - any thoughts on this?
I think the single global rw-lock is good enough for now. If we
determine that there are performance problems later, we can always
revisit.
-- thorpej