Subject: Re: Logical Volume Managers
To: Bill Studenmund <wrstuden@zembu.com>
From: Christian Limpach <chris@Nice.CH>
List: tech-kern
Date: 07/01/2000 01:29:27
> Just add a pointer from the lv to the vg storage. :-)
> Look at how the zsc/zstty (Zilog serial port code) code works. There is
> storage for each zs chip, and code for the things which can live on that
> chip. The latter have pointers into the chip-specific data (well, channel
> specific).
I looked, and this made me realise that I still had some catching-up to do
on some basic designs used in the kernel. I hope to get a better
understanding of how things work.
> > information in each per-lv storage. I tend to prefer to use one softc for
>
> One single struct, or one copy of that struct?
I was thinking of one copy of that struct. That's how I implemented it for
now and it works. But it doesn't fit nicely into the disk framework and
that's probably a bad thing.
> > the whole lvm system, except that I'm somewhat unclear on these points:
> > - is the number of items in one pool limited or is there a performance
> > advantage to use different pools for each vg or each lv?
anybody?
> > - since there is space for a disklabel in the struct disk and this space is
> > used for on-the-fly generation of disklabels for DIOCGPART, can this break?
>
> Maybe. I'd need to see the code.
I have a

    static struct disklabel dk_label;

and I do this:

    lvmfakelabel(&dk_label, minor);
    ((struct partinfo *)data)->disklab = &dk_label;
    ((struct partinfo *)data)->part = &dk_label.d_partitions[0];
This works as long as whoever calls DIOCGPART is not storing the returned
pointers for later reference. This will not work too well on an SMP
machine. In conclusion, a shared struct disk for the whole LVM system is
not a good idea.
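To make this concrete, here is a rough sketch of how I imagine DIOCGPART
would look with one struct disk (and thus one disklabel) per logical
volume, using the lv_softc/sc_dkdev layout I describe further down;
lvm_lookup_lv() is just a made-up helper and none of this is tested:

    case DIOCGPART: {
            /* made-up lookup of the per-lv softc */
            struct lv_softc *sc = lvm_lookup_lv(minor);
            struct partinfo *pi = (struct partinfo *)data;

            if (sc == NULL)
                    return (ENXIO);

            /*
             * Regenerate the fake label into this lv's own storage;
             * dk_label is the per-disk label allocated by disk_attach().
             */
            lvmfakelabel(sc->sc_dkdev.dk_label, minor);

            pi->disklab = sc->sc_dkdev.dk_label;
            pi->part = &sc->sc_dkdev.dk_label->d_partitions[0];
            break;
    }

The pointers handed out then at least keep pointing at the right lv's
label instead of at whatever the last DIOCGPART call happened to
generate.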
> > - are there advantages to allocating memory at boot time versus allocating
> > memory as needed?
>
> Not sure.
anybody?
> > yes and no, wouldn't it be nicer to be able to use as many ccd's as you
> > want? I mean ccd's can be configured at any time, not only at boot and I
> > don't see any reason in the ccd code why it wouldn't be possible to only
> > allocate the memory needed when the ccd is configured. It's "oh, there is a
> > ccd here btw" which the user can trigger at any time.
>
> It would be nice, but no one's implemented it yet. :-(
I wouldn't mind implementing it if such a change would be welcome.
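For what it's worth, this is roughly the kind of configure-time
allocation I have in mind for the lvm code as well (only a sketch using
malloc(9), with the actual initialisation elided):

    /* in the "configure a new lv" ioctl, not at boot/attach time */
    struct lv_softc *sc;

    sc = malloc(sizeof(*sc), M_DEVBUF, M_WAITOK);
    memset(sc, 0, sizeof(*sc));
    /* ... initialise sc, hook it into its vg ... */

    /* and on unconfigure */
    free(sc, M_DEVBUF);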
> So the code you have now probably would work, I'm just suggesting tweaking
> the way lv's and vg's find each other and are allocated. Oh, implicit in
> this is that vg's (the parents) and lv's (the children) have different
> softc structures. And children usually have pointers to their parents.
I'm also trying to make the way lv's and vg's find each other more like
how similar things are done in other places in the kernel.
I would need to know what exactly a softc structure is. disk(9) suggests
that it should have a struct device and a struct disk, while the softc used
for the vnd and ccd devices doesn't have a struct device. If a softc can be
any structure, then I would only have to do some renaming, since I already
use structures for vg's and lv's. If a softc for a disk device should have
a struct disk, as suggested by disk(9), then I should probably add one to
either the vg or the lv structure, since a shared struct disk for the whole
lvm system has the problems mentioned earlier.
So it's either one struct disk for each volume group or one struct disk for
each logical volume. The first option limits the number of logical volumes
per volume group to MAXMAXPARTITIONS, since all of a vg's logical volumes
would be listed as partitions in its struct disk's disklabel. The second
option has no such limit but wastes more memory.
If I use option two, the structures would look something like this:

    struct lv_softc {
            struct disk sc_dkdev;
            [.. whatever has been in struct lv until now ...]
    };

    struct vg_softc {
            [.. whatever has been in struct vg until now ...]
    };
Is this about right? Or is there something mandatory for a softc which
would make vg_softc a real softc as opposed to just a structure, so that
the whole change is more than just cosmetic?
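One thing I take from your remark about children pointing to their parents
is that lv_softc should then presumably also carry a back-pointer to its
vg, along the lines of:

    struct lv_softc {
            struct disk sc_dkdev;
            struct vg_softc *sc_vg;     /* parent volume group */
            [.. whatever has been in struct lv until now ...]
    };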
I would then call disk_attach()/disk_detach() whenever the kernel is made
aware of a new/removed logical volume.
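In rough pseudo-code (with a made-up sc_name field for the disk name and
no error handling), configuring and removing an lv would then do something
like:

    /* when a new lv is configured */
    sc->sc_vg = vg;                     /* child -> parent pointer */
    sc->sc_dkdev.dk_name = sc->sc_name; /* per-lv name for disk(9) */
    disk_attach(&sc->sc_dkdev);

    /* when the lv is removed again */
    disk_detach(&sc->sc_dkdev);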
Thanks for your replies so far, they have been helpful!
christian