On Sun, Mar 3, 2019 at 11:41 AM Christos Zoulas <christos%zoulas.com@localhost> wrote:
> On Mar 3, 2019, at 2:17 PM, Aymeric Vincent <aymericvincent%free.fr@localhost> wrote:
> christos%astron.com@localhost (Christos Zoulas) writes:
>> In article <871s3p49lz.fsf%free.fr@localhost>,
>> Aymeric Vincent <aymericvincent%free.fr@localhost> wrote:
>>> There is no trivial way to get rid of this no-longer-valid content,
>>> since for good reason you can't write to a directory as a file. You have
>>> to re-create it (not always possible due to permissions) or create long
>>> entries until your data disappears... :-/
>> Why? The kernel can just zero out the deleted dirents.
> Forgot to mention: "in the current situation". And yes, that's exactly
> what I think, probably the alternatives are
> - zero out on unlink() so that the data is no longer on the disk
> (Everybody seems to have expressed preference for this solution but I
> think this requires changing all the affected filesystems)
> - zero out in getdents() so that the data cannot be accessed without
> accessing the raw device, mimicking the behaviour of unlink for the
> data: data still present on disk but not accessible without accessing
> the raw device. (This requires forbidding read() and similar on
> directories.)
Well, even if you zero out the new entries as you delete them, you need
to have a way to update filesystems that have old unclean directories.
Perhaps we can have fsck do it, or even better have a way (through fcntl/
ioctl/a new syscall/or even abusing open flags) to clean and/or
compact an existing directory (which we cannot do right now).
I am not opposed to changing O_DIRECTORY to be required to open
directories (and overriding globally via sysctl), but that does not fix the stale entries already on disk.
Christos, are you speaking to the use case where, once this feature is added to the kernel, some old filesystems may still have dirent artifacts? Isn't addressing that case a bit overkill? If someone needs dirent artifacts overwritten so that old filesystems match future filesystems under the new kernel feature, wouldn't it be reasonable to create a one-off tool for that case?
Yes, this is what I wrote above; fsck could do it, or we could have a different way to "optimize" a directory that in the process removes stale entries.
Maybe I'm being simplistic, but if you open up directory permissions,
isn't it reasonable to expect that you are granting the user access to
all the prior artifacts, in addition to the current data?
Having access to previously deleted data is an implementation-specific issue and can be viewed as a bug. This is the same problem we have with Office documents containing previous edit details (people don't expect their editing history to be in there).
If a sysctl is added to overwrite (file/directory) data on delete, I would
suggest at least three settings: 1) none; 2) background, low-priority
overwrite; 3) atomic blocking, i.e. the rm command doesn't return until
the overwrite completes successfully.
I don't think it is going to be expensive to zero the directory entry on delete. We are not talking about the data (only the metadata).
I don't have a particular need for this feature, but I would turn on option two, because why not. In most cases, on a non-loaded system, it would have a nominal performance hit.