Current-Users archive


getmntinfo compatibility question



Hi!

I'm debugging a problem with using getmntinfo from Rust. It looks like
the statvfs struct returned by getmntinfo is not interpreted correctly
on the Rust side, leading to garbled output and segfaults.

IIUC, in 2019 christos changed the statvfs structure; see
https://mail-index.netbsd.org/source-changes/2019/09/22/msg109266.html
for src/lib/libc/sys/statvfs.c:

revision 1.7
date: 2019-09-23 00:59:38 +0200;  author: christos;  state: Exp;  lines: +4 -14;  commitid: rROHZPwp809xR3EB;
Add a new member to struct vfsstat and grow the unused members
The new member is called f_mntfromlabel and it is the dkw_wname
of the corresponding wedge. This is now used by df -W to display
the mountpoint name as NAME=

adding

char    f_mntfromlabel[_VFS_MNAMELEN];  /* disk label name if avail */

but also replacing

        uint32_t        f_spare[4];     /* spare space */

with

        uint64_t        f_spare[4];     /* spare space */

and versioning the statvfs syscall. I'm not sure how the syscall
compatibility works, but Rust's libc crate still has the old definition
of the struct, so it must still be using the old (compatibility)
statvfs function from before the change. I verified that calling
statvfs from Rust works as expected, so using the old statvfs struct is
fine there.
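
For reference, this is how I read the relevant tail of the struct
before and after that commit (member order pieced together from the
commit message and sys/statvfs.h, so the details may be slightly off):

        /* old layout, which is what the Rust libc crate still describes: */
        uint32_t        f_spare[4];                     /* 16 bytes */
        char            f_fstypename[_VFS_NAMELEN];
        char            f_mntonname[_VFS_MNAMELEN];
        char            f_mntfromname[_VFS_MNAMELEN];

        /* new layout: f_spare doubles in size, so every char array after
         * it starts 16 bytes later, and the label field is appended: */
        uint64_t        f_spare[4];                     /* 32 bytes */
        char            f_fstypename[_VFS_NAMELEN];
        char            f_mntonname[_VFS_MNAMELEN];
        char            f_mntfromname[_VFS_MNAMELEN];
        char            f_mntfromlabel[_VFS_MNAMELEN];  /* disk label name if avail */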

getmntinfo also returns statvfs structures.

So currently my best hunch is that either there was a bug in versioning
getmntinfo, or Rust ends up calling the wrong version of getmntinfo: it
gets the new statvfs struct, in which all the char fields are shifted
by 16 bytes, but parses it with the old struct definition, and chaos
ensues.
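
If it helps to narrow that down, the offsets the installed headers give
can be dumped with a small C check like the sketch below; the numbers
could then be compared against what the libc crate's NetBSD struct
definition implies:

#include <stddef.h>
#include <stdio.h>
#include <sys/statvfs.h>

int
main(void)
{
	/* Offsets of the char fields in the struct the system headers
	 * describe; f_mntfromlabel only exists with the new headers. */
	printf("sizeof(struct statvfs): %zu\n", sizeof(struct statvfs));
	printf("f_fstypename:   %zu\n", offsetof(struct statvfs, f_fstypename));
	printf("f_mntonname:    %zu\n", offsetof(struct statvfs, f_mntonname));
	printf("f_mntfromname:  %zu\n", offsetof(struct statvfs, f_mntfromname));
	printf("f_mntfromlabel: %zu\n", offsetof(struct statvfs, f_mntfromlabel));
	return 0;
}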

How does the getmntinfo backwards compatibility work?

Could my guess be correct?

I'll attach my Rust test program (based on code from the trash-rs
crate), which on my system outputs

count 132
0 -  - ve/packages/pbulk - sr/pkgsrc
1 -  - bin - hive/sandboxes/client3/dev/ptyfs
zsh: segmentation fault (core dumped)  cargo run

which means it finds 132 mounted file systems (due to the bulk build
sandboxes), prints garbage for the first two of them, and then
segfaults on the third.

A C test program I wrote starts with
0: ffs - / - /dev/dk1 - some-hex-code-here
1: kernfs - /kern - kernfs -
2: ptyfs - /dev/pts - ptyfs -
3: procfs - /proc - procfs -
4: tmpfs - /tmp - tmpfs -
5: tmpfs - /var/shm - tmpfs -
...
and works fine.
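
(The C test boils down to roughly the following sketch; not verbatim
what I ran, and I'm assuming ST_NOWAIT is fine here. Note that it also
prints f_mntfromlabel, which only exists with the new struct.)

#include <stdio.h>
#include <sys/types.h>
#include <sys/statvfs.h>

int
main(void)
{
	struct statvfs *mntbuf;
	int i, n;

	/* getmntinfo(3) returns the number of mounted file systems
	 * and points mntbuf at a statically allocated array. */
	n = getmntinfo(&mntbuf, ST_NOWAIT);
	if (n == 0) {
		perror("getmntinfo");
		return 1;
	}
	for (i = 0; i < n; i++)
		printf("%d: %s - %s - %s - %s\n", i,
		    mntbuf[i].f_fstypename,
		    mntbuf[i].f_mntonname,
		    mntbuf[i].f_mntfromname,
		    mntbuf[i].f_mntfromlabel);
	return 0;
}
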
 Thomas

Attachment: rust-getmntinfo.tar.gz
Description: Binary data


