Subject: Re: segmentation fault on fclose?
To: None <current-users@NetBSD.ORG>
From: der Mouse <mouse@Collatz.McRCIM.McGill.EDU>
List: current-users
Date: 09/06/1994 10:10:27
>>> Just two questions...  Should the following code cause a
>>> segmentation fault?
>>>    fp = NULL;
>>>    fclose(fp);

>> ANSI does not require such a special case, and I really don't like
>> the idea of masking bugs in applications by kluging up the C
>> library.

I'm with you all the way here.

> I considered advocating checking for NULL in fclose, but it occurred
> to me I would actually rather that my program dump core, as a warning
> to me that I was closing a file twice, closing a file that was never
> open, or somehow not handling a file correctly in some other way,
> rather than blindly continue on.  I would consider that segmentation
> fault to be a debugging aid.

Me too.  Though if I might offer a suggestion: a somewhat
programmer-friendlier diagnostic, such as

	fclose(NULL) called
	Software abort (core dumped)

might be preferred over

	Segmentation fault (core dumped)

and having to fire up a debugger to discover where it died.  Something
like this, perhaps:

	fclose(FILE *fp)
	{
	 if (! fp)
	  { write(2,"fclose(NULL) called\r\n",21);
	    kill(getpid(),SIGABRT); }
	 ... rest of fclose ...

I don't recommend calling abort() because that has a nasty tendency to
try to fclose() everything, which may be the last thing you want in a
case like this.  (At least it does on our Suns.  I haven't looked at
NetBSD abort().)

Of course, carrying this to its logical extreme would have every libc
routine that takes a pointer argument checking for nil pointers.  A
good case could be made that it's excessive, and I don't really see
anything wrong with the current behavior.

					der Mouse