Tcl Source Code

Ticket UUID: 219139
Title: file delete -force does not work on read-only directories
Type: Bug Version: obsolete: 8.0p2
Submitter: nobody Created on: 2000-10-26 05:02:14
Subsystem: 37. File System Assigned To: vincentdarley
Priority: 5 Medium Severity:
Status: Closed Last Modified: 2001-09-04 23:10:04
Resolution: Fixed Closed By: vincentdarley
    Closed on: 2001-09-04 16:10:04
Description:
OriginalBugID: 668 Bug
Version: 8.0p2
SubmitDate: '1998-07-29'
LastModified: '2000-01-20'
Severity: MED
Status: Assigned
Submitter: stanton
ChangedBy: hobbs
OS: All
OSVersion: NA
Machine: NA
FixedDate: '2000-01-20'
ClosedDate: '2000-10-25'


If a directory is not writable, file delete -force fails on the first file in the directory.



As with 'rm -rf readonlydir/', the error is to be expected.  The docs
don't mention this, so the docs have been updated for 8.3.0. 
-- 01/20/2000 hobbs
Correction - the intention is that, unlike rm, file delete -force
should delete come hell or high water, which means we need to adjust
the perms on a directory before nixing it in the -force case. 
-- 01/20/2000 hobbs
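The behaviour under discussion can be reproduced directly in a shell. On a typical non-root account the first rm fails, because unlinking an entry requires write permission on the containing directory (paths here are illustrative):

```shell
# Set up a directory with one file, then make the directory read-only.
dir=$(mktemp -d)
mkdir "$dir/rodir"
touch "$dir/rodir/file"
chmod a-w "$dir/rodir"

# Like the reported bug: deleting the entry needs write permission
# on the directory, so this normally fails for a non-root user.
rm -rf "$dir/rodir" 2>/dev/null || echo "rm failed as expected"

# What 'file delete -force' is intended to do instead: restore
# write permission on the directory first, then remove it.
chmod u+w "$dir/rodir" 2>/dev/null
rm -rf "$dir"
```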
User Comments: vincentdarley added on 2001-09-04 23:10:04:
Logged In: YES 
user_id=32170

This works in latest patch.

vincentdarley added on 2001-09-04 22:57:33:

File Deleted - 10357: 



File Added - 10364: fsfixes2.patch

vincentdarley added on 2001-09-04 20:56:22:

I assume you converted eols to unix, etc?

dkf added on 2001-09-04 20:54:25:
Logged In: YES 
user_id=79902

Patch doesn't even apply cleanly.  (I think only one chunk
out of the whole lot applies at all!)  :^(

vincentdarley added on 2001-09-04 19:57:24:

I considered doing that, perhaps combining the function 
with the (existing) 'TclpCreateTempFile' which is above.  

I appreciate the comment about 'tmpnam'; the approach in 
the patch does have the benefit that the file actually 
exists (even though we closed it), so I don't think an 
attacker can mess with it.  What happens next is that we 
overwrite the existing file.

All in all I didn't see massive benefits from returning 
both the name and the channel.  While avoiding the 'open-
close-open' behaviour of the current code, in other ways it 
complicates things (the Win/Mac ports must explicitly open 
the temp file, the file handle must be converted to a 
Tcl_Channel in all three platforms, we need to modify the 
shared CrossFilesystemCopy, etc...)

Anyway, I don't feel that strongly about it, so am happy to 
be argued back (!).   Does the patch actually compile and 
pass the tests?

dkf added on 2001-09-04 19:45:30:

It strikes me as odd that we have an open fd for the
temporary file and then we immediately close it only to
reopen it again (as part of copying.)  Wouldn't it be better
to change TclpTempFileName() to pass back both the name of
the file and a (binary read-write) channel with which to
access the file?  One of the faults with using tmpnam() as I
understand it is that you can have an attacker create the
file with some nefarious content in-between getting the
filename (of a non-existing-at-the-time file) and creating
the file.  If changed to produce both channel and filename,
then it is obvious that the name of the function needs to
change as well (to TclpCreateTempFile for example?)
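dkf's suggestion matches what POSIX mkstemp(3) already provides: it fills in the template with the generated name and returns an open descriptor, closing the tmpnam() race. A minimal sketch of a "name plus fd" helper (the name and signature here are illustrative, not the actual TclpCreateTempFile API):

```c
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hand back BOTH the temp file's name and an open read-write
 * descriptor.  mkstemp() creates and opens the file atomically,
 * so there is no window in which an attacker can create a file
 * of the same name in-between name generation and creation
 * (the tmpnam() race).  Returns the fd, or -1 on error. */
static int create_temp_file(char *name_out, size_t name_len)
{
    char template[] = "/tmp/tclXXXXXX";   /* illustrative location */
    int fd = mkstemp(template);           /* create + open in one step */

    if (fd < 0 || strlen(template) + 1 > name_len) {
        if (fd >= 0) {
            close(fd);
            unlink(template);
        }
        return -1;
    }
    strcpy(name_out, template);
    return fd;   /* caller copies through fd directly: no close/reopen */
}
```

A caller would keep writing through the returned fd instead of the open-close-open dance the existing code does, and unlink the name when finished.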

vincentdarley added on 2001-09-04 18:25:18:

File Deleted - 10356: 



File Added - 10357: fsfixes.patch

vincentdarley added on 2001-09-04 18:21:56:

File Added - 10356: fsfixes.patch


Attached patch to fix this and some other problems.  Please 
test on unix (and ideally add a test case...).

Vince.

dkf added on 2001-09-04 16:48:21:

I'd guess the permissions to be 0700; don't care about
losing perms for other people as the dir is about to be
nuked anyway.  Though if the delete then fails, we must
restore the perms to what they were before since the failure
might be due to another process owned by another user using
that dir as a working directory and making it fail in that
case is not too good.  I think...

The classic way to test this is (I suppose) to create a
directory with a file in it, change its permissions to
something aggressively unreadable, and then attempt to
delete it.  We can then force an unfixable failure by
putting a subprocess in there (which would also be good for
making sure that the permissions don't end up too screwed in
the case of a failure.)
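That recover-on-failure idea can be sketched as a hypothetical standalone helper (the real change belongs in DoRemoveDirectory in tclUnixFCmd.c, and a full version would recurse over the directory's contents first):

```c
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>

/* Force-delete one (empty) directory: make it owner-rwx so its
 * entries could be unlinked, try to remove it, and restore the
 * original permissions if the removal still fails (e.g. another
 * process is using it as its working directory). */
static int force_rmdir(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0) {
        return -1;
    }
    /* 0700: we only need our own rwx; the dir is about to be
     * nuked anyway, so losing perms for others doesn't matter. */
    chmod(path, S_IRWXU);
    if (rmdir(path) == 0) {
        return 0;
    }
    /* Deletion failed: put the old permission bits back so we
     * don't leave the directory in a screwed-up state. */
    int saved = errno;
    chmod(path, st.st_mode & 07777);
    errno = saved;
    return -1;
}
```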

vincentdarley added on 2001-09-04 16:36:12:

It would also be useful to have a new test which currently 
fails on unix, but should pass after adding the 'chmod' as 
below - any takers?

vincentdarley added on 2001-09-04 16:35:05:

The changing permissions piece is very easy to implement.  
In 'DoRemoveDirectory' (tclUnixFCmd.c), we must simply add:

    if (recursive) {
        /* We should try to change permissions 
         * so this can be deleted */
        chmod(path, XXX);
    }

could someone enlighten me as to what the correct value 
of 'XXX' should be here?

dkf added on 2001-09-03 16:31:45:

Following on from Vince's comment, things become even more
sticky when the 'cwd' of another process is inside the
directory.  At that point, you're basically stuck.  (I have
no idea if there are networked filesystems that support that
style of directory locking, but if there are you have the
situation where a blocking process might not even be on the
same computer.  But even without that finesse, the point
still holds since you can't change the directory of other
processes underneath their feet...)

Anyway, hell-or-high-water isn't possible.  Best effort
(probably up to about the level of 'rm -rf' I'd guess) is.

hobbs added on 2001-09-01 00:11:21:
Logged In: YES 
user_id=72656

Yes, the idea would be to attempt other things to actually 
delete the directory.  This was mentioned to me by Scott 
Stanton as the original intention (and thus the bug 
report), but I'm not dead set on it.

vincentdarley added on 2001-08-31 20:22:11:

Also if 'cwd' is inside the directory, a delete will fail 
(although we could even check for that and do a 'cd [file 
dirname $dir]')

I assume from Jeff's comments that the idea of 'file 
delete -force' is to try everything possible to get rid of 
the file, including moving the cwd, changing permissions 
etc?

dkf added on 2000-11-10 20:53:54:
It might not be possible to delete the file in any case, e.g. if it exists in a directory to which we have no write access and where we cannot change the permissions on the file.

Attachments: