[NTLUG:Discuss] Bash script way to tell if a filesystem is mounted

Ralph Green sfreader at sbcglobal.net
Sun May 30 23:31:45 CDT 2010


On Sun, 2010-05-30 at 21:53 -0500, Leroy Tennison wrote:
> Ralph Green wrote:
> > Howdy,
> >   I thought I had the script all working and then I ran across something
> > like what you found.  When I run the stat command on a system with jfs,
> > I always get a value of 0.  On xfs and ext2, everything works.  I don't
> > have any systems with ext3.  Can anyone verify if a stat like Leroy used
> > returns a non-zero value on ext3?  I would not ever set up a system with
> > ext3 or ext4, but I know I should be prepared if I come across one and
> > need to do diagnostics.
> >  I am going to redo my script to use Chris' idea of grepping /etc/mtab
> > or /proc/mounts.
> > Good night,
> > Ralph
> >
> > On Sat, 2010-05-29 at 16:24 -0500, Leroy Tennison wrote:
> >   
> >> [root@localhost ~]# stat -f --format='%i' /boot
> >> 0
> >> [root@localhost ~]# stat -f --format='%i' /boot/grub
> >> 0
> >> [root@localhost ~]#
> >>
> >> Not only am I seeing a directory and its parent with the same file 
> >> system ID when it is a mount point, both IDs are zero.  Just in case 
> >>
> >> Any idea what's wrong?
> >
> Well, I need to "add" to my reply.  Although my root (/) is ext3, /boot 
> (= /dev/hda1) is ext2.  So, either root's being ext3 is a problem or 
> stat doesn't work on ext2 on CentOS 5.4.  Previously I reported my 
> version of stat; what are other people's versions?

Leroy,
 Your root filesystem should not matter for either of these two stats.
I'd still like to hear back from someone else who uses ext3 to see what
stat returns for them.  It may be that ext3 does not report that
information, but that seems unlikely to me, since ext3 is just a hacked
ext2 and ext2 reports the info.
  Why don't you broaden the information you request from stat a bit?
Try:  stat -f /boot
 The ID field is what we are interested in, and it will be 0 if it is
consistent with the results you posted before.  Does anything else look
unusual?
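 If the full output is hard to eyeball, you can also ask stat for just
the two fields we care about.  A small sketch, assuming your stat is GNU
coreutils with the %i (filesystem ID) and %T (type name) format
sequences for -f; I have not checked how far back those go:

# Print the filesystem ID and the type name together.
stat -f --format='%i %T' /boot

If that prints a real type name but the ID is still 0, we will know the
ID truly is not being reported.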
 I'll show you what I get for two filesystem types on two computers on my
network.  First, this computer uses xfs as its root filesystem.
ralph@belinda:~$ stat -f /cifs_mounts
  File: "/cifs_mounts"
    ID: 80500000000 Namelen: 255     Type: xfs
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 46949109   Free: 21329907   Available: 21329907
Inodes: Total: 187888128  Free: 187841237

 This system uses jfs as its root filesystem.
ralph@miro:~$ stat -f /cifs_mounts
  File: "/cifs_mounts"
    ID: 0        Namelen: 255     Type: jfs
Block size: 4096       Fundamental block size: 4096
Blocks: Total: 7708745    Free: 1984864    Available: 1984864
Inodes: Total: 16091936   Free: 15928013
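
 Since the ID comes back 0 on jfs, the grep-/proc/mounts idea from the
message quoted above is probably the more portable check.  Here is a
rough sketch of what I have in mind; is_mounted is just a name I picked,
and it expects the mount point spelled exactly as /proc/mounts shows it:

#!/bin/bash
# Sketch: succeed if the given path is a mount point, by matching
# field 2 (the mount point) of /proc/mounts exactly.  Matching just
# that field avoids false hits on device names or similar paths.
is_mounted() {
    local target=$1
    while read -r dev mnt rest; do
        [ "$mnt" = "$target" ] && return 0
    done < /proc/mounts
    return 1
}

if is_mounted /boot; then
    echo "/boot is mounted"
else
    echo "/boot is not mounted"
fi

The same loop works against /etc/mtab.  One wrinkle I know of: the
kernel octal-escapes odd characters in mount points (a space shows up
as \040), so really unusual paths would need extra handling.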

  You asked why I don't use ext3 and ext4.  I do run test systems with
them at times.  Right now, I don't have either one installed.  I think
ext3 is reasonably stable, but it has poor performance compared to xfs
or jfs.  My main objection is that ext3 is just a hack on top of ext2,
and I think xfs and jfs are much better designs.  It takes more than a
good design, though, so I would not use them unless they were also solid
implementations, and I have found both to be stable.  There are reasons
to stay away from certain filesystems in certain cases.  As Chris Cox
has commented, ext4's journaling is more complete than that of xfs or
jfs.

  I do not trust ext4 for anything more than testing.  I have corrupted
ext4 filesystems with basic testing.  My testing tends to use small hard
drives, and every problem has happened when the filesystem filled up.  I
don't have a pattern I can repeat, or I'd open a bug report.  But I just
used the system for a couple of weeks, and something bad happened to the
filesystem three different times.  The last time was 4 or 5 months ago,
and I suppose I am due to try this again.
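
 For anyone who wants to poke at the same failure mode, here is roughly
the kind of fill-it-up test I mean.  It is a sketch, not my exact
procedure; /dev/sdb1 and /mnt/test are placeholders for a scratch
partition you can afford to lose, and it has to run as root:

#!/bin/bash
# Rough fill-up test: drive an ext4 filesystem to ENOSPC, then check it.
# /dev/sdb1 and /mnt/test are placeholders; use a scratch partition only.
set -e
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/test
mount /dev/sdb1 /mnt/test

# Fill the filesystem until dd fails with "No space left on device".
i=0
while dd if=/dev/zero of=/mnt/test/fill$i bs=1M count=256 2>/dev/null; do
    i=$((i + 1))
done

umount /mnt/test
# -f forces a check even if the fs looks clean; -n keeps it read-only.
fsck.ext4 -f -n /dev/sdb1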

  I do like what I read about ZFS.  I hope to have a ZFS based system up
soon.  I have been dithering a bit, trying to decide if I should use
OpenSolaris, FreeBSD8, or Linux with ZFS mounted with FUSE.  If anyone
in the group has practical advice on this, I'd like to hear it.

  BTRFS looks promising, but there are a couple of details I am still
trying to figure out.  At the moment, I am concerned that BTRFS may have
the same problem with hash lookups as Reiser4 did.  Reiser4 was neat,
but it was not that hard to end up with two files that had the same hash
code in the indices, and then the filesystem could not tell the files
apart.  The BTRFS documentation is not clear enough for me to tell
whether it shares this potential problem, and I have just started going
over the code.  I have only used BTRFS a bit.  I was waiting until the
on-disk format was locked down, and I understand that happened recently.
I have a nice-sized hard drive set aside for BTRFS testing.  I won't
depend on it for single copies of any important files, but I don't trust
any hard drive for that.
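
 To make the collision worry concrete, here is a toy demonstration.
The hash below is my recollection of the r5 function from reiserfs 3,
not Reiser4's or BTRFS's actual directory hash, and I truncate it to 12
bits so the demo finishes instantly.  The point is just the pigeonhole
argument: generate more names than the hash has values and two of them
must collide.  Real directory hashes are 31 or 32 bits wide, so the
same thing becomes likely at a few tens of thousands of names.

#!/bin/bash
# Toy collision demo (needs bash 4 for associative arrays).  The hash
# is my recollection of reiserfs 3's r5 function, truncated to 12 bits
# (4096 values), so a collision is guaranteed: we generate 26^3 = 17576
# three-letter names, which is more names than hash values.
declare -A seen
for a in {a..z}; do
  for b in {a..z}; do
    for c in {a..z}; do
      name="$a$b$c"
      h=0
      for (( i = 0; i < ${#name}; i++ )); do
        printf -v ch '%d' "'${name:i:1}"   # character -> ASCII code
        (( h += ch << 4, h += ch >> 4, h *= 11 ))
      done
      (( h &= 0xfff ))                     # truncate to 12 bits
      if [ -n "${seen[$h]}" ]; then
        echo "collision: '${seen[$h]}' and '$name' both hash to $h"
        exit 0
      fi
      seen[$h]=$name
    done
  done
done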
Have a good day,
Ralph




