Windows NFS server permissions

One issue we recently ran into: Linux NFS clients were blowing away inherited permissions on Windows volumes. To allow rename/mv and chmod to work properly on an NFS (v3 or v4) mount, you need to grant clients ‘full permissions’ on the directory they will be working in. This has the lovely side effect that a chmod, rsync, tar -xpf, or anything else that touches permissions completely changes the local permissions on that directory for ALL users/groups you may have assigned on NTFS:

  1. Create a directory and set appropriate NTFS permissions (Full control) with inheritance for multiple security groups.
  2. Share that directory out to an NFS client.
  3. On the NFS client, mount the volume and run ‘chmod 700 /mountpoint’.
  4. Go back into Windows and notice you’ve lost all the inherited permissions you thought you assigned on that share.
  5. Scratch your head, check the KeepInheritance registry key, run tcpdump.
  6. Realize you need to place the permissions you wish to inherit somewhere the NFS client cannot change them.
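The repro above can be sketched as a short command sequence from the client side; the server name and export path here are placeholders, not our real ones:

```shell
# On the NFS client: mount the Windows NFS export
# (winfs01 and /project_data are hypothetical names).
sudo mkdir -p /mnt/project_data
sudo mount -t nfs -o vers=4 winfs01:/project_data /mnt/project_data

# Any permission-touching operation rewrites the ACL server-side,
# wiping the inherited NTFS permissions for every user/group:
chmod 700 /mnt/project_data

# The same damage comes from tools that restore permissions, e.g.:
#   rsync -a --perms src/ /mnt/project_data/
#   tar -xpf archive.tar -C /mnt/project_data
```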

How we now share volumes out is the following: ‘X:\[projectname]\[data]’

  • projectname – the high-level, NOT-shared directory that holds all the permissions for a project (subfolders, etc.).
    • For groups/users that apply to your Unix clients, make sure they have full permission.
    • For your Windows-only folks, ‘Modify’ is generally good enough.
  • data – the directory that is actually shared out via CIFS/NFS.
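Setting up that layout from the Windows side might look roughly like this (PowerShell syntax; the drive letter, domain, group, and project names are placeholders). The icacls `(OI)(CI)` flags make each grant inherit down to subfolders and files:

```shell
# Create the permission-holding parent (NOT shared) and the shared data dir.
mkdir X:\myproject\data

# Unix-facing groups get Full control, inherited by all children:
icacls X:\myproject /grant "DOMAIN\unixusers:(OI)(CI)F"

# Windows-only groups get Modify, also inherited:
icacls X:\myproject /grant "DOMAIN\winusers:(OI)(CI)M"

# Then share out only X:\myproject\data via CIFS/NFS, so NFS clients
# can never touch the ACLs stored on X:\myproject itself.
```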

So far this scheme is working pretty well: it allows Unix clients to work properly and do horrible things to local files while preserving the broader group permissions you wish to see on your Windows clients.

Size of a Petabyte

A fun back-of-the-napkin game I’ve been playing for the past decade, ever since affordable (under $10k) IDE-SCSI terabyte-sized RAIDs came out, is “How big is a petabyte?” Around the time these became interesting (2003–04), the answer looked like ~16 racks of hard drives and 1U controlling servers in 4–6TB RAID volumes.

The next big upgrade came a little before the Sun Thumpers arrived: 500GB drives cut that down to ~43 servers, a little over 4 racks total.

Today, it looks like you can easily get 80 3.5″ drives in a 4U chassis, which brings a petabyte down from 16 racks a decade-plus ago to about 16U. Assuming I run our 10Gbps pipe at full throttle, that’s around 10 days to fill it (not counting network, storage, or metadata overhead).

Guess it’s time to start counting racks per exabyte (304 at today’s density).
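The arithmetic above can be sanity-checked with a quick script. The drive size (~4TB) and usable units per rack (~41U) are my assumptions, since the post only gives drives-per-chassis and the 10Gbps pipe:

```shell
#!/bin/sh
# Back-of-the-napkin check. Assumptions NOT in the post: ~4TB drives,
# ~41 usable U per rack.
drives_per_chassis=80
chassis_u=4
drive_tb=4
usable_u=41

chassis_tb=$((drives_per_chassis * drive_tb))              # 320 TB per 4U chassis

# Petabyte: chassis count (rounded up) and rack units.
pb_chassis=$(( (1000 + chassis_tb - 1) / chassis_tb ))     # 4 chassis
echo "1 PB ~ $((pb_chassis * chassis_u))U"                 # 16U, as in the post

# Fill time over 10Gbps at line rate (no protocol overhead):
awk 'BEGIN { printf "fill time ~ %.1f days\n", 1e15 / (10e9 / 8) / 86400 }'

# Exabyte: chassis, rack units, racks.
eb_chassis=$(( (1000000 + chassis_tb - 1) / chassis_tb ))  # 3125 chassis
eb_u=$((eb_chassis * chassis_u))                           # 12500U
echo "1 EB ~ $(( (eb_u + usable_u - 1) / usable_u )) racks"
```

With these assumptions the exabyte figure lands just over 300 racks, in line with the post’s ~304.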