
Sunday, 28 December 2025

NFS - Network File System

Network File System (NFS) is used to mount external sources of files between Linux/Unix servers.

Nowadays, I think most people use SAMBA (SMB) as that also allows Windows to join the club, but I decided to try NFS (again).

I used to work with NFS in the past, and it still seems perfectly viable.

On the server

We will be sharing the directory /mnt/raid.

I've edited the file /etc/exports.d/raid.exports to contain [2]:

/mnt/raid 192.168.2.0/24(rw,sync,no_root_squash)
rw
read and write access
sync
the default; makes sure new requests only get a reply after changes have been committed to stable storage
no_root_squash
by default, requests from root are re-mapped to the anonymous user "nobody", to prevent root-level write access. In my case I wish to edit files as root, and I don't mind the security consequences very much in my home situation
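As an illustration of these options, a single export line can also combine several clients with different options. This is a hypothetical variation, not my actual setup, and 192.168.2.10 is a made-up address for a single trusted client:

```
# read-only for the whole LAN, read-write without root squashing for one trusted host
/mnt/raid 192.168.2.0/24(ro,sync) 192.168.2.10(rw,sync,no_root_squash)
```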

To apply changes to this file, run "exportfs -ra" or restart the NFS server.

Start services:

# systemctl start nfs-server.service
# systemctl enable nfs-server.service

On the client

To list the directories a certain server exports, use "showmount -e watson". To check what is currently mounted, try this:

# mount | grep raid
watson:/mnt/raid on /mnt/raid type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,fatal_neterrors=none,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.2.1,local_lock=none,addr=192.168.2.6)

Or of course try "cat /proc/mounts".

Then, on the client, mount the directory:

# mount -t nfs -o vers=4 watson:/mnt/raid /mnt/raid

Surprisingly, I did not have to change any configuration of the NFS service itself.

It all works remarkably well.
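To make the client mount persistent across reboots, an /etc/fstab entry along these lines should do it (a sketch reusing the hostname and paths from above; _netdev makes the mount wait until the network is up):

```
watson:/mnt/raid  /mnt/raid  nfs4  defaults,_netdev  0  0
```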

References

[1] Fedora Server User Documentation - File sharing with NFS – Installation
https://docs.fedoraproject.org/en-US/fedora-server/services/filesharing-nfs-installation/
[2] My Blog - My Current MDADM Setup
https://randomthoughtsonjavaprogramming.blogspot.com/2024/09/my-current-mdadm-setup.html

Thursday, 27 January 2022

RAID Mirroring with MDADM

So I posted about my RAID solution plenty of times, but I have some additional information.

Here are my current harddrives:

# lsscsi
[0:0:0:0]    disk    ATA      Samsung SSD 860  1B6Q  /dev/sda 
[4:0:0:0]    disk    ATA      WDC WD4003FZEX-0 1A01  /dev/sdb 
[5:0:0:0]    disk    ATA      WDC WD5000AAKS-0 3B01  /dev/sdc 
[5:0:1:0]    disk    ATA      ST2000DM001-1CH1 CC29  /dev/sdd 
[10:0:0:0]   disk    WD       Ext HDD 1021     2021  /dev/sde 
[11:0:0:0]   disk    WD       Ext HDD 1021     2021  /dev/sdf 

Three of these harddrives are used in my RAID 1: one internal SATA drive and two external USB drives.

/dev/sdd - harddrive 2 TB
/dev/sdd1        2048 3907024895 3907022848  1.8T fd Linux raid autodetect

/dev/sde - external WD harddrive 2 TB
/dev/sde1        2048 3907024895 3907022848  1.8T fd Linux raid autodetect

/dev/sdf - external WD harddrive 2 TB
/dev/sdf1        2048 3907024895 3907022848  1.8T fd Linux raid autodetect

And then I had a problem.

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Wed Mar  6 22:16:05 2013
        Raid Level : raid1
        Array Size : 1953380160 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953380160 (1862.89 GiB 2000.26 GB)
      Raid Devices : 3
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sat Nov 27 20:57:13 2021
             State : clean, degraded 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : micemouse:0
              UUID : ed4531c4:59c132b2:a6bfc3d1:6da3b928
            Events : 6205

    Number   Major   Minor   RaidDevice State
       2       8       65        0      active sync   /dev/sde1
       -       0        0        1      removed
       3       8       49        2      active sync   /dev/sdd1

For some reason, we're missing /dev/sdf with partition /dev/sdf1.

I am fairly concerned here.

So I reattached the device and then had to rebuild the array, which took a few days:

mdadm /dev/md127 -a /dev/sdf1

A better solution would have been to prevent the auto-assembly of RAID arrays until I had attached said external drives; they are not attached by default.

Turning off auto raid detection

The reason I wanted to turn off automatic RAID detection is that I have two disks, attached via USB, that are part of the RAID array.

When they are not attached and the RAID is autodetected, the array starts up degraded; this time it started with one device removed.

And it takes a dickens of a time to reattach the devices to the raid.

Sometimes the configuration file is simply not there; mdadm then autodetects the superblock on the drives and works it out from there.

If it is there, it should be either in:

/etc/mdadm/mdadm.conf
for some Linux distributions, for example Ubuntu.
/etc/mdadm.conf
for others, for example Fedora.

Simply adding the following should be enough:

echo "AUTO -all" >> /etc/mdadm.conf
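A sketch of what my /etc/mdadm.conf could then look like — assuming, as I read the mdadm.conf man page, that arrays listed explicitly in ARRAY lines can still be assembled; the UUID is the one from the mdadm --detail output above:

```
# disable auto-assembly for all metadata formats
AUTO -all
# still allow explicit assembly of the known array
ARRAY /dev/md127 UUID=ed4531c4:59c132b2:a6bfc3d1:6da3b928
```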

A better way would probably have been to stop the RAID and re-assemble it afterwards.

Will try this next time if it happens.

Stopping an array

sudo umount /mnt/raid
sudo mdadm --stop --scan

Starting an Array

For simple setups:

sudo mdadm --assemble --scan

For specific setups:

sudo mdadm --assemble /dev/md127 /dev/sdd1 /dev/sde1 /dev/sdf1

For a machine/harddrive independent setup (it gathers the appropriate components itself):

mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c

And then you can do:

sudo mount /dev/md127 /mnt/raid

References

[1] DigitalOcean - How To Manage RAID Arrays with mdadm on Ubuntu 16.04
https://www.digitalocean.com/community/tutorials/how-to-manage-raid-arrays-with-mdadm-on-ubuntu-16-04
[2] Re: turn off auto assembly
https://www.spinics.net/lists/raid/msg30997.html

Thursday, 20 July 2017

mount: unknown filesystem type 'exfat'

When attempting to mount a USB drive, I encountered the following error regarding the exFAT filesystem [1]:
mount: unknown filesystem type 'exfat'
I had to install fuse-exfat as detailed in [2].
# yum install fuse-exfat
Redirecting to '/usr/bin/dnf install fuse-exfat' (see 'man yum2dnf')

Last metadata expiration check: 0:00:02 ago on Fri May 12 07:06:52 2017.
Dependencies resolved.
================================================================================
Package        Arch       Version             Repository                  Size
================================================================================
Installing:
fuse-exfat     x86_64     1.2.5-1.fc25        rpmfusion-free-updates      40 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 40 k
Installed size: 71 k
Is this ok [y/N]: Y
Downloading Packages:
fuse-exfat-1.2.5-1.fc25.x86_64.rpm              335 kB/s |  40 kB     00:00    
--------------------------------------------------------------------------------
Total                                           118 kB/s |  40 kB     00:00     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Installing  : fuse-exfat-1.2.5-1.fc25.x86_64                              1/1
Mounting after that worked flawlessly.
# mount /dev/sdb1 mydrive
FUSE exfat 1.2.5

References

[1] Wikipedia - exFAT
https://en.wikipedia.org/wiki/ExFAT
[2] Mounting EXFAT formatted pendrives in fedora linux
https://coderwall.com/p/nvwgea/mounting-exfat-formatted-pendrives-in-fedora-linux

Thursday, 4 June 2015

RAID-1 Degraded - missing device

I have got two hard disks, both the same type/brand. They are two USB 2.0 connected WesternDigital drives that work pretty well. I've set up a software-based RAID-1 system using them both, by means of the mdadm Linux program. I find it works very well.

Thankfully I noticed something odd about the hard disks I used. I probably screwed something up during the Partial RAID-1 mounting that I tried recently.

The RAID 1 I set up contains two disks, as visible in the lsscsi output:
root@micemouse:~# lsscsi
[0:0:0:0]    disk    ATA      WDC WD5000AAKS-0 01.0  /dev/sda 
[4:0:1:0]    cd/dvd  SAMSUNG  DVD-ROM SDR-430  1.06  /dev/sr0 
[10:0:0:0]   disk    WD       Ext HDD 1021     2021  /dev/sdb 
[11:0:0:0]   disk    WD       Ext HDD 1021     2021  /dev/sdc
As you can see in the printout below, the state is "degraded", and we're missing one of the devices.
root@micemouse:~# mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Wed Mar  6 22:16:05 2013
     Raid Level : raid1
     Array Size : 1953380160 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953380160 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:50:47 2015
          State : clean, degraded 
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : micemouse:0  (local to host micemouse)
           UUID : ed4531c4:59c132b2:a6bfc3d1:6da3b928
         Events : 3748

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1
We are missing /dev/sdb1.

Rebuilding the array

Let's re-add the missing device.
root@micemouse:~# mdadm --remove /dev/md127 /dev/sdb1
mdadm: hot remove failed for /dev/sdb1: No such device or address
root@micemouse:~# mdadm --add /dev/md127 /dev/sdb1
mdadm: added /dev/sdb1
The array is rebuilding itself.
root@micemouse:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Wed Mar  6 22:16:05 2013
     Raid Level : raid1
     Array Size : 1953380160 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953380160 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat May 16 15:54:20 2015
          State : clean, degraded, recovering 
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

 Rebuild Status : 0% complete

           Name : micemouse:0  (local to host micemouse)
           UUID : ed4531c4:59c132b2:a6bfc3d1:6da3b928
         Events : 3750

    Number   Major   Minor   RaidDevice State
       2       8       17        0      spare rebuilding   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
Try "watch -n 2 cat /proc/mdstat" to check on its progress (it refreshes every two seconds in this case).
Every 2.0s: cat /proc/mdstat                            Sat May 16 16:00:38 2015

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb1[2] sdc1[1]
      1953380160 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.2% (5763072/1953380160) finish=6519.5min speed=4978K/sec

unused devices: <none>
Recovery in progress, but it looks a long long way away yet.
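Out of curiosity, that finish estimate can be converted to days with a bit of shell. A throwaway sketch; the sample line is pasted from the output above:

```shell
#!/bin/sh
# Extract the "finish=...min" estimate from a /proc/mdstat recovery line
# and convert minutes to days.
line='[>....................]  recovery =  0.2% (5763072/1953380160) finish=6519.5min speed=4978K/sec'
minutes=$(printf '%s\n' "$line" | sed -n 's/.*finish=\([0-9.]*\)min.*/\1/p')
awk -v m="$minutes" 'BEGIN { printf "%.1f days\n", m / 60 / 24 }'
# prints "4.5 days"
```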

Much later...

After a very long time, /proc/mdstat mentioned that things were finished:
Every 60.0s: cat /proc/mdstat                           Sun May 17 22:23:29 2015

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdb1[2] sdc1[1]
      1953380160 blocks super 1.2 [2/2] [UU]

unused devices: <none>
And a closer look at the RAID device shows:
root@micemouse:~# mdadm -D /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Wed Mar  6 22:16:05 2013
     Raid Level : raid1
     Array Size : 1953380160 (1862.89 GiB 2000.26 GB)
  Used Dev Size : 1953380160 (1862.89 GiB 2000.26 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun May 17 20:46:50 2015
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : micemouse:0  (local to host micemouse)
           UUID : ed4531c4:59c132b2:a6bfc3d1:6da3b928
         Events : 5216

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
The important part being "State: clean".

References

[1] RAID1 - State : clean, degraded
https://www.howtoforge.com/community/threads/raid1-state-clean-degraded.66744/
[2] Mdadm - recover degraded Array
https://www.thomas-krenn.com/en/wiki/Mdadm_recover_degraded_Array

Thursday, 28 May 2015

Partial RAID-1 mounting

I recently (read: ages ago) needed to bring my RAID-1 disks to a friend, and it seemed silly to bring more than one harddisk of a mirrored disk array. Surely a single harddisk would cover it? Mount it read-only and away we go.

Let's take the /dev/sdb1 drive of the array {/dev/sdb1, /dev/sdc1}.

The simplest way is to just start up the RAID array using mdadm with a single disk [2], but for some reason I failed to get this to work.

One way to do it (according to [1]) is to assume that the RAID array uses ext4 as the underlying filesystem and then just fiddle with the offsets.

The command "mdadm --examine /dev/sdb1" shows "Data Offset : 262144 sectors".

Then create a loop device, starting at that offset multiplied by the mdadm sector size (512):
losetup --find --show --read-only --offset $((262144*512)) /dev/sdb1
Then, simply mount the loop device:
mount /dev/loop0 /mnt/raid1/
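To double-check the offset arithmetic: mdadm reports the data offset in 512-byte sectors, while losetup's --offset expects bytes. This is just the multiplication from the command above, spelled out:

```shell
#!/bin/sh
# 262144 sectors of 512 bytes each = the byte offset at which the
# filesystem starts inside the RAID member.
offset_sectors=262144
offset_bytes=$((offset_sectors * 512))
echo "$offset_bytes"        # prints 134217728, i.e. 128 MiB
# losetup --find --show --read-only --offset "$offset_bytes" /dev/sdb1
```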

References

[1] StackExchange - How to mount/recover data on a disk that was part of a mdadm raid 1 on another machine?
http://unix.stackexchange.com/questions/64889/how-to-mount-recover-data-on-a-disk-that-was-part-of-a-mdadm-raid-1-on-another-m
[2] sleeplessbeastie's notes - How to mount software RAID1 member using mdadm
https://blog.sleeplessbeastie.eu/2012/05/08/how-to-mount-software-raid1-member-using-mdadm/