Hi. I've enabled SSH access to my box and installed NFS, but that's about the extent of my hackery. It has been running reliably and untouched for a long time.
The other day, my boss said he couldn't write files to it. When I checked, sure enough, all the shares were read-only. The web interface said Drive A had failed. Does it normally make the shares read-only when the RAID is degraded? Over SSH, the RAID status showed a drive missing from the array, and fdisk -l didn't even list the bad drive.
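For reference, here's roughly what I looked at over SSH (assuming the box uses standard Linux md software RAID, which it appears to):

    # Array status; a degraded two-disk mirror shows [U_] or [_U]
    cat /proc/mdstat
    # Detailed view of the array, if mdadm is on the box (md0 is a guess)
    mdadm --detail /dev/md0
    # List all detected drives/partitions; the failed drive was absent here
    fdisk -l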
We got a new WD 500GB drive, almost exactly the same model except the new one is "green" :) With the bad drive removed, I checked that the box booted and the files were accessible (to make sure I'd pulled the right one). We installed the new drive, used the web interface's "format new drive" option, and left it rebuilding. I noticed that it stopped responding to SSH and the web interface (though it was still pingable), but I assumed this was part of its process of shutting services down while syncing the mirror.

We left it overnight. In the morning it was still flashing the rings alternately to indicate a degraded RAID, so my boss powered it down and tried to reboot. The blue lights flash for a bit and then settle to a steady blue. It no longer responded to SSH or the web interface, and wasn't even pingable. We tried the reset-config button, but no change.
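In hindsight, I probably should have been watching the rebuild over SSH before it dropped off the network, something like this (assuming watch is available on the box):

    # Refresh md RAID status every few seconds; a rebuilding mirror shows
    # a progress line like "recovery = 12.3% (...) finish=95.7min"
    watch -n 5 cat /proc/mdstat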
I'm not onsite, but since then he has removed the new drive and it boots; however, the web interface gives a 404, and an SSH session opens but terminates almost immediately. CIFS connections seem to be timing out, and NFS is sort of working but throwing lots of errors, e.g.:
- session request to 192.168.1.2 failed (Call returned zero bytes (EOF))
- ls: cannot access Price Lists: Stale NFS file handle
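Once the box is actually serving NFS again, I'm guessing the stale handles will need a forced remount on each client; something like this (the mount point and export path here are made up):

    # See what the box thinks it's exporting
    showmount -e 192.168.1.2
    # Force-unmount the stale mount, then remount it
    umount -f /mnt/nas
    mount -t nfs 192.168.1.2:/shares /mnt/nas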
I'm thinking of putting the drive (or drives) into a Linux box and poking around (rough plan below). Can anyone tell from the above what's going on? Did we screw something up, and if so, where did we go wrong?
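If I do go that route, here's my rough plan, everything read-only so I can't make things worse. The device and array names are guesses that would need checking against fdisk/dmesg output first:

    # Find the NAS disk's partitions after plugging it in
    fdisk -l
    # Assemble the array read-only from whatever members are present
    # (--run starts it even though the mirror is missing a member)
    mdadm --assemble --readonly --run /dev/md0 /dev/sdb2
    # Mount the filesystem read-only and look around
    # (if mount fails, some NAS firmwares layer LVM on top of md; lvscan would show that)
    mkdir -p /mnt/recovery
    mount -o ro /dev/md0 /mnt/recovery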
Any help much appreciated!