Recent Forum Posts

In case anyone else is interested, I succeeded in installing Debian Wheezy on my machine (a My Book World Edition with blue rings).

The kernel cannot be updated (the relevant code was not part of the mainline, so it was left behind at 2.6.17). What I did instead was compile a glibc version that works with this old kernel and use it to overwrite the libc6 files of a debootstrapped /wheezy/lib/arm-linux-gnueabi/ folder.

After that, I chrooted successfully into /wheezy, and seeing that it worked, I used a busybox-static shell to replace the /bin, /sbin, /usr, /var, etc. folders of Lenny with the Wheezy ones.
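In case it helps anyone trying the same, here is a rough sketch of the debootstrap/chroot part as I would do it (the mirror and the mount steps are just the usual ones, adjust to your own setup):

# bootstrap a minimal Wheezy armel userland into /wheezy (use whatever Debian mirror you normally do)
debootstrap --arch=armel wheezy /wheezy http://ftp.debian.org/debian
# at this point, overwrite the libc6 files under /wheezy/lib/arm-linux-gnueabi/
# with the glibc built against the 2.6.17 kernel, otherwise nothing in the chroot will run
# make the kernel interfaces visible inside the chroot and drop into it
mount -t proc proc /wheezy/proc
mount --bind /dev /wheezy/dev
chroot /wheezy /bin/bash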

Rebooted and - voila! My MBWE blue rings is probably the only one on the planet running Debian Wheezy :-)

You can read more details in the relevant Unix StackExchange question (http://goo.gl/RmY3yo), where I ended up answering myself :-)

And if anyone else is interested, I can upload the compiled GLIBC files someplace.

You'll have to know Linux a bit to fix this, and I can't take responsibility if you mess up your equipment. I recommend you back up the original files to the root of the MBW for worst-case purposes. However, I can say we have confirmed this working. Please see jdz2k's post at the link below.
http://community.wd.com/t5/Other-Network-Drives/My-Book-World-Edition-loses-connectivity-with-my-PC/m-p/399009#M8192

For all those who like to keep up to date with Twonky, 7.2.7 seems to be the last MSS2 code train from Twonkyforum that will work with the MBWE I or II (white light).

7.2.8 and 7.3 RC4 on white light create the following error in the syslog every 30 seconds or so:

Oct 18 15:04:49 ActiveSVR daemon.err Airplay-Plugin: GetLargeResourceRecord: opt 65002 optlen 8 wrong

The effect is to stop the NAS from going to sleep. Disks and system remain active irrespective of any other action.
I have not got to the bottom of the error. It looks like a bad call in compiled code.
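If anyone wants to confirm whether the disks are actually being kept awake, querying the drive power state over SSH should show it (assuming hdparm is available on your firmware; the device name is a guess):

# "active/idle" means the drive never spun down, "standby" means it did manage to sleep
hdparm -C /dev/sda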

The pointer to the code list is:

twonkyforum.com - Forum: TwonkyServer - Topic: Twonky Server Revision History - Aug 11 2014 - User: Philbertron

If anyone has any thoughts on this error, please feel free to let me know.
Regards.
Damian

I have the 1 TB white light edition and it's running the 01.02.14 firmware with MioNet 4.3.1.13, but I would like it to run the My Book Live firmware so I can use this device with my iPhone apps. Is it possible to have it run the My Book Live firmware?

Thanks

My Book World Edition II (white light)

The disk becomes unavailable after a certain time.

The disk becomes unavailable after a certain time. To solve the problem, I tried:

- Restoring factory defaults
- Reinstalling the latest version of the firmware

but without success. When I reset it, the drive becomes accessible for a certain time, for example a day or two, and then becomes unavailable again. The disk responds to ping and the signal lights are on, but it is not accessible from the computer. Also, the web interface is not reachable. Any advice is welcome. Thank you in advance. RG

Thanks for this post and the helpful comments below, you all saved my weekend!!!

I googled your issue. First, try to delete any logs, temp files, and downloaded .ipkg files still left behind.

Check this and see if it helps

http://wiki.openwrt.org/doc/techref/opkg?s[]=ipkg&s[]=manager#out.of.space
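In case it helps, here is a rough sketch of how I would hunt for what is filling the root partition over SSH (the stock busybox tools may not support every flag, so treat this as a starting point):

# list the biggest directories on the root filesystem only (-x keeps du from crossing into /var and /DataVolume)
du -xk / 2>/dev/null | sort -n | tail -n 20
# look for leftover package downloads (ipkg package files usually end in .ipk)
find / -xdev -name '*.ipk' 2>/dev/null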
Re: Root partition is full by tdmike, 11 Oct 2014 02:10

could you please share the files with us?

I've got the same problem; in my case there is a physical defect and I hear strange noises at boot time. I was able to access my drive by shaking it a bit first and then powering it on. Yes, I know it sounds strange and I shouldn't recommend it.
What you could try is taking it out, connecting it to a Linux machine, and seeing if you can save your data. (There are a few helpful guides on this site for this procedure.)
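If you go that route, the rough idea on the Linux side is something like this (device names are only examples; the data volume usually sits on an md RAID device, so let mdadm find it):

# let mdadm detect and assemble any RAID arrays on the attached disk, then see what it found
mdadm --assemble --scan
cat /proc/mdstat
# mount the data volume read-only and copy the files off (the md number on your machine may differ)
mkdir -p /mnt/rescue
mount -o ro /dev/md2 /mnt/rescue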

Hi All,

I have an MBWE II white light and it seems that something went really wrong…
After boot it remains in a state where only the bottom LED is blinking.

According to the manual this should be the "Powering up" state, but it stays there forever.
Also, I cannot perform a "factory reset"…

Any ideas?

Hi

My root partition is full.

df -h
Filesystem   Size    Used    Available  Use%  Mounted on
/dev/md0     1.8G    1.8G    0          100%  /
/dev/md3     949.6M  179.5M  721.8M     20%   /var
/dev/md2     1.8T    91.1G   1.7T       5%    /DataVolume
/dev/ram0    61.9M   12.0k   61.9M      0%    /mnt/ram

What can I do to clean up? Or are there any other solutions?

Please help

Root partition is full by slysurfz, 01 Oct 2014 20:36

It "might" be possible to do it with this build, but then again probably not

This is the PowerPC version of Plex Media Server for a Synology NAS
https://downloads.plex.tv/plex-media-server/0.9.9.14.531-7eef8c6/PlexMediaServer-0.9.9.14.531-7eef8c6-ppc_qoriq.spk

Hi,
has anyone tried installing Plex Media Server on a WD My Book Live?

Can we do the same as in the other tutorial, "how-to-install-plex-on-mybook-white-light"?

Plex Media Server on WD live by addams14, 30 Sep 2014 21:53

Thanks to all who contributed :) especially the script designer.

My Book Live got bricked by my own stupid move to update Bash, so I disassembled it, connected it via a SATA-to-USB 2.0 converter to my laptop (don't have a decent desktop anymore), fired up VMware Workstation 9 with a Kali Linux VM, used the script as instructed, and all my data was saved (plus my NAS is up and running again).

Kudos to all.

Thanks for the tip.

That's quite messed up. It doesn't seem to be recommended to create users via SSH and normal Linux commands.

Re: Can not modify file (SSH) by flexus, 27 Sep 2014 03:33

Dear Tommy1965

Where do you want to upload it to?
I usually use scp or rsync to transfer such files between my hosts.
I am not sure what you want to do!
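If it is just a matter of copying a firmware file onto the box, something like this works for me (hostname and paths are only examples):

# copy the file to the NAS over SSH
scp ./firmware.img root@mybook:/DataVolume/
# or the same with rsync, which can resume and show progress
rsync -av --progress ./firmware.img root@mybook:/DataVolume/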

Regards

Blip

Re: Manual firmware upgrade! by bleep, 26 Sep 2014 17:07

Can someone share the files again, pretty please? :)

Re: Files by chupa_dnb, 24 Sep 2014 12:59

I've connected an extra HDD to the USB 2.0 port and transmission doesn't work; it blocks and keeps the CPU at 90%.

The story.
I have a My Book Live 3TB NAS, and due to a power/circuit break in my house, it won't start up.
Now I found a repair script that can help you unbrick it, but there's a crucial superblock backup that is needed to make the script work. Of all the superblock backups, this specific superblock is damaged, which interrupts the process of unbricking the NAS HDD. Nor do I know how to manually install new firmware and partition it correctly.
Now I am at my wit's end, lacking the knowledge to manually enter the commands that are in the script to make it work via the terminal.

Things I have done so far:
Installed curl and mdadm (default settings)

sudo apt-get install curl
sudo apt-get install mdadm

Made the script executable:

chmod +x repair_mybooklive.sh

Gave the terminal root and administrative privileges before starting the script:

su root
sudo -i

Made sure /dev/md0 is not mounted or in use before starting the script:

mdadm --stop /dev/md0
mdadm --remove /dev/md0

Made sure I have the right HDD to run the script:

sudo fdisk -l

Model: WDC WD30 EZRX-00MMMB0 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 3      15,7MB  528MB   513MB                primary  msftdata
 1      528MB   2576MB  2048MB  ext3         primary  raid
 2      2576MB  4624MB  2048MB               primary  raid
 4      4624MB  3001GB  2996GB  ext4         primary  msftdata

Results from the script

Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Checking for bad blocks (read-only test): 0.00% done, 0:00 elapsed. (0/0/0 errdone
Warning: the backup superblock/group descriptors at block 65536 contain bad blocks.

Warning: the backup superblock/group descriptors at block 98304 contain bad blocks.

Warning: the backup superblock/group descriptors at block 131072 contain bad blocks.

Warning: the backup superblock/group descriptors at block 425984 contain bad blocks.

Warning: the backup superblock/group descriptors at block 458752 contain bad blocks.

Warning: the backup superblock/group descriptors at block 491520 contain bad blocks.

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: 0/16
Warning, had trouble writing out superblocks.mdadm: added /dev/sdc2
copying image to disk…
dd: writing to ‘/dev/md0’: Input/output error
362273+0 records in
362272+0 records out
185483264 bytes (185 MB) copied, 486,404 s, 381 kB/s
cp: failed to access ‘/mnt/md0/boot/boot.scr’: Input/output error
./repair_mybooklive.sh: line 359: /mnt/md0/etc/nas/service_startup/ssh: Input/output error
mdadm: stopped /dev/md0

My request
Could someone please help me with the following:
Basically I have two issues: the superblock backup at block 65536 is damaged, and I can't install the image.

Please give me step-by-step instructions on how I can repair this HDD.

1. Fix the superblock backup at block 65536 so this error goes away.
#blocksize 65536 is required by the hardware, you won't be able to mount if different.
mkfs.ext4 -b 65536 -m 0 $diskData
2. What to delete from the script to make it run without checking for bad blocks (I already finished that test; it took me 13 hours).
OR
3. Help me decipher the script commands, so I can manually enter them in the terminal.

4. How to use this command to restore the superblock from another superblock backup:

sudo e2fsck -b block_number /dev/xxx
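For what it's worth, my understanding is that it is used roughly like this (device and block numbers are placeholders for whatever applies to your disk):

# dry run: print the filesystem layout, including backup superblock locations, without writing anything
# (if the filesystem was created with non-default options, the printed locations may not match)
sudo mke2fs -n /dev/sdc4
# then point e2fsck at one of those backups instead of the damaged primary superblock;
# add -B <blocksize> if the filesystem uses a non-default block size like the script's 65536
sudo e2fsck -b 32768 /dev/sdc4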
The full output of the script run:

script will use the following disk:

Model: WDC WD30 EZRX-00MMMB0 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 3      15,7MB  528MB   513MB                primary  msftdata
 1      528MB   2576MB  2048MB  ext3         primary  raid
 2      2576MB  4624MB  2048MB               primary  raid
 4      4624MB  3001GB  2996GB  ext4         primary  msftdata

is this REALLY the disk you want? [y] y

********************** IMAGE          **********************

********************** IMPLEMENTATION **********************

everything is now prepared!
device:       /dev/sdc
image_img:    ./CacheVolume/upgrade/rootfs.img
destroy:      false

this is the point of no return, continue? [y] y

mdadm: /dev/sdc1 appears to contain an ext2fs file system
    size=1999808K  mtime=Thu Sep 18 09:10:04 2014
mdadm: size set to 1999808K
mdadm: creation continuing despite oddities due to --run
mdadm: array /dev/md0 started.
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
125184 inodes, 499952 blocks
24997 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=515899392
16 block groups
32768 blocks per group, 32768 fragments per group
7824 inodes per group
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912

Checking for bad blocks (read-only test):   0.00% done, 0:00 elapsed. (0/0/0 errdone                                                 
Warning: the backup superblock/group descriptors at block 65536 contain
    bad blocks.

Warning: the backup superblock/group descriptors at block 98304 contain
    bad blocks.

Warning: the backup superblock/group descriptors at block 131072 contain
    bad blocks.

Warning: the backup superblock/group descriptors at block 425984 contain
    bad blocks.

Warning: the backup superblock/group descriptors at block 458752 contain
    bad blocks.

Warning: the backup superblock/group descriptors at block 491520 contain
    bad blocks.

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information:  0/16
Warning, had trouble writing out superblocks.mdadm: added /dev/sdc2

synchronize raid... done

copying image to disk... 
dd: writing to ‘/dev/md0’: Input/output error
362273+0 records in
362272+0 records out
185483264 bytes (185 MB) copied, 486,404 s, 381 kB/s
cp: failed to access ‘/mnt/md0/boot/boot.scr’: Input/output error
./repair_mybooklive.sh: line 359: /mnt/md0/etc/nas/service_startup/ssh: Input/output error
mdadm: stopped /dev/md0

all done! device should be debricked!

I have a gigabit switch that supports jumbo frames.
I can ping the WD HDD and it is working. No difference in speed: only about 30 MB/s reading and about 18 MB/s max writing.
The only thing I had problems with is the Twonky media player.
It just keeps the CPU at 90%.
Removed jumbo frames and everything is back to normal.
No difference in speed.
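In case anyone wants to check the setting from the command line rather than the web UI, something like this should do it (the interface name is a guess and may differ on your unit):

# show the current MTU of the NAS's network interface
ifconfig eth0 | grep -i mtu
# drop back to the standard 1500 for testing (does not survive a reboot)
ifconfig eth0 mtu 1500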

Re: MTU 9000 Jumbo Frames by Andi Dan, 18 Sep 2014 22:38