Can't install Porteus on RAID 1 array

Please reproduce your error on a second machine before posting, and check the error by running without saved changes or extra modules (See FAQ No. 13, "How to report a bug"). For unstable Porteus versions (alpha, beta, rc) please use the relevant thread in our "Development" section.

Can't install Porteus on RAID 1 array

Post#1 by Blaze » 08 May 2017, 09:39

I created a RAID 1 array following my how-to, "How to create a software RAID 1 array (mirror) in Porteus" (Как создать программный RAID 1 массив (зеркало) в Porteus, Russian topic), but I'm not able to install Porteus on the array.

1) Image 2) Image

Code: Select all

root@porteus:~$ /opt/porteus-scripts/xorg/psu "/opt/porteus-scripts/pinstaller"
/opt/porteus-scripts/pinstaller: line 60: get_user: command not found
/root
grep: /tmp/pinstaller/partinfo.tmp: No such file or directory
root@porteus:~$
3) At boot-up Porteus shows these messages:
Image

4) My fstab
Image

5) If I try to install Porteus manually:

Code: Select all

root@porteus:/mnt/md1p1# mloop Porteus-MATE-v3.2.2-ru-x86_64.iso
using /dev/loop4
 
Please wait while i gather some info ....
 
mount: /dev/loop18 is write-protected, mounting read-only
 
 
 #################################
Your image has been mounted at:
/mnt/loop
 
 You can unmount it by typing uloop
 
 Here is a list of the files:
EFI  USB_INSTALLATION.txt  boot  porteus
 
root@porteus:/mnt/md1p1# cp -a /mnt/loop/* /mnt/md0p1/
root@porteus:/mnt/md1p1# cd /mnt/md0p1/boot/
root@porteus:/mnt/md0p1/boot# ./Porteus-installer-for-Linux.com
Verifying archive integrity... All good.
Uncompressing Porteus Installer......
 
                             _.====.._
                           ,:._       ~-_
                               '\        ~-_
                                 \        \.
                               ,/           ~-_
                      -..__..-''   PORTEUS   ~~--..__
 
==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--==--
 
Installing Porteus to /dev/md0p1
WARNING: Make sure this is the right partition before proceeding.
 
Type 'ok' to continue or press Ctrl+c to exit.
ok
Flushing filesystem buffers...
 
Installation failed with error code '2'.
Please ask for help on the Porteus forum: www.porteus.org/forum
and provide the information from /mnt/md0p1/boot/debug.txt
 
Exiting now...
cat: /mnt/md0p1/boot/syslinux/lilo.menu: No such file or directory
root@porteus:/mnt/md0p1/boot#
My /mnt/md0p1/boot/debug.txt

The same thing happens in ROSA Fresh R9:
Image

I read that for booting from RAID, LILO needs the array created with "mdadm --create --metadata=0.90", but that does not help (I tried both 0.90 and 1.2 metadata blocks).
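
For reference, here is a minimal sketch of creating such a mirror with the legacy superblock (device names are examples; adjust to your disks):

Code: Select all

# Create a two-disk mirror with 0.90 metadata; this format keeps the
# superblock at the end of the partition, so bootloaders that do not
# understand md can still read the filesystem from its normal offset.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 \
      /dev/sdb1 /dev/sdc1
cat /proc/mdstat   # watch the initial resync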

Any solutions are welcome.

BTW, I set up the RAID 1 array in VirtualBox.
Image


Re: Can't install Porteus on RAID 1 array

Post#2 by Falcony » 17 May 2017, 10:36

Blaze wrote: I created a RAID 1 array following my how-to

/opt/porteus-scripts/pinstaller: line 60: get_user: command not found
/root
grep: /tmp/pinstaller/partinfo.tmp: No such file or directory
root@porteus:~$
Your question isn't related to the bug section. You just need a correct installation and you'll be OK.

The Porteus installer is not intended for RAID partitions or RAID installations, so do not even run it.
RAID is a complex matter and isn't for desktop PCs,
just as Porteus itself isn't intended for servers.
3) At boot-up Porteus shows these messages:
Image
The error appears because nothing is configured yet.

At the boot stage the raid1 module isn't loaded.
You need to load it at the boot stage - yes, surely.
Then configure it properly.

4) My fstab
Image
No, not that way. There is no need to mount /dev/sdX and friends.
Such partitions must NOT be mounted at all, or you will break the consistency of your RAID.
5) If I try to install Porteus manually:
Sure, only manually.
Any solutions are welcome.
The steps are as follows (a sketch of step 1 comes after the list).

1. Modify initrd.xz

Extract it,

add the files lib/modules/kernel-ver-porteus/kernel/drivers/md/*

to the module tree,

modify linuxrc

and add the line

insmod /lib/modules/4.9.0-porteus/kernel/drivers/md/raid1.ko

then compress initrd.xz again.

2. Likewise, the raid1 module has to be loaded after the initrd stage:

lsmod | grep raid1
raid1 19296 0
md_mod 75248 1 raid1

3. Swap partitions

are not needed at all now. Use zram.

4. Boot loader

Use extlinux as described in this how-to: http://edoceo.com/howto/mdadm-raid1

5. For other things, read https://linux.mkrovlya.ru/book/%D0%BE%D ... 0%B2-linux
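
A rough sketch of step 1, assuming the Porteus initrd is an xz-compressed cpio archive (the paths and the 4.9.0-porteus kernel version are just the examples used above; the kernel's built-in xz loader requires --check=crc32):

Code: Select all

mkdir /tmp/ird && cd /tmp/ird
xzcat /path/to/initrd.xz | cpio -idm      # unpack the initramfs
mkdir -p lib/modules/4.9.0-porteus/kernel/drivers/md
cp /lib/modules/4.9.0-porteus/kernel/drivers/md/* \
   lib/modules/4.9.0-porteus/kernel/drivers/md/
# in linuxrc, load md-mod.ko before raid1.ko (raid1 depends on it):
#   insmod /lib/modules/4.9.0-porteus/kernel/drivers/md/md-mod.ko
#   insmod /lib/modules/4.9.0-porteus/kernel/drivers/md/raid1.ko
find . | cpio -o -H newc | xz -9 --check=crc32 > /path/to/new-initrd.xz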


Re: Can't install Porteus on RAID 1 array

Post#3 by Blaze » 17 May 2017, 20:19

Hi Falcony.
Thanks for your info.
Falcony wrote: Then configure it properly

4) My fstab

...

No, not that way. There is no need to mount /dev/sdX and friends.
Such partitions must NOT be mounted at all, or you will break the consistency of your RAID.
I meant that when I use RAID 1, my /dev/sdX members are mounted automatically via fstab. That's the bug.
3. Swap partitions

are not needed at all now. Use zram
If I use zram, what about CPU load?
4. Boot loader

use extlinux
Thanks for the info about syslinux.
RAID is a complex matter and isn't for desktop PCs,
just as Porteus itself isn't intended for servers.
It wouldn't be bad to have Porteus as a server, of course, as long as it doesn't destroy the desktop PC concept.


Re: Can't install Porteus on RAID 1 array

Post#4 by Falcony » 18 May 2017, 05:38

Blaze wrote: I meant that when I use RAID 1, my /dev/sdX members are mounted automatically via fstab. That's the bug.
Yep, you're right - such devices have to be skipped during automatic mounting at boot.
Brokenman and Fanthom will have to take care of that.
If I use zram, what about CPU load?
Think about the I/O involved in syncing swap data :)
zram is much, much quicker than a swap partition on RAID 1.

zram is even better on any PC, without RAID.
Only a very old CPU - something like a P5 or P6 - may show some overhead. But using zram on such a PC is no good for another reason: too little RAM.
In the default FIDOSlax boot configuration it is set automatically via the boot option zram=5%.
5% of RAM for swap is quite enough and a very reasonable figure for automatic use - for a very old PC with 128 MB of RAM just as much as for a new PC with 16 GB of RAM.

It wouldn't be bad to have Porteus as a server, of course, as long as it doesn't destroy the desktop PC concept.
For a home server or a reliable workstation - maybe - there RAID 1 may be reasonable.
But not for a production server.


Re: Can't install Porteus on RAID 1 array

Post#5 by Bogomips » 21 May 2017, 01:12

Falcony wrote: zram is even better on any PC, without RAID.
Only a very old CPU - something like a P5 or P6 - may show some overhead. But using zram on such a PC is no good for another reason: too little RAM.
In the default FIDOSlax boot configuration it is set automatically via the boot option zram=5%.
5% of RAM for swap is quite enough and a very reasonable figure for automatic use - for a very old PC with 128 MB of RAM just as much as for a new PC with 16 GB of RAM.
I tried 128 MiB of zram out of 871 MiB total, ~15% of RAM. I noticed no improvement - if anything, a slight deterioration. So you would say that using 5% zram will give an improvement in performance?
Linux porteus 4.4.0-porteus #3 SMP PREEMPT Sat Jan 23 07:01:55 UTC 2016 i686 AMD Sempron(tm) 140 Processor AuthenticAMD GNU/Linux
NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2) MemTotal: 901760 kB MemFree: 66752 kB


Re: Can't install Porteus on RAID 1 array

Post#6 by Falcony » 25 May 2017, 08:05

Bogomips wrote: I tried 128 MiB of zram out of 871 MiB total, ~15% of RAM. I noticed no improvement - if anything, a slight deterioration. So you would say that using 5% zram will give an improvement in performance?
Yes, as a starting point it is a fairly good value. 15% is too much - you need free RAM for Porteus itself, since it runs live and mounts loop devices.

For a low-memory PC, 5% zram looks quite reasonable as a default - it is safe and gives extra performance on a PC with a slow HDD.
Then you need to do a little tuning: add a swap file or swap partition at least equal to RAM size, move /tmp to any Linux data partition, and tune the vm.swappiness parameter.

Then you will have an optimized system that uses RAM first, then zram, and then swap. It works like a cache and really speeds up old PCs.
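
A sketch of that recipe, assuming a kernel with the zram module (the sizes, priorities and swappiness value are only illustrative):

Code: Select all

# zram swap at ~5% of RAM, preferred over disk swap via a higher priority:
modprobe zram num_devices=1
awk '/MemTotal/{print int($2*1024*0.05)}' /proc/meminfo > /sys/block/zram0/disksize
mkswap /dev/zram0 && swapon -p 10 /dev/zram0

# disk swap file roughly equal to RAM (1 GB here), lower-priority backstop:
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile && mkswap /swapfile && swapon -p 1 /swapfile

# tune how eagerly the kernel swaps (example value):
sysctl vm.swappiness=60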


Can't install Porteus on RAID 1 array

Post#7 by Blaze » 20 Nov 2019, 18:24

RAID 1 works in Porteus with neko's kernel

Code: Select all

# cat /usr/src/linux/.config | grep -i RAID
CONFIG_RAID_ATTRS=m
CONFIG_BLK_DEV_3W_XXXX_RAID=m
CONFIG_SCSI_AACRAID=m
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_PMCRAID=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_DM_RAID=m
CONFIG_DMA_ENGINE_RAID=y
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_RAID6_PQ=m
# CONFIG_RAID6_PQ_BENCHMARK is not set
# CONFIG_ASYNC_RAID6_TEST is not set
and without having to run:

Code: Select all

modprobe raid1
Image
:thumbsup:

But Porteus has a bug:

/etc/fstab is not generated correctly:

Code: Select all

# Do not edit this file as fstab is recreated automatically during every boot.
# Please use /etc/rc.d/rc.local or sysvinit scripts if you want to mount/unmount
# drive, filesystem or network share.

# System mounts:
aufs / aufs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
devtmpfs /dev devtmpfs defaults 0 0
devpts /dev/pts devpts rw,mode=0620,gid=5 0 0

# Device partitions:
/dev/sda /mnt/sda ext2 users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0
/dev/sdb1 /mnt/sdb1 linux_raid_member users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0
/dev/sdc1 /mnt/sdc1 linux_raid_member users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0
/dev/md0p1 /mnt/md0p1 ext4 users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0

# RAID arrays:
/dev/md0 /mnt/md0  noatime,nodiratime,suid,dev,exec,async  0 0
/dev/md0p1 /mnt/md0p1 ext4 noatime,nodiratime,suid,dev,exec,async  0 0

# LVM volumes:

# Hotplugged devices:
Image

/etc/fstab should look like this:

Code: Select all

# Do not edit this file as fstab is recreated automatically during every boot.
# Please use /etc/rc.d/rc.local or sysvinit scripts if you want to mount/unmount
# drive, filesystem or network share.

# System mounts:
aufs / aufs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
devtmpfs /dev devtmpfs defaults 0 0
devpts /dev/pts devpts rw,mode=0620,gid=5 0 0

# Device partitions:
/dev/sda /mnt/sda ext2 users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0

# RAID arrays:
/dev/sdb1 /mnt/sdb1 linux_raid_member users,noatime,nodiratime,suid,dev,exec,async 0 0
/dev/sdc1 /mnt/sdc1 linux_raid_member users,noatime,nodiratime,suid,dev,exec,async 0 0
/dev/md0 /mnt/md0  noatime,nodiratime,suid,dev,exec,async  0 0
/dev/md0p1 /mnt/md0p1 ext4 users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0

# LVM volumes:

# Hotplugged devices:
Image
For /dev/sdb1 and /dev/sdc1 we need to remove

Code: Select all

,comment=x-gvfs-show
The 'linux_raid_member' volumes /mnt/sdb1 and /mnt/sdc1 of the RAID 1 mirror must not show up in the left pane of any file manager.

We also need to move the /dev/sdb1 and /dev/sdc1 (linux_raid_member) lines to the

Code: Select all

# RAID arrays:
section.

and for /dev/md0p1 we need to add users, and ,comment=x-gvfs-show.
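
Until the boot scripts are fixed, a hypothetical stopgap for /etc/rc.d/rc.local could patch the options in the generated file (this sketch only edits mount options; it does not move lines between sections):

Code: Select all

# hide RAID members from file managers and tolerate a failed mount:
sed -i '/linux_raid_member/{s/,comment=x-gvfs-show//;s/,async/,async,nofail/}' /etc/fstab
# make the array partition user-mountable and visible in file managers:
sed -i '\@^/dev/md0p1@{s/ noatime/ users,noatime/;/comment=x-gvfs-show/!s/,async/,async,comment=x-gvfs-show/}' /etc/fstab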

Any advice is welcome.

Thanks.

UPDATE
Well, it seems the RAID stuff lives in /etc/rc.d/rc.S:

Code: Select all

# Raid support:
if grep -qw 'TYPE=".*_raid_member"' /mnt/live/tmp/devices 2>/dev/null; then
    echo "Assembling RAID arrays"
    # Create fstab entries:
    blkid | grep /dev/md | grep -v 'TYPE="LVM' | sort >/mnt/live/tmp/raid
    echo -e "\n# RAID arrays:" >>/etc/fstab
    MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="noatime,nodiratime,suid,dev,exec,async"
    RAID=`grep -c / /mnt/live/tmp/raid`; x=1;
    while [ $x -le $RAID ]; do
        NAME=`sed -n "$x"p /mnt/live/tmp/raid | cut -d: -f1 | sed s@/dev/@@`
        FS=` sed -n "$x"p /mnt/live/tmp/raid | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
        echo "/dev/$NAME /mnt/$NAME $FS $MOPT  0 0" >>/etc/fstab
        mkdir /mnt/$NAME; [ -z "`egrep -qo "^noauto( |\$)" /etc/bootcmd.cfg`" ] && mount /mnt/$NAME; let x=x+1
    done
fi

# LVM support:
if egrep -q 'TYPE="LVM|TYPE=".*_raid_member"' /mnt/live/tmp/devices 2>/dev/null; then
    echo "Initializing LVM (Logical Volume Manager):"
    vgscan --mknodes --ignorelockingfailure
    if [ $? = 0 ]; then
        vgchange -ay --ignorelockingfailure
        # Create fstab entries:
        blkid | grep /dev/mapper | sort >/mnt/live/tmp/lvm
        echo -e "\n# LVM volumes:" >>/etc/fstab
        MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="noatime,nodiratime,suid,dev,exec,async"
        LVM=`grep -c / /mnt/live/tmp/lvm`; x=1
        while [ $x -le $LVM ]; do
            NAME=`sed -n "$x"p /mnt/live/tmp/lvm | cut -d: -f1 | sed -e 's@/dev/mapper/@@' -e 's@-@/@g'`
            # Fallback mode:
            test -h /dev/$NAME || NAME=`sed -n "$x"p /mnt/live/tmp/lvm | cut -d: -f1 | sed s@/dev/mapper/@@`
            FS=` sed -n "$x"p /mnt/live/tmp/lvm | egrep -o " TYPE=[^ ]+" | cut -d'"' -f2`
            echo "/dev/$NAME /mnt/$NAME $FS $MOPT 0 0" >>/etc/fstab
            mkdir -p /mnt/$NAME; [ -z "`egrep -qo "^noauto( |\$)" /etc/bootcmd.cfg`" ] && mount /mnt/$NAME; let x=x+1
        done
    fi
fi
and in initrd.xz/finit:

Code: Select all

# Run fstab for setup
fstab() { rm -f /tmp/devices
param nocd || for x in /dev/sr*; do blkid $x >>/tmp/devices; done
param nohd || blkid | egrep -v '/dev/sr|/dev/loop|/dev/mapper' >>/tmp/devices
dev=`egrep -v 'TYPE="sw|TYPE="LVM|TYPE=".*_raid_member"' /tmp/devices 2>/dev/null | cut -d: -f1 | cut -d/ -f3 | sort | uniq`
cat > /etc/fstab << EOF
# Do not edit this file as fstab is recreated automatically during every boot.
# Please use /etc/rc.d/rc.local or sysvinit scripts if you want to mount/unmount
# drive, filesystem or network share.

# System mounts:
aufs / aufs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
devtmpfs /dev devtmpfs defaults 0 0
devpts /dev/pts devpts rw,mode=0620,gid=5 0 0

# Device partitions:
EOF
for x in $dev; do
    fs=`grep -w /dev/$x /tmp/devices | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
    [ $fs = vfat ] && echo "/dev/$x /mnt/$x vfat $MOPT,umask=0,check=s,utf8 0 0" >>/etc/fstab || echo "/dev/$x /mnt/$x $fs $MOPT 0 0" >>/etc/fstab
    if [ ! -d /mnt/$x ]; then
	mkdir /mnt/$x
	if [ $fs = ntfs ]; then
	    ntfs-3g /dev/$x /mnt/$x -o $MOPT 2>/dev/null || { sed -i "/$x /d" /etc/fstab; rmdir /mnt/$x; }
	else
	    mount -n /mnt/$x 2>/dev/null || { modprobe $fs 2>/dev/null && mount -n /mnt/$x 2>/dev/null || { sed -i "/$x /d" /etc/fstab; rmdir /mnt/$x; }; }
	fi
    fi
done

if [ -z "`egrep -o " noswap( |\$)" /proc/cmdline`" -a -e /tmp/devices ]; then
	#echo -e "\n# Swap partitions:" >>/etc/fstab
	for x in `grep 'TYPE="swap"' /tmp/devices | cut -d: -f1`; do echo "$x none swap sw,pri=1 0 0" >>/etc/fstab; done
fi }

# Mount things
mount_device() {
fs=`blkid /dev/$1 | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
if [ "$fs" ]; then
    mkdir /mnt/$1
    if [ $fs = vfat ]; then
	mount -n /dev/$1 /mnt/$1 -o $MOPT,umask=0,check=s,utf8 2>/dev/null || rmdir /mnt/$1
    elif [ $fs = ntfs ]; then
	ntfs-3g /dev/$1 /mnt/$1 -o $MOPT 2>/dev/null || rmdir /mnt/$1
    else
	mount -n /dev/$1 /mnt/$1 -o $MOPT 2>/dev/null || { modprobe $fs 2>/dev/null && mount -n /dev/$1 /mnt/$1 -o $MOPT || rmdir /mnt/$1; }
    fi
fi }

# Search for boot location
search() { FND=none; for x in `ls /mnt | tac`; do
[ $1 /mnt/$x/$2 ] && { DEV=$x; FND=y; break; }; done
[ $FND = y ]; }

# Delay booting a little until devices have settled
nap() { echo -en $i"device not ready yet? delaying [1;33m$SLEEP[0m seconds \r"; sleep 1; }
lazy() { SLEEP=6; while [ $SLEEP -gt 0 -a $FND = none ]; do nap; let SLEEP=SLEEP-1; fstab; search $*; done }

# Find location of Porteus files
locate() { LPATH=`echo $2 | cut -b-5 | sed s@/dev@/mnt@`
if [ $LPATH = /mnt/ ]; then
    DEV=`echo $2 | cut -d/ -f3`; LPTH=`echo $2 | cut -d/ -f4-`; SLEEP=6
    while [ $SLEEP -gt 0 -a ! -b /dev/$DEV ]; do nap; let SLEEP=SLEEP-1; fstab; done
    [ -d /mnt/$DEV ] || mount_device $DEV
    [ $1 /mnt/$DEV/$LPTH ]
elif [ $LPATH = UUID: -o $LPATH = LABEL ]; then
    ID=`echo $2 | cut -d: -f2 | cut -d/ -f1`; LPTH=`echo $2 | cut -d/ -f2-`; DEV=`blkid | grep $ID | cut -d: -f1 | cut -d/ -f3`; SLEEP=6
    while [ $SLEEP -gt 0 -a "$DEV" = "" ]; do nap; let SLEEP=SLEEP-1; fstab; DEV=`blkid | grep $ID | cut -d: -f1 | cut -d/ -f3`; done
    [ -d /mnt/$DEV ] || mount_device $DEV
    [ $1 /mnt/$DEV/$LPTH ]
else
    LPTH=$2; search $* || lazy $*
fi }
Let's see...

UPDATE
1st fix for /etc/rc.d/rc.S

Code: Select all

    MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="noatime,nodiratime,suid,dev,exec,async"
replace with

Code: Select all

    MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show"
and now I need to remove what I crossed out with a red line in the

Code: Select all

# Device partitions:
section of fstab (probably this has to be done in initrd.xz/finit)
Image

My test rig in VirtualBox:

Code: Select all

# blkid | egrep -v '/dev/sr|/dev/loop|/dev/mapper' >>/tmp/devices

/dev/sda: UUID="22cb345c-d364-4f00-a8ff-cac6117c2a09" TYPE="ext2"
/dev/sdb1: UUID="deab012d-a49a-bbfd-6779-400ff3c13331" UUID_SUB="0610e2b7-9646-baaa-0374-1a7d1f049a6b" LABEL="porteus.example.net:0" TYPE="linux_raid_member" PARTUUID="b28fe85b-01"
/dev/sdc1: UUID="deab012d-a49a-bbfd-6779-400ff3c13331" UUID_SUB="a6c173ef-3f5b-8c0b-6b0e-b29e37cf0468" LABEL="porteus.example.net:0" TYPE="linux_raid_member" PARTUUID="bf04882c-01"
/dev/md0p1: UUID="07b72d9a-b258-4702-b184-ea9d1e561936" TYPE="ext4" PARTUUID="16d0e273-01"
/dev/md0: PTUUID="16d0e273" PTTYPE="dos"


Can't install Porteus on RAID 1 array

Post#8 by Blaze » 24 Nov 2019, 10:00

At the moment I have fixed only the RAID section in /etc/rc.d/rc.S (I'm not familiar with LVM, but I applied some fixes to the LVM part that was visibly incorrect):

Open /etc/rc.d/rc.S and find:

Code: Select all

# Raid support:
if grep -qw 'TYPE=".*_raid_member"' /mnt/live/tmp/devices 2>/dev/null; then
    echo "Assembling RAID arrays"
    # Create fstab entries:
    blkid | grep /dev/md | grep -v 'TYPE="LVM' | sort >/mnt/live/tmp/raid
    echo -e "\n# RAID arrays:" >>/etc/fstab
    MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="noatime,nodiratime,suid,dev,exec,async"
    RAID=`grep -c / /mnt/live/tmp/raid`; x=1;
    while [ $x -le $RAID ]; do
        NAME=`sed -n "$x"p /mnt/live/tmp/raid | cut -d: -f1 | sed s@/dev/@@`
        FS=` sed -n "$x"p /mnt/live/tmp/raid | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
        echo "/dev/$NAME /mnt/$NAME $FS $MOPT  0 0" >>/etc/fstab
        mkdir /mnt/$NAME; [ -z "`egrep -qo "^noauto( |\$)" /etc/bootcmd.cfg`" ] && mount /mnt/$NAME; let x=x+1
    done
fi

# LVM support:
if egrep -q 'TYPE="LVM|TYPE=".*_raid_member"' /mnt/live/tmp/devices 2>/dev/null; then
    echo "Initializing LVM (Logical Volume Manager):"
    vgscan --mknodes --ignorelockingfailure
    if [ $? = 0 ]; then
        vgchange -ay --ignorelockingfailure
        # Create fstab entries:
        blkid | grep /dev/mapper | sort >/mnt/live/tmp/lvm
        echo -e "\n# LVM volumes:" >>/etc/fstab
        MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="noatime,nodiratime,suid,dev,exec,async"
        LVM=`grep -c / /mnt/live/tmp/lvm`; x=1
        while [ $x -le $LVM ]; do
            NAME=`sed -n "$x"p /mnt/live/tmp/lvm | cut -d: -f1 | sed -e 's@/dev/mapper/@@' -e 's@-@/@g'`
            # Fallback mode:
            test -h /dev/$NAME || NAME=`sed -n "$x"p /mnt/live/tmp/lvm | cut -d: -f1 | sed s@/dev/mapper/@@`
            FS=` sed -n "$x"p /mnt/live/tmp/lvm | egrep -o " TYPE=[^ ]+" | cut -d'"' -f2`
            echo "/dev/$NAME /mnt/$NAME $FS $MOPT 0 0" >>/etc/fstab
            mkdir -p /mnt/$NAME; [ -z "`egrep -qo "^noauto( |\$)" /etc/bootcmd.cfg`" ] && mount /mnt/$NAME; let x=x+1
        done
    fi
fi
Replace with:

Code: Select all

# Raid support:
if grep -qw 'TYPE=".*_raid_member"' /mnt/live/tmp/devices 2>/dev/null; then
    # Linux RAID members as fstab entries:
    echo "Linux RAID members as fstab entries"
    dev=`grep -w 'TYPE=".*_raid_member"' /mnt/live/tmp/devices 2>/dev/null | cut -d: -f1 | cut -d/ -f3 | sort -u`
    echo -e "\n# Linux RAID members:" >>/etc/fstab
    for x in $dev; do
        fs=`grep -w /dev/$x /mnt/live/tmp/devices | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
        # create the mountpoint if missing, and always write the entry:
        [ -d /mnt/$x ] || mkdir /mnt/$x
        if [ $fs = linux_raid_member ]; then
            RAIDMOPT="noatime,nodiratime,suid,dev,exec,async,nofail"
            echo "/dev/$x /mnt/$x $fs $RAIDMOPT  0 0" >>/etc/fstab
        fi
    done
    # Assembling RAID arrays:
    echo "Assembling RAID arrays"
    blkid | grep '^/dev/md.*p.*' | grep -v 'TYPE="LVM' | sort >/mnt/live/tmp/raid
    echo -e "\n# RAID arrays:" >>/etc/fstab
    MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show"
    RAID=`wc -l < /mnt/live/tmp/raid`; x=1;
    while [ $x -le $RAID ]; do
        NAME=`sed -n "$x"p /mnt/live/tmp/raid | cut -d: -f1 | sed s@/dev/@@`
        FS=`sed -n "$x"p /mnt/live/tmp/raid | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
        echo "/dev/$NAME /mnt/$NAME $FS $MOPT  0 0" >>/etc/fstab
        mkdir /mnt/$NAME; [ -z "`egrep -qo "^noauto( |\$)" /etc/bootcmd.cfg`" ] && mount /mnt/$NAME; let x=x+1
    done
fi

# LVM support:
if egrep -q 'TYPE="LVM|TYPE=".*_raid_member"' /mnt/live/tmp/devices 2>/dev/null; then
    echo "Initializing LVM (Logical Volume Manager):"
    vgscan --mknodes --ignorelockingfailure
    if [ $? = 0 ]; then
        vgchange -ay --ignorelockingfailure
        # Create fstab entries:
        blkid | grep /dev/mapper | sort >/mnt/live/tmp/lvm
        echo -e "\n# LVM volumes:" >>/etc/fstab
        MOPT=`egrep -o "^mopt=[^ ]+" /etc/bootcmd.cfg | cut -d= -f2`; [ $MOPT ] || MOPT="users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show"
        LVM=`wc -l < /mnt/live/tmp/lvm`; x=1;
        while [ $x -le $LVM ]; do
            NAME=`sed -n "$x"p /mnt/live/tmp/lvm | cut -d: -f1 | sed -e 's@/dev/mapper/@@' -e 's@-@/@g'`
            # Fallback mode:
            test -h /dev/$NAME || NAME=`sed -n "$x"p /mnt/live/tmp/lvm | cut -d: -f1 | sed s@/dev/mapper/@@`
            FS=`sed -n "$x"p /mnt/live/tmp/lvm | egrep -o " TYPE=[^ ]+" | cut -d'"' -f2`
            echo "/dev/$NAME /mnt/$NAME $FS $MOPT 0 0" >>/etc/fstab
            mkdir -p /mnt/$NAME; [ -z "`egrep -qo "^noauto( |\$)" /etc/bootcmd.cfg`" ] && mount /mnt/$NAME; let x=x+1
        done
    fi
fi
Now I have this picture in /etc/fstab:

Code: Select all

# Do not edit this file as fstab is recreated automatically during every boot.
# Please use /etc/rc.d/rc.local or sysvinit scripts if you want to mount/unmount
# drive, filesystem or network share.

# System mounts:
aufs / aufs defaults 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
devtmpfs /dev devtmpfs defaults 0 0
devpts /dev/pts devpts rw,mode=0620,gid=5 0 0

# Device partitions:
/dev/sda /mnt/sda ext2 users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0
/dev/sdb1 /mnt/sdb1 linux_raid_member users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0
/dev/sdc1 /mnt/sdc1 linux_raid_member users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0
/dev/md0p1 /mnt/md0p1 ext4 users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0


# Linux RAID members:
/dev/sdb1 /mnt/sdb1 linux_raid_member noatime,nodiratime,suid,dev,exec,async,nofail 0 0
/dev/sdc1 /mnt/sdc1 linux_raid_member noatime,nodiratime,suid,dev,exec,async,nofail 0 0

# RAID arrays:
/dev/md0p1 /mnt/md0p1 ext4 users,noatime,nodiratime,suid,dev,exec,async,comment=x-gvfs-show 0 0

# LVM volumes:

# Hotplugged devices:
I need to remove the crossed-out lines from /etc/fstab. This can be done only in initrd.xz/finit (its code is in my previous post), IMHO.

I have tried to find

Code: Select all

dev=`egrep -v 'TYPE="sw|TYPE="LVM|TYPE=".*_raid_member"' /tmp/devices 2>/dev/null | cut -d: -f1 | cut -d/ -f3 | sort | uniq`
and replace with

Code: Select all

dev=`egrep -v '/dev/md|TYPE="sw"|TYPE="LVM"|TYPE=".*_raid_member"' /tmp/devices 2>/dev/null | cut -d: -f1 | cut -d/ -f3 | sort -u`
but it does not give the desired result, because this function

Code: Select all

# Mount things
mount_device() {
fs=`blkid /dev/$1 | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
if [ "$fs" ]; then
    mkdir /mnt/$1
    if [ $fs = vfat ]; then
	mount -n /dev/$1 /mnt/$1 -o $MOPT,umask=0,check=s,utf8 2>/dev/null || rmdir /mnt/$1
    elif [ $fs = ntfs ]; then
	ntfs-3g /dev/$1 /mnt/$1 -o $MOPT 2>/dev/null || rmdir /mnt/$1
    else
	mount -n /dev/$1 /mnt/$1 -o $MOPT 2>/dev/null || { modprobe $fs 2>/dev/null && mount -n /dev/$1 /mnt/$1 -o $MOPT || rmdir /mnt/$1; }
    fi
fi }
spoils the whole picture: /dev/$1 still receives /dev/sdb1 and /dev/sdc1, plus /dev/md0p1, and these devices should be excluded :(
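
One hypothetical, untested way around this would be a guard at the top of mount_device() so it skips md devices and RAID members entirely, leaving them to the RAID section of rc.S:

Code: Select all

# initrd.xz/finit - mount_device() with two extra guard lines (sketch):
mount_device() {
case $1 in md*) return;; esac   # leave /dev/md* to the rc.S RAID section
fs=`blkid /dev/$1 | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2`
[ "$fs" = linux_raid_member ] && return   # skip RAID members as well
if [ "$fs" ]; then
    mkdir /mnt/$1
    if [ $fs = vfat ]; then
	mount -n /dev/$1 /mnt/$1 -o $MOPT,umask=0,check=s,utf8 2>/dev/null || rmdir /mnt/$1
    elif [ $fs = ntfs ]; then
	ntfs-3g /dev/$1 /mnt/$1 -o $MOPT 2>/dev/null || rmdir /mnt/$1
    else
	mount -n /dev/$1 /mnt/$1 -o $MOPT 2>/dev/null || { modprobe $fs 2>/dev/null && mount -n /dev/$1 /mnt/$1 -o $MOPT || rmdir /mnt/$1; }
    fi
fi }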

I need suggestions :Bravo:

Additional information that may help to understand my situation:
  • /dev/sda - a plain hard disk with Porteus on it (this disk is not a member of the RAID)
  • /dev/sdb1 and /dev/sdc1 - the RAID members
  • /dev/md0p1 - a partition on the RAID 1 array assembled from /dev/sdb1 and /dev/sdc1
  • /dev/md0 - this entry is not needed in /etc/fstab [DONE]
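
For completeness, the topology can be cross-checked with the md tools (commands only):

Code: Select all

cat /proc/mdstat                      # array state and resync progress
mdadm --detail /dev/md0               # array-level view
mdadm --examine /dev/sdb1 /dev/sdc1   # per-member superblocks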

Code: Select all

# cat /tmp/devices 
/dev/sda: UUID="22cb345c-d364-4f00-a8ff-cac6117c2a09" TYPE="ext2"
/dev/sdb1: UUID="deab012d-a49a-bbfd-6779-400ff3c13331" UUID_SUB="0610e2b7-9646-baaa-0374-1a7d1f049a6b" LABEL="porteus.example.net:0" TYPE="linux_raid_member" PARTUUID="b28fe85b-01"
/dev/sdc1: UUID="deab012d-a49a-bbfd-6779-400ff3c13331" UUID_SUB="a6c173ef-3f5b-8c0b-6b0e-b29e37cf0468" LABEL="porteus.example.net:0" TYPE="linux_raid_member" PARTUUID="bf04882c-01"
/dev/md0p1: UUID="07b72d9a-b258-4702-b184-ea9d1e561936" TYPE="ext4" PARTUUID="16d0e273-01"
/dev/md0: PTUUID="16d0e273" PTTYPE="dos"

# egrep -v '/dev/md|TYPE="sw"|TYPE="LVM"|TYPE=".*_raid_member"' /tmp/devices 2>/dev/null | cut -d: -f1 | cut -d/ -f3 | sort -u
sda

# blkid
/dev/sda: UUID="22cb345c-d364-4f00-a8ff-cac6117c2a09" TYPE="ext2"
/dev/sdb1: UUID="deab012d-a49a-bbfd-6779-400ff3c13331" UUID_SUB="0610e2b7-9646-baaa-0374-1a7d1f049a6b" LABEL="porteus.example.net:0" TYPE="linux_raid_member" PARTUUID="b28fe85b-01"
/dev/sdc1: UUID="deab012d-a49a-bbfd-6779-400ff3c13331" UUID_SUB="a6c173ef-3f5b-8c0b-6b0e-b29e37cf0468" LABEL="porteus.example.net:0" TYPE="linux_raid_member" PARTUUID="bf04882c-01"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/md0p1: UUID="07b72d9a-b258-4702-b184-ea9d1e561936" TYPE="ext4" PARTUUID="16d0e273-01"
/dev/md0: PTUUID="16d0e273" PTTYPE="dos"

# lsblk -e7
NAME        MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda           8:0    0    4G  0 disk  /mnt/sda
sdb           8:16   0    8G  0 disk  
└─sdb1        8:17   0    8G  0 part  
  └─md0       9:0    0    8G  0 raid1 
    └─md0p1 259:0    0    8G  0 part  /mnt/md0p1
sdc           8:32   0    8G  0 disk  
└─sdc1        8:33   0    8G  0 part  
  └─md0       9:0    0    8G  0 raid1 
    └─md0p1 259:0    0    8G  0 part  /mnt/md0p1
sr0          11:0    1 1024M  0 rom   

# blkid /dev/sda | egrep -o ' TYPE=[^ ]+' | cut -d'"' -f2
ext2
Quoting the EXTLINUX wiki: if you have multiple disks in a software RAID configuration, the preferred way to boot is:

Create a separate RAID-1 partition for /boot.

Note that the Linux RAID-1 driver can span as many disks as you wish.

Install the MBR on *each disk*, and mark the RAID-1 partition as "active".

Run "

Code: Select all

extlinux --raid --install /boot
" to install EXTLINUX.

This will install it on all the drives in the RAID-1 set, which means you can boot any combination of drives in any order.
https://wiki.syslinux.org/wiki/index.php?title=EXTLINUX
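
Applied to the array in this thread, that recipe might look like the sketch below (hedged: the mbr.bin path is the usual syslinux package location, and /boot is assumed to live on the filesystem mounted at /mnt/md0p1):

Code: Select all

# install EXTLINUX into the syslinux directory on the RAID-1 filesystem:
extlinux --raid --install /mnt/md0p1/boot/syslinux

# put the syslinux MBR on *each* disk (mbr.bin is at most 440 bytes):
dd if=/usr/share/syslinux/mbr.bin of=/dev/sdb bs=440 count=1
dd if=/usr/share/syslinux/mbr.bin of=/dev/sdc bs=440 count=1

# mark partition 1 active on both disks:
parted /dev/sdb set 1 boot on
parted /dev/sdc set 1 boot on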
