Semi-automated partitioning for ZFS pool creation on GPT labels

#!/bin/sh
# zhandle.sh: semi-automated partitioning for ZFS pool creation.
# Vladimir A Smirnov, 2014. kkatarn@sleepgate.ru
# OS: FreeBSD.
# Usage: uncomment some lines, then execute.
# This script is UNPORTABLE. The reasons for its unportability are as follows:
# - there is no 'labelclear' command in ZoL (a rough workaround is sketched
#   right after this comment block);
# - GPT labels live in different places (on linux: /dev/disk/by-partlabel);
# - the partitioning tool on linux is GNU parted; for Illumos I have no idea;
# - the disk naming schemes on linux and Illumos are completely different;
# - the methods for extracting the disk list and serial numbers differ;
# - for ZoL, ashift can be specified directly; on FreeBSD, 'nop's must be
#   created instead;
# - as far as I know, neither linux nor Illumos can be booted directly from a
#   zfs pool on GPT partitions with a small boot loader embedded in another
#   GPT partition; they both require grub installed in an MBR partition. At
#   the moment, only FreeBSD and commercial Solaris can be booted this way.
# Thus, just boot FreeBSD LiveCD/fixit (ipmi will help), cd /tmp and do the job.
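# A rough sketch of a ZoL label wipe (assumption: /dev/sdX is the target
# disk; ZFS keeps two 256K vdev labels at the start of the device and two
# more at the end, and all four must be zeroed):
# dd if=/dev/zero of=/dev/sdX bs=256k count=2
# SECTORS=`blockdev --getsz /dev/sdX`
# dd if=/dev/zero of=/dev/sdX bs=256k count=2 seek=$((SECTORS / 512 - 2))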
# Start with a fresh device list:
rm -f devlist
# I had disks da0...da5.
# On FreeBSD, the /dev/passXXX devices could also be used, but that naming
# complicates extracting serial numbers from dmesg (they can easily be
# extracted with smartctl, but there is no smartctl on the FreeBSD
# LiveCD/fixit).
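# A hypothetical smartctl alternative for environments that do have
# smartmontools (da0 stands for any member disk):
# smartctl -i /dev/da0 | awk '/Serial Number/ {print $3}' | tail -c 5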
for f in `seq 0 5`; do
# Erase ZFS labels, if there are any:
# zpool labelclear /dev/da$f
# Erase the old partitioning, if there is any:
# gpart destroy /dev/da$f
# Create fresh GPT partitioning:
# gpart create -s gpt /dev/da$f
# Add FreeBSD boot partitions:
# gpart add -t freebsd-boot -a 2M -s 64k /dev/da$f
# Extract the disk serial numbers (the last four characters, in fact):
SERIAL=`dmesg | grep ^da$f | grep Serial | awk '{print $4}' | tail -c 5`
# My HDDs were Hitachi Deskstars:
PREFIX=hds
# Add the ZFS partitions, GPT-labelled with the serial numbers:
# gpart add -t freebsd-zfs -a 2M -l ${PREFIX}${SERIAL} /dev/da$f
# Write the pmbrs and the FreeBSD bootcode:
# gpart bootcode -i 1 -b /boot/pmbr -p /boot/gptzfsboot /dev/da$f
# Prepare the device list for pool creation. For ashift = 9 this is enough:
# echo -n "/dev/gpt/${PREFIX}${SERIAL} " >> devlist
# For ashift = 12, nops must be created on FreeBSD (gnop can only run once
# the labelled partition above exists):
# gnop create -S 4096 /dev/gpt/${PREFIX}${SERIAL}
echo -n "/dev/gpt/${PREFIX}${SERIAL}.nop " >> devlist
# Display the resulting partitioning:
# gpart show -l da$f
# This line is for removing the nops after 'zpool create' (Stage 3):
# gnop destroy /dev/gpt/${PREFIX}${SERIAL}.nop
done
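# A quick sanity check before Stage 2 (the serial is hypothetical: a dmesg
# line like "da0: Serial Number PK1334P5G1234A" yields SERIAL=234A, hence
# the GPT label hds234A):
# cat devlist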
# Stage 1: use this script to create the gpt partitions and labels,
# to write the bootcode and to build the device list.
# Stage 2: issue:
# zpool create -d [-m where-to-mount] mynewpool your-conf `cat devlist`
# For the six disks above, the obvious your-conf is raidz2.
# Do not forget the '-d' flag! Otherwise all features get enabled and the
# pool becomes almost unportable. This is probably not what you want.
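# A concrete Stage 2 example (the /mnt/newroot mountpoint is an assumption;
# pick your own):
# zpool create -d -m /mnt/newroot mynewpool raidz2 `cat devlist`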
# Beware: if you use where-to-mount != / (you have to if / is mounted ro),
# you will later have to fix the mountpoints of all datasets with
# zfs set mountpoint=blahblahblah
# The 'altroot' pool property can also be used for this purpose.
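# For example (the paths are assumptions):
# zfs set mountpoint=/ mynewpool/ROOT/mybsd
# ...or set altroot once, at creation or import time:
# zpool create -d -o altroot=/mnt/newroot mynewpool raidz2 `cat devlist`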
# Stage 3: export the pool and use this script to remove the nops. Then
# re-import the pool with the -d /dev/gpt -o cachefile=/tmp/zpool.cache
# flags. Done.
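# A concrete Stage 3 sequence:
# zpool export mynewpool
# ...re-run this script with only the 'gnop destroy' line uncommented...
# zpool import -d /dev/gpt -o cachefile=/tmp/zpool.cache mynewpool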
# Probably the next steps will be:
# zfs set atime=off mynewpool
# zpool set feature@lz4_compress=enabled mynewpool
# zpool set feature@async_destroy=enabled mynewpool
# (feature@... are pool properties, hence 'zpool set' rather than 'zfs set'.)
# Now you can issue zfs set compression=lz4 for new ZFS datasets.
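# For example (the dataset name is hypothetical):
# zfs create -o compression=lz4 mynewpool/data
# zfs get compression mynewpool/data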
# FreeBSD installation:
# zfs create mynewpool/ROOT
# zfs create mynewpool/ROOT/mybsd
# Install the FreeBSD tree to /mountpoint/ROOT/mybsd (unpack/copy/whatever;
# one possible way is sketched just below).
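# A minimal unpack sketch (assumption: the release distribution sets
# base.txz and kernel.txz have already been fetched into the current
# directory):
# tar -xpf base.txz -C /mountpoint/ROOT/mybsd
# tar -xpf kernel.txz -C /mountpoint/ROOT/mybsd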
# Then:
# zpool set bootfs=mynewpool/ROOT/mybsd mynewpool
# cp /tmp/zpool.cache /mountpoint/ROOT/mybsd/boot/zfs/
# echo opensolaris_load=\"YES\" > /mountpoint/ROOT/mybsd/boot/loader.conf
# echo zfs_load=\"YES\" >> /mountpoint/ROOT/mybsd/boot/loader.conf
# echo vfs.root.mountfrom=\"zfs:mynewpool/ROOT/mybsd\" >> /mountpoint/ROOT/mybsd/boot/loader.conf
# echo mynewpool/ROOT/mybsd / zfs rw 0 0 > /mountpoint/ROOT/mybsd/etc/fstab
# FreeBSD installed. You can boot from any disk of the pool (all pool members
# must be visible, i.e. they must be initialized by the controller's BIOS).