
Wednesday, May 4, 2016

Building a Ubuntu Server - Part 4 - Multi-part Series

After struggling over the weekend with my new Ubuntu Server (Xenial Xerus), I came up with the solution below to satisfy my setup requirements.

First off, let me describe what I was attempting to do. The system has 3 hot-swap drives, each 500 GB. It came pre-installed, but I wanted to start from scratch: Xenial Xerus had just been released, and the system from ZaReason shipped with the previous release, Wily Werewolf.

For the setup I wanted a RAID 1 (mirror) of the base system with a spare, plus LVM and encryption. All of this is available via the installer.

So my first instinct was to load the installer, flatten the drives, and install from USB. No luck: GPT woes, a failure to install GRUB, and so on.

I tried many times. If I removed LVM and encryption and just went with RAID, it worked fine; the same setup without encryption also yielded a working system. It turns out the installer lets you encrypt the entire partition and then place the /boot partition inside the encrypted drive.

I now realize that while the installer allowed this, it was never going to work. The /boot partition must remain unencrypted so that GRUB can be installed and can load the kernel at boot.

Now that I knew this, I was able to plan things out with the required EFI and /boot partitions separate from my encrypted RAID/LVM partition for /.

First, since the drives had existing partitions, I needed to clear them out. From a Live CD, I opened a terminal and ran the following to remove the existing partition tables. This step shouldn't be necessary if you are starting with brand-new, clean drives.
sudo sgdisk -Z /dev/sda
sudo sgdisk -Z /dev/sdb
sudo sgdisk -Z /dev/sdc
After resetting the tables, I created my partitions on the first drive, also from the Live CD. This made it easier to set up the drives ahead of time and let the installer see those partitions later.
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sda 
sudo sgdisk -n 2:0:+512M -t 2:fd00 -c 2:"Linux RAID" /dev/sda
sudo sgdisk -n 3:0:+64G -t 3:fd00 -c 3:"Linux RAID" /dev/sda
The above will create 3 partitions:
  • 512MB EFI
  • 512MB RAID
  • 64GB RAID
Next, I copied the partition structure over to the second drive (mirror) and third drive (spare). The commands below also randomize the disk and partition GUIDs on each copy.
sudo sgdisk /dev/sda -R /dev/sdb -G
sudo sgdisk /dev/sda -R /dev/sdc -G
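To double-check the clones before moving on, you can compare the three tables; the sizes and type codes should match across disks, while the GUIDs will differ (which is what the -G flag is for):

```shell
# Print each partition table for a side-by-side sanity check
for disk in /dev/sda /dev/sdb /dev/sdc; do
    sudo sgdisk -p "$disk"
done
```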
The first partition of each drive, sd[abc]1, will host the EFI mount point. This cannot be RAIDed, so it will need to be cloned (dd) to the other disks and added to the boot chain (see further down).

The second partition will be my /boot partition: a plain RAID 1 mirror on sd[ab]2 with sdc2 as a spare. No encryption and no LVM.

The third and final partition will hold the rest of the system: LVM on top of an encrypted RAID 1 array on sd[ab]3, with sdc3 as a spare.
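For reference, the layout above corresponds roughly to these manual commands. This is a sketch only: the installer performs the equivalent steps itself, and the md device, volume group, and logical volume names here are my assumptions.

```shell
# /boot mirror: sda2 + sdb2 active, sdc2 as hot spare
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    --spare-devices=1 /dev/sda2 /dev/sdb2 /dev/sdc2
# System mirror: sda3 + sdb3 active, sdc3 as hot spare
sudo mdadm --create /dev/md1 --level=1 --raid-devices=2 \
    --spare-devices=1 /dev/sda3 /dev/sdb3 /dev/sdc3
# Encrypt the system array, then open it
sudo cryptsetup luksFormat /dev/md1
sudo cryptsetup open /dev/md1 md1_crypt
# LVM on the opened encrypted device: one VG, swap + root
sudo pvcreate /dev/mapper/md1_crypt
sudo vgcreate vg0 /dev/mapper/md1_crypt
sudo lvcreate -L 17G -n swap vg0
sudo lvcreate -L 25G -n root vg0
```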

Now we need to prepare the EFI partition to ensure the installer can see and use it.
sudo mkfs.fat -F 32 /dev/sda1
mkdir /tmp/sda1
sudo mount /dev/sda1 /tmp/sda1
sudo mkdir /tmp/sda1/EFI
sudo umount /dev/sda1
At this point I was able to safely hop back into the server installer and configure the partitions.

I chose manual partitioning so I could select the EFI partition on each drive and mark it as an ESP, build both RAID 1 sets (sd[ab]2 + sdc2 (spare) and sd[ab]3 + sdc3 (spare)), create the encrypted volume on the 64 GB RAID 1 array (built from partition 3 of each drive), then add LVM on top of the encrypted volume as one volume group with two logical volumes: 17 GB for swap and 25 GB for /.

With this setup, the installer was finally happy with me, and I was able to complete my install. All that was left was to mirror the EFI partition to the other drives and update the boot chain.
To do this, I rebooted into my new server, logged in to my admin account, and executed the following:
sudo dd if=/dev/sda1 of=/dev/sdb1
sudo dd if=/dev/sda1 of=/dev/sdc1
sudo efibootmgr -c -g -d /dev/sdb -p 1 -L "Ubuntu (Mirror)" \
    -l '\EFI\Ubuntu\grubx64.efi'
sudo efibootmgr -c -g -d /dev/sdc -p 1 -L "Ubuntu (Spare)" \
    -l '\EFI\Ubuntu\grubx64.efi'
All that was left was to test the drives: rebuilding the array after a simulated failure, and so on. But I'll leave that for a future discussion.

Now, off to secure my server, configure my UPS, etc.

Reference article: how to install ubuntu 14.04 64 bit with a dual boot raid 1 partition on an uefi

Building a Ubuntu Server - Part 3
Building a Ubuntu Server - Part 5

Tuesday, April 26, 2016

Building a Ubuntu Server - Part 2 - Multi-part Series

When starting out to build my server, I took some time to do some research on the different technologies I wanted to tinker with.

Here are some links to important references used in my research and decision making on what technologies to use and implement on my server.

My first stop was a review of the most current LTS Server Guide for Ubuntu. It covers most of the bases on what to setup and where to get more information.

Ubuntu Server Guide (LTS)

What I didn't get was whether I should go with RAID, RAID + LVM, or something else. I guess to many this would seem a matter of choice, depending greatly on how "available" you needed the system to be, whether you were doing backups, and how those backups would be done.

For me, I specifically wanted a server that could stay up even if a drive failed. This led me down the path of RAID. But that only addressed my concerns about a drive failure, not about backups: using just RAID meant I needed to take the server down to do a true full backup. This led me to think about LVM and doing live snapshots. OK, so this was better and kept my system up and running.
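As a sketch of how the RAID + LVM snapshot approach would look (assuming a volume group named vg0 with a root logical volume; all names and sizes here are illustrative, not from my actual setup):

```shell
# Snapshot the root LV; 5G of copy-on-write space absorbs changes during backup
sudo lvcreate -s -L 5G -n root_snap /dev/vg0/root
# Image the frozen snapshot while the live system keeps running
sudo dd if=/dev/vg0/root_snap bs=4M | gzip > /backup/root-$(date +%F).img.gz
# Drop the snapshot once the backup is done
sudo lvremove -y /dev/vg0/root_snap
```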

Then I came across ZFS, which is now part of the new Xenial LTS release. It does what RAID and LVM do separately, in one neat package. Here are some links on ZFS and how to implement the filesystem, do backups, etc.
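As a rough sketch of the ZFS equivalent (assuming the zfsutils-linux package from Xenial; the pool, dataset, and device names are my own placeholders):

```shell
# Mirrored pool with a hot spare, mirroring the RAID 1 + spare idea above
sudo zpool create tank mirror /dev/sdb /dev/sdc spare /dev/sdd
# A dataset, a point-in-time snapshot, and a backup stream sent from it
sudo zfs create tank/data
sudo zfs snapshot tank/data@backup-2016-04-26
sudo zfs send tank/data@backup-2016-04-26 | gzip > /backup/data.zfs.gz
```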


Now that I had my filesystem and storage figured out, my next plan was to install a base system on top of this and virtualize everything else. I was quite familiar with KVM and Virt-Manager, so that's presently what I am reviewing and evaluating. I know LXC/LXD and Juju are possibilities as well, and I may yet end up deploying some "services" using that technology, but for mission-critical services like DNS, web, and email, I will likely stick with KVM.

Community Guide - KVM/Installation
Community Guide - KVM/VirtManager
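As an example of the kind of KVM guest creation I have in mind, here is a virt-install sketch; the guest name, ISO path, and sizes are placeholders, not a tested recipe:

```shell
# Create a small KVM guest for a dedicated service (e.g. DNS)
sudo virt-install \
    --name dns01 \
    --ram 1024 \
    --vcpus 1 \
    --disk size=10 \
    --cdrom /var/lib/libvirt/images/ubuntu-16.04-server-amd64.iso \
    --os-variant ubuntu16.04 \
    --network network=default \
    --graphics none
```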

My server arrives this Friday, so I may post up an "unboxing video". I am super excited to get my shiny new server from ZaReason.

Building a Ubuntu Server - Part 1
Building a Ubuntu Server - Part 3

Regards,
Rod

Building a Ubuntu Server - Part 1 - Multi-part Series

I've been out of the development game for a while now. Last post was circa 2011, and here we are 5 years later.

So, to get myself back into things, I am starting by building a new Ubuntu development server and catching up on all the new shiny bits which have been developed since my hiatus.

First off, I wanted a reasonable system: decent RAM and hot-swap storage. I thought about building my own and sourcing all the parts, but really didn't have the time or energy to review everything for compatibility, etc. So I took the easy route and sourced a system from ZaReason.

Here is what I eventually ended up buying:

Breeze Server 560

CPU: i7-6700 3.4-4 GHz 4-core, 8 threads
RAM: 16 GB DDR4-2133
Storage: 3x500 GB 7200rpm HD
Networking: 2x Gigabit Ethernet ports
Networking Wireless: Intel® Wireless AC Dual-Band+Bluetooth
Power Supply: 250W

It's a fairly decent system for what I needed and with some room to grow in RAM and Storage.

Over the next few posts I will be detailing my setup and documenting the choices and my reasons. Mostly for my own benefit, but perhaps others may find this useful.

Building a Ubuntu Server - Part 2

Regards,
Rod