My Rambling Thoughts

Bitrate rule-of-thumb

A decent 720p video encoded using H.264 uses 4 Mbps. That's 500 kB/s, or 30 MB/min, or 1.8 GB/hour.

If the video is mostly static scenes, we can halve the bitrate: a low-motion 720p video can look decent at 2 Mbps. On the other end, 6 Mbps (1.5x) is "good enough" for most purposes.

The trend now is to use CRF (Constant Rate Factor): the user chooses the encoding quality, not the bitrate. The bitrate then depends on the video content and is only known after the file is encoded.
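
For example, a CRF encode with ffmpeg and x264 looks something like this (23 is the x264 default; lower means higher quality and a bigger file):

ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium -c:a copy output.mp4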

A 1080p video has 2.25x the pixels of a 720p video. To have the same video quality, it needs 9 Mbps. That is 1.125 MB/s, or 67.5 MB/min, or 4.05 GB/hour.

A 480p video has 0.333x the pixels of a 720p video. To have the same video quality, it needs just 1.333 Mbps. That is 166.7 kB/s, or 10 MB/min, or 600 MB/hour.

This gives a quick way to estimate a video's bitrate. If the file is too small, say by half, we wonder if the quality is acceptable. If it's too big, say double, we wonder if it's overkill.
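
The arithmetic is easy to check with bc: bitrate scales with pixel count, and size per hour is Mbps x 3600 s / 8 bits:

echo "4 * (1920*1080) / (1280*720)" | bc -l    # 1080p bitrate: 9 Mbps
echo "9 * 3600 / 8 / 1000" | bc -l             # 9 Mbps: 4.05 GB/hour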

File size per hour quick guide:

Res    Resolution    Small     OK        Big
480p   640 x 480     0.3 GB    0.6 GB    0.9 GB
720p   1280 x 720    0.9 GB    1.8 GB    2.7 GB
1080p  1920 x 1080   2.03 GB   4.05 GB   6.08 GB

Just to take note, the maximum Blu-ray video bitrate is 40 Mbps, or 5 MB/s, or 300 MB/min, or 18 GB/hour.

Updated: added 480p.

My last Transformers comic for a while

IDW Collection Vol 5

Taking advantage of a visit to Ngee Ann City, I dropped by Kinokuniya and bought The Transformers: The IDW Collection Volume Five (S$75.33) and Last Stand of the Wreckers (S$43.94). Yes, they are pretty expensive.

At this point, I'm just missing the current continuity. But I'm going to stop here for a while. It's not that I've outgrown Transformers, but the new stories are not that different. Robots, grand schemes, battles, doomsday.

I have enough Transformers comics to keep me occupied. :-D

The battle of the AC adapters

Asus 1215 AC Adapters

I bought my Asus 1215N netbook from Amazon, so it came with a US-style 2-pin AC adapter. I need to use a 2-pin to 3-pin adapter.

As is my usual practice, I bought a spare AC adapter. However, I didn't manage to buy the exact same one. It was a case of not doing my research. I assumed the SLS folks would know the correct adapter, but the Asus 1215N was too new for them.

In the end, I just got an AC adapter meant for other Asus netbooks for S$40. The connector fits, but it is slightly too long. Well, at least it works.

Just one year and four months later, it stopped working. That was the first time I'd heard of an AC adapter failing. Oh well, another trip down to SLS. This time, I made sure I had the model numbers: my original AC adapter is the EXA0901XH; the one I bought was the EXA081XA. I'd prefer to get the first one, but if I can't find it, the second one is still fine.

I went back to the original shop (not intentionally; it was in a good spot). This time, the salesperson quoted me S$65 for the same AC adapter I had bought the last time! I passed.

The second shop quoted me S$35 and would throw in a free power cable. However, it was the EXA081XA model, and it didn't have the LED light. Hmm, a variant? I hesitated. The salesperson claimed they supplied the entire SLS, so there was no point looking further. He was obviously lying. I decided to look some more.

The third shop quoted me just S$23! But the connector didn't look quite the same. Then I noticed the AC adapter was a "compatible", not an original. The salesperson advised me to bring my netbook down to confirm the fit. The price was attractive, but I decided to pass.

The fourth shop had the EXA0901XH model. Jackpot! It was not exactly the same as my original AC adapter, as it had a green LED. But it had the correct connector; that's more important.

The AC adapter cost S$30, but they were willing to knock off S$5 if I took just a one-month warranty instead of three. Do AC adapters fail? Nah, never heard of it.

Mission accomplished.

My new green HD

Barracuda vs Caviar HD

S$185 for 2 TB WD Caviar Green. (vs S$179 for a 2 TB Seagate HD.)

HD prices are still high due to the Thailand flood. I think it'll take another six months before prices slowly subside. (I was told a 2 TB HD was close to S$100 before the flood. :-O)

The WD Caviar Green spins at a mere 5400 RPM. If I had known that, I would have bought it two years ago, and the HD might not have crashed. I wanted a 5400 RPM drive instead of a standard 7200 RPM drive, but I thought they were extinct due to the drive for speed. For my near zero-traffic 24/7 server, power usage and reliability trump performance.

As an aside, WD refused to disclose the Caviar Green's RPM when it first came out, because it feared consumers would prefer its competitors' 7200 RPM HDs. But the Caviar Green was as fast as the 7200 RPM drives; areal density matters too.

But that's not why I didn't buy the Caviar Green two years ago. I just thought Seagate was more reliable than WD, that's all. The price difference was less than S$10.

Accessing an external drive

An external drive works as expected if we are logged into the local console. Plug the drive in and the drive icon appears on the desktop. As simple as ABC.

But if we do it over VNC, we get a "Not authorized to mount" error message instead. By default, Ubuntu only allows the user at the local console to mount an external drive. This is frustrating because I access the server via VNC.

We need to set the desktop policy to allow it. Edit /usr/share/polkit-1/actions/org.freedesktop.udisks.policy.

Change the "Mount a device" and "Detach a drive" permissions:

<allow_inactive>yes</allow_inactive>

The rest of the permissions can be set to yes as well, but they are not crucial.
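
For reference, the mount action should end up looking something like this (I believe the action id is org.freedesktop.udisks.filesystem-mount, but check the actual file, as it varies between udisks versions):

<action id="org.freedesktop.udisks.filesystem-mount">
  ...
  <defaults>
    <allow_any>no</allow_any>
    <allow_inactive>yes</allow_inactive>
    <allow_active>yes</allow_active>
  </defaults>
</action>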

I don't know how to make the new policy active except to restart the server.

The old-fashion way

First, find out which disk it is:

sudo fdisk -l

The 2nd HD is usually /dev/sdb. Its first partition will be named /dev/sdb1.

Create a mount point. We just need to do this once:

sudo mkdir -m 777 /media/MyExtDrive

The traditional Unix mount point is /mnt, but Ubuntu uses /media (where removable media is supposed to go).

Then mount it:

sudo mount /dev/sdb1 /media/MyExtDrive -t ntfs

After we are done, we need to unmount it:

sudo umount /media/MyExtDrive

Finally, put the drive to sleep (optional):

sudo udisks --detach /dev/sdb

Configuring SVN server

The SVN server is not installed by default. Install it:

sudo apt-get install subversion libapache2-svn

The recommendation is to create an svn user:

sudo adduser --no-create-home svn

Then create a place to put the project files:

sudo mkdir /home/svn
sudo mkdir /home/svn/myproject
sudo svnadmin create /home/svn/myproject

Change the ownership and permissions:

sudo chown -R svn:svn /home/svn
sudo chmod -R g+rws /home/svn

Edit /etc/apache2/mods-available/dav_svn.conf:

<Location /svn>
    DAV svn
    SVNParentPath /home/svn
    SVNListParentPath On
    SSLRequireSSL
    AuthType Basic
    AuthName "Subversion Repository"
    AuthUserFile /etc/subversion/passwd
    Require valid-user
</Location>

We set SSLRequireSSL so that SVN requests can only be made over HTTPS.

Note that this is a private SVN repo. A valid user/pw is needed to check out the files.

A public SVN repo, which allows anonymous reads but requires a valid user for writes, would have the following block instead:

<LimitExcept GET PROPFIND OPTIONS REPORT>
    Require valid-user
</LimitExcept>

Add an SVN user:

sudo htpasswd -c /etc/subversion/passwd user
sudo chmod 644 /etc/subversion/passwd

Put Apache in the svn group so that it can update the files:

sudo usermod -G svn -a www-data

Restart Apache.

If we use the SVN client on the server itself, it will keep prompting us whether to save the password. Edit ~/.subversion/servers:

store-plaintext-passwords = no

With this, we will simply be prompted for the password every time, without the save-password nag.
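
To sanity-check the whole setup from a client (my-domain-name stands in for the server's real hostname):

svn checkout https://my-domain-name/svn/myproject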

Configuring phpMyAdmin

phpMyAdmin allows us to access the MySQL database over the Internet. This is a very powerful feature, so we need to make sure it is secured.

First, install it:

sudo apt-get install phpmyadmin

We always want to access phpMyAdmin over SSL. Edit /etc/phpmyadmin/config.inc.php:

$cfg['ForceSSL'] = true;

We also want to protect it using HTTP authentication. Add a new user who is allowed to access it:

sudo htpasswd -c /etc/apache2/htpasswd.pma pma
sudo chmod 644 /etc/apache2/htpasswd.pma

With this as the first line of defense, failed attempts get logged in Apache's log. MySQL itself does not log failed login attempts.

Edit /etc/apache2/conf.d/phpmyadmin.conf.

First, we want to use an obscure URL. Every little bit helps.

Alias /my-private-pma-path /usr/share/phpmyadmin

Then, we enable HTTP authentication:

<Directory /usr/share/phpmyadmin>
    ...
    AuthType Basic
    AuthName phpMyAdmin
    AuthUserFile /etc/apache2/htpasswd.pma
    Require user pma
    ...
</Directory>

Restart Apache.
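
As a quick test, an unauthenticated request to the obscure path should be refused with 401 Unauthorized (-k because the cert is self-signed):

curl -kI https://localhost/my-private-pma-path/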

Enabling SSL for Apache

It just takes a few more steps to enable SSL for Apache.

Edit /etc/apache2/sites-available/default-ssl and change these two lines:

SSLCertificateFile    /etc/apache2/ssl/server.crt
SSLCertificateKeyFile /etc/apache2/ssl/server.key

Make the website available over SSL:

ln -s /etc/apache2/sites-available/default-ssl /etc/apache2/sites-enabled/000-default-ssl

Ensure this line exists in /etc/apache2/ports.conf:

NameVirtualHost *:443

Finally, enable the SSL module:

a2enmod ssl

Restart Apache.
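
A quick way to confirm SSL is up is to ask OpenSSL to connect and dump the handshake and cert details:

openssl s_client -connect localhost:443 < /dev/null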

Generating self-signed cert

The first step to serving the website over HTTPS is to generate an SSL cert.

I prefer to generate a self-signed CA cert, then generate the SSL cert from that. It is only an extra step.

First, we create a CA cert that is valid for 10 years:

openssl genrsa -des3 -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

For organization name, I use something CA. For common name, I use my-domain-name CA.

Now we can generate the server cert:

openssl genrsa -des3 -out server.key 4096
openssl req -new -key server.key -out server.csr

I set the common name to my-domain-name. It must be different from the CA cert's common name! There is no error at this step if they are the same, but browsers will reject the cert as invalid. (And they won't say why.)

Then we sign the CSR with our CA:

openssl x509 -req -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
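
We can verify the signed cert against our CA before deploying it:

openssl verify -CAfile ca.crt server.crt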

We will be prompted for the key's passphrase whenever the cert is used. This would prevent Apache from starting without user intervention, so we create an insecure key that doesn't have a passphrase:

openssl rsa -in server.key -out server.key.insecure
mv server.key server.key.secure
mv server.key.insecure server.key

Copy to Apache's dir:

sudo mkdir -m 755 /etc/apache2/ssl

sudo cp server.key /etc/apache2/ssl
sudo cp server.crt /etc/apache2/ssl

(Make sure they have the proper permissions so that Apache can read them.)

As a precaution, we are going to lock down the original keys and certs:

chmod 600 ca.* server.*
sudo chown root:root ca.* server.*

Configuring MySQL

MySQL is installed and configured properly out-of-the-box. However, there are two things we may want to do.

Move the data files

The MySQL data files are in /var/lib/mysql by default. However, my /var partition is just 2 GB. It'll fill up in no time. We want to use a bigger partition.

First, stop MySQL:

service mysql stop

Move the whole folder:

mv /var/lib/mysql /home

Edit /etc/mysql/my.cnf and change the data directory:

datadir = /home/mysql

This is only half done. AppArmor will prevent MySQL from accessing the new data files, so edit /etc/apparmor.d/usr.sbin.mysqld:

  ...
  /home/mysql/ r,
  /home/mysql/** rwk,
  ...

Restart both services:

service apparmor restart
service mysql start
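
To confirm MySQL picked up the new location:

mysql -u root -p -e "SHOW VARIABLES LIKE 'datadir';"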

Rename the root user

It is a good idea to change the default MySQL root user (no relation to the Unix root user). Connect to the MySQL server and issue these commands:

use mysql;
update user set user="new-root-id" where user="root";
flush privileges;
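
From then on, log in with the new name; the old root id should no longer work:

mysql -u new-root-id -p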

This is much more important when we expose MySQL through phpMyAdmin. The root user will be under constant assault.

Configuring PHP

PHP is already installed and configured properly; we just need to harden it and add missing extensions.

Edit /etc/php5/apache2/php.ini:

allow_url_fopen = Off

This disallows opening files over the Internet, as in,

fopen("http://some-site.com/some-file");

It is very convenient, and I think I wrote some code that does this. Oops, I need to rewrite it.

By default, PHP will add the HTTP X-Powered-By response header to let the client know that the page is processed by PHP. Disable it:

expose_php = Off
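
Once Apache is restarted, a quick way to check that the header is gone (grep should print nothing):

curl -sI http://localhost/ | grep -i x-powered-by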

Get the GD library. I use it to resize images on the fly:

sudo apt-get install php5-gd

And restart Apache.

Configuring Apache

My server is not just a file server, but is also a staging server for my website.

It is very easy to get Apache up and running. It almost works out of the box.

Apache will warn that it is unable to determine the server's fully qualified domain name every time it starts up. To get rid of it, edit /etc/apache2/httpd.conf:

ServerName 127.0.0.1

By default, Apache will leak the Apache and OS version via the HTTP Server response header and error pages. Turn them off by editing /etc/apache2/conf.d/security:

ServerTokens Prod
ServerSignature Off

Edit /etc/apache2/sites-available/default to configure where to serve files from:

DocumentRoot /home/web

<Directory /home/web/>
    Options -Indexes +FollowSymLinks +MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>

Note that Indexes is turned off. This disables the built-in directory listing if index.* is absent.

I set AllowOverride to All to allow .htaccess processing. We need to enable the rewrite module too:

sudo a2enmod rewrite

We need to keep default-ssl in sync too, as that config file is used when Apache is accessed using HTTPS.

Restart Apache:

sudo service apache2 restart

Configuring Samba

Samba is used to share the Unix file system over the network with a Windows PC.

Edit /etc/samba/smb.conf.

Soft links are disabled by default, as they allow the user to circumvent the shared directories. However, I need this feature.

To allow following soft links:

[global]
follow symlinks = yes
wide links = yes
unix extensions = no

Enable home directories with the right permissions:

[homes]
   comment = Home Directories
   browseable = no
   read only = no
   create mask = 0600
   directory mask = 0700

A sample share directory:

[repo]
    path = /home/repo
    browsable = yes
    guest ok = yes
    read only = yes

I prefer to keep the shares read-only for all users. To write to it, I create a soft link in the home directory.
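
Before restarting, testparm will catch any syntax errors in smb.conf:

testparm -s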

Restart:

sudo service smbd restart

Configuring VNC server

The VNC server is not installed by default. Install it:

sudo apt-get install vnc4server

I want to run the VNC server on startup as I'm running a headless server. I don't want to SSH in just to start the VNC server manually.

Add a new file /etc/init.d/vncserver:

#!/bin/sh -e
### BEGIN INIT INFO
# Provides:          vncserver
# Required-Start:    networking
# Default-Start:     3 4 5
# Default-Stop:      0 6
### END INIT INFO


PATH="$PATH:/usr/X11R6/bin/"

# The user that will run VNC
export USER="my-user-id"

# The display that VNC will use
DISPLAY="1"

# Color depth (between 8 and 32)
DEPTH="16"

# The Desktop geometry to use
GEO="1200x700"
GEO2="1400x1100"
GEO3="800x600"

# The name that the VNC Desktop will have
NAME="my-vnc-server"

OPTIONS="-localhost -name ${NAME} -depth ${DEPTH} -geometry ${GEO} -geometry ${GEO2} -geometry ${GEO3} :${DISPLAY}"

. /lib/lsb/init-functions

case "$1" in
start)
 log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
 su ${USER} -c "/usr/bin/vncserver ${OPTIONS}"
 ;;

stop)
 log_action_begin_msg "Stoping vncserver for user '${USER}' on localhost:${DISPLAY}"
 su ${USER} -c "/usr/bin/vncserver -kill :${DISPLAY}"
 ;;

restart)
 $0 stop
 $0 start
 ;;
esac

exit 0

There are two things to note about this script. First, it only allows local connections. The VNC protocol is unencrypted, so we absolutely don't want a VNC client to connect to the VNC server directly, especially over the Internet. We will do it through an SSH tunnel.

Second, we can specify multiple video resolutions. Use xrandr to switch resolutions in the VNC client.
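
The workflow is something like this (my-server stands in for the real hostname; display :1 listens on port 5901, i.e. 5900 + the display number):

ssh -L 5901:localhost:5901 my-user-id@my-server
# then point the VNC client at localhost:5901;
# inside the session, list and switch resolutions:
xrandr
xrandr -s 1400x1100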

Make the script executable and add it to the startup scripts:

sudo chmod +x /etc/init.d/vncserver
sudo update-rc.d vncserver defaults

The next step is to run (and kill) the VNC server once. This creates the initial configuration files:

vncserver
vncserver -kill :1

Edit ~/.vnc/xstartup:

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources

unset SESSION_MANAGER
exec sh /etc/X11/xinit/xinitrc

We can now start the VNC server for real:

sudo service vncserver start

Configuring screen

screen is a "terminal multiplexer". In GUI terms, it adds tabs — virtual terminal sessions — to a shell terminal session.

Why use screen when we can use xterm and other GUI terminals that have the tab functionality?

Very simple: screen allows us to "detach" a session and resume it at another place and time. Just like VNC, but much faster over (modern-day) slow connections (say, 30 kB/s).
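
For the record, the basic workflow looks like this (detach with the command key, then d):

screen -S work      # start a session named "work"
screen -ls          # list sessions
screen -r work      # reattach, possibly from another machine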

Edit /etc/screenrc.

The startup screen just wastes time, so turn it off:

startup_message off

I use Ctrl-A to go to the start of the command line, so I have to map screen's command key to something else. Ctrl-G is a good choice; normally it just sounds the terminal bell.

escape ^Gg

I like to see the windows and date/time all the time:

hardstatus alwayslastline "%?%{wk}%-Lw%?%{Yk}%n*%f %t%?(%u)%?%?%{wk}%+Lw%? %{gk}%=%c %{yk}%d%M"

screen has a macro language. The bad thing about Unix is that different commands have their own macro language. It takes time to master each of them. But they allow you to do amazing things, so it's worth the time to learn.

The status line can be broken into four parts:

%?%{wk}%-Lw%?           List of windows before the active window
%{Yk}%n*%f %t%?(%u)%?   The active window (in bright yellow)
%?%{wk}%+Lw%?           List of windows after the active window
%{gk}%=%c %{yk}%d%M     Time (green) and date (yellow), right-aligned

The window-list and time/date escapes are the main parts. The rest are modifiers and colors.

Cron and auth.log

It is important to at least glance through /var/log/auth.log from time to time to observe how the server is under attack.

However, it is filled with this useless junk:

Mar 11 08:39:01 <hostname> CRON[3004]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 11 08:39:01 <hostname> CRON[3004]: pam_unix(cron:session): session closed for user root

You can guess it's from the cron jobs.

There are a couple of ways to remove it, but I think the following is the correct way.

Edit /etc/pam.d/common-session-noninteractive. Add this line:

session [success=1 default=ignore] pam_succeed_if.so service in cron quiet use_uid

before

session required      pam_unix.so

(In plain English: if the service is cron, skip the next module, i.e. pam_unix, so cron sessions are not logged.)

Winbind and auth.log

This shows an example of a failed login attempt. Obviously, someone was trying to get into my server.

Mar 12 01:45:33 <hostname> sshd[7254]: User bin from 182.236.164.11 not allowed because not listed in AllowUsers
Mar 12 01:45:33 <hostname> sshd[7254]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=182.236.164.11  user=bin
Mar 12 01:45:33 <hostname> sshd[7254]: pam_winbind(sshd:auth): getting password (0x00000388)
Mar 12 01:45:33 <hostname> sshd[7254]: pam_winbind(sshd:auth): pam_get_item returned a password
Mar 12 01:45:33 <hostname> sshd[7254]: pam_winbind(sshd:auth): request wbcLogonUser failed: WBC_ERR_AUTH_ERROR, PAM error: PAM_USER_UNKNOWN (10), NTSTATUS: NT_STATUS_NO_SUCH_USER, Error message was: No such user
Mar 12 01:45:34 <hostname> sshd[7254]: Failed password for invalid user bin from 182.236.164.11 port 60984 ssh2

Lines 3-5 indicate that we are doing a Winbind lookup. I have no Active Directory, so they shouldn't be there.

Run pam-auth-update and turn off Winbind lookup.

This simplifies /var/log/auth.log.

Hardening OpenSSH

One of the first tasks after installing Ubuntu is to harden the SSH server. After all, it is one of the three open ports I allow from the Internet, the others being port 80 (HTTP) and port 443 (HTTPS).

Edit /etc/ssh/sshd_config.

It is recommended to lower the permitted login time to prevent DoS. Someone recommended just 20s. I've seen it timing out on very slow connections. Thus, I use 40s.

LoginGraceTime 40

Unlike others, I allow the root user to log on. Just in case.

PermitRootLogin yes

But only from the local network:

AllowUsers <my_user_id> *@192.168.*
DenyUsers root@!192.168.*
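
Since a typo in sshd_config can lock us out, test it before restarting:

sudo sshd -t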

Restart the service:

service ssh restart

And we are done!

User accounts

Enabling root user

Ubuntu disables the root user by default. The recommended way is to use sudo for all administrator-related tasks.

I don't mind using sudo for one-off commands, but sometimes I need to issue a series of commands. I prefer to use the root a/c for that.

To enable the root a/c, we just need to set a valid password for it:

sudo passwd root

Actually, it is possible to switch to the root a/c by doing this:

sudo -i

I'm just used to using su.

Restricting su

su can be run by anyone. It is possible to restrict it to just admin users by changing its group and permission:

sudo chown :adm /bin/su
sudo chmod o-rx /bin/su

I didn't do it, though.

Configuring user a/c

The default umask allows newly created files to be read by everyone. They should only be accessible by the current user.

Edit /etc/profile:

umask 077
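
After logging in again, new files should be private to the owner:

touch test-file
ls -l test-file     # shows -rw-------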

Correct my /home directory too:

chmod 700 /home/<my_user_id>
chmod -R g-rwx,o-rwx /home/<my_user_id>

I like to make users the primary group:

usermod -g users <my_user_id>

And for all my files too. This makes it easy to share files with other normal users, but not with anyone else.

chown -R :users /home/<my_user_id>

Ubuntu server out-of-box

Setup Q&A

The only thing to take note of is to assign a static IP address instead of using DHCP.

Initial packages

I selected these initial packages:

  • LAMP server
  • OpenSSH server
  • Samba server

I also quickly installed quota and joe.

joe is a WordStar-like editor that I've used since I graduated from pico. I never took to standard Unix editors such as vi, vim and emacs.

(It is still important to know some vi commands because it is the editor that is guaranteed to be present.)

The initial installation used 1.1 GB of space.

Updates

I turned off automatic updates, so I have to issue these commands:

sudo apt-get update
sudo apt-get dist-upgrade

They have to be issued from time to time. Somehow, I prefer to do updates using the command line rather than the GUI.

Installing the desktop

It is possible to use just the command line, but the GUI is nice to have. Some things are just easier done with a GUI.

sudo apt-get install --no-install-recommends ubuntu-desktop

With --no-install-recommends, just the basic desktop is installed. Apps such as the Office suite and Firefox are not installed.

The indicator applet is not installed properly with the basic desktop (broken dependency?). We don't get any status indicators and can't even log out or shut down from the GUI. However, we can fix it:

sudo apt-get install indicator-applet-complete
sudo apt-get install indicator-session

Ubuntu now uses 1.6 GB of space. It also boots up 1-2 seconds slower. (On an Atom CPU, what do you expect?)

Power management

Sometimes, I want to put the server to sleep or hibernate to save power, yet preserve the current state.

First, the package:

sudo apt-get install powermanagement-interface

The command to hibernate:

pmi action hibernate

I seldom hibernate the server, though. Most of the time, I just shut it down when I don't need it.

Hibernation would have been a lot more useful if I could wake the server over the Internet using Wake-on-LAN (WOL). I could do it within the local network, but I have not succeeded over the Internet. I still haven't figured out the cause.
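
For the record, waking it within the LAN works with the wakeonlan package (the MAC address below is a placeholder for the server NIC's):

sudo apt-get install wakeonlan
wakeonlan 00:11:22:33:44:55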

Maybe next time I should buy a router that supports WOL.

File server disk allocation 2012

FS        Size      Notes
/         11 GB     I don't install many apps anyway.
/var      2 GB      In case some apps store lots of data.
/var/log  1 GB      To keep logs from overflowing /var and causing a DoS.
                    Flags: noatime, nodev, noexec, nosuid
/var/tmp  1 GB      Similar to /tmp, but preserved across power cycles.
                    Flags: nodev, noexec, nosuid, grpquota, usrquota
/tmp      2 GB      A reasonably sized tmp partition is unavoidable.
                    Flags: nodev, noexec, nosuid, grpquota, usrquota
swap      2 GB      Same as RAM size. For hibernation.
/home     The rest  Flags: noatime, grpquota, usrquota

Mounting /tmp as noexec requires adding a new file /etc/apt/apt.conf.d/50remount:

DPkg::Pre-Install-Pkgs {"mount -o remount,exec /tmp";};
DPkg::Post-Invoke {"mount -o remount /tmp";};

Otherwise apt-get won't work properly.
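
To confirm the flags took effect after a reboot:

mount | grep ' /tmp '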

Installing Ubuntu; not exactly a piece of cake

I tried to install Ubuntu 11.10 Server, but it hung right after I selected the "Install" option.

Ubuntu 11.10 Desktop works, though. However, I don't want the excess baggage that comes with it. One thing I'm quite sure of after trying the Unity desktop: I will still be using Gnome for a while! The Unity desktop is so simplified that I can't find anything at all!

After a few hours of trying, I finally decided to install Ubuntu 10.04 Server. It's tried and proven.

Unfortunately, it wasn't smooth sailing either. It complained that it couldn't find the CD-ROM drive. I don't remember encountering this error before. Luckily, the Internet came to the rescue.

Press Alt-F2 at the CD-ROM drive screen. Enter:

mkdir /mnt/usb /mnt/iso
mount -t vfat /dev/sdb1 /mnt/usb
mount -t iso9660 -o loop /mnt/usb/ubuntu-10.04.4-server-i386.iso /mnt/iso

Then press Alt-F1 to return to the installation dialog and answer:

  • Load CD-ROM driver from removable media? No
  • Manually select CD-ROM module and device? Yes
  • Module needed for accessing the CD-ROM: none
  • Device file for accessing the CD-ROM: /dev/loop0

And the installation continued smoothly after that.

After installation, I wanted to upgrade to Ubuntu 11.10, but I upgraded to the 12.04 beta instead! Downgrading is not possible, so I had to reinstall.

And I just realized that 12.04 is scheduled for release in late April, so there was no point upgrading to 11.10 in the first place. I'll stick with 10.04 and upgrade to 12.04 then.

It has to happen

My server's HD crashed at last, five months after I first knew it was dying. The server hung midway, then refused to boot up. At this point, I presume all the data is lost.

The good news is, I've backed up most of the stuff. The bad news is,

  • I did not back up everything
  • Some backups are out of date, though thankfully not by much

It'll take some time to get the server up and running. Then, I'll need to find out if anything is salvageable from the old HD.