How to use Git with Subversion

If you know Git, then you know its productivity benefits over other source code control systems like Subversion. If you work on a project that uses Git entirely, good for you. However, some software projects still use Subversion. If you end up working on one of them, you don’t have to use Subversion for your day-to-day work. You can use Git for almost all of your source code control tasks, and those tasks remain transparent to the Subversion repository and to the other people on the project. All they’ll ever see is that you are checking out and committing back to the Subversion repository like any other svn client, when in reality you are doing it from Git. What they will notice is how fast you perform merges, rebases, and cherry-picks, because Git makes all of that easy compared to Subversion. They will also be amazed by how clean your commits are.

To use Git with Subversion, you use the git-svn-clone command to check out the Subversion repository. Once cloned, everything is in Git and you can work on your code entirely using Git. To commit changes back to Subversion, you use the git-svn-dcommit command.

Below is a sample workflow on how to use Git with SVN.

1. Clone the Subversion repository

mkdir -p ~/work/
cd ~/work/
git svn clone https://mysvnrepo.com/project/trunk project.git

The svn URL is the same URL you use when you run the svn checkout command. The Subversion URL does not need to point to the trunk; it can point to a branch.
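For example, to work against a Subversion branch instead of the trunk, you could clone it the same way (the branch name here is hypothetical):

git svn clone https://mysvnrepo.com/project/branches/some_feature project-some_feature.git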

2. Create a branch off the master

cd ~/work/project.git
git checkout -b feature_branch master

By default, the cloned repository will have a master branch. Although you can work on the master branch directly, it is preferable to create a branch off it and use that branch to make your changes. Think of it as a feature branch. The reason for this is that you can keep the master branch in sync with the svn repository while you have a separate branch for your work. This allows you to regularly pull updates from the svn repository through the master branch and then merge, or pick only some commits, into your feature branch. When you are ready to commit your feature branch to the svn repository, merge or rebase the feature branch onto the master branch and commit it to Subversion. The master branch serves as a gateway or staging area between the Subversion repository and your Git branches.

3. Make code changes to the feature_branch

(edit files)
git add file1 file2 ...
git commit -m "trying this optionA..."

(test application)
(edit more files)
git add file1 file2 file3 ...
git commit -m "optionA didn't work. trying optionB..."
(test application)

While working on the feature branch, you can make as many commits as you want. In fact, this is preferable, since it saves your work, and you can push the commits to a remote Git repository to serve as a backup. You don’t have to rush to commit your changes back to Subversion just because you want to save them; a remote Git repository can serve as another place to keep your work.
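For example, here is a minimal sketch of backing up your work to a remote Git repository (the remote name "backup" and its URL are hypothetical):

# Add a backup remote and push the feature branch to it
git remote add backup git@mybackupserver.com:project-backup.git
git push backup feature_branch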

The commits you make can also serve as points to revert to. Don’t worry about making many commits, even silly ones, because when you finally commit your changes back to the svn repository, you will have the opportunity to clean them up, including the commit messages, before they reach Subversion.

4. Merge changes from Subversion

If you need to get new updates from Subversion into your feature branch, run the following steps:

# Switch to the master branch
git checkout master

# Pull changes from Subversion
git svn rebase

# Switch back to the feature branch and run rebase off master
git checkout feature_branch
git rebase -i master

The rebase command above replays the commits of your feature branch on top of the updated master branch. The result is as if you had just created the feature branch from the latest Subversion updates and then made your commits afterward. Even though your commits were actually made earlier than the new updates in Subversion, the rebase rewrites the history so that they appear to come after them. When you run git log to view the history, you will see the commits in the feature branch as if they were made after the updates from Subversion.

The -i option performs an interactive rebase, which opens an editor session (vi by default on many systems). It shows the list of commits in your feature branch. Below is a sample output.

pick eab79aa Made change #1
pick 97bd219 Made change #2
pick 5f7122f Made change #3
pick 33a8b6c Made change #4

# Rebase 59ef334..43910eb onto 59ef334 (4 command(s))
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

You may accept the commits as they are, or combine them into fewer commits or even a single commit.

Replace the word “pick” with “squash” to meld a commit into the previous one. The commit entry disappears from the history, but its changes are merged into the previous commit. You can use the letter “s” as a shortcut for squash.

pick eab79aa Made change #1
squash 97bd219 Made change #2
squash 5f7122f Made change #3
squash 33a8b6c Made change #4

Repeat the rebase process every time you need to pull updates from the Subversion repository.

When doing a rebase, you can squash commits right away to clean up your history, or reword the commit messages into what you actually want to appear in Subversion. You don’t have to, though, because you can also do this later when you are ready to commit to Subversion. I usually reserve it for the final rebase, unless the number of commits has become too large to manage and I see no need to revert to or reuse any of them.

One thing to be aware of with rebase is that it changes the history of your feature branch. If you are pushing your feature branch to a remote Git repository, you will need to use the --force option. This is not a problem if you are the only one using the branch; it becomes an issue when the feature branch is shared by more than one Git user who are all pushing updates.
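For example, assuming the hypothetical "backup" remote from earlier, pushing the rebased branch would look like this (--force-with-lease is a safer variant of --force):

git push --force-with-lease backup feature_branch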

5. Merge the feature branch to the master and commit to Subversion

When you are ready to commit your changes back to Subversion, merge your feature branch to master and commit to svn.

git checkout master
git svn rebase
git merge feature_branch
git svn dcommit

6. Continue using the feature branch

You can continue to use your feature branch even after committing it to Subversion. The same process applies: update your master branch from Subversion and rebase your feature branch on top of it. This ensures that your feature branch stays up to date with the Subversion repository and that any new commits you make go on top, just like on a freshly created branch off the Subversion repository.

# make sure you're on master branch
git checkout master

# Update master branch with latest from Subversion
git svn rebase

# switch to the feature branch and rebase off master
git checkout feature_branch
git rebase -i master

Merging or cherry-picking changes between Subversion branches

Another commonly used workflow in source code control is merging between branches. This can be a full merge, or a partial merge where you select only certain changes (commits) from one branch and apply them to another. This workflow is common when you need to port some fixes from one branch to another without merging everything. All of this can be done easily using Git.

To do this in Git, do a git-svn-clone of each of the branches you need to merge or cherry-pick commits between. This creates a Git repository for each Subversion branch. In each Git repository, create a branch off its master and push it to a common Git repository. Once the branches are in a common repository, you can query the commits of each one and perform cherry-picks or merges. Below is a step-by-step example.

1. Clone each Subversion branch

# Clone Subversion branchA
cd ~/work
git svn clone https://mysvnrepo.com/svn/branches/branchA branchA.git

# Clone Subversion branchB
git svn clone https://mysvnrepo.com/svn/branches/branchB branchB.git

# create a Git branch for each
cd ~/work/branchA.git
git checkout -b branchA master

cd ~/work/branchB.git
git checkout -b branchB master

You now have two directories, each one a Git repository for Subversion branches A and B, respectively.

2. Create a common Git repository

cd ~/work
git init common.git

3. Set common.git as a remote repository for branchA.git and branchB.git

cd ~/work/branchA.git
git remote add origin ~/work/common.git

cd ~/work/branchB.git
git remote add origin ~/work/common.git

4. Push branchA and branchB into the common repository

cd ~/work/branchA.git
git push origin branchA

cd ~/work/branchB.git
git push origin branchB

5. Go to the common repository

From the common repository, if you need to merge changes from branchB into branchA, check out branchA and merge or cherry-pick commits from branchB.

cd ~/work/common.git
git checkout branchA

# view commits in branchB to find which one you wish to merge
git log branchB
git cherry-pick e065b0230d7b368cfb534fc1818d55adc8911e2c

The cherry-pick command above applies that commit from branchB onto branchA.

6. Pull the merged changes in branchA back from the common repository

Go to branchA.git repository and pull the merged changes from common.git

# Go to branchA.git and run git-fetch
cd ~/work/branchA.git
git fetch
git checkout branchA
git pull origin branchA

7. Commit merged changes in branchA into Subversion

To commit back to Subversion, first update the master branch with the latest from the Subversion repository by running git-svn-rebase.

cd ~/work/branchA.git
git checkout master
git svn rebase

Switch to branchA and perform a rebase off the master so that your commits will go on top of the latest SVN commits.

cd ~/work/branchA.git
git checkout branchA
git rebase -i master

Finally, merge branchA into master and commit to svn.

git checkout master
git merge branchA
git svn dcommit

It looks complicated, with a lot of steps. But I assure you that once you understand the concept, you will realize how elegantly and powerfully Git handles merges and cherry-picks.

You don’t need a separate clone for each Subversion branch. You can run a single clone that covers both branches, and git-svn-dcommit will commit back to whichever Subversion branch your current Git branch tracks. This works well too and could be simpler, since you don’t have to maintain multiple repositories. But I prefer separate repositories for each svn branch so that I can maintain isolation between svn branches while working in Git.
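As a sketch, a single clone that covers the trunk and all branches could look like this, assuming the repository uses the standard trunk/branches/tags layout:

cd ~/work
git svn clone --stdlayout https://mysvnrepo.com/svn project-all.git

# the Subversion branches show up as remote-tracking branches
cd project-all.git
git branch -r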

How to Remotely Access the Initial Ramdisk of an Encrypted Linux System

If you use cryptsetup to encrypt your Linux root file system, the default setup requires console access to enter the password and boot up the system. If your system is remote or doesn’t have console access, you will need to find a way to get remote access to the console.

If you cannot install a remote console, or your system doesn’t allow one (for example, instances in Amazon Web Services (AWS)), you can still obtain remote access to enter the password by installing an ssh server in the initial ramdisk. Once you can log in by ssh to the initrd, you can supply the password to decrypt and boot up the system.

Below is a step-by-step guide on how to install an ssh server in the initial ramdisk, log in to it, and enter the password to boot up your encrypted Linux system. The guide was written for Ubuntu and Debian systems and uses DHCP to establish network access. The idea can easily be applied to other distributions like CentOS or RHEL.

The guide shows both a manual process and a script to automate the steps. It is recommended to try the manual steps first so that you will have an understanding of the underlying process. This can help in troubleshooting the script in case you encounter problems.

You will need access to GitHub to download the scripts.

Install dropbear

apt-get install dropbear

Get your public ssh-key and copy it to this file on your remote system:

/etc/initramfs-tools/root/.ssh/authorized_keys

If you don’t have an ssh-key pair, you can create one using the ssh-keygen command:

ssh-keygen

This command will generate two files:

~/.ssh/id_rsa
~/.ssh/id_rsa.pub

The id_rsa.pub file is the public key, which you will need to copy to the remote system’s authorized_keys file as described above.

The id_rsa file is the private key, which you will need on the client that will log in by ssh.

If you are using an AWS instance, copy the public-key of your instance into this authorized_keys file.
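As a minimal sketch, getting the public key into place on the remote system might look like this (assuming you have already copied id_rsa.pub over to it, or generated the key pair there):

# On the remote system: create the directory and append the public key
mkdir -p /etc/initramfs-tools/root/.ssh
cat ~/.ssh/id_rsa.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys
chmod 600 /etc/initramfs-tools/root/.ssh/authorized_keys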

Update the initial ramdisk image

Run this command to update the initial ramdisk file:

update-initramfs -u

Boot up the system

When the system boots up, connect to the system using ssh and the private key you created above.

ssh -i ~/.ssh/id_rsa root@ip-address

You don’t actually need to supply the -i option for that key file, since it is the default. It is shown here just to illustrate that you need to use the matching key file to log in by ssh.

Enter password remotely

Once you have gained access to the system by ssh, execute the following steps in sequence:

  1. Kill the cryptroot process. Sample run:
    # pidof cryptroot
    161
    
    # kill -9 161
    
  2. For Ubuntu systems, wait for the cryptsetup process to terminate. You can monitor this process by running this command:
    # pidof cryptsetup
    

    This could take about a minute to complete.

  3. For Debian systems, wait for the /bin/sh -i process to come up. This is also applicable for Ubuntu.
    Example:

    # ps | grep '/bin/sh -i' | grep -v grep
      214 root      4620 S    /bin/sh -i
    

    This could take about a minute to complete. Take note of the ID of this process when it comes up.

  4. Once cryptsetup has terminated (for Ubuntu only) or the /bin/sh -i process has come up, run the cryptroot command:
    /scripts/local-top/cryptroot
    

    This command will prompt for the password to decrypt the system.

    Sample run:

    # /scripts/local-top/cryptroot
    
    /scripts/local-top/cryptroot: line 1: modprobe: not found
    Unlocking the disk /dev/disk/by-uuid/b1aa9e88-c344-4922-a0a6-d3f7c52f947a (sda5_crypt)
    Enter passphrase:
    
       Reading all physical volumes.  This may take a while...
      Found volume group "ubuntu-vg" using metadata type lvm2
      2 logical volume(s) in volume group "ubuntu-vg" now active
    cryptsetup: sda5_crypt set up successfully
    
  5. After entering the password, kill the /bin/sh -i process you saw earlier. Important note: at this point, your terminal is kind of screwed up and you won’t be able to see what you are typing. So, carefully type the kill command using the /bin/sh -i process ID you got earlier and hit the enter key.
    # kill -9 214
    
  6. Type ctrl-d to disconnect from your ssh session. At this point, the system should be on its way up. If you have access to the console, you can observe how long it takes for your system to boot. You can use this to estimate how long to wait after entering the password remotely before you start accessing the system.
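If you prefer to script these steps yourself, a rough sketch might look like the following. It assumes the BusyBox tools shown above (pidof, ps, grep) are available in the initramfs and that the process names match the sample runs; the next section describes ready-made scripts that do the same thing.

#!/bin/sh
# Step 1: kill the cryptroot process
CR_PID=$(pidof cryptroot)
[ -n "$CR_PID" ] && kill -9 $CR_PID

# Steps 2-3: wait for the /bin/sh -i process to come up
while true; do
    set -- $(ps | grep '/bin/sh -i' | grep -v grep)
    SH_PID=$1
    [ -n "$SH_PID" ] && break
    sleep 1
done

# Step 4: prompt for the passphrase and unlock the disk
/scripts/local-top/cryptroot

# Step 5: kill the interactive shell so booting can continue
kill -9 "$SH_PID"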

Automating the remote password process

The remote password process above seems like a lot of work just to enter the password and boot up the system. If this doesn’t sound fun to you, I wrote a couple of scripts to automate the steps.

  1. Download these two scripts from GitHub:
  2. Save the files at this location:
    /root/cryptroot.sh
    /etc/initramfs-tools/hooks/my_initrd_hook
    

    Make them executable:

    chmod +x /root/cryptroot.sh
    chmod +x /etc/initramfs-tools/hooks/my_initrd_hook
    
  3. Update the initial ramdisk image
    update-initramfs -u
    
  4. Boot up the system and login remotely by ssh
  5. At the shell prompt, run this command and enter the password when you get the prompt:
     /root/cryptroot.sh
    

    Sample run:

    # /root/cryptroot.sh
    
    cryptroot has terminated.
    Waiting for /bin/sh -i to start...
      277 root      4620 S    /bin/sh -i
    sh -i has started.
    
    Unlocking the disk /dev/disk/by-uuid/b1aa9e88-c344-4922-a0a6-d3f7c52f947a (sda5_crypt)
    Enter passphrase:   
    
    Reading all physical volumes.  This may take a while...
      Found volume group "ubuntu-vg" using metadata type lvm2
      2 logical volume(s) in volume group "ubuntu-vg" now active
    cryptsetup: sda5_crypt set up successfully
    Terminating sh -i process...
    Press ctrl-d or type exit to disconnect from initrd dropbear.
    
    #
    

Show the History of CVS Commits Similar to Git or SVN

I wrote a simple script that will display the history of CVS commits similar to the way Git or SVN do. You can download the script from Github: https://github.com/alvinabad/cvs-utils/blob/master/cvs-history.py

Basic usage

cvs-history.py [-b BRANCH] module or files

For more options, see: https://github.com/alvinabad/cvs-utils#cvs-historypy

Background

If you use the “cvs history” command to display the history of commits in CVS, you will get an output that is not sorted chronologically.

Here’s a sample run:

$ cvs history -a -c hello_cvs/
M 2015-02-28 23:52 +0000 alice 1.6     hello1.py hello_cvs == ~/main/hello_cvs
M 2015-02-28 23:48 +0000 alice 1.3     hello2.py hello_cvs == ~/main/hello_cvs
M 2015-03-01 17:05 +0000 alice 1.3.2.1 hello2.py hello_cvs == ~/workspace/hello_cvs
A 2015-03-01 00:38 +0000 alice 1.1.2.1 hello3.py hello_cvs == ~/feature_branch/hello_cvs
M 2015-03-01 00:42 +0000 alice 1.1.2.2 hello3.py hello_cvs == ~/workspace/hello_cvs
M 2015-02-28 23:10 +0000 alvin 1.2     hello1.py hello_cvs == ~/src/hello_cvs
M 2015-02-28 23:13 +0000 alvin 1.3     hello1.py hello_cvs == ~/src/hello_cvs
M 2015-02-28 23:18 +0000 alvin 1.4     hello1.py hello_cvs == ~/src/hello_cvs
M 2015-02-28 23:23 +0000 alvin 1.5     hello1.py hello_cvs == ~/src/hello_cvs
M 2015-03-01 00:37 +0000 alvin 1.7     hello1.py hello_cvs == ~/src/hello_cvs
A 2015-02-28 23:20 +0000 alvin 1.1     hello2.py hello_cvs == ~/src/hello_cvs
M 2015-02-28 23:23 +0000 alvin 1.2     hello2.py hello_cvs == ~/src/hello_cvs
M 2015-03-01 00:44 +0000 alvin 1.4     hello2.py hello_cvs == ~/src/hello_cvs

As you can see from the output above, it is hard to figure out the order in which the changes happened. The output gets worse if you have many branches: it shows the history of commits in all branches, mixed together.
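As a quick workaround, you can pipe the output through sort to get a rough chronological view. This is only a sketch; it assumes the date and time always appear in the second and third columns of the output.

$ cvs history -a -c hello_cvs/ | sort -k2,3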

A Simple Option Parser in Bash

There is a builtin command in bash named getopts that can process the command-line options of a shell script. It works pretty well, except that it can only handle single-letter options like -a, -b, -c, and so on. It cannot handle long option names like --version, --verbose, and --force. For long option names there is an external command named getopt, but for me it is not an easy command to use.

Single-letter options are fine until you run out of letters or want a more user-friendly interface. For example, if you need both version and verbose options, which one should get -v? If you need --force and --file, which one should get -f? When this happens, you’re left with no choice but to use a different character for the other option, and most of the time you end up with a character that has nothing to do with the option it refers to.

So, I decided to just implement my own parsing. It can handle both short and long option names. Below is an example.

#!/bin/bash

while true
do
  case $1 in
    --version)  # get version
        VERSION=$2
        shift; shift
        ;;
    -v) # use verbose mode
        VERBOSE=true
        shift
        ;;
    -*)
        echo "Unknown option: $1"
        exit 1
        ;;
    *)
        break
        ;;
  esac
done

echo $VERSION
echo $VERBOSE
echo $*

The bash code above handles the options --version and -v. The idea is simple: loop through the command-line arguments using while and case. For each argument, check whether it is one of the expected options. If the option requires a parameter, which is usually the case for long option names, its value is in $2 (the next argument), since $1 contains the option name itself. Before proceeding to the next option, the script shifts twice so that $1 becomes the next option name. If the option does not require a parameter, it shifts only once. You can also handle long option names that do not require a parameter; just use a single shift.

Once the script encounters a non-option argument, it breaks out of the while loop and the remaining arguments are left in $*.
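For example, assuming the script above is saved as parse.sh and made executable, a sample run looks like this:

$ ./parse.sh --version 1.2 -v file1 file2
1.2
true
file1 file2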

Unexpected options are caught by the -* pattern (placed after all the expected options), which prints an error. If you prefer, you can ignore them instead, or simply pass them through with the rest of the arguments.

The code above is short and simple. It is also easy to read for anyone who wants to know how to run your script, since it is almost self-documenting. You don’t need to write elaborate usage documentation on what options to use.

One limitation of this parser is that all options must be specified before the non-option arguments. That’s a small price to pay for the ease of use it offers.

Setting up a Raspberry Pi

This little beauty is a lot of fun.

The Raspberry Pi (image linked from http://www.raspberrypi.org/faqs)

For only $35, you can set up a small Linux server. I’ve documented the steps below on how to set it up. This assumes a Mac OS X environment.

1. Download the Raspbian image.

The download page is located here: http://www.raspberrypi.org/downloads

Locate the latest wheezy-raspbian zip file and run wget to download:

$ wget http://files.velocix.com/c1410/images/raspbian/2013-02-09-wheezy-raspbian/2013-02-09-wheezy-raspbian.zip

$ unzip 2013-02-09-wheezy-raspbian.zip

Archive: 2013-02-09-wheezy-raspbian.zip
inflating: 2013-02-09-wheezy-raspbian.img

2. Load an SD card of at least 2GB in size into your Mac.

Identify the device the SD card is connected to using the diskutil command:

$ diskutil list
/dev/disk4
#: TYPE NAME SIZE IDENTIFIER
0: FDisk_partition_scheme *4.0 GB disk4
1: DOS_FAT_32 NO NAME 4.0 GB disk4s1

3. Unmount the sd card

$ diskutil unmountDisk /dev/disk4
Unmount of all volumes on disk4 was successful

4. Save the Raspbian image onto the SD card. Grab a coffee; this takes several minutes. In my case, it took almost 15 minutes.

$ sudo dd bs=1m if=2013-02-09-wheezy-raspbian.img of=/dev/disk4
Password:
1850+0 records in
1850+0 records out
1939865600 bytes transferred in 865.345934 secs (2241723 bytes/sec)

5. Unmount and eject the sd card

$ diskutil list /dev/disk4
/dev/disk4
#: TYPE NAME SIZE IDENTIFIER
0: *4.0 GB disk4

$ diskutil unmountDisk /dev/disk4
Unmount of all volumes on disk4 was successful

$ diskutil eject /dev/disk4
Disk /dev/disk4 ejected

6. Load the sd card into the Raspberry Pi and power it up.

I used a serial connection to the Raspberry Pi so that I could cut and paste the boot-up information below.

Debian GNU/Linux 7.0 raspberrypi ttyAMA0

raspberrypi login: pi
Password:
Linux raspberrypi 3.6.11+ #371 PREEMPT Thu Feb 7 16:31:35 GMT 2013 armv6l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

NOTICE: the software on this Raspberry Pi has not been fully configured. Please run 'sudo raspi-config'

7. Set initial configuration.

pi@raspberrypi:~$ sudo raspi-config

8. Change passwords

# sudo passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

# sudo passwd pi
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

9. Run update and upgrade

To run the update and upgrade commands below, you will need an Internet connection. Connect your Pi to an Ethernet network that has DHCP; the Pi is preconfigured to connect automatically and establish a network connection.

If you are using the Model A Raspberry Pi, which doesn’t have an Ethernet port, you will need a wifi USB adapter to obtain Internet connectivity. Follow step #11 below to set up wifi access.

$ sudo apt-get update
$ sudo apt-get upgrade

10. Configure time zone and reboot.

$ sudo dpkg-reconfigure tzdata
$ sudo reboot

11. Configure Wireless LAN

I recommend the Edimax EW-7811Un wireless USB adapter for the Raspberry Pi. This device is plug-and-play on the Raspberry Pi; you don’t need to download any drivers, since they are already included in the image. It also offers a very simple configuration. I’ve seen configurations out there for other adapters that are far too complex. As you’ll see below, you only need to update one file to get this working.

First, check if the raspberry pi has detected the device. The ifconfig command should show the device wlan0.

# ifconfig -a
eth0 Link encap:Ethernet HWaddr b8:27:eb:01:b6:8a
inet addr:10.0.99.19 Bcast:10.0.99.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:130 errors:0 dropped:0 overruns:0 frame:0
TX packets:95 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:12743 (12.4 KiB) TX bytes:14501 (14.1 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

wlan0 Link encap:Ethernet HWaddr 80:1f:02:9b:f0:3b
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

Update the /etc/network/interfaces file:

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

iface eth0 inet dhcp

auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp

# WPA type
wpa-ssid "MYSSID"
wpa-psk "secret"

### WEP type
#wireless-essid MYSSID
#wireless-key secret

iface default inet dhcp

Run ifdown/ifup or simply reboot for changes to take effect.

# ifdown wlan0
# ifup wlan0

That’s it. It’s just one file to update to enable wifi access.

Run iwconfig to check the status of your wifi connection.

# iwconfig wlan0
wlan0 IEEE 802.11bgn ESSID:"MYSSID" Nickname:""
Mode:Managed Frequency:2.462 GHz Access Point: 90:E6:BA:D3:C8:68
Bit Rate:150 Mb/s Sensitivity:0/0
Retry:off RTS thr:off Fragment thr:off
Encryption key:****-****-****-****-****-****-****-**** Security mode:open
Power Management:off
Link Quality=100/100 Signal level=73/100 Noise level=0/100
Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
Tx excessive retries:0 Invalid misc:0 Missed beacon:0

12. Prevent your wifi adapter from sleeping

Some wifi adapters are set to sleep when there is no activity. You may want to disable this feature if you want your wifi access to be on at all times. This would be useful if you use your pi as a headless server and you only connect to it through its wifi.

Update or create the file below (or its equivalent) to prevent your wifi device from sleeping. The filename may differ depending on your wifi adapter.

# cat /etc/modprobe.d/8192cu.conf
options 8192cu rtw_power_mgnt=0 rtw_enusbss=0 rtw_ips_mode=1

Some Debian and Ubuntu Quirks

EDITOR=nano

I’m not sure why Debian and Ubuntu decided to make the default EDITOR nano instead of vi. It is annoying that whenever I run the visudo command, the sudoers file opens in nano instead of vi. Isn’t that why it’s called visudo and not nanosudo? :)
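If you want vi back, here are a couple of common fixes (sketches; pick whichever suits you):

# per-user: put this in ~/.bashrc or ~/.profile
export EDITOR=vi

# system-wide: change the default editor alternative on Debian/Ubuntu
sudo update-alternatives --config editor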

Arrow keys in VI

The vi in Debian doesn’t allow the arrow keys in insert mode. Why? An easy fix is to run :set nocompatible in vi. But still, wouldn’t it be nice if this were the default, like in other distros (Ubuntu, Red Hat, and CentOS)? Vi in Mac OS X also allows the arrow keys in insert mode.
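To make the fix stick across sessions, one way (assuming the vi here is vim or vim-tiny reading ~/.vimrc) is:

echo "set nocompatible" >> ~/.vimrc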

How to specify an ssh-key file with the Git command

If you want to use a specific ssh-key file whenever you run the ssh command, one convenient way to do it is with the -i option.


ssh -i ~/.ssh/thatuserkey.pem thatuser@myserver.com

This is pretty neat. It’s simple, elegant, and highly intuitive. I want to do the same thing with the Git command like this:


git -i ~/.ssh/thatuserkey.pem clone thatuser@myserver.com:/git/repo.git

Unfortunately, there is no such -i option in the git command. Bummer.

I’ve looked around but I can’t find a solution like this. There are two options I can think of: 1) use GIT_SSH and 2) use a wrapper script.

Option 1: Use the GIT_SSH environment variable

The GIT_SSH environment variable will allow you to specify a key file with the Git command like this:


PKEY=~/.ssh/thatuserkey.pem git clone thatuser@myserver.com:/git/repo.git

where ~/.ssh/thatuserkey.pem is the keyfile you want to use.

To make this work, it needs some pre-configuration. The first step is to create a shell script that contains the following.

~/ssh-git.sh


#!/bin/sh
if [ -z "$PKEY" ]; then
    # if PKEY is not specified, run ssh using the default keyfile
    ssh "$@"
else
    ssh -i "$PKEY" "$@"
fi

The script needs to be executable so do a chmod +x on it.

The next step is to set the value of the GIT_SSH variable to the path of the script above. The variable needs to be exported to the shell environment.


export GIT_SSH=~/ssh-git.sh

Now, every time you run the git command, the keyfile you set in the PKEY variable is passed to the script that GIT_SSH points to, which allows Git to connect using that key file.


PKEY=~/.ssh/thatuserkey.pem git clone thatuser@myserver.com:/git/repo.git

From here on, every time you run the Git command, you can freely choose any key file you want by setting the PKEY variable.[1]

If you run the git command without the PKEY setting, the GIT_SSH script will still run, since it is exported to the shell environment. The script has a fail-safe: if no keyfile is set, it skips the -i option and runs ssh with the default keyfile.

Be careful when exporting PKEY to the shell environment, because GIT_SSH will use whatever value it is set to even if you don’t specify it with the git command. Exporting GIT_SSH itself has a similar problem: Git will always use it when it runs, so you need to be constantly aware that you have it set. You can always set GIT_SSH inline with the git command to avoid exporting it to the environment, at the expense of making the command longer.
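For example, setting GIT_SSH only for a single command, without exporting it, looks like this:

GIT_SSH=~/ssh-git.sh PKEY=~/.ssh/thatuserkey.pem git clone thatuser@myserver.com:/git/repo.git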

The PKEY-line usage works well, except that setting PKEY together with the git command is somewhat unconventional.[2]

If you find this unintuitive, there is another option.

Option 2: Use a wrapper script

The -i option with ssh is pretty neat and elegant. You supply the -i option to choose the key file you want to use. If you don’t use the option, ssh will fall back to use the default ssh-key file.

To use the -i option with the Git command, we need to write a wrapper script. The wrapper script will then allow us to set the usage we like and that is to mimic the -i option in ssh.

The usage will be something like this:


git.sh -i ~/.ssh/thatuserkey.pem clone thatuser@myserver.com:/git/repo.git

where git.sh is the wrapper script.

The only thing you need to do is create that script, put it in your PATH, and you’re all set.

To get the code, you can download it from here or cut and paste it from below.

git.sh


#!/bin/bash

# The MIT License (MIT)
# Copyright (c) 2013 Alvin Abad

if [ $# -eq 0 ]; then
    echo "Git wrapper script that can specify an ssh-key file
Usage:
    git.sh -i ssh-key-file git-command
    "
    exit 1
fi

# remove temporary file on exit
trap 'rm -f /tmp/.git_ssh.$$' 0

if [ "$1" = "-i" ]; then
    SSH_KEY=$2; shift; shift
    echo "ssh -i $SSH_KEY \$@" > /tmp/.git_ssh.$$
    chmod +x /tmp/.git_ssh.$$
    export GIT_SSH=/tmp/.git_ssh.$$
fi

# in case the git command is repeated
[ "$1" = "git" ] && shift

# Run the git command
git "$@"

The wrapper script fails gracefully: if you don’t specify the -i option, it runs git using your default key file.

This wrapper script uses the same principle as the GIT_SSH environment variable. But instead of setting it up manually in advance, the wrapper script sets it up on the fly every time it runs the actual git command.

Other options

There are other ways to use different ssh keys with the Git command. There is the $HOME/.ssh/config file, where you can map different keys to the hosts you want to connect to. But this method doesn’t allow you to choose an arbitrary key file at will when you run the git command; the keys need to be pre-defined in the config file.
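For example, a hypothetical ~/.ssh/config entry that maps a key to a host looks like this:

Host myserver.com
    User thatuser
    IdentityFile ~/.ssh/thatuserkey.pem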

You can also use ssh-agent to programmatically add the key you want to use. I wrote a wrapper script that uses ssh-agent to allow the -i option as well, but it turned out to be more complex than the GIT_SSH approach. I’ll probably post it later to show how it can be done that way.

With all the different methods available, none is necessarily better than the others. It all depends on the circumstances and, of course, your personal taste.

Alvin


[1] I prefer this kind of control in my workflow because I use different keys for different servers I use. I have a different key for my servers at work and different keys for my personal servers and public sites (like Github). It works the same with passwords. You don’t use the same password on your Facebook and bank accounts.


[2] Personally, I find this all right since I am used to this usage. I run a lot of scripts and make commands that require environment settings. But I don’t like the idea of exporting all of them to the shell environment and leaking them everywhere so I only specify them with the command.

How to Use LUKS to Encrypt a Disk Partition

You can use LUKS to encrypt a partition of a disk drive or USB stick. If you store sensitive information on portable drives, it is more compelling than ever to protect them with encryption, since they carry a high risk of getting lost or stolen.

LUKS/dm-crypt is a good choice for encrypting Linux devices. It is usually pre-installed in most Linux distros, and if not, it is easy to install using YUM or APT.
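For example, installing it looks like this (the package is named cryptsetup on both families of distros):

# Debian/Ubuntu
apt-get install cryptsetup

# RHEL/CentOS
yum install cryptsetup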

Here are six easy steps to encrypt a disk partition:

Step 1. Create the disk partition you wish to encrypt. For example, let’s say you have a USB drive and it’s connected to /dev/sdb. The partition you’d want to create would be /dev/sdb1.

# fdisk -l /dev/sdb
Disk /dev/sdb: 512 MB, 512483328 bytes
255 heads, 63 sectors/track, 62 cylinders, total 1000944 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63     1000943      500440+  83  Linux

Step 2. Encrypt the partition

# cryptsetup -q -y luksFormat /dev/sdb1
Enter LUKS passphrase: 
Verify passphrase: 

Step 3. Map a logical partition

# cryptsetup luksOpen /dev/sdb1 sdb1_crypt
Enter passphrase for /dev/sdb1:

This will create a device mapper:

# ls -al /dev/mapper/sdb1_crypt
brw-rw---- 1 root disk 253, 5 Sep 23 11:53 /dev/mapper/sdb1_crypt

Step 4. Format the encrypted partition

# mkfs.ext3 /dev/mapper/sdb1_crypt 
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
124928 inodes, 498392 blocks
24919 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
61 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done 

Step 5. Mount the encrypted partition:

# mkdir /mnt/sdb1
# mount /dev/mapper/sdb1_crypt /mnt/sdb1

Step 6. When done, unmount the logical partition and close (lock) the encrypted partition

# umount /mnt/sdb1
# cryptsetup luksClose sdb1_crypt

How to Recover a LUKS Encrypted Disk

This morning, my Ubuntu laptop wouldn’t boot up. My immediate reaction was shivers at the back of my neck and down the shoulders. I don’t understand why it felt that way. It’s like one of those situations when you suddenly discover you left your wallet, your laptop, or something valuable in a restaurant, and your mind starts racing, trying to figure out how to get back quickly while also thinking about what would happen, and what you would do, if you can’t find it anymore… And all those thoughts happen in just two or three seconds.

In this case, my laptop wouldn’t boot up, it uses LUKS full disk encryption, I had no idea how to recover it, I hadn’t done it before, I didn’t know if it could be recovered, and I’ve got tons of important data in there!

Fortunately, that only lasted a couple of seconds, and after that you go back to your sane state and do what most everyone else does: Google your way out. Thanks again to Google, it was easy to research how to recover a LUKS-encrypted disk.

I miss the old days when recovering a disk from a failed system was as simple as attaching the disk to another system, mounting it, and getting your data back. Life has changed. With most of our valuables in digital form nowadays, you need to apply the same security you use with physical valuables: you lock them up with a key. But instead of locking a door, you lock the data by encrypting it.

Below are the steps to recover a LUKS-encrypted disk. There is plenty of information like this online, but I thought I’d add another document to the hundreds already out there, in case one poor soul can only find this one.

Summary:

  1. Boot the system from a rescue disk or Live CD. If this is not possible, attach the encrypted disk to another Linux system
  2. Identify the device name of the encrypted disk
  3. Identify the encrypted partition in the disk
  4. Unlock the encrypted partition
  5. If the partition is using logical volumes, identify the volume group(s) used
  6. If the partition is using logical volumes, enable the volume group
  7. Mount the logical volume desired. If the partition is not using logical volumes, mount the mapped device directly.
  8. Unmount and lock the encrypted partition when done

Details:

Step 1. Remove the disk from the bad system and attach it to another Linux machine. If the machine still runs fine and the problem is just a corrupted OS or something similar, you can boot from a rescue or Live CD instead.

Step 2. Once the disk is attached to another system, identify the device name it was attached as. You can see this by looking at /proc/partitions. If you are booting from a rescue disk or Live CD, do the same; you’ll find it is even easier to identify the disk, since you won’t confuse it with the other disks of another system.

Below is a sample output. In this case the device you need to recover is /dev/sda with partitions sda1, sda2, and sda5.

# cat /proc/partitions
major minor #blocks name

7 0 731040 loop0
8 0 58605120 sda
8 1 248832 sda1
8 2 1 sda2
8 5 58353664 sda5

It is probably easier to use the fdisk command to get a much clearer view of the partition layout.

# fdisk -l /dev/sda

Disk /dev/sda: 60.0 GB, 60011642880 bytes
255 heads, 63 sectors/track, 7296 cylinders, total 117210240 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00089bcb

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048      499711      248832   83  Linux
/dev/sda2          501758   117209087    58353665    5  Extended
/dev/sda5          501760   117209087    58353664   83  Linux

Step 3. Next, find the partition that is encrypted. This is where the actual data resides. Run the cryptsetup command on each partition and find the one that says something like “LUKS header information…”

# cryptsetup -v luksDump /dev/sda1
Device /dev/sda1 is not a valid LUKS device.
Command failed with code 22: Device /dev/sda1 is not a valid LUKS device.

 

# cryptsetup -v luksDump /dev/sda2
Device /dev/sda2 is not a valid LUKS device.
Command failed with code 22: Device /dev/sda2 is not a valid LUKS device.

 

# cryptsetup -v luksDump /dev/sda5
LUKS header information for /dev/sda5

Version:       	1
Cipher name:   	aes
Cipher mode:   	xts-plain64
Hash spec:     	sha1
Payload offset:	4096
MK bits:       	512
MK digest:     	21 84 3e c0 3e 9e 23 1e 34 9b 39 05 8f b9 47 61 89 a6 2a 81
MK salt:       	fc ac 3d 4f 1e 3d d4 ce 66 6b d3 90 ba f4 79 a8
               	d9 c9 38 a0 c2 79 bc 47 71 c6 8f 49 23 46 f1 6b
MK iterations: 	22500
UUID:          	2c8d56ec-749f-4d95-ab39-4ea17edb4c01

Key Slot 0: ENABLED
	Iterations:         	90067
	Salt:               	e4 25 ae 7c 5d 62 81 5e ea 37 95 0f 59 7b c8 7f
	                      	13 4f bc 15 70 4e 82 e1 41 db 1d 4b 65 7a de 5c
	Key material offset:	8
	AF stripes:            	4000
Key Slot 1: DISABLED
Key Slot 2: DISABLED
Key Slot 3: DISABLED
Key Slot 4: DISABLED
Key Slot 5: DISABLED
Key Slot 6: DISABLED
Key Slot 7: DISABLED
Command successful.

From the sample output above, the partition we need to recover is /dev/sda5.

Step 4. Unlock the encrypted partition using the cryptsetup command. You will need the passphrase of the encrypted disk. I forgot to mention: you need this passphrase to recover the data from the disk. If you don’t know it or have forgotten it, then you’re out of luck. Unless you can guess the passphrase or brute-force it, no current technology or witchcraft on earth can help you unlock the encryption.

That’s the whole idea of encrypting a disk: in case your laptop gets stolen, lost, or left at a restaurant, no one but you, the person who knows the passphrase, can see your data.

# cryptsetup -v luksOpen /dev/sda5 sda5_crypt
Enter passphrase for /dev/sda5:
Key slot 0 unlocked.
Command successful.

Step 5. After unlocking the device, check if the partition is using LVM (Logical Volume Management). If your data resides in a logical volume, you’d need to identify those volumes so that you’d know what to mount. Run the lvdisplay command to see the volumes. Below is a sample output.

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu/root
  LV Name                root
  VG Name                ubuntu
  LV UUID                fy5LpG-HPX7-Spe1-C92m-nzPq-B3IB-eTjCFA
  LV Write Access        read/write
  LV Creation host, time ubuntu, XXXX
  LV Status              available
  # open                 0
  LV Size                53.61 GiB
  Current LE             13724
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:1

  --- Logical volume ---
  LV Path                /dev/ubuntu/swap_1
  LV Name                swap_1
  VG Name                ubuntu
  LV UUID                ZSwRl4-bdV1-kqJ3-V1lG-ndfZ-cdSh-P5X3kp
  LV Write Access        read/write
  LV Creation host, time ubuntu, XXXX
  LV Status              available
  # open                 0
  LV Size                1.99 GiB
  Current LE             509
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:2

Step 6. If the partition has logical volumes, enable the volume group using the vgchange command. From the sample above, the name of the volume group is ubuntu.

# vgchange -a y ubuntu
  2 logical volume(s) in volume group "ubuntu" now active

Step 7a. If the partition is using logical volumes, mount the desired logical volume. Using the sample above, we want to mount /dev/ubuntu/root.

# mkdir /tmp/disk
# mount /dev/ubuntu/root /tmp/disk

Step 7b. If the partition does not have logical volumes, mount the mapped device directly.

# mkdir /tmp/disk
# mount /dev/mapper/sda5_crypt /tmp/disk

That’s it. You can now view your data in /tmp/disk.

Step 8. When you’re done accessing the data, don’t forget to unmount and close (lock) the encrypted partition.

# umount /tmp/disk
# cryptsetup luksClose /dev/mapper/sda5_crypt

Include a JavaScript Source File into Google’s CodeRunner or OpenSocialDevApp

OpenSocialDevApp, or Google’s CodeRunner, is a great tool for testing code snippets that use the OpenSocial API. It allows you to easily test code by cutting and pasting your JavaScript into a textbox. You don’t need to load a gadget XML file onto the server just to test something, which can be a tedious process.

This works out really well when your code is small. However, once your code gets bigger, cutting and pasting starts to get painful. Wouldn’t it be nice if you could simply save your JavaScript code into a file, say mytestcode.js, and then paste a neat single line into the textbox like this?

<script type="text/javascript" 
    src="http://myserver.com/mytestcode.js"></script>

To test changes in mytestcode.js, all you need to do is re-execute CodeRunner every time you update the file. No more cutting and pasting.

However, this OpenSocial application only accepts raw JavaScript code, so the snippet above won’t work, since it is HTML.

To fix this problem, we need a way to convert the HTML above into JavaScript. This is where the technique of dynamically including a JavaScript file comes to the rescue.

The code below is the script src tag converted into JavaScript. If you are not familiar with this technique, see this posting for details.

function includeJavascript(src) {
  if (document.createElement && document.getElementsByTagName) {
    var head_tag = document.getElementsByTagName('head')[0];
    var script_tag = document.createElement('script');
    script_tag.setAttribute('type', 'text/javascript');
    script_tag.setAttribute('src', src);
    head_tag.appendChild(script_tag);
  }
}
includeJavascript("http://myserver.com/mytestcode.js");

Instead of cutting-and-pasting the entire mytestcode.js into the CodeRunner’s textbox, just use the code above and make your changes directly to the mytestcode.js file on your server.

Furthermore, you don’t need to put all your code in a single file. You can break them up into multiple files and call out the include function for each.

includeJavascript("http://myserver.com/mytestcode1.js");
includeJavascript("http://myserver.com/mytestcode2.js");
includeJavascript("http://myserver.com/mytestcode3.js");
includeJavascript("http://myserver.com/mytestcode4.js");

With this technique, you can even run your entire OpenSocial application in CodeRunner!

Alvin Abad