Matt Brown

How to verify a PGP signature with GnuPG

In case you are an idiot like me, here is a simple set of steps for verifying a PGP signature (for example, if you are downloading the TrueCrypt installer and you want to verify that the binary is intact).

If you already have GnuPG or another PGP client installed, skip steps 1 and 2.

  1. Install GnuPG - on my Mac with MacPorts, I ran

    $ sudo port install gnupg
    
  2. Create your private key with

    $ gpg --gen-key
    

    Accept all of the default options.

  3. Download the public key of the person/institution you want to verify. For TrueCrypt, their public key is available on the TrueCrypt website.

  4. Import the person’s public key into your key ring with:

    $ gpg --import TrueCrypt-Foundation-Public-Key.asc
    

    (change the filename to whatever is appropriate).

  5. You need to sign the person’s public key with your private key, to tell PGP that you “accept” the key. This involves a few steps of its own:

    1. List the keys in your keyring with

      $ gpg --list-keys
      

      The output will look like:

      ...
      pub   1024D/F0D6B1E0 2004-06-06
      uid                  TrueCrypt Foundation
      sub   4077g/6B136ECF 2004-06-06
      
    2. The “name” of their key is the part after “1024D/” in the line

      pub   1024D/F0D6B1E0 2004-06-06
      
    3. Sign their public key with:

      $ gpg --sign-key F0D6B1E0
      
  6. Now you can verify the signature of the file you downloaded. With TrueCrypt and its installer, this command was:

    $ gpg --verify TrueCrypt\ 7.1\ Mac\ OS\ X.dmg.sig
    

    which output:

    gpg: Signature made Thu Sep  1 11:50:54 2011 EDT using DSA key ID F0D6B1E0
    gpg: Good signature from "TrueCrypt Foundation " 
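If you want to sanity-check your GnuPG setup end-to-end before trusting it with a real download, here is a self-contained sketch that creates a throwaway key in a temporary keyring, makes a detached signature, and verifies it. (This assumes GnuPG 2.1+ for `--quick-generate-key` and `--pinentry-mode`; the name, email, and filenames are made up for the demo.)

```shell
# Create an isolated, throwaway keyring so we don't touch the real one.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Generate a passphrase-less demo key non-interactively.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo <demo@example.invalid>' default default never

# Sign a file with a detached signature, then verify it.
echo 'hello' > "$GNUPGHOME/demo.txt"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --output "$GNUPGHOME/demo.txt.sig" --detach-sign "$GNUPGHOME/demo.txt"
gpg --verify "$GNUPGHOME/demo.txt.sig" "$GNUPGHOME/demo.txt" && echo VERIFIED
```

Note that verifying a detached `.sig` file requires the signed file to sit next to it (or be named as the second argument), which is why the TrueCrypt example above works with just the `.sig` filename.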
    

The dangers of java.security.SecureRandom

Java offers a few ways to generate random numbers, the default being java.util.Random. java.security.SecureRandom offers a more-secure extension of java.util.Random which “provides a cryptographically strong random number generator”.

“Cryptographically strong” sounds like something everyone would want, right? Why generate weak random numbers if you can generate secure random numbers instead?

Well, there is a pretty large downside to using SecureRandom in some scenarios:

If you want a cryptographically strong random number in Java, you use SecureRandom. Unfortunately, SecureRandom can be very slow. If it uses /dev/random on Linux, it can block waiting for sufficient entropy to build up.

Well, here is how the /dev/random device file in Linux works:

In this implementation, the generator keeps an estimate of the number of bits of noise in the entropy pool. From this entropy pool random numbers are created. When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.

If you don’t read the descriptions carefully enough, you might miss the fact that /dev/random will block when there is not enough entropy available. This means that in practice, calls that seed or draw bytes from a SecureRandom backed by /dev/random can block for an unknown amount of time.
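You can see the difference between the two devices from a shell. (This is Linux-specific, and a caveat: on kernels 5.6 and later, /dev/random only blocks until the pool is first initialized at boot, so on a modern desktop both reads usually return immediately.)

```shell
# /dev/urandom never blocks; it falls back to a CSPRNG when entropy is low.
head -c 16 /dev/urandom > /dev/null && echo urandom-ok

# /dev/random historically blocked whenever the kernel's entropy estimate
# hit zero. Guard the read with a timeout so an entropy-starved machine
# (e.g. a headless VM) doesn't hang this demo:
timeout 5 head -c 16 /dev/random > /dev/null \
    && echo random-ok || echo random-blocked
```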

This is great if you truly need a very strong random number - return random data generated by the environment rather than a pseudo-random number generator, and if there is none available wait until there is some more - but is a really poor choice if you don’t need a super-secure random number and just a random-enough number will do.

It’s a bad habit to use SecureRandom everywhere by default, unless you truly want to make sure your unit tests or other code-that-doesn’t-need-to-be-that-secure randomly block for long periods of time in certain environments (hint: you probably don’t want this).

Unpredictable blocking is a very bad thing for most applications.
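If you are stuck with code that uses SecureRandom and you are on an entropy-starved Linux box, a commonly cited workaround is to point the JVM's seed source at /dev/urandom via the `java.security.egd` system property. (This is a JVM configuration flag, not code; `myapp.jar` below is a placeholder for your application. The odd `/dev/./urandom` spelling works around a historical JDK quirk that special-cased the literal path `file:/dev/urandom`.)

```
# Seed SecureRandom from the non-blocking /dev/urandom device:
java -Djava.security.egd=file:/dev/./urandom -jar myapp.jar
```

Whether this is acceptable depends on your threat model: for key generation you may genuinely want the blocking behavior; for tests and session tokens you almost certainly don't.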

Setting up an SSH tunnel to forward ports using Fedora 14

TLDR: By default SELinux in Fedora 14 blocks sshd from forwarding traffic, even if your sshd_config allows it. Run setsebool -P sshd_forward_ports 1 to allow forwarding.

When working from home, I was attempting to set up an SSH tunnel to forward traffic from my Macbook Pro to a Fedora machine I have on the network in the office. We have a VPN for accessing machines on the corporate network, but a particular internal web application has always been very tricky to connect to over the VPN (for some unknown reason - it takes minutes for any page to load).

After getting fed up with using VNC over the VPN to access this webapp from a machine on the network - which is unbearably slow - I remembered I could try to set up an SSH tunnel between my laptop and another machine I own on the network (in a bit of an “aha, why didn’t I think of this 6 months ago!” moment).

Setting up the tunnel is simple: run this ssh command in a terminal window:

$ ssh -ND 5555 matt@officelinuxmachine

and then configure a browser to use 127.0.0.1 and port 5555 as a Socks v5 proxy.

However then I ran into something tricky - when I tried to access the troublesome web app in the browser through the proxy, officelinuxmachine was refusing my requests:

debug1: channel 2: new [dynamic-tcpip]
channel 2: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 5555 for 10.22.15.138 port 80, connect from 127.0.0.1 port 62342, nchannels 3

(this is the output from the ssh client on my laptop, reporting that the other side of the tunnel is prohibiting the open command)

After googling around a bit, I checked to make sure /etc/ssh/sshd_config on the other side of the tunnel allowed tunneling (AllowTcpForwarding yes, PermitTunnel yes) - which it did.

After a few minutes of frustration, I noticed this in /var/log/messages of officelinuxmachine:

Sep 13 08:44:33 officelinuxmachine setroubleshoot: SELinux is preventing /usr/sbin/sshd from name_connect access on the tcp_socket port 80. For complete SELinux messages. run sealert -l 4153f994-92e9-4d14-89e8-881c0c8d9669

Uh-oh, SELinux is blocking sshd from connecting, even though sshd itself is configured ok! Running the sealert command to view the full alert yields this output:

SELinux is preventing /usr/sbin/sshd from name_connect access on the tcp_socket port 80.

***** Plugin catchall_boolean (47.5 confidence) suggests *******************

If you want to allow sshd to forward port connections then you must tell SELinux about this by enabling the ‘sshd_forward_ports’ boolean.

Do setsebool -P sshd_forward_ports 1

Now it all makes sense - SELinux is set up to block sshd from forwarding ports by default. Executing

$ setsebool -P sshd_forward_ports 1

then allows the port to be forwarded as intended.

How to set up GNU screen to tail a log file at startup

At work I use byobu on my Fedora machine as a wrapper around screen, and I’ve set up my .byobu/windows file (which is a bit of a replacement for .screenrc in a normal screen session) to open up all of the screen windows I like to have at startup.

I like to start a new session with a few dedicated windows set up:

  1. A window titled “logs” which tails the log file of the main application I’m working on

  2. A window titled “errors” which tails the same log file as #1, but piping the output to grep to watch for ERRORs

  3. A window titled “project” which starts in my project’s main directory

  4. A window titled “bash” which starts in my home directory.

My .screenrc (actually, .byobu/windows) looked like this:

# window 1
chdir /home/matt/code/project/logs
screen -t 'logs'

# window 2
chdir /home/matt/code/project/logs
screen -t 'errors'

# window 3
chdir /home/matt/code/project
screen -t 'project'

# window 4
chdir
screen -t 'bash'

To actually start the tail process, I used to always search through my command history to find the correct tail command to run in the window (either tail -F current.log, or tail -F current.log | grep -A 3 ERROR to watch for the ERROR lines only).

Until today, that is, when I figured out how to set up screen to run these commands for me automatically when the screen session starts.

There seem to be two ways to go about this:

  1. You can simply include the command you want to run in this window in the line containing screen -t, such as

    screen -t 'logs' tail -F current.log
    

    however, this breaks if you want the command to include a pipe, such as

    screen -t 'errors' tail -F current.log | grep -A 3 "ERROR"
    

    and I couldn’t figure out the correct way to escape this.

    Setting up the screen window this way will also cause screen to exit the window entirely if you enter Ctrl+C, rather than just exiting the command and returning you to the shell (which makes sense if you think about it).

  2. Another way to execute a command in the window at startup is to use the stuff command, which will paste whatever string you want into the input buffer of the current window. The trick here is to also include the escape sequence for the Enter key, to simulate someone actually entering the command and then pressing enter at the keyboard:

    screen -t 'errors'
    stuff 'tail -F /var/ec/current.log | grep -A 3 "ERROR"^M'
    

    (the ^M is entered by pressing Ctrl+V, Enter with your keyboard, not by actually typing caret and uppercase M)

This works like a charm - when I start a new screen/byobu session, I have windows named “logs” and “errors” already set up and tailing the log files I want them to.
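Putting it together, the two tail windows in a .byobu/windows file could look like this (paths match the earlier example, and the ^M is again entered with Ctrl+V, Enter, not by typing caret and M):

```
# window 1
chdir /home/matt/code/project/logs
screen -t 'logs'
stuff 'tail -F current.log^M'

# window 2
chdir /home/matt/code/project/logs
screen -t 'errors'
stuff 'tail -F current.log | grep -A 3 "ERROR"^M'
```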


How to start VNC server from the command-line in Fedora 14

I’ve recently started using Linux (Fedora 14 to be specific) as my primary development OS at work. I actually have two desktop machines side-by-side at my desk - a Windows 7 PC for general office-type work and the Fedora machine for development. When working from home, I have to remote into the Windows machine and then use VNC from that machine to the Linux machine. The built-in VNC server in Fedora (vino-server) is configured by default to start only once you start a physically-logged-in session (since it runs as your local user, with the preferences you set, etc.).

This is fine most of the time, but when working from home I have a nasty habit of forgetting how the setup I’m using actually works and logging out of my physical session, thus terminating my VNC session and (GUI) access to the Linux desktop machine. To get back in, I need to find someone in the office who can walk over to my Linux desktop and log me in again. This is obviously a bit annoying.

After a fair amount of searching for how vino-server could be restarted remotely from the command line, I’ve found two methods for resolving this issue (ironically most of the advice was from threads on the Ubuntu forums, not so much on Fedora sites). The first option is rather ugly in that it leaves your system with a user who will be automatically logged in, and stored passwords will be saved to disk in plaintext. I would not suggest this approach unless absolutely desperate:

Solution 1: Set Gnome Desktop Manager to auto-login your user

One fix I found for this is to set up Gnome Desktop Manager to auto-login your user when it starts (at boot); this solves the VNC problem fine but it causes a few other problems of its own (listed below):

  1. Edit /etc/gdm/custom.conf and add the two settings under the [daemon] section, each on their own line: AutomaticLoginEnable=true and AutomaticLogin=yourUsername. Now the next time that the machine boots, yourUsername will be logged in.

  2. However, Gnome has a feature (the “keyring”) that asks you to enter a master password to unlock the keyring, in which Gnome and other applications on your system can store any password information you save in an encrypted manner. If GDM auto-logs in your user, Gnome will be sitting at a screen asking the user at the physical display to enter the master password to unlock the keyring. If you are remote at this time, you will not be able to enter the password! To prevent this behavior, rename or delete the ~/.gnome2/keyrings/login.keyring file.

  3. A new keyring needs to be created to replace the previous one. To do so, you can either (from a physical login) attempt to store a new password, triggering Gnome to prompt you for a new keyring password (you must leave the password blank for this method to work), or create the file ~/.gnome2/keyrings/default containing just the word default (no quotes).

  4. From now on you should be able to VNC if you ever log out of your physical session, since Gnome will automatically log your user back in.
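For reference, the edit from step 1, as it would appear in /etc/gdm/custom.conf (yourUsername is a placeholder for your actual login name):

```
# /etc/gdm/custom.conf
[daemon]
AutomaticLoginEnable=true
AutomaticLogin=yourUsername
```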

The nasty side-effect of this method is that with an empty keyring password, any stored passwords in your local account are stored on disk in plaintext. If you save a password to your IM account in Pidgin, or your email account password in Thunderbird, etc., all of these are stored in ~/.gnome2/keyrings/default.keyring in plaintext.

This might not seem so bad at first glance since this file is readable by only your user, until you remember that your account will automatically be logged in whenever the machine boots. All someone needs to do is reboot your machine - even if you have locked the display - to gain access to your files.

As mentioned above, this solution is not that great due to the side-effects - I would really not recommend doing this unless nothing else works.

Solution 2: Start vino-server over X11 Forwarding

It is possible to start a new vino-server instance if you login to the target machine with X11 forwarding.

First, ssh to the target machine with ssh -X targetmachine (note, if you get warning messages about untrusted X11 forwarding setup failed, try ssh -Y instead).

If you are using Windows on the machine you are doing this from, you can install Cygwin and the X11 options (in Cygwin’s installer, select “xorg-server” from the X11 category; this should pull in a lot of other dependencies automatically). Once installed, open a Cygwin terminal and run startx. Then start the ssh session from the terminal window within the X session that pops up on the Windows machine.

Once logged into the target machine, switch to the root account (using sudo -s) and then run DISPLAY=:0.0 xhost + to allow remote access to the local X server. Then exit from root, and as your normal user run DISPLAY=:0.0 /usr/libexec/vino-server to start a new instance of vino-server.

It’s necessary to prepend these commands with DISPLAY=:0.0 to have them use the X display of the physical display.

To recap:

  1. ssh -X targetmachine

  2. sudo -s to change to root

  3. Run DISPLAY=:0.0 xhost +

  4. exit from root

  5. Run DISPLAY=:0.0 /usr/libexec/vino-server as the regular user to start vino-server again

  6. You should now be able to connect via SSH and start a new login session as if you were sitting at the machine.

Note that from here, if you terminate the SSH session in which you spawned vino-server, then the VNC server will be shut down as well. To re-start the VNC server, you can either re-do these steps or (if connected to the target machine via VNC) open vino-preferences (either by running the command or navigating to System > Preferences > Remote Access). Simply running vino-preferences seems to start a new instance of vino-server if none is already running.

This thread on the Ubuntu forums was a big help in figuring out how to get this to work.

Compared to Solution 1, this solution does not leave your machine in a state in which it could be compromised - no automatic login or keyring password options need to be changed.