A friend asks:
...so I'm trying to install RoR on a remote server of mine, but it seems that the RoR development methodology wants you to test your webpages on localhost:3000. But, obviously, since the computer is in the cloud, I can't just pull up a webpage on the console. Is there an easy way to get development pages to show up on a web browser that's not local?
So there are a few options.
1. Use mod_passenger with Apache for all your environments. This requires quite a bit more work, but it's the way you'd want to do it for any larger environment. Google will teach you all you want about this, but I recommend it only when your environments are mature and you need to scale out production.
2. Tell Apache to pass http://dev.yourdomain.com/ to localhost port 3000. You'd need to set up a dev hostname in DNS and point it at the server as well. It would look like this in the Apache config:
<VirtualHost *:80>
ServerAdmin webmaster@domain.com
ServerName dev.domain.com
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
Or, if you'd rather hang it off a sub-path of an existing host, the same directives work:

ProxyPass /foo http://127.0.0.1:3000
ProxyPassReverse /foo http://127.0.0.1:3000
More on mod_proxy here: http://httpd.apache.org/docs/2.0/mod/mod_proxy.html
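Note that the VirtualHost above assumes mod_proxy and mod_proxy_http are actually loaded. On a Debian/Ubuntu-style Apache that's roughly this (adjust for your distro):

# enable the proxy modules, then restart to pick them up
a2enmod proxy proxy_http
/etc/init.d/apache2 restart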
3. Use an SSH tunnel. Forward local port 3000 to port 3000 on the server:

ssh -L 3000:127.0.0.1:3000 host.server.com
Then in your browser surf to http://127.0.0.1:3000
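If you end up using that tunnel a lot, you can bake it into your OpenSSH client config so it comes up on every login; a minimal sketch, using host.server.com from above:

# ~/.ssh/config -- forward local port 3000 automatically on login
Host host.server.com
    LocalForward 3000 127.0.0.1:3000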
For speed, I'd probably do the last option to test that things are working, then move up towards #1 as your environment gets more set up. #1 is the most work (mod_passenger can be a PITA).
A friend asked:
If I bought a wildcard certificate for *.domain.com, wouldn't that cover *.sub.domain.com?
Hrm...I had to look that one up. The answer is: no, not according to the RFC. RFC 2818 states:
Matching is performed using the matching rules specified by [RFC2459]. If more than one identity of a given type is present in the certificate (e.g., more than one dNSName name, a match in any one of the set is considered acceptable.) Names may contain the wildcard character * which is considered to match any single domain name component or component fragment. E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
More here: http://www.ietf.org/rfc/rfc2818.txt
There are reports that older versions of Firefox don't complain when encountering an out-of-spec sub-domain SSL wildcard, but IE does. I would recommend sticking with the RFC.
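If you ever need to check exactly which names a cert carries, you can pull the subject and subjectAltName entries out with openssl; a quick sketch, with dev.domain.com standing in for your server:

# dump the subject and any DNS subjectAltName entries of the served cert
echo | openssl s_client -connect dev.domain.com:443 2>/dev/null | \
  openssl x509 -noout -text | grep -E 'Subject:|DNS:'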
In previous years, when I had to run commands on hundreds or thousands of servers, I'd hack together a combo of expect and perl. It would almost always evolve into a system of complicated config and command files that was rickety and didn't handle errors well. I didn't dare mess with multi-threaded perl, which meant it was serial execution and slow for large clusters. It got the job done but left me wishing for a better system. I have always had cfengine in my sysadmin toolbox, but it's more about entropy reduction and not set up for one-off or occasional situations. I tried a few parallel shell implementations (such as dsh, pdsh) and found them all lacking.
Enter Capistrano. It bills itself as an 'easy deployment' system, with Ruby on Rails application deployment as the main use case. And since I'd never worked in a RoR environment before, I had no real reason to look into it much. But in the last 3 months, I have worked at 2 different companies that use RoR + Capistrano for deployment and have learned enough to see its true power. How I'd describe it to a fellow sysadmin is: "parallel execution of scripts and commands on multiple hosts...easily". Want to quickly execute a command on every host in your cluster? This is the way to do it.
Installing it is pretty easy...you need a modern version of Ruby, a modern version of RubyGems, and then a 'gem install capistrano' later, you're good to go. You only need to install all of this on the controlling/deployment server...not on all your clusters/nodes. If you get errors with the version of ruby/gems that comes with your distro, install from source (recommended). I followed this tutorial to get it set up and to get the basics. You should read it as well. They skip a few necessary things (such as sudo and useful ENV variables), which I cover below.
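For reference, once ruby and rubygems are sorted out, the install itself is just:

# install capistrano from rubygems (assumes ruby/gems are already modern)
gem install capistrano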
An example Capfile of how to restart apache on a whole cluster:
role :apache_cluster, "www1", "www2", "www3"

desc "restart apache on www hosts"
task "restart_apache_www", :roles => :apache_cluster do
  sudo "/etc/init.d/apache2 restart"
end
sudo is a built-in method of modern versions of Capistrano. Instead of the 'run' method, you use 'sudo' and it understands and responds to the password prompt (if prompted). Very slick. One thing to keep in mind is that it is running everything as YOU, unless otherwise specified. It will look like you logged into 50 servers all at once and ran sudo commands all at once. I bet that'd look cool on a Splunk log graph.
Now, from the command line, type 'cap -T' to get a list of your documented commands. As long as you describe your commands, you will always get a list of what you can run. 'cap -e command' will explain commands.
$ cap -T
cap invoke              # Invoke a single command on the remote servers.
cap restart_apache_www  # restart apache on www hosts
cap shell               # Begin an interactive Capistrano session.
Run the command we set up: 'cap restart_apache_www'. It will prompt for your password.
$ cap restart_apache_www
  * executing `restart_apache_www'
  * executing "sudo -p 'sudo password: ' /etc/init.d/apache2 restart"
    servers: ["www01.domain.com", "www02.domain.com", "www03.domain.com"]
Password:
    [www01.domain.com] executing command
    [www03.domain.com] executing command
    [www02.domain.com] executing command
 ** [out :: www01.domain.com]  * Restarting web server apache2
 ** [out :: www01.domain.com]    ...done.
 ** [out :: www03.domain.com]  * Restarting web server apache2
 ** [out :: www03.domain.com]    ...done.
 ** [out :: www02.domain.com]  * Restarting web server apache2
 ** [out :: www02.domain.com]    ...done.
    command finished
And that completed in parallel, in about 1 second. What if you have a one-off thing you want to run on all hosts? Try 'cap invoke', no Capfile required. If you have a Capfile with hosts defined, it will run against all of them by default, or it can take a role by passing ROLES as an env variable.
$ cap COMMAND=uptime HOSTS="www1,www2" invoke
  * executing `invoke'
  * executing "uptime"
    servers: ["www1", "www2"]
Password:
    [www1.prod] executing command
    [www2.prod] executing command
 ** [out :: www1] 16:57:04 up 190 days, 4:30, 0 users, load average: 0.30, 0.33, 0.33
 ** [out :: www2] 16:57:04 up 190 days, 4:42, 0 users, load average: 0.42, 0.32, 0.32
    command finished
and
$ cap ROLES=www COMMAND=uptime invoke
  * executing `invoke'
  * executing "uptime"
    servers: ["www1", "www2", "www3"]
Password:
    [www1] executing command
    [www2] executing command
    [www3] executing command
 ** [out :: www1] 17:00:17 up 190 days, 4:33, 0 users, load average: 0.54, 0.37, 0.34
 ** [out :: www2] 17:00:17 up 190 days, 4:46, 0 users, load average: 0.18, 0.27, 0.29
 ** [out :: www3] 17:00:17 up 190 days, 5:02, 0 users, load average: 0.17, 0.22, 0.25
    command finished
But every time you 'invoke', you must re-type your password. Want to stay connected? Try the shell:
$ cap shell HOSTS="www1,www2"
  * executing `shell'
====================================================================
Welcome to the interactive Capistrano shell! This is an experimental
feature, and is liable to change in future releases. Type 'help' for
a summary of how to use the shell.
--------------------------------------------------------------------
cap> uptime
[establishing connection(s) to www1, www2]
Password:
 ** [out :: www1] 17:03:24 up 190 days, 4:36, 0 users, load average: 0.29, 0.32, 0.32
 ** [out :: www2] 17:03:24 up 190 days, 4:49, 0 users, load average: 0.35, 0.30, 0.29
cap> w
 ** [out :: www1] 17:03:37 up 190 days, 4:36, 0 users, load average: 0.24, 0.31, 0.31
 ** [out :: www1] USER  TTY  FROM  LOGIN@  IDLE  JCPU  PCPU  WHAT
 ** [out :: www2] 17:03:37 up 190 days, 4:49, 0 users, load average: 0.30, 0.29, 0.28
 ** [out :: www2] USER  TTY  FROM  LOGIN@  IDLE  JCPU  PCPU  WHAT
cap> ls /tmp/blah
*** [err :: www1] ls: cannot access /tmp/blah
*** [err :: www2] ls: cannot access /tmp/blah
*** [err :: www1] : No such file or directory
*** [err :: www2] : No such file or directory
error: failed: "sh -c 'ls /tmp/blah'" on www1,www2
Notice that errors show up on 'err' lines.
While useful and timesaving, this barely scratches the surface of the power of Capistrano. I suggest you read the "From the Beginning" doc on the Capistrano site. If you discover any cool recipes, share them in the comments on this blog and I'll publish them as a followup later (as I learn more recipes myself).
P.S. I think I'm going to add a feature to MachDB to export host lists in a format compatible with the Capfiles.
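In the meantime, here's a hypothetical one-liner version of that export, assuming a plain hosts.txt with one hostname per line (the :www role name is just an example):

# emits a Capfile role line like: role :www, "www1","www2","www3"
echo "role :www, $(sed 's/.*/"&"/' hosts.txt | paste -sd, -)"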
This is something I've needed at various jobs/situations for years...a place to store the root/router/database/web passwords that only I can see. There are a lot of desktop/handheld apps for this but I always feel like I could lose the computer/handheld that it's on and I'd be boned. I'd rather have something I can stick on a server somewhere and access via a remote shell....or carry it around on a thumb drive. Here are the scripts:
encrypt.sh
#! /bin/sh
openssl bf -a -salt -in $1.txt -out $1.bf && rm -v $1.txt
decrypt.sh
#! /bin/sh
openssl bf -a -d -salt -in $1.bf
To use it, create a file named blah.txt that has your secret info in it. Run the encrypt script first:
$ ./encrypt.sh blah
enter bf-cbc encryption password:
Verifying - enter bf-cbc encryption password:
removed `blah.txt'
It will encrypt the file and remove it. Check the contents of the file:
$ cat blah.bf
U2FsdGVkX1/+ZGiXPSZX8MED9aXrm1NfIEjpv5vvFKo=
It's actually base64-encoded, so you can email it to yourself for safekeeping if you so choose.
To decrypt for reading:
$ ./decrypt.sh blah
enter bf-cbc decryption password:
secret host: secret password
secret host2: secret password2
Now take the encrypted output file and the 2 scripts, email them to yourself and store a copy on a thumb drive. :)
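If you end up updating the file a lot, the two scripts compose into an edit helper easily enough; a sketch (edit.sh is my own invention, same conventions as above):

#! /bin/sh
# edit.sh -- decrypt to plaintext, edit, re-encrypt, remove plaintext
openssl bf -a -d -salt -in $1.bf -out $1.txt && \
  ${EDITOR:-vi} $1.txt && \
  openssl bf -a -salt -in $1.txt -out $1.bf && rm -v $1.txt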
Telemarketers, vendors and people I'd rather not communicate with frequently intrude on my early morning slumber (esp East Coast vendors), meetings, lunches, free time and life in general. And since they usually call from unrecognized numbers, I feel compelled to answer (could be something important, right?). A co-worker and I have been using a neat technique to remove these individuals' ability to interrupt us: create a new contact called "Do Not Answer" with a custom silent ringtone. Each time they call from a new number, add it as an additional number on that contact. And with that silent ring, now they can't interrupt you in meetings, at home, early in the morning, etc.
I used iTunes to make a silent ringtone...you can download it here: iPhone Silent Ringtone
Whipped this up for work, figured I'd share it with the world, since it's decently useful. Stick it in cron nightly; it needs to run as root. It will diff what it sees against yesterday's scan and email you if new ports/hosts pop up on your networks. If you find errors or make mods, use this pastebin to post your version in the comments: http://pastebin.com/f635a7517
#! /bin/sh
DIR="/opt/nmap/scans"
NETWORKS="192.168.1.0-255"
TODAY=`date +%Y%m%d`
YESTERDAY=`date -d yesterday +%Y%m%d`

# scan each network, greppable output, one file per network per day
for network in $NETWORKS
do
  nmap -n -sS $network -oG $DIR/$network.$TODAY.nmap
done

# diff today against yesterday, ignoring nmap's comment headers
for network in $NETWORKS
do
  diff -I "^#" $DIR/$network.$TODAY.nmap $DIR/$network.$YESTERDAY.nmap > $DIR/$network.$TODAY.diff
done

# mail out any non-empty diffs
for network in $NETWORKS
do
  SIZE=`find $DIR/$network.$TODAY.diff -size +0b`
  if [ "$SIZE" = "$DIR/$network.$TODAY.diff" ]
  then
    cat $DIR/$network.$TODAY.diff | mail -s "Change Detected for $network" user@host.com
  fi
done
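For the cron part, something like this in /etc/crontab does it (the path is wherever you stash the script):

# nightly at 2am, as root
0 2 * * * root /opt/nmap/portscan.sh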
I discovered sfdisk a few years ago (part of util-linux) and have been using it in automation scripts ever since. sfdisk is like fdisk, but is scriptable. So for example, to list the partitions on a disk:
[root@host]# sfdisk -l /dev/sdc

Disk /dev/sdc: 121601 cylinders, 255 heads, 63 sectors/track
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdc1          0+ 121600  121601- 976760001   83  Linux
/dev/sdc2          0       -       0          0    0  Empty
/dev/sdc3          0       -       0          0    0  Empty
/dev/sdc4          0       -       0          0    0  Empty
To list them in a dump format, suitable as input to sfdisk (for cloning, saving or for some wacky awesome script):
[root@host]# sfdisk -d /dev/sdc
# partition table of /dev/sdc
unit: sectors

/dev/sdc1 : start=       63, size=1953520002, Id=83
/dev/sdc2 : start=        0, size=         0, Id= 0
/dev/sdc3 : start=        0, size=         0, Id= 0
/dev/sdc4 : start=        0, size=         0, Id= 0
You can use that dump in a fashion like this to clone a disk's partition map:
sfdisk -d /dev/sdc | sfdisk /dev/sdd
Or for saving it and using it later:
sfdisk -d /dev/sdc > partition.sfdisk
...
sfdisk /dev/sdc < partition.sfdisk
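And since sfdisk reads that same format on stdin, you can script a layout from scratch too. A sketch, assuming /dev/sdd is blank and expendable: one Linux (type 83) partition spanning the whole disk, where the blank fields mean default start and all remaining space:

sfdisk /dev/sdd <<EOF
,,83
EOF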
I have done a few things in the last few months that are worthy of mention. I haven't had much of a chance to blog about them or write them down, what with them all being back to back and then holidays, being sick, vacation, more holidays, more being sick. But here are some links to the media I've produced. Enjoy.
San Diego CA USA to Cabo San Lucas, Baja California Sur, MX (in 6 days, 2200+ miles, on my new BMW adventure motorcycle)
One month of beard growth in 5 seconds (an experiment in time lapse)
Deleting that same beard at high speed (an experiment with tracy's camera)
It's silly I've waited this many years to go figure this out. Many of you may already know that modern installs of OpenSSH will tab complete hostnames based on what's in the /etc/hosts file. But there is a neat little addition to your .bashrc that will tack on the ability to tab complete hostnames based on what's in ~/.ssh/known_hosts. Add this to your .bashrc:
SSH_COMPLETE=( $(cat ~/.ssh/known_hosts | \
                 cut -f 1 -d " " | \
                 sed -e s/,.*//g | \
                 uniq ) )
complete -o default -W "${SSH_COMPLETE[*]}" ssh
All your new shells will auto complete based on what hosts you've connected to at least once (and therefore have entries in the known_hosts file). Any host you've never visited won't be there. If you want to filter it to certain hosts (for example, hosts in a certain domain name), just add a 'grep domain.com' after the 'uniq'; see the sketch below. If you're like me, this will save a lot of keystrokes over the next few years.
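For example, the filtered version would look something like this (domain.com standing in for whatever you want to match):

SSH_COMPLETE=( $(cat ~/.ssh/known_hosts | \
                 cut -f 1 -d " " | \
                 sed -e s/,.*//g | \
                 uniq | \
                 grep domain.com ) )
complete -o default -W "${SSH_COMPLETE[*]}" ssh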
Tip: If you cut and paste my text above and it gives errors, make sure your cut-n-paste didn't change the quotes. If you want to see what it's going to use (or troubleshoot/modify), you can run this on the command line:
cat ~/.ssh/known_hosts | \
  cut -f 1 -d " " | \
  sed -e s/,.*//g | \
  uniq
Installed a new LeoVince midpipe on the Hypermotard earlier in the week (2am in the garage with a rubber hammer the night before an early meeting). Finally got a moment away from the keyboard to take it out for a spin tonight. Total awesomeness. It starts better, sounds better and runs better. I think it's a bit faster too. How awesome is that for the effort and price? I should have done this a long time ago.
Pics: Old and Busted. New Hotness.
I had also adjusted the bars a bit to be a tad higher, but I didn't like it. The turn signals are now pointed at the ground 5 feet in front of me, which means my effective road-use brightness went from 'are those lasers?' to 'dead lightning bug'. I may pick a position halfway between this one and neutral. Will have to test it again later in the week when I've got some more time.