This is what it looks like when someone posts a link to your blog on reddit (the linux subreddit). I went from 48 visits the previous day to 581 visits on the day of the reddit post, more than an order of magnitude more visitors. I don't give a s**t about visitor count, but as a graph and chart nerd, it's neat to see in Google Analytics.
Side note: great discussion on reddit. I will be writing a follow-up article about PGP + Vim, since that seems pretty slick. The jury is still out on which is the better solution for me.
In a post last year, DIY Encrypted Password Vault, I showed a simple way to use OpenSSL to create encrypted text files. Since I'd need to decrypt those files to edit them (usually with Vim), there would be an unencrypted temp file sitting around while I was editing. And using a filesystem with history meant those files stuck around for a long time. BAD. Surely there is a better way...
Can we encrypt directly with Vim? Actually, yes... Vim has encryption built in (via the -x flag). It works and it's simple. The problem is that it uses 'crypt', which is not terribly hard to break. It also leaves a cleartext .tmp file around while you're editing, which makes it worthless to me for a password safe.
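For reference, here's what the built-in route looks like (a quick sketch; the prompt wording may vary slightly by Vim version):
$ vim -x secrets.txt
Enter encryption key:
Enter same key again: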
Enter the Vim openssl plugin. This plugin lets you write files whose extensions correspond to the type of encryption you want (e.g. .des3, .aes, .bf, .bfa, .idea, .cast, .rc2, .rc4, .rc5), and it turns off the swap file and .viminfo log, leaving no tmp files around. Excellent! Here's typical usage:
Edit a new file with the .bfa extension:
$ vi test.bfa
Add your secrets and save it out. It will prompt you for a password (twice) to encrypt against.
blah blah blah : secrets of the world
~
~
~
~
:wq
enter bf-cbc encryption password:
Verifying - enter bf-cbc encryption password:
You can look at the data in the file to see the encrypted content:
$ cat test.bfa
U2FsdGVkX1+TPJBn3hsJ6nzsXzDvTXOxdDk1PkWkTDFG45HIvMnZbBNIrnJubPCY
EexmfIJpZqo=
To re-open a previously encrypted file, just open it with vi. The plugin automatically recognizes the extension and prompts for your password:
"test.bfa" 2L, 78C
enter bf-cbc decryption password:
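Since the file is just standard OpenSSL output, you can also decrypt it outside of Vim. A sketch, using the same cipher and flags the .bfa extension implies:
$ openssl bf -d -a -salt -in test.bfa
enter bf-cbc decryption password:
blah blah blah : secrets of the world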
Pretty slick! You'll need the openssl binary in your path for this to work, which is pretty standard these days. Here is a little script that I run to set this up on my various home directories:
#! /bin/sh
test -d ~/.vim || mkdir ~/.vim/
test -d ~/.vim/plugin || mkdir ~/.vim/plugin
curl -s "http://www.vim.org/scripts/download_script.php?src_id=8564" \
    > ~/.vim/plugin/openssl.vim
Edit: 2010+ versions of Vim (7.3 and later) have blowfish support built in. Excellent, forward progress! But I'm probably not going to upgrade Vim on my Mac and all my servers just for this when a plugin works. Good to see progress, but for now the plugin makes the most sense for me.
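If you do have a 7.3+ Vim handy, something like this should select blowfish for the built-in -x encryption (a sketch; 'cryptmethod' is the relevant option):
$ vim -x --cmd 'set cryptmethod=blowfish' secrets.txt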
My friend Marc and I started doing research on what it would take to send a balloon to 'near space'. We've been inspired by a few others, most recently the father-son team from the UK that sent an iPhone up to 100,000 feet. We think we can build this for $200, probably less.
Things we know:
The first launch will be to test the concepts and the recovery mechanism. We plan to use the InstaMapper service with a T-Mobile phone for ground tracking. We have a camera that will do the trick for image capture, using CHDK. Our friend has donated a cryogenic styrofoam box that should help with insulation, and we can use hot packs to keep things warm in there. We'll need some sort of LED light to help us with recovery after dusk.
We're also considering building a small data-logging device to record temperature, light and pressure, and maybe some other environmentals. Probably Arduino-powered, since that seems easy and cheap.
Questions we have right now:
Would love to see some comments by fellow space nerds.
From some advice for a friend who is looking to travel around the world and asked about cameras for travel.
I always say 'you should buy a DSLR', as it's a quantum leap in abilities. The huge resulting change in your photos is worth it. Just know that its bulk may mean you don't have it out all the time, even unconsciously. Most likely, if you get one now, you'll carry it everywhere for at least some time, and the pictures you'll have for the rest of your life will be worth it.
From my own travel experiences: when you're walking around with a big camera and glass around your neck, the weight and size get to be a burden. So does the "I have an expensive camera" factor. I took my DSLR and a PowerShot with me on my moto trip and ended up using the PowerShot for 95% of the shots for those reasons. It was a cheaper cam too, so I would take equipment-risky shots: I dropped it a handful of times where my DSLR would have shattered, and I would hand it to anyone willing to take a shot. (http://www.flickr.com/photos/n8foo/3597617054/) There is also something to be said for having a camera that's easy to 'wear' all day and have ready to fire. With my DSLR setup, I find myself asking 'is this shot worth getting this thing back out?' all the time, and I miss opportunities because of it. Which means that instead of keeping it in my sling bag, I carry it around my neck or in my hands constantly, and we're back to the top of this paragraph. I have long considered picking up a G11 (the G12 is out now, with better low light) as my travel camera for these reasons.
Back to the DSLR - assuming a Canon 550D, I'd look at 3 lenses. The EF-S 18-200 IS is a kick-ass walkabout lens; I use it on my 7D when I don't know what kind of photos to expect (travel). Its 11x equivalent zoom means you can get wide shots and then zoom in on that wildlife off in the distance. The EF-S 17-55 2.8 IS is amazing; nearly all my best shots have been taken with it. It's got L glass but isn't designated L because it's in the EF-S lineup. The downside is that it's kinda big. If you want that awesome DOF, pick up a Canon 50mm f/1.8 for $99; it's cheap plastic but will take good low-light portraits. I personally opted for the Sigma 30mm 1.4 as my fast prime lens, but it was 4x the price.
Micro 4/3 has no viewfinder, which sucks. It's available as a separate accessory on most models, but then you're about as bulky as a DSLR. The E-P2 is the best of the 4/3 breed, so you can't really go wrong if you do go down that road. I'd be jealous of it. :-) I played with a Sony NEX-5 tonight at Target and I hated its ergonomics and electronic focus ring. Shots looked pretty, though. If you're looking seriously at the 4/3 stuff, also check out the Canon G11/G12; those are amazing cams with good ergonomics. The f/2.8 is plenty for low light, coupled with the IS and a kick-butt sensor (12800 ISO!).
Re: DOF - if you want more DOF, get further away from your subject and use the zoom. It tends to flatten the image but the DOF will be more dramatic than at closer ranges. You probably already know this.
One last comment: one of the best things you can get to make your photography better is a tripod, or even a monopod. It'll make you compose your shots, makes them sharper, and lets you leave the shutter open longer than 1/30th.
No matter what gear you buy, 'getting better at photography' is the right path.
Keeping revisions and history on device configs is an essential part of a good change control process. I've found this to be an extremely useful and ass-saving part of system/network administration.
WARNING: Consider the security ramifications before you start a project like this. Access to network configs saved on a filesystem or code repository can reveal network topology and login information (some network gear passwords are easily decrypted). Be careful how and where you store this data. For my environment, the tftp server and SVN repository have restricted access to only the systems team.
Here's how I do it:
Setting up TFTPd is pretty easy. On Ubuntu/Debian:
apt-get install xinetd tftpd tftp
Set up something like this in /etc/xinetd.d:
service tftp
{
        protocol        = udp
        port            = 69
        socket_type     = dgram
        wait            = yes
        user            = nobody
        server          = /usr/sbin/in.tftpd
        server_args     = /tftpboot
        disable         = no
}
Set up the various directories and start the daemon. Be sure of your permissions, as your switch configs will be written to these directories.
mkdir -p /tftpboot/netconfigs/
chmod -R 700 /tftpboot
chown -R nobody /tftpboot
/etc/init.d/xinetd start
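Before touching the network gear, you can sanity check the daemon with the tftp client we installed earlier. A sketch (the file names are just examples); note that, depending on your tftpd, uploads may only be allowed to files that already exist and are world-writable, so pre-create the target if the put fails:
# on the tftp server
touch /tftpboot/netconfigs/test.txt
chmod 666 /tftpboot/netconfigs/test.txt
# from another host
$ tftp TFTPHOST
tftp> put test.txt netconfigs/test.txt
tftp> quit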
Double check that you can write switch configs out using your network gear. This is what it looks like on a Cisco 3560:
copy system:/running-config tftp://TFTPHOST:/netconfigs/SWITCH.config
And on a Cisco PIX firewall:
wr net TFTPHOST:netconfigs/SWITCH.config
Now you need to automate this. I created a utility user on each switch and a utility user in my subversion repository. This is what my tftp_switch.sh script looks like for my Cisco 3560s:
#!/bin/sh
DATE=`date +%F`
SWITCHES='sw-1 sw-2 sw-3 sw-4 sw-5'
USER=username
PASS=password
TFTPHOST="TFTPHOST"

for SWITCH in $SWITCHES
do
  (echo "${USER}"
  sleep 1
  echo "${PASS}"
  sleep 1
  echo "copy system:/running-config tftp://${TFTPHOST}:/netconfigs/${SWITCH}.config"
  sleep 15
  echo "exit"
  sleep 2
  echo exit
  while read cmd
  do
    echo $cmd
  done) | telnet $SWITCH >> ~/cronlogs/${SWITCH}.${DATE}.log
done
The 'sleep 15' is there in case it takes a moment to write to the tftp server.
I set up another script that runs the actions above, moves the files into the correct subversion tree, scrubs the files for strings that change too much (like timestamps or what-not) and then checks them into SVN. Here's my example:
#! /bin/sh
# write out network configs to TFTP server
/root/bin/tftp_switch.sh >/dev/null 2>&1
# copy them into the SVN tree
cp -fv /tftpboot/netconfigs/*.config /root/svn/network/
# remove things that change all the time
sed -i "s/ntp clock-period.*/ntp clock-period/g" /root/svn/network/sw-*.config
sed -i "s/Written by.*/Written by/g" /root/svn/network/sw-*.config
# check them in with subversion
cd /root/svn/network ; \
svn add -q *.config ; \
svn commit -q -m 'automatic checkin'
I set these files owned by root, mode 500, and set them to run nightly in cron.
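For reference, the cron entry looks something like this (the script name here is hypothetical; use whatever you called the wrapper above):
# /etc/crontab - check in network configs nightly at 2:30am
30 2 * * * root /root/bin/netconfig_checkin.sh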
Since everything is now stored in SVN, I can check configs out and in, see who saved them and when (depending on whether your gear writes that in the output), and compare against previous versions. I run WebSVN on my repo, so it's very easy to see what has changed. Super useful.
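For example, to see the history of a config and diff it against the previous night's version (the revision numbers here are made up):
$ cd /root/svn/network
$ svn log sw-1.config
$ svn diff -r 41:42 sw-1.config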
If anyone implements this and has suggestions for change, please let me know!
A friend asks:
...so I'm trying to install RoR on a remote server of mine, but it seems that RoR development methodology wants you to test your webpages on localhost:3000. But, obviously, since the computer is in the cloud, I can't just well pull up a webpage on the console. Is there an easy way to get development pages to show up on a web browser that's not local?
So there are a few options.
1. Use mod_passenger with Apache for all your environments. This requires quite a bit more work, but it's the way you'd want to do it for any larger environment. Google will teach you all you want about this, but I recommend it only when your environments are mature and you need to scale out production.
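The install itself is just a couple of commands (a sketch; the installer walks you through building the Apache module and prints the config snippets to add):
$ gem install passenger
$ passenger-install-apache2-module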
2. Tell Apache to pass http://dev.yourdomain.com/ to localhost port 3000. You'd need to set up a dev hostname in DNS and point it at the server as well. It would look like this in the Apache config:
<VirtualHost *:80>
ServerAdmin webmaster@domain.com
ServerName dev.domain.com
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
Or, to proxy just a sub-path instead of the whole vhost:
ProxyPass /foo http://127.0.0.1:3000
ProxyPassReverse /foo http://127.0.0.1:3000
more on mod_proxy here: http://httpd.apache.org/docs/2.0/mod/mod_proxy.html
3. Use an SSH tunnel to forward a local port to the Rails server:
ssh -L 3000:127.0.0.1:3000 host.server.com
Then in your browser surf to http://127.0.0.1:3000
For speed, I'd probably do the last option to test that things are working, and move up towards #1 as your environment gets more established. #1 is the most work (mod_passenger can be a PITA).
A friend asked:
If I bought a wildcard certificate for *.domain.com, wouldn't that cover *.sub.domain.com?
Hrm... I had to look that one up. The answer is: no, not according to the RFC. RFC 2818 states:
Matching is performed using the matching rules specified by [RFC2459]. If more than one identity of a given type is present in the certificate (e.g., more than one dNSName name, a match in any one of the set is considered acceptable.) Names may contain the wildcard character * which is considered to match any single domain name component or component fragment. E.g., *.a.com matches foo.a.com but not bar.foo.a.com. f*.com matches foo.com but not bar.com.
more here: http://www.ietf.org/rfc/rfc2818.txt
There are reports that older versions of Firefox don't complain when encountering an out-of-spec sub-domain SSL wildcard, but IE does. I recommend sticking with the RFC spec.
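A handy way to check what name a server's certificate actually presents (standard openssl commands):
$ echo | openssl s_client -connect sub.domain.com:443 2>/dev/null | openssl x509 -noout -subject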
In previous years, when I had to run commands on hundreds or thousands of servers, I'd hack together a combo of expect and perl. It would almost always evolve into a system of complicated config and command files that was rickety and didn't handle errors well. I didn't dare mess with multi-threaded perl, which meant it was serial execution and slow for large clusters. It got the job done but left me wishing for a better system. I have always had cfengine in my sysadmin toolbox, but it's more about entropy reduction and not set up for one-off or occasional situations. I tried a few parallel shell implementations (such as dsh, pdsh) and found them all lacking.
Enter Capistrano. It bills itself as an 'easy deployment' system, with Ruby on Rails application deployment as the main use case. Since I'd never worked in a RoR environment before, I had no real reason to look into it much. But in the last 3 months I have worked at 2 different companies that use RoR + Capistrano for deployment, and I have learned enough to see its true power. How I'd describe it to a fellow sysadmin: "parallel execution of scripts and commands on multiple hosts... easily". Want to quickly execute a command on every host in your cluster? This is the way to do it.
Installing it is pretty easy... you need a modern version of Ruby, a modern version of RubyGems, and then one 'gem install capistrano' later, you're good to go. You only need to install all of this on the controlling/deployment server, not on all your clusters/nodes. If you get errors with the version of ruby/gems that comes with your distro, install from source (recommended). I followed this tutorial to get it set up and to learn the basics. You should read it as well. It skips a few necessary things (such as sudo and useful ENV variables), which I cover below.
An example Capfile of how to restart apache on a whole cluster:
role :apache_cluster, "www1", "www2", "www3"

desc "restart apache on www hosts"
task "restart_apache_www", :roles => :apache_cluster do
  sudo "/etc/init.d/apache2 restart"
end
sudo is a built-in method in modern versions of Capistrano. Instead of the 'run' method, you use 'sudo', and it understands and responds to the password prompt (if prompted). Very slick. One thing to keep in mind is that it is running everything as YOU, unless otherwise specified. It will look like you logged into 50 servers all at once and ran sudo commands all at once. I bet that'd look cool on a Splunk log graph.
Now, from the command line, type 'cap -T' to get a list of your documented commands. As long as you describe your commands, you will always get a list of what you can run. 'cap -e command' will explain commands.
$ cap -T
cap invoke             # Invoke a single command on the remote servers.
cap restart_apache_www # restart apache on www hosts
cap shell              # Begin an interactive Capistrano session.
Run the command we set up: 'cap restart_apache_www'. It will prompt for your password.
$ cap restart_apache_www
  * executing `restart_apache_www'
  * executing "sudo -p 'sudo password: ' /etc/init.d/apache2 restart"
    servers: ["www01.domain.com", "www02.domain.com", "www03.domain.com"]
Password:
    [www01.domain.com] executing command
    [www03.domain.com] executing command
    [www02.domain.com] executing command
 ** [out :: www01.domain.com]  * Restarting web server apache2
 ** [out :: www01.domain.com]    ...done.
 ** [out :: www03.domain.com]  * Restarting web server apache2
 ** [out :: www03.domain.com]    ...done.
 ** [out :: www02.domain.com]  * Restarting web server apache2
 ** [out :: www02.domain.com]    ...done.
    command finished
And that was completed in parallel, in about 1 second. What if you have a one-off thing you want to run on all hosts? Try 'cap invoke', no Capfile required. If you have a Capfile with hosts defined, it will run against all of them by default, or it can take a role by passing ROLES as an env variable.
$ cap COMMAND=uptime HOSTS="www1,www2" invoke
  * executing `invoke'
  * executing "uptime"
    servers: ["www1", "www2"]
Password:
    [www1.prod] executing command
    [www2.prod] executing command
 ** [out :: www1] 16:57:04 up 190 days, 4:30, 0 users, load average: 0.30, 0.33, 0.33
 ** [out :: www2] 16:57:04 up 190 days, 4:42, 0 users, load average: 0.42, 0.32, 0.32
    command finished
and
$ cap ROLES=www COMMAND=uptime invoke
  * executing `invoke'
  * executing "uptime"
    servers: ["www1", "www2", "www3"]
Password:
    [www1] executing command
    [www2] executing command
    [www3] executing command
 ** [out :: www1] 17:00:17 up 190 days, 4:33, 0 users, load average: 0.54, 0.37, 0.34
 ** [out :: www2] 17:00:17 up 190 days, 4:46, 0 users, load average: 0.18, 0.27, 0.29
 ** [out :: www3] 17:00:17 up 190 days, 5:02, 0 users, load average: 0.17, 0.22, 0.25
    command finished
But every time you 'invoke', you must re-type your password. Want to stay connected? Try the shell:
$ cap shell HOSTS="www1,www2"
  * executing `shell'
====================================================================
Welcome to the interactive Capistrano shell! This is an experimental
feature, and is liable to change in future releases. Type 'help' for
a summary of how to use the shell.
--------------------------------------------------------------------
cap> uptime
[establishing connection(s) to www1, www2]
Password:
 ** [out :: www1] 17:03:24 up 190 days, 4:36, 0 users, load average: 0.29, 0.32, 0.32
 ** [out :: www2] 17:03:24 up 190 days, 4:49, 0 users, load average: 0.35, 0.30, 0.29
cap> w
 ** [out :: www1] 17:03:37 up 190 days, 4:36, 0 users, load average: 0.24, 0.31, 0.31
 ** [out :: www1] USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
 ** [out :: www2] 17:03:37 up 190 days, 4:49, 0 users, load average: 0.30, 0.29, 0.28
 ** [out :: www2] USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
cap> ls /tmp/blah
*** [err :: www1] ls: cannot access /tmp/blah
*** [err :: www2] ls: cannot access /tmp/blah
*** [err :: www1] : No such file or directory
*** [err :: www2] : No such file or directory
error: failed: "sh -c 'ls /tmp/blah'" on www1,www2
Notice that errors show up with an 'err' line.
While useful and timesaving, this barely scratches the surface of the power of Capistrano. I suggest you read the "From the Beginning" doc on the Capistrano site. If you discover any cool recipes, share them in the comments on this blog and I'll publish them as a followup later (as I learn more recipes myself).
P.S. I think I'm going to add a feature to MachDB to export host lists in a format compatible with the Capfiles.
This is something I've needed at various jobs/situations for years... a place to store the root/router/database/web passwords that only I can see. There are a lot of desktop/handheld apps for this, but I always feel like I could lose the computer/handheld it's on and I'd be boned. I'd rather have something I can stick on a server somewhere and access via a remote shell... or carry around on a thumb drive. Here are the scripts:
encrypt.sh
#! /bin/sh
openssl bf -a -salt -in $1.txt -out $1.bf && rm -v $1.txt
decrypt.sh
#! /bin/sh
openssl bf -a -d -salt -in $1.bf
To use it, create a file named blah.txt that has your secret info in it. Run the encrypt script first:
$ ./encrypt.sh blah
enter bf-cbc encryption password:
Verifying - enter bf-cbc encryption password:
removed `blah.txt'
It will encrypt the file and remove it. Check the contents of the file:
$ cat blah.bf
U2FsdGVkX1/+ZGiXPSZX8MED9aXrm1NfIEjpv5vvFKo=
It's actually base64 encoded, so you can email it to yourself for safekeeping if you so choose.
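You can see this for yourself: decoding the base64 just reveals OpenSSL's 'Salted__' header followed by the (still encrypted) data, not your secrets:
$ openssl base64 -d -in blah.bf | head -c 8
Salted__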
To decrypt for reading:
$ ./decrypt.sh blah
enter bf-cbc decryption password:
secret host: secret password
secret host2: secret password2
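To change the contents later, the workflow is decrypt to a file, edit, re-encrypt (the password prompt goes to the terminal, so the redirect only captures the cleartext):
$ ./decrypt.sh blah > blah.txt
$ vi blah.txt
$ ./encrypt.sh blah
Just be aware that the cleartext blah.txt exists on disk while you're editing it.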
Now take the encrypted output file and the 2 scripts, email them to yourself, and store a copy on a thumb drive. :)
Telemarketers, vendors and people I'd rather not communicate with frequently intrude on my early morning slumber (especially East Coast vendors), meetings, lunches, free time and life in general. And since they usually call from unrecognized numbers, I feel compelled to answer (it could be something important, right?). A co-worker and I have been using a neat technique to take away these individuals' ability to interrupt us: create a new contact called "Do Not Answer" with a custom silent ringtone. Each time they call from a new number, add it as an additional number on that contact. And with that silent ring, now they can't interrupt you in meetings, at home, early in the morning, etc.
I used iTunes to make a silent ringtone... you can download it here: iPhone Silent Ringtone
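If you'd rather make a silent ringtone on the command line instead of in iTunes, ffmpeg can generate one (a sketch, assuming you have ffmpeg installed; iPhone ringtones are just AAC/MP4 files with a .m4r extension):
$ ffmpeg -f lavfi -i anullsrc=r=44100:cl=mono -t 30 -c:a aac silent.m4a
$ mv silent.m4a silent.m4r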