Sunday, May 27, 2012

How can I detect and survive being "Slashdotted"?


What's a good way to survive abnormally high traffic spikes?



My thought is that at some trigger, my website should temporarily switch into a "low bandwidth" mode: switch to basic HTML pages, minimal graphics, disable widgets that might put unnecessary load on the database, and so on.



My thoughts are:



  • Monitor CPU usage

  • Monitor bandwidth

  • Monitor requests / minute
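
For the trigger itself, something like this cron-driven sketch is the kind of thing I have in mind; the load threshold and flag-file path are just placeholders:

    #!/bin/sh
    # Hypothetical watchdog, run from cron once a minute.
    # When the 1-minute load average passes the threshold, drop a flag file
    # that the application checks on each request to enable low-bandwidth mode.
    THRESHOLD="4.0"                            # placeholder: tune per server
    LOAD=$(awk '{print $1}' /proc/loadavg)     # current 1-minute load average
    if [ "$(echo "$LOAD > $THRESHOLD" | bc)" -eq 1 ]; then
        touch /var/www/LOW_BANDWIDTH_MODE
    else
        rm -f /var/www/LOW_BANDWIDTH_MODE
    fi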



Edit: I am familiar with options like caching, switching to static content or a content delivery network, and so on as a means to survive, so perhaps the question should focus more on how one detects when the website is about to become overloaded. (Although answers on other survival methods are of course still more than welcome.) Let's say that the website is running Apache on Linux with PHP. This is probably the most common configuration and should allow the maximum number of people to benefit from the answers. Let's also assume that expensive options like buying another server and load balancing are unavailable - for most of us, at least, a mention on Slashdot is going to be a once-in-a-lifetime occurrence, and not something we can spend money preparing for.


Source: Tips4all

Comments:

  1. Install munin to monitor load, memory consumption, etc., and to notify you of overloads.
    Install monit to restart apache2 if it crashes.
    Install nginx as an apache2 frontend; it will massively decrease memory requirements under heavy load.
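
    A minimal monit rule for the apache2 piece might look like this (the pidfile and init-script paths are Debian-style assumptions):

    # Hypothetical /etc/monit/conf.d/apache2
    check process apache2 with pidfile /var/run/apache2.pid
        start program = "/etc/init.d/apache2 start"
        stop program = "/etc/init.d/apache2 stop"
        # restart if Apache stops answering plain HTTP on localhost
        if failed host 127.0.0.1 port 80 protocol http then restart
        # give up if restarts are looping
        if 3 restarts within 5 cycles then timeout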

  2. Rule 1: Don't give anyone the URL.
    Rule 2: Build something so useless that if rule 1 gets broken, nobody will come anyway.

  3. It's worth mentioning that clever caching and low bandwidth modes will be useless if you simply don't have enough bandwidth on your connection, so make sure the connection to your server is fat enough. Don't host it on your home DSL connection, for example.

    I speak from experience of being slashdotted. It's not fun when you can't access the Internet at all because thousands of people are simultaneously trying to download photos of a computer your housemate mounted inside a George Foreman grill. No amount of firewalling will save you.

  4. Here's a rather lengthy but highly informative article about surviving "flash crowds".

    Here's the scenario that their proposed solutions are meant to address:


    In this paper, we consider the question of scaling through the eyes of a character we call the garage innovator. The garage innovator is creative, technically savvy, and ambitious. She has a great idea for the Next Big Thing on the web and implements it using some spare servers sitting out in the garage. The service is up and running, draws new visitors from time to time, and makes some meager income from advertising and subscriptions. Someday, perhaps, her site will hit the jackpot. Maybe it will reach the front page of Slashdot or Digg; maybe Valleywag or the New York Times will mention it.

    Our innovator may get only one shot at widespread publicity. If and when that happens, tens of thousands of people will visit her site. Since her idea is so novel, many will become revenue-generating customers and refer friends. But a flash crowd is notoriously fickle; the outcome won't be nearly as idyllic if the site crashes under its load. Many people won't bother to return if the site doesn't work the first time. Still, it is hard to justify paying tens of thousands of dollars for resources just in case the site experiences a sudden load spike. Flash crowds are both the garage innovator's bane and her goal.

    One way out of this conundrum has been enabled by contemporary utility computing.


    The article then proposes a number of steps the garage innovator can take, such as using storage delivery networks and implementing highly scalable databases.

  5. The basics:


    Don't try to host high-volume sites on Windows unless you are a true Windows guru. It can be done, but it's a time versus cost issue.
    Use static content (i.e., no database queries) everywhere you can.
    Learn about cache-control headers and use them properly for images and other static assets (see the sketch after this list).
    At the very least, use Apache, but if you can, use lighttpd or another high-performance webserver.
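
    For the cache-control point, a minimal far-future-expiry sketch for Apache, assuming mod_expires is enabled (a2enmod expires); the lifetimes are guesses to tune:

    <IfModule mod_expires.c>
        ExpiresActive On
        # images rarely change: let browsers and proxies keep them for a month
        ExpiresByType image/png "access plus 1 month"
        ExpiresByType image/jpeg "access plus 1 month"
        ExpiresByType image/gif "access plus 1 month"
        ExpiresByType text/css "access plus 1 week"
    </IfModule>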


    The real answers:


    Really know your SQL, and spend time analyzing slow queries. Most page loads shouldn't require more than a second of straight queries.
    Determine where your load really is. If it's a media-heavy site, consider hosting content elsewhere (like Akamai, or some other service). If it's a database-heavy site, consider replication.
    Know what kind of replication will work for you. If you have a read-heavy site, standard MySQL master/slave replication should be fine. If you have a lot of writes going on, you'll need some kind of multi-master setup, like MySQL Cluster (or investigate 'cascading' or 'waterfall' replication).
    If you can, avoid invoking PHP at all - i.e. keep a cached static (HTML) copy of the page (which is what most of the WordPress caching plugins do). Apache serves static files much faster than even the simplest hello-world PHP script; a sketch follows below.
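
    A minimal sketch of that whole-page cache in plain PHP; the cache directory, 60-second lifetime, and render_page() helper are all hypothetical:

    <?php
    // Serve a static copy when fresh; otherwise render once and save it.
    $cacheFile = '/var/cache/site/' . md5($_SERVER['REQUEST_URI']) . '.html';

    if (is_file($cacheFile) && time() - filemtime($cacheFile) < 60) {
        readfile($cacheFile);   // cache hit: no further PHP logic or DB work
        exit;
    }

    ob_start();                 // buffer the normal page rendering
    render_page();              // hypothetical function that does the real work
    file_put_contents($cacheFile, ob_get_contents(), LOCK_EX);
    ob_end_flush();             // send the freshly rendered page to the client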

  6. I rewrite all URLs referred from several popular sites so that they are redirected through CoralCDN.

    An example for Apache:

    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /

    # don't redirect crawlers or Coral's own proxy,
    # and honour the coral-no-serve opt-out query string
    RewriteCond %{HTTP_USER_AGENT} !^Googlebot
    RewriteCond %{HTTP_USER_AGENT} !^CoralWebPrx
    RewriteCond %{QUERY_STRING} !(^|&)coral-no-serve$
    # only visitors arriving from these high-traffic referrers are redirected
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?digg\.com [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?slashdot\.org [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?slashdot\.com [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?fark\.com [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?somethingawful\.com [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?kuro5hin\.org [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?engadget\.com [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?boingboing\.net [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?del\.icio\.us [OR]
    RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?delicious\.com
    # send the request to the same path on the CoralCDN mirror
    RewriteRule ^(.*)?$ http://example.com.nyud.net/$1 [R,L]
    </IfModule>

  7. There's simply no way to know whether or not your website will survive heavy loads unless you stress test it. Use something like siege and see where your performance problems lie. Does it grow in memory too quickly? Does it start slowing down with a bunch of concurrent connections? Does it start taking forever to access the database?
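
    For example (concurrency, duration, and URL are placeholders):

    # 50 concurrent simulated users hammering the site for two minutes
    siege -c 50 -t 2M http://www.example.com/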

    Once you know where the performance problems lie, then it becomes a matter of getting rid of them. Unfortunately, it's difficult to go into much more detail than that without knowing more about your particular situation, but keep in mind that you ARE talking about optimizations here. Thus, you should only act when you KNOW there are performance problems.

    And I would argue that you're not necessarily preparing for just a once-in-a-lifetime event. DoS attacks still happen, so it's good to have preparations in place even if your site doesn't get slashdotted.

    The only thing that I can think of off the top of my head that will help you in almost all situations is if you gzip your content. That will save a lot of bandwidth and all modern browsers will support it without too much of a performance problem.
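
    A minimal gzip sketch for Apache 2.x, assuming mod_deflate is enabled (a2enmod deflate):

    # compress text responses; images are already compressed
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
    </IfModule>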

  8. Don't write content or provide a service that may appeal to geeks ;)

  9. The real question is "What is the single most effective way to be Slashdotted?"

    If it's a real problem, redirect the traffic to my site.

  10. I think the premise is wrong: you really, really want to get slashdotted; otherwise you wouldn't have a web site in the first place. A much better question is how you handle the extra traffic. And even that is really two questions:


    How do you technically manage the additional server load?
    How do you greet the new users, so that you can hopefully get some of them to stick around?

  11. If you mean never getting submitted on Slashdot, just write boring non-geek content.

    If you want to withstand the traffic coming in from a Slashdotting, tell us more about your web server... Apache? IIS? Other?

  12. For sites that experience high traffic, Akamai is a good solution for making the site fast, extraordinarily scalable, and reliable in spite of your own infrastructure. Akamai is a (non-free) service which will cache your site at locations around the world. At my last job, our e-commerce catalog was cached via them, and our servers could go down without anybody noticing unless they tried adding to their cart. Also, we had our image servers go down once, and Akamai's caching saved us again.

  13. Never become popular.

    While that would work, it's not really helpful. What you need is infrastructure that can scale on very short notice. Something like Google Gears or Amazon's web services seems ideal for this, since even Slashdot isn't going to overwhelm Google or Amazon. If you want your own server, make sure your network provider isn't going to cut you off at some preset bandwidth limit. And buy enough hardware that you aren't straining just to carry your normal traffic, with no slack left to handle sudden spikes.

  14. There are a number of ways this can be done, or at least helped. Search Google for "slashdot-proof" and you'll find a number of them:


    Slashdot-proof your server with FreeCache - Boing Boing
    Simple Thoughts Blog is now Slashdot Proof


    etc.

  15. I think we just failed that one *grin*

  16. Cache... hard. Record hits, and if a spike occurs, write out a completely static copy of the page being hit, then serve that. Cutting DB queries from 100 to 2 with a good caching system can survive a weak slashdotting, but having any DB queries at all will still result in a dead site under serious load that you aren't prepared for.
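
    A sketch of that hit-recording idea using APC counters; the 600 requests/minute threshold, the snapshot path, and render_page() are assumptions:

    <?php
    // Count hits per minute in shared memory and, past a threshold,
    // snapshot the rendered page so the webserver can serve it statically.
    $key = 'hits_' . date('YmdHi');        // one counter per minute
    apc_add($key, 0, 120);                 // create if missing, expire in 2 min
    $hits = apc_inc($key);

    if ($hits !== false && $hits > 600) {  // assumption: 600 req/min = spike
        // write the static copy; a rewrite rule can then serve it directly
        file_put_contents('/var/www/static/page.html', render_page());
    }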

  17. Increase the level of caching from the DB so that the content might be slightly more out of date but is accessed faster. Naturally, this only applies if the content does not have to be 100% consistent.

  18. Put it in the cloud!

    This probably isn't relevant for personal blogs etc., but for bigger sites cloud hosting will solve this - Amazon EC2, for example. The thing about this strategy is that it can cost you a ton of money.

    On a smaller scale, using a CDN for all your images/static content might help a bit too; again, evaluating the price is important. Amazon S3 is the CDN I hear about the most.

  19. You can also use Nagios to monitor the server's health. Based on your requirements, at certain conditions, you can trigger a prepared SQL file that switches your website into low-bandwidth mode.

    For example, put "UPDATE settings_table SET bandwidth = 'low';" into that SQL file, run it through mysql, and do the opposite when conditions return to normal.
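
    A sketch of the event handler Nagios could invoke; the credentials, database, and table names carry over from the example above and are assumptions:

    #!/bin/sh
    # Hypothetical Nagios event handler: $1 is the service state Nagios passes.
    # Flip the site into low-bandwidth mode on CRITICAL, back to normal on OK.
    case "$1" in
        CRITICAL)
            mysql -u siteuser -psecret sitedb \
                -e "UPDATE settings_table SET bandwidth = 'low';"
            ;;
        OK)
            mysql -u siteuser -psecret sitedb \
                -e "UPDATE settings_table SET bandwidth = 'normal';"
            ;;
    esac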

  20. nearlyfreespeech.net is a semi-cloud, so to speak, and helps a ton in situations like this. As others mentioned above, layered caching helps a lot: pull chunks of information from memcached instead of the database, and put a reverse proxy (or a distributed reverse proxy, a.k.a. CDN - Panther Networks is cheap) in front of your server.

    # count sockets touching port 80 (run as root for the -p process info)
    netstat -plant | awk '$4 ~ /:80\>/ {print}' | wc -l

    This counts the current connections to the Apache server. You can create a CGI script that calculates the total number of connections to the Apache service and issues a warning once it reaches a certain threshold. What to do at that point is another question.
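
    A sketch of that check as a cron script instead of a CGI; the threshold and recipient address are assumptions:

    #!/bin/sh
    # Hypothetical watchdog: warn when port-80 connections pass a threshold.
    THRESHOLD=200                                  # assumption: tune per server
    COUNT=$(netstat -ant | awk '$4 ~ /:80$/' | wc -l)
    if [ "$COUNT" -gt "$THRESHOLD" ]; then
        echo "High load: $COUNT connections on port 80" \
            | mail -s "Load warning" admin@example.com
    fi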

    Hopefully your server is prepared.

  22. Use caching!

    If you're using WordPress (for example), you can use something like WP-Super-Cache. If you're using regular PHP, there are still a number of options, including memcache. Or you can just use regular Squid-style proxy caching.

    Any caching you use will help bulletproof (or slashdot/digg-proof) your site :-)
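
    A minimal memcache sketch using the pecl Memcache extension; the host, key, TTL, and query helper are assumptions:

    <?php
    // Check memcached before hitting the database.
    $mc = new Memcache();
    $mc->connect('127.0.0.1', 11211);            // assumption: local memcached

    $articles = $mc->get('front_page');
    if ($articles === false) {                   // miss: do the expensive work
        $articles = run_expensive_query();       // hypothetical DB helper
        $mc->set('front_page', $articles, 0, 300);  // cache for five minutes
    }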

  23. I know with Digg you can contact them and request they blacklist your site. You can probably do the same with Slashdot.

  24. Make sure all pages you build are static, no database, and don't use images.

    Actually, this place isn't doing THAT bad.

  25. Cache data.

    Unnecessary trips to the database, to display something that renders the same on every load, are what kill a server. Write the output to a file once and serve that instead. Most CMSs and frameworks have caching built in (but you have to turn it on), and rolling your own is not the most challenging task.

  26. Auto-redirect to Coral CDN, unless the request is from Coral CDN itself.

  27. You want to do exactly the opposite of all this advice, right? :)

  28. .htaccess:

    # return 403 Forbidden to any request arriving with a slashdot.org referrer
    RewriteEngine on
    RewriteCond %{HTTP_REFERER} slashdot\.org [NC]
    RewriteRule .* - [F]
