Performance tips for a large user base [closed]

I’m preparing a site with many users (100,000+). Since the output depends on the current user’s relation to some custom post types and taxonomies, a static cache won’t help much.

Besides using a separate table for the users and serving static files from a cookieless domain – what else can I do for performance?

For example: Should I use InnoDB or MyISAM? Hints on indexes?

Update

Obviously, I wasn’t clear enough. Sorry.

All users are logged in. Always. No one else can see more than the start page. The site offers paid material for online courses.

I’m looking for tips related to a large user base only. Basic general performance optimizations like compression, lazy loading of scripts, sprites, etc. are useful, but that’s not what I’m after.

3 Answers

You can use “W3 Total Cache”, which isn’t a static-file cache system. It uses opcode caching, memcached, and object caching to cut page load time. An opcode cache such as APC would be a good addition to your server, as would a lightweight httpd instead of bloated Apache.
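
Once a persistent object cache (memcached behind W3 Total Cache, for instance) is in place, you can also cache your own expensive per-user lookups. A minimal sketch, assuming a hypothetical `get_user_course_ids()` helper, a `course` post type, and enrolment stored in post meta — adapt the inner query to however you actually relate users to courses:

```php
/**
 * Per-user course lookup backed by the WordPress object cache.
 * Hypothetical example: the names 'course' and 'enrolled_user' are assumptions.
 */
function get_user_course_ids( $user_id ) {
    $cache_key = 'course_ids_' . $user_id;
    $ids       = wp_cache_get( $cache_key, 'courses' );

    if ( false === $ids ) {
        // Expensive query: which courses this user has access to.
        $ids = get_posts( array(
            'post_type'   => 'course',
            'fields'      => 'ids',
            'numberposts' => -1,
            'meta_key'    => 'enrolled_user',
            'meta_value'  => $user_id,
        ) );

        // With memcached this survives across requests; without a persistent
        // backend it only lasts for the current request.
        wp_cache_set( $cache_key, $ids, 'courses', HOUR_IN_SECONDS );
    }

    return $ids;
}
```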

Forcing GZIP on users is also considered a good idea: most clients that don’t advertise GZIP support are actually able to handle it, since the Accept-Encoding request header is often stripped by firewalls, proxies, and the like.
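
Normally you would enable compression at the web server, but if you really want to force it regardless of the Accept-Encoding header, here is a rough PHP-level sketch (assuming the zlib extension; the risky assumption is that the few clients that truly can’t decompress gzip will get unreadable output, so test before enabling site-wide):

```php
/**
 * Force gzip output even when the client's Accept-Encoding header was stripped.
 * Sketch only — compression at the httpd level is the usual approach.
 */
function force_gzip_output() {
    if ( headers_sent() || ini_get( 'zlib.output_compression' ) ) {
        return; // Compression is already handled by PHP or the web server.
    }
    header( 'Content-Encoding: gzip' );
    header( 'Vary: Accept-Encoding' );
    // The callback runs once when the output buffer is flushed at shutdown.
    ob_start( function ( $html ) {
        return gzencode( $html, 6 );
    } );
}
add_action( 'init', 'force_gzip_output', 1 );
```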

However, roughly 80% of page load time is generally spent on the front end, so that’s where you’ll want to work. “W3 Total Cache” also concatenates CSS and JavaScript and minifies the files. It is the best option if you’ve properly set up your JavaScript and CSS files to load only on the pages where they are needed. Most sites haven’t, though, so the extra configuration it requires is nothing but annoying. Minification also tends to break things, so I only use the concatenation.
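
Loading scripts only where they are needed is ordinary conditional enqueueing. A minimal sketch, assuming a hypothetical quiz script and stylesheet used only on a `course` custom post type:

```php
/**
 * Enqueue assets only on the pages that use them, so concatenation
 * doesn't bundle dead weight into every page.
 */
function enqueue_course_assets() {
    // Assumption: 'course' post type, with quiz.js and course.css shipped in the theme.
    if ( is_singular( 'course' ) ) {
        wp_enqueue_style( 'course-style', get_stylesheet_directory_uri() . '/css/course.css' );
        wp_enqueue_script( 'course-quiz', get_stylesheet_directory_uri() . '/js/quiz.js', array( 'jquery' ), '1.0', true );
    }
}
add_action( 'wp_enqueue_scripts', 'enqueue_course_assets' );
```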

Serving static files from a cookieless domain will only save a few ms; for real savings in page load, a CDN will save roughly 100 ms per item. Spreading the files across multiple domains also speeds up page load in older browsers, which limit how many concurrent file requests can be made per domain.
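
Pointing uploads at a cookieless or CDN hostname can be done with a filter. A minimal sketch, assuming a hypothetical static.example.com that mirrors this site’s uploads directory:

```php
/**
 * Serve uploaded files from a cookieless/CDN hostname.
 * Assumption: static.example.com serves the same files as wp-content/uploads.
 */
function rewrite_upload_url_to_cdn( $url ) {
    return str_replace( home_url(), 'https://static.example.com', $url );
}
add_filter( 'wp_get_attachment_url', 'rewrite_upload_url_to_cdn' );
```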

You may also want to look into http://smush.it to reduce image file sizes without loss in quality. (https://github.com/icambridge/filesmush is a script for running local files through Smush.it; https://github.com/tylerhall/Autosmush runs images stored on S3 through it.)

InnoDB should be used if your comments vastly outnumber your posts; otherwise MyISAM may actually be faster. (InnoDB’s row-level locking copes better with frequent concurrent comment writes, while MyISAM’s table locks are fine for read-mostly data.)
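
Switching engines is a one-time ALTER TABLE; in practice you’d just run it once from the MySQL client, but here is a sketch of checking and converting the comments table from inside WordPress, assuming the standard `$wpdb` table names:

```php
global $wpdb;

// Check which storage engine the comments table currently uses.
$engine = $wpdb->get_var( $wpdb->prepare(
    "SELECT ENGINE FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = %s AND TABLE_NAME = %s",
    DB_NAME,
    $wpdb->comments
) );

if ( 'InnoDB' !== $engine ) {
    // Convert to InnoDB so heavy concurrent comment writes don't lock the whole table.
    $wpdb->query( "ALTER TABLE {$wpdb->comments} ENGINE = InnoDB" );
}
```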
