Improving SharePoint’s Performance

As SharePoint grows and morphs into your information beast, it can feel like it slows down. This is typical of implementations where the proper optimizations were not applied early on. It's okay: it's still possible to recover and win back some of SharePoint's lost performance.


Performance can be significantly degraded by running on less than Microsoft's published recommended specifications. 4GB of RAM for a SQL server will slow it down no matter what we do. Make sure your hardware meets, or better yet, exceeds Microsoft's recommendations; see Microsoft's published hardware and software requirements for details.

Is your farm big enough?

The first thing to consider is the size of your farm. The rest of this post includes other pointers to improve performance, which apply regardless of farm size. Your farm should never be a single-server farm unless you're using it for development or testing, and even the testing environment should closely mirror your production environment. At minimum, your farm should consist of two servers: one for SharePoint and one for SQL. This minimal configuration can survive with a dozen or two concurrent users. If you're running heavily used sites, you should be considering at least a three-server farm, and possibly some load balancing. Sizing a farm should be taken seriously; there are some great references out there on farm topologies.

A quick note about disk drives.

I mention RAID and drive partitions and such. RAID stands for redundant array of independent disks. This technology allows your server to combine several drives into one large logical drive which can lose a drive (except for RAID0) and keep your data and server running. It magically copies data across multiple drives so if one is lost, there's no loss of data. There are different configurations available, with pros and cons for each: RAID0, RAID1, RAID5, and RAID10 are the more popular ones. Whatever your configuration, the operating system sees your multiple drives as one large drive. For example, RAID1 mirrors data across two identical drives, so if you have two 250GB drives in a RAID1, Windows will only see 250GB. RAID5 requires at least three drives and stripes the data across all of them with parity, so if you have three 250GB drives, Windows will only see 500GB. RAID5 tends to deliver better throughput than RAID1 since data is striped across multiple drives at once, though it pays a small parity-calculation penalty on writes, versus RAID1 where the same data is written twice. I hope that clears some stuff up.
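The usable-capacity arithmetic above can be sketched as a quick PowerShell function (a hypothetical helper for illustration only, not part of any SharePoint or Windows tooling):

```powershell
# Usable capacity (in GB) for common RAID levels, given N identical drives.
function Get-UsableRaidCapacity {
    param([int]$DriveCount, [int]$DriveSizeGB, [string]$RaidLevel)
    switch ($RaidLevel) {
        "RAID0"  { $DriveCount * $DriveSizeGB }          # striping, no redundancy
        "RAID1"  { $DriveSizeGB }                        # mirror: one drive's worth
        "RAID5"  { ($DriveCount - 1) * $DriveSizeGB }    # one drive's worth lost to parity
        "RAID10" { ($DriveCount / 2) * $DriveSizeGB }    # striped mirrors: half the total
    }
}

Get-UsableRaidCapacity -DriveCount 2 -DriveSizeGB 250 -RaidLevel "RAID1"   # 250
Get-UsableRaidCapacity -DriveCount 3 -DriveSizeGB 250 -RaidLevel "RAID5"   # 500
```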

If you haven’t formatted your disks yet, or installed anything of consequence, consider a reformat.

[When formatting your disk] On the “File System Settings” page, select NTFS for the File system, select 64K for the Allocation unit size and type “SharePoint Databases” for the Volume label.  Click the Next button. Changing the allocation unit (i.e. cluster) size from the default 4K to 64K is absolutely necessary at this juncture. This setting cannot be changed later without reformatting the disk! Why is 64K considered a best practice? Because SQL Server reads and writes data 64 KB at a time. Increasing the cluster size reduces fragmentation and the number of times disk space needs to be allocated, which can improve reading and writing speed. Since these disks will be used solely for storing SQL Server files, I’m not concerned about the slack (i.e. wasted) space that can occur when the cluster size is too large.

Thanks to Chris for the tip!
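If you're scripting the server build rather than clicking through the wizard, the same format can be done in PowerShell on Windows Server 2012 or later (the drive letter is an example; adjust for your environment):

```powershell
# Format the SQL data disk NTFS with a 64K (65536-byte) allocation unit size.
# WARNING: this erases the volume -- run only against the new, empty disk.
Format-Volume -DriveLetter F `
              -FileSystem NTFS `
              -AllocationUnitSize 65536 `
              -NewFileSystemLabel "SharePoint Databases"
```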

The database

The database is the most common place where performance hits show up. SharePoint is database intensive: about 95% of SharePoint is stored in a SQL database, including files, images, videos, pages, content, user profiles, etc. It's important to have a happy and healthy SQL database running.

Things to do or check

  • Make sure databases are running on RAID5 or RAID10 partitions. These two RAID configurations provide the fastest throughput for constant read/write activity. If you're on RAID1, consider moving your databases.
  • Check your drive's free space. Make sure you have several gigs available. If free space gets too low, SQL will start acting funny, and slower. If it gets to a gig or under, it'll stop working altogether.
  • Check your content database size and growth properties in SQL Server Management Studio. If these are set too low, then SQL has to resize itself a lot and that can slow it down.
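As a rough sketch, you can pull the current size and growth settings for every database with a query like the following, run from a machine with the SqlServer PowerShell module (the instance name is an example):

```powershell
# List each database file with its current size and growth setting.
# sys.master_files reports size/growth in 8KB pages, hence the * 8 / 1024.
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query @"
SELECT DB_NAME(database_id) AS [Database],
       name                 AS [LogicalFile],
       type_desc            AS [FileType],
       size * 8 / 1024      AS [SizeMB],
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END                  AS [Growth]
FROM sys.master_files
ORDER BY [Database], type_desc;
"@
```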

To check your database properties

  • Connect to the database server using Microsoft SQL Server Management Studio.
  • Right click your database and select Properties.
  • Go to Files on the left.
  • You should now see 2 rows for your database.
  • Select the one with a File Type of Rows Data.
    • This represents your database file, where all the stuff is stored.
    • Note the initial size. If this site is new and doesn’t have much on it yet, set the initial size to an estimated file size for the future; by setting it well above the current size, the database resizes once now instead of repeatedly later. If it’s an existing content database already loaded with data, don’t worry about the initial size.
    • Note the Autogrowth setting. Modify this to be 25%-50%, higher if the site will be loaded with a lot of data soon. Again, this resizes the database once and covers a lot of growth. The default values force SQL to resize frequently, which impedes the server’s performance while it’s resizing.
  • Now select the File Type Log.
    • This represents your transaction log. As items are written to and from the database file, the transaction log records each change, so in the event of a database failure you can technically restore back to the last transaction. This log can grow exceptionally large depending on the traffic on your sites.
    • Following the same rules as above, the initial size should be around 25% of the data file’s initial size.
    • Set autogrowth to 25%-50%, again depending on the amount of data that will be loaded.
  • If possible, these two files should be on different physical disks, not just different logical disk partitions.
    • This might be difficult as most servers have a single RAID container. In larger implementations, if you have additional drives available, separate the data and log files. With each on its own physical disk or disk array, the drives can spin independently of each other, improving performance.
  • Next, go to Options on the left
  • The Collation option should be Latin1_General_CI_AS_KS_WS (SharePoint’s preferred setting).
    • This setting handles how the database treats some finer settings like case sensitivity. If the database was created by SharePoint, you’re fine. If a DBA or someone else made the database first, this might not be set correctly.
  • Click OK.
  • Expand System Databases on the left, and do the same as the above to the tempdb.
    • The tempdb is a temporary workspace where SQL Server handles intermediate data while it waits for other processes to finish. Of all the databases, tempdb benefits the most from RAID10.
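The size and autogrowth changes from the steps above can also be scripted instead of clicked through. A hedged sketch (the database name, logical file names, and sizes are examples; check yours with sp_helpfile first):

```powershell
# Pre-grow a content database's data and log files and set percentage autogrowth,
# so SQL resizes once now rather than repeatedly under load.
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query @"
ALTER DATABASE [WSS_Content]
    MODIFY FILE (NAME = 'WSS_Content', SIZE = 10240MB, FILEGROWTH = 25%);
ALTER DATABASE [WSS_Content]
    MODIFY FILE (NAME = 'WSS_Content_log', SIZE = 2560MB, FILEGROWTH = 25%);
"@
```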

Another consideration for improving database performance is RBS (Remote BLOB Storage); I go into greater detail on RBS in another post.

SharePoint Logs

SharePoint’s logs usually live on the drive it was installed on, and in most cases that’s the C drive of the server. It’s recommended to move these logs to another partition, and preferably a different drive altogether.

  • Go to Central Administration > Monitoring > Configure Diagnostic Logging.
    • Note the top section, Event Throttling. This specifies what SharePoint will log. If any of these have been changed from the defaults, they’ll be shown in bold. The ideal settings for all of them are:
    • Least critical event to report to the event log: Information
    • Least critical event to report to the trace log: Medium
    • In the next section, Event Log Flood Protection, make sure that’s checked.
    • And in the Trace Log section, feel free to move the logs to a different drive, either to free up space on the current drive or to give the logs a dedicated drive. You may limit the number of days or the disk space the logs use per your preference.
    • Click OK.
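The same diagnostic-logging settings can be applied from the SharePoint Management Shell; a sketch (the log path and retention values are examples):

```powershell
# Move the trace logs off the system drive and cap their disk footprint.
Set-SPDiagnosticConfig -LogLocation "E:\SharePointLogs" `
                       -DaysToKeepLogs 14 `
                       -LogMaxDiskSpaceUsageEnabled `
                       -LogDiskSpaceUsageGB 20 `
                       -EventLogFloodProtectionEnabled
```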

Large lists are slow

If you have lists with several thousand records in them, those lists may run slowly as views load. Other lists that use your large lists in a lookup field may also slow down, as they have to process all of those records.

You can enable indexing on fields within your lists to improve lookup speeds and page loads. Go to your list settings; under the column list is a link, Indexed Columns. Select the columns you will be searching against, creating filtered views with, or using in lookups.
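You can also flip the index on programmatically. A sketch using the server-side object model from the SharePoint Management Shell (the URL, list, and field names are examples):

```powershell
# Mark a frequently-filtered column as indexed on a large list.
$web   = Get-SPWeb "http://sharepoint/sites/team"
$list  = $web.Lists["Projects"]
$field = $list.Fields["Status"]
$field.Indexed = $true
$field.Update()
$web.Dispose()   # always dispose SPWeb objects (see Funky Code below)
```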

Keeping it alive

This one is less an issue with SharePoint and more just how the technology works. If SharePoint (rather, the services running SharePoint: IIS) sits idle for a set amount of time, usually 20 minutes, the application pools go to sleep and the cache is cleared. As a result, when a user next accesses SharePoint, the services have to start up and rebuild the cache. This can make an initial page load take several seconds, which to an end user feels like forever. There are neat little warm-up applications available which tickle your sites to keep them awake.
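If you'd rather roll your own than use a warm-up tool, here's a sketch you could run as a scheduled task (the URLs are examples; use an account with read access to each site):

```powershell
# Request each web application's home page so IIS keeps the app pools warm.
# Schedule this to run every 10-15 minutes, inside the idle timeout window.
$sites = @("http://sharepoint", "http://mysite", "http://portal")
foreach ($url in $sites) {
    try {
        Invoke-WebRequest -Uri $url -UseDefaultCredentials -UseBasicParsing | Out-Null
    } catch {
        Write-Warning "Warm-up failed for $url : $_"
    }
}
```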

Funky Code

How many custom features are you using? Does the site seem to slow down when they’re in use? This might be hard to determine, as a highly used site might be hitting custom code left and right. If you’re a developer, or want to tell your developer something, check these out:

  • Make sure all SPSite and SPWeb objects are disposed of cleanly. There’s a free tool, SPDisposeCheck, which will check your compiled solution for you.
  • If you’re adding new items to a list that has thousands of records, don’t simply call SPList.Items.Add(). That loads all of your items into memory before creating the new record. Instead, do something like: SPQuery qryEmpty = new SPQuery() { Query = "0" }; SPList list = web.Lists["Name"]; SPListItem newItem = list.GetItems(qryEmpty).Add(); This loads an empty result set first, then gives you a new item. Also, see the note above under Large lists are slow.
  • Check the status of your workflows and ensure they aren’t looping. This is a common issue when developing custom workflows: if the workflow updates the item it’s running on, it triggers itself again because the item was updated. And around and around you go. Make sure to include some validation in your workflow so an update only happens when it’s actually needed.
  • There are plenty of other good references out there on improving your SharePoint code.

Am I missing some? Please, leave a comment and let me know!!

