Second SQL Server blogging [Feb. 6th, 2013|02:43 pm]
John
More SQL blogging. This one should be a lot "crunchier". If any computer-geek friends would like to review it from the perspective of "wow, I don't know SQL Server, but, reading this, I (feel I learned important stuff/was bored to tears)", I'd appreciate it. Here, especially, I'd love to have someone say "I felt like I could have understood X, but I don't, can you explain better?" so I know if I'm going over the head of an informed layperson.

Now, I just said I'd appreciate this - but this is more like "here's a story of mine, if you want to read it." You don't have to, I won't feel you're a bad friend if you don't. But I'd like to learn if I've got the right touch for blogging about this. If I can't even write a couple of quick articles on fundamentals, SQL blogging is probably not for me.


What is logging, and why do you care?

Okay, so, you want to learn more about SQL Server. Good for you. Let's delve into one of the most fundamental pieces: logging. I don't mean the writing of important messages to the SQL Server log (by default, named "ERRORLOG" - but it includes non-error messages, too). I mean the logging of database activity, called "transactions". These are logged in what's called the "transaction log" of each database.

What's the use of a transaction log? One of the biggest uses is data protection. If you log a proposed change, make sure the log is "hardened" (physically written to disk), and only then make the change, then if anything goes wrong during the change, you know what was supposed to happen. You can check whether it actually happened that way - and if not, you can fix it. So if power goes out, or the OS crashes, and the database server is restarted, it knows what changes might have been in progress, and can fix the data so it's in the appropriate state.

Another use for logging transactions is allowing you to group a set of commands into a single transaction, where everything will happen, or nothing will happen. So, if a bank customer pays for an item with a check, money is paid to the check holder, and removed from the customer's account - and unless both pieces succeed, the whole transaction fails and is rolled back. This way, you never end up with only the payment made, or only the money removed - you either get both, or neither, leaving the accounting in a consistent state.
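
If you're curious what that looks like in T-SQL, here's a minimal sketch - the table, columns, and amounts are all made up for illustration:

    -- Hypothetical accounts table; the pattern is what matters.
    DECLARE @CustomerID int = 1, @PayeeID int = 2, @Amount money = 500.00;

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Pay the check holder...
        UPDATE dbo.Accounts SET Balance = Balance + @Amount
        WHERE AccountID = @PayeeID;

        -- ...and remove the money from the customer's account.
        UPDATE dbo.Accounts SET Balance = Balance - @Amount
        WHERE AccountID = @CustomerID;

        COMMIT TRANSACTION;  -- both changes become permanent together
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- any failure undoes both changes
        THROW;  -- re-raise the error (SQL Server 2012 and later)
    END CATCH;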

This also allows you to make provisional changes. Imagine an online retailer that has many warehouses. A customer places an order. This might be several sales recorded, several inventory adjustments, maybe a warehouse selection (or multiple selections - perhaps the warehouse will change based upon the specific items in the order, to minimize the number of necessary shipments), maybe a special discount loaded from the customer's account record... oops! The customer decides to cancel the transaction and empty the shopping cart. No biggie - just roll back everything, and you know you're all cleaned up.

Or, if you're a DBA like me, when you're making configuration changes, or the rare, but occasionally necessary, back-end data changes, you'll wrap everything up in a transaction and make doubly sure that the changes are correct before committing[1] them.
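
My usual pattern looks something like this (table and column names are hypothetical):

    BEGIN TRANSACTION;

    UPDATE dbo.AppConfig
    SET SettingValue = 'NewValue'
    WHERE SettingName = 'SomeSetting';

    -- Look before you leap: inspect the result while it's still provisional.
    SELECT SettingName, SettingValue
    FROM dbo.AppConfig
    WHERE SettingName = 'SomeSetting';

    -- If it looks right:
    COMMIT TRANSACTION;
    -- If it doesn't, run this instead:
    -- ROLLBACK TRANSACTION;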

There's another subtle bit to the transaction log. Everything that's logged has an LSN - a Log Sequence Number. This means that there's a kind of running counter across all logged database changes. (Most changes are logged, but some are not. From now on, I'll talk about "changes" and let you assume they're logged, unless I mention that they're not.) This lets you track database history. That becomes important in backups, restores, and in certain high availability and disaster recovery scenarios. For example, if you have a copy of your database, SQL can tell if it's synchronized with the original by checking the LSNs of completed and active transactions - if they match, then you can roll back all active transactions on both, and they're perfect copies. (Okay: they're perfect copies, unless one has data corruption, or unless someone has shut down SQL, made modifications to the data via hex editing of the data file(s), and then restarted SQL.)
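
You rarely need to look at LSNs directly, but you can see them in the backup history SQL Server keeps in the msdb database:

    -- Each backup records the range of LSNs it contains.
    SELECT database_name, type, backup_start_date, first_lsn, last_lsn
    FROM msdb.dbo.backupset
    ORDER BY backup_start_date DESC;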

These LSNs are intimately tied to SQL Server's backup and recovery models, so let's talk about those next.

SQL Server has three "recovery models". These determine two things: first, they determine how the transaction log is used and re-used, and second, they determine how you can restore a backup of a SQL Server database.
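
You can check and change a database's recovery model easily (SalesDB is a made-up name):

    -- See which recovery model each database uses.
    SELECT name, recovery_model_desc FROM sys.databases;

    -- Change it.
    ALTER DATABASE SalesDB SET RECOVERY FULL;  -- or SIMPLE, or BULK_LOGGED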

The transaction log is like a whiteboard. Imagine scribbling notes about data changes on a whiteboard before you make the changes. Once a change is absolutely committed, written to disk and done with, you no longer need the notes you scribbled, right? You can't roll the change back, and you know the disk changes have been made permanent, so you don't need to save the notes. So what do you do with them?

The simplest recovery model is called "Simple". When you're using the Simple recovery model, once a transaction has been committed, SQL Server marks the transaction as complete. And then, later, it will essentially erase the whiteboard section holding it. This is called "truncating" the transaction log. That's a really bad name - people expect the transaction log to become smaller, but it doesn't. It is, in fact, exactly like a whiteboard in this respect - when you "truncate" the transaction log, you've just erased the unnecessary sections of it, so they can be re-used. There's no change to the size of the transaction log file, there's just a change to the amount in use.
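
You can watch this for yourself: DBCC SQLPERF(LOGSPACE) shows both the size of each database's log file and the percentage actually in use. When the log is truncated, the percentage drops while the size stays put.

    -- Log file size vs. how much of it is currently in use, for every database.
    DBCC SQLPERF(LOGSPACE);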

This wiping of old transactions happens at what's called a "checkpoint". I'll explain what that is in a bit - it's important, and is intimately involved with the transaction log.

Now: the problem with the Simple recovery model is that every time a transaction is over and done with, the transaction log data gets thrown away. That means that the only way to restore anything is by restoring a full backup, or a differential backup.

(A full backup is a complete dump of all data in the database. A differential backup is a dump of all changes since the last full backup.)
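
In T-SQL, those backups look like this (the database name and paths are made up):

    -- Full backup: a complete copy of the database.
    BACKUP DATABASE SalesDB TO DISK = 'D:\Backups\SalesDB_full.bak';

    -- Differential backup: everything changed since the last full backup.
    BACKUP DATABASE SalesDB TO DISK = 'D:\Backups\SalesDB_diff.bak'
        WITH DIFFERENTIAL;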

This isn't horrible, in some applications. Imagine a database holds an inventory of your office supply cabinet. Well, if that database crashes, and you can't restore it to the point in time when it crashed, you can rebuild the data by taking a new physical inventory of the cabinet. That's a pain, of course, but on a properly maintained server, it's rare to lose data, so you can probably afford to take the chance. You can probably just back up the database with a full backup once a week (or even once a month!) and take differentials every day that you don't take a full. Because, really, you probably don't care about when you used a particular ream of paper... you only care about whether the ream was still there as of 6pm one particular night.

However, there are other applications that require better data tracking. A sales database is a prime example. If your server crashes at 12:37am, you might need to be able to restore it to 12:37am, exactly, and if you can only restore it to 12:00:01am[2], you might find it necessary to update your resume.

Here's where the transaction log and use of the LSNs can save your job. There are two other recovery models: there's "FULL", and "BULK-LOGGED".

These recovery models are identical except under certain circumstances. For most transactions, SQL Server logs the transactions in the transaction log, and then, when they're completed and written to disk, SQL marks them as complete - but it doesn't truncate the log. The completed sections just sit there, marked as complete.

When you take a transaction log backup, then SQL Server truncates the transaction log - any transactions that were committed, and backed up, are erased so that the space can be re-used[3].
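
A transaction log backup is its own command (again, the name and path are made up):

    -- Back up the log; committed, backed-up portions can then be re-used.
    BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_log.trn';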

Let's think about this for a moment, and let's go back to the idea of a whiteboard.

With the Simple recovery model, you need a transaction log/"whiteboard" big enough to hold all of the transactions that are in progress at one time[4]. As soon as each one is completed and written to disk, you can count on SQL erasing that section of the whiteboard.

With the Full and Bulk-logged recovery models, you need a transaction log big enough to hold all transactions that occur, or are in progress, between transaction log backups. So if you run transaction log backups every hour, you need a transaction log file big enough to hold an hour's worth of transactions for that database. And you need to take backups often enough to keep your transaction log from filling up.

Wait, you say. Isn't that the same thing, said two different ways? No. See, let's assume you have a big, busy database that will perform two gigabytes of transactions in an hour.

Well, if you have a four-gigabyte transaction log that can grow to a six-gigabyte maximum, then after two hours without a transaction log backup the log is full and has only two gigabytes of growth room left - you need to know you can find and fix the problem in under an hour, before the log fills completely.

Simultaneously, if you take transaction log backups every hour, you need to have a transaction log big enough to handle anything that might happen in that hour... and you should remember that it's possible for some actions (a massive data load, or a huge index rebuild) to take multiple hours to complete. So you might need a larger transaction log than you'd think. (This also holds true for the Simple recovery model - any large transaction can require a lot of transaction log space. But with Full, you have to wait for the next log backup to occur after the large transaction finishes. With Simple, you just have to wait for the next checkpoint - and SQL Server is likely to run a checkpoint after a large transaction completes.)

Okay: back to discussions of recovery models. As I was saying, you need to back up the transaction log when using Full and Bulk-Logged. (In the Simple recovery model, you can't back up the transaction log. Even if you could, it would be meaningless, because there'd be no guarantee that the information you collected was useful.)

A transaction log backup typically contains all data changes recorded in the transaction log, plus the LSNs involved.

Because of this, if you had the first full backup of a database, and all transaction log backups taken since then, you could completely restore the database to its current state, no matter how much time has passed.

With the Full recovery model, you can also restore the database to a point in the middle of a transaction log. You can designate a stopping point: either a specific LSN (if the LSN was marked - don't worry about what this means right now, just remember that you can't stop at any arbitrary LSN), or a specific time, since the transaction log backup knows when each transaction completed. So it can restore all transactions that committed before a certain time, or all transactions that committed before a particular marked transaction.
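
A point-in-time restore looks something like this sketch (the names, paths, and timestamp are all made up):

    -- Restore the full backup, leaving the database ready to accept log restores.
    RESTORE DATABASE SalesDB FROM DISK = 'D:\Backups\SalesDB_full.bak'
        WITH NORECOVERY;

    -- Replay the transaction log, stopping just before the disaster.
    RESTORE LOG SalesDB FROM DISK = 'D:\Backups\SalesDB_log.trn'
        WITH STOPAT = '2013-02-06 00:37:00', RECOVERY;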

Bulk-logged is almost the same. Bulk-logged is a special mode intended to be used during bulk operations. Bulk-logged works just like Full if there are no bulk operations (like BULK INSERT, or the use of BCP - these are tools designed for moving lots of data into (or out of) a database). If there are bulk operations, the database will employ a technique called minimal logging. Instead of logging all data changes in the bulk operation, it only logs which data pages (data in a database is organized into 8 KB pages) were touched by the operation. So, during the bulk operation, instead of having to log all data changes (before and after values), it just has to say "and we did a bulk operation here." A lot less data is written to the log file, so the task can complete faster (remember, all changes have to be logged - less logging means less time spent waiting for the log to be hardened), and it requires less log file space.

(Requiring less log file space may not seem like much, and it isn't much - if your log file is large enough to hold the logging for the bulk operation. But if it's not, then the log file would have to grow, and log file growth is a very expensive operation.)
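
For the record, a bulk operation looks like an ordinary load command - here's a sketch of BULK INSERT with a made-up table and file (the TABLOCK hint is one of the conditions for minimal logging):

    -- Bulk-load a CSV file; under Bulk-logged, this can be minimally logged.
    BULK INSERT dbo.SalesStaging
    FROM 'D:\Loads\sales.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);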

There's a trade-off for this minimal logging. If there are any bulk operations performed during the time covered by a transaction log backup, you can't restore to a point in time. You need to restore the entirety of the log backup - it's all, or nothing. Why?

Well, it's pretty obvious once you understand. Remember, when doing bulk operations in the Bulk-logged recovery model, SQL just notes which pages were touched by the bulk operation. So, when you back up the transaction log, it doesn't have enough information to rebuild the data in those pages. Instead, it takes a snapshot of the current contents of those pages, as of the time the log backup was taken. So you can't stop in the middle, because you only have good data for how those pages looked at the very end.

Now, I just ran a little test. I created a Bulk-logged database, performed no bulk operations, and used a transaction log backup to restore to a point in time. It worked. It should have worked. But should you trust it to work? No - if you trust it to work, Murphy's Law will dictate that someone will end up doing some tiny operation that gets recorded as a bulk operation, and you'll lose the ability to restore to a point in time at the worst possible moment.

The best practice is to plan your bulk operations in advance, switch to the Bulk-logged recovery model just until the operations are complete, and stay in the Full recovery model at all other times. It's a bit of a pain - but it's much less painful than finding you can't perform a desperately needed restore.
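
The switch itself is simple; here's a sketch of the whole dance (names and paths made up):

    -- Capture everything up to this moment first.
    BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_log_before.trn';

    ALTER DATABASE SalesDB SET RECOVERY BULK_LOGGED;

    -- ...perform the bulk operation here...

    ALTER DATABASE SalesDB SET RECOVERY FULL;

    -- Back up the log again immediately, so point-in-time restore
    -- capability resumes from here forward.
    BACKUP LOG SalesDB TO DISK = 'D:\Backups\SalesDB_log_after.trn';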

One final interesting bit of trivia: all databases - even those marked as Full or Bulk-logged - effectively use the Simple recovery model until their first full backup is taken. This is because you can't restore a transaction log backup unless you've first restored a full backup, so there's no point in saving transaction log information until there's a full backup taken. So: always plan your first full backup of any new database with this information in mind.
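
You can see what's holding the log back (including waiting for that first full backup, or for a log backup) with:

    -- NOTHING means the log is free to be re-used; LOG_BACKUP means
    -- it's waiting for a transaction log backup; and so on.
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases;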


Wow, this has gotten really long, but I did promise I'd discuss checkpoints, and that requires going over recovery, so let's cover those two topics.

Remember, the point of logging transactions is to log what happens before you make any changes to the data. That way, if something goes wrong while writing the data to disk, you still have a copy of what was supposed to happen. There's another aspect to this. SQL Server never changes data directly on disk - it always makes changes in memory. Now, if you made a change in memory, and then immediately wrote it to disk, you'd waste a lot of time writing to the disk. Some data might change a million times in an hour (think about inventory levels on a busy internet vendor!). Since the changes have all been written to the transaction log, there's no need to copy things to disk immediately. Even if you lost power, you could redo all of the committed changes, right?

In fact, that's what SQL does when a database comes online. It runs a process called "recovery". It scans the transaction log, and sees what was active when the database was last online. It then redoes everything that had been changed and committed, but hadn't yet been written to disk. And then it undoes everything that was written to disk but shouldn't have been - changes from transactions that never committed.

If the server was shut down cleanly, this is very fast - the undo/redo was already done during shutdown.

But while we always hope our servers will be shut down cleanly, we know they won't always be. There are crashes, there are power failures, there are people who press the wrong server's power button, etc. Since nothing can happen to a database until recovery is complete, SQL Server wants to keep that as short as it reasonably can. So it has a setting that determines how often it will run a checkpoint.

A checkpoint is when SQL Server flushes all changed data to disk - whether the changes are committed or not! - so that this recovery process can complete in a reasonable period of time. Now, you may wonder why SQL Server doesn't write only the committed changes to disk, instead. Wouldn't that be safer, and eliminate the "undo" period? It might - but there are multiple reasons it doesn't work well. First, it's very expensive to determine whether all changes to a data page are committed (we'd have to look at the transaction log to see if the last LSN is committed or not - for each page!), and if a page is really busy, it might always have uncommitted changes. Second, remember that SQL has to make all changes in memory - if we didn't write uncommitted changes to disk, we couldn't work on data sets bigger than memory. So SQL writes both committed and uncommitted changes to disk.
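
You can even run a checkpoint by hand, and peek at the server-wide recovery interval setting:

    -- Force a checkpoint in the current database right now.
    CHECKPOINT;

    -- The target recovery interval, in minutes (0 means SQL decides).
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = 'recovery interval (min)';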

There are two big side effects of this. First, if you have to undo a large transaction, it can take just as long to undo it as to do it - because it might have to make just as many changes to the disk[5]. Second, if the server crashes during a huge transaction before it gets committed, you might need to wait a long time before that database comes online - the incomplete transaction must roll back before the database can be used.

SQL Server has its ways of estimating how long that will take, and when it decides it is past a certain threshold, it runs the checkpoint process, to flush data to disk, and (for Simple recovery model) to truncate the transaction log. Unfortunately, if there's a huge transaction that might require a massive undo, there's no way for SQL Server to cut the recovery time short.



[1] Finalizing changes is called "committing"; cancelling them is "rolling back".
[2] 12:00 is never am or pm - it's noon or midnight. Yes, I'm nitpicky.
[3] There are other reasons SQL Server might delay truncating the transaction log - Replication, and Mirroring, and some other issues, may prevent the transaction log from being truncated. But all databases in Full or Bulk-logged recovery model need transaction log backups to allow truncation - only some of them need to have other tasks complete.
[4] Technically, you need enough space to hold all transactions that occur between the times that SQL chooses to run a checkpoint. The log is only truncated at each checkpoint.
[5] Disk IO is the slowest operation on a server, and usually takes the vast majority of the time on large operations. So, the same amount of disk IO often (but not always) means the same amount of time.

Comments:
From: (Anonymous)
2013-02-07 02:36 pm (UTC)
I skimmed, but it seems a reasonable casual blogging voice and clear enough. The one thing that i missed is information along the lines of the answers to do i need to configure this or is it on by default? Are there settings that need to be tuned, perhaps due to load levels, to ensure the log is "hardened"? What you have written about recovery is long enough that i would expect that the full answers wouldn't be covered....

If i were reviewing this for posting myself, i would break this into two articles: one a "Transaction log: why do you care?" that would give a little bit more information -- like answers to my questions -- and then a second article about transaction logs and backups (Which appears to be the main body).

For what it's worth -- and i did skim, so take that into account!
From: elainegrey
2013-02-07 02:37 pm (UTC)
And the anonymous comment was mine.
From: johnpalmer
2013-02-07 06:06 pm (UTC)
Thank you; you're probably right, it should be broken into two pieces.

This points to my big weakness as a writer - what I think of as the "engineering". I have a hard time planning and building a structure; I do the beginner's game of starting to write, and hoping that, when I'm done, I did what I wanted :-).

Maybe I'll try doing that planning and see if I can rewrite the whole thing better.
From: kightp
2013-02-09 04:49 pm (UTC)
I'll give you some thoughts face-to-face, but generally, I agree with Elaine that you've got (at least) two articles (or blog posts, if you will) here.

The tone is engaging, and your explanations seem pretty clear to me (as a total lay person); it's just a bit much for a non-specialist to swallow in one gulp.
From: ysabetwordsmith
2013-02-16 12:43 am (UTC)

Okay...

I don't code, but from an editorial perspective, you seem to be writing very clear descriptions in straightforward terms. This is a surprisingly rare skill, so by all means, keep doing it.