
How Much Logging Goes on for a Clustered Index Drop?

2011 December 19
by Shannon Lowder

A week back, @fatherjack put out a question on #sqlhelp that immediately drew my curiosity. He wanted to know how much his log file would grow if he dropped his clustered index (primary key) and then created a new one. I was thinking it wouldn't write a lot of information to the log file, but I wasn't 100% sure, so I did a little digging.

I hit the MSDN. Looks like my initial thoughts were right…so long as there weren’t any non-clustered indexes pointing to that clustered index. I sent a reply to @fatherjack to see if he had any non-clustered indexes on that table. While I was waiting for his reply I decided to verify what I found in the MSDN.

First, I wanted to set my test database to the FULL recovery model, so SQL Server wouldn't truncate the log entries until I said it was OK to do so.

USE [AdventureWorks]
GO
--set recovery to full
ALTER DATABASE [AdventureWorks] SET RECOVERY FULL;

Then I wanted a simple table with a Primary Key Clustered Index.

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[HumanResources].[Department_test]') AND type in (N'U'))
DROP TABLE [HumanResources].[Department_test]
GO

CREATE TABLE [HumanResources].[Department_test](
	[DepartmentID] [int] IDENTITY(1,1) NOT NULL,
	[Name] [dbo].[Name] NOT NULL,
	[GroupName] [dbo].[Name] NOT NULL,
	[ModifiedDate] [datetime] NOT NULL,
	CONSTRAINT [PK_DepartmentTest_DepartmentID] PRIMARY KEY CLUSTERED 
	(
		[DepartmentID] ASC
	)
) ON [PRIMARY]

GO

Next I wanted some data in that table, say 100,000 rows to start. Later I could crank this up to one million rows.

SET NOCOUNT ON
DECLARE @recordCount INT = 1

WHILE (@recordCount < 100001)
BEGIN
	INSERT INTO [HumanResources].[Department_test]
	(Name, GroupName, ModifiedDate)
	VALUES 
	('name' + CONVERT(VARCHAR(10), @recordcount), 'groupname' + CONVERT(VARCHAR(10), @recordcount), GETDATE())
	
	SET @recordCount = @recordCount + 1
END 	

--verify the count
SELECT COUNT(*) FROM [HumanResources].[Department_test]
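As an aside, a set-based insert fills the table much faster than the row-by-row loop above. A sketch of one way to do it, using a cross join over a system catalog view as a row source (any sufficiently large row source would work; I haven't benchmarked this against the loop here):

```sql
-- Set-based alternative: generate 100,000 rows in a single statement.
-- Assumes master.sys.all_columns cross-joined with itself yields
-- at least 100,000 rows, which it does on any normal instance.
INSERT INTO [HumanResources].[Department_test] (Name, GroupName, ModifiedDate)
SELECT TOP (100000)
	'name' + CONVERT(VARCHAR(10), ROW_NUMBER() OVER (ORDER BY (SELECT NULL))),
	'groupname' + CONVERT(VARCHAR(10), ROW_NUMBER() OVER (ORDER BY (SELECT NULL))),
	GETDATE()
FROM master.sys.all_columns AS a
CROSS JOIN master.sys.all_columns AS b;
```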

OK, I have my table and data. I also need a table to store log entries in, one that will let me do a little analysis on what happened in the log file.

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'LogInfo') AND type in (N'U'))
DROP TABLE LogInfo
GO
CREATE TABLE LogInfo 
	(currentLSN VARCHAR(50),
	Operation VARCHAR(50),
	Context VARCHAR(20),
	TransactionID VARCHAR(20),
	logblockGeneration INT)
GO

OK, I have my test bed. Let's clear out the transaction log to make sure only the entries from the index drop end up in there. (A word of caution: backing up to the NUL device discards the backup and breaks your log backup chain, so only do this on a test database.)

--clear the tran log
BACKUP LOG adventureworks  TO DISK = N'NUL' WITH NOFORMAT, NOINIT, NAME = N'adventureworks-LogBackup', 
SKIP, NOREWIND, NOUNLOAD, STATS = 10
--look at your log, verify it's clear
DBCC LOG (AdventureWorks, 3)

You might see entries left over in your transaction log. Note those that were there before running the rest of this demo. We're going to drop the clustered index now.

BEGIN TRANSACTION DropIndex

IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[HumanResources].[Department_test]') AND name = N'PK_DepartmentTest_DepartmentID')
ALTER TABLE [HumanResources].[Department_test] DROP CONSTRAINT [PK_DepartmentTest_DepartmentID]

COMMIT TRANSACTION DropIndex

Now, dump your transaction log into the LogInfo table we set up before.

BEGIN TRANSACTION LogInsert

INSERT INTO LogInfo
EXEC ('dbcc log (Adventureworks)')

COMMIT TRANSACTION LogInsert

Since I had some records in my LogInfo table from before the drop, I removed those using the following query. (The LSN boundary value is specific to my run; yours will differ, so use the last LSN you noted before the drop.)

DELETE
--SELECT *
FROM LogInfo
WHERE
	currentLSN <= '00000367:0000003b:0001'

Now look at the entries that were generated during the drop index:
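The summary below comes from a simple aggregation over the LogInfo table. I didn't save the exact query I ran, so treat this as a reconstruction:

```sql
-- Count log records by operation and context
-- (a sketch; the original query wasn't preserved):
SELECT
	Operation,
	Context,
	COUNT(*) AS cnt
FROM LogInfo
GROUP BY Operation, Context
ORDER BY cnt DESC;
```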

operation           context            cnt
LOP_SET_FREE_SPACE  LCX_PFS            904
LOP_SET_BITS        LCX_PFS             24
LOP_DELETE_ROWS     LCX_MARK_AS_GHOST   15
LOP_LOCK_XACT       LCX_NULL            11
LOP_EXPUNGE_ROWS    LCX_CLUSTERED        9
LOP_INSERT_ROWS     LCX_CLUSTERED        6
LOP_EXPUNGE_ROWS    LCX_INDEX_LEAF       6
LOP_HOBT_DDL        LCX_NULL             5
LOP_MODIFY_ROW      LCX_CLUSTERED        5
LOP_MODIFY_HEADER   LCX_PFS              4
LOP_MODIFY_ROW      LCX_PFS              4
LOP_INSERT_ROWS     LCX_INDEX_LEAF       2
LOP_BEGIN_XACT      LCX_NULL             1
LOP_ROOT_CHANGE     LCX_CLUSTERED        1
LOP_MODIFY_ROW      LCX_IAM              1
LOP_HOBT_DELTA      LCX_NULL             1
LOP_COMMIT_XACT     LCX_NULL             1

So the index drop clearly generated log records, and we freed quite a bit of space. My instinct was that the number of LOP_SET_FREE_SPACE operations was related to the number of pages in the table, so I tested that hypothesis.

DECLARE @spaceused TABLE ( 
	name SYSNAME
	, [rows] INT
	, reserved VARCHAR(255)
	, data VARCHAR(255)
	, index_size VARCHAR(255)
	, unused VARCHAR(255)
)
	
INSERT INTO @spaceused
EXEC sp_spaceused [HumanResources.Department_test]


SELECT 
	(CAST(REPLACE(data, ' kb','') AS INT) * 1024 )/ 8192 AS dataPages
FROM @spaceused
/*--Results:
DataPages
904
*/

So we learned that dropping a clustered index on a table results in one LOP_SET_FREE_SPACE log entry for every data page in the table. When the MSDN says no additional space is needed, it must be assuming you have enough room in the log file to store a record for each page.
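If you'd rather measure the growth in actual log space than in record counts, you can snapshot log usage before and after the drop. A sketch using the documented DBCC SQLPERF command:

```sql
-- Run once before and once after the DROP CONSTRAINT, then compare.
-- Returns one row per database: Database Name, Log Size (MB),
-- Log Space Used (%), Status.
DBCC SQLPERF(LOGSPACE);
```

The difference in Log Space Used (%) for the AdventureWorks row between the two snapshots gives a rough estimate of the space the drop consumed.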

Looking at the other log entries, there isn't a lot to note. The LOP_INSERT_ROWS operations come from the rows I inserted into the LogInfo table, so we can ignore them. But the other operations (LOP_SET_BITS, LOP_DELETE_ROWS, LOP_LOCK_XACT, and so on) I'm not sure about. Google isn't exactly forthcoming when I go to look up what these operations do.

But I was able to learn that log space will be required when dropping a clustered index. Later I'll research how much more space is needed when non-clustered indexes depend on the clustered index being dropped. I know it will be a significant increase in the number of rows written to the log file; I'm just not sure of the magnitude.

If you're still reading, I think you know why I wrote this blog. I'm studying the internals of the log file in SQL Server. This has been an exercise to see how well I can extract useful knowledge from the log file in order to answer a real-world question. If you're not looking to master SQL Server, this blog has been way more information than you'll ever need. But if you are looking to master SQL Server, let me know. We may be able to help each other learn more!

Let me know what I can do to help!

Responses

  1. CodeNamePapa
    January 6, 2012

    I’m studying for the 70-432, essentially teaching myself, so I’m digging around SEVERAL pages on your site. I really appreciate the time you’ve taken to break down several key concepts.

    Thanks for sharing the code, too. I’m currently just doing backups/restores, etc. in my current position, so it’s good for me to get exposure to questions like this.
