Thursday, August 27, 2015

SQL Server 2008 R2 SP3 Patch Failure

I was working on a SQL Server 2008 R2 SP3 patch upgrade on Windows Server 2008 SP2, and on one of the servers it failed with the following error.


 
 
"Attributes do not match. Present attributes (Directory, Compressed) , included attributes (0), excluded attributes (Compressed, Encrypted)"
Exit code (Decimal): -2068578304
Exit facility code:       1204
 
 
Solution
Go to C:\Program files\SQLServer\ and change the properties of the folder, unchecking the compress and archive attribute options.

Once you uncheck this, re-run the patch installation and it should complete successfully. It worked for me without any issues, and I hope it does the same for you.
 
Njoy and Have a Gr8 Day!!


 

Thursday, August 20, 2015

DBCC SHRINKFILE

Since SQL Server 2005 there is additional shrink logic to compact TEXT/IMAGE data, referred to as LOBs (Large Objects). The DBCC shrink code uses the same underlying methods that ALTER INDEX ... REORGANIZE WITH (LOB_COMPACTION = ON) uses to compact the LOB space in the database files.

There are various stages of DBCC shrink activity, which are indicated in the command column of the sys.dm_exec_requests DMV along with progress indications:

SELECT percent_complete, estimated_completion_time, cpu_time, total_elapsed_time, * FROM sys.dm_exec_requests;

Now let's look at the phases of the SHRINK operation:

1. DbccSpaceReclaim: Cleans up deferred allocations and purges empty extents, preparing for data moves.
2. DbccFilesCompact: Moves pages from beyond the target to before the target and truncates the file as required.
3. DbccLOBCompact: Compacts the LOB data.


A target extent is calculated based on the information supplied to the shrink command. If you have a file that is 1GB in size and you specify a target of 750MB, shrink starts from there.

(Diagram: MDF File)
Shrink Steps
1. Cleans up space that is available in the entire file, such as deferred allocations. This helps expose space in the lower portion of the file (BOF - Beginning Of File).
2. Sets up the necessary information to tell the allocation routines that no pages should be allocated beyond 750MB (the target size).
3. Scans all pages from 750MB to 1GB that are marked allocated in the GAM. Data on allocated pages is moved to the lower portion of the file. This can involve index, LOB chain and other updates to move the row. The more rows that are found beyond the target extent location, the more work shrink has to do. Do not compare one file to another, because the data layout is critical to understanding the work that shrink has to perform. The dbcc extentinfo command can be used for this purpose, but be aware of the large volume of output it produces.
4. Once all pages are moved below the target point of truncation, LOBs are compacted, unless TRUNCATEONLY was specified in the SHRINK command (see the sketch below).
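To make the 1GB-to-750MB example concrete, here is a minimal T-SQL sketch; the database and logical file names (YourDatabase, YourDataFile) are hypothetical, so substitute your own.

USE YourDatabase;

-- Shrink the data file to a 750MB target: pages beyond 750MB are moved below the
-- target, the file is truncated, and LOB data is compacted.
DBCC SHRINKFILE (N'YourDataFile', 750);

-- TRUNCATEONLY only releases free space at the end of the file; no pages are moved
-- and the LOB compaction phase is skipped.
DBCC SHRINKFILE (N'YourDataFile', TRUNCATEONLY);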


WARNING: The following trace flag should only be used under the guidance of SQL Server support and may not be supported in the future.

SQL Server 2005 has a trace flag, -T2548 (DBCC TRACEON (2548, -1)), that allows the LOB_COMPACTION work of shrink to be skipped. Enabling the trace flag returns shrink behavior to something similar to SQL Server 2000. Using ALTER INDEX ... REORGANIZE WITH (LOB_COMPACTION = ON) separately is another way of limiting the work that shrink has to perform.
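A minimal sketch of both options; the table name dbo.YourTable and the file name are hypothetical, and the warning above about the trace flag applies.

-- Skip LOB compaction during shrink (global trace flag; use only under support guidance).
DBCC TRACEON (2548, -1);
DBCC SHRINKFILE (N'YourDataFile', 750);
DBCC TRACEOFF (2548, -1);

-- Compact LOB space separately from the shrink:
ALTER INDEX ALL ON dbo.YourTable REORGANIZE WITH (LOB_COMPACTION = ON);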

Progress Indication
The progress indication in sys.dm_exec_requests is based on a grouping of work for DBCC. When a DBCC command starts, it performs an estimate of the work it will be doing. Going back to the 750MB example, part of the estimate will be that the 250MB at the end of the file has to be processed. As each page is interrogated and moved, the progress indication is advanced. However, this is not the only calculation that makes up the progress and estimated-time indicators: for the same shrink, an estimate for LOB_COMPACTION is also made and included. Combining the progress indicators with the command value should help you follow the DBCC progress.
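A minimal sketch combining the command and progress columns while a shrink is running; filtering on the Dbcc prefix just narrows the output to the shrink phases listed earlier.

SELECT session_id, command, percent_complete,
       estimated_completion_time, cpu_time, total_elapsed_time
FROM sys.dm_exec_requests
WHERE command LIKE 'Dbcc%';   -- e.g. DbccSpaceReclaim, DbccFilesCompact, DbccLOBCompact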

Shrink Transaction Behavior
Shrink works in transaction batches of ~32 pages. Once 32 pages are moved below the truncation point, the transaction is committed and a new transaction starts. Previously, shrink kept a transaction open for a long time, which caused the log files to grow because the log cannot be truncated past an active transaction. Processing in batches of 32 pages also makes shrink restart capable: if you cancel/kill a shrink operation, only the current active transaction is rolled back. So you could schedule a shrink operation during a maintenance window and limit its run time to that window, then restart it during the next maintenance window and it will pick up from where it left off. The only disadvantage is that allocations could occur beyond the target once the shrink is terminated; this may require shrink to do additional work for the newly allocated space, but it should be reasonably limited.

The shrink process uses named transactions. For instance, during the DbccSpaceReclaim phase you will see transaction names such as 'DeferredAllocUnitDrop::ReclaimSp', and you can view this activity using sys.dm_tran_active_transactions.
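A minimal sketch for watching those named transactions while the shrink is running; the LIKE filter simply narrows the list to the name mentioned above.

SELECT transaction_id, name, transaction_begin_time
FROM sys.dm_tran_active_transactions
WHERE name LIKE 'DeferredAllocUnitDrop%';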



Move LOB Fragment

Like all other data, when a LOB fragment is found in the region of the file being shrunk, it is also moved below the target. If the LOB chain is in an older SQL Server format, the entire chain is upgraded to the current SQL Server LOB format by invoking 'UpgradeBlob'. The fastest way to upgrade the LOB chains is a series of BULK export and BULK insert operations; otherwise, expect the shrink activity to require more resources and time to complete when it involves LOB compaction. There is an excellent post from Paul on this: Why LOB data makes shrink run slooooowly?



Single Threaded
Multiple shrink operations against the same file/database are not allowed; you will encounter the error shown below. The shrink operation is currently a single-threaded operation and does not run in parallel, unlike DBCC CHECKDB.

Error: File ID 1 of database ID 10 cannot be shrunk as it is either being shrunk by another process or is empty

Effects of Shrinking
1.     I/O contention
2.     Transaction Log file space
3.     Fragmentation
4.     Lock contention, and many other things... :)

Paul has mentioned this in an excellent way here.



Moral Of the Story
The bottom line is that there is a lot of overhead involved in data file shrinking, so it is not recommended as a good practice unless you have no other option and need the space. The information in this post is collected from blogs written by CSS Engineers.






Cheers... Have a Gr8 day!!!





  


Wednesday, July 29, 2015

REQUEST_FOR_DEADLOCK_SEARCH


I don't want to bore readers with repetitive material, so I am not explaining what a deadlock is or how it occurs. Instead, I thought it would be good to cover what the SQL Server engine does internally about deadlocks.

Let me take you through a DMV called sys.dm_os_wait_stats. If you look at the wait types it reports, you will see one called REQUEST_FOR_DEADLOCK_SEARCH. This is the guy we will be focusing on today.
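If you want to see it for yourself, a quick look at the DMV is enough; the columns used below are standard columns of sys.dm_os_wait_stats.

SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'REQUEST_FOR_DEADLOCK_SEARCH';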

 
What is this Doing????

This is basically a sleepless thread that wakes up every five seconds to see if we have any deadlocks. Wow... what a nice guy!!! If it finds any, it terminates one of the sessions. That's right, it KILLS one of the two (or more) sessions involved, to let the other session have all the resources it needs. So we don't have to worry, right?

Well, SQL Server does have to worry a bit, even if it's not going to prison. So how does it decide which transaction to kill?

It decides this by looking at the LOG USED value. Now what is that?

If a session has high transaction log usage (it has done more transactional work), it will have a high LOG USED value. Consider another session with a low LOG USED value: it's obvious that the second one will be cheaper to roll back, because when SQL Server kills a transaction, any work it has done must be rolled back to bring the database to a consistent state. So it chooses the SPID with the lower value and kills it.
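If you want to see the LOG USED values the engine compared, the deadlock graph records a logused attribute for each process. A minimal sketch, assuming the default system_health Extended Events session is running (it is by default on SQL Server 2008 and later), that pulls its ring buffer containing the xml_deadlock_report events:

SELECT CAST(t.target_data AS xml) AS ring_buffer_xml
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON s.address = t.event_session_address
WHERE s.name = 'system_health'
  AND t.target_name = 'ring_buffer';
-- In each xml_deadlock_report, every <process> element carries a logused="..." attribute.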

So what happens when the values are the same?

It uses a sort of LIFO (Last In, First Out) method and kills whichever session came in last.

How it Works????

It wakes up every 5 seconds to check for deadlocks. If it finds any, it follows the process above to decide how to resolve it. However, the next time it wakes up right away, to make sure there isn't a nested deadlock; if there is one, it kills that and goes back to sleep. After that it wakes up at around 4.90 seconds (I believe it's actually 10ms), and it will keep going down to as small as 100ms; that is, it can wake up as often as 10 times per second to deal with deadlock issues.
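As a rough check of the 5-second interval, you can divide the accumulated wait time for this wait type by the number of waits; on an instance with few deadlocks the average usually works out to roughly 5,000ms per wait. A minimal sketch:

SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_ms_between_searches
FROM sys.dm_os_wait_stats
WHERE wait_type = 'REQUEST_FOR_DEADLOCK_SEARCH';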

 

So I hope this was not boring (for some of you) and instead shed some light on this internal thread.

Have a Gr8 Day!! Njoy

Tuesday, July 14, 2015

Operating system error 5(Access is denied.). [SQLSTATE 42000] (Error 3201) RESTORE DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013)


I found this error while I was watching for alerts in my new environment. I know this is a common and rather silly error for senior-level DBAs, but I intend to share it because I think it may at least help newbies...
Issue

The restore job, part of a maintenance plan, was failing with the above error. It was trying to read the backup file from a shared network location. The job was running under the SQL Server service account, and the error was:

Executed as user: xxxx. Cannot open backup device '\\xxx\d$\project\filetransfer\xxx.bak'. Operating system error 5(Access is denied.). [SQLSTATE 42000] (Error 3201)  RESTORE DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.

Solution
Even after giving the service account FULL permissions on the folder, and even on the root (the D drive), it was still failing. But when we changed the way we referenced the backup location, it worked.

RESTORE DATABASE XXX
FROM DISK = '\\XXX\project\filetransfer\xxx.bak';

Since the folder 'project' was itself shared, we omitted the D$ administrative share and used the shared folder name directly; the D$ path was in fact the issue here.
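If you want to confirm the service account can actually reach the backup file before running a full restore, a quick verification such as the one below can help; the UNC path here is a placeholder for your shared folder.

-- Runs under the SQL Server service account, so it exercises the same permissions as the restore job.
RESTORE VERIFYONLY
FROM DISK = '\\ServerName\SharedFolder\YourBackup.bak';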

Moral Of the Story: Backups and restores with network locations are always scary!!! :)



Hope this helps someone...Cheers and Njoy your day!!!