
How to Avoid (or Fix) the Transaction Log Full Issue

Wednesday, August 23, 2017

In day-to-day work, users may frequently be presented with an error when a transaction log grows unexpectedly or becomes full.

Errors from the SQL Server error log:

```sql
Error: 9002, Severity: 17, State: 2.
The transaction log for database 'test' is full. To find out why space in the log
cannot be reused, see the log_reuse_wait_desc column in sys.databases.
```

If a user gets this error, the first step is to check the log_reuse_wait_desc column in sys.databases by running the command below and seeing what value appears. When the log file becomes full and the database's recovery model is FULL, the user will have to take a transaction log backup, and from there can reanalyze and update the log backup job. The command below shows each database's current status:

```sql
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
```

In the log_reuse_wait_desc column, NOTHING means there is no open transaction holding up log reuse. If there is an active transaction, the column displays ACTIVE_TRANSACTION. If it displays LOG_BACKUP, users should check the transaction log backup job for the respective database.

Once the log has been backed up, users can use the following script to shrink the log files of all user databases on an instance.

Script to shrink all user databases when each database has a single log file:

```sql
EXEC sp_msforeachdb 'USE [?]; DBCC SHRINKFILE(2, TRUNCATEONLY) WITH NO_INFOMSGS;';
```

Note: this command works fine only if every user database has a single log file (file id 2). Shrinking every user database log one at a time is hectic work, and databases will often have multiple log files on different drives.
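Where the recovery model is FULL, the log backup step described above can be sketched as follows. The database name test matches the error message quoted earlier; the backup path is a hypothetical placeholder:

```sql
-- Back up the transaction log so the inactive portion can be reused.
-- The disk path below is a hypothetical example; adjust it to your environment.
BACKUP LOG [test]
TO DISK = N'D:\Backup\test_log.trn'
WITH STATS = 10;
```

After the log backup completes, the shrink commands in this article can reclaim the freed space inside the log file.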
Script if databases have multiple log files:

```sql
DECLARE @DBName varchar(255)
DECLARE @DBLogicalFileName varchar(255)

DECLARE DATABASES_CURSOR CURSOR FOR
    SELECT DATABASE_NAME = db_name(MaTableMasterFiles.database_id),
           MaTableMasterFiles.name
    FROM sys.master_files MaTableMasterFiles
    WHERE
        -- ONLINE
        MaTableMasterFiles.state = 0
        -- Only look at databases to which we have access
        AND has_dbaccess(db_name(MaTableMasterFiles.database_id)) = 1
        -- Not master, tempdb or model
        AND db_name(MaTableMasterFiles.database_id) NOT IN ('master', 'tempdb', 'model')
        AND type_desc LIKE 'log'
    GROUP BY MaTableMasterFiles.database_id, MaTableMasterFiles.name
    ORDER BY 1

OPEN DATABASES_CURSOR
FETCH NEXT FROM DATABASES_CURSOR INTO @DBName, @DBLogicalFileName

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('USE [' + @DBName + ']; DBCC SHRINKFILE ("' + @DBLogicalFileName + '")')
    FETCH NEXT FROM DATABASES_CURSOR INTO @DBName, @DBLogicalFileName
END

CLOSE DATABASES_CURSOR
DEALLOCATE DATABASES_CURSOR
```

Note: sometimes users will be unable to shrink the log file. If this happens, the user will have to verify the active sessions and communicate with the business owner:

```sql
USE master;
GO
EXEC sp_who 'active';
GO
```

If the business owner says that fixing the error is very important and rolling back the active transaction is not possible, users will have to analyze the free space on the server's other drives and add a separate log file there for the time being, until the database log has enough room to grow. Once the active transaction has completed, users can take a log backup and shrink the log file as much as they want, then empty the new log file or delete it if it is no longer needed.

Proactive monitoring to avoid the log-file-full error:

While working with a client, SmartERP made a script that notifies us whenever a database (including tempdb) has less than 50% unused transaction log space.
Monitoring for this condition proactively can help users discover long-running, transaction-log-consuming queries. Such queries can take a long time to roll back and can result from user errors, for example accidentally deleting many records. While using our monitoring script, we consistently found issues with tempdb: checkpoint behavior only allowed unused transaction log space to be reclaimed at 70% log usage of tempdb. The customer wanted the headroom limit at 50%, so the customer had to be able to distinguish between the "by design" checkpoint at 70% and a troublemaking, long-running transaction preventing log space from being reclaimed. The solution for the customer was to monitor the log_reuse_wait_desc column in sys.databases together with the unused transaction log space. If this column has any value other than CHECKPOINT or NOTHING, long-running, space-consuming transactions in tempdb are responsible for the high log usage. The script below illustrates how to monitor the log_reuse_wait_desc column to relieve transaction log space:

```sql
SELECT name,
       db.log_reuse_wait_desc,
       ls.cntr_value AS size_kb,
       lu.cntr_value AS used_kb,
       CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) AS used_percent,
       CASE
           WHEN CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) > .5 THEN
               CASE
                   /* tempdb special monitoring */
                   WHEN db.name = 'tempdb'
                        AND log_reuse_wait_desc NOT IN ('CHECKPOINT', 'NOTHING') THEN 'WARNING'
                   /* all other databases, monitor for the 50% fill case */
                   WHEN db.name <> 'tempdb' THEN 'WARNING'
                   ELSE 'OK'
               END
           ELSE 'OK'
       END AS log_status
FROM sys.databases db
JOIN sys.dm_os_performance_counters lu ON db.name = lu.instance_name
JOIN sys.dm_os_performance_counters ls ON db.name = ls.instance_name
WHERE lu.counter_name LIKE 'Log File(s) Used Size (KB)%'
  AND ls.counter_name LIKE 'Log File(s) Size (KB)%'
```
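As a quick manual complement to the monitoring query above, SQL Server's built-in DBCC SQLPERF command reports the current log size and the percentage in use for every database on the instance, which is handy for a one-off check before digging into log_reuse_wait_desc:

```sql
-- Returns one row per database with Log Size (MB), Log Space Used (%), and Status.
DBCC SQLPERF (LOGSPACE);
```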
