SQL Server Disk Size too large?

If your SQL Server disk usage has grown too large, first narrow it down to the databases:
WITH fs AS (
    SELECT database_id, type, size * 8.0 / 1024 AS size
    FROM sys.master_files
)
SELECT db.recovery_model_desc,
       db.name,
       (SELECT SUM(size) FROM fs WHERE type = 0 AND fs.database_id = db.database_id) AS DataFileSizeMB,
       (SELECT SUM(size) FROM fs WHERE type = 1 AND fs.database_id = db.database_id) AS LogFileSizeMB
FROM   sys.databases db
ORDER  BY 3 DESC, 4 DESC

If the log file is too large, check the recovery model. If the recovery model was set to Full by mistake, set it back to Simple. If it is set to Full intentionally, take a full backup and then back up the log regularly; under the Full model the log cannot be truncated until it has been backed up. If you don't have space for backups, set the recovery model to Simple.
If the recovery model is Simple, shrink the log file.
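The two fixes above can be sketched as follows. The database name MyDb and the logical log file name MyDb_log are placeholders; query sys.database_files for your actual logical file name.

```sql
-- Switch to the Simple recovery model (MyDb is a placeholder name):
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
GO

-- Find the logical name of the log file:
USE MyDb;
SELECT name, type_desc FROM sys.database_files;

-- Shrink the log file to roughly 100 MB (MyDb_log is a placeholder):
DBCC SHRINKFILE (MyDb_log, 100);
GO
```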

Here is more information:
want-to-make-your-database-recoverable-is-it-growing-too-large

If the data file is too large, narrow it down to the table/index level:

CREATE TABLE #temp
(
table_name    SYSNAME,
row_count     INT,
reserved_size VARCHAR(50),
data_size     VARCHAR(50),
index_size    VARCHAR(50),
unused_size   VARCHAR(50)
)

SET nocount ON

INSERT #temp
EXEC sp_MSforeachtable 'sp_spaceused ''?'''

SELECT a.table_name,
a.row_count,
Count(*) AS col_count,
a.data_size
FROM   #temp a
INNER JOIN information_schema.columns b
ON a.table_name COLLATE database_default =
b.table_name COLLATE database_default
GROUP  BY a.table_name,
a.row_count,
a.data_size
ORDER  BY Cast(Replace(a.data_size, ' KB', '') AS INT) DESC

DROP TABLE #temp
go
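As an alternative sketch that avoids the undocumented sp_MSforeachtable, sys.dm_db_partition_stats can report per-table sizes directly; reserved pages are 8 KB each, and rows are counted only from the heap or clustered index (index_id 0 or 1) so secondary indexes don't inflate the count:

```sql
SELECT s.name AS schema_name,
       t.name AS table_name,
       SUM(p.reserved_page_count) * 8 / 1024.0 AS reserved_mb,
       SUM(CASE WHEN p.index_id IN (0, 1) THEN p.row_count ELSE 0 END) AS row_count
FROM   sys.dm_db_partition_stats p
       JOIN sys.tables t ON t.object_id = p.object_id
       JOIN sys.schemas s ON s.schema_id = t.schema_id
GROUP  BY s.name, t.name
ORDER  BY reserved_mb DESC;
```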

If a table is too big, try rebuilding it:

ALTER TABLE table1 REBUILD
GO
Or, if it is a heap, try building a clustered index on it; that rewrites the rows and removes the fragmentation.

CREATE CLUSTERED INDEX ix_testtable_testcol1 ON dbo.testtable ( testcol1 );
GO

Then check the size again (note that sp_spaceused only accepts an object within the current database, not a three-part name):

EXEC sp_spaceused @objname = 'schema.table', @updateusage = 'TRUE'

If it is not the table data, check index fragmentation with sys.dm_db_index_physical_stats:

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') ips
       JOIN sys.indexes i
         ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER  BY ips.avg_fragmentation_in_percent DESC;
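A common rule of thumb (an assumption on my part, not a hard rule): reorganize indexes with roughly 5–30% fragmentation, and rebuild above 30%. Reusing the placeholder names dbo.testtable and ix_testtable_testcol1 from above:

```sql
-- Light fragmentation (~5-30%): reorganize in place, always online
ALTER INDEX ix_testtable_testcol1 ON dbo.testtable REORGANIZE;

-- Heavy fragmentation (>30%): rebuild the index from scratch
ALTER INDEX ix_testtable_testcol1 ON dbo.testtable REBUILD;
GO
```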

 
