
Avoiding data normalization with data compression

December 26, 2018 12:02:32 AM GMT
<p>I am going to argue against data normalization, using traffic tracking as an example, and introduce some better approaches to get the job done. I was considering all kinds of titles for this video: "Don't normalize that data", "Intro to DB Compression", "Normalization is not the answer", "Stop wasting your time with normalization", "Traffic tracking for the rest of us", "How I learned to stop worrying and love data compression" […]</p>
<p>The post <a rel="nofollow" href="https://coldfusion.adobe.com/2018/12/avoiding-data-normalization-data-compression/">Avoiding data normalization with data compression</a> appeared first on <a rel="nofollow" href="https://coldfusion.adobe.com">ColdFusion</a>.</p>
Labels: Blog, Learning, CFML, Data, SQL


Good approach!  One thing to note about using data compression in SQL Server: if your table does not have a clustered index on it, compression may not behave as you would expect. When compressing a heap (a table without a clustered index), SQL Server only compresses the data that is currently there and does not compress any future incoming data.  You have to periodically rebuild the heap by removing and reapplying compression.  This is not the case with a clustered index table: once you apply compression, it continues to compress new incoming data. Also, adding or removing compression on a heap forces all nonclustered indexes on the table to be rebuilt behind the scenes so that they have pointers to the new row locations in the heap.  However, on tables with clustered indexes, the table and nonclustered indexes can be compressed and rebuilt independently of each other.
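The two behaviors described above can be sketched as follows; `dbo.Traffic`, page-level compression, and the index name `PK_Traffic` are assumptions for illustration:

```sql
-- On a heap: this compresses only the rows present right now.
-- Rows inserted afterwards stay uncompressed until the next rebuild,
-- so this statement has to be re-run periodically.
ALTER TABLE dbo.Traffic REBUILD WITH (DATA_COMPRESSION = PAGE);

-- On a table with a clustered index (hypothetical name PK_Traffic):
-- compression persists for new rows, and the clustered index can be
-- rebuilt independently of the nonclustered indexes.
ALTER INDEX PK_Traffic ON dbo.Traffic
REBUILD WITH (DATA_COMPRESSION = PAGE);
```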
Comment by Eric Cobb
1517 | December 26, 2018 04:33:22 PM GMT
Interesting. When I ran EXEC sp_estimate_data_compression_savings 'dbo', 'Traffic', NULL, NULL, 'NONE'; I thought that was confirming that the compression was working. Is there a better way to verify that the table compression is working?
Comment by James Mohler
1518 | December 26, 2018 05:20:22 PM GMT
sp_estimate_data_compression_savings just provides an estimate of what the table size would be if you applied the specified compression setting (typically 'NONE', 'PAGE', or 'ROW'). You can run the query below to determine whether or not compression is enabled on a table or index, and whether the object is clustered, nonclustered, or a heap.  How compression "works" depends on whether the table is clustered or a heap.  That's why I posted my original comment: MS isn't always clear on what's going on, and it's hard to know that the same compression settings work differently on each.

SELECT DISTINCT SCHEMA_NAME(o.schema_id) + '.' + OBJECT_NAME(o.object_id) AS TableName,
       i.name AS IndexName, p.data_compression_desc AS CompressionType, i.type_desc AS StorageType
FROM sys.partitions p
INNER JOIN sys.objects o ON p.object_id = o.object_id
JOIN sys.indexes i ON p.object_id = i.object_id AND i.index_id = p.index_id
WHERE p.data_compression > 0
AND SCHEMA_NAME(o.schema_id) <> 'sys'
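Beyond the catalog view query, actual savings can be confirmed by measuring space before and after a rebuild; a minimal sketch, again assuming the dbo.Traffic table from the post:

```sql
-- Record the table's actual size before compression.
EXEC sp_spaceused 'dbo.Traffic';

-- Apply page compression.
ALTER TABLE dbo.Traffic REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Compare: the 'data' column reported here should shrink if
-- compression is actually taking effect.
EXEC sp_spaceused 'dbo.Traffic';
```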
Comment by Eric Cobb
1520 | December 26, 2018 10:16:30 PM GMT
I think you are right on this one. I have updated GitHub to have an index on dbo.Traffic: https://github.com/jmohler1970/DB_Compression Thanks for the feedback.
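A minimal sketch of the change described, assuming an identity-style id column on dbo.Traffic (the actual column names live in the linked GitHub repo):

```sql
-- Add a clustered index so compression carries over to new rows
-- (the column name 'id' and index name are assumptions for illustration).
CREATE CLUSTERED INDEX IX_Traffic_id ON dbo.Traffic (id);

-- Reapply page compression via the new clustered index.
ALTER INDEX IX_Traffic_id ON dbo.Traffic
REBUILD WITH (DATA_COMPRESSION = PAGE);
```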
Comment by James Mohler
1521 | December 27, 2018 02:13:07 AM GMT