According to the MSFT SQL Server 2008 marketing material, row compression is supposed to help improve performance. I can see how it compresses the empty space out of the database, and from reading Rashed's and Alain's blog entries it clearly can make a huge impact on file size, but I am skeptical about the actual performance gains.
If you are using row compression, please share your experience here. Did it affect performance in any way?
Comments
Well, I have implemented PAGE compression lots of times, as it is more effective than ROW compression (PAGE compression actually includes ROW compression). The compression rates are really amazing, ca. 80%, meaning a table of 10GB is shrunk down to 2GB.
I only compress tables, never indexes, and only those tables whose records are not modified much - classic candidates are most of the NAV "Ledger Entry" tables or posted documents!
The results - as far as I have experienced - are these:
1) Reduction of db-size = less disk usage = smaller backups (if not using backup compression) = slower growth
2) Less I/O - physical Reads - with certain queries = faster query execution
3) Avg. CPU time increases by about 5 to 10 percent, depending on the number of compressed tables
The CPU time is IMHO a minor issue, as NAV/SQL servers usually have plenty of spare CPU capacity (if the server is correctly sized).
The I/O improvement affects only certain queries, e.g. when a huge number of records is retrieved.
Thus, the major aspect is the optimized usage of precious disk space, and secondly of memory cache space.
So far I have not experienced any negative effects; problems only arise when compressing the wrong tables, e.g. something like "Sales Line".
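In T-SQL this boils down to one REBUILD per table - just a rough sketch here, with a placeholder table name in the NAV "Company$Table" naming scheme:

-- Sketch: compress the table (heap/clustered index) with PAGE compression.
-- ALTER TABLE ... REBUILD does not touch non-clustered indexes, so they stay
-- uncompressed - which matches the "compress tables only, never indexes" approach.
ALTER TABLE [dbo].[MyCompany$G_L Entry]
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Verify which objects are compressed afterwards:
SELECT o.name AS TableName, p.index_id, p.data_compression_desc
FROM sys.partitions AS p
JOIN sys.objects AS o ON o.object_id = p.object_id
WHERE o.name = 'MyCompany$G_L Entry';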
Cheers,
Jörg
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
Great post (as always) Jorg,
thanks. :thumbsup:
And if I remember correctly, the data REMAINS compressed in the SQL Server buffer cache, so this is an extra performance boost: more data stays in memory, which means even fewer reads from disk.
BUT you need at least Enterprise edition for page/row compression...
And backup compression also needs at least Enterprise edition on SQL 2008, BUT Standard edition is OK on SQL 2008 R2!
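If in doubt about the edition/version of an instance, a quick check with SERVERPROPERTY:

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition')        AS Edition;
-- Row/page compression: Enterprise (or Developer/Evaluation) edition only.
-- Backup compression: Enterprise on SQL 2008; Standard or higher on SQL 2008 R2.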
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
And - if I don't err - the data indeed remains compressed on disk AND in the cache (!) and is only decompressed by the CPU when the data is queried.
NAV/SQL Performance Optimization & Troubleshooting
STRYK System Improvement
The Blog - The Book - The Tool
So if I can summarize these findings:
- Compress only large tables whose records are rarely modified (ledger entries, posted documents), never heavily modified ones like "Sales Line".
- Compress tables only, not indexes.
- Expect roughly 5 to 10 percent extra CPU load, so the server needs spare CPU capacity.
- You save disk space, get smaller backups and make better use of the buffer cache, since the data stays compressed on disk and in memory.
- You need Enterprise Edition (and for backup compression, Standard is only enough from SQL 2008 R2 on).
As long as you follow these simple rules, there are no negatives, only positives. I am starting to think that this is an important argument for customers to use Enterprise Edition.
Feel free to add or comment.
RIS Plus, LLC
This thread has got me thinking about using page compression on some of the larger tables. G/L Entry currently has about 150 million records, Value Entry has 80 million records, and the other table I might consider is "G/L - Item Ledger Relation", which has about 130 million records. What I'm looking at are large tables that have records inserted but never updated or deleted. Posted dimension tables might also be considered.
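Before touching anything I would let SQL Server estimate the savings per table with sp_estimate_data_compression_savings - a sketch only; the table name below is an example (real NAV tables carry the company prefix):

EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'MyCompany$G_L Entry',  -- example name
     @index_id         = NULL,                   -- all indexes
     @partition_number = NULL,                   -- all partitions
     @data_compression = 'PAGE';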
AS IS, NO WARRANTY, NO GUARANTEE, NO SUPPORT, USE AT OWN RISK.
1. Compression is not going to improve write performance. It should also not hurt it, provided you are careful in selecting which tables to compress. I'd say the best candidates are tables that only have new entries added at the end.
2. Compression will likely provide the best improvement on tables that have a high ratio of reads.
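One way to get a feel for that read/write ratio is sys.dm_db_index_usage_stats - rough numbers only, since the counters reset at every instance restart:

SELECT o.name AS TableName,
       SUM(s.user_seeks + s.user_scans + s.user_lookups) AS Reads,
       SUM(s.user_updates)                               AS Writes
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
WHERE s.database_id = DB_ID()
  AND o.type = 'U'                               -- user tables only
GROUP BY o.name
ORDER BY Reads DESC;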
Based on the above theories and the database I am considering for compression, here are my thoughts:
A. While the GL Entry table is quite large and a candidate for compression, only 2% of its activity is reads. I would not expect compression to have much positive impact on performance here, but it would still be useful from a size reduction standpoint.
B. The Value Entry table is also quite large, but 65% of its activity is reads. This may be a table where compression does provide some read performance benefit. The question will be how much.
Thoughts?
The resulting space savings were within 10% of what was estimated by the "estimate" query above.
Initial compression was fairly quick: GL Entry (150 million records) took 5 minutes and was reduced by about 80%. CPU and disk utilization were fairly high during the initial compression - all 16 cores were running at 96-98%. During testing of user processes afterwards I did not see much load beyond what is normally experienced on this system.
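If that CPU spike is a concern on a live box, the rebuild can be throttled - a hypothetical example with an example table name, not what I actually ran:

ALTER TABLE [dbo].[MyCompany$G_L Entry]          -- example name
    REBUILD WITH (DATA_COMPRESSION = PAGE,
                  MAXDOP = 4,                    -- limit the rebuild to 4 of the 16 cores
                  SORT_IN_TEMPDB = ON);          -- move the sort work into tempdb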
After compressing a few tables, the database as a whole gained back about 25% of its space. This also improved backup time by about 18% and reduced the backup file by about 15%; this was with compressed backups both before and after.
Encouraging results, and definitely an option as system resources (disk space) begin to dwindle. This could extend the system's useful life.
On the performance side I had mixed results, but nothing bad. I used a couple of reports for this testing.
Compression did not seem to help processes that read data in small chunks, even though the table is large. For this example I used the Inventory Valuation report: while the Value Entry table has many records, this report only reads them in small chunks, and I saw no difference between uncompressed and compressed.
On the other hand, it did seem to slightly improve processes that read large numbers of sequential records. For this test I used the G/L Register report with a register containing ~30,000 GL Entries; the compressed version yielded about a 15% performance improvement.
At this time I'm undecided on whether I will implement compression in the live systems. I probably will not for now - disk space is currently not an issue - but I will definitely keep this feature in mind.