
Row Compression in SQL Server 2008

DenSter Member Posts: 8,304
edited 2010-11-22 in SQL Performance
According to the MSFT SQL Server 2008 marketing material, row compression is supposed to help improve performance. I can see how it compresses the empty space out of the database, and judging from Rashed's and Alain's blog entries it can make a huge impact on file size, but I am skeptical about the actual performance benefit.

If you are using row compression, please share your experience here. Did it affect performance in any way?

Comments

  • stryk Member Posts: 645
    Hi,

    well, I have implemented PAGE compression lots of times, as it is more effective than ROW compression (page compression actually includes row compression). The compression rates are really amazing, ca. 80%, meaning a table of 10 GB shrinks down to 2 GB.
    I only compress tables, never indexes, and only those tables whose records are not modified much - classic candidates are most of the NAV "Ledger Entry" tables or posted documents!

    The results - as far as I have experienced - are these:
    1) Reduction of db size = less disk usage = smaller backups (if not using backup compression) = slower growth
    2) Less I/O - physical reads - with certain queries = faster query execution
    3) Avg. CPU time increases by about 5 to 10 percent, depending on the number of compressed tables

    The CPU time is IMHO a minor issue, as NAV/SQL Servers usually have plenty of spare CPU capacity (if the server is correctly sized).
    The I/O improvement affects only certain queries, e.g. when a huge number of records is retrieved.
    Thus, the major benefit is the optimized usage of precious disk space, and secondly of memory cache space.

    So far I have not experienced any negative effects; problems only arise when compressing the wrong tables, e.g. something like "Sales Line".
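
    For reference, a minimal sketch of what such a compression statement looks like (the table name is illustrative - NAV prefixes each table with the company name, so adjust it to your database):

    -- Apply PAGE compression to a single, mostly static table (illustrative name)
    ALTER TABLE [CRONUS$G_L Entry] REBUILD WITH (DATA_COMPRESSION = PAGE)
    GO
    -- Rebuilding WITH (DATA_COMPRESSION = NONE) reverses it again if needed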

    Cheers,
    Jörg
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • David_Singleton Member Posts: 5,479

    =D> =D> =D> =D>

    Great post (as always) Jörg,

    thanks. :thumbsup:
    David Singleton
  • kriki Member, Moderator Posts: 9,096
    And if you use compressed backups while also having page/record compression, the compressed backup is even (a little) smaller than with only compressed backups or only page/record compression.
    And if I remember correctly, the data REMAINS compressed in the SQL Server cache, so this is an extra performance boost: more data stays in memory, so there is even less reading from disk.

    BUT you need at least Enterprise edition for page/record compression...
    And at least Enterprise edition for backup compression on SQL 2008, BUT Standard edition is OK on SQL 2008 R2!
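
    A minimal sketch of taking a compressed backup (database name and path are hypothetical):

    -- Compressed backup; needs Enterprise on SQL 2008, Standard is enough from SQL 2008 R2 on
    BACKUP DATABASE [NAVDemo] TO DISK = N'D:\Backup\NAVDemo.bak'
    WITH COMPRESSION, INIT  -- INIT overwrites an existing backup file of the same name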
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!


  • stryk Member Posts: 645
    Very true!
    And - if I don't err - the data indeed remains compressed on disk AND in the cache (!), and is only decompressed by the CPU when the data is queried.
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • DenSter Member Posts: 8,304
    Thanks for sharing guys, great feedback :thumbsup:

    So if I can summarize these findings:
    • Use Page compression, not Row compression
    • Table compression makes a HUGE difference in file size, and has a moderately positive impact on performance.
    • Compress tables only, not indexes.
    • Only set this on tables with relatively static data. Yes for tables such as ledger entry tables and posted document tables; no for tables such as Sales Header/Line, Purchase Header/Line, journal lines, etc.
    • Use compressed backups

    As long as you follow these simple rules, there are no negatives, only positives. I am starting to think that this is an important argument for customers to use Enterprise Edition.

    Feel free to add or comment.
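
    To verify which tables actually ended up compressed, here is a sketch of a catalog query (plain SQL Server catalog views, nothing NAV-specific assumed):

    -- List all compressed tables/indexes in the current database
    select t.name as table_name, p.index_id, p.data_compression_desc
    from sys.partitions p
    join sys.tables t on t.object_id = p.object_id
    where p.data_compression <> 0
    order by t.name, p.index_id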
  • bbrown Member Posts: 3,268
    One of the larger sites I work with moved to a new server and SQL 2008 Enterprise at the beginning of this year. We started using compressed backups at that time. The currently 308 GB (used) database results in a 49 GB backup file. The smaller backup size also allows us to retain more history on the local backup disk. Restores seem a tad slower, as the data must be decompressed, but it is not a sizable difference.

    This thread has started me thinking about using page compression on some of the larger tables. G/L Entry currently has about 150 million records, and Value Entry has 80 million records. The other table I might consider is "G/L - Item Ledger Relation", which has about 130 million records. What I'm looking at are large tables that have records inserted but never updated or deleted. Posted dimension tables might also be considered.
    There are no bugs - only undocumented features.
  • stryk Member Posts: 645
    Here is a little script which generates the TSQL statements to implement the compression:
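    -- Note: this script only PRINTS the ALTER TABLE statements - review the generated output before executing it!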
    set statistics io off
    set quoted_identifier on
    
    declare @tabname nvarchar(128), @statement nvarchar(512), @cnt bigint, @threshold bigint, @method varchar(10)
    set @threshold = 1000000  -- min. number of records in candidate table
    set @method = 'PAGE'  -- define compression method; PAGE or ROW or NONE
    
    declare tab_cur cursor fast_forward for 
      select "name" from sys.tables 
      where "type" = 'U' 
        and ("name" like '% Entry' or "name" like '% Line')  -- Filter on relevant tables
      order by "name"
    open tab_cur
    fetch next from tab_cur into @tabname 
    while @@fetch_status = 0 begin
      set @statement = N'select @cntOUT = COUNT(*) from "' + @tabname + N'"'
      exec sp_executesql @statement, N'@cntOUT bigint OUTPUT', @cntOUT = @cnt OUTPUT
      if @cnt >= @threshold begin
        set @statement = 'ALTER TABLE [' + @tabname + '] REBUILD WITH (DATA_COMPRESSION = ' + @method + ')'  
        print @statement
        print 'GO'
      end
      fetch next from tab_cur into @tabname
    end
    close tab_cur
    deallocate tab_cur
    
    I guess it is self-explanatory. It's not very sophisticated, but it gets the job done. You could use it to create a template, then sort out the tables you don't want to compress (mind the tables you should not compress!).

    AS IS, NO WARRANTY, NO GUARANTEE, NO SUPPORT, USE AT OWN RISK.
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • bbrown Member Posts: 3,268
    A useful stored procedure for estimating compression savings:
    sp_estimate_data_compression_savings
          [ @schema_name = ] 'schema_name'
        , [ @object_name = ] 'object_name'
        , [ @index_id = ] index_id
        , [ @partition_number = ] partition_number
        , [ @data_compression = ] 'data_compression'
    [ ; ]
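
    A hypothetical call against a NAV-style table (the table name is illustrative - NAV prefixes tables with the company name; NULL means all indexes resp. all partitions):

    -- Estimate PAGE compression savings before actually rebuilding the table
    exec sp_estimate_data_compression_savings
           @schema_name = 'dbo'
         , @object_name = 'CRONUS$G_L Entry'  -- illustrative; use your own table name
         , @index_id = NULL                   -- NULL = all indexes (index 1, the clustered index, is the data itself)
         , @partition_number = NULL           -- NULL = all partitions
         , @data_compression = 'PAGE'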
    
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    Here is what I am thinking (from a performance standpoint):

    1. Compression is not going to improve write performance. It should also not hurt it, provided you are careful in selecting which tables to compress. I'd say the best choices are tables that only have new entries appended at the end.

    2. Compression will likely provide the best improvement on tables that have a high ratio of reads.

    Based on the above theories and the database I am considering for compression, here are my thoughts:

    A. While the G/L Entry table is quite large, and a candidate for compression, only 2% of its activity is reads. I would not expect compression to have any positive impact on performance here, but it would still be useful from a size-reduction standpoint.

    B. The Value Entry table is also quite large, but 65% of its activity is reads. This may be a table on which compression does provide some read-performance benefit. The question will be how much.
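
    One way to approximate those read/write ratios is the index usage DMV - a sketch (the counters reset whenever SQL Server restarts):

    -- Reads vs. writes per table since the last SQL Server restart
    select t.name as table_name,
           sum(s.user_seeks + s.user_scans + s.user_lookups) as reads,
           sum(s.user_updates) as writes
    from sys.dm_db_index_usage_stats s
    join sys.tables t on t.object_id = s.object_id
    where s.database_id = db_id()
    group by t.name
    order by sum(s.user_seeks + s.user_scans + s.user_lookups + s.user_updates) desc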

    Thoughts?
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    I did some test compression on a copy of the database mentioned above:

    The resulting space savings were within 10% of those estimated by the "estimate" procedure above.

    Initial compression was fairly quick. G/L Entry (150 million records) took 5 minutes and gave back 80% of its space. CPU and disk utilization were fairly high during this time - all 16 cores were running at 96-98% - but only during the initial compression. During testing of user processes I did not see much load beyond what is normally experienced on this system.

    After compressing a few tables, the database recovered about 25% of its space. This also improved backup time by about 18% and reduced the backup file by about 15%. This was with compressed backups both before and after.

    Encouraging results and definitely an option as system resources (disk space) begin to dwindle. This could extend system useful life.

    On the performance side I had mixed results, but nothing bad. I used a couple of reports for this testing.

    Compression did not seem to help processes that read data in small chunks (even though the table was large). For this example I used the Inventory Valuation report. While the Value Entry table has many records, this report only reads them in small chunks. I saw no difference between uncompressed and compressed.

    On the other hand, it did seem to slightly improve processes that read large numbers of sequential records. For this test I used the G/L Register report with a register containing ~30,000 G/L Entries. The compressed version yielded about a 15% performance improvement.

    I'm undecided on whether I will implement compression in the live systems; probably not at this time, since disk space is currently not an issue. But I will definitely keep this feature in mind.
    There are no bugs - only undocumented features.