
Performance slow after optimise table on SQL Server

jordi79 Member Posts: 272
edited 2011-01-11 in SQL Performance
Hi,
I am using SQL Server 2005. I have a table that is used for reporting. This table is very big, and will probably grow to be the same size as a transaction table. It has 29 keys: 1 is the primary key, 20 are sorting keys and 9 are keys with SumIndexes. I have a report that takes about 3 hours to run on this table, and I am in the midst of trying to see what I can do to improve its performance. Some of the suggestions I gathered from searches on this website are:

1) Do a table optimise on this table (this is File --> Database --> Information --> Tables --> Optimise)
--> I have done this, but after the table optimise I found that the report takes longer to complete. What used to take 3 hours took 6 hours, and as of now it is still running. The report only reads from this table. It does some writes too, but to temporary tables like the Excel buffer that it uses for its internal processing.

2) Set the non-SumIndex keys in the table to MaintainSQLIndex = FALSE
--> I have not done this yet, but will try it later. I understand that keys used only for sorting should have MaintainSQLIndex set to FALSE for performance. Does this include the primary key?

3) Rebuild the indexes on this table regularly (this is the Rebuild Index from SQL Server Management Studio)
--> I have not done this. But shouldn't optimise table be doing the same thing as this?

Any advice is welcome. Thanks.

Answers

  • jordi79 Member Posts: 272
    Oops... I have posted this in the wrong forum. How can I move this thread to SQL Performance?
  • jordi79 Member Posts: 272
    Hi again,
    My mistake: after the table optimise, what used to take 3 hours is now taking 2 hours to run. Problem solved. The question now is: how would one automate this task?
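    [Editor's note: one way to automate this, as a minimal sketch. On NAV 5.0 SP1 and later the optimise is essentially an index rebuild, so the SQL-side equivalent can be scheduled as a SQL Server Agent job. The table name below is a placeholder for the real Company$Table name.]

        -- Rebuild all indexes on the reporting table; the name is hypothetical.
        ALTER INDEX ALL ON [dbo].[CRONUS$Report Entry]
            REBUILD WITH (FILLFACTOR = 90, SORT_IN_TEMPDB = ON);
        -- Refresh the remaining (column) statistics as well.
        UPDATE STATISTICS [dbo].[CRONUS$Report Entry] WITH FULLSCAN;

    Put these statements in a Transact-SQL job step and attach a nightly or weekly schedule that runs outside business hours.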
  • PeterVersteege Member Posts: 13
    Hi Jordi,

    What version of Navision are you working on?

    About setting the property MaintainSQLIndex to FALSE: it is not as simple as just setting it for all the secondary keys without SumIndexes. What matters more is which fields are in each key, and what purpose each key serves.

    Does the report you mention also update any records in this table?
    K3 Business Solutions
  • kriki Member, Moderator Posts: 9,096
    [Topic moved from 'NAV/Navision Classic Client' forum to 'SQL Performance' forum]

    BTW: when you want to report spam or have a post moved to another forum, just report the post (the red ! at the right of the post; you need to be logged in to see it).

    2) You can do that for the indexes that are not used in SQL Server.
    The following SELECT finds them (if an index has user_seeks, user_scans and user_lookups all 0, it is not used). Before you run the query, make sure the database has been online for some time, so SQL Server has gathered useful usage data.
    SELECT
        CONVERT([VARCHAR](80), PVT.TABLENAME) AS 'TABLENAME',
        CONVERT([VARCHAR](30), PVT.INDEXNAME) AS 'INDEXNAME',
        -- PVT.INDEX_ID,
        -- Usefulness = reads per write; negative when the index is
        -- written far more than it is read.
        CASE
            WHEN (B.user_updates = 0)
                THEN (B.user_seeks + B.user_scans + B.user_lookups)
            WHEN (B.user_seeks + B.user_scans + B.user_lookups) < 200
                THEN -B.user_updates
            ELSE ((B.user_seeks + B.user_scans + B.user_lookups) / B.user_updates)
        END AS [UseFullNess],
        B.user_updates, B.user_seeks, B.user_scans, B.user_lookups,
        B.last_user_seek,
        B.last_user_scan,
        B.last_user_lookup,
        -- The first 8 columns of the index:
        CONVERT([VARCHAR](30), [1]) AS COL1,
        CONVERT([VARCHAR](30), [2]) AS COL2,
        CONVERT([VARCHAR](30), [3]) AS COL3,
        CONVERT([VARCHAR](30), [4]) AS COL4,
        CONVERT([VARCHAR](30), [5]) AS COL5,
        CONVERT([VARCHAR](30), [6]) AS COL6,
        CONVERT([VARCHAR](30), [7]) AS COL7,
        CONVERT([VARCHAR](30), [8]) AS COL8
    FROM (SELECT A.NAME AS TABLENAME,
                 A.OBJECT_ID,
                 B.NAME AS INDEXNAME,
                 B.INDEX_ID,
                 D.NAME AS COLUMNNAME,
                 C.KEY_ORDINAL
          FROM sys.objects A
               INNER JOIN sys.indexes B
                       ON A.OBJECT_ID = B.OBJECT_ID
               INNER JOIN sys.index_columns C
                       ON B.OBJECT_ID = C.OBJECT_ID
                      AND B.INDEX_ID = C.INDEX_ID
               INNER JOIN sys.columns D
                       ON C.OBJECT_ID = D.OBJECT_ID
                      AND C.COLUMN_ID = D.COLUMN_ID
          WHERE A.TYPE <> 'S') P
         PIVOT
         (MIN(COLUMNNAME)
          FOR KEY_ORDINAL IN ([1],[2],[3],[4],[5],[6],[7],[8])) AS PVT
         INNER JOIN sys.dm_db_index_usage_stats B
                 ON PVT.OBJECT_ID = B.OBJECT_ID
                AND PVT.INDEX_ID = B.INDEX_ID
                AND B.DATABASE_ID = DB_ID()
    WHERE ((B.user_updates > 100)
       OR (B.user_seeks + B.user_scans + B.user_lookups > 100)) -- I only check them if the index is used a lot
    ORDER BY [UseFullNess]
    

    Best not to do that for the primary key (as most of the time it is the clustered index).

    3) An index rebuild does the same thing, but you can give a certain fill factor to be used (if you have a lot of reads and few writes, a 100% fill factor can be OK, but it can slow down writes).
    If you have a NAV version older than 5.0 SP1, the optimise does more, and I would advise a technical upgrade to a newer version.
    If your table does not change too much, you can also try an index defrag. This can be done online and is mostly faster, but it does not optimise the indexes as well, and afterwards you need to update the SQL Server statistics of the table.
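    [Editor's note: to make the two options concrete, a sketch with placeholder table and index names (NAV generates its own):]

        -- Offline rebuild with an explicit fill factor (good for a read-mostly table):
        ALTER INDEX [$1] ON [dbo].[CRONUS$Report Entry] REBUILD WITH (FILLFACTOR = 100);

        -- Online alternative: defragment in place, then refresh the statistics
        -- yourself, since REORGANIZE does not update them:
        ALTER INDEX [$1] ON [dbo].[CRONUS$Report Entry] REORGANIZE;
        UPDATE STATISTICS [dbo].[CRONUS$Report Entry];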
    Regards, Alain Krikilion
    No PM, please use the forum. || May the <SOLVED>-attribute be in your title!


  • jordi79 Member Posts: 272

    Hi Peter,
    I am using version 5.1. Actually, after reading some of the posts made by users in this forum, I have a clearer picture of MaintainSQLIndex. It determines whether SQL Server creates an index for the NAV key, and if so, you can specify the fields SQL Server uses to build that index in the "SQLIndex" property in the Key Properties. In short, READs will be faster if a SQL index is created, but WRITEs and INSERTs will be slower. And the fields specified in "SQLIndex" must be chosen carefully, with the most selective fields first.
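    [Editor's note: an illustration of the SQLIndex property, with hypothetical names. A NAV key on "G/L Account No.,Posting Date" whose SQLIndex property is set to "Posting Date,G/L Account No." would end up on SQL Server roughly as:]

        -- NAV replaces characters such as '/' and '.' with '_' in SQL names;
        -- table and index names here are placeholders.
        CREATE INDEX [$1] ON [dbo].[CRONUS$Report Entry]
            ([Posting Date], [G_L Account No_]);

    The NAV key still sorts by G/L Account No. first; only the physical SQL index uses the reordered fields.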

    It is amazing what you can learn if you spend a whole day reading these forums... \:D/

    So far, the optimise table option has helped me.
  • PeterVersteege Member Posts: 13
    I still think that 2 hours to run a report which only uses 1 table is quite long. I would still advise you to reconsider whether you really need all the current keys, because I imagine that filling the table takes even more time, and could become a problem in the coming months.

    How many records does this table have now?
    K3 Business Solutions
  • jordi79 Member Posts: 272
    This is actually a denormalised table built from the G/L Entry table. It is denormalised to make reporting easier and to take the load off the actual transaction tables when the report is run. It is similar to the Analysis View tables in NAV, but it handles more dimensions. I am suggesting partitioning the table into more records, and therefore having fewer key fields on it. The structure of the current table:

    1) Entry No. (Primary Key)
    2) G/L Account No.
    3) Posting Date
    4) Dimension 1
    5) Dimension 2
    6) Dimension 3
    7) Dimension 4
    8) Dimension 5
    9) Dimension 6
    10) Dimension 7
    11) Dimension 8
    12) Dimension 9
    13) Dimension 10
    14) Dimension 11
    15) Dimension 12
    16) Dimension 13
    17) Dimension 14
    18) Dimension 15
    19) Dimension 16
    20) Dimension 17
    21) Dimension 18
    22) Dimension 19
    23) Dimension 20
    24) Amount
    25) Field n++ (there are about 50 additional fields here that store other information, but mostly they mirror the G/L Entry table, so that we do not have to query it)

    We have secondary keys with SumIndexFields on "Amount" (a sketch of how such a key maps to SQL Server follows the list). The keys are:
    1) G/L Account No.,Posting Date,Dim Value 1,Dim Value 2,Dim Value 3,Dim Value 4,Dim Value 5
    2) G/L Account No.,Posting Date,Dim Value 6,Dim Value 7,Dim Value 8,Dim Value 9,Dim Value 10
    3) G/L Account No.,Posting Date,Dim Value 11,Dim Value 12,Dim Value 13,Dim Value 14,Dim Value 15
    4) G/L Account No.,Posting Date,Dim Value 16,Dim Value 17,Dim Value 18,Dim Value 19,Dim Value 20
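    [Editor's note: for reference, a rough sketch with hypothetical names. From NAV 5.0 SP1 onward, a key with a SumIndexField is maintained on SQL Server as an indexed view, so key 1 above would produce something like:]

        -- Dimension columns 3..5 omitted for brevity.
        CREATE VIEW [dbo].[CRONUS$Report Entry$VSIFT$1] WITH SCHEMABINDING AS
            SELECT [G_L Account No_], [Posting Date], [Dim Value 1], [Dim Value 2],
                   SUM([Amount]) AS [SUM$Amount],
                   COUNT_BIG(*)  AS [Cnt]   -- required for an indexed view with GROUP BY
            FROM [dbo].[CRONUS$Report Entry]
            GROUP BY [G_L Account No_], [Posting Date], [Dim Value 1], [Dim Value 2];

        CREATE UNIQUE CLUSTERED INDEX [VSIFTIDX]
            ON [dbo].[CRONUS$Report Entry$VSIFT$1]
            ([G_L Account No_], [Posting Date], [Dim Value 1], [Dim Value 2]);

    Every INSERT into the base table also updates all four of these views, which is why fewer SumIndex keys means faster writes.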

    The user selects a group dimension to use for the report, so a report can only use 5 dimensions at a time. I was thinking of splitting this table, so that instead of 1 record with 20 dimension fields we have 4 records each with 5 dimension fields. But bear in mind we have "Field n++": with this split we would be replicating the same information (the "Field n++" columns) across all four records. I thought of normalising this, so that "Field n++" is stored in another table, but that would not be ideal for reporting; for reporting, a denormalised table is better.

    And with this split, I would need to maintain 1 secondary key with a SumIndex instead of 4 secondary keys with SumIndexes. I am not sure how this will affect performance. Any advice here?
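    [Editor's note: a sketch of the proposed split, with hypothetical names and types. One wide row with 20 dimension columns becomes 4 narrow rows, one per dimension group, and a single SumIndex key covers them all:]

        CREATE TABLE [dbo].[Report Entry Split] (
            [Entry No_]       INT            NOT NULL,
            [Dim Group]       INT            NOT NULL,  -- 1..4: which block of 5 dimensions
            [G_L Account No_] VARCHAR(20)    NOT NULL,
            [Posting Date]    DATETIME       NOT NULL,
            [Dim Value 1]     VARCHAR(20)    NOT NULL,  -- first dimension of this group
            [Dim Value 2]     VARCHAR(20)    NOT NULL,
            [Dim Value 3]     VARCHAR(20)    NOT NULL,
            [Dim Value 4]     VARCHAR(20)    NOT NULL,
            [Dim Value 5]     VARCHAR(20)    NOT NULL,
            [Amount]          DECIMAL(38,20) NOT NULL,
            -- "Field n++" would either repeat on each row or move to a
            -- separate header table keyed on [Entry No_].
            PRIMARY KEY ([Entry No_], [Dim Group])
        );

        -- One secondary key with a SumIndex on Amount replaces the current four;
        -- queries filter on one dimension group at a time:
        CREATE INDEX [$1] ON [dbo].[Report Entry Split]
            ([Dim Group], [G_L Account No_], [Posting Date],
             [Dim Value 1], [Dim Value 2], [Dim Value 3], [Dim Value 4], [Dim Value 5]);

    The trade-off: 4x the row count, and each entry's Amount repeats on all four rows, so every query must filter on [Dim Group] to avoid counting it four times.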

    I think I will need to come up with a proof of concept showing how much time this saves before proposing the change, because we have developed a lot of reporting tools on this table, and changing its structure is a big revamp.