Transaction Log Backup getting blocked?

kaelaa Member Posts: 28
edited 2009-10-05 in SQL Performance
Hello,

I have been monitoring blocks in SQL Server to try to understand which queries are creating blocks.

I see that the Transaction Log Backups are getting blocked.

I have a maintenance plan that creates Transaction Log Backups every 30 minutes.

Can those backups create locks, or get blocked by another user? In my monitoring, I always see that the backup was blocked by user sa!

The size of the backups during the day is 32MB approximately.

Does the disk that the backups are written to need to be very fast?

The backups always get completed, even after the blocks.
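For reference, this is roughly the query I use for the monitoring (a minimal sketch using the standard SQL Server 2005+ DMVs; nothing in it is specific to my setup):

    -- Sketch: list requests that are currently blocked,
    -- together with the session that is blocking them
    SELECT r.session_id,
           r.blocking_session_id,
           r.command,        -- e.g. BACKUP LOG
           r.wait_type,
           r.wait_resource,
           r.wait_time
    FROM sys.dm_exec_requests AS r
    WHERE r.blocking_session_id <> 0;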

Thanks

Comments

  • Marije_Brummel Member, Moderators Design Patterns Posts: 4,262
  • kaelaa Member Posts: 28
    The Wait Resource is always: FILE: 11:0

    I don't understand what it means...

    Thank you
  • kaelaa Member Posts: 28
    I know that 11 is the database id, but what does the 0 mean?

    Is the FILE resource the disk file? Does that mean it cannot access the disk to write to? Why?

    I did some searching and found some documentation on waitresource:
    http://support.microsoft.com/default.as ... -us;224453

    but it only talks about these resources: table, page, key, row, and compile.
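    In case it helps someone else, the two numbers can be decoded with a quick query (a sketch; the 11 is just the database id taken from the waitresource):

    -- Decode "FILE: 11:0": first number = database id, second = file id
    SELECT DB_NAME(11) AS database_name;

    SELECT file_id, name, physical_name
    FROM sys.master_files
    WHERE database_id = 11;

    Note that sys.master_files has no file with id 0, so the 0 presumably refers to the database's file resource as a whole rather than to one particular file - I am not sure about that.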

    Thanks
  • stryk Member Posts: 645
    Well, I think it would help to learn more details about both/all "counterparts" in this blocking situation ... maybe this could help you find out: http://dynamicsuser.net/blogs/stryk/archive/2008/11/03/blocks-amp-deadlocks-in-nav-with-sql-server.aspx
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • kaelaa Member Posts: 28
    Thank you guys for your help!

    Jorg, I forgot to mention that I am monitoring those blocks using the scripts described in your blog. I have even bought your book.

    Thank you again guys for sharing your knowledge in these forums. It is really appreciated!

    Actually, to relate to your blog: when monitoring the blocks in the system, most of the time the wait resources are the following indexes:
    Warehouse Entry$0 (Entry no)
    G/L Entry$0 (Entry no)
    Note Filter$0 (Entry No) - Addon (Popup notes from CostControl)

    I read your GUID solution. I know that the financial tables should not be changed, but is it safe to apply this solution to other tables on a live system? I guess I will have to go through all the codeunits that insert into those tables and change the code...

    I already started doing some Index and SIFT tuning for the Ledger tables.

    I still don't understand the locks on the Transaction Log Backup; I do know that the disk the backups are written to is always working at 100%...

    If a SQL Server has 4 quad-core CPUs, Microsoft documentation states that only 1 processor is used if the database is configured with only one filegroup. They say that to use the full potential of all processors, we have to create multiple filegroups (as many as there are processors) and put each filegroup on a separate disk.
    Is that true? It would cost a lot to have this configuration, no?

    Is there a way to know, for a specific case, whether putting tempdb on a separate drive makes a difference?

    Sorry, I had all these questions in my mind that I needed to ask!


    Thank you
  • stryk Member Posts: 645
    Well, the AutoIncrement thing is actually a really tricky one ... depending on local legal requirements it might be impossible to implement such a thing on "fiscally relevant" tables (G/L Entry, VAT Entry, etc.).
    If you do this, you need to test carefully, as there are several pros and cons (which were not discussed in my BLOG) ...

    SIFT tuning is highly recommended as this solves a lot of issues "in one go" so to speak!

    Regarding "splitting the database": the short answer is: No, you shouldn't do this. Splitting the DB into 16 files (4 x Quad = 16 CPU) would be rather insane and definitely exceed the disk-controllers capacities. Having multiple files makes only sense if you could place each file on a dedicated physical disk; but this also just gives minor improvements as SQL Server does not perform a full logical loadbalancing: for example, it will never split any Index - e.g. Clustered Index of "G/L Entry" - into multiple files. Hence, if you have load on this Index, well then only this single file will be used.
    It was once assumed - and trained by MS! - that multiple files would also increase parallelism on the CPU, as each file was supposedly processed by a dedicated CPU thread ... according to this BLOG (http://blogs.msdn.com/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx) that is a fairy tale ...
    So I recommend having one single file for "Data Filegroup 1" and putting it on a powerful disk using RAID10 with as many spindles as necessary (4 to 12+) (= physical load balancing).
    (And of course the Transaction Log file MUST also be placed exclusively on a dedicated disk (RAID1 or 10)!)
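    To verify the current layout, a quick look at sys.master_files shows every file and the disk it lives on (just a sketch):

    -- Sketch: file layout per database - data, log and tempdb files
    -- should end up on separate physical disks
    SELECT DB_NAME(database_id) AS [database],
           type_desc,            -- ROWS = data file, LOG = transaction log
           name,
           physical_name,
           size / 128 AS size_mb -- size is counted in 8 KB pages
    FROM sys.master_files
    ORDER BY [database], type_desc;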

    With the tempdb it's somewhat different: here it does indeed make sense to split it into multiple files, even if you place them all on the same disk. The tempdb still requires its own exclusive/dedicated disk (RAID1), but you can place multiple files there.
    Depending on the number of concurrent users, the transaction volume, and the disk specifications, I would suggest splitting the tempdb into 4 to 8 files. (Keep in mind: only split if tempdb is on its own drive, else you might cause trouble for the disk controller.)
    Accordingly, you should increase the initial file sizes, set Auto Growth to fixed values, and disable the Auto Stats stuff ...
    Please refer to this one for details: http://msdn.microsoft.com/en-us/library/ms175527.aspx
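    Just to illustrate, splitting the tempdb could look something like this (a sketch only - the file names, the path, and the sizes are placeholders you have to adapt):

    -- Sketch: fix the size/growth of the primary tempdb file and add
    -- equally sized files on the dedicated tempdb disk (path is a placeholder)
    ALTER DATABASE tempdb
      MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

    ALTER DATABASE tempdb
      ADD FILE (NAME = tempdev2,
                FILENAME = 'T:\tempdb\tempdev2.ndf',
                SIZE = 1024MB, FILEGROWTH = 256MB);
    -- ... repeat ADD FILE for tempdev3, tempdev4, ... as needed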

    Hope this helps you a little.

    Best regards,
    Jörg
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • kaelaa Member Posts: 28
    Thank you very much Jorg for the detailed response.

    I will look into all this.
  • stryk Member Posts: 645
    You're welcome!
    Some things to add: what are the Lock Modes? I guess the tables/indexes you mentioned are affected when inserting new lines; here this would be the "classic" algorithm known for trouble:
    // classic "get next entry no." pattern
    GLEntry.LOCKTABLE;
    IF GLEntry.FIND('+') THEN
      NewEntryNo := GLEntry."Entry No." + 1
    ELSE
      NewEntryNo := 1;
    

    The problem is the FIND('+'), as this "pumps" a huge amount of data into a SQL cursor, causing various problems. If you could replace this with a FINDLAST, the performance should be remarkably better (see the sketch below) ...
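    Roughly speaking - this is a sketch, not the exact statements the NAV driver emits, and the company prefix in the table name is a placeholder - the difference on the SQL Server side looks like this:

    -- FINDLAST (with LOCKTABLE): one small, index-backed statement
    SELECT TOP 1 *
    FROM [CRONUS$G_L Entry] WITH (UPDLOCK)
    ORDER BY [Entry No_] DESC;
    -- FIND('+'), in contrast, goes through the cursor API
    -- (sp_cursoropen/sp_cursorfetch) over the whole ordered set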

    Best regards,
    Jörg
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool
  • kaelaa Member Posts: 28
    Hello Jorg,

    Most of the locks are LCK_M_RS_U and some of them are LCK_M_X.

    I am going to try to replace the FIND('+') with FINDLAST.

    Thank you
  • stryk Member Posts: 645
    Yep, the Update Locks (LCK_M_RS_U = Range/Shared/Update) should be coming from a FIND statement with a LOCKTABLE ahead.
    The Exclusive Locks (LCK_M_X = Exclusive) should originate from a write transaction, thus INSERT or MODIFY.
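    If you want to watch those lock modes while it happens, something like this could do (sketch):

    -- Sketch: current locks with their mode and status
    SELECT request_session_id,
           resource_type,   -- KEY, PAGE, OBJECT, FILE, ...
           request_mode,    -- e.g. RangeS-U, X, IX
           request_status   -- GRANT / WAIT / CONVERT
    FROM sys.dm_tran_locks
    WHERE resource_type <> 'DATABASE';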

    I guess the issue is indeed concurrent postings, attempting to read/insert into the same Ledger Entry table ...

    Using FINDLAST might reduce the problems, but it cannot completely solve them ...
    Jörg A. Stryk (MVP - Dynamics NAV)
    NAV/SQL Performance Optimization & Troubleshooting
    STRYK System Improvement
    The Blog - The Book - The Tool