Date Compress Whse. Entries - sslllooooooowwww as molasses

LibertyMountain Member Posts: 94
Our Warehouse Entry table has 6.1M records, taking up 2.5GB in our DB. I'd like to compress the entries by year through 12/31/2010.

Unfortunately, the process is very slow going. Usually when I see something like this, I assume the report I'm running has a bad key or something.

But the logic in the batch report is a little over my head. The report is based on Warehouse Entry (without a specified key), yet the first statement of the Warehouse Entry - OnAfterGetRecord trigger is WhseEntry2 := "Warehouse Entry";

From that point on, all the logic is based on WhseEntry2, and the report sets an appropriate key for the compression (starting with Item No., Bin Code), yet it still runs very slowly.
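Roughly, the part of the report I'm describing looks like this (my paraphrase, not the exact base C/AL; the real report also handles variants, lot/serial numbers, and the date filter from the request form):

    Warehouse Entry - OnAfterGetRecord()
      WhseEntry2 := "Warehouse Entry";   // copy the record the DataItem just read in Entry No. order
      WITH WhseEntry2 DO BEGIN
        SETCURRENTKEY("Item No.","Bin Code","Location Code");   // the compression key
        SETRANGE("Item No.","Item No.");
        SETRANGE("Bin Code","Bin Code");
        SETRANGE("Location Code","Location Code");
        // ...sum the quantities in this group, INSERT one compressed entry, DELETE the originals
      END;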

When I say "slowly" I mean that I am seeing the report run through a couple hundred items numbers without issue (based on the new/deleted numbers climbing higher), but then the report stalls anywhere from 2 to 10+ minutes.

I am doing this when I am the only user on the system. I am doing one year at a time (i.e. I did 1/1/04..12/31/04 by itself, then 1/1/05..12/31/05 by itself, etc.). Up through 2008, we had up to 10,000 records per year. But starting in 2009, we have 1,000,000+ records per year. At the pace the 2009 compression is moving, I estimate it will take 50-75 hours to complete.

Am I doing the best I can do? Or is there something wrong with the compression report that others have also experienced (and hopefully have addressed)?

Thank you for your thoughts and ideas.

Comments

  • bbrown Member Posts: 3,268
    First, my question is why are you concerned with compressing a table that only occupies 2.5 GB of space? The table you describe does not seem all that big. I took a quick look at one of our clients for comparison. Based on the first entry to now, they've been averaging about 3 million entries per year. The table has about 20 million records and uses about 9 GB of space. Compression has not come up as a subject. At least not yet.

    This is not to say I've never "compressed" a Warehouse Entry table. I've just taken a more manual approach. That is to abandon all the data.

    1. Pick a cutoff point where you have no open warehouse activity.

    2. Reclass the inventory to an empty non-bin location.

    3. Clear the Warehouse Entry table (a rough sketch of this step follows the list).

    4. Reclass the inventory back.
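
    For step 3, a minimal C/AL sketch, assuming step 2 has already moved every quantity out of the binned location (the variable name and the 'WHITE' location code are just illustrative):

        WhseEntry.SETCURRENTKEY("Item No.","Bin Code","Location Code");
        WhseEntry.SETRANGE("Location Code",'WHITE');   // the bin-mandatory location being emptied
        WhseEntry.CALCSUMS("Qty. (Base)");
        IF WhseEntry."Qty. (Base)" <> 0 THEN
          ERROR('Bins still hold quantity; finish the reclass first.');
        WhseEntry.DELETEALL;   // use DELETEALL(TRUE) if you want any OnDelete code to run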
    There are no bugs - only undocumented features.
  • LibertyMountain Member Posts: 94
    On your #3, after you've cleaned up the data with the reclassifications, do you just use SQL to do delete statements on the old entries you don't want?

    We have about a 60GB database. We've identified about 20GB of old data that isn't necessary, spread across about 12 tables. The Warehouse Entry table is one of the 12 we're trying to purge old data from.

    G/L compression has worked flawlessly. Deleting posted sales and purchase documents is working well too (but the only way I've figured out how to do this efficiently is to run the Object Designer, open the header table, select the records I want to purge, and delete from there. I'm guessing I'm missing a batch report or something that should allow this, but I can't find it). We've also purged old warehouse registration documents.
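
    In case it helps anyone else, here is a rough sketch of doing that delete from a small processing-only object instead of the Object Designer (the table, date cutoff, and variable name are illustrative; DELETEALL(TRUE) fires each header's OnDelete trigger, so the posted lines and comments go with it, though the trigger may still refuse some documents depending on what your version checks):

        // delete old posted sales invoices up to a cutoff date
        SalesInvHeader.SETRANGE("Posting Date",0D,20081231D);
        SalesInvHeader.DELETEALL(TRUE);   // TRUE = run OnDelete so related lines/comments are removed too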

    We're about 75% of the way to our goal, but this Warehouse Entry table isn't playing nice like the others.

    Thanks for your help.
  • bbrown Member Posts: 3,268
    Why are you concerned with deleting data from a database that's only 60 GB? That's nothing.
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    Is this SQL or C/Side?

    In response to your question, I've done it both ways. But it's not something I've done a lot of times.
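
    For the SQL route, a hedged example of the kind of statement involved (the company prefix in the table name and the cutoff are illustrative; take a backup, make sure nobody is in the system, and remember this bypasses all NAV business logic):

        DELETE FROM [YourCompany$Warehouse Entry]
        WHERE [Registering Date] < '2009-01-01';

    On a table this size you would normally also delete in chunks (e.g. DELETE TOP (100000) ... in a loop) so the transaction log stays manageable.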
    There are no bugs - only undocumented features.
  • LibertyMountain Member Posts: 94
    SQL Server option. NAV 2009 R2.

    We're doing a DR audit, and we want a smaller database to speed up a restore if necessary. We also hope to give the users a small performance boost (especially those who know just enough to be dangerous and run searches and filters on these tables in the client without respecting a key ... that includes me sometimes too :oops: ).

    One lookup we do often when resolving a problem is to bring up all the bin history for an item, across all bins in a location, sorted by entry no. This is performing worse and worse, and I hope purging old records will make it faster for the users. I guess we could add a new key... although that doesn't solve our DR objectives.
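
    For what it's worth, the kind of key I have in mind would be something like this (sketch only; one of the existing keys may already cover it, and whether the extra index maintenance is worth it is exactly the kind of design call being debated here):

        Table "Warehouse Entry"
        Key: "Item No.","Location Code"
          MaintainSQLIndex = Yes
          MaintainSIFTIndex = No   // drill-down only, no sums needed

    On the SQL option the clustered key (Entry No.) is appended to the index, so the filtered rows should come back in entry-number order without any extra sorting.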
  • bbrown Member Posts: 3,268
    I think you really need to step back and rethink the issues and what you are proposing as solutions. As I've said, 60 GB is nothing. I've got multiple clients with databases easily 2 or 3 times that size, including one in the 500 GB range.

    Deleting data should not be your first solution to performance issues.

    In terms of users picking wrong keys, don't use technology to replace user training.
    There are no bugs - only undocumented features.
  • ara3n Member Posts: 9,256
    I would remove the warehouse entries, and there is a way to improve the process through SQL.
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
  • bbrown Member Posts: 3,268
    ara3n wrote:
    I would remove the warehouse entries, and there is a way to improve the process through SQL.

    That may be a valid solution for that table, but is it valid for the other tables? Also, what happens if transaction volume increases? Do you just start deleting more often? I still say that, with a DB this small, while there may be table-specific exceptions (like warehouse entries), deleting data should not be the first generic solution for performance issues.
    There are no bugs - only undocumented features.
  • ara3n Member Posts: 9,256
    It's a valid solution for any non-subledger/ledger table.
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
  • bbrown Member Posts: 3,268
    ara3n wrote:
    It's a valid solution for any non-subledger/ledger table.


    I don't disagree. What I'm talking about is the main stated reason driving this need. If the DR plan is only viable if the DB is maintained at something below 60 GB, then I say the DR plan needs to be rethought.
    There are no bugs - only undocumented features.
  • David_Singleton Member Posts: 5,479
    I was going to reply to this thread, but I think it's simpler (with the same effect) if I just say I agree with everything Mr. Brown has said so far :mrgreen:

    Compression has its place, and yes, you are looking at the simple ones, but that may lead you to get overconfident and move into the dangerous territory that should have a "Do Not Try This At Home" disclaimer attached.
    David Singleton