Date Compression with Average Costing

Carmen_S Member Posts: 19
edited 2008-08-08 in Navision Attain
We have a 3.10B native database and we use average costing.
The database size is over 150GB and we're planning to do some date compressions and history cleanups to reduce the database size and increase performance. I don't want to get into any performance debates, I just want to ask the experts if we should attempt to run a date compression on item ledger entry knowing that we use average costing.

Thank you,
Carmen
Carmen Stanciulescu
Vancouver, Canada

Comments

  • DenSter Member Posts: 8,307
    You should first have someone come in and help you tune the tables. You would be shocked to see how much room indexes and SIFT tables use. I've seen file size reduction of up to 40% just from tuning the indexes.
  • Savatage Member Posts: 7,142
    Also check for unused keys - turning them off frees space. I think there's a download for that; otherwise, have the professional doing DenSter's suggestion look at it too.

    If I remember correctly there was a problem with date compression in 3.1 - I remember a hotfix :-k

    Check old Sales Invoices & Sales Lines, Shipment Lines, etc. for possible deletion. But keep a backup before you do any of this in case you need the info again. Plus, deletions like this don't affect any costs.
  • Alex_Chow Member Posts: 5,063
    Personally, I'd stay away from date compression on the item ledger. I have too many bad mental imprints about them.

    If you really have to compress, delete the posted shipment/invoice/CM, compress customer ledger, compress G/L. But do not compress ILE or VE.

    Also, try switching to SQL if you're still on native. SQL is better at handling large databases than native.
  • Carmen_S Member Posts: 19
    Thank you all for your suggestions. We will run date compression on G/L, Cust. and Vendor Ledger Entries and delete some document history. I will also attempt to determine unused keys and disable them. We should see some improvement after all this. I will look at ILE and VE later if I still need to.

    Cheers,
    Carmen
    Carmen Stanciulescu
    Vancouver, Canada
  • Alex_Chow Member Posts: 5,063
    Carmen S wrote:
    I will look at ILE and VE later if I still need to.

    I'd rather you not. But do so at your own discretion.

    If your DB is really that big, have you considered moving to SQL instead of purging data? :-k
  • Carmen_S Member Posts: 19
    We need to do data cleanup before moving to SQL. A weekend would not be enough to convert this database to SQL and we cannot be down during the week.
    Carmen Stanciulescu
    Vancouver, Canada
  • Alex_Chow Member Posts: 5,063
    Carmen S wrote:
    We need to do data cleanup before moving to SQL. A weekend would not be enough to convert this database to SQL and we cannot be down during the week.

    Have you considered using Upgrade Centers to do the upgrade for you? They specialize in upgrading large databases and may be able to help you finish the upgrade over a weekend.
  • Carmen_S Member Posts: 19
    We decided to do data clean-up for now. Thank you.
    Carmen Stanciulescu
    Vancouver, Canada
  • bbrown Member Posts: 3,268
    Carmen S wrote:
    We decided to do data clean-up for now. Thank you.

    What are your plans for when the database fills up again? Deleting data is NOT a solution for performance issues. (No matter what you've been led to believe!) It is a temporary band-aid at best.
    There are no bugs - only undocumented features.
  • DenSter Member Posts: 8,307
    Carmen S wrote:
    I don't want to get into any performance debates
    She don't wanna talk 'bout it :mrgreen:
  • DenSter Member Posts: 8,307
    I thought the same thing :mrgreen:
  • Carmen_S Member Posts: 19
    I was away for a few days....

    Looks like there is some interest in this topic so let's talk about it.
    Here's some more details:
    1. We have a high transaction volume database. We got to 150 GB in 6 years.
    2. We don't need to keep more than 5 years of GL and 3 years of customer and vendor history.
    3. Navision performance is not bad.
    4. It takes about 28 hours to restore the database from a Navision backup.
    5. We plan to move to SQL Server but we are not ready for it yet.

    Here's our plan:
    1. Data clean-up. We need to make it a periodic activity to clean up history every year.
    2. Clean up indexes and SumIndex fields. I just ran Eduard Sanosian's Unused Keys tool - love it! We're in pretty good shape there; a handful of keys we can get rid of.
    3. Get educated on SQL Server. I am a SQL Server 2000 MCP but haven't had the chance to practice SQL administration enough to take on the DBA role for the Navision SQL database. With this much data, we need to tune SQL Server for performance.
    4. Last but not least, we need to budget for the hardware and SQL license.

    I am open to any suggestions. What is your experience with databases of this size? What is the best practice for maintenance and performance? How do you plan for moving to SQL?

    Thank you,
    Carmen
    Carmen Stanciulescu
    Vancouver, Canada
  • bbrown Member Posts: 3,268
    Carmen,

    You raise some interesting discussion points:

    1. We have a high transaction volume database. We got to 150 GB in 6 years.

    I'm not sure I would consider 150 GB in 6 years to be a high transaction volume, but it depends on specifics. It's somewhat of a subjective number and would depend on what makes up that 150 GB.

    In contrast, we are working with a customer that took 5 years to reach 20 GB with just core financials, but since going live with manufacturing has produced an additional 100 GB of data in the past 15 months. Their current growth rate is 6 to 8 GB per month.

    2. We don't need to keep more than 5 years of GL and 3 years of customer and vendor history.

    Only you can decide the value (or lack thereof) of your historical data. Look before you leap. Understand what you are giving up (data no longer available) and what you possibly gain. Also be aware of potential issues, and test.

    I do not have confidence in the compression routines for inventory. I ran into an issue with a client that had run the process.

    3. Navision performance is not bad.

    This again is subjective. Performance is not good or bad, but rather acceptable or unacceptable. What is acceptable to one client may be unacceptable to another. Only you can decide what is acceptable for your site.

    If a site operates from 8 to 5 each day, having Adjust Cost take 6 hours each night may be acceptable. But that same performance might not be acceptable at a site that runs longer hours.
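    As a quick sanity check on the growth figures above (straight-line arithmetic only; real growth is rarely linear):

```python
# Straight-line growth arithmetic for the two databases discussed above.
# All figures come from the thread; the rates assume growth stays linear.

def monthly_growth_gb(total_gb, months):
    """Average growth rate in GB per month."""
    return total_gb / months

# Carmen's database: 150 GB over 6 years.
carmen_rate = monthly_growth_gb(150, 6 * 12)

# The manufacturing client: an extra 100 GB in 15 months.
mfg_rate = monthly_growth_gb(100, 15)

print(f"Carmen: {carmen_rate:.1f} GB/month")      # ~2.1 GB/month
print(f"Mfg client: {mfg_rate:.1f} GB/month")     # ~6.7 GB/month
```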
    There are no bugs - only undocumented features.
  • Carmen_S Member Posts: 19
    bbrown,

    Can you give any details on the issues caused by inventory date compression?

    To clarify, we consider Navision performance acceptable as far as normal day-to-day operations go. You are right, we are running short on off-hours time for batch processes, and the restore takes too long.

    Looks like you are not in favor of data clean-up.
    Any suggestions for database maintenance and improved performance?

    Thanks,
    Carmen
    Carmen Stanciulescu
    Vancouver, Canada
  • bbrown Member Posts: 3,268
    In this one particular issue, the client was constantly driving inventory negative. They ran date compression, which compressed sales entries related to items with a negative on-hand quantity. A few months later they booked entries to bring the items positive and ran Adjust Cost. Adjust Cost blew up because the outbound entries no longer existed.
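    A toy sketch of that failure mode may help (this is illustrative Python, not NAV C/AL; the dictionaries are simplified stand-ins for the Item Ledger Entry and Item Application Entry tables):

```python
# Toy model of the failure mode described above (not NAV code).
# Cost adjustment forwards cost from inbound entries to the outbound
# entries they were applied to. If date compression deleted an outbound
# entry that a later application still references, the adjustment has
# nothing to post against and errors out.

ledger = {
    1: {"type": "sale", "qty": -10},    # outbound, drove inventory negative
    2: {"type": "purchase", "qty": 10}, # inbound, booked later to go positive
}
applications = [(2, 1)]  # inbound entry 2 applies to outbound entry 1

def adjust_cost(ledger, applications):
    for inbound, outbound in applications:
        if outbound not in ledger:
            raise KeyError(f"outbound entry {outbound} no longer exists")
        # ...cost would be forwarded from inbound to outbound here...
    return "adjusted"

# Before compression: the adjustment succeeds.
assert adjust_cost(ledger, applications) == "adjusted"

# Date compression removes the old outbound entry, but the application
# effectively still points at it.
del ledger[1]
try:
    adjust_cost(ledger, applications)
except KeyError as e:
    print("Adjust Cost blew up:", e)
```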
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    It's not that I am not in favor of data cleanup; it's that I don't consider it a first-level option. I see data cleanup as a knee-jerk reaction too often. At least once a month someone comes to me and says that a particular database needs data cleanup. Then, after asking a few questions, I discover the database is only 20 GB.
    There are no bugs - only undocumented features.
  • Carmen_S Member Posts: 19
    bbrown wrote:
    In this one particular issue, the client was constantly driving inventory negative. They ran date compression, which compressed sales entries related to items with a negative on-hand quantity. A few months later they booked entries to bring the items positive and ran Adjust Cost. Adjust Cost blew up because the outbound entries no longer existed.

    I see. Thank you for sharing.
    Carmen Stanciulescu
    Vancouver, Canada
  • Carmen_S Member Posts: 19
    bbrown wrote:
    It's not that I am not in favor of data cleanup; it's that I don't consider it a first-level option. I see data cleanup as a knee-jerk reaction too often. At least once a month someone comes to me and says that a particular database needs data cleanup. Then, after asking a few questions, I discover the database is only 20 GB.

    What would be your first-level option based on the info you have about my scenario?

    What do you think about table optimization in a standard database as a maintenance routine?
    Carmen Stanciulescu
    Vancouver, Canada
  • bbrown Member Posts: 3,268
    I would focus on specific issues and address the underlying causes. You mention that overall performance is good but you are running out of time for batch processes. Look at what these processes are doing and where there is opportunity to enhance their performance. Also look at ways of improving overall performance.

    Review your hardware and system build. Is it configured in a manner to achieve maximum performance from NAV?

    What does the disk system look like? Is there opportunity for improvement? Spread the database across multiple RAID 1 arrays. Are they dedicated to the database?

    Does the NAV service have maximum cache set, and is write-cache on?

    Look at the processes themselves. Is there opportunity to improve the process through code changes or by using better keys?

    These are just a few thoughts...
    There are no bugs - only undocumented features.
  • Carmen_S Member Posts: 19
    Bbrown, you bring good points to the discussion.
    I would invite Daniel and Alex to contribute as well if they are still interested in the subject.

    I think we did consider most of these areas for improvement.
    We have a dedicated server with dedicated disks - 7 sets of RAID 1. Write-through, no cache on the disks - as far as I know, this is what is recommended for Navision. We get 4-7 ms mean read and write times. DBMS cache is at 900 MB. Server performance is great.

    We have a farm of 4 Citrix servers for the 100 users in 25 locations. No complaints about performance during the day. Nightly routines - I optimized as much as I could. We are not in panic mode - we still manage to complete all batch processes at night, but we are trying to be proactive and prepare for the future.
    We were concerned about disaster recovery time because it takes 28 hours to restore the Navision database from a Navision backup. We have to stop the service and take a disk image at night as well.
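    As an aside, those restore figures imply a fairly low effective throughput (assuming the full 150 GB is written over the 28-hour window):

```python
# Effective throughput of the Navision backup restore quoted above.
# Assumes the full 150 GB is written over the 28-hour window.

db_size_mb = 150 * 1024    # 150 GB expressed in MB
restore_seconds = 28 * 3600

throughput = db_size_mb / restore_seconds
print(f"{throughput:.1f} MB/s")  # ~1.5 MB/s, far below what modern disks sustain
```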
    Do you have any better solution?

    Thank you all for a great discussion.

    Regards,
    Carmen
    Carmen Stanciulescu
    Vancouver, Canada
  • bbrown Member Posts: 3,268
    If you plan to stay in native and are concerned with disaster recovery time, you might consider a server cluster. My recommendation would be to move toward SQL. It would provide you with more options.
    There are no bugs - only undocumented features.
  • Alex_Chow Member Posts: 5,063
    I would contribute but if I do, I would basically duplicate what bbrown has already said. :D

    For disaster recovery, back up and restore is MUCH MUCH faster on SQL and I'd strongly advise you to move to SQL first before trying to purge.
  • DenSter Member Posts: 8,307
    Instead of looking at the values of certain settings, I would get someone involved to actually measure current performance. You might find that even though someone said a certain setting was 'the right/recommended' setting, the actual performance may not be as good as it can be. Taking 28 hours to restore one company in one 150 GB database is WAY too long, and I would bet that I/O performance, or disk read/write operations, is severely underperforming.

    Most of the time we're talking about index tuning in relation to SQL Server, but native databases can benefit from tuned indexes as well. You'd be surprised how much space indexes take up on native.

    Just to elaborate on Alex's remark that restoring on SQL Server is much faster: this is only true if you use SQL Server backups. Restoring a NAV backup takes just as long on SQL Server as it does on native, plus you have to give the transaction log about 2-3 times the disk capacity of the other data files to accommodate the huge transaction log that the restore process will generate.
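    To put rough numbers on that for Carmen's case (the 2-3x factor is the rule of thumb above, not a hard requirement):

```python
# Transaction-log headroom for restoring a NAV backup into SQL Server,
# using the 2-3x rule of thumb mentioned above.

data_gb = 150
log_low, log_high = 2 * data_gb, 3 * data_gb

print(f"Plan for roughly {log_low}-{log_high} GB of log space "
      f"on top of the {data_gb} GB of data files.")
```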
  • Carmen_S Member Posts: 19
    I should have mentioned that it takes 28 hours to restore on a test server. It would be faster in Live because we have better disk I/O. Still, we'd be down a day if we needed to restore a Navision backup, and that is not acceptable.
    We did consider cluster servers, but could not justify the cost.
    SQL Server, yes. We are going to move to SQL but not right now.

    How long would you expect it to take to convert a 150 GB Navision database to SQL Server?

    Did you find that table optimization improves performance on a standard database?
    Carmen Stanciulescu
    Vancouver, Canada
  • bbrown Member Posts: 3,268
    So in the event of a system failure you potentially lose 48 hours: the day you lose from needing to revert to the last backup, plus the day to restore. This doesn't include time waiting for any needed system repairs.

    Don't expect the conversion of a 150 GB database to SQL to be a simple slam-dunk backup/restore. You will need to plan lots of time for testing and performance tuning.
    There are no bugs - only undocumented features.
  • Carmen_S Member Posts: 19
    You are correct. That's why we do a disk image which will restore in a couple of hours.

    I know we need to do a lot of testing and tuning. I'm just trying to figure out whether, when we're ready and the day comes, a weekend is enough to go live on SQL. I'll have my answer when we start testing, but I thought I'd ask what your experience is so I know what to expect.
    Carmen Stanciulescu
    Vancouver, Canada
  • bbrown Member Posts: 3,268
    There can be many variables that will impact the conversion time. I could tell you that we just went through one that took about 60 hours, but that might be totally irrelevant to your upgrade.

    I'm working on another one where we have a very short upgrade window. That's around 140 GB.
    There are no bugs - only undocumented features.
  • bbrown Member Posts: 3,268
    Have you ever tested a restore of that disk image? How does it handle the DB write cache?
    There are no bugs - only undocumented features.
  • Carmen_S Member Posts: 19
    bbrown wrote:
    Have you ever tested a restore of that disk image? How does it handle the DB write cache?

    No, we just bought the software. We haven't had the chance to test the database restore yet, but we tested other servers and it works pretty slick. It's a pretty cool tool. Have a read at http://www.acronis.com.
    We stop the server service before taking the image, and we don't use disk write cache.
    Carmen Stanciulescu
    Vancouver, Canada