Commit Cache on Native Database

JFGcanes Member Posts: 11
Greetings fellow "Nav-ites". I've enjoyed coming to this site for a lot of great Nav info for the past several years and this is my first time posting.

As background, our company had been running computer operations on a TI990 (Texas Instruments Mini Computer) since 1978 up until a complete cut-over to Navision in 2009, after a long process which began in 2006. This is my first experience with a Database System and I'm truly loving the capabilities and ease of use/programming of Navision.

My question involves the "Commit Cache" setting for the Native Database Server. Is there any reason/circumstance where you would not turn the Commit Cache On?

Our Database/Server setup is as follows:

Nav 4.0 SP3 - Native - 30 Active Users
Database Size is 15 GB with about 70% of the space used
Dell PowerEdge 6600 Server with 4 GB RAM (800,000 KB used for Database Cache)
2 - 36 GB 10K SCSI Disk Drives in a RAID 10 PERC-3DC Hardware Array (for the OS Windows 2000 Server)
4 - 36 GB 15K SCSI Disk Drives in a RAID 10 PERC-3DC Hardware Array (for the Database in a Single File)

Our issue is that when large numbers of lines are posted from journals, during the COMMIT phase at the end of the posting all our NAV clients hang on the Windows "Hour Glass" cursor. Once the postings are committed to the database, the clients return to normal operation. On large postings this can take 30-45 seconds and is very annoying to our users.

This problem was first brought to my attention after I mistakenly allowed the database to grow to 90% of capacity. Over the 4th of July holiday I ran the "Optimize Tables" operation, which reduced the used space to only 62%. From reading this forum I know that optimizing the tables can lead to a decrease in WRITE/INSERT performance, but I have yet to find any info on its effect on the COMMIT process; maybe this is implied when folks talk about WRITE/INSERTs into the database. This all could be just a coincidence, and the problem may simply have been tolerated without complaint until now.

After running some tests on a "Test Server" with the "Commit Cache" turned on, I do not experience this hang condition. I've asked our NAV partner about this and why we would not have the "Commit Cache" turned on. Their opinion is that it is better to leave it off in case of a server meltdown. But, I’m not sure I agree with them. From my understanding of Navision’s “Database Versioning Principle”, the “Version-ID” of the Committed Data is the last thing that is actually written to the cache/disk. Whether the “Commit Cache” is turned On or Off, if something were to happen to the disk controller before the next “Version-ID” actually gets on the disk, then none of that “Version’s” updates will even register in the database once it is back in operation. The Database might be a few versions behind what the users thought they had committed, but it would still be intact/consistent.
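The versioning principle described above can be illustrated with a toy sketch: data pages are written to disk first, and a single version-ID write is what makes a commit visible, so a crash before that last write leaves the database consistent, just a version behind. This is only an illustration of the idea, not Navision's actual on-disk format; all names here are made up.

```python
# Toy sketch of version-based commits: pages hit disk first, and the
# version-ID write is the single act that makes them visible.

class VersionedStore:
    def __init__(self):
        self.disk_pages = {}         # page id -> (version, value) actually on disk
        self.committed_version = 0   # last version-ID durably written

    def commit(self, version, writes):
        # 1. Write all changed pages to disk under the new version number.
        for page, value in writes.items():
            self.disk_pages[page] = (version, value)
        # 2. Only then write the version-ID. If the server dies before this
        #    line, recovery simply ignores pages tagged with a newer version.
        self.committed_version = version

    def read(self, page):
        # Readers/recovery ignore pages from versions that never committed.
        entry = self.disk_pages.get(page)
        if entry and entry[0] <= self.committed_version:
            return entry[1]
        return None


store = VersionedStore()
store.commit(1, {"A": "posted"})

# Simulate a crash mid-commit: a page for version 2 reaches disk,
# but the version-ID write never happens.
store.disk_pages["B"] = (2, "half-posted")

print(store.read("A"))  # posted  (version 1 committed)
print(store.read("B"))  # None    (version 2 never committed, so invisible)
```

This matches John's point: whether the Commit Cache is on or off, the database stays consistent after a crash; at worst it is a few versions behind what users thought they had committed.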

Does anybody know of a situation where the "Commit Cache" is better left turned Off on the Native Database Server? Since this is a setting which can be turned on/off there must be a reason for the control (Trouble-Shooting maybe?).

Sorry for being so long-winded on my first post, but I feel it's a good idea to get all the facts out before anybody can reasonably respond to my question/situation, and I seriously do not want to crash the database.

Thanks,
John

Answers

  • David_Singleton Member Posts: 5,479
    Hi John, that's quite a big jump you have made. I do wonder though why your partner put you on the classic database and not SQL, but that's for another thread.

    Basically, the reason for turning the Commit Cache off is historical, from when servers had 64 MB of RAM and didn't have intelligent disk controllers.

    Things have changed, and now you would always use Commit Cache.

    But in your case your issue is probably more disk related. Classic server needs more, smaller disks, so rather than one disk array for the database, you probably need at least 4. This would be 8 drives configured as 4 x RAID 1 sets. That will make the biggest improvement for you.
    David Singleton
  • JFGcanes Member Posts: 11
    Thank you Harry & David for your prompt responses.

    In Kriki's blog there is a discussion that the full 1 GB for the Database Cache is not attainable. Is this still the case with version 4.0 SP3? Is 850,000 KB still the practical max?

    If the Database Service can be started with 1 GB, is there anything to fear that it could later malfunction, other than risking a page/swap? In my case I doubt this will happen, as I currently show over 1.5 GB of memory available on the server.
  • David_Singleton Member Posts: 5,479
    If you can get the cache to 1 GB and it starts, then you are fine. I have been able to get close to 1 GB, and if you start the server in a DOS window rather than as a service, then you can use the full 1 GB. The only problem is that one day you will restart the server, and the Navision server won't start. So it's safer to leave it around 900 MB.

    As to the version, it does not matter. Very little has changed in the server code since they introduced version 3.56 (which is when they added the Commit Cache) in 1994, and since the Windows version it's basically the same code. This cache bug has been there since 1.1.
    David Singleton
  • JFGcanes Member Posts: 11
    David,

    I use the Windows MMC to start/stop and change the Database Server parameters. Is this what you mean by "starting the server in a window"? I also shut down the Database Service with a "Net Stop" command before running Veritas Backups and then use the "Net Start" command to restart the Database Service. I will have to see if a 1 GB Cache setting will have any effect on restarting the service.

    Thanks again for all your help. This is a great forum! :thumbsup:

    Rgds,
    John
  • JFGcanes Member Posts: 11
    To All,

    As a followup, I can report my Database Service successfully starts with 1 GB of Database Cache, either via the MMC or through the "Net Start" command. If I ever do restart the server, I will report back as to whether or not the Database Service starts automatically.

    I would also like to report that the problem described above has been corrected by turning on the "Commit Cache". Our users are happy campers and that's really what my job is all about.

    Thanks again for your prompt replies, and to all for a great Nav forum.

    \:D/

    John Graney
    IDEAL Fastener Corporation
    Oxford, NC, USA
  • Malajlo Member Posts: 294
    Regarding the commit cache, I have some doubts about what happens in certain circumstances. Correct me if I'm wrong...

    So: native DB, 800 MB cache. After a large batch posting, another session (the NAS, actually) sees the data as posted BEFORE it is written to disk? That is how it should work.
    The problem occurs if the database server crashes. A rollback is done and the postings are not in the database (because the entries were never written!), but the NAS has already passed the data on as if it were posted.
    It may just be bad luck with the scheduler's timing, but it is also a little creepy...
  • kriki Member, Moderator Posts: 9,110
    The NAV DB cache can be read-only, but it can also contain writes (= COMMITCACHE).
    You can remove the COMMIT-cache on the NAV DB server, but this will slow down performance a lot.

    What happens with the COMMIT-cache enabled?
    Every write to the DB is taken by the COMMIT-cache, and when the client sends a COMMIT, the COMMIT-cache immediately sends a confirmation back to the client, so the client can continue.
    When the DB server has time, it writes the info to disk. Between accepting the committed data and writing it to disk, some time (in very bad cases even minutes) can pass.
    If in that time the server goes down, you lose that data and need to redo it.

    Some of the processes:
    -The maximum memory the cache gives to the COMMIT-cache is 2/3 of the cache memory. When this is full, the DB server first has to write something to disk before it can accept more changed data from the clients.
    -It also has some 'optimizations': reading data from disk is given more priority than writing data from the cache to disk. BUT when a changed record has been in the cache for longer than 1 minute, the DB server gives the write to disk more priority than the reads.

    If either of these two cases happens, you generally get bad performance, because only 1/3 of the cache is used for reads, so more reads are issued to disk (slower), but the reads have less priority, so they can queue up (again slower).

    You can see what happens on the database files: create a form on table "Database File" (it is read-only). There are some interesting counters:
    -"Reads in queue": the number of reads waiting to be done. Generally it is 0, 1 or 2 (it can go up when the write cache is full or a lot of records are waiting to be written).
    -"Writes in queue": the number of writes waiting to be done. Generally it is 0, but it can go up very quickly after COMMITs and then goes down slowly. On a well-performing system, this number should go down by at least 100 writes per second per database file.

    How to speed up the writes:
    -AVOID RAID 5! It has good read speed, but is very slow for writing!
    -Try to slow down the writes done by the NAS (see also http://www.mibuso.com/howtoinfo.asp?FileID=14, http://www.mibuso.com/forum/viewtopic.php?t=6945).
    -Divide the DB into multiple DB files, each on its own DEDICATED RAID 1 (each DB file on its own RAID 10 gives even better performance, but seems like overkill [and expensive]).
    -Try to run the NAS on the DB server itself (preferably 32-bit), so the network does not slow down the NAS (but you need at least 1 core for the DB server and 1 for the NAS).
    Regards, Alain Krikilion
    No PM,please use the forum. || May the <SOLVED>-attribute be in your title!
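The write-back behaviour kriki describes (acknowledge the client immediately, flush to disk later, with the commit cache capped at 2/3 of cache memory) can be sketched as a toy model. All names and sizes below are illustrative assumptions, not Navision internals:

```python
# Minimal write-back commit-cache sketch: commits are acknowledged before
# the data reaches disk; a full cache forces a flush, stalling the client.

from collections import deque

class CommitCache:
    def __init__(self, cache_slots):
        self.max_dirty = (2 * cache_slots) // 3  # commit cache gets 2/3 of cache
        self.dirty = deque()                     # records waiting to be flushed
        self.disk = []                           # what has actually hit disk

    def commit(self, records):
        """Accept a client's commit; the client is released before disk I/O."""
        if len(self.dirty) + len(records) > self.max_dirty:
            # Cache full: the server must flush before accepting more data,
            # so the client stalls here (the hang John observed).
            self.flush()
        self.dirty.extend(records)
        return True  # acknowledged immediately

    def flush(self):
        """Background flush: drain queued writes to disk."""
        while self.dirty:
            self.disk.append(self.dirty.popleft())


cache = CommitCache(cache_slots=900)
cache.commit([f"ledger-{i}" for i in range(100)])
print(len(cache.dirty), len(cache.disk))  # 100 0  -> queued, nothing on disk yet
cache.flush()
print(len(cache.dirty), len(cache.disk))  # 0 100  -> all flushed
```

The `dirty` queue plays the role of the "Writes in queue" counter on the "Database File" table: it jumps right after a COMMIT and drains as the server finds time to write, which is also where kriki's crash warning applies (anything still queued at a crash is lost and must be redone).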

