How is the Navision database handled?

RichardW Member Posts: 14
edited 2005-10-10 in Navision Financials
I'm not the developer for Navision, but I'm helping the team solve a performance problem that is affecting the users, so please bear with my ignorance of the software.

Currently the database is spread across 3 partitions, each on mirrored drives. It is split into 16 files of about 2 GB each. Apparently each file starts a SLAVE.EXE process, so there are 17 slave.exe processes running in total.

The Navision team was thinking about making the database files larger, 3+ GB each. I'm wondering if that will actually hurt performance, but I'm not sure how the files are handled. When users make changes, does that entire 2 GB file have to be rewritten on the file system? Wouldn't it actually be faster if the database were split into as many files as manageable, let's say 100 MB apiece?

But then back to slave.exe - if there were more files, there would be a lot of those running. Is that bad or good? Can slave.exe be multithreaded into a single process?

Comments

  • wonmo Member Posts: 139
    What are the other parameters set at? i.e. cache, memory, database used, etc.
  • RichardW Member Posts: 14
    wonmo wrote:
    What are the other parameters set at? i.e. cache, memory, database used, etc.

    commitcache=yes, cache=900000, database=e:\data\database.fdb

    The server has 2gb of memory. Navision version 2.01. Soon to be 3.6.
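    For reference, these settings would typically appear together when starting the Navision server. A hedged sketch combining only the values quoted above (the exact invocation syntax varies by version and installation, so treat this as illustrative, not authoritative):

```shell
# Illustrative only -- parameter names taken from the post above,
# exact server command-line syntax depends on the Navision version.
server database=e:\data\database.fdb, cache=900000, commitcache=yes
```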
  • wonmo Member Posts: 139
    what is the "database used"?
  • RichardW Member Posts: 14
    wonmo wrote:
    what is the "database used"?

    I'm not sure what your question is. It's a 2.01 database; it's the proprietary Navision database. It's currently 33 GB in size, split into 17 files across 3 mirrored partitions.
  • wonmo Member Posts: 139
    go into the database information screen and you should have 2 fields. One of them will be "database size" which in your case would be 33 gig. The other will be "database used" which will tell you how much of that database you are actually using.
  • RichardW Member Posts: 14
    wonmo wrote:
    go into the database information screen and you should have 2 fields. One of them will be "database size" which in your case would be 33 gig. The other will be "database used" which will tell you how much of that database you are actually using.

    Okay I found it.

    Database used 20360412 65%
    Database size 31250000 24%
    DBMS Cache 900000
    Commit Cache checked
    Object Cache 5000
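    As a sanity check on those figures (assuming, as these screens usually do, that both numbers are in KB), a quick Python sketch shows how the 65% usage figure and the "33gb" mentioned earlier line up:

```python
# Figures from the Database Information screen (units assumed to be KB)
db_used_kb = 20360412
db_size_kb = 31250000

used_pct = db_used_kb / db_size_kb * 100  # fraction of allocated space in use
size_gb = db_size_kb / 1e6                # KB -> GB (decimal), illustration only

print(f"Database size: ~{size_gb:.2f} GB")  # roughly the 33 GB quoted earlier
print(f"Database used: ~{used_pct:.0f}%")   # matches the 65% on the screen
```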
  • Dean_Axon Member Posts: 193
    Database used 20360412 65%
    Database size 31250000 24%

    and you're still using the native DB ??? :o

    How long have you been storing this data ???

    I would recommend it's either time to think about compressing the database OR switching to the SQL version :idea:
    Remember: Keep it simple
  • RichardW Member Posts: 14
    Dean Axon wrote:
    Database used 20360412 65%
    Database size 31250000 24%

    and you're still using the native DB ??? :o

    How long have you been storing this data ???

    I would recommend it's either time to think about compressing the database OR switching to the SQL version :idea:

    It was explained to me that the proprietary Navision database is faster than an SQL database. How do we compress it?

    What about my original question? :?:
  • Savatage Member Posts: 7,142
    do you optimize your tables frequently?
  • RichardW Member Posts: 14
    Savatage wrote:
    do you optimize your tables frequently?

    I was reading up on the Navision manual and found out about optimizing. I asked the developers about this and they told me they have never optimized it. Last week they started doing it, but it's taking a long time...

    But for this particular thread, I would like to find out how Navision handles the files. Do they have to be rewritten when changes are made? Which is better: fewer, bigger files, or more, smaller files?
  • nelson Member Posts: 107
    A bigger number of smaller files is the best option.
    Even better if these files are divided across different physical disks.

    As for the various SLAVE.EXE processes running, they all share the same memory space. One is needed for each database file.

    Don't switch to SQL Server. The proprietary Navision database is quite capable and allows an almost "brainless" administration (when compared to SQL Server).
    Of course, if you have a few key reasons, don't hold back on rolling out SQL Server just because...

    The "Installation & System Management: Microsoft Business Solutions-Navision Database Server" manual (w1w1ism.pdf) will give you a good overview of issues related to Navision's DB and provide a nice reference guide. From your post, it looks like you are already looking into it.
    Further info can be found on the online communities/user groups.

    I hope this helps you to get a better grip on Navision's DB.
    Nelson Alberto
  • RichardW Member Posts: 14
    nelson wrote:
    A bigger number of smaller files is the best option.
    Even better if these files are divided across different physical disks.

    Thanks! Can you give me a brief explanation why?
  • wonmo Member Posts: 139
    In terms of resolving performance issues, I don't think that changing file sizes and partitioning files over more hard drives will have much of an effect, considering the things you have already done. If you really want to increase performance noticeably, I would suggest looking at what is actually causing the slowdown. It may simply be a large, heavily used table with poorly designed secondary keys, inappropriate use of FlowFields, etc.

    Look for the largest tables in the database and do an analysis on them. That should probably be your first step.
  • rthsw Member Posts: 73
    Navision starts a slave for every database file (and sometimes one for the write buffer). Every slave is able to handle its own data (the data in its database file) by itself. So first: for every database file you create, you gain performance.

    Because every slave handles only its own data, it doesn't interact with any other slave; that is done only by SERVER.EXE. So if you have 2 or more database files on ONE physical drive, the drive has to listen to 2 masters and will spend more time seeking and command-swapping than doing reads and writes. So second: for every database file MORE THAN ONE on ONE PHYSICAL drive, you lose a LOT of performance. It is better to have 2 x 16 GB files on 2 hard drives than 5 x 6 GB files on 4 hard drives!

    You will see a significant speed increase going from one file on one drive to 2 files on 2 drives (up to 70% faster response on reads AND writes). Every additional hard drive (and file) gives a smaller improvement: around 40% on the 3rd, 20% on the 4th, and so on.

    But last: the biggest bottleneck in Navision is ALWAYS the program code. There is an easy way to test its quality: set the cache down to 10,000 (yes, 10 MB). Your database should still be usable by 1-3 users, and response times should, even in the worst case, be less than 2 seconds for all dialog actions. If you notice long search times when using Ctrl+F (Find), something is wrong with the HANDLING of Navision; get some add-on training from your NSC. If the system itself is slow (e.g. F11 on an order with 10 lines takes longer than 5 seconds), the program code should be redesigned.

    The Navision DB server is (on the same hardware) at least 2-5 times faster than SQL.
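    rthsw's percentages are anecdotal, but treating them as compounding gains gives a rough feel for the diminishing returns he describes. A small illustrative model (the 70/40/20 figures are his estimates, not measurements):

```python
# Hypothetical per-drive gains quoted in the post above: +70% for the
# 2nd drive, +40% for the 3rd, +20% for the 4th.
gains = [0.70, 0.40, 0.20]

speedup = 1.0  # baseline: one database file on one physical drive
for drives, g in enumerate(gains, start=2):
    speedup *= 1 + g
    print(f"{drives} drives: ~{speedup:.2f}x single-drive throughput")
```

    Under that model the fourth drive still helps, but the per-drive payoff shrinks quickly, which matches the advice to prioritize the first split across physical disks.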
  • nelson Member Posts: 107
    There you go Richard, I think rthsw covered the main issues.
    Nelson Alberto
  • spycraft Member Posts: 66
    informative comment by rthsw... :)
  • Kowa Member Posts: 923
    The DBMS usually spreads the big tables over more than one .fdb file. If you optimize these, they are pulled back into one .fdb file (if space is sufficient). This decreases the search times (and the space used!) but increases the time to write new records to the database. Also, you will have to do this regularly (i.e. once a week) to keep the optimization level around 90%. I would recommend it only if you're running out of disk space.

    We have customers with database sizes of up to 80 GB using the native DB;
    up to 256 GB is possible (= 204 GB with 80% database used, don't take it further).
    Size isn't the problem with the native server; the major drawback is the table locking.

    The 2.01 server only handles 2 GB .fdb files, but you can do a technical update of clients and server (not of the code!) to 2.6 (or even Attain 3.7, but then the users will get the Attain online help) to handle larger files.
    Kai Kowalewski
  • Tommy_Schou Member Posts: 117
    RichardW wrote:
    ......

    The Navision team was thinking about making the database files larger, 3+ GB each. I'm wondering if that will actually hurt performance, but I'm not sure how the files are handled. When users make changes, does that entire 2 GB file have to be rewritten on the file system? Wouldn't it actually be faster if the database were split into as many files as manageable, let's say 100 MB apiece?......

    I don't think anyone actually answered this question, so I will give it a go :)

    When changes are made, the entire DB file is NOT rewritten. Only the relevant data in the file(s) is written/rewritten. So if you create a new customer in a database of, say, 100 MB, it will go just as fast (or slow) as in a database of 32 GB (all things being equal :) )
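    A minimal Python sketch of this page-level behavior, using an assumed 4 KB page size purely for illustration (the native .fdb format's real page size and layout are not documented here):

```python
import os
import tempfile

PAGE_SIZE = 4096  # assumed page size, for illustration only

# Create a dummy "database file" of 1000 zeroed pages.
path = os.path.join(tempfile.mkdtemp(), "dummy.fdb")
with open(path, "wb") as f:
    f.truncate(1000 * PAGE_SIZE)

def write_page(path, page_no, payload):
    """Rewrite a single page in place; the rest of the file is untouched."""
    with open(path, "r+b") as f:
        f.seek(page_no * PAGE_SIZE)   # jump straight to the page
        f.write(payload[:PAGE_SIZE])  # write only PAGE_SIZE bytes

# "Insert a record": one page-sized write, regardless of whether the
# file is 100 MB or 32 GB -- nothing else in the file is rewritten.
write_page(path, 42, b"new customer record".ljust(PAGE_SIZE, b"\x00"))
```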

    As others have stated, the biggest performance gain you will ever get in a native DB is to split the database into 2 GB chunks with one chunk on each PHYSICAL drive. The more drives you have to read/write data, the faster it will go. But only ONE part on each drive. The performance gain will decrease for every new set of drives you add, but initially you should get at least double the performance you are used to.
    Best regards
    Tommy
  • girish.joshi Member Posts: 407
    Because every slave handles only its own data, it doesn't interact with any other slave; that is done only by SERVER.EXE. So if you have 2 or more database files on ONE physical drive, the drive has to listen to 2 masters and will spend more time seeking and command-swapping than doing reads and writes. So second: for every database file MORE THAN ONE on ONE PHYSICAL drive, you lose a LOT of performance. It is better to have 2 x 16 GB files on 2 hard drives than 5 x 6 GB files on 4 hard drives!

    So I think it's pretty clear that having multiple slave.exe's for multiple disks will speed things up by letting things run in parallel. But do we really lose performance when one drive is listening to two masters? I mean, given the same set of read requests, does it matter whether they come from one or from two different slave.exe processes? The only way I can see a single slave.exe being better is if slave.exe does some kind of internal optimization to queue up the requests more efficiently (which seems tricky to implement). Does anyone know if any behavior like this happens?