I'm not the developer for Navision, but I'm helping the team solve a performance problem that is affecting the users, so please bear with my ignorance of the software.
Currently the database is spread across 3 partitions, each on mirrored drives. It is split into 16 files, each about 2 GB in size. Apparently each file starts a SLAVE.EXE process, so there are a total of 17 slave.exe's running.
The Navision team was thinking about making the database files larger, as in 3+ GB each. I'm wondering if that will actually hurt performance, but I'm not sure how the files are handled. When users make changes, does that entire 2 GB file have to be rewritten on the file system? Wouldn't it actually be faster if the database was split into as many files as manageable, let's say 100 MB apiece?
But then back to the slave.exe - if there were more files, then there would be a lot of those running - is that bad or good? Can slave.exe be multithreaded into a single process?
commitcache=yes, cache=900000, database=e:\data\database.fdb
The server has 2 GB of memory. Navision version 2.01, soon to be 3.6.
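For reference, those settings correspond to startup parameters of the Navision Database Server. A sketch of what the full server command line might look like follows; the server name is a placeholder I've assumed, and only `database`, `cache` and `commitcache` come from the settings quoted above:

```
rem Hypothetical startup line -- "NAVSRV" is an assumed name, not from the post
server.exe servername=NAVSRV, database=e:\data\database.fdb, cache=900000, commitcache=yes
```

Note that `cache=900000` is in KB, i.e. roughly 900 MB of the server's 2 GB of RAM.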
I'm not sure what your question is. It's a 2.01 database, the proprietary Navision database. It's currently 33 GB in size, split into 17 files across 3 mirrored partitions.
Okay I found it.
Database used   20360412   65%
Database size   31250000   24%
DBMS Cache      900000
Commit Cache    checked
Object Cache    5000
And you're still using the native DB???
How long have you been storing this data ???
I would recommend it's either time to think about compressing the database OR switching to the SQL version :idea:
It was explained to me that the Navision proprietary database is faster than a SQL database. How do we compress it?
What about my original question? :?:
I was reading up in the Navision manual and found out about optimizing. I asked the developers about this and they told me they have never optimized it. Last week they started doing it, but it's taking a long time...
But for this particular thread, I would like to find out how Navision handles the files. Do they have to be rewritten when changes are made? Which is better: bigger and fewer files, or smaller and more files?
Even better if these files are divided across different physical disks.
As for the various SLAVE.EXE processes running they are all sharing the same memory space. One is needed for each database file.
Don't switch to SQL Server. The proprietary Navision database is quite capable and allows an almost "brainless" administration (when compared to SQL Server).
Of course, if you have a few key reasons, don't hold back on rolling out SQL Server just because...
The "Installation & System Management: Microsoft Business Solutions-Navision Database Server" manual (w1w1ism.pdf) will give you a good overview of issues related to Navision's DB and provide a nice reference guide. From your post, it looks like you are already looking into it.
Further info can be found on the online communities/user groups.
I hope this helps you to get a better grip on Navision's DB.
Thanks! Can you give me a brief explanation why?
Look for the largest tables in the database and do an analysis on them. That should be probably your first step.
Because every slave handles only its own data, it doesn't interact with any other slave; that coordination is done only by the server.exe. So if you have 2 or more database files on ONE physical drive, the drive has to listen to 2 masters, and it will spend more time seeking and swapping commands than actually reading and writing. In short: for every database file beyond the first on ONE PHYSICAL drive, you lose A LOT of performance. It is better to have 2 x 16 GB files on 2 hard drives than 5 x 6 GB files on 4 hard drives!
You will get a significant speed increase going from one file on one drive to 2 files on 2 drives (up to 70% faster response on reads AND writes). Each additional hard drive (and file) brings a smaller improvement: around 40% for the 3rd, 20% for the 4th, and so on.
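The diminishing returns described above can be put into a quick back-of-the-envelope model. The per-drive gain figures (70%, 40%, 20%, then halving) are taken from the post; treating the gains as multiplying together, and halving beyond the 4th drive, are my own simplifying assumptions, not anything Navision documents:

```python
def relative_speed(num_drives: int) -> float:
    """Estimated read/write speed relative to a single drive.

    Assumption: each extra drive's gain (70%, 40%, 20%, then halving)
    compounds multiplicatively on the previous total.
    """
    gains = [0.70, 0.40, 0.20]  # gain added by the 2nd, 3rd, 4th drive
    speed = 1.0
    for i in range(num_drives - 1):
        if i < len(gains):
            gain = gains[i]
        else:
            gain = gains[-1] / 2 ** (i - len(gains) + 1)  # assumed tail-off
        speed *= 1.0 + gain
    return speed

for n in range(1, 5):
    print(f"{n} drive(s): {relative_speed(n):.2f}x")
```

Under these assumptions, 2 drives give 1.7x, 3 drives about 2.4x, and 4 drives about 2.9x, which matches the "most of the win comes from the first split" point made above.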
But in the end, the biggest bottleneck in Navision is ALWAYS the program code. There is an easy way to test its quality: set the cache down to 10,000 (yes, 10 MB). Your database should still be usable by 1-3 users, and response times should be under 2 seconds for every dialog action, even in the worst case. If you see long search times when using Ctrl+F (Find), you are doing something wrong with the HANDLING of Navision; get add-on training from your NSC. If the system itself is slow (e.g. F11 on an order with 10 lines takes longer than 5 seconds), the program code should be redesigned.
The Navision DB server is (on the same hardware) at least 2-5 times faster than SQL.
We have customers with database sizes of up to 80 GB using native db,
up to 256 GB is possible ( = 204 GB with 80% database used; don't take it further ).
Size isn't the problem with the native server, the major drawback is the table locking.
The 2.01 Server only handles 2 GB .fdb files, but you can do a technical update of clients and server (not of the code!) to 2.6 (or even Attain 3.7, though then the users will get the Attain online help) to handle larger files.
I don't think anyone actually answered this question, so I will give it a go.
When changes are made, the entire db-file is NOT rewritten. Only the relevant data in the file(s) is written/rewritten. So if you create a new customer in a database of, say, 100 MB, it will go just as fast (or slow) as in a database of 32 GB (all things being equal).
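This is the standard in-place update pattern any database server uses: seek to the block that holds the record and rewrite only that block, so the cost of a change is independent of total file size. A minimal sketch in Python; the 4 KB page size is an assumption for illustration, not Navision's actual internal block size:

```python
import os

PAGE_SIZE = 4096  # assumed page size for illustration only


def write_page(path: str, page_no: int, payload: bytes) -> None:
    """Overwrite a single page in place; cost does not depend on file size."""
    assert len(payload) <= PAGE_SIZE
    with open(path, "r+b") as f:
        f.seek(page_no * PAGE_SIZE)          # jump straight to the page
        f.write(payload.ljust(PAGE_SIZE, b"\x00"))  # rewrite only that page


# Create a sparse 100 MB "database file", then update one page in it:
with open("demo.fdb", "wb") as f:
    f.truncate(100 * 1024 * 1024)

write_page("demo.fdb", 12345, b"new customer record")
# Only one 4 KB page was touched -- the same would hold for a 32 GB file.
os.remove("demo.fdb")
```

The file size never changes and nothing outside the targeted page is rewritten, which is why inserting one customer costs the same in a small or a huge database.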
As others have stated, the biggest performance gain you will ever get with a native DB comes from splitting the database into 2 GB chunks with one chunk on each PHYSICAL drive. The more drives you have reading/writing data, the faster it will go. But only ONE part on each drive. The performance gain decreases with every new set of drives you add, but initially you should get at least double the performance you are used to.
Tommy
So, I think it's pretty clear that having multiple slave.exe's for multiple disks will speed things up by letting things run in parallel. But does it really lose performance when a drive is listening to two masters? I mean, given the same set of read requests, does it matter whether they come from one or from two different slave.exe processes? The only way I can see a single slave.exe being better is if slave.exe does some kind of internal optimization to queue up the requests more efficiently (which seems tricky to implement). Does anyone know if any behavior like this happens?