NAV 5 with an old native (*.fdb) database split into 10 parts of 20 GB each.
The virtual table "Database File" says each database file has an average read and write delay of 0-3 ms while users are working.
The underlying disc system is a huge SAN, so the hardware people say it is not an issue to have all the database parts on the same logical drive. I am a bit in doubt. I seem to recall that the .fdb files had to be on different drives for the native NAV server to take advantage of splitting the database into parts. In the old days, when one logical drive was one (simply mirrored) physical disc, that was due to physical constraints. But does NAV also assume that each logical drive can only be accessed once at a time, or can it take advantage of a SAN?
Anyone who can recall that? :-)
Regards
Peter
Answers
I still think multiple physical discs are always better (meaning faster writes), but I also have to agree with those suppliers about special array caches, etc.
I would like to know if I can expect performance to improve if I get 9 additional drives allocated from the SAN, so I can place each database part on its own drive letter.
The SAN consists of 500+ physical discs, and I have no control of the relation between physical discs and the drives allocated to each server.
Peter
If you have a SAN, it won't matter that much. The important part is that the SAN ALSO has write caching on.
I never did any tests with a SAN on classic NAV. But with some logical thinking, we can get somewhere.
First, NAV has a commit cache to flatten the peaks in writing. Within a few milliseconds, NAV issues a lot of writes; the commit cache takes them into memory and, when it has time, sends them on to the disks. This works very well as long as you don't have a sustained high number of writes to disk.
With a SAN, the same thing happens one level down. When NAV sends the writes to the SAN, the SAN takes them into its write cache and confirms they were written to disk (it is lying, because the data is not physically on the disk yet, but that is not a problem at all [I am not going into details here]). So to NAV it seems the disks (in reality the SAN cache) can process a very high number of writes (RAM is faster than spinning disks...).
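The commit-cache behaviour described above can be sketched as a simple write-back buffer. This is an illustrative Python toy, not NAV or SAN internals: writes are acknowledged from memory immediately, and the slow physical writes happen later.

```python
import collections

class WriteBackCache:
    """Toy write-back cache: acknowledge writes from RAM,
    flush to the slow backing store later (illustrative only)."""

    def __init__(self, backing_store):
        self.backing = backing_store
        self.pending = collections.OrderedDict()  # page -> data

    def write(self, page, data):
        # The write lands in RAM and is acknowledged immediately,
        # so the caller perceives a near-instant "disk" write.
        self.pending[page] = data

    def flush(self):
        # Done later, when there is time: the slow physical writes.
        for page, data in self.pending.items():
            self.backing[page] = data
        self.pending.clear()

disk = {}
cache = WriteBackCache(disk)
for i in range(5):
    cache.write(i, f"block-{i}")  # fast: confirmed from cache
assert len(disk) == 0             # nothing physically on "disk" yet
cache.flush()                     # the backing store catches up
assert len(disk) == 5
```

This is why a sustained write load eventually hurts: once the cache fills faster than `flush` can drain it, the writer is back to disk speed.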
Of course, this only works if the SAN also has write caching enabled. I have noticed most SANs are configured with 10% write cache and 90% read cache. For SQL Server, that read cache is utterly useless, because SQL Server generally has more memory available on the server it runs on than the SAN has cache.
So you will probably see better write performance on the SAN (with a write cache of at least a GB or 2) than on DAS disks with multiple RAID-1 pairs and one NAV database file on each pair. (PS: this is only for a native NAV DB; it is not the case for SQL Server!)
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
I'm currently trying to get the customer to create 4 additional drives for the database. If they accept, we should be able to see if it makes a difference.
I'll get back to you with updated info
Peter
Back in those days, high-end SANs with tons of cache memory simply did not exist in the PC world. In fact, neither did RAID-10 for the most part; it did exist, but was outside most budgets. At best, the choices were RAID-1 and RAID-5, and RAID-5 was not a consideration due to performance.
Next, consider the physical drives of the day. A 16 GB drive was "state of the art" back then, not the 300 GB+ disks we see today. Thus the limits for the native DB (16 files, with a max DB size of 256 GB) sort of make sense: that is 16 files of 16 GB each.
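The arithmetic behind that native limit can be checked in one line (numbers as stated above):

```python
files_max = 16      # native NAV limit on database parts
file_size_gb = 16   # "state of the art" drive capacity of the day
max_db_gb = files_max * file_size_gb
assert max_db_gb == 256  # matches the native DB maximum cited above
```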
Considering all the above, you arrive at the optimal configuration of 16 RAID-1 pairs with a single DB file on each. NAV performance comes not so much from the separate RAIDs as from the separate DB files. But when you are limited to RAID-1, the separate arrays do play a larger role in performance.
But the availability of high-performance RAID-10 changes the above considerations. RAID-10 can easily handle the I/O from multiple files, so one option is to put all the DB files onto a single RAID-10. This was actually a recommendation made by a senior MS engineer a number of years back.
All this being said, my recommendation on the SAN would be to not be so concerned about separate disks. Rather, spread your DB across as many DB files as you can, and then let the SAN deal with the I/O.
Peter
One addition: even with all the space you have, it is best not to make the database files too big. Rather, try to have multiple DB files (the max is 16). Multiple small DB files on one physical disk are faster than one big file.
No PM, please use the forum. || May the <SOLVED>-attribute be in your title!
So would two 16 GB .fdb files per drive be better? Please confirm.
Just pick a number of files to spread the data across. A few things to keep in mind:
1. The more files the better provided the disk system can support the I/O.
2. Files should be created equal in size and maintained as such, meaning you expand them all at the same time.
3. Each additional file creates a slave service process. This increases the memory demand on the server, and it may also be a factor in how many files you create. Be sure your server has enough memory; paging the service processes can have an impact on performance.
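Points 1 and 2 above can be sketched as a small sizing helper. This is a hypothetical illustration, not a NAV tool: the function name, the 16-file cap, and the example sizes are taken from this thread, and each part is rounded up so the parts together cover the whole database.

```python
def plan_db_files(total_gb, n_files, max_files=16):
    """Split a database evenly across n_files equal parts.
    Illustrative sizing sketch, not an actual NAV utility."""
    if not 1 <= n_files <= max_files:
        raise ValueError(f"native NAV DB supports 1..{max_files} files")
    # Ceiling division: round each part up so n_files parts
    # together hold at least total_gb.
    part_gb = -(-total_gb // n_files)
    return [part_gb] * n_files

# The 200 GB database from this thread, spread across 10 parts:
print(plan_db_files(200, 10))  # ten equal 20 GB parts
```

Keeping the parts equal, and growing them all at the same time, is exactly point 2: the server spreads writes across the files, and unequal files defeat that.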