
Is it important to have FDB files on different drives when using a SAN?

NAV 5 with an old native (*.fdb) database split into 10 parts of 20 GB each.
The virtual table "Database File" says each database file has an average read and write delay of 0-3 ms while users are working.

The underlying disc system is a huge SAN, so the hardware people say it is not an issue to have all the database parts on the same logical drive. I am a bit in doubt. I seem to recall that FDB files had to be on different drives for the native NAV server to take advantage of splitting the database into parts. In the old days, when one logical drive was one (simply mirrored) physical disc, that was due to physical constraints. But does NAV also assume that each logical drive can only handle one access at a time, or can it take advantage of a SAN?

Can anyone recall that? :-)
Regards
Peter

Answers

  • mdPartnerNL Member Posts: 802
    For NAV, multiple logical partitions are OK; when writing, it will divide the data across them.

    I still think multiple physical drives are always better (meaning faster writes), but I also have to agree with the suppliers who offer special array caches, etc.
  • pdj Member Posts: 643
    Then I didn't make it clear, sorry. All the database parts are on the D: drive, which is allocated on a huge SAN.

    I would like to know if I can expect the performance to improve if I get allocated 9 additional drives from the SAN, so I can place each database part on its own drive letter.

    The SAN consists of 500+ physical discs, and I have no control over the relation between the physical discs and the drives allocated to each server.
    Regards
    Peter
  • kriki Member, Moderator Posts: 9,096
    Hi,

    If you have a SAN, it won't matter that much. The important part is that the SAN ALSO has write caching turned on.
    I never did any tests with a SAN on a classic NAV, but with some logical thinking we can get somewhere.

    First, NAV has a commit cache to flatten the peaks in writing: within a few milliseconds, NAV issues a lot of writes. The commit cache takes those into memory and, when it has time, sends them to the disks. This works very well as long as you don't have a sustained high number of writes to disk.

    The SAN does the same thing with its write cache. When NAV sends the writes to the SAN, the SAN takes them into its write cache and confirms they were written to disk (it is lying, because the data is not physically on disk yet, but that is not a problem at all [I am not going into details here]). So to NAV it seems the disks (in reality the SAN cache) can process a very high number of writes (RAM is faster than spinning disks...).
    Of course, this is only possible if the SAN also has write caching enabled. I have noticed most SANs have 10% write cache and 90% read cache. For SQL Server, this read cache is utterly useless because SQL Server generally has more memory available on the server it runs on than the SAN has cache.

    So you will probably see better write performance on the SAN (with a write cache of at least a GB or two) than on DAS disks with multiple RAID-1 pairs and a NAV database file on each pair. (PS: this is only for a native NAV DB; it is not the case for SQL Server!)
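    To picture the idea, here is a minimal write-back cache sketch in Python (generic code, not NAV or SAN internals): writes are acknowledged as soon as they land in an in-memory buffer, and a background worker flushes them to disk when it has time, which flattens short bursts.

    ```python
    # Minimal sketch of a write-back cache (not NAV code): callers get an
    # immediate "done", the flusher writes to physical storage later.
    import queue
    import threading

    class WriteBackCache:
        def __init__(self, path):
            self._buffer = queue.Queue()
            self._file = open(path, "ab")
            flusher = threading.Thread(target=self._flush_loop, daemon=True)
            flusher.start()

        def write(self, data: bytes):
            # "Confirm" right away, just like the SAN acknowledging a write
            # that so far only sits in its cache.
            self._buffer.put(data)

        def _flush_loop(self):
            # Drain the buffer to disk whenever there is time.
            while True:
                self._file.write(self._buffer.get())
                self._file.flush()

    # A burst of writes returns almost instantly; the flusher catches up later.
    cache = WriteBackCache("demo.dat")
    for i in range(1000):
        cache.write(b"record %d\n" % i)
    ```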
    Regards, Alain Krikilion


  • pdj Member Posts: 643
    I have found some internal Navision documentation (sorry, no source) specifying that the DBMS creates a session per disc to enable writing in parallel. But it also says to put one database part per disc, so it might be a bit inaccurate.
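    Roughly, I picture that "session per disc" idea like this (just a sketch with made-up file names, not the actual NAV implementation): each database part gets its own writer, so writes to the parts can run in parallel.

    ```python
    # Sketch only: one writer "session" per database part, writing in parallel.
    from concurrent.futures import ThreadPoolExecutor

    db_parts = ["fin1.fdb", "fin2.fdb", "fin3.fdb", "fin4.fdb"]  # made-up names

    def write_part(path, data: bytes):
        # Each session appends to its own part, independently of the others.
        with open(path, "ab") as part:
            part.write(data)

    with ThreadPoolExecutor(max_workers=len(db_parts)) as sessions:
        for path in db_parts:
            sessions.submit(write_part, path, b"committed data\n")
    ```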

    I'm currently trying to get the customer to create 4 additional drives for the database. If they accept, we should be able to see if it makes a difference.

    I'll get back to you with updated info :smile:
    Regards
    Peter
  • mdPartnerNL Member Posts: 802
    Think about this option too: if one logical drive is enough for NAV... is it better to have 2, 3, 4, 5 or 6 parts?
  • bbrown Member Posts: 3,268
    You first have to put the era in which the native database was designed into perspective. That is the basis for the Microsoft recommendations and for how they can be adapted to the modern era.

    Back in those days, high-end SANs with tons of cache memory simply did not exist in the PC world. In fact, neither did RAID-10 for the most part; it existed, but it was outside most budgets. Realistically, the choices were RAID-1 and RAID-5, and RAID-5 was not a consideration due to performance.

    Next, consider the physical drives of the day. A 16 GB drive was "state of the art" back then, not the 300 GB+ disks we see today. Thus the limits of the native DB (16 files with a max database size of 256 GB) sort of make sense: that is 16 files of 16 GB each.

    Considering all the above, you arrive at the optimal configuration of 16 RAID-1 pairs with a single DB file on each. NAV performance comes not so much from the separate RAID sets as from the separate DB files, but when you are limited to RAID-1 the disk layout does play a larger role in performance.

    But the availability of high-performance RAID-10 changes the above considerations. RAID-10 can easily handle I/O from multiple files, so one option is to put all the DB files onto a single RAID-10. This was actually a recommendation made by a senior MS engineer a number of years back.

    All this being said, my recommendation on the SAN would be not to worry so much about separate disks, but rather to spread your DB across as many DB files as you can and let the SAN deal with the I/O.



  • pdj Member Posts: 643
    bbrown wrote: »
    ...to put all the DB files onto a single RAID-10. This was actually a recommendation made by a senior MS engineer a number of years back.
    Really? I have never seen that in print. Unless it was Peter Bang saying it, I would prefer to test it for confirmation :neutral:
    Regards
    Peter
  • kriki Member, Moderator Posts: 9,096
    I confirm what bbrown writes.
    One addition: even with all the space you have, it is best not to make the database files too big. Rather, try to have multiple DB files (the max is 16). It is faster with multiple small DB files on one physical disk than with one big file.
    Regards, Alain Krikilion


  • mdPartnerNL Member Posts: 802
    OK, I have a customer with 4 logical drives (SAN), with one .fdb file of 30 GB per drive. It runs on VMware and sometimes has problems with slowness while writing.

    So would two .fdb files of 16 GB per drive be better? Please confirm.
  • bbrown Member Posts: 3,268
    We're talking more like 8 files x 8 GB, or even 16 files x 4 GB. You do expect growth?

  • mdPartnerNL Member Posts: 802
    Yes, but in my situation the database is 130 GB and 70% filled.
  • bbrown Member Posts: 3,268
    mdPartnerNL wrote: »
    Yes, but in my situation the database is 130 GB and 70% filled.

    Just pick a number of files to spread the data across. A few things to keep in mind:

    1. The more files the better, provided the disk system can support the I/O.

    2. Files should be created at equal sizes and maintained that way, meaning you expand them all at the same time (see the sizing sketch below).

    3. Each additional file creates a slave service process, which increases the memory demand on the server. This may also be a factor in how many files you create. Be sure your server has enough memory; paging the service processes can have an impact on performance.
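    For the sizing itself it is just arithmetic; a quick sketch (the growth factor is only an assumption, pick your own):

    ```python
    # Back-of-the-envelope sizing: spread the data over N equally sized files,
    # leaving headroom for growth. All numbers here are examples.
    used_gb = 130 * 0.70     # ~91 GB of data (130 GB database, 70% filled)
    growth_factor = 1.5      # assumed headroom for growth
    file_count = 8           # e.g. 8 files; 16 is the native-DB maximum

    total_gb = used_gb * growth_factor
    per_file_gb = total_gb / file_count
    print(f"{file_count} files of ~{per_file_gb:.0f} GB each ({total_gb:.0f} GB total)")
    # e.g.: 8 files of ~17 GB each (136 GB total)
    ```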

  • mdPartnerNL Member Posts: 802
    The server cache for the service is 1,000 MB and the server has 8 GB of memory, thx.
  • bbrown Member Posts: 3,268
    When you have the multi-file database running as a service, take a look at the processes in Windows Task Manager and you will see what I mean: there will be a "Server.exe" for the first file, plus a "Slave.exe" for each additional file. I don't recall all the details (native was oh so long ago), but I do recall that it would use additional memory for each slave.
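    If you want to check the memory use of those processes yourself, something along these lines works (it assumes the third-party psutil package; the process names are from memory, so verify them on your own server):

    ```python
    # List the native-DB server/slave processes and their memory use.
    # Assumes "pip install psutil"; process names are as I remember them.
    import psutil

    for proc in psutil.process_iter(["name", "memory_info"]):
        name = (proc.info["name"] or "").lower()
        mem = proc.info["memory_info"]
        if name in ("server.exe", "slave.exe") and mem:
            print(f"{name:12s} pid={proc.pid:6d} ~{mem.rss / (1024 * 1024):.0f} MB")
    ```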
