4.01 and SQL - separate TEMPDB, CIs and Large Tables ?

Pidi Member Posts: 35
Currently we are migrating from 2.6 to 4.01 with SQL Server. Our NSC sold us three disk units with 21 mirrored disks altogether, so we have 42 drives of 36 GB, 15k, U320 each. Lots of capacity with lots of redundancy ...

The current configuration for a 30 GB database is:
2 x 6 disk stripe for the database parts 1 and 2
1 x 5 disk stripe for the TLOG
1 disk for the MDB
(the other drives are hotspares)

I don't think it's very clever to run a 30 GB database on two logical drives made up of 12 disks in a RAID 10 configuration. So I'm really wondering whether it makes sense to separate things like CLUSTERED INDEXES or LARGE TABLES from the rest of the database. Besides that, our TempDB is still on C: from the SQL Server setup, which, I have heard, is not very clever.

Another configuration might look like this:

1 x 6 disk stripe for the database
1 x 3 disk stripe for Clustered Indexes
1 x 3 disk stripe for Large Tables like Customer Ledger Entry
1 x 4 disk stripe for the TLOGs (still enough for 120 GB)
1 x 2 disk stripe for the TEMPDB and the .MDB

Does anyone have any experience with such a configuration, where parts of the database are on separate drives?
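
Just to make the idea concrete: on the SQL Server side such a separation would presumably be done with filegroups rather than with drive letters alone. A rough, untested sketch of what that could look like (the database, filegroup, table, column and path names below are placeholders only, not our real object names):

    -- Placeholder sketch: add a filegroup on the dedicated stripe ...
    ALTER DATABASE [NavisionDB] ADD FILEGROUP [LargeTables]
    ALTER DATABASE [NavisionDB]
    ADD FILE (NAME = N'NavisionDB_Large1',
              FILENAME = N'F:\MSSQL\Data\NavisionDB_Large1.ndf',
              SIZE = 4096MB, FILEGROWTH = 512MB)
    TO FILEGROUP [LargeTables]
    GO
    -- ... and move a table there by recreating its clustered index on that filegroup.
    -- (The index definition must stay identical for WITH DROP_EXISTING to work.)
    CREATE UNIQUE CLUSTERED INDEX [MyCompany$Cust_ Ledger Entry$0]
    ON [dbo].[MyCompany$Cust_ Ledger Entry] ([Entry No_])
    WITH DROP_EXISTING
    ON [LargeTables]
    GO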

Pidi
Michael Peters
Bos Fod GmbH Düsseldorf
+49 2132 139-0

Comments

  • Pidi Member Posts: 35
    Having incidentally found an article on "sql-server-performance.com", I can now answer the above questions myself.

    The article http://www.sql-server-performance.com/filegroups.asp answers my questions: I should not move CIs or Large Tables to other drives.

    So I will only follow the plan for TEMPDB and leave the rest to the RAID stripes.
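
    For the TEMPDB part, the move itself should just be the standard ALTER DATABASE ... MODIFY FILE step plus a SQL Server service restart; the logical names below are the defaults, and the paths are placeholders for the dedicated stripe:

        USE master
        GO
        -- Point the tempdb files at the new stripe; takes effect on the next service restart.
        ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\MSSQL\Data\tempdb.mdf')
        ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\MSSQL\Data\templog.ldf')
        GO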

    Pidi
    Michael Peters
    Bos Fod GmbH Düsseldorf
    +49 2132 139-0
  • DenSter Member Posts: 8,307
    MS Navision support told us they tried moving individual tables and indexes into separate files, and said it did not make much of a difference.
    I don't think it's very clever to run a 30 GB database on two logical drives made up of 12 disks in a RAID 10 configuration.
    What about this is not very clever? I am wondering what it is that makes you think your setup is not a good idea.
  • Pidi Member Posts: 35
    Good to know what MS has tested - thanks for that.

    If a SCSI U320 15k drive can handle at least 50 MB/s even under bad circumstances, two 6-drive stripes mean a theoretical throughput of about 600 MB/s. Since we have one separate controller channel for each 6-disk stripe, even the PCI-X controllers should theoretically be able to handle such a throughput. This, however, brings up the question of how much traffic a server with two Xeons and 4 GB of RAM can actually generate.

    Maybe we should look for a good benchmark tool to find out what the maximum is.

    As I mentioned before, I have changed my mind and would no longer say in general that the described disk concept is a bad idea ...

    Pidi
    Michael Peters
    Bos Fod GmbH Düsseldorf
    +49 2132 139-0
  • davmac1 Member Posts: 1,283
    According to Navision, SQL Server performance in 4.01 is now the same as the Navision server up to 120 users, and SQL Server is faster above 120 users.
    If your new hardware is much faster than your old, you should see an equivalent performance increase.
    Plus SQL Server has tuning tools you can use to optimize the indexes.
  • DenSter Member Posts: 8,307
    Well, there are tools in SQL Server that help you maintain the indexes, minimize fragmentation, and things like that. The actual tuning of the indexes is mostly done in the Navision IDE, though; that is where you will do most of the actual tuning work.
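
    For example, index fragmentation on a Navision table can be checked with DBCC SHOWCONTIG before you decide to rebuild anything (the table name below is only a placeholder for your real Company$Table name):

        -- Report fragmentation for all indexes on one table (SQL 2000 syntax;
        -- SQL 2005 also offers sys.dm_db_index_physical_stats for the same purpose).
        DBCC SHOWCONTIG ('MyCompany$Cust_ Ledger Entry') WITH FAST, ALL_INDEXES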

    By the way, are you aware that you will need the Enterprise editions of both Win2K3 and SQL Server if you want to use more than 2 GB of RAM?
  • Pidi Member Posts: 35
    Working on the migration from 2.6 to 4.01, we learned that this means a lot of tuning work, not only in Navision with the MBS tools built around the Client Monitor, but also in SQL Server itself. And by now, after 8 weeks of programming, everything is still much slower than under the old C/SIDE, although the new server is at least three times as fast.

    One of the things we have learned by now is that importing a .FBK backup as part of the migration process leaves the clustered indexes of all tables in such a bad condition that SQL becomes very slow, doing scans instead of seeks. After rebuilding the clustered primary key of the SIFT table for Customer Ledger Entry, for example, everything became much, much faster. Now I want to find out whether a normal optimization by a maintenance plan job will have the same effect. I guess it should, but you never know ...
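
    For reference, what I did by hand corresponds, as far as I understand it, roughly to the rebuild that a maintenance plan's optimization job performs; in SQL 2000 terms, something like this (the table name is only a placeholder):

        -- Rebuild all indexes on the table; '' means all indexes, 90 is the fill factor.
        DBCC DBREINDEX ('MyCompany$Cust_ Ledger Entry', '', 90)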

    Especially the stories I have heard about locks and deadlocks make me very nervous. If you run a database on SQL compared to a 3.7 C/SIDE database, while the size of the DB (30 GB) and the number of users (45) stay the same, the server is much faster, and assuming the tuning job has been done well ... would you expect more or less trouble with the typical locking problems?

    Not an easy question - I know.
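
    In the meantime, to at least see what is happening when it does happen, my understanding is that trace flag 1204 writes the deadlock participants to the SQL Server error log, and current blocking can be checked ad hoc, roughly like this:

        -- Enable deadlock reporting to the error log, server-wide (-1).
        DBCC TRACEON (1204, -1)
        -- Show current sessions, including the BlkBy column for blocking.
        EXEC sp_who2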

    Pidi
    Michael Peters
    Bos Fod GmbH Düsseldorf
    +49 2132 139-0
  • DenSter Member Posts: 8,307
    Pidi, please get in touch with your solution center right away and request the clustered index hotfix for 4.0 immediately.

    When Service Pack 1 came out, it shipped with all tables (yes, that's right, ALL tables) without a clustered index. This was the result of an incomplete implementation of a new feature in the SQL Server option, where we now have a choice of which index to set as the clustered one. The hotfix should create the clustered indexes for the entire database in one script. It also comes with new executables.
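
    If you want to check quickly whether a database is affected, the SQL 2000 system tables show which user tables are heaps, i.e. have no clustered index at all (on SQL 2005, sys.indexes WHERE type_desc = 'HEAP' gives the same answer):

        -- indid = 0 in sysindexes means the table has no clustered index (it is a heap).
        SELECT o.name
        FROM sysobjects o
        JOIN sysindexes i ON i.id = o.id
        WHERE o.xtype = 'U'
          AND i.indid = 0
        ORDER BY o.name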
  • ara3n Member Posts: 9,257
    SP2 is supposed to fix this, so you can do an executable upgrade.
    Ahmed Rashed Amini
    Independent Consultant/Developer


    blog: https://dynamicsuser.net/nav/b/ara3n
  • DenSter Member Posts: 8,307
    Just doing an executable upgrade will not fix the clustered indexes of an existing database. You will either have to recreate the database with the SP2 executables (which would be the preferred way to do it anyway), or run the script that creates the clustered indexes.