We are trying to figure out the best setting for our Read/Write Cache. It is currently set to 50/50. Anyone have thoughts as to the best setting when it comes to NAV on SQL?
Some background info:
We are running on SQL 2008 with NAV 2009 executables. Our database is 120 GB, we have 10 NAS instances running 24 hours a day, and we will be using web services to connect to our website starting in September 2009. We have 63 users and a highly transactional database during our busy season (roughly 10,000 sales orders a day, with about 5 lines each).
Comments
You don't say how much RAM is in your server. I have found that maximizing RAM makes a huge difference. Windows Server 2008 Standard supports 32 GB of RAM; Windows Server 2008 Enterprise allows you to go much higher, even with SQL Server Standard.
Are you following all the other recommendations - separate RAID 1 or RAID 10 arrays for the log, data, and tempdb?
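If it helps, you can check both of those straight from Management Studio. This is just a sketch - the first query shows the memory limits SQL Server is actually configured with, and the second shows which drive each data and log file physically sits on, so you can confirm the separation:

-- Memory limits SQL Server is currently configured to use (values in MB)
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)');

-- Physical location of every data and log file on the instance
SELECT DB_NAME(database_id) AS [database], type_desc, physical_name
FROM sys.master_files
ORDER BY database_id, type_desc;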
One problem with multiple NAS instances is if they are all updating the same ledger tables - do you have one job queue that handles all the high-volume posting?
http://mibuso.com/blogs/davidmachanick/
The following are all RAID 10:
Operating System (2 disks)
TempDB + System Databases (4 disks)
Page File (2 disks)
Database (24 disks)
Logs (8 disks)
The NAS instances do not lock each other.
My initial question is what to set the RAID read/write cache level to - any thoughts?
jwilder@stonewallkitchen.com
My thoughts are that with enough RAM, read cache is not as important as write cache, provided you have battery backup on the controller.
So I would go with a higher percentage of write cache.
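One way to sanity-check whether SQL Server really does have enough RAM for its data cache is the Page Life Expectancy counter. Just a sketch - interpret the number against your own workload, but persistently low values (a few hundred seconds or less) usually mean the buffer pool is under memory pressure:

-- How long data pages are staying in the buffer pool, in seconds
SELECT counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name = 'Page life expectancy';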
Is this configurable while the server is running, or do you have to take it down to change it?
http://mibuso.com/blogs/davidmachanick/
I typically reboot the machine after the cache level is changed (but it is not required).
jwilder@stonewallkitchen.com
For example, I am at a client right now and we are trying to work out why a FlowField is slow; here is an extract from the Database Information (Tables) screen.
This database has almost as much data in one table as you have in your entire NAV database, so you can imagine that their strategy for performance tuning is completely different to what you will need to do.
I think it's great that we look at all the "best practices", but in the real world every large database needs to be looked at individually and tuned for its particular conditions.
Keep in mind that this discussion is only about hardware performance.
Transaction Log:
The most critical performance issue with a transaction log is write performance. A transaction must be written to disk (the transaction log only) before the user's session is released; the actual data is written to disk as a background process. Here your best option would be 100% write cache - that would provide the best improvement. Read cache on the transaction log would not provide much improvement. Also be sure the cache is set for write-back and not write-through.
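If you want to see whether log writes are actually a bottleneck before and after changing the cache split, the WRITELOG wait type is the usual indicator. Just a sketch - the numbers are cumulative since the last restart, so compare snapshots taken over time:

-- Time sessions have spent waiting on transaction log flushes
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';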
Data Files:
Here the opposite applies. Users are interacting with these arrays during read processes, so set the cache to 100% read.
If the controllers are running both logs and data, then you need to compromise somewhat. But if the server has plenty of RAM for data caching and a fast disk system, I would lean more toward write cache - say 75/25.
Of course, this all assumes that you have controllers with battery-backed, swappable cache memory. If a controller using write-back cache supporting a transaction log fails, the database will be corrupted and will require recovery from backup.
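A useful way to see where your reads and writes are actually stalling (and therefore which side of the cache split matters most on each array) is the virtual file stats DMV. This is only a sketch - the counters are cumulative since the last restart:

-- Average read and write latency per database file
SELECT DB_NAME(vfs.database_id) AS [database],
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY [database];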
If the SQL Server cache is working properly (enough memory), the cache of the SAN has less impact on reads.
SQL Server writes to the database files in peaks (at checkpoints). This should be as fast as possible, and for that, write cache is really important.
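You can get a feel for how bursty those checkpoint writes are from the Buffer Manager counters. A sketch only - the values in this DMV are raw cumulative counters, so sample them twice and look at the difference (or just watch the same counters in Performance Monitor):

-- Pages flushed by checkpoints and by the lazy writer (cumulative since restart)
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Checkpoint pages/sec', 'Lazy writes/sec');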
That's interesting to know (and explains your other post). I thought that SQL had a versioning principle on the log, something like Navision has with commitcache on the native server, and thus that it would not corrupt the database.
We're really talking about different things. With commitcache you are talking about a database level function. In other words, the database is aware of the cache and its status. Here we are talking about a hardware level cache. SQL is not aware of this cache. In fact, Windows isn't even aware of it. To them it's just a big fast disk. They have no idea whether the data is in RAM (cache) or on the physical disk.
I had always thought that the log was written to disk sequentially (in the correct order), so that a failure of the log simply means losing all data currently in the log, but that the database itself would not corrupt. In that sense I always drew a pseudo comparison between the SQL log file and the Navision commit cache.
Under normal SQL operations, the data files are updated from memory and not from the transaction log. The transaction log is only used in the event of a system crash and restart. As transactions are written to the data file, they are marked in the log for removal. So at any time, the log contains any transactions that are in memory and not yet flushed to the data files. This is how SQL is able to roll the database forward after a server crash.
It has a lot to do with the existence of dirty disk cache at the time of failure. Dirty cache is changes that are sitting in the disk cache but have not yet been written to the physical disk. In the event of a failure, these would be lost.
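On a related note, if you want a quick overview of how much of each transaction log is currently in use (the active portion that has not yet been truncated), DBCC SQLPERF gives that per database. Just a sketch - the 'Log Space Used (%)' column is the part of interest:

-- Log size and percentage currently in use for every database
DBCC SQLPERF(LOGSPACE);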