Hello,
In another topic I asked in passing how to configure a server to run SQL Server on it, but now I'd like to know exactly!
So here's my server-system:
I have Win2003 on 2 parallel SCSI HDDs (RAID 1), and I have 8 more HDDs with ~60 GB each. My SQL DB is about 100 GB in size. How should I combine the 8 disks (using only RAID 1 or RAID 10)?
Note that I want to install SQL Server on those 8 disks, too!
I'm afraid that if I create a separate array for the transaction log, either the DB files or the transaction log will run out of free space.
And another question: how does the BBU (the RAID controller's battery backup unit) work? Is there a battery on the SCSI board, and can I be sure it works? I've heard that enabling the RAID controller's write cache gives a great speed advantage, but I'm afraid of losing the transaction log!
Thanks.
Comments
1. What versions of Win2003 and SQL are you using?
2. How much memory do you have?
3. Is this a dedicated server? (I hope.)
With the info you provided, this is how I would configure it:
Drive 0 (2 disk RAID 1) This would be your C: drive. Install the operating system, paging file, and SQL programs here.
Drive 1 (2 disk RAID 1). This would be your D: drive. This would hold the SQL transaction log.
Drive 2 (6 disk RAID 10). This would be your E: drive. This would hold the SQL datafiles.
Using a write-caching controller can increase server performance. Before you do, be sure that your controller is certified for this. Many lower-cost controllers are not.
The better controllers will have their memory and batteries on a daughter board. Their batteries can hold the memory for a couple of days. If the controller fails, you can get a new one and move the daughter board.
1. Windows Server 2003 and SQL-Server 2000 Standard Service Pack 4
2. 4 GB Memory, 2x XEON 3GHz
3. of course dedicated
How big can the transaction file become ?
Should I split up the database file? I read in another topic that it's better to do so when you have two processors?!
When I split it up, how can I tell SQL Server to stop writing to the first file after 120 GB and write into the second file?
Or better, how can I tell SQL Server to write tables in BOTH files?
Or how does it work in general (with more than one DB file)?
Disable any services that are not required. This will allow the CPUs to focus on running SQL.
How big can the transaction file become ?
I'm not sure what you are asking. Do not rely on autogrow. Watch the data and log files and grow them manually. When you created the database, how large did you make the files (data and log), or did you accept the defaults and let it autogrow?
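As a sketch, growing the files manually in SQL Server 2000 looks like this (the database name and logical file names below are placeholders, not the actual names on this server):

```sql
-- Check current file sizes and free space:
USE MyDatabase;
EXEC sp_spaceused;

-- Grow the data and log files by hand instead of relying on autogrow:
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, SIZE = 110GB);
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Log, SIZE = 30GB);
```

Growing in large, planned steps during a quiet period avoids the fragmentation and mid-transaction stalls that repeated small autogrow events cause.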
When you split a database into multiple files, you designate which objects (tables, indexes, etc.) are in which file or filegroup. The one requirement is that a table and its clustered index must be in the same filegroup. Splitting a database requires a good understanding of how the application works and the specific requirements of your site, in order to balance the activity across the files. This also adds maintenance complexity.
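A hypothetical example of what that looks like in SQL Server 2000 (all names, paths, and sizes here are illustrative only):

```sql
-- Add a second filegroup with a file on another drive:
ALTER DATABASE MyDatabase ADD FILEGROUP FG2;

ALTER DATABASE MyDatabase
ADD FILE (
    NAME = MyDatabase_Data2,
    FILENAME = 'F:\Data\MyDatabase_Data2.ndf',
    SIZE = 20GB
) TO FILEGROUP FG2;

-- Place a table (and its clustered index) on the new filegroup:
CREATE TABLE dbo.SalesArchive (
    SalesID  int      NOT NULL PRIMARY KEY,
    SoldOn   datetime NOT NULL,
    Amount   money    NOT NULL
) ON FG2;
```

Note that if you instead add a second *file* to the same filegroup, SQL Server stripes new allocations across both files proportionally on its own; you do not tell it to "fill one file first, then the other."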
You would be better off adding an external RAID array (RAID 10) with a large number of drives. Remember that many small, fast drives are better than a few large ones.
No, I found a bug fix for Service Pack 4 so that all the RAM can be used
At least I hope it works :roll:
>>How big can the transaction file become ?
By this I mean: if I put the transaction log on a single 60 GB HDD, is there enough space? How big is a transaction log normally?
1 MB ? 10 MB ? 10 GB ? I don't know :-k
This is not a bug. AWE is supported only in the Enterprise and Datacenter editions of SQL Server. With Windows 2003 Standard you can use up to 4 GB, 8 GB with Advanced Server, or 32 GB with Enterprise Server.
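For reference, on an edition that does support it, AWE is turned on through `sp_configure` (sketch only; the memory value is an example, and this does nothing useful on Standard Edition):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'awe enabled', 1;
EXEC sp_configure 'max server memory', 6144;  -- in MB, example value
RECONFIGURE;
-- SQL Server must be restarted for 'awe enabled' to take effect.
```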
>>How big can the transaction file become ?
The size of the transaction log will depend on two things: your transaction size and volume, and how often the log is truncated. How large did the file get when you restored the *.fbk file?
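Truncation is what keeps the log from growing without bound. A sketch of the two usual options in SQL Server 2000 (database name and backup path are placeholders):

```sql
-- In full recovery mode, regular log backups truncate the inactive
-- portion of the log:
BACKUP LOG MyDatabase TO DISK = 'E:\Backup\MyDatabase_log.bak';

-- If point-in-time recovery is not needed, simple recovery mode lets
-- the server truncate the log automatically at each checkpoint:
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
```

With frequent log backups, the log file stays near the size of your largest single transaction rather than the size of the whole database.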
OK, it was about the size of the DB file.
And I just read in a paper that a good estimate for the size of the transaction log is simply the size of the DB file.
So I'm going to use 2 arrays with 120 GB each.
Thanks
How many users, and what is your expected transaction volume?
~20 users at the same time
Transaction volume I don't know.
Unless the transaction volume is very low (doubtful with ~20 users), I would consider your server underpowered (with the info you have provided).
- the size of the DB increases very fast each month
- I'd like to use 3rd-party tools (only available for SQL Server)
- the native DB (I guess) won't be supported under .NET?!
- I can use stored procedures to speed up the slowest processes (currently a couple of hours)
So why would you consider placing a database that grows fast and is already 100 GB on a 120 GB drive? This is also an indication of a high transaction volume.
- I'd like to use 3rd party tools (only with SQL-Server available)
Depends on what tools you're referring to and what you need to do. Unless you mean SQL-specific tools, most products use an ODBC connection. The newer ODBC drivers for Navision 4.0 are supposed to solve a lot of the earlier issues.
- native DB (I guess) won't be supported under .net... ?!
Anyone's guess here.
- I can use stored procedures to speed up the slowest (couple of hours) processes
Possibly, but if these are complex processes, you are talking about some extensive programming.
I will add some extra disks.
>>...most of the products use an ODBC connection...
No. The tools use a direct TCP/IP connection; it's much faster than ODBC.
>>stored procedures [...] ...possibly, but if these are complex processes, you are talking about some extensive programming.
I know, but I can't wait a couple of hours.
If you are writing your own application (VB or other), you could use the SQL .NET Data Provider.
Do you know why the processes are slow in the current database?
How do you anticipate SQL resolving these issues?
What new issues do you expect with the move to SQL?
Of course, but you can use some components that are MSSQL-specific (and they are). I tried it out and it was about twice the connection speed.
I know why these processes are so slow: they're procedures that write a lot of data (~10,000 rows) in one transaction, but every time you write a single record you lose 1/8 of a second to connection overhead. You can write a stored procedure on the SQL Server that will cut that overhead to nearly zero.
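A sketch of what that server-side batching could look like; the table and column names are made up for illustration, not from the actual application:

```sql
-- Move a row-by-row client loop into one stored procedure so the
-- per-row network round trip disappears entirely.
CREATE PROCEDURE dbo.ArchiveOpenEntries
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- One set-based statement instead of ~10,000 client round trips:
    INSERT INTO dbo.LedgerArchive (EntryNo, PostingDate, Amount)
    SELECT EntryNo, PostingDate, Amount
    FROM dbo.Ledger
    WHERE Open = 1;

    COMMIT TRANSACTION;
END
```

The client then makes a single call (`EXEC dbo.ArchiveOpenEntries`), so the 1/8-second connection cost is paid once instead of once per row.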
In the native DB, all stored procedures are executed on the client computer,
so it won't help if I write an SP.
I can't tell C/SIDE to execute a whole procedure on the server.