If you cannot connect using SQLCmd but can ping the server, get a Terminal Server session on the SQL Server machine itself, check the firewall settings, and confirm that SQL Server is running. SQL Server listens on TCP port 1433 by default, so make sure that port is still open. If all this fails, check which port SQL Server is actually listening on, then go back to the firewall and ensure that port is open.
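As a sketch of that last step, on SQL Server 2000 you can read the configured TCP port out of the registry from within the server. Note that xp_instance_regread is undocumented and unsupported, so treat this as a convenience rather than a guaranteed API:

```sql
-- Read the TCP port this SQL Server 2000 instance is configured to
-- listen on (xp_instance_regread is undocumented; use at your own risk)
DECLARE @port nvarchar(10)
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer\SuperSocketNetLib\Tcp',
    N'TcpPort',
    @port OUTPUT
SELECT @port AS ConfiguredTcpPort
```

If the value returned is not 1433, that is the port to open on the firewall.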
Merge Replication and HA
Q Can I use merge replication in a high transaction or High Availability (HA) scenario when the transaction rate may be 100 transactions per second (tps)?
A Yes you can, but keep in mind that merge replication uses trigger-based change tracking, so there is overhead associated with it. There are definitely shops running more than 100 tps in an HA scenario, so it can be done. If you have only two servers and no conflicts, consider transactional replication, which can be configured bidirectionally and delivers higher throughput in this sort of scenario.
There are some caveats, though. With transactional replication, a bulk update that affects 1,000 rows in a single UPDATE statement is, by default, replicated as 1,000 separate row operations on the subscriber database, which can hurt performance. Also, setting up reverse replication after you fail over takes time and might be an issue. With merge replication on an OLTP system, transactional consistency is not maintained, so the database can end up in an inconsistent state during a failover to the replicated copy.
A benefit of transactional replication is that the 1,000-row update in that example can instead be replicated as a single stored procedure execution. Also remember that merge replication moves changes row by row, whereas transactional replication preserves transaction boundaries, so you won't accidentally apply half of a transaction. Both are asynchronous, however, so inconsistency is possible due to latency between the nodes. In SQL Server 2005, merge replication can define a logical relationship between tables, so either all changes to a group of tables are replicated or none are.
SQL Server Memory
Q I have SQL Server 2000 set up to dynamically manage memory. The minimum memory value is set to 0, and the maximum memory value is set to 7+ GB. The machine has 8GB of physical memory. If Address Windowing Extensions (AWE) is not enabled, will SQL Server be able to allocate more than 2GB?
A No, if AWE is not enabled, SQL Server can see only up to 2GB. If you enable the /3GB switch in boot.ini, SQL Server can then see up to 3GB. SQL Server will only take advantage of the full 8GB if you use /PAE and /3GB in boot.ini and enable AWE. Make sure you set max server memory to about 6.5GB so the operating system has room to work.
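On the SQL Server side, the two settings can be changed with sp_configure. This is a minimal sketch (6656MB is roughly 6.5GB; the /PAE boot.ini switch must also be in place, and AWE only takes effect after a restart of the SQL Server service):

```sql
-- Enable AWE and cap SQL Server at about 6.5GB (value is in MB)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'awe enabled', 1        -- requires a service restart
EXEC sp_configure 'max server memory', 6656
RECONFIGURE
```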
The article "How to configure memory for more than 2GB in SQL Server" is a good resource for understanding SQL Server memory usage.
WHERE Clauses and Performance
Q I have a query in a stored procedure that performs well when I supply a constant in the WHERE clause, but poorly when I use a variable with the same value. I have an index on the column in the WHERE clause. What is wrong?
A You are probably getting a table or clustered index scan rather than a seek on your index because the value of the variable is not known when the plan for the procedure is compiled. Since a variable declared within a procedure has no known value at compile time, the optimizer can't use the index's statistics histogram to estimate how many rows the predicate will return; instead it falls back on hardcoded estimates based on the comparison operator: roughly 10 percent of the table for "=", and 30 percent for ">" or "<". The one refinement is that for an equality test against a column with statistics, the optimizer uses the column's average density rather than the 10 percent magic number, but it still cannot use the histogram.
The best solution here is to pass the filter value as a parameter to the stored procedure and use the stored procedure parameter to filter your query. This way, the value of the parameter can be known at compile time and the plan can be constructed accordingly.
If the value you pass into the proc when it is first compiled is atypical, you may get a plan in the procedure cache that does not perform optimally for the majority of values passed into the proc. Investigate using the WITH RECOMPILE option (covered in the Books Online) to remedy this.
Another thing to watch out for here is data conversion: make sure the procedure parameter and the column it's compared against have the same data type. If the data types differ, you'll likely see a CONVERT operator in the execution plan, and the same estimation caveats that apply to local variables apply here.
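The contrast can be sketched with a hypothetical Orders table keyed on an int CustomerID column (all names here are illustrative, not from the question):

```sql
-- Slow pattern: @id is a local variable, so its value is unknown at
-- compile time and the optimizer falls back on magic-number estimates
CREATE PROCEDURE GetOrdersSlow @CustomerID int AS
BEGIN
    DECLARE @id int
    SET @id = @CustomerID
    SELECT * FROM Orders WHERE CustomerID = @id
END
GO

-- Better: filter directly on the parameter; its value is known when the
-- plan is compiled, so the index histogram can be used. The parameter is
-- declared with the same type as the column, avoiding a CONVERT.
-- (Add WITH RECOMPILE after AS if atypical first calls skew the plan.)
CREATE PROCEDURE GetOrdersFast @CustomerID int AS
BEGIN
    SELECT * FROM Orders WHERE CustomerID = @CustomerID
END
GO
```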
See Chapter 15: "The Query Processor," section "Compilation and Optimization" of Inside SQL Server 2000 by Kalen Delaney (Microsoft Press, 2000) for more information.
Storing Photos in SQL
Q Is it advisable to store photos in SQL? I think it might be better to store only pointers to the file names in a SQL Server database and to move the storage of BLOBs to a file system.
A The answer depends on how many photos you have, how frequently they change, how large they are, what the application using them is, and so forth. On terraserver.microsoft.com, there are 330 million BLOBs ranging in size from 5KB to 40KB. Approximately 1.6 million a day are added or deleted. The about page on the site explains how it's done. The images are physically all the same size (200 x 200 pixel GIF or JPEG) and average 10KB compressed. This number of images would be very difficult to manage in the file system. TerraServer often gets 300 to 1,000 simultaneous users and has an average of 25,000 to 55,000 site visitors each day.
A good rule of thumb: if you have fewer than 1 million images, or if the images are large (greater than 1MB each), put them in the file system. For more than 1 million images smaller than 1MB, put them in SQL Server. For everything in between, either approach will work; you just need to pick the issues you'd rather wrestle with.
SQL Server gives you more ways to index an image and to store other data related to the image nearby. If you have a farm of Web servers, it is easier to manage all the images in one central store than to keep a bunch of Web servers up to date. SQL Server is great at managing millions of small BLOBs and is optimized for smaller files. Plus, there is support for validating the consistency of the database, indexes, backup, restore, and so on.
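A minimal sketch of such a central store on SQL Server 2000 might look like the following (all names are hypothetical; image is the SQL Server 2000 BLOB column type, and the metadata columns illustrate what you can index alongside the bytes):

```sql
-- Hypothetical schema: small images stored as BLOBs with
-- searchable metadata kept right next to them
CREATE TABLE Photos (
    PhotoID     int IDENTITY PRIMARY KEY,
    FileName    nvarchar(260) NOT NULL,
    ContentType varchar(50)   NOT NULL,  -- e.g. 'image/jpeg'
    ByteSize    int           NOT NULL,
    Uploaded    datetime      NOT NULL DEFAULT GETDATE(),
    Data        image         NOT NULL   -- the BLOB itself
)

-- Metadata is indexable in ways a plain file system can't match
CREATE INDEX IX_Photos_FileName ON Photos (FileName)
```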
Copying and moving images around the file system is trivial. The file system will deliver raw bytes off the disk the fastest, plus getting them in and out of SQL Server requires some custom code. Without a doubt, the file system will be able to move bytes off the disk and down the wire (to a Web server, for example) faster and more efficiently than SQL Server 2000. SQL Server is pretty good at getting the bytes off disk and has good memory caching schemes, but the protocol used to transmit data over the wire (tabular data stream, or TDS) wasn't designed for BLOBs.
Another option is to store the images in both the file system and the database. The database is the "reserve copy" that is backed up with the metadata in case the file somehow gets lost, and the file system is the place where the application looks for the sake of efficiency. If you lose the file, the app can query the database for the BLOB and send it down the wire.
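With that dual-store pattern, the fallback path is a simple lookup. Assuming a hypothetical Photos table that keeps the file name next to an image column, the app's recovery query might look like:

```sql
-- Hypothetical fallback: the file was not found on disk, so fetch
-- the reserve copy of the BLOB from the database instead
DECLARE @MissingFileName nvarchar(260)
SET @MissingFileName = N'beach.jpg'   -- illustrative value

SELECT Data
FROM Photos
WHERE FileName = @MissingFileName
```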
There is some discussion of this in the SQL Server 7.0 Resource Kit, Part 3, "Database Administration," Chapter 11.
Thanks to the following Microsoft professionals for their technical expertise: Tom Barclay, Ken Henderson, Koce Ivanov, Lucien Kleijkers, and David Vest.
© 2008 Microsoft Corporation and CMP Media, LLC. All rights reserved; reproduction in part or in whole without permission is prohibited.