Monday, April 23, 2007

SQL Server and NUMA Architectures

Recently I was asked to write some information about SQL Server architecture that will be published as an appendix in an upcoming Microsoft Press book (I'll post more info when it's available). One of the topics I chose to write about was how SQL Server 2005 handles Non-Uniform Memory Access (NUMA) and why DBAs should really understand NUMA architectures on "high end" systems.

I was very surprised to learn that very few professional DBAs have actually heard of NUMA, and fewer still understand the ramifications it has for configuring SQL Server 2005. (That data is based on a very unscientific poll of people I know who are DBAs or DB developers.)

What Exactly is NUMA?

If you're into reading fiction and have ever come across anything written by Clive Cussler, then you might think that NUMA stands for the "National Underwater Marine Agency", but in this case it actually stands for "Non-Uniform Memory Access". NUMA lets hardware manufacturers make up for the fact that memory speed has not kept pace with processor and bus speed, which means that systems with many processor cores and large memory sets can actually see application performance degrade as the cores contend for a single memory bus. In a nutshell, NUMA pairs specific processors with specific regions of memory; this pairing is known as a NUMA node. For example, if you have a machine with 8GB of RAM and 4 processor cores, the NUMA configuration might end up as two 4GB NUMA memory nodes, each hosting 2 processor cores. Each processor can still access the full 8GB of RAM, but the 4GB in its own node is considered "local" memory and is therefore faster to reach than the "foreign" memory associated with the other NUMA node.

SQL Server and NUMA Configuration

Generally speaking, most SQL Server 2005 DBAs will probably not have to worry about NUMA configuration; however, it is important to note that the higher the processing load on the server, the more relevant NUMA configuration becomes. SQL Server 2005 provides a dynamic management view that details how NUMA is used on the server. The following query will detail the NUMA configuration on SQL Server 2005:

SELECT DISTINCT memory_node_id FROM sys.dm_os_memory_clerks

Running this query returns one row per NUMA memory node configured. If the query returns only one row, then NUMA is not in use.
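As a quick follow-on sketch, you can also group the memory clerks by node to see roughly how much memory each NUMA node is hosting. (This uses the SQL Server 2005 column names single_pages_kb and multi_pages_kb; later versions renamed these, so verify against your build.)

SELECT memory_node_id,
       SUM(single_pages_kb + multi_pages_kb) AS allocated_kb
FROM sys.dm_os_memory_clerks
GROUP BY memory_node_id
ORDER BY memory_node_id

On a non-NUMA box this comes back as a single node; on NUMA hardware you should see one row per memory node.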

There are several BOL topics and MSDN articles available on how to configure NUMA and SQL Server, but generally speaking they are abstruse and hard to follow.
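For the curious, the heart of the soft-NUMA configuration that BOL describes is just a set of registry keys. A minimal sketch for splitting a 4-core box into two soft-NUMA nodes might look like the following (the CPUMask values here are illustrative; double-check the BOL soft-NUMA topic before touching the registry):

HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\90\NodeConfiguration\Node0
    CPUMask (REG_DWORD) = 0x03    (CPUs 0 and 1)
HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\90\NodeConfiguration\Node1
    CPUMask (REG_DWORD) = 0x0C    (CPUs 2 and 3)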

In a follow-on post I'll walk through a NUMA configuration scenario to detail exactly how NUMA can be used to your advantage on a high-volume system.

Friday, April 20, 2007

I Love Team Foundation Server!

I am not normally one to gush about a product (ok, maybe I am), but I just have to take a moment and heap some praise on the Visual Studio Team.

My team has been developing a product over the last 6 months that combines a variety of business intelligence and database development. We're getting close to product delivery, and there have recently been some very nitpicky changes. Here at Configuresoft our corporate standard for source control is Perforce, but my team has been using TFS because of some of the issues we've had with Perforce and SSIS packages.

Today, I'm preparing for a beta deployment with one of our large and very important customers. I need to make sure that all of the "right stuff" is available for the deployment next week, while at the same time continuing to bird-dog the team to ensure we keep track of all the last-minute changes. In the last 15 minutes, I've done the following:


  1. Labeled the "old" revision of the code for SSIS/Cubes/DBSchema/SSRS
  2. Consolidated and refreshed the changed code for all of the above
  3. Deployed "test" code for all of the above
  4. Provided change reports for all of the above

All without ever leaving Visual Studio!

To quote the Guinness commercials: "Brilliant!"

Thursday, April 19, 2007

SQL Server Performance Monitoring Reports

In case you missed it, Microsoft has released an add-in for SQL Server 2005 that contains several performance monitoring reports. This pack requires SP2.

Download here: http://www.microsoft.com/downloads/details.aspx?familyid=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en

Wednesday, April 18, 2007

Best Practices for Data Warehouse Indexing

(Yes, I know, today is catch up on the blog day!)

The MS Business Intelligence team released some best practices around data warehouse schema design and indexing. They make some good points.

http://blogs.msdn.com/bi_systems/pages/164502.aspx

Best Practices for OLAP Design

Check out this article from the SQL Server best practices website:

http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/olapdbpssas2005.mspx

Data Dude (Visual Studio Team System for Database) Service Release!

Check out Gert's blog for information on the upcoming Data Dude service release. Should be some good stuff coming...

http://blogs.msdn.com/gertd/archive/2007/03/22/announcing-service-release-1.aspx

Visual Studio Team System - Guidance

The Visual Studio Team System guys have posted a series of prescriptive guidance articles on best practices and more. Good stuff. Check out the blog post here: http://blogs.msdn.com/jmeier/archive/2007/03/26/vsts-guidance-project-update.aspx

Tuesday, April 17, 2007

Yet another new blog host

I had been hosting the blog using Microsoft Office SharePoint Services and just wasn't getting the feature set that I wanted. I had spent some time off and on trying to figure out exactly what I wanted to do, but had really put off the decision. Then MS Word stopped working as a blog post editor. I have no idea why; it just stopped working. Considering that I like using Word as an editor, that really irked me, so I decided to make the switch to Blogger. This seems to work pretty well, *and* MS Word once again works as my post editor, so I am happy.

I'll be moving some of the old posts over as I get the time.

Thursday, April 5, 2007

CX Packet Waits and SQL Server Performance

There have been many times throughout my career when people have told me that an in-depth understanding of the internals and architecture of SQL Server just isn't necessary anymore, that the software has become so smart that there is no real need to do anything other than let the system choose the best way to handle things…



The Demise of Architectural Knowledge

Every time someone says that to me, I cite several examples of how an understanding of the guts of SQL Server has helped me in one situation or another. Recently, the topic came up again (I was asked to write an internals chapter in an upcoming book on SQL Server, and spent a lot of time on architecture), and the discussion somehow steered into NUMA architecture and SQL Server 2005. At the time I wrote the chapter, I lamented that I didn't have any direct, relevant real-world examples related to SQL Server 2005. (Most of our customers are still running SQL 2000.) Sometimes you just shouldn't wish for things, because they have a way of coming true!

Customers to the Rescue

Well, I have been onsite this week with a very large customer of ours, doing some beta-testing of a new product that Configuresoft is launching soon, and that I architected, designed, and led the creation of. The customer was nice enough to provide us with a 16-core machine (8 dual-core Xeon processors and 16GB of RAM!) as the data source, and a 4-way (4 Xeon processors) box as the platform for Analysis Services and SSIS. Part of the product we are testing is an SSIS package that extracts data (24 million rows of source data in this particular case), performs some lookups, and loads the results into a data warehouse. One of the queries is derived through an expression that looks like this:



" SELECT
cl.property_name AS
PropertyName
,cl.path_name
AS PathName
,cl.agent_change_type_id AS ChangeTypeID
,cl.datetime_collected_to AS DateCollected
,me.machine_id
AS MachineID
,cl.data_class_id AS
DataClassID
,cl.changed_by_user_login AS ChangedBy
,me.uptime
AS MachineUpTime
,COUNT(*) AS ChangeCount
,SUM ( af.Alert) AS AlertCount
FROM ecm_dat_master_change_history_log AS cl
INNER JOIN ecm_dat_machine_environment_flat AS me
ON cl.machine_id = me.machine_id
LEFT OUTER JOIN( SELECT 0 AS Alert
,ecm_dat_master_change_history_log.row_id
FROM ecm_dat_master_change_history_log
LEFT OUTER JOIN ecm_dat_alerts_found
ON ecm_dat_master_change_history_log.row_id = ecm_dat_alerts_found.row_id
WHERE ecm_dat_alerts_found.row_id IS NULL
UNION
SELECT CASE
WHEN row_id IS NULL THEN 0 ELSE 1 END AS Alert
,row_id
FROM ecm_dat_alerts_found ) af
ON af.row_id = cl.row_id
WHERE datetime_collected_to = '" + (DT_WSTR, 28) @[User::CurrentDateEnumerator] + "'
GROUP BY cl.property_name
,cl.path_name
,cl.agent_change_type_id
,cl.datetime_collected_to
,me.machine_id
,cl.data_class_id
,cl.changed_by_user_login
,me.uptime "

When executed, the query is heavily parallelized by the source system (the 16-core machine) and results in a high number of CXPACKET waits. (See the picture below for an example.)

[Image: CXPACKET wait example]
In looking at system performance, it was pretty obvious that the CXPACKET waits were causing performance degradation. This was further borne out by examining the overall load performance on the destination system through Task Manager.
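As a rough sketch of how to confirm that picture from the DMVs, and of one lever for reining the waits in (the MAXDOP value of 4 below is purely illustrative; pick one appropriate to your workload):

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET'

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE

Alternatively, an OPTION (MAXDOP n) hint on the offending query limits parallelism for that statement alone without changing the server-wide setting.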