
Diagnosing Database Performance Problems
with SQL Profiler and SQL Trace Analyzer


Many large-scale and complex enterprise applications are now developed and deployed using Microsoft SQL Server. However, these applications often suffer from poor performance and scalability because the focus of the development process is on functionality, while performance and scalability are dealt with as an afterthought.

Everyone understands the importance of good performance to the success of any application. We make every attempt to design and architect our applications so that they are robust and scalable. We may even do some testing during development to ensure the application will perform adequately. But, more than likely, performance management doesn't become a critical issue until a customer experiences a problem with the application.

When performance seems to lag, organizations tend to look at costly hardware enhancements such as adding more processors or more RAM as a quick-fix remedy. This response is often driven by their prior lack of success with performance troubleshooting. Troubleshooting the performance of SQL Server applications is often a daunting task because of the lack of tools to pinpoint the exact source of problems.

Collecting Server Activity Data Using SQL Profiler

Typically, a DBA's first response to a performance problem in production is to launch SQL Profiler, a powerful tool that is included with Microsoft SQL Server, and collect all database activity into a SQL Trace. Depending on the server workload, the types of events recorded, and the duration of the recording, a SQL Trace can contain thousands, or even millions, of events.
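
For environments where running the Profiler GUI against a busy server is undesirable, the same data can be collected with a server-side trace scripted through the documented sp_trace_create, sp_trace_setevent, and sp_trace_setstatus procedures. The sketch below is only an illustration: the file path and size are placeholders, and it captures just the SQL:BatchCompleted event with the columns discussed later in this article.

-- A minimal server-side trace, the scripted equivalent of a Profiler session.
-- Run it, reproduce the problem, then stop and close the trace.
DECLARE @traceid int, @maxfilesize bigint, @on bit;
SET @maxfilesize = 100;   -- MB per trace file (placeholder)
SET @on = 1;

EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\Traces\my_trace', @maxfilesize;

-- SQL:BatchCompleted (event 12) with the columns used in this article.
EXEC sp_trace_setevent @traceid, 12, 1,  @on;   -- TextData
EXEC sp_trace_setevent @traceid, 12, 10, @on;   -- ApplicationName
EXEC sp_trace_setevent @traceid, 12, 11, @on;   -- LoginName
EXEC sp_trace_setevent @traceid, 12, 12, @on;   -- SPID
EXEC sp_trace_setevent @traceid, 12, 13, @on;   -- Duration
EXEC sp_trace_setevent @traceid, 12, 14, @on;   -- StartTime
EXEC sp_trace_setevent @traceid, 12, 15, @on;   -- EndTime
EXEC sp_trace_setevent @traceid, 12, 16, @on;   -- Reads
EXEC sp_trace_setevent @traceid, 12, 17, @on;   -- Writes
EXEC sp_trace_setevent @traceid, 12, 18, @on;   -- CPU

EXEC sp_trace_setstatus @traceid, 1;            -- start the trace
SELECT @traceid AS TraceID;                     -- keep this ID to stop the trace later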

Analyzing a SQL Trace can be overwhelming because of the number of records it can contain. SQL Profiler does not allow the user to search or filter through Trace records. Trying to find a problematic procedure or query among thousands of nearly identical lines can be like looking for a needle in a haystack.

To extract relevant performance information from a Trace file, an expert DBA may save a trace file into a table and then analyze it using custom queries. To get more information, a DBA may be forced to write a small parser for the recorded SQL text to extract procedure and object names.
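
As a rough illustration of that manual approach, the sketch below loads a trace file into a table with the built-in fn_trace_gettable function and then crudely extracts a procedure name from the recorded SQL text. The file path and the dbo.MyTrace table name are placeholders.

-- Load the trace file into a table (path and table name are examples only).
SELECT *
INTO   dbo.MyTrace
FROM   fn_trace_gettable(N'C:\Traces\my_trace.trc', DEFAULT);

-- A crude "parser": take the first word after EXEC as the procedure name.
SELECT LEFT(SUBSTRING(CAST(TextData AS nvarchar(max)), 6, 128),
            CHARINDEX(' ', SUBSTRING(CAST(TextData AS nvarchar(max)), 6, 128) + ' ') - 1) AS ProcName,
       Duration, CPU, Reads, Writes
FROM   dbo.MyTrace
WHERE  CAST(TextData AS nvarchar(max)) LIKE 'EXEC %';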

With the help of SQL Trace Analyzer™, performance information can be extracted from a SQL Trace in just a few clicks. Let's review some of the features available in SQL Trace Analyzer.

SQL Trace Views in SQL Trace Analyzer: Data and Filters, Analysis Report, and Activity Graph

The first step in using SQL Trace Analyzer is to open a SQL Trace (.TRC) file or a trace table created with SQL Profiler. Once SQL Trace Analyzer has analyzed the trace, the user can get a good overview of activity over a period of time by clicking the Activity Graph tab.
The Activity Graph presents a graphical overview of the entire trace, grouping all database activity by SPID or by stored procedure name.

The Data and Filters tab, accessible through the same tab bar as the Activity Graph, organizes all the trace data into a table for easier control and presentation. Which columns appear depends on which properties SQL Profiler logged during the trace.

Based on the suspected problems on the server, you can choose any measurement column. If queries take too long, check Duration. If CPU usage is too high, check CPU. For disk activity, refer to the Reads or Writes columns.

For example, to find which individual T-SQL statement generated the most Reads on the server, click on the Reads column to view the trace data in descending order, with the 'worst' statements in terms of the number of Reads displayed on top.
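
If the trace has been saved to a table, the equivalent of this sort is a simple query; the dbo.MyTrace table name carries over from the earlier sketch.

-- The 'worst' individual statements by logical reads, highest first.
SELECT TOP (20)
       CAST(TextData AS nvarchar(max)) AS StatementText,
       Reads, CPU, Duration
FROM   dbo.MyTrace
ORDER BY Reads DESC;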

Some of the columns in the Data and Filters view have a dropdown control. These dropdown controls are populated automatically, much like the AutoFilter dropdowns in Microsoft Excel.

For example, by clicking the Application Name dropdown control while the Reads column is selected, one is presented with an analysis of Reads grouped by Application Name:

You can group data by any column that has been recorded by SQL Profiler, such as Application Name or Login Name. The pie chart on the right side displays percentages, while the check boxes on the left allow the user to apply multiple filters to the data. By using data filters you can isolate the problematic T-SQL code or stored procedure, applying a 'divide and conquer' principle.
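
For comparison, a hand-written equivalent of this grouping against the hypothetical dbo.MyTrace table might look like the following; it totals Reads per application and computes each application's share of the overall Reads.

-- Total reads per application and each application's share of all reads.
SELECT ApplicationName,
       SUM(Reads) AS TotalReads,
       100.0 * SUM(Reads) / NULLIF(SUM(SUM(Reads)) OVER (), 0) AS PercentOfReads
FROM   dbo.MyTrace
WHERE  Reads IS NOT NULL
GROUP BY ApplicationName
ORDER BY TotalReads DESC;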

You may ask: how can one filter data by columns that have continuous values (for example, Time)?

In the case of Start Time and End Time, clicking on the dropdown for either column brings up the following control:

Filtering a time range involves moving two sliders: one that controls the start and another that controls the end. A bar graph plots the value of a particular parameter, such as Reads, against time.

This filter helps to visually isolate a workload peak.
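
Done by hand against the hypothetical dbo.MyTrace table, the same idea takes two queries: one to bucket Reads per minute and spot the peak, and one to keep only the events inside the suspect window (the times shown are placeholders).

-- Reads aggregated per minute, to locate the workload peak.
SELECT DATEADD(MINUTE, DATEDIFF(MINUTE, 0, StartTime), 0) AS MinuteBucket,
       SUM(Reads) AS ReadsPerMinute
FROM   dbo.MyTrace
GROUP BY DATEADD(MINUTE, DATEDIFF(MINUTE, 0, StartTime), 0)
ORDER BY MinuteBucket;

-- Keep only events inside the suspect window.
SELECT *
FROM   dbo.MyTrace
WHERE  StartTime >= '20080601 14:00' AND EndTime <= '20080601 14:15';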

For the TextData column, SQL Trace Analyzer parses all the data and groups it by query type and procedure name. All information from a trace (or from a previously selected problematic part of a trace) can be parsed, analyzed, and grouped in this way.

The Analysis Report tab is a powerful tool for analyzing the aggregate impact of a particular SQL statement or stored procedure over time. For example, one can view the stored procedure that consumed the most CPU over time. Color coding is applied to each column so that the user can see at a glance which procedures were the worst by multiple parameters, such as CPU and Reads, over a period of time.

In addition, the Analysis Report allows the user to view average values for stored procedures and particular T-SQL statements.
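
A hand-rolled version of this report, assuming the trace captured SP:Completed events so that the ObjectName column is populated, could look like the following query against the hypothetical dbo.MyTrace table.

-- Total and average impact per stored procedure.
SELECT ObjectName,
       COUNT(*)      AS Executions,
       SUM(CPU)      AS TotalCPU,
       AVG(CPU)      AS AvgCPU,
       SUM(Reads)    AS TotalReads,
       AVG(Duration) AS AvgDuration
FROM   dbo.MyTrace
WHERE  ObjectName IS NOT NULL
GROUP BY ObjectName
ORDER BY TotalCPU DESC;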

The Distribution Chart in the bottom pane lets the user graphically view values over a period of time for each T-SQL statement or stored procedure.

For example, in this particular case, stored procedure TempGetStateItemExclusive2 had some long execution times:

The Analysis Report section also provides a way to view the transactions performed within the trace. Click the Transactions node in the navigation tree (on the left side). Below is a list of the transactions created during the analysis:

To find the longest transaction, simply sort the table by Duration.

In the example screenshot above, some of these transactions have been rolled back. Diagnosing the causes of transaction rollbacks is often challenging because the last statement before the ROLLBACK can be thousands of lines away from the actual ROLLBACK statement in the SQL Trace. Trace Analyzer allows the user to group data by transaction. Right-clicking on a transaction and selecting the "filter transaction statements" option generates the Data and Filters view, except that now it contains only the statements performed within that transaction.
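
The manual equivalent, assuming the TransactionID column was included in the trace, is a simple filter against the hypothetical dbo.MyTrace table (the ID value below is a placeholder taken from the transactions list).

-- Only the statements that ran inside one suspect transaction.
SELECT StartTime,
       CAST(TextData AS nvarchar(max)) AS StatementText,
       Duration, CPU, Reads, Writes
FROM   dbo.MyTrace
WHERE  TransactionID = 123456789
ORDER BY StartTime;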

Disclaimer:

In many articles you will find the execution times of different queries. Please do not interpret these results scientifically or as official benchmarks. You will not even find the configuration of the computer used for these experiments because it is rather unimportant.

What is important is that query A runs two times slower than query B. On your computer, it could be different: 1.5 or even 3 times slower. Read the full disclaimer.