Code performance improvement is a layered approach. I think of it as multiple passes, each with its own focus:
- Statistics: slightly out-of-date stats on huge tables can greatly affect performance.
- Index optimization: review missing indexes, remove unused indexes.
- Dynamic Management Views: find what is slow across the whole server, and start digging in. Not just procedures; ad-hoc queries, triggers, and scalar functions all have DMVs.
- Execution plans: lots of great information about what the engine is doing, and with experience you learn what specific operators imply about performance enhancement opportunities.
- The code itself: I have a long list of things I look for in the code to avoid performance bottlenecks.
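As a minimal sketch of the statistics pass, `sys.dm_db_stats_properties` shows when each statistic was last updated and how many modifications have accumulated since. The table name here is hypothetical:

```sql
-- How stale are the statistics on a large table? (dbo.BigTable is a placeholder.)
SELECT  s.name                 AS StatName,
        sp.last_updated,
        sp.rows,
        sp.modification_counter  -- changes since the last stats update
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   s.object_id = OBJECT_ID('dbo.BigTable')
ORDER BY sp.modification_counter DESC;

-- Refresh when the modification counter is high relative to row count:
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;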
Some of the items we check for:
• JOIN predicates are SARGable, meaning data types exactly match on both sides of the equals sign, no functions on columns, no implicit conversions
• WHERE predicates are SARGable, under the same rules: matching data types, no functions on columns, no implicit conversions
• @Parameter data types and sizes exactly match those of the columns they are compared against
• Replace @TableVariables with #temp tables to leverage statistics, unless the @TableVariable holds fewer than 100 rows in every situation
• Multiple or cascading CTEs should be replaced with #temp tables: divide and conquer for better performance
• Anything that uses a view (or nested views) should be rewritten against the underlying tables to eliminate duplicate or unneeded table accesses; trade convenience for performance
• Convert scalar functions to inline table-valued functions, and change callers to use OUTER APPLY/CROSS APPLY to retrieve the values
• Because FOR XML concatenation is performed early, and WHERE clauses are applied after the FOR XML runs, any concatenation must read from a #temp table rather than the source tables
• Relevant recent comment on every deployed procedure or view
• Every table should be fully qualified as SchemaName.TableName
• Every table should have an alias: TableName AS AliasName
• Every column reference should be alias-qualified: AliasName.ColumnName
• Uses SET NOCOUNT ON in the first lines of the procedure
• Uses SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED in the first lines of the procedure where appropriate; remove WITH (NOLOCK) hints in favor of this isolation level
• Handles temp tables proactively, dropping them in both the TRY and CATCH blocks, due to connection pooling issues with Java drivers
• Try-Catch is being used as a best practice
• Uses ContentManagement.EXCEPTION.spLogException in the catch statement for better error reporting
• ProcedureParameters is populated with relevant data for better error reporting
• No NVARCHAR unless Unicode is actually needed
• Avoid SELECT * (enumerate the columns you need, skip the rest)
• Avoid INSERT INTO #temp with an ORDER BY (adds an unneeded sort operation)
• Avoid EXISTS with TOP, e.g. WHERE EXISTS (SELECT TOP 1 1 …)
• In an UPDATE, target the alias rather than the original table name, to avoid an accidental implicit cross join against the original table
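A sketch of a procedure that follows several of the checklist items above (all object, procedure, and column names here are hypothetical, and the logging call is left as a comment since its exact signature is site-specific). Requires SQL Server 2016 SP1+ for CREATE OR ALTER:

```sql
CREATE OR ALTER PROCEDURE dbo.spGetRecentOrders
    @CustomerID INT   -- exactly matches the data type of Orders.CustomerID
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- instead of WITH (NOLOCK) hints

    -- Drop proactively: connection pooling can leave #temp tables behind
    IF OBJECT_ID('tempdb..#Orders') IS NOT NULL DROP TABLE #Orders;

    BEGIN TRY
        SELECT  o.OrderID, o.OrderDate, o.TotalDue   -- enumerate columns, no SELECT *
        INTO    #Orders                              -- #temp table, not @TableVariable
        FROM    Sales.Orders AS o                    -- schema-qualified, aliased
        WHERE   o.CustomerID = @CustomerID;          -- SARGable: no function on the column

        SELECT  t.OrderID, t.OrderDate, t.TotalDue
        FROM    #Orders AS t;

        DROP TABLE #Orders;
    END TRY
    BEGIN CATCH
        -- Clean up in CATCH as well, then log and rethrow
        IF OBJECT_ID('tempdb..#Orders') IS NOT NULL DROP TABLE #Orders;
        -- EXEC ContentManagement.EXCEPTION.spLogException ... (per the checklist above);
        THROW;
    END CATCH
END;
```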
Without touching the code, I have an incredibly helpful script that finds missing-index opportunities, conveniently scripts out the CREATE INDEX command, and keeps a comment header with the script so I know just how valuable it was when I did the analysis.
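A minimal sketch of the missing-index DMVs that a script like this builds on (the author's version also generates the CREATE INDEX statement and comment header; the benefit formula here is a common heuristic, not their exact calculation):

```sql
SELECT  mid.statement AS TableName,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks,
        migs.avg_user_impact,
        -- Rough estimate of how much this index would have helped
        migs.avg_total_user_cost * migs.user_seeks * (migs.avg_user_impact / 100.0)
            AS EstimatedBenefit
FROM    sys.dm_db_missing_index_details AS mid
JOIN    sys.dm_db_missing_index_groups AS mig
          ON mig.index_handle = mid.index_handle
JOIN    sys.dm_db_missing_index_group_stats AS migs
          ON migs.group_handle = mig.index_group_handle
ORDER BY EstimatedBenefit DESC;
```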
After that, it’s straight to the DMVs. I have a nice collection to review the performance of procedures and ad-hoc queries (with their plans), plus others of lesser impact, like the DMVs for trigger and scalar-function performance. All of them are designed to show me the top N poorest performers and let me see what tuning opportunities there might be.
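A sketch of the top-N pattern for procedures, using `sys.dm_exec_procedure_stats` (sibling DMVs `sys.dm_exec_trigger_stats` and `sys.dm_exec_function_stats` follow the same shape for triggers and scalar functions):

```sql
-- Top 20 cached procedures by total CPU, with their cached plans
SELECT TOP (20)
        OBJECT_NAME(ps.object_id, ps.database_id)      AS ProcName,
        ps.execution_count,
        ps.total_worker_time,                          -- CPU, in microseconds
        ps.total_worker_time / ps.execution_count      AS AvgWorkerTime,
        ps.total_logical_reads,
        qp.query_plan                                  -- click through to the plan
FROM    sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
ORDER BY ps.total_worker_time DESC;
```

Note these DMVs only cover what is currently in the plan cache, so numbers reset when plans are evicted or the instance restarts.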