Querying_Facets_in_SQL

Snippet Name: Querying_Facets_in_SQL

Tags:

Created Date: 2025-06-16 13:12:45
Last Modified Date: 2025-05-31 10:32:11

Dependencies:

DBA Notes:

Sample Output:

Title: Querying Facets in SQL
Author: user
in SQL Server, you can navigate to a section called "facets" for various configuration options. Are those facets queryable via T-SQL?

Author: ChatGPT
In SQL Server, "facets" are a feature of **Policy-Based Management (PBM)**, which provides a way to manage server configurations and enforce compliance. Each facet represents a logical grouping of properties or settings that can be managed or evaluated.

Unfortunately, **facets themselves are not directly queryable via T-SQL** because they are part of the metadata and logic tied to Policy-Based Management, which is implemented through the SQL Server Management Object (SMO) library. However, you can indirectly query the settings or properties that the facets expose by querying the system catalog views, dynamic management views (DMVs), or using the SMO library in a scripting language like PowerShell.

### Example Queries for Facet-Like Properties
Here are some examples of how to query properties that may be included in facets:

#### 1. **Server-Level Properties**
   Query server properties exposed by the `SERVERPROPERTY` function:
   ```sql
   SELECT 
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('IsClustered') AS IsClustered,
       SERVERPROPERTY('Collation') AS Collation;
   ```

#### 2. **Database-Level Properties**
   Query properties for databases:
   ```sql
   SELECT 
       name AS DatabaseName,
       compatibility_level,
       recovery_model_desc,
       containment_desc,
       is_read_only
   FROM sys.databases;
   ```

#### 3. **Security Settings**
   Query security-related settings:
   ```sql
   SELECT 
       name AS LoginName,
       type_desc AS LoginType,
       is_disabled,
       create_date,
       modify_date
   FROM sys.server_principals
   WHERE type IN ('S', 'U', 'G'); -- SQL user, Windows user, Windows group
   ```

#### 4. **Performance Settings**
   Query settings related to performance:
   ```sql
   SELECT * 
   FROM sys.configurations
   WHERE name LIKE '%cost%' OR name LIKE '%thread%';
   ```

### Querying Policies and Facets via PBM
If you want to explore the **facets** specifically, you can use the `msdb` database, which stores Policy-Based Management configuration.

#### Policies and Conditions
```sql
-- View all policies
SELECT * 
FROM msdb.dbo.syspolicy_policies;

-- View all conditions (related to facets)
SELECT * 
FROM msdb.dbo.syspolicy_conditions;
```

#### Link Facets to Conditions
Conditions in PBM are based on facets. The following query can help you see which conditions are tied to which facets:
```sql
SELECT 
    c.name AS ConditionName,
    f.name AS FacetName
FROM msdb.dbo.syspolicy_conditions AS c
JOIN msdb.dbo.syspolicy_system_facets AS f
    ON c.facet_id = f.facet_id;
```

If you're looking for a particular configuration that corresponds to a facet, let me know, and I can help identify the relevant query.

Author: user
in SQL Server, I know implicit conversions on a column in a WHERE clause will prevent an index from being used. Can you give me a quick list of column comparisons where this happens, like integer = nvarchar?

Author: ChatGPT
Yes, implicit conversions in SQL Server can negatively impact query performance by preventing index usage, leading to full table scans instead of efficient index seeks. When a data type conversion occurs in a `WHERE` clause, the optimizer might have to convert every row before comparison, which can result in poor performance.

### **Common Implicit Conversions That Prevent Index Usage**

Here's a quick list of common column comparisons that trigger implicit conversions and can prevent index usage (a short demonstration follows the list):

1. **Numeric Comparisons:**
   - `INT` = `VARCHAR` or `NVARCHAR`  *(e.g., `int_column = nvarchar_column`)*
   - `BIGINT` = `VARCHAR`
   - `DECIMAL` = `VARCHAR`
   - `FLOAT` = `NVARCHAR`

2. **String Comparisons:**
   - `NVARCHAR` = `VARCHAR` *(Unicode vs. non-Unicode)*
   - `CHAR` = `NCHAR`
   - `TEXT` = `VARCHAR` *(avoid using deprecated `TEXT`)*
   - `VARCHAR` = `UNIQUEIDENTIFIER` (common issue with GUIDs stored as strings)

3. **Date and Time Comparisons:**
   - `DATETIME` = `VARCHAR`
   - `DATETIME2` = `VARCHAR`
   - `DATE` = `VARCHAR`
   - `TIME` = `VARCHAR`
   - `SMALLDATETIME` = `NVARCHAR`

4. **GUID Comparisons:**
   - `UNIQUEIDENTIFIER` = `NVARCHAR`
   - `UNIQUEIDENTIFIER` = `VARCHAR`

5. **Bitwise and Boolean Comparisons:**
   - `BIT` = `VARCHAR` (e.g., `bit_column = '1'`)

6. **LOB (Large Object) Data Types:**
   - `XML` = `VARCHAR` *(use explicit conversion with caution)*
   - `VARBINARY` = `VARCHAR`
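
As a minimal sketch of the column-side case, the script below uses a hypothetical table and index to show how comparing a `VARCHAR` column to an `NVARCHAR` parameter pushes the conversion onto the column (the exact plan shape also depends on collation, so treat this as illustrative):

```sql
-- Hypothetical demo objects; none of these names come from an actual system.
CREATE TABLE dbo.ConversionDemo (ID INT IDENTITY PRIMARY KEY, AccountNumber VARCHAR(20) NOT NULL);
CREATE INDEX IX_ConversionDemo_AccountNumber ON dbo.ConversionDemo (AccountNumber);

-- NVARCHAR has higher precedence, so the VARCHAR column is wrapped in CONVERT_IMPLICIT;
-- with SQL_* collations this typically produces an index scan instead of a seek.
DECLARE @Account NVARCHAR(20) = N'ACCT-1001';
SELECT ID FROM dbo.ConversionDemo WHERE AccountNumber = @Account;

-- Matching the parameter type to the column allows a straightforward index seek.
DECLARE @AccountTyped VARCHAR(20) = 'ACCT-1001';
SELECT ID FROM dbo.ConversionDemo WHERE AccountNumber = @AccountTyped;
```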

### **How to Avoid Implicit Conversions:**

1. **Ensure Data Type Consistency:**  
   - Store data using consistent types across columns and comparisons.
   
2. **Use Explicit Conversions:**  
   - When unavoidable, use `CAST()` or `CONVERT()` explicitly, and apply the conversion to the side whose index is not needed so the indexed column can still support a seek.
   ```sql
   WHERE CAST(nvarchar_column AS INT) = int_column
   ```

3. **Parameterization:**  
   - Ensure query parameters match the column data types.

4. **Check Execution Plans:**  
   - Use `SET STATISTICS IO ON` or query execution plans to detect "CONVERT_IMPLICIT".

### **Detecting Implicit Conversions in Execution Plans:**
You can identify implicit conversions in SQL Server execution plans using the following query:

```sql
SELECT 
    qs.execution_count,
    qt.text AS QueryText,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE '%CONVERT_IMPLICIT%';
```

This helps find instances where implicit conversions are degrading performance.

If you need further help optimizing queries, let me know!

Author: user
does the direction matter? ie nvarchar = int?

Author: ChatGPT
Yes, **the direction of implicit conversion does matter** in SQL Server, especially when it comes to index usage and performance impact.

### **Why Direction Matters:**
SQL Server follows specific [data type precedence](https://learn.microsoft.com/en-us/sql/t-sql/data-types/data-type-precedence-transact-sql) rules to determine which data type should be implicitly converted in a comparison. The data type with the **lower precedence** will be converted to the data type with the **higher precedence**, which can affect whether an index can be used or not.

### **Example of Direction Impact:**

Consider the following example:

#### Scenario 1: `nvarchar_column = int_value`
- If `nvarchar_column` (lower precedence) is compared to an `int` (higher precedence), the **nvarchar** column will be converted to an `int`, resulting in potential conversion errors and poor performance.
- **Result:** This may prevent index usage.

#### Scenario 2: `int_column = nvarchar_value`
- In this case, since `int` has a higher precedence than `nvarchar`, the `nvarchar_value` will be implicitly converted to `int`.
- **Result:** The index on `int_column` **can** still be used, but implicit conversions might cause unexpected results if the string cannot be converted to an integer.

---

### **SQL Server Data Type Precedence (Simplified)**
SQL Server processes conversions in the following order (from highest to lowest precedence):

1. user-defined data types (highest)
2. `sql_variant`
3. `xml`
4. `datetimeoffset`
5. `datetime2`
6. `datetime`
7. `smalldatetime`
8. `date`
9. `time`
10. `float`
11. `real`
12. `decimal`
13. `money`
14. `smallmoney`
15. `bigint`
16. `int`
17. `smallint`
18. `tinyint`
19. `bit`
20. `ntext` (deprecated)
21. `text` (deprecated)
22. `image` (deprecated)
23. `timestamp` (`rowversion`)
24. `uniqueidentifier`
25. `nvarchar`
26. `nchar`
27. `varchar`
28. `char`
29. `varbinary`
30. `binary` (lowest)
---

### **Key Takeaways:**
- **Higher precedence type wins:** If an `nvarchar` is compared to an `int`, SQL Server will try to convert the `nvarchar` to an `int`, possibly leading to errors or index scans.
- **Index impact:** If a column is on the lower precedence side and needs conversion, index usage may be prevented.
- **Performance:** If the implicit conversion lands on the **column side**, it typically forces an index or table scan instead of a seek, slowing the query significantly.

---

### **Best Practices to Avoid Directional Issues:**
1. **Store data in correct data types:** Ensure data is stored in columns with appropriate types (e.g., avoid storing numeric data in string columns).
2. **Use explicit conversions:** When needed, explicitly cast values to avoid unexpected behavior.
   ```sql
   WHERE nvarchar_column = CAST(int_value AS NVARCHAR)
   ```
3. **Check execution plans:** Look for `CONVERT_IMPLICIT` operations in execution plans to identify costly conversions.

---

Would you like to see examples of how to rewrite queries to avoid these implicit conversions?

Author: user
I have a SQL Server 2019 instance with 768GB of RAM. I am seeing PLE drop to 30 minutes regularly, and am thinking of adding Buffer Pool Extensions. Please provide two sentences about the advantages of BPE and its potential impact.

Author: ChatGPT
Buffer Pool Extensions (BPE) can improve SQL Server performance by providing an additional caching layer using fast, non-volatile storage (such as SSDs), which helps reduce disk I/O pressure when memory is under heavy demand. However, while BPE can enhance read performance and extend the buffer pool, it does not replace the need for adequate physical memory and may not significantly help workloads with high write activity or frequent data churn.
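
If BPE is pursued, it is enabled with a single server-level setting; a minimal sketch (the drive path and size below are placeholders, not sizing recommendations):

```sql
-- Enable Buffer Pool Extension on a fast local SSD volume (path and size are hypothetical).
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON (FILENAME = N'S:\BPE\SqlBufferPool.BPE', SIZE = 128 GB);

-- Confirm the current BPE state.
SELECT * FROM sys.dm_os_buffer_pool_extension_configuration;
```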

Author: user
that same server is a virtual machine, 16 cores with 16 sockets. the business has identified that regardless of other recommendations, they want MAXDOP set to 2 and cost threshold for parallelism set at 140. what would you recommend changing the core/socket recommendation to?

Author: ChatGPT
Given that your SQL Server 2019 instance is running on a **virtual machine** with **16 cores distributed across 16 sockets** (i.e., 1 core per socket), the current configuration is **not optimal** for SQL Server performance. SQL Server performs better when it can efficiently schedule threads across fewer sockets with multiple cores per socket, rather than spreading them across many sockets with a single core each.

Since the business has mandated a **MAXDOP of 2** and a **cost threshold for parallelism of 140**, you should adjust the virtual hardware configuration to align with SQL Server's optimal NUMA (Non-Uniform Memory Access) behavior and parallelism settings.

### **Recommended Change:**
- **Change the VM configuration to 2 sockets with 8 cores per socket**  
  - This balances CPU resources effectively and allows SQL Server to optimize thread scheduling within NUMA boundaries.
  - It ensures better cache locality and minimizes cross-socket memory access latency.
  
### **Reasoning:**
1. **NUMA Awareness:**  
   SQL Server is NUMA-aware, and spreading resources across 16 sockets with 1 core each introduces unnecessary latency. Grouping cores into fewer sockets allows SQL Server to allocate resources more efficiently.
   
2. **Parallel Query Execution:**  
   With `MAXDOP = 2`, queries will utilize at most 2 cores per operator. Having 8 cores per socket allows for parallelism without excessive NUMA-related overhead.

3. **Reduced Context Switching Overhead:**  
   More cores per socket reduce thread migration between NUMA nodes, improving overall query execution efficiency.

4. **VM Performance Best Practices:**  
   Aligning the virtual socket/core distribution closer to physical hardware can reduce hypervisor scheduling delays.

Would you like guidance on how to adjust these settings within your virtualization environment?

Author: user
would changing it to 8 sockets with two cores each be recommended?

Author: ChatGPT
Yes, changing the configuration to **8 sockets with 2 cores each** would be a better option than the current 16 sockets with 1 core each, but it's still **not as optimal** as a configuration with fewer sockets and more cores per socket.

### **Pros of 8 Sockets with 2 Cores Each:**
1. **Improved NUMA Efficiency Compared to 16 Sockets:**  
   - SQL Server would have fewer NUMA nodes to manage, reducing memory latency compared to the current 16-socket setup.
   - NUMA node overhead would be reduced, leading to better thread scheduling.

2. **Alignment with MAXDOP = 2:**  
   - Having 2 cores per socket ensures that the query optimizer will not have excessive parallelism beyond the allocated resources per NUMA node.

3. **Hypervisor Scheduling Considerations:**  
   - Depending on the hypervisor (e.g., VMware, Hyper-V), a lower socket count can reduce scheduling conflicts and improve CPU resource allocation.

### **Potential Drawbacks:**
1. **Still More NUMA Nodes than Necessary:**  
   - Having 8 NUMA nodes (with 2 cores each) may still result in memory access inefficiencies, as SQL Server works better with fewer NUMA nodes when parallelism is constrained.
   
2. **Cross-NUMA Latency:**  
   - Queries might experience some performance degradation due to memory access across NUMA nodes, especially for larger queries requiring access beyond 2 cores.

### **Better Recommendation:**  
A configuration of **4 sockets with 4 cores each** or **2 sockets with 8 cores each** would likely provide better overall performance for SQL Server by reducing NUMA overhead while still allowing parallel processing within NUMA nodes efficiently.

- **4 sockets x 4 cores:**  
  - Balances NUMA node management and provides flexibility for parallelism.
  
- **2 sockets x 8 cores:**  
  - Provides the lowest NUMA overhead and best core utilization.

### **Final Recommendation:**
If the choice is between **8 sockets x 2 cores** and other configurations, **8 sockets x 2 cores is an improvement** but **4 sockets x 4 cores** or **2 sockets x 8 cores** would likely yield better performance.

Let me know if you'd like further guidance based on your specific workload patterns.

Author: user
please enhance this observation to be more robust, but limit the recommendations to no more than three sentences: "there are quite a few tables with 20M or more rows. If this data ever needs to be archived off, say, older than seven years of retention, partitioning may be an option."

Author: ChatGPT
There are numerous tables containing over 20 million rows, which could present challenges in terms of performance and storage management as the data continues to grow. If archiving older data, such as records exceeding a seven-year retention period, becomes necessary, implementing table partitioning could be a viable strategy to improve query performance and simplify data management. Partitioning would allow for more efficient querying, easier data purging, and better overall resource utilization without significant application changes.
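
If partitioning is explored, a yearly sliding-window design is a common starting point; the sketch below assumes a `date` partitioning column and uses hypothetical boundary values:

```sql
-- Hypothetical yearly boundaries covering roughly a seven-year retention window.
CREATE PARTITION FUNCTION pf_YearlyRetention (date)
AS RANGE RIGHT FOR VALUES ('2019-01-01', '2020-01-01', '2021-01-01',
                           '2022-01-01', '2023-01-01', '2024-01-01', '2025-01-01');

-- All partitions mapped to PRIMARY for simplicity; dedicated filegroups are often used instead.
CREATE PARTITION SCHEME ps_YearlyRetention
AS PARTITION pf_YearlyRetention ALL TO ([PRIMARY]);
```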

Author: user
please repeat for this observation: "No databases are taking Differential backups. Switching to Weekly Full + Daily Diff can reduce the IO on the server, reducing load on the SAN "

Author: ChatGPT
Currently, none of the databases are utilizing differential backups, which may result in higher I/O and storage consumption due to frequent full backups. Implementing a backup strategy that includes weekly full backups combined with daily differential backups can significantly reduce the I/O load on the server and minimize the impact on the SAN. This approach balances recovery needs while optimizing resource utilization, potentially improving overall system performance.
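
A minimal sketch of the proposed schedule (database name, backup path, and WITH options are placeholders):

```sql
-- Weekly full backup (e.g., Sunday).
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase_FULL.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- Daily differential backup (Monday through Saturday); captures changes since the last full backup.
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM, INIT;
```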

Author: user
please repeat for this observation: "quite a few procedures use nested CTEs to organize data for reporting or other purposes. This design works, but a more comprehensive review might yield performance opportunities using the "divide and conquer" technique of pulling out tables to temp tables to reduce IO."

Author: ChatGPT
A significant number of procedures utilize nested CTEs to organize data for reporting and other purposes, which, while functional, may not always be the most efficient approach. Conducting a thorough review of these procedures could reveal opportunities to enhance performance by applying the "divide and conquer" technique of storing intermediate results in temporary tables to reduce repeated I/O operations. This strategy can improve query execution times, lower resource consumption, and enhance overall system responsiveness.
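
A small illustrative sketch of the pattern, using hypothetical tables rather than anything from the reviewed procedures:

```sql
-- Materialize the intermediate aggregate once instead of repeating it inside nested CTEs.
SELECT o.CustomerID, SUM(o.TotalDue) AS TotalDue
INTO #CustomerTotals
FROM dbo.Orders AS o            -- hypothetical table
GROUP BY o.CustomerID;

-- Later steps read the small temp table instead of re-scanning the base tables.
SELECT c.CustomerName, t.TotalDue
FROM dbo.Customers AS c         -- hypothetical table
JOIN #CustomerTotals AS t
    ON t.CustomerID = c.CustomerID;
```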

Author: user
please repeat for this observation related to linked servers: "server not found or inaccessible / 30-second DNS delays; consider dropping the linked servers. The code in every database was reviewed, and no views or stored procedures reference the old linked servers."

Author: ChatGPT
Several databases contain linked server configurations that are no longer accessible, often resulting in "server not found" or extended 30-second DNS resolution delays. A comprehensive review of all databases has confirmed that no views or stored procedures reference these obsolete linked servers. Given their lack of use and potential to introduce unnecessary connection attempts and delays, it would be beneficial to consider removing these linked server configurations to improve overall system efficiency.
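
If removal is approved, dropping each obsolete linked server is a one-line change; a minimal sketch (the server name is a placeholder):

```sql
-- Remove the obsolete linked server and any login mappings tied to it (name is hypothetical).
EXEC sp_dropserver @server = N'OLD_LINKED_SERVER', @droplogins = 'droplogins';
```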

Author: user
[File]: file-UTg8A1fchBnrtSgCzMXYEk-image.png

Author: ChatGPT
High disk latency, with read operations averaging **300ms** and writes at **150ms**, suggests potential I/O bottlenecks that can negatively impact SQL Server performance. Here are a few recommendations to address and mitigate the issue:

1. **Review Disk Subsystem Performance:**
   - Use tools like **Windows Performance Monitor (PerfMon)** to analyze disk queues, throughput, and latency.
   - Engage with your storage team to ensure the underlying SAN or disk subsystem is not overloaded and meets SQL Server performance requirements.

2. **Optimize Database Indexing and Query Performance:**
   - Analyze frequently executed queries using **SQL Server's Dynamic Management Views (DMVs)** (`sys.dm_io_virtual_file_stats`, `sys.dm_exec_query_stats`) to identify inefficient queries causing high I/O.
   - Implement missing indexes and remove unused ones to reduce unnecessary reads and writes.

3. **Implement Data Compression:**
   - Enabling **page or row compression** can reduce I/O by storing data more efficiently and reducing physical disk reads/writes.

4. **Review TempDB Configuration:**
   - Ensure that TempDB is properly sized and distributed across multiple disks to avoid contention, especially if your workload relies heavily on temporary objects.

5. **Evaluate Memory Allocation:**
   - Ensure SQL Server has sufficient memory to cache frequently accessed data, reducing physical disk reads by increasing the buffer pool.
   - Check **Page Life Expectancy (PLE)** using `sys.dm_os_performance_counters` to determine if memory pressure is causing excessive disk reads.

6. **Consider SSD or Storage Tiering:**
   - If your current storage is traditional HDD-based, consider moving to SSDs or leveraging a tiered storage approach for frequently accessed data.

7. **Check Disk Fragmentation and File Growth Settings:**
   - Ensure that database files are pre-allocated properly to avoid excessive autogrowth events that can degrade performance.
   - Regularly monitor for fragmentation using `sys.dm_db_index_physical_stats` and defragment indexes if necessary.

8. **Implement Buffer Pool Extensions (BPE):**
   - Since your server has 768GB of RAM, extending the buffer pool with a fast SSD cache might help offload some I/O pressure, especially for read-heavy workloads.

By implementing a combination of these strategies, you can reduce disk latency and enhance overall SQL Server performance. Let me know if you need assistance with specific diagnostic queries.

Author: user
please repeat for this observation: "Three sql  logins have common passwords or predicable patterns of their password like pwd = username,pwd = 123456, etc. "

Author: ChatGPT
Three SQL logins have common or easily guessable passwords, such as passwords matching the username or using predictable patterns like "123456." This poses a significant security risk, making the system vulnerable to brute force attacks and unauthorized access. It is strongly recommended to enforce a strong password policy, including complexity requirements, regular password rotations, and monitoring for weak credentials to enhance database security.
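
One way to confirm this finding is the built-in `PWDCOMPARE` function against `sys.sql_logins`; a minimal sketch (the candidate password list is illustrative and should be extended):

```sql
-- Flag SQL logins whose password matches the login name or a trivially weak value.
SELECT name AS LoginName
FROM sys.sql_logins
WHERE PWDCOMPARE(name, password_hash) = 1            -- password equals the login name
   OR PWDCOMPARE('123456', password_hash) = 1
   OR PWDCOMPARE('password', password_hash) = 1;
```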

Author: user
please provide two or three sentences on the impact of Transparent Data Encryption, and mention the impact of restoring to development

Author: ChatGPT
Transparent Data Encryption (TDE) protects data at rest by encrypting database files, backups, and log files, helping to meet compliance requirements and prevent unauthorized access to physical data. However, enabling TDE can introduce a performance overhead due to encryption and decryption operations, particularly during read and write processes. Additionally, restoring a TDE-encrypted database to a development environment requires access to the original encryption certificate and key, which must be securely transferred to ensure successful restoration.

Author: user
is this observation accurate?"Better practice is for Log Growth settings to not be a number smaller than 64GB, and not be a % of the file, which are the typical defaults  "

Author: ChatGPT
This observation is **partially accurate**, but it requires clarification and context.

**Better practices for log growth settings:**  
- It is generally recommended to **avoid percentage-based growth settings** for transaction log files because they can lead to unpredictable growth sizes, especially as the database grows over time. Instead, using a **fixed-size increment** provides more control and prevents fragmentation.  
- The ideal fixed growth size depends on the workload, but setting it to a very large value like **64GB** may not always be appropriate for smaller databases. A more common recommendation is to set the log growth to a size that balances performance with disk space utilization, such as **512MB, 1GB, or even 4GB**, based on database activity and available disk space.  
- A too-small growth size (e.g., 1MB, 10MB) leads to frequent autogrowth events, causing performance degradation and potential log fragmentation. Conversely, excessively large growth sizes can waste disk space.

### **Recommended Approach:**
- Analyze the log growth pattern and set an appropriate fixed increment (e.g., 512MB, 1GB, or larger if needed).
- Regularly monitor log usage and proactively increase the file size manually to minimize autogrowth events.
- Avoid percentage-based growth to maintain predictability and control.

**Conclusion:** Instead of a blanket recommendation of "no smaller than 64GB," it's better to tailor the log growth setting to the database workload and growth patterns.

Author: user
assuming i meant 64MB instead of gigabyte, please refactor the recommendation

Author: ChatGPT
A better practice for transaction log growth settings is to avoid values smaller than **64MB** and to refrain from using percentage-based growth settings, which are the typical defaults. Percentage-based growth can lead to inefficient disk space usage and fragmentation, especially as the database size increases over time. Instead, setting a fixed growth value, such as **64MB, 128MB, or 256MB**, provides better predictability, reduces the frequency of autogrowth events, and helps maintain optimal performance by preventing excessive fragmentation and file system overhead.
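
Adjusting the growth increment is a one-off change per log file; a minimal sketch (database and logical file names are placeholders):

```sql
-- Switch the log file from percentage growth to a fixed 256MB increment (names are hypothetical).
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 256MB);
```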

Author: user
and what is the recommendation for Virtual Log files; I heard trying to keep it under 24 is preferred

Author: ChatGPT
Yes, the recommendation for Virtual Log Files (VLFs) is to **keep their count as low as possible**, with a general guideline of **fewer than 50** and ideally **under 24** for optimal performance. A high number of VLFs can lead to performance issues, especially during transaction log backups, restores, and auto-recovery processes, as SQL Server must scan through all VLFs.

### **Recommended Practices for Managing VLFs:**
1. **Initial Sizing and Growth Settings:**
   - Pre-size the transaction log file appropriately based on workload to minimize frequent growth events.
   - Use fixed growth increments (e.g., **64MB, 128MB, or 256MB**) instead of percentage-based growth to prevent excessive fragmentation.

2. **Regular Monitoring and Maintenance:**
   - Check the number of VLFs using the following query:
     ```sql
     DBCC LOGINFO;
     ```
   - If the number of VLFs is excessive, consider shrinking the log file and then manually growing it in larger increments to consolidate VLFs.

3. **Growth Strategy for Healthy VLF Distribution:**
   - When the log file grows, SQL Server creates VLFs in these increments (the classic pre-2014 algorithm; SQL Server 2014 and later may create a single VLF when the growth is small relative to the current log size):
     - **Up to 64MB growth** creates 4 VLFs
     - **64MB - 1GB growth** creates 8 VLFs
     - **Greater than 1GB growth** creates 16 VLFs

By proactively managing VLF count and transaction log growth, you can enhance SQL Server performance, reduce log management overhead, and ensure efficient database operations.
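
On SQL Server 2016 SP2 and later, `sys.dm_db_log_info` offers an easier way to count VLFs than `DBCC LOGINFO`; a quick sketch:

```sql
-- VLF count per database (sys.dm_db_log_info is available on SQL Server 2016 SP2 and later).
SELECT d.name AS DatabaseName,
       COUNT(*) AS VLFCount
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY VLFCount DESC;
```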

Author: user
in SSRS, I have the following recommendations, after discovering a trigger on dbo.Subscriptions which raises 120 deadlocks per day. The recommendation is to remove the trigger, and the following: "An ongoing problem with Active Subscriptions exists. This table should only have items that are in process, working their way through the Active Subscriptions >> Notifications >> Events >> ExecutionLog workflow.
An ongoing problem with Active Subscriptions exists. Once the trigger is fixed, Execution Log items that have been Pending for much longer than normal need a simple cleanup.
A large number of Subscriptions have a last status of Running, though two years old"

Author: ChatGPT
Here are the enhanced recommendations based on the identified issues with SSRS subscriptions and the trigger causing deadlocks:

1. **Trigger Removal and Workflow Optimization:**  
   The existing trigger on `dbo.Subscriptions` is raising approximately 120 deadlocks per day, indicating contention issues that may impact report processing performance. It is recommended to remove the trigger and evaluate alternative mechanisms, such as scheduled cleanups or event-driven processing, to maintain the integrity of the Active Subscriptions workflow.

2. **Active Subscriptions Monitoring and Cleanup:**  
   The `Active Subscriptions` table should only contain records that are actively processing and progressing through the workflow (`Active Subscriptions >> Notifications >> Events >> ExecutionLog`). An ongoing issue suggests that failed or orphaned records are accumulating, potentially impacting performance and processing efficiency. Regular monitoring and automated cleanup processes should be implemented to ensure only relevant entries remain.

3. **Pending Execution Log Cleanup:**  
   After addressing the trigger issue, subscriptions with a prolonged "Pending" status in the Execution Log should be reviewed, as they indicate potential processing bottlenecks or failures. A scheduled cleanup routine should be introduced to remove stalled or outdated entries and ensure efficient report execution.

4. **Stale Subscription Management:**  
   A significant number of subscriptions have a "Running" status despite being two years old, suggesting that they were not properly finalized or canceled. These outdated entries should be analyzed and purged to prevent resource waste and improve report server performance.

Implementing these recommendations will enhance the overall efficiency of the SSRS subscription processing, reduce unnecessary resource consumption, and prevent future operational disruptions.
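
A starting point for identifying the stale entries could be a query against the ReportServer catalog; a rough sketch, assuming the default `ReportServer` database and schema (column names should be verified against your SSRS version):

```sql
USE ReportServer;  -- default catalog database name; adjust if different

-- Subscriptions whose last status is 'Running' but that have not executed in over a year.
SELECT s.SubscriptionID,
       c.[Path]      AS ReportPath,
       s.LastStatus,
       s.LastRunTime
FROM dbo.Subscriptions AS s
JOIN dbo.[Catalog] AS c
    ON c.ItemID = s.Report_OID
WHERE s.LastStatus LIKE 'Running%'
  AND s.LastRunTime < DATEADD(YEAR, -1, GETDATE());
```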

Author: user
the server in question has 55 Individual Active Directory Logins; please provide the same two or three sentence recommendation for that issue

Author: ChatGPT
The server currently has 55 individual Active Directory logins, which complicates security management and increases the administrative effort required to audit and maintain permissions. It is recommended to consolidate individual logins into Active Directory groups, allowing for simplified permission management, improved auditing, and reduced administrative overhead. This approach enhances security by ensuring consistent access control policies and minimizes the risk of orphaned or misconfigured logins.
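
An inventory of the individual Windows logins (as opposed to groups) can be pulled from `sys.server_principals`; a minimal sketch:

```sql
-- Individual Windows logins; type 'U' = Windows login, 'G' = Windows group.
SELECT name, type_desc, create_date, modify_date
FROM sys.server_principals
WHERE type = 'U'
  AND name NOT LIKE 'NT SERVICE\%'
  AND name NOT LIKE 'NT AUTHORITY\%'
ORDER BY name;
```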

Author: user
we also have this observation: "they are regularly shrinking both data files and log files for a few databases, in others they are consistently shrinking log files under the Agent Job 00_ReleaseUnsedLogSpace (4 Times a Day)."

Author: ChatGPT
Regularly shrinking both data and log files, especially through the scheduled Agent Job `00_ReleaseUnusedLogSpace` running four times a day, can negatively impact SQL Server performance by causing file fragmentation and increasing the overhead of repeated growth operations. Shrinking should only be used as a last resort in cases where disk space is critically low, as it can lead to performance degradation and excessive resource consumption. It is recommended to review database growth patterns, adjust autogrowth settings appropriately, and eliminate unnecessary shrink operations to maintain database stability and performance.

Author: user
DB_Owner permissions 

Author: ChatGPT
Excessive assignment of `db_owner` permissions to users can pose significant security and operational risks, as it grants full control over the database, including the ability to modify schema, manage security settings, and perform destructive actions such as dropping objects. It is recommended to follow the principle of least privilege by granting users only the necessary permissions required for their tasks, such as `db_datareader` or `db_datawriter` roles, to limit their capabilities while maintaining operational efficiency. Regular audits should be conducted to identify and remove unnecessary `db_owner` assignments, ensuring compliance with security best practices and reducing the risk of accidental or intentional data loss.
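
A per-database inventory of `db_owner` members can support that audit; a minimal sketch (run in each database of interest):

```sql
-- Members of the db_owner role in the current database.
SELECT m.name      AS MemberName,
       m.type_desc AS MemberType
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS r
    ON r.principal_id = drm.role_principal_id
JOIN sys.database_principals AS m
    ON m.principal_id = drm.member_principal_id
WHERE r.name = 'db_owner';
```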

Author: user
how about this observation: "a few reports don't use any of the Four Shared Data Sources for 84 reports out of 98 use a Shared Data source, 14 do not.  "

Author: ChatGPT
A review of report configurations indicates that 84 out of 98 reports are utilizing one of the four available shared data sources, while 14 reports rely on individual, embedded data sources. Using embedded data sources can lead to inconsistencies, increased maintenance efforts, and potential security risks due to decentralized connection management. It is recommended to standardize report connections by transitioning all reports to shared data sources where possible, ensuring easier management, consistent security settings, and simplified connection updates across all reports.

Author: user
how about this observation? "best practice is to create a db_execute role whichis granted EXECUTE in the database, created in each database makes permission discovery easier to review identify and manage."

Author: ChatGPT
Following best practices, it is recommended to create a dedicated `db_execute` role in each database and grant it the necessary `EXECUTE` permissions. This approach simplifies permission management by providing a centralized role that can be easily reviewed, identified, and assigned to users or groups, rather than granting EXECUTE permissions directly to individual users. Implementing this role enhances security, ensures consistency across environments, and facilitates compliance audits by offering a clear and organized permission structure.
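
A minimal sketch of the role setup (the member name is a placeholder and assumes a matching database user already exists):

```sql
-- Create the role once per database and grant it database-wide EXECUTE.
CREATE ROLE db_execute;
GRANT EXECUTE TO db_execute;

-- Grant procedure execution through membership rather than direct per-user grants (member name is hypothetical).
ALTER ROLE db_execute ADD MEMBER [DOMAIN\AppServiceAccount];
```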

Author: user
and this recommendation? "we can help produce projects to document all objects in all databases, to be imported into source control, or simply kept as a point in time reference "

Author: ChatGPT
It is recommended to create projects that document all database objects across all databases, providing a comprehensive inventory that can be imported into source control or retained as a point-in-time reference. This documentation enables better change tracking, version control, and facilitates easier troubleshooting and auditing of database changes over time. By maintaining an up-to-date record of database objects, teams can improve collaboration, ensure compliance, and enhance overall database governance.

Author: user
Some procedures self-reference the current database using three-part naming conventions


Author: ChatGPT
Some stored procedures use three-part naming conventions to reference objects within the same database, which can lead to maintenance challenges and potential compatibility issues, especially when moving or restoring databases to different environments. It is recommended to replace self-referencing three-part names with two-part naming conventions (schema.object) to improve portability, simplify future migrations, and align with best practices. This approach enhances code maintainability and ensures consistency across environments without relying on fixed database names.
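
Candidates can be located by searching module definitions for the current database name; a rough sketch (simple string matching can produce false positives, so results should be reviewed manually):

```sql
-- Procedures, views, and functions whose definition contains a three-part reference to the current database.
SELECT OBJECT_SCHEMA_NAME(m.object_id) + '.' + OBJECT_NAME(m.object_id) AS ObjectName
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%' + DB_NAME() + '.%'
   OR m.definition LIKE '%[[]' + DB_NAME() + ']%';
```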

Author: user
There are numerous Jobs which are updating indexes and statistics each day.  The jobs related to statistics only update specific tables, and do not check if there are out of date statistics on other tables that would also have value.


Author: ChatGPT
There are numerous SQL Server Agent jobs performing daily index and statistics updates, but the current approach for statistics maintenance only targets specific tables without assessing whether other tables have outdated statistics that could benefit from an update. This selective update strategy may lead to suboptimal query performance, as queries relying on stale statistics might generate inefficient execution plans. It is recommended to implement a more comprehensive statistics maintenance strategy, such as leveraging SQL Server's built-in `sys.dm_db_stats_properties` to identify and update only out-of-date statistics, ensuring balanced performance while minimizing unnecessary resource consumption.
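
A sketch of how out-of-date statistics might be surfaced across all user tables (the modification threshold below is illustrative, not prescriptive):

```sql
-- Statistics with a meaningful number of modifications since their last update.
SELECT OBJECT_SCHEMA_NAME(s.object_id) + '.' + OBJECT_NAME(s.object_id) AS TableName,
       s.name                   AS StatName,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND sp.modification_counter > 1000
ORDER BY sp.modification_counter DESC;
```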

Author: user
and an observation for switching to Group Managed Service Accounts?

Author: ChatGPT
Switching to **Group Managed Service Accounts (gMSAs)** for SQL Server services can enhance security by providing automatic password management, eliminating the need for manual password updates, and reducing the risk of service disruptions due to expired credentials. gMSAs also simplify account management across multiple servers by allowing centralized control through Active Directory, improving compliance with security policies and reducing administrative overhead. It is recommended to transition SQL Server services to gMSAs to strengthen security, ensure seamless authentication, and facilitate easier management in enterprise environments.

Author: user
in a web based vdi session, how can i send control+alt+end

Author: ChatGPT
In a web-based Virtual Desktop Infrastructure (VDI) session, sending **Ctrl+Alt+End**, which is commonly used to bring up the security options (e.g., Task Manager, Sign Out, Lock) within the remote session, can be done using the following methods:

### **1. Use On-Screen Keyboard (OSK)**
1. Inside the VDI session, open the **On-Screen Keyboard** by pressing `Win + R` and typing `osk`, then hit Enter.
2. Press **Ctrl**, **Alt**, and click **End** on the on-screen keyboard to send the key combination.

### **2. Use VDI-Specific Keyboard Shortcuts**
Depending on the VDI provider (e.g., Citrix, VMware Horizon, Microsoft Remote Desktop Web), try the following alternative key combinations:
- **Ctrl + Shift + Esc** - Directly opens Task Manager.
- **Ctrl + Fn + Alt + End** - If you are using a laptop keyboard with an `Fn` key.
- **Ctrl + Alt + Insert** - Some environments remap `Ctrl + Alt + End` to `Ctrl + Alt + Insert`.

### **3. Browser-Specific Shortcuts (If Applicable)**
Some web-based VDI solutions provide a toolbar within the session where you can send key combinations such as `Ctrl + Alt + End`. Look for an option labeled **"Send Ctrl+Alt+Del"** or similar.

### **4. RDP Web Client Workaround**
If the web-based session is using an RDP connection, try pressing **Ctrl + Alt + Fn + End**, as some keyboards require the function key (`Fn`) to send the `End` key in remote sessions.

Let me know if you're using a specific VDI provider, and I can provide tailored steps.

Author: user
how about an observation about untrusted constraints

Author: ChatGPT
Untrusted constraints in the database can lead to performance issues and potential data integrity risks, as SQL Server may bypass them during query optimization, preventing efficient execution plans that rely on trusted constraints. It is recommended to identify and re-enable any untrusted foreign keys and check constraints to ensure they are properly enforced and trusted by the query optimizer. This can be achieved by running the `ALTER TABLE ... WITH CHECK CHECK CONSTRAINT` command to validate existing data and restore trust, improving query performance and maintaining data consistency.
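
A sketch of how the untrusted constraints could be enumerated and then re-validated (the table and constraint names in the ALTER statement are placeholders):

```sql
-- Foreign keys and check constraints currently marked not trusted.
SELECT OBJECT_SCHEMA_NAME(parent_object_id) + '.' + OBJECT_NAME(parent_object_id) AS TableName,
       name AS ConstraintName,
       'FOREIGN KEY' AS ConstraintType
FROM sys.foreign_keys
WHERE is_not_trusted = 1
UNION ALL
SELECT OBJECT_SCHEMA_NAME(parent_object_id) + '.' + OBJECT_NAME(parent_object_id),
       name,
       'CHECK'
FROM sys.check_constraints
WHERE is_not_trusted = 1;

-- Re-validate a constraint so the optimizer can trust it again (names are hypothetical).
ALTER TABLE dbo.OrderDetail WITH CHECK CHECK CONSTRAINT FK_OrderDetail_Order;
```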

