Intelligent Infrastructure for the Database Administrator – Part III – Predictable Performance


Knowing your databases will perform as expected is a huge relief to any DBA. But most DBAs rarely, if ever, experience this kind of relief. Why? Because we all live in a shared world where we no longer have dedicated storage for our databases. That’s not really supposed to be a bad thing, but a shared storage model does create its own set of challenges.  

Welcome to the third blog post in this seven-part series on Tintri SQL Integrated Storage. SQL Integrated Storage is an innovative database-centric offering from Tintri that’s integrated with Tintri Global Center (TGC) and leverages the VMstore intelligent storage infrastructure platform. If you haven’t reviewed the first two parts of this blog series you can find them here:  

Intelligent Infrastructure for the DBA Blog 

Part I: Introduction to SQL Integrated Storage 

Part II: The Right Level of Visibility 

Now on to Part III: Predictable Performance with SQL Integrated Storage. We will cover the unique technologies that ensure your database receives the optimal amount of resources based on its unique utilization patterns and QoS requirements, which we refer to as performance isolation. We will also discuss how to improve the overall performance of your production SQL databases and underlying systems by offloading some of your maintenance tasks. 

Today most databases reside on a database server. More specifically, they probably reside within a virtual machine (VM) that also includes other databases – all running on that database server. That VM, in turn, is assigned to a LUN on a storage system or array, along with other VMs. The performance behavior in this popular scenario can be somewhat “hit or miss,” as all databases and all other VMs on the same LUN share access to the same physical storage systems. When a particular database or virtual machine on that LUN changes its performance behavior, creating a spike in resource usage, it can affect the performance of other databases – even those running on different servers. 

 

QoS 

One of the capabilities that makes SQL Integrated Storage truly unique is the guaranteed quality of service (QoS) it delivers, which provides predictable performance for each database. There are two types of QoS available: Auto QoS and Manual QoS. We will explore both, but let’s be clear: no other product today can provide per-database QoS for storage performance. Other systems try to provide it at the LUN or volume level, but those solutions require you to configure only one database per LUN, which can easily create a management nightmare. 

Auto QoS 

SQL Integrated Storage monitors and analyzes the behavior of each database. Based on usage patterns, it puts system reserves in place to ensure that a desired level of performance is consistently maintained for each database. This is set up and managed automatically for you – neither the DBA nor the storage admin needs to make any configuration changes for this to work. In fact, no human intervention is required. The result is predictable performance: even when overall loads increase, each database still gets the same dedicated resources and can consistently deliver the same level of performance. No longer are you slowed down by some other workload ramping up. The granular analysis of usage patterns for each of your databases ensures that the same I/O performance will always be available. Auto QoS lets you run index maintenance on one database without slowing down other databases running on the same server or accessing the same storage, because the reserves prevent any one database from hogging all of the storage performance. 
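To make that concrete, here is a minimal sketch of the kind of index maintenance being referred to. The database name and the 30% fragmentation threshold are hypothetical; the point is simply that, with per-database reserves in place, the heavy I/O this kind of job generates stays contained to the one database it runs against.

```sql
-- Routine index maintenance on one database (SalesDB is an assumed name).
USE SalesDB;
GO

-- Build ALTER INDEX REBUILD statements for indexes above a fragmentation
-- threshold (30% here is illustrative), then execute them.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'ALTER INDEX ' + QUOTENAME(i.name)
             + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N' REBUILD;' + NCHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.tables  AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND i.index_id > 0;   -- skip heaps

EXEC sys.sp_executesql @sql;
```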

Manual QoS 

You can also set performance limits manually, essentially “pinning” a database to a specific minimum or maximum IOPS value. For example, if one database on the SQL Server instance is more critical than the others, we can set the QoS to give that database the highest possible I/O as needed, knowing it will leave other databases with less. There is often a business justification for this, such as when critical production databases are prioritized over less important dev or test workloads. On the other hand, if you have a database that is poorly designed and not very critical, you can set a low QoS level to prevent it from using too many resources, so that those resources remain available for other databases. This function is similar to SQL Server Resource Governor, but it avoids Resource Governor’s classifier function, which has to classify every incoming connection and is a pain to set up. 
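For comparison, this is roughly what the Resource Governor route involves in T-SQL. The pool, group, and login names are hypothetical, and the IOPS cap shown requires SQL Server 2014 or later; it is a sketch of the setup being avoided, not a Tintri configuration.

```sql
-- Sketch of the SQL Server Resource Governor setup that per-database
-- storage QoS lets you skip; names and limits are illustrative.
USE master;
GO

-- 1. A resource pool that caps I/O (SQL Server 2014+ for IOPS caps).
CREATE RESOURCE POOL LowPriorityPool
    WITH (MAX_IOPS_PER_VOLUME = 500);
GO

-- 2. A workload group tied to that pool.
CREATE WORKLOAD GROUP LowPriorityGroup
    USING LowPriorityPool;
GO

-- 3. The classifier function runs for every new connection and decides
--    which workload group it lands in; keeping this logic current as
--    logins and applications change is the painful part.
CREATE FUNCTION dbo.fn_rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'reporting_user'   -- hypothetical login
        RETURN N'LowPriorityGroup';
    RETURN N'default';
END;
GO

-- 4. Register the classifier and enable Resource Governor.
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
```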

Offloading maintenance 

DBAs should be performing ongoing maintenance on their databases, which includes reindexing and integrity checks, plus backups of course. Most of these maintenance tasks incur a performance impact, so they are usually run during planned maintenance windows or off-peak times, ideally nightly if at all possible. However, many applications run constantly, across various time zones, and can’t take the performance impact no matter when you run maintenance. For that reason, some of these tasks are performed only once per week or even less frequently. With SQL Integrated Storage you can offload some of these tasks to a different SQL Server and not have to worry about any performance impact. Let’s walk through the process of offloading an integrity check. 

Integrity Checks 

Running CheckDB is the only way to know for sure that there is no data corruption inside your database. Typically, CheckDB only tells you whether there is corruption in the particular database you run it against, so running it on a different system, even against a backup copy of the database, doesn’t help: if you find issues in that second copy, you won’t know whether the same issues exist in production. On top of this, the CheckDB process has a major impact on performance. Exactly how often you need to run CheckDB is an open debate, but the most common recommendation is to run it at least weekly. With SQL Integrated Storage you can offload CheckDB to a different server, and the advantage is that you will be running it against the exact same blocks of storage used by production. This means you can be certain there was no corruption in your production database at the time the snapshot was taken. This process can be set up to run daily – automatically, with no issues and no performance impact. 

So how does this work? Take a snapshot of a production database, then clone one database (or as many as you want) to a different SQL Server. Now run CheckDB on the database(s). Because the CPU and memory are consumed on a different SQL Server, you will not see any performance impact on production. Each database – even a clone – has its own I/O path between the SQL Server and the storage, so even though CheckDB reads the same data blocks, it will not slow down production access to the database. You can run CheckDB as often as you like without worrying about any performance impact on production. 
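A minimal sketch of that last step, run on the offload SQL Server once the clone has been presented to it (the clone’s name is assumed for illustration):

```sql
-- Run on the offload SQL Server against the cloned database
-- (SalesDB_Clone is an assumed name).
DBCC CHECKDB (N'SalesDB_Clone')
    WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
-- A clean result here means the production blocks captured in the
-- snapshot were free of corruption when the snapshot was taken.
```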

Copy-only backups can be offloaded in the same way. Note, however, that a backup taken from the clone sits outside the production log chain, so you will not be able to use it for point-in-time database recovery or to restore any log files against it. Although index maintenance cannot be offloaded, QoS does enable you to run it without any impact on other databases running on the same server. 
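And a sketch of the offloaded copy-only backup, again run on the offload server against the clone; the database name and backup path are hypothetical:

```sql
-- Copy-only backup of the clone, taken on the offload server.
-- COPY_ONLY keeps it outside the regular backup sequence;
-- the database name and file path are illustrative.
BACKUP DATABASE [SalesDB_Clone]
TO DISK = N'E:\Offload\SalesDB_clone_copyonly.bak'
WITH COPY_ONLY, COMPRESSION, CHECKSUM, INIT;
GO
```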

 

Recapitulation 

Not having to worry about fluctuations in database storage performance can be a great weight lifted off the shoulders of the DBA. IT teams spend way too much time trying to figure out why a slowdown occurred in an ETL job or stored procedure. And oftentimes no answer is ever found, because the issue stems from a completely different system overloading some part of the infrastructure and slowing down your workload. With Tintri SQL Integrated Storage you get the benefits of Intelligent Infrastructure, so your workloads are always provided with the resources they need to maintain the same, consistent level of performance based on past usage cycles. Offloading database integrity checks also helps protect critical workload performance, since this high-impact process is assigned to a different server, avoiding slowdowns to production. 

Are you ready to make a change from the way your database storage has been running? Are you ready to finally sleep better at night, knowing things will just run, and run predictably? Then you’re ready for Tintri SQL Integrated Storage. 

Contact us for a free demo today! 
