The DBA Corner
by Craig S. Mullins
Most organizations monitor and tune the performance of their IT infrastructure. But the performance management steps taken are almost always reactive. Handling performance problems is an enterprise-wide endeavor; nevertheless, the task of enterprise performance management frequently falls to the DBA group. Every performance problem gets blamed on the database regardless of its true cause. DBAs need to be able to research and decipher the true cause of all performance degradation, if only to prove that it is not caused by the database. DBAs must understand at least the basics of the entire IT infrastructure, and they also need many friends who are experts in related fields. A sound understanding of the IT infrastructure enables DBAs to respond effectively when performance problems arise. Event-driven performance tools can make performance management easier by automatically invoking pre-defined actions when specific alerts are triggered. But many of the supposedly proactive steps taken against completed applications in production are actually mostly reactive. Let's face it: DBAs are often too busy taking care of day-to-day tactical DBA tasks to proactively monitor and tune their systems.
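At its core, an event-driven performance tool is a loop that evaluates metrics against thresholds and invokes a pre-defined action when an alert triggers. The following is a minimal sketch in Python; the metric names, thresholds, and actions are all hypothetical, not taken from any particular monitoring product:

```python
# Minimal sketch of an event-driven performance monitor.
# Metric names, thresholds, and actions are illustrative only.

ALERTS = {
    # metric name -> (threshold, pre-defined corrective action)
    "buffer_pool_hit_ratio": (0.80, "expand buffer pool"),
    "lock_wait_seconds":     (5.0,  "notify DBA on call"),
}

def check_metrics(samples):
    """Return the pre-defined actions triggered by the sampled metrics."""
    actions = []
    for metric, value in samples.items():
        threshold, action = ALERTS.get(metric, (None, None))
        if threshold is None:
            continue
        # A ratio alert fires when the value drops below its threshold;
        # a wait-time alert fires when the value rises above it.
        if metric.endswith("ratio") and value < threshold:
            actions.append(action)
        elif not metric.endswith("ratio") and value > threshold:
            actions.append(action)
    return actions

triggered = check_metrics({"buffer_pool_hit_ratio": 0.72,
                           "lock_wait_seconds": 1.3})
print(triggered)  # only the hit-ratio alert fires: ['expand buffer pool']
```

Real tools layer scheduling, alert suppression, and escalation on top of this loop, but the automatic mapping from alert to pre-defined action is the part that makes them event-driven rather than purely reactive.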
What do we mean by "database performance"? The rate at which the DBMS supplies the demand for information can be termed database performance. Five factors influence it: workload, throughput, resources, optimization, and contention.
The workload requested of the DBMS defines the demand. It combines online transactions, batch jobs, ad hoc queries, data warehousing analysis, and system commands directed through the system at any given time. Workload can fluctuate drastically from day to day, hour to hour, and even minute to minute. Sometimes workload can be predicted (such as heavy month-end processing of payroll, or very light access after 7:00 p.m., when most users have left for the day), but at other times, it is unpredictable. The overall workload has a major impact on database performance.
Throughput defines the overall capability of the computer to process data. It is a composite of I/O speed, CPU speed, parallel capabilities of the machine, and the efficiency of the operating system and system software.

The hardware and software tools at the disposal of the system are known as the resources of the system; examples include the database kernel, disk space, cache controllers, and microcode.
Furthermore, all types of systems can be optimized, but relational databases are unique in that query optimization is accomplished primarily inside the DBMS. However, many other factors need to be optimized (SQL formulation, database parameters, database organization, etc.) to enable the database optimizer to create the most efficient access paths.
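The link between database organization and the optimizer's access path choice can be seen directly with SQLite's EXPLAIN QUERY PLAN facility (a sketch; most DBMSs offer an equivalent, such as EXPLAIN in Db2 or Oracle). The same SQL statement gets a different, far cheaper access path once an index exists:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, lastname TEXT)")

query = "SELECT * FROM emp WHERE lastname = 'Mullins'"

# Without an index, the optimizer has no choice but to scan the table.
plan_scan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_scan[0][-1])   # e.g. "SCAN emp"

# After reorganizing the database with an index, the identical SQL
# is satisfied by an index search instead of a full scan.
conn.execute("CREATE INDEX emp_ix ON emp (lastname)")
plan_index = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_index[0][-1])  # e.g. "SEARCH emp USING INDEX emp_ix (lastname=?)"
```

The exact plan wording varies by SQLite version, but the point stands: the application's SQL did not change; the database organization did, and the optimizer responded with a better access path.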
When the demand (workload) for a particular resource is high, contention can result. Contention is the condition in which two or more components of the workload are attempting to use a single resource in a conflicting way (for example, dual updates to the same piece of data). As contention increases, throughput decreases.
Database performance, then, can be defined as the optimization of resource use to increase throughput and minimize contention, enabling the largest possible workload to be processed. In addition, applications regularly communicate with other subsystems and components of the IT infrastructure, and each of these must also be factored into your organization's overall performance planning. But it is wise to place limits on the DBA's actual tuning responsibility: if a task is not covered by this definition, it probably requires expertise outside the scope of database administration. Performance management tasks not covered by the above description should therefore be handled by someone other than the DBA, or at a minimum shared between the DBA and other technicians.
Trends and Applications, October 2002.
© 2002 Craig S. Mullins, All rights reserved.