Cost containment is an important criterion for IT departments in this era of financial austerity. Every decision regarding your computing resources is weighed not only on the value it can deliver to your organization, but also on its cost to procure, implement, and maintain. In most cases, if a positive return on investment cannot be demonstrated, the software won’t be adopted, or the hardware won’t be upgraded.
An often-overlooked opportunity for cost containment comes from within the realm of your capacity planning group. Capacity planning is the process of determining the production capacity needed by an organization to meet changing demands for its products. But capacity planning is perhaps a misnomer, because this group should not only be planning your capacity needs, but also managing your organization’s capacity. Actively managing your resources to fit your demand can reduce your IT department’s software bills, especially in a mainframe environment.
Why is the mainframe especially relevant? Well, the total cost of mainframe computing continues to be high, and software is the biggest portion of that cost. The pricing model for most mainframe software remains based on the capacity of the machine on which the software will run. Note that this pricing model reflects the potential usage based on the capacity of the machine, not the actual usage. Some vendors offer usage-based pricing. You should actively discuss this with your current ISVs as it is becoming more common, more accurately represents fair usage, and can save you money.
IBM offers Variable Workload License Charges (VWLC) and its variants (such as Advanced Workload License Charges, or AWLC) for many of its popular software offerings. VWLC applies to products such as z/OS, DB2, IMS, CICS, MQSeries and COBOL. It is a monthly license pricing metric designed to match software cost more closely with usage. Some of the benefits of VWLC include the ability to:

·         Grow hardware capacity without necessarily increasing your software charges

·         Pay for key software at LPAR-level granularity

·         Experience a low cost of incremental growth

·         Manage software cost by managing workload utilization

Basically, what happens with VWLC is that your MSU usage is tracked and reported by LPAR, and you are charged based on the maximum rolling four-hour (R4H) average MSU usage. R4H averages are calculated each hour, for each LPAR, for the month. You are then charged for each product based on the LPARs in which it runs. All of this information is collected and reported to IBM using the Sub-Capacity Reporting Tool (SCRT), which reads the SMF 70-1 and SMF 89-1 / 89-2 records. So you pay for what you use, sort of; you actually pay based on LPAR usage. Consider, for example, an LPAR running both DB2 and CICS, where DB2 is only minimally used and CICS is used heavily. Since they are both in the LPAR, you’d be charged the same amount of usage for both. But it is still better than being charged based on the usage of your entire CEC, right?
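The R4H calculation described above can be sketched in a few lines. This is an illustrative simplification, not the actual SCRT algorithm, and the MSU figures are invented for the example:

```python
# Illustrative sketch (not the real SCRT logic): compute the rolling
# four-hour (R4H) average MSU per LPAR from hourly MSU readings, then
# take the monthly peak that sub-capacity pricing bills against.

def r4h_averages(hourly_msus):
    """For each hour, average that hour with up to the three preceding hours."""
    averages = []
    for i in range(len(hourly_msus)):
        window = hourly_msus[max(0, i - 3):i + 1]
        averages.append(sum(window) / len(window))
    return averages

def billable_peak(hourly_msus):
    """The peak R4H average is what the LPAR is billed at."""
    return max(r4h_averages(hourly_msus))

# One LPAR, eight hours of hypothetical MSU usage:
usage = [100, 400, 400, 400, 400, 100, 100, 100]
print(billable_peak(usage))  # 400.0: hours 2-5 average to the full 400 MSUs
```

Note that a single one-hour spike is diluted by the four-hour window; only sustained usage drives the billable peak up, which is why tuning sustained peak-period workloads (discussed later) pays off.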
 
About Those SMF Records
In the previous section I noted that the SCRT uses SMF data to report on activity and calculate a monthly IBM software bill. Two types of SMF records are analyzed:

·         SMF 70-1 records report CPU activity for the CPU and LPARs

·         SMF 89 records report when a software product is in use. The SMF 89-1 record is cut when a product is started and the SMF 89-2 record is cut when the product is stopped. Therefore, SMF 89 records can be used to calculate reduced sub-capacity software pricing.

However, not all products cut SMF 89 records. If SMF 89 records are not cut, the product will be billed based on the peak R4H average (or defined capacity, DC) for the LPARs in which it runs. If SMF 89 records are cut by the product, then it will be charged only for the time it was operational. So if the peak for the LPAR occurred when the product was not operational, the product would not be billed at that peak, but at the peak that occurred while the product was actually up and running.
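The SMF 89 effect can be shown with a simplified sketch. This is not the real SCRT logic, and the hourly R4H figures are invented; it only illustrates how restricting the peak search to product-active hours can lower the bill:

```python
# Simplified sketch: a product that cuts SMF 89 records is billed at the
# peak R4H average over only the hours it was running; a product without
# SMF 89 records gets the full LPAR peak.

def product_peak(r4h_by_hour, active_hours=None):
    if active_hours is None:            # no SMF 89 data: full LPAR peak
        return max(r4h_by_hour)
    return max(r4h_by_hour[h] for h in active_hours)

lpar_r4h = [200, 350, 500, 300]         # hypothetical hourly R4H averages

print(product_peak(lpar_r4h))                            # 500: billed at LPAR peak
print(product_peak(lpar_r4h, active_hours=[0, 1, 3]))    # 350: hour 2 peak avoided
```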
 
Soft Capping
You can take things a step further by implementing soft capping on your system. Soft capping is a way of setting the capacity for your system such that you are not charged for the entire capacity of your CPC, but at some lower defined capacity.
Without soft capping you are charged the maximum R4H average per LPAR; with soft capping, your charge per LPAR is based on the maximum R4H average or the defined capacity that you set, whichever is lower.
The downside to soft capping is that you are setting limits on the usage of your hardware. Even though your machine has a higher capacity, you’ve set a lower defined capacity and if the R4H average exceeds the defined capacity, your system is capped at the defined capacity level.
Sites that avoid soft capping usually do so because of concerns about performance or the size of their machines. This is usually misguided, because soft capping coupled with capacity management can result in significant cost savings for many sites. As of z/OS 1.9 you can set a Group Capacity Limit, which sets a capacity limit not just for a single LPAR, but for a group of LPARs. This can minimize the impact of capping, but may not help much to minimize your cost.
 
Tuning to Reduce DB2 Costs
So the monthly software bill for DB2 for z/OS is based on the peak R4H average (or DC) MSU consumption for the LPARs on which DB2 is operational during the month. Given this, there are some steps that can be taken to reduce the monthly software cost for DB2.
First of all, it is important to understand that it is possible to impact the bill for multiple products by reducing the R4H average for a single product. VWLC (and other forms of sub-capacity pricing) charges by LPAR usage, so if the R4H average for an LPAR decreases, the bill for every product in that LPAR may go down. Consider a situation where DB2 and IMS are running in the same LPAR. If you can tune the IMS workload to reduce the R4H average, then the LPAR peak may be reduced, thereby reducing the peak not just for IMS, but for DB2 too, since it runs in the same LPAR. I used this technique at one site where I was consulting. There was a large batch VSAM system that consumed a lot of resources. VSAM CI and CA splits were happening regularly, and investigation showed that the VSAM files had not been maintained in years. Running a REPRO to reorganize the file eliminated the splits, improved performance, and reduced resource requirements. Since the jobs ran during the peak periods for the month and in the same LPAR as DB2, tuning that VSAM file reduced the cost for all products running in that LPAR, including DB2.
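The shared-LPAR effect can be shown with a toy calculation. This simplification treats each hour's total MSUs as the R4H figure directly, and all the numbers are invented; it only illustrates why tuning one workload lowers the charge basis for every product in the LPAR:

```python
# Toy illustration: the LPAR's billable peak is driven by the sum of each
# product's MSU contribution, so tuning one workload (here, the VSAM batch)
# lowers the peak that every co-resident product is billed against.

def lpar_peak(contributions_by_hour):
    """Each element is a dict of product -> MSUs for that hour."""
    return max(sum(hour.values()) for hour in contributions_by_hour)

before = [{"DB2": 100, "VSAM batch": 400}, {"DB2": 150, "VSAM batch": 100}]
after  = [{"DB2": 100, "VSAM batch": 250}, {"DB2": 150, "VSAM batch": 100}]

print(lpar_peak(before))  # 500: DB2 and the batch both pay for this peak
print(lpar_peak(after))   # 350: tuning the batch lowered everyone's bill
```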
Secondly, recall the earlier discussion of SMF 89 records. These records indicate when the product is up and running, and when it is down and inoperative. When you are not using the software, shut it down if at all possible, so that it is not billed for a peak that occurs while it is stopped. For example, consider:

·         The peak R4H average regularly occurs during the batch cycle between Midnight and 4AM

·         DB2 is not accessed between 10PM and 6AM (I know, this is a stretch, but this is an example only)

·         If you bring down DB2 before Midnight and back up after 4AM, you may be able to remove the peak from the DB2 bill.

Of course, such a scenario is not always feasible. Adopting this technique requires planning, sufficient time, and the appropriate products.
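The stop/start scenario in the bullets above can be sketched numerically. This is a simplification with invented hourly R4H figures, assuming DB2 cuts SMF 89 records so that only its active hours count:

```python
# Sketch of the shutdown-window technique: if DB2 is down during the
# Midnight-4AM batch peak, its charge basis is the peak of the remaining
# (active) hours only. All figures are hypothetical.

lpar_r4h = {0: 700, 1: 720, 2: 710, 3: 690,   # batch peak, DB2 stopped
            8: 400, 12: 450, 18: 430}          # online day, DB2 running

db2_active_hours = [8, 12, 18]

full_peak = max(lpar_r4h.values())
db2_peak = max(lpar_r4h[h] for h in db2_active_hours)

print(full_peak)  # 720: what DB2 would pay if it ran around the clock
print(db2_peak)   # 450: the DB2 charge basis once the batch peak is avoided
```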
Additionally, intelligent DB2 tuning can be helpful to reduce the R4H average. Understand where your monthly peaks are and tune the workload occurring during those periods. You can use traditional DB2 tuning techniques like indexing, SQL tweaking, multi-row FETCH, buffer pool tuning, and so on.
 
Summary
Of course, it can be complicated to set your defined capacity appropriately, especially when you get into setting it across multiple LPARs. There are tools on the market to automate the balancing of your defined capacity setting and thereby manage to your R4H average. The general idea behind such tools is to dynamically modify the defined capacity for each LPAR based on usage. The net result is that you manage to a global defined capacity across the CPC, while increasing and decreasing the defined capacity on individual LPARs. If you are soft capping your systems but are not seeing the cost-savings benefits you anticipated, such a tool can pay for itself rather quickly.
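A hypothetical sketch of what such a balancing tool does: hold a global defined capacity fixed for the CPC and redistribute it across LPARs in proportion to their current R4H usage. Real products use far more sophisticated policies; this only shows the idea, and the LPAR names and MSU figures are invented:

```python
# Illustrative only: redistribute a fixed global DC across LPARs
# proportionally to each LPAR's current R4H usage, so busy LPARs get
# headroom while the CPC-wide cap (and cost) stays constant.

def rebalance(global_dc, r4h_by_lpar):
    total = sum(r4h_by_lpar.values())
    return {lpar: round(global_dc * usage / total)
            for lpar, usage in r4h_by_lpar.items()}

# 1000 MSUs of global DC, split by current demand:
print(rebalance(1000, {"PROD": 600, "TEST": 200, "DEV": 200}))
# {'PROD': 600, 'TEST': 200, 'DEV': 200}
```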
And it always makes sense to keep an eye on your monthly peaks by reviewing your SCRT reports. After all, how can you reduce costs when you’re not even tracking the things that can increase them?