By all measures, the latest chips from IBM are technological marvels, and the performance numbers back up the claims. As posted on the IBM website, Power9-based systems have the processing power to handle more than 200 quadrillion calculations per second. That’s 200,000,000,000,000,000. With blazing performance gains over its predecessors, users will see a measurable improvement in even their most challenging applications. Customers demand and expect IBM to consistently deliver extremely high-quality hardware and operating systems, and IBM delivers, each and every time. But what about the in-house applications? Surely the TCO is not in the hardware alone.
In my mind, questions abound.
First, what good is running a world-class, enterprise-strength system when your code base is more than 20 years old? Please don’t drive the Ferrari when the road is full of potholes and the traffic is at a crawl.
It’s true: faster systems run faster, handing back a windfall of idle time. But what to do with this windfall? How exactly does one harness all this extra horsepower? Unlike an actual horse, which requires rest, computer chips will happily and endlessly continue crunching numbers. Surely there must be a way to benefit.
Extra processing power lets you perform deeper analysis of your data. By that I mean taking advantage of advanced data relationships to build more meaningful queries. Indeed, while extracting every last ounce of processing power from new technology is a formidable goal, it is one we should strive to achieve. It is simply not enough to have amazing technology and let it sit idle.
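To make “more meaningful queries” concrete, here is a minimal sketch in Python using IBM’s ibm_db driver. The sales table, its columns, and the connection details are all hypothetical, and the window-function frame assumes a reasonably current Db2 for i (7.3 or later); the point is the shape of the query, not the specifics. Where yesterday’s nightly job stopped at a flat total per customer, spare horsepower lets the same data answer a richer question: how each month compares to that customer’s recent trend.

```python
import ibm_db

# Placeholder connection string; substitute your own system and credentials.
conn = ibm_db.connect(
    "DATABASE=*LOCAL;HOSTNAME=myibmi.example.com;PORT=446;"
    "PROTOCOL=TCPIP;UID=myuser;PWD=mypassword;",
    "", "")

# A flat aggregate answers "how much did each customer spend?"
# The window function below also answers "how does each month compare
# to the customer's rolling six-month average?", a deeper question the
# same data can answer when there is CPU to spare.
sql = """
SELECT customer_id, sale_year, sale_month, month_total,
       AVG(month_total) OVER (
           PARTITION BY customer_id
           ORDER BY sale_year, sale_month
           ROWS BETWEEN 5 PRECEDING AND CURRENT ROW
       ) AS rolling_avg
FROM (SELECT customer_id,
             YEAR(sale_date)  AS sale_year,
             MONTH(sale_date) AS sale_month,
             SUM(amount)      AS month_total
      FROM sales
      GROUP BY customer_id, YEAR(sale_date), MONTH(sale_date)
     ) AS monthly
ORDER BY customer_id, sale_year, sale_month
"""

stmt = ibm_db.exec_immediate(conn, sql)
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row["CUSTOMER_ID"], row["SALE_YEAR"], row["SALE_MONTH"],
          row["MONTH_TOTAL"], row["ROLLING_AVG"])
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)
```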
Second, while it is always a good thing when jobs that previously took hours to complete are cut to minutes, and minutes are crushed to seconds, a hardware upgrade alone is not enough without attention to the code it runs. Simply put, does a hardware upgrade make up for poor program design? A big, fat, emphatic NO! The short sketch after the reasons below shows why.
Why? Because program execution speed is only one metric of system integrity. Other metrics abound as well: How pliable is the code? Is it outdated? Does it contain blocks of dead code? Is the program stable, or constantly being modified? Even if it’s stable today, might there be changes to it in the future? And have major blocks of the program been replaced with external modules or service programs?
Why? Because those who maintain code are not inexpensive resources.
Why? Because business requirements are fast-moving, and new functions and entire applications need to be delivered quickly.
Why? Because most programs are maintained in a collaborative environment. Only by having standards in place will the code base be manageable.
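Now that promised sketch of why hardware alone cannot rescue a poor design. It is an illustration, not a benchmark, and Python is used purely for brevity; the sizes are arbitrary. Two routines answer the same question, “which values appear more than once?”, but one compares every pair while the other makes a single pass:

```python
import random
import time

def find_duplicates_quadratic(values):
    """Compare every pair of items: O(n^2). Common in old, 'working' code."""
    dupes = set()
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                dupes.add(values[i])
    return dupes

def find_duplicates_linear(values):
    """Track what has already been seen: O(n). Same answer, better design."""
    seen, dupes = set(), set()
    for v in values:
        if v in seen:
            dupes.add(v)
        else:
            seen.add(v)
    return dupes

# Illustrative data; sizes chosen so the difference is obvious.
data = [random.randrange(1_000) for _ in range(5_000)]

for fn in (find_duplicates_quadratic, find_duplicates_linear):
    start = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f} seconds")

# Doubling the input roughly quadruples the quadratic version's run time,
# so one more doubling of the data erases a processor four times faster.
# The linear version grows with the data itself, a design win that no
# hardware upgrade can match.
```

Buy a machine twice as fast and the quadratic routine is still quadratic; fix the design, and even the old hardware would have felt new.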
An interdependence exists between the new chips and the software above them. The new hardware enables the new operating system releases and technology refreshes, and you need those releases to take full advantage of the hardware. Make a commitment to read the “What’s New in This Release?” documentation. Learn what the new chips are capable of. You’ll be amazed. If this is not your responsibility, see to it that those who hold the task are held accountable for staying current.
At the end of the day, if you are running tired old mainline applications on shiny new hardware, you are keeping your systems bored most of the time. And who amongst us wants that?