Mainframes In The Modern World

There are around 10,000 mainframes currently operational around the world, and the organisations using them include 71% of Fortune 500 companies. The main attractions of mainframe systems are reliability, performance and the ability to readily process massive volumes of data and transactions. Insurance companies have a clear need for this type of system, and 9 out of 10 of the world’s largest insurers have mainframes. When we look at how some of these business applications, such as general insurance, utilise mainframe technology, it’s easy to understand why this is the case.

In 2019 there were over 200 million vehicles insured in the US alone, with an average premium in excess of $1,000. In addition to auto policies there are numerous other insurance products including home, watercraft and umbrella, so the total volume of policies is huge. State Farm alone services 83 million policies and accounts across the US.

Considering a sample scenario where an insurer has a million auto policies, it’s reasonable to assume that they have around 1.5 million vehicles and 2 million drivers. Data will need to be stored for each vehicle and driver along with the cover selected for each vehicle, the premium associated with it and the billing schedule being used to collect payment. Additional data is retrieved from third parties such as credit reference agencies and the DMV (or an alternate data provider) and this is often also stored. 
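As a very rough sketch, the kind of record this implies might look like the layout below. Every name, size and occurrence count is invented purely for illustration; a real system would hold far more, almost certainly normalised across many DB2 tables rather than held in one record.

      * A hypothetical policy record. Every name, PIC clause and
      * occurrence count is invented for illustration only.
       01  POLICY-RECORD.
           05  POL-NUMBER             PIC X(12).
           05  POL-STATE-CODE         PIC X(02).
           05  POL-EFFECTIVE-DATE     PIC 9(08).
           05  POL-EXPIRY-DATE        PIC 9(08).
           05  POL-BILL-SCHEDULE      PIC X(02).
           05  POL-TERM-PREMIUM       PIC 9(7)V99.
           05  POL-VEHICLE OCCURS 4 TIMES.
               10  VEH-VIN            PIC X(17).
               10  VEH-MODEL-YEAR     PIC 9(04).
               10  VEH-COVER-CODE     PIC X(04).
               10  VEH-DEDUCTIBLE     PIC 9(05).
           05  POL-DRIVER OCCURS 6 TIMES.
               10  DRV-LICENCE-NO     PIC X(16).
               10  DRV-DATE-OF-BIRTH  PIC 9(08).
               10  DRV-CREDIT-SCORE   PIC 9(03).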

Where the mainframe really comes into its own, though, is when we follow the lifecycle of an insurance policy through. When a policy approaches its expiration date a process has to begin to decide whether a further term of cover is to be offered (there normally have to be very good reasons not to offer) and what any offer should look like. Depending upon state regulations there are a number of things that can happen to a renewal offer, including automatically adding or removing cover, excluding drivers and changing limits. In addition to this, the ages of both vehicles and drivers are updated and the latest versions of the rates, terms and conditions are applied, in order to determine what the new premium will be.
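Purely as an illustration, the state-dependent part of that renewal preparation might be structured along the lines below. The state codes, paragraph names and rules are all invented for the sketch; real renewal logic would be far richer.

      * Illustrative renewal preparation. State codes, paragraph names
      * and rules are invented; they are not from any real system.
       2000-PREPARE-RENEWAL.
           EVALUATE WS-STATE-CODE
               WHEN 'CA'
                   PERFORM 2100-APPLY-MANDATED-COVER-RULES
               WHEN 'NY'
                   PERFORM 2200-REVIEW-DRIVER-EXCLUSIONS
               WHEN OTHER
                   PERFORM 2900-APPLY-DEFAULT-STATE-RULES
           END-EVALUATE
           PERFORM 3000-AGE-VEHICLES-AND-DRIVERS
           PERFORM 4000-RATE-WITH-CURRENT-TABLES.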

The significant feature of this renewal process is that, in the vast majority of cases, it requires no manual intervention and the policy is never viewed on any screen. If a policy is taken out at new business and then kept for 10 years with no changes requested by the insured (less likely with auto than with home), it may never appear on a screen again. If our million-policy company has a stable book and a 90% retention rate, then new business is simply replacing the 10% of policies that lapse, so around 9 renewals will be applied each day for every new business policy accepted. The vast majority of premiums are therefore being collected with no user involvement at all. This doesn’t reduce the importance of good screen design, always a contentious issue for mainframe systems, but it does highlight the need to process high volumes of automatic transactions in an efficient and secure manner.

An important point about the renewal process is that it doesn’t simply involve picking a premium out of the air when the current period of cover expires. There are many discrete activities that need to be applied, potentially starting months before the current contract expires, in order to ensure that every policy is correctly underwritten. These may involve getting updated credit scores and driver records, as well as providing proof of discount qualification such as Academic Achievement. There’ll likely then be an automatic review to determine whether the policy will be non-renewed; the state will mandate that the policy holder is informed of this well in advance, so it has to be done early. Most states will legislate when the renewal offer has to be sent to the policy holder, but some will require that it be sent earlier if the premium has increased beyond a certain percentage or the level of cover has changed. This gives a renewal cycle that can involve 10 or 20 automatic transactions for every policy term, depending upon the type of policy and the state. If 300 new policies are accepted every day then there will potentially be 30,000 policies that have some form of renewal process applied in the same timeframe.
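As a sketch of how one of those scheduled activities might select its daily workload, consider an embedded SQL cursor that picks up every policy a fixed number of days from expiry. The table, column and host-variable names, and the 75-day lead time, are all assumptions made for illustration.

      * Illustrative only: select policies expiring in 75 days whose
      * credit score has not yet been refreshed. Table, column and
      * host-variable names are assumptions, not a real schema.
           EXEC SQL
               DECLARE RENEW-CSR CURSOR FOR
               SELECT POL_NUMBER, POL_EXPIRY_DATE
                 FROM POLICY
                WHERE POL_EXPIRY_DATE = CURRENT DATE + 75 DAYS
                  AND CREDIT_REFRESHED = 'N'
                ORDER BY POL_NUMBER
           END-EXEC.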

When reference is made to mainframes running Cobol systems, what this really means is batch/online Cobol DB2 systems. Most business functionality is written in batch/online subroutines, which means that exactly the same logic that is applied by a user through a screen can also be applied automatically by the system in batch mode. Batch simply means a set of scheduled processes that are applied automatically, usually overnight. In reality though we should think of ‘batched’ processes, which are simply groups of transactions being applied together. This is an important distinction, as a batch of transactions could also be applied online, giving greater flexibility as to how system resources are used and potentially reducing the need to increase the MIPS cap.
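A minimal sketch of that shared-subroutine pattern, reusing the cursor above: the batch driver fetches each policy in turn and calls the same subroutine that an online screen would call with identical parameters. The program name RNWL0100 and all data names are hypothetical.

      * Sketch of a batch driver: fetch each policy due for renewal and
      * call the same subroutine the online screens use. The program
      * name RNWL0100 and all data names are hypothetical.
           EXEC SQL OPEN RENEW-CSR END-EXEC
           PERFORM UNTIL SQLCODE NOT = 0
               EXEC SQL
                   FETCH RENEW-CSR
                    INTO :WS-POL-NUMBER, :WS-EXPIRY-DATE
               END-EXEC
               IF SQLCODE = 0
                   CALL 'RNWL0100' USING WS-POL-NUMBER
                                         WS-RETURN-CODE
               END-IF
           END-PERFORM
           EXEC SQL CLOSE RENEW-CSR END-EXEC.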

The reference to DB2 is also significant, as many of these automatic transactions create persistent data, potentially in large volumes. An important feature of insurance systems is that they must always keep track of what cover was in force at any point in time. If the policy holder changes a deductible from $100 to $200 then the system must keep a record of the old and new values, the dates that each was applicable and the total policy impact of the change. The already large volumes of data can therefore grow many times over the years, even though the total book of business remains fairly static. The ability of DB2 to store these large volumes of data and manage them efficiently during the batch processes is a key component in the way that the mainframe is able to apply the large volumes of transactions that are required.
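That deductible change might be recorded along the lines below, closing off the old coverage row and inserting its replacement so that the cover in force on any given date can always be reconstructed. Every table and column name here is, again, an assumption for the sketch.

      * Illustrative point-in-time history: end-date the old coverage
      * row, then insert the replacement. Table and column names are
      * assumptions; 9999-12-31 marks the row currently in force.
           EXEC SQL
               UPDATE COVERAGE
                  SET EFFECTIVE_TO = :WS-CHANGE-DATE
                WHERE POL_NUMBER   = :WS-POL-NUMBER
                  AND COVER_CODE   = :WS-COVER-CODE
                  AND EFFECTIVE_TO = '9999-12-31'
           END-EXEC
           EXEC SQL
               INSERT INTO COVERAGE
                      (POL_NUMBER, COVER_CODE, DEDUCTIBLE,
                       EFFECTIVE_FROM, EFFECTIVE_TO)
               VALUES (:WS-POL-NUMBER, :WS-COVER-CODE, 200,
                       :WS-CHANGE-DATE, '9999-12-31')
           END-EXEC.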

For all tables within DB2 it’s possible to specify a clustering sequence, which controls the physical sequence in which the underlying data is stored. The controlling logic within the batch process can be structured so that it processes policy data in the same sequence as it’s physically stored. When DB2 sees that it keeps hitting consecutive pages on a table it can switch to sequential prefetch, escalating the number of pages retrieved per I/O and significantly reducing the average cost of retrieving each page. This same clustering sequence can also be used to allow multiple instances of the batch process to run concurrently, each dealing with a specific, contiguous group of policies, significantly reducing the overall elapsed time of the batch process, which is particularly important as pressure increases to extend the online day. There will be a similar story for mainframe sites using IMS, IDMS, Oracle, VSAM or many alternatives, each exploiting the strengths of that particular technology.
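A sketch of that partitioning idea: each batch instance opens the same cursor bound to its own contiguous slice of the clustering key, so every instance reads consecutive pages and the instances never collide. The range host variables and all other names are, as before, assumptions.

      * Illustrative range-partitioned cursor: each concurrent batch
      * instance is given its own WS-RANGE-LO / WS-RANGE-HI slice of
      * the clustering key POL_NUMBER. All names are assumptions.
           EXEC SQL
               DECLARE POLICY-CSR CURSOR FOR
               SELECT POL_NUMBER, POL_EXPIRY_DATE, POL_STATE_CODE
                 FROM POLICY
                WHERE POL_NUMBER BETWEEN :WS-RANGE-LO AND :WS-RANGE-HI
                ORDER BY POL_NUMBER
           END-EXEC.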

An important factor when assessing systems from 20 or 30 years ago is that the hardware was nowhere near as powerful then as it is today. MIPS and DASD were expensive and systems had to be as efficient as possible. For many of these larger applications there was a strong focus on application architecture. Functionality and data would be assessed together and a cohesive architecture would be created that governed the way that code was written, so that the application worked efficiently as a whole, exploiting the benefits of the available technology. Good designers would also create a flexible architecture that allowed for future expansion and optimisation. As data volumes have increased over the years, the cost of MIPS and DASD has fallen, so capacity hasn’t become an issue. Additionally, software changes such as those in successive DB2 releases have created further efficiencies, many of which have been realised without additional code changes.

All of this is coupled with the IBM Z series having reliability levels of 99.999% or more. A well-architected mainframe application will quickly and efficiently process very large volumes of transactions, and the only time it will fail is when there are issues with either program code or reference data; even then, logic can be applied to circumvent these issues and allow the remaining transactions to complete. The latest mainframes have the technology to address encryption and the firepower to deliver it, all packaged up in one of the most secure platforms available. There’s no doubt that, in these modern times, the mainframe isn’t going to meet the business needs of most large organisations on its own, but there’s a convincing argument that it should still be at the heart of the overall technical architecture.

Richard Robinson

Release/Programme/Project Manager With C-Level Experience

4 years ago

Cheers Mark, many of the current discussions are along the same lines as the ones we were having 20 years ago. Much of the mainframe negativity is commercially driven by those who are selling a 'solution' to whatever it is that they claim is the mainframe problem of the day, which makes perfect sense. Increasingly you find cases where the decision makers have no real history with mainframes, so are more easily swayed. Every little helps when it comes to switching the dialogue back to what's actually relevant.

Mark Sweeny

Founder & Chief Executive de Novo Solutions, Serial Entrepreneur - GBEA Tech Entrepreneur Wales 2023, Elite Business 100 Exceptional Performance 2024, Winner Wales Enterprise Awards 2024, Tussell Tech200 2024

4 years ago

Wow this takes me back; but the points are valid and well made
