Mainframe Challenges/Opportunities

In a previous article I looked at the way that mainframe solutions continue to be relevant for a host of industries that need to process vast amounts of transactions and data in a safe, reliable and efficient way. This article follows on to review some of the real and perceived issues with mainframe systems, and Cobol applications in particular, and how they can be managed. We’ll see that one of the potential issues is a lack of understanding amongst some decision makers of how mainframe applications have evolved over the years, an understanding that is needed to evaluate their potential fully. To provide that context we’ll look at some examples of this evolution.

Presentation Layer

Most discussions regarding the demise of the mainframe will include, and probably focus on, the presentation layer: the way in which end users communicate with the mainframe itself. In the early days of the mainframe the 3270 screen, often referred to as the ‘green screen’, was introduced as a terminal for mainframe access. To keep things as simple and efficient as possible these screens transferred as little data as they could and communicated with the mainframe as infrequently as they could. Most models would display 80 characters per row and would support between 12 and 43 rows (24 and 32 rows being the most common). The mainframe would push the display out to the screen and effectively leave it alone until a suitable attention key was pressed, at which point it would pull back any entered values.
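As a rough illustration of this push and wait pattern, the Cobol behind a 3270 screen under CICS typically looks something like the sketch below. The map, mapset and transaction names are purely hypothetical and the detail varies from site to site, but the shape is familiar to most mainframe developers.

       IDENTIFICATION DIVISION.
       PROGRAM-ID. POLSCRN.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * POLSET is the generated symbolic map for the 3270 screen
       COPY POLSET.
       01  WS-COMMAREA                PIC X(200).
       PROCEDURE DIVISION.
       MAIN-PROCESS.
      *    EIBCALEN is zero the first time the transaction runs
           IF EIBCALEN = 0
      *        Push the screen out and hand control back to CICS
               EXEC CICS SEND MAP('POLMAP') MAPSET('POLSET') ERASE
               END-EXEC
               EXEC CICS RETURN TRANSID('POL1') COMMAREA(WS-COMMAREA)
               END-EXEC
           ELSE
      *        An attention key was pressed: pull the values back
               EXEC CICS RECEIVE MAP('POLMAP') MAPSET('POLSET')
               END-EXEC
      *        ... validate the fields, apply the business logic and
      *        send out the next screen ...
               EXEC CICS RETURN END-EXEC
           END-IF.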

The 3270 screens worked well for simple applications with experienced users but, with the advent of the PC, users began to demand more sophisticated front-ends. IBM did introduce the 3270 emulator, which allowed a PC to connect to a mainframe, but the display remained the same, with a static size and shape. As applications became more complex the 3270 screens often became busier and less intuitive, which meant that less experienced users needed on-line help to fill in values. For many applications this was achieved by the user entering a question mark in a field and hitting enter, at which point they’d be taken to another screen that explained the entry and potentially listed all valid values. This was slow for the user but also meant more work for the mainframe, which now had to send an extra pair of screens for every field that needed help. A further issue was that the input data could only be validated once it had been returned to the mainframe, at which point an error might be sent back to the screen, creating further delays and overheads.

Simple GUIs

The answer to these issues initially came in the form of GUIs, often Java panels that were built to match the inputs and outputs of the 3270 screens but in a much more user-friendly way. There were no size restrictions, so fields could be more clearly labelled, and help was available locally with no need to return to the mainframe. Pull downs, in particular, were useful, as they were able to display all available values for a field and reduce the potential for an invalid value being selected. More complex validation could still be applied by the mainframe but the front-end logic reduced the workload on the mainframe and improved response times for less experienced users.

The issue here was how the mainframe could talk to the GUI sitting on a user’s PC. In most IBM mainframe Cobol applications there would be a program associated with each 3270 screen and this would contain commands to deal with the online transaction processing system (known as CICS on IBM). These commands would push the screen out to the user and receive it back when the user hit a relevant key. The Cobol code would then validate the returned values and handle any errors. If the data was valid the program would apply any relevant business functionality and then continue on to the next screen. When replacing the 3270 screen with a GUI much of the CICS logic and the Cobol validation logic was no longer needed, as it was no longer relevant. The task then was to convert the existing CICS program into a subroutine that could receive all the data from the GUI in the form of linkage and then apply the remaining functionality that was still required.
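As a sketch of what such a converted subroutine might look like, with purely hypothetical program and field names, the screen handling disappears and the data simply arrives through the LINKAGE SECTION, leaving only the processing that is still required:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. POLSUB1.
       DATA DIVISION.
       LINKAGE SECTION.
      * Everything captured by the GUI arrives in one parameter area
       01  LK-POLICY-INPUT.
           05  LK-VEHICLE-MAKE        PIC X(20).
           05  LK-VEHICLE-YEAR        PIC 9(4).
           05  LK-COVER-CODE          PIC X(3).
       01  LK-POLICY-OUTPUT.
           05  LK-PREMIUM             PIC S9(7)V99 COMP-3.
           05  LK-RETURN-CODE         PIC 9(2).
       PROCEDURE DIVISION USING LK-POLICY-INPUT LK-POLICY-OUTPUT.
       MAIN-PROCESS.
      *    Field-level validation now happens in the GUI, so only the
      *    cross-field checks and the business processing remain here
           PERFORM VALIDATE-POLICY
           IF LK-RETURN-CODE = ZERO
               PERFORM CALCULATE-PREMIUM
           END-IF
           GOBACK.
       VALIDATE-POLICY.
           MOVE ZERO TO LK-RETURN-CODE.
       CALCULATE-PREMIUM.
      *    The shared rating routine already used by batch and on-line
           CALL 'RATECALC' USING LK-POLICY-INPUT LK-POLICY-OUTPUT.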

In many cases this conversion was understandably viewed as an issue with the technical architecture that could be resolved by manually or automatically splitting up the existing CICS programs, often into three new components. The original CICS program would be reduced to the logic required only by the 3270 (assuming 3270 continued to be supported), a new subroutine would receive the data from the GUI and a further new subroutine would apply the logic that was common to both. When we looked at the examples in the previous article we saw that much of the business logic was already isolated into subroutines so that it could be called both in batch and on-line. This helped tremendously, as these subroutines could be called from the 3270 or the GUI without further changes.
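Continuing the hypothetical sketches above, after the split the original screen program ends up doing little more than the 3270 handling, with the shared logic sitting in a common subroutine that the new GUI-facing routine calls with exactly the same linkage:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. POLSCRN.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       COPY POLSET.
       01  WS-POLICY-DATA             PIC X(200).
       01  WS-RESULTS                 PIC X(100).
       PROCEDURE DIVISION.
       MAIN-PROCESS.
      *    Reduced to pure 3270 handling: receive the map, pass it on
           EXEC CICS RECEIVE MAP('POLMAP') MAPSET('POLSET')
           END-EXEC
           MOVE POLMAPI TO WS-POLICY-DATA
      *    POLCOMM holds the logic common to the 3270 and GUI paths
           CALL 'POLCOMM' USING WS-POLICY-DATA WS-RESULTS
           EXEC CICS SEND MAP('POLMAP') MAPSET('POLSET')
           END-EXEC
           EXEC CICS RETURN END-EXEC.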

Flexible Front-Ends

This change of technical architecture often proved to be a time consuming and costly exercise and for some organisations it turned out to be based on a false premise. The reality was that the front-end screens were becoming more and more demanding and placed increasing pressure on the underlying mainframe code. In addition to being much more flexible than 3270 screens, the new front-ends were also more dynamic, and several 3270 screens could often be condensed into a single panel or web page. Organisations would now often try to create a further application layer that allowed multiple 3270 screen subroutines to be linked together for a single on-line transaction. On top of this, users wanted to navigate screens in a much more dynamic manner. 3270 screens tended to be very linear: you’d complete one screen and then move on to the next in a predefined sequence. The new front-ends, however, supported tabs that allowed users to jump from one area of data input to another. This proved very difficult to manage simply by stringing together subroutines whose structure had been dictated by the restricted format of 3270 screens.

This was the point at which many organisations realised that, after already expending large amounts on changes to the technical architecture, they now had to start making changes to the actual application architecture, which was potentially an even bigger task. Understandably, some organisations became disheartened and switched from mainframe solutions to server based solutions that often offered less functionality but eradicated many of the technical issues associated with the new front-ends.

For those who remained on the mainframe there were a variety of techniques proposed for opening up existing mainframe Cobol code so that it could communicate with the new front-ends as required. These often talked about ‘wrapping’ existing code or creating services. One version I came across insisted that all Cobol subroutines had to have all linkage (input/output data) received in display format, i.e. how it would appear on the screen. The rationale was that the Cobol subroutine now existed in the wider world, rather than just on the mainframe, and therefore needed to communicate in a common format. This actually makes sense at a technical level, not least because the IBM mainframe is EBCDIC based and servers tend to be ASCII, so any movement of data between the two requires conversion and this can only be done consistently if the data is in display format. The downside was that every subroutine required an additional associated routine that converted the display data to processing format on the way in and converted it from processing back to display on the way out (a rough sketch of this kind of wrapper appears below). In reality most subroutines would never be called from the new front-end, so the extra work was completely unnecessary. In this case an academically correct solution was effectively commercially unviable.

In another scenario, existing Cobol subroutines were being modified to create ‘services’ that had a new structure, but the approach dictated that ‘services’ should only run on-line, so the old code was maintained in batch. This meant that two versions had to be maintained simultaneously. Some implementations of ‘services’ focused on creating greater levels of granularity, i.e. splitting code into lots of smaller subroutines, even though the front-end would never have any reason to call these new services directly, and this often proved to be a very significant development overhead. This shouldn’t be seen as criticism though. Cobol developers at the time were faced with the unenviable task of interfacing with a front-end that they knew very little about and which, in many cases, proved to be a constantly moving target.
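To make the display-format point more concrete, here is a minimal sketch of the kind of conversion wrapper that approach demanded, with invented names throughout and RATECALC standing in for whatever processing-format routine was being wrapped:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. RATEDISP.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
      * Processing-format area used by the unchanged rating routine
       01  WS-PROC-AREA.
           05  WS-SUM-INSURED         PIC S9(9)V99 COMP-3.
           05  WS-PREMIUM             PIC S9(7)V99 COMP-3.
       01  WS-PREMIUM-DISPLAY         PIC 9(7).99.
       LINKAGE SECTION.
      * Everything crosses the EBCDIC/ASCII boundary as characters
       01  LK-DISP-AREA.
           05  LK-SUM-INSURED         PIC X(12).
           05  LK-PREMIUM             PIC X(10).
       PROCEDURE DIVISION USING LK-DISP-AREA.
       MAIN-PROCESS.
      *    Display to processing format on the way in
           MOVE FUNCTION NUMVAL(LK-SUM-INSURED) TO WS-SUM-INSURED
      *    Call the unchanged processing-format subroutine
           CALL 'RATECALC' USING WS-PROC-AREA
      *    Processing format back to display format on the way out
           MOVE WS-PREMIUM TO WS-PREMIUM-DISPLAY
           MOVE WS-PREMIUM-DISPLAY TO LK-PREMIUM
           GOBACK.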

At this stage it’s worth further clarifying the difference in the level of navigational flexibility normally offered by 3270 solutions and their equivalent GUI options. Using the previous auto insurance example, the 3270 would likely have a screen that captured vehicle details, followed by a screen of cover information and then a screen for each driver, looping around for any extra vehicles. Users are pushed through a fixed sequence, so when any calculation is applied the system knows that all the required information must have been entered. On the equivalent GUI a user could click on a variety of tabs at any point to switch to a different risk or driver. Changes on any of these tabs could significantly impact other parts of the policy and the system needs to know which parts may need recalculating. This leads to a very different process flow and an extra set of considerations.
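As a hypothetical illustration of those extra considerations, the front-end might pass back a set of change flags alongside the policy data, and a small back-end routine (all names invented) then decides what actually needs recalculating rather than assuming a fixed sequence:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. POLRECAL.
       DATA DIVISION.
       LINKAGE SECTION.
      * Flags set by the front-end to show which tabs were amended
       01  LK-CHANGE-FLAGS.
           05  LK-VEHICLE-CHANGED     PIC X.
               88  VEHICLE-CHANGED        VALUE 'Y'.
           05  LK-COVER-CHANGED       PIC X.
               88  COVER-CHANGED          VALUE 'Y'.
           05  LK-DRIVER-CHANGED      PIC X.
               88  DRIVER-CHANGED         VALUE 'Y'.
       01  LK-POLICY-DATA             PIC X(500).
       PROCEDURE DIVISION USING LK-CHANGE-FLAGS LK-POLICY-DATA.
       MAIN-PROCESS.
      *    Unlike the fixed 3270 sequence, we cannot assume that
      *    everything has been entered, so recalculate selectively
           EVALUATE TRUE
               WHEN VEHICLE-CHANGED OR COVER-CHANGED
                   CALL 'RATECALC' USING LK-POLICY-DATA
               WHEN DRIVER-CHANGED
                   CALL 'DRVRCALC' USING LK-POLICY-DATA
               WHEN OTHER
                   CONTINUE
           END-EVALUATE
           GOBACK.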

Multi Tier Architecture

In addition to the issues regarding application architecture there were a variety of technical options for the way that the mainframe actually communicated with the new front-ends, and there was a lot of talk of n-tier, thin client, thick client, etc. Ultimately though, how the mainframe communicated with the front-end became much less of an issue than what it communicated – you could always manage to pass the information, it just wasn’t always the right information at the right time. There’s no doubt that many organisations remodelled their mainframe systems to work successfully with the new presentation layers, even if not at the first attempt, and continue to work with their modified architectures to this day.

There are, however, a significant number of organisations who have spent large amounts of time and money modifying their Cobol mainframe applications and still struggle to apply changes to their presentation layer, as they never truly managed to get the front-end and back-end logic in sync in an efficient way. They’ll also likely have no appetite for further wholesale system changes after their earlier experiences and simply push ahead as best they can with the architecture that they have. The irony is that in many of these cases a lot of re-architecting could have been avoided. In most instances where transactions can be applied both in batch and on-line, the business functionality is already isolated into subroutines that could be called equally from any type of front-end and needed no real changes. The difficulty was in extracting the business logic from the 3270 screens so that other front-ends could operate in the same way. It’s important to note that in most instances it is far quicker and easier for a Cobol developer to code and test a change using a 3270 screen than it would be to modify an alternate front-end. This means that changes tend to originate on the 3270, with the front-end then modified to match, often with the front-end trying to execute shared 3270 code.

For many mainframe Cobol sites this means that the ostensibly simple task of communicating with an alternate presentation layer has resulted in massive development efforts that provided less than optimum solutions. When new managers and executives arrive on mainframe sites they often find an environment where some changes take more time and effort than ought to be the case, normally due to the cumbersome nature of the application architecture. It’s easy for them to assume that there’s a fundamental issue with either the development shop or the technology itself and this can lead to knee-jerk reactions such as changing technology or trying to adopt Agile development processes. In the case of Agile, there’s nothing specifically wrong with Agile itself, but it won’t resolve issues with code and will potentially compound them by clouding requirements and creating more insular development teams. 

The message here is to fully understand the real issues that your applications are facing and address them directly before considering any further refinements. An excellent way to start is to audit all development resources to understand the issues that they face and gather the potential solutions that they suggest. Piecemeal changes are also far easier to implement than architectural revisions and a gradual change process will likely be more productive. This may be as simple as reviewing one of the most used screens in the application and assessing how it corresponds to the underlying 3270 panels. Looking at specific details makes it easier to determine tangible solutions than hypothesising about the overriding architecture.

Skills Availability

One of the big issues often raised regarding Cobol development is the lack of resources with suitable skills. This is often tied to the fact that Cobol is taught far less often than other languages in schools and universities. To a degree this is a misjudgement on the part of many academic institutions, who are failing to deliver skills that are continuously in demand in the marketplace. The main responsibility probably lies with the employers themselves though, as they need to be aware of their current and projected resource requirements and to have plans in place now for what they need in the future, rather than simply assuming that they can pick resources off the shelf as and when they need them.

Cobol itself is not actually that difficult to learn and it would be well within the scope of most large organisations to take on bright students and quickly educate them to the level that they require. Not only is this potentially more cost effective in the longer term but it also means that your developers are learning about your applications, business processes and coding standards from day one. When recruiters ask for 5+ years of Cobol experience they’re not necessarily doing themselves any favours, as these resources will expect a higher salary yet will still need educating on your applications, standards and business processes. 

Adding further to the pressures on skilled resources is the requirement, in many cases, to become a Jack (or Jill) of all trades. There is sometimes a lack of recognition that different resources have different skill levels and that these are often of different values. A useful exercise is to assess how much time key Cobol resources actually spend on design and build and how much time is spent on other tasks such as admin and testing. If a significant amount of time is spent on secondary tasks then it may be worth assigning those tasks to other resources, who are potentially cheaper and easier to source. Employing more junior developers and having them take on some of the simpler tasks can ease pressure on others while functioning as a learning experience.

Ultimately the goal should be to make more efficient use of the scarcer resources that are available and have in place sufficient training plans to ensure the continued availability of skilled and experienced resources as they are required. This may sound like stating the obvious but if you’re part of a mainframe development site then assess how well your organisation is meeting that goal.

Education

A real strength of many long standing mainframe applications is the quality of the application architecture. A lot of effort will potentially have been invested in the initial structure to ensure that the application ran efficiently and provided a mechanism by which new code and modifications could be readily accommodated. Over a period of time, as resources change, the knowledge of this architecture can be diluted, meaning that code changes are less likely to be as efficient as possible. There’s potential to re-invent the wheel if a newer resource doesn’t appreciate that existing functionality may address many of the requirements that they are focusing on. There’s also the potential to compromise aspects of the existing architecture by adding new logic that is counter to the existing principles, possibly by negating performance features. 

It’s very important to promote knowledge sharing at all levels. This could cover the application architecture, the business processes or simply coding standards. People don’t realise what it is that they don’t know, or what incorrect assumptions they’ve made, until they’re presented with the real facts. Initiating technical dialogues between developers can stimulate ongoing knowledge sharing in many forms. This is particularly important when adopting structures such as Scrum teams, where there will sometimes be greater vertical integration, with developers working more closely with testers and business users but having less contact with developers in other Scrum teams. In these cases it’s also useful to have designated experts who cross team boundaries in order to provide technical input and review designs.

Code Management

Many development sites will, quite rightly, insist that all code changes are fully audited. This will likely mean that code is never physically deleted but is simply logically deleted by turning it into a comment line. Lines of code will also often not be updated in situ; instead the old version is commented out and the modified version inserted, both suitably tagged in comments to indicate the who, when and why behind the change being applied. Over a period of time this can lead to code becoming cumbersome and difficult to read. In addition, with the push towards smaller, more frequent production releases it’s not unusual for code to be concurrently updated for two or three separate releases, which increases the likelihood of code being lost or incorrectly retrofitted.
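As a purely illustrative fragment, with the change number, date and initials invented, a single one-line fix in a fully audited program often ends up looking something like this:

      *----------------------------------------------------------------
      * CHG04321  2018-03-12  ANO
      * Premium calculation corrected to round rather than truncate
      *----------------------------------------------------------------
      *CHG04321   COMPUTE WS-PREMIUM = WS-BASE-RATE * WS-FACTOR
           COMPUTE WS-PREMIUM ROUNDED = WS-BASE-RATE * WS-FACTOR

Multiplied across years of releases, the live logic soon becomes hard to pick out from the audit trail, which is exactly the readability problem described above.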

There are a wide range of products available for source management on mainframes and some of these are being continuously refined but the main focus at the moment appears to be on more cosmetic changes that allow code to be managed through front-ends that are more intuitive for developers starting from a PC background. There’s great scope for these source management tools to add more value to the management of source code. In some cases this may mean locking code from being edited outside of the source management tool itself but this may be a small price to pay for increased functionality.

One useful option would be to allow changes to be defined within the source manager itself by allocating a unique code and description to each discrete change/requirement along with a proposed implementation date. When a user checks out source for changes they select the relevant change number and this is logged on the source. When the user makes any changes those changes are automatically tagged against the current change number and the current production copy of the code line that they’re changing is automatically preserved as a comment. The user would have the option of viewing the code either with the full audit history or simply with the current live production code, which makes it much easier to read. This ensures that a consistent audit is always maintained and isn’t dependent upon manual user entries, which can be time consuming to maintain. If a user needed to apply code for a different change to the same program then they would have the option to select a further change code and apply the changes under that, switching in and out as required. When the code is migrated to production this would be automatically reflected in the audit history so that it would be clear exactly when it started to run against production data.

A further advantage of this approach is that it would be easy to quickly analyse the impact of any given change, quickly listing the programs involved and the exact changes made. This also supports easier reconciliation of code when changes for two separate releases are applied concurrently and can prevent code being lost or incorrectly retrofitted. These are just simple examples to indicate that, even after all these years, there’s still scope for something as simple as a source manager to be further refined to support the needs of many modern mainframe development shops.

Automation

For many organisations automation tools are still not as extensively used as they might be, which adds unnecessary pressure to an already stretched workforce. Any manual process that is consistent and repetitive is a candidate for automation, and automation often offers greater reliability due to higher levels of consistency.

Testing

There have long been a variety of testing tools available on the mainframe that cover functional, regression and performance/stress testing. These are particularly good for black box testing, where little is known about the internal activities of the code, but some do offer a code assessment to ensure that all technical paths are tested, rather than just unique business scenarios. The use of these tools can improve overall code quality while also freeing up technical resources who are needed for other tasks. When organisations are looking at adopting some form of Agile, which will likely promote the use of automated testing tools, it’s worth considering adopting the testing tools first, within the existing Software Development Life Cycle. This allows users to become experienced with the tool within a structure that they are familiar with and also gives clarity as to how much benefit the testing tools are providing, as opposed to general Agile benefits.

Process Flow

There has been much more progress in recent years on the creation of tools to help the process flow within the development lifecycle. These tools can log initial requirements and subsequent revisions, assign work to developers, manage the movement of code within environments, monitor testing and request appropriate sign-offs. The greater the level of integration, the greater the benefit, but it’s important that such tools and processes are designed to succeed, rather than designed to fail. The rationale here is that the process should be able to identify if anything is missing or late, based on the proposed implementation date, and should trigger actions to resolve the issues. This is distinct from processes that wait until the code should be implemented and simply red flag the change so that the implementation is deferred. Using the right tools in the right way can significantly improve productivity and quality.
