Mobile App Testing - considerations to take before you begin

Mobile App Testing 

Mobile app testing is a relatively new field in IT, and it is not identical to conventional testing. The main principle still applies: good software testing principles can be applied to any software, regardless of the platform. The quality of the app is the responsibility of the professional testers. With careful planning, considered approaches and appropriate techniques, we can ensure the quality of an app is the highest it can possibly be. 

Mobile applications have changed the way we use information. From transmitting information between users in real time around the globe, to moving data to and from the cloud, these relatively new ways of using and transmitting data have shaped the way mobile app testing is performed. 

Before testing begins 

App testing can have many different goals (quality assurance, usability, performance, etc.). This is a key decision when planning the testing: not testing enough, or testing in the wrong areas, will inevitably result in missed bugs. The aim should be to first ascertain why we are going to test, not simply what we are going to test. Testing cannot guarantee that software is bug-free, but it CAN increase the app's quality. 

The mobile app tester should take on board the requirements or instructions for the testing, analyse them, and determine whether the proposed testing will be sufficient to satisfy them. During this analysis phase, the tester can and should ask questions in order to determine the goal. 

If the tester does not feel that the proposed testing will be sufficient, they should highlight this immediately and propose alternative methods to achieve the intended goal. 

Some standard questions that help establish the testing goal (captured in the sketch after this list): 

  • Which devices do you want the app tested on? 
  • What do you consider to be high-risk areas of functionality? 
  • How much of the app’s functionality has already been tested? 
  • Who is the intended audience of the app? 
  • Is this a new app, or an update to an existing app? 
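
As a rough illustration, the answers to these intake questions can be captured in a simple record so nothing is lost between the intake conversation and the plan. This is only a sketch; the type and field names are hypothetical:

```kotlin
// Hypothetical record of the intake answers; field names mirror the
// questions above and are not a standard format.
data class TestingIntake(
    val targetDevices: List<String>,    // Which devices to test on
    val highRiskAreas: List<String>,    // Functionality considered high-risk
    val alreadyTested: List<String>,    // Areas already covered
    val intendedAudience: String,       // Who will use the app
    val isUpdate: Boolean               // New app, or update to an existing one
)
```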

Because mobile apps have shorter development life cycles than web apps, the tester must consider what to communicate, how frequently and efficiently to communicate it, and to whom. A serious defect will obviously impact the launch of the app, so if the tester finds one, the expected launch date should be considered. The tester may decide to communicate the issue immediately or, armed with knowledge of the app's release schedule, it may be acceptable to include it with the other defects reported after test completion. 

  1. Ensure you have contact information for the following roles: 
  • Project manager: the person who will make decisions based on your feedback 
  • Developer: useful for questions regarding functionality or issues relating to installation of the app 
  2. Determine the best means of communication, e.g. phone, email, etc. 
  3. Make it clear when you will contact them, e.g. you will provide the results in 3 days, or you will get in touch immediately if a serious defect is found. 
  4. Always communicate so the message is clear, and make sure additional information is included, such as replication steps, screenshots, etc. (a defect-report sketch follows this list). 
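
To make point 4 concrete, here is a minimal sketch of what a defect report might carry. The structure and field names are hypothetical, not a standard format:

```kotlin
// Hypothetical defect report carrying the supporting information the
// text asks for; adapt the fields to your own tracking system.
data class DefectReport(
    val summary: String,                 // One-line, unambiguous description
    val severity: String,                // e.g. "critical", "major", "minor"
    val replicationSteps: List<String>,  // Exact steps to reproduce the issue
    val screenshots: List<String>,       // Paths or links to screenshots
    val device: String,                  // Device and firmware it was seen on
    val notifyImmediately: Boolean       // Escalate now, or batch after testing
)
```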

A client-side application can be expected to be either "thin client" or "fat client" in nature. A thin-client application will not have any customized code within the application, and its code will not be expected to make use of the features of the underlying mobile operating system. Fat-client mobile applications will typically have multiple layers of code within the application. The decision between "thin" and "fat" client typically comes down to the communication and data storage needed between client and server. 

When the "server-side" architecture is considered, there are two possible categories: "single-tier" or "multi-tier". A single-tier design locates all server-side components, such as the database server and the application server, together in a single unit. A multi-tier (aka n-tier) design spreads each component across various units within the system. 

There are numerous mobile connection types (2G, 3G, 4G, Bluetooth, …), so push and pull methods might be used in one of three connectivity states: always connected, never connected, or partially connected. 
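
As an illustration of how those three states can drive behaviour, here is a minimal Kotlin sketch mapping each state to a sync strategy; the strategy names are assumptions, not established terminology:

```kotlin
// The three connectivity states from the text.
enum class ConnectionState { ALWAYS_CONNECTED, PARTIALLY_CONNECTED, NEVER_CONNECTED }

fun chooseSyncStrategy(state: ConnectionState): String = when (state) {
    // Live connection: the server can push updates as they happen.
    ConnectionState.ALWAYS_CONNECTED -> "push"
    // Intermittent connection: queue locally, then pull and merge when online.
    ConnectionState.PARTIALLY_CONNECTED -> "queue-and-pull"
    // No connection: all data stays on the device.
    ConnectionState.NEVER_CONNECTED -> "local-only"
}
```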

Emulators are fine for the app developer to test their app before handing it over to a tester, but that is where their use should stop. It is possible to test an app using emulators, but there are many reasons why you should not rely on this method, and the benefits of testing on real devices are easy to see. One of them is that the end user will always be using a real device! Defects must be found and fixed before the customer sees them, and the "feel" of the app should also be taken into consideration. 

When using emulators you cannot effectively test swipes or double-taps, you cannot effectively test all functionality, and the stability of the app cannot be assessed with a simulator. When testing on a mobile device you can see how the app functions when switching between other apps. Other tests that should be performed include: receiving a phone call during operation of the app, receiving text messages, use of the accelerometer, etc. A key area to focus on is stability. An app should use the device's hardware effectively; if it doesn't, it will show symptoms such as slow-downs or even crashes. This is something that should be tested before release. 
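
For app-switching stability, an instrumented test on a real device can background the app and bring it back. The sketch below assumes an AndroidX UiAutomator test setup; the toggle-back-via-recents behaviour varies by device, so treat this as an outline rather than a ready-made test:

```kotlin
import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.uiautomator.UiDevice
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class InterruptionTest {

    private val device: UiDevice =
        UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())

    @Test
    fun appSurvivesBackgroundingAndReturn() {
        // Send the app to the background, as an incoming call or app switch would.
        device.pressHome()
        // Return via the recent-apps screen; on many devices a second press
        // switches back to the previous app (device-dependent behaviour).
        device.pressRecentApps()
        device.pressRecentApps()
        // At this point, assert the app restored its state without crashing,
        // e.g. by checking that a known view is still displayed (app-specific).
    }
}
```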

Most mobile apps are designed to work on a family of devices. If you have access to these devices, you need additional time for running the tests on each one. Calculating the desired set of devices to test on is not a simple task: no amount of testing, even running every test on every supported device, guarantees that all defects will be found. It all comes back to risk. Risk can be assessed through communication, by getting answers to questions that give you an idea of which devices to use, such as: Which platform? Which devices? Which firmware? 
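
One way to make that risk-based selection tangible is to score candidate devices and test on the top of the list. The weighting below is an illustrative assumption, not an established formula:

```kotlin
// Hypothetical scoring of candidate devices for risk-based selection.
data class Device(
    val platform: String,      // e.g. "Android", "iOS"
    val model: String,
    val firmware: String,
    val marketShare: Double,   // Share of your audience on this device
    val defectHistory: Int     // Defects previously seen on this device
)

// Pick the highest-risk devices the schedule allows; the 0.1 weight on
// defect history is purely illustrative.
fun selectDevices(candidates: List<Device>, budget: Int): List<Device> =
    candidates
        .sortedByDescending { it.marketShare + it.defectHistory * 0.1 }
        .take(budget)
```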

How do you start planning for testing? First of all, define your goal. Ensure that you have the hardware available with the expected firmware. Confirm that you're installing the correct version of the app. Determine your test methodology, making sure you know your focus (functionality, usability?). Determine how and where to document testing. Have a deadline for testing and communicate it to all interested parties. Planning the testing for a mobile app is mainly derived from communication with the person or persons who asked you to do the testing. The key to a successful test is ensuring you have all the information up front, prior to starting the actual testing. You need to be sure of exactly what you are testing and how you are going to test it before you begin. 

Planning should be well documented, and can consist of a single master test plan or separate test plans for each of the test levels. The test plan should be considered a living document, not something which is defined once and then left alone. Planning is influenced by the organization's test policy, the testing scope, objectives, risks, constraints, criticality, testability and the availability of testing resources. 

A test plan should contain the following sections. 

Test plan identifier – a unique, company-generated number that identifies this particular test plan, its level and the level of software it relates to. It should contain a unique "short" name for the test plan, the version date and version number, the version author with contact information, and a revision history. Keep in mind that test plans, like any other software documentation, are dynamic in nature and must be kept up to date. 
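
As a sketch, this section maps naturally onto a small record; the field names below are hypothetical:

```kotlin
// Hypothetical shape of the test plan identifier section.
data class TestPlanIdentifier(
    val shortName: String,             // Unique "short" name, e.g. "APP-ST-01"
    val level: String,                 // e.g. "master", "system", "unit"
    val version: String,               // Version number of this plan
    val versionDate: String,           // Date of this version
    val author: String,                // Version author
    val contact: String,               // Author contact information
    val revisionHistory: List<String>  // One entry per revision
)
```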

Introduction – states the purpose of the plan; it is essentially the summary part of the plan. You may include references to other plans, or just create a reference document. The introduction should reference the Project Authorization, Project Plan, Quality Assurance Plan, Configuration Management Plan, relevant policies and standards, and, for any lower-level plans, the higher-level plans they derive from. 

The introduction should also identify the scope of the plan in relation to the software project plan it relates to. Other items may include resource and budget constraints, the scope of the testing effort, how testing relates to other evaluation activities (analysis and reviews), and possibly the process to be used for change control and for communication and coordination of key activities. As this is the "executive summary", keep the information brief and to the point. 

Test items – essentially a list of what is to be tested. This can be developed from the software application's test objectives inventories as well as from other sources of documentation and information, such as requirements specifications, design specifications, users' guides, operations manuals or guides, and installation manuals or procedures. It can be controlled and defined by your local Configuration Management (CM) process, if you have one. This information includes version numbers and configuration requirements where needed, and may also include key delivery-schedule issues for critical elements. Identify any critical steps required before testing can begin, such as how to obtain the required item. This section can be oriented to the level of the test plan: for higher levels it may be organized by application or functional area; for lower levels, by program, unit, module or build. References to existing incident reports or enhancement requests should also be included, and the section can also indicate items that will be excluded from testing. 

Features to be tested – a listing of what should be tested from the user's viewpoint. This shouldn't be a technical description, and it is recommended to identify the test design specification associated with each feature or set of features. Use a simple rating scale (high, medium, low) and be able to explain why a particular level was chosen. 
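
A minimal sketch of that rating scale, with hypothetical names:

```kotlin
// The simple high/medium/low scale mentioned above.
enum class Rating { HIGH, MEDIUM, LOW }

data class FeatureToTest(
    val name: String,         // User-facing feature, not a technical item
    val rating: Rating,       // Record why this level was chosen
    val designSpecId: String  // Associated test design specification
)
```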

Features not to be tested – a listing of what is NOT to be tested, identifying WHY each feature is excluded: it is not included in this release of the software; it is a low-risk feature (never used before, or considered to be stable); or it will be released but not tested or documented as a functional part of this version of the software. 

Approach – this is the overall strategy for the test plan. It should be appropriate to the level of the plan and should be in agreement with all higher- and lower-level plans. Overall rules and processes should be identified. 

Item pass/fail criteria – identifies whether or not a test item has passed the test process. At the unit test level this could be: all test cases completed; a percentage of cases completed, containing some number of minor defects; or a code coverage tool indicating all code has been covered. At the master test plan level, it could be a specified number of plans completed with only minor defects, or individual test-case-level criteria. What number and severity of defects were located? 
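
A minimal sketch of a unit-level pass/fail check built from criteria like those above; the thresholds are illustrative assumptions:

```kotlin
// Hypothetical summary of a test run for pass/fail evaluation.
data class TestRunResult(
    val casesPlanned: Int,
    val casesCompleted: Int,
    val minorDefects: Int,
    val majorDefects: Int
)

fun itemPasses(r: TestRunResult): Boolean =
    r.casesCompleted == r.casesPlanned &&  // all test cases completed
    r.majorDefects == 0 &&                 // no major defects open
    r.minorDefects <= 5                    // example ceiling for minor defects
```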

A defect is something that may cause a failure, and it may be acceptable to leave it in the application. A failure is the result of a defect as seen by the user: the system crashes, etc. 

Suspension criteria and resumption requirements – know when to pause a series of tests, or possibly terminate a set of tests, and what the potential impacts of resuming after testing has been suspended are. If the number or type of defects reaches a point where follow-on testing has no value, it makes no sense to continue testing. 
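
A minimal sketch of such a suspension/resumption rule; the counts are illustrative assumptions, not a standard:

```kotlin
// Hypothetical snapshot of the open defects during a test cycle.
data class DefectSnapshot(val openBlocking: Int, val openMajor: Int)

// Suspend when follow-on testing adds no value because blockers dominate.
fun shouldSuspend(s: DefectSnapshot): Boolean =
    s.openBlocking > 0 || s.openMajor >= 10

// Resume only once the blocking defects have been fixed and retested.
fun mayResume(s: DefectSnapshot): Boolean = s.openBlocking == 0
```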

Test deliverables – as part of this plan we should deliver: the test plan itself, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, and test summary reports. Test data can also be considered a deliverable, as can any test tools built to aid in the testing process. One thing that is not a test deliverable is the software itself. 

All these items need to be identified in the overall project plan as deliverables (milestones) and should have the appropriate resources assigned to them in the project tracking system. This ensures that the test process has visibility within the overall project tracking process and that the test tasks to create these deliverables are started at the appropriate time. Any dependencies between these deliverables and their related software deliverables should be identified: if a predecessor document is incomplete or unstable, the test products will suffer as well. 

Test tasks – for each test deliverable, identify the tasks needed to produce it. These tasks should have corresponding tasks and milestones in the overall project tracking process. If it is a multi-phase process, or if the application is to be released in increments, there may be parts of the application that this plan does not address; these areas need to be identified to avoid confusion if defects are reported against those future functions. 

This also allows the users and testers to avoid incomplete functions, and prevents resources being wasted chasing non-defects. If it is a multi-party development process, this plan may only cover a portion of the total functions/features; this needs to be identified so that plans are developed for the other areas. 

Environmental needs – identify any special requirements for this test plan, such as: special hardware/software (emulators, devices, etc.); how test data will be provided; how much testing will be done on each component of a multi-part feature; special power requirements; specific versions of other supporting software; restricted use of the system during testing; and tools, communications, web, client/server, network (external and internal) and security needs. 

Responsibilities – name a responsible person for each aspect of the testing and the test process: setting risks; selecting features to be tested and not tested; setting the overall strategy for this level of plan; ensuring all required elements are in place for testing; providing for resolution of scheduling conflicts; required training; critical go/no-go decisions for items not covered; and delivery of each item in the test items section. 

Staffing and training needs – identify all critical training requirements and concerns. 

Schedule – should be based on realistic and validated estimates. It should also address how slippage in the project will be handled: if the users know in advance that there will be a slippage, they may be more tolerant of it in exchange for a better-tested application, and you have the advantage of discussing the possible defects in advance. All relevant milestones should be identified in relation to the development process, and all test dates should be tied directly to their related development activity dates. There are many elements to consider when estimating the effort required for testing, and it is critical that as much information as possible goes into the estimate as early as possible in order to allow for accurate test planning. 

Risks and contingencies – identify the overall risks to the project, with an emphasis on the testing process: lack of availability of personnel, hardware, software, data or tools; late delivery of software, hardware or tools; delays in training; changes to the original requirements; and slippage in the test and development schedule. Typical contingencies include: the number of tests performed will be reduced; the number of acceptable defects will be increased (these two could lower the overall quality of the product); resources will be added to the test team; the test team will work overtime (team morale could be affected); the scope of the plan may change; there may be some optimization of resources; or you could just QUIT testing (this is abrupt). 

Management is usually reluctant to accept scenarios such as those above, even though they have seen them happen in the past. The important thing to remember is that if you do nothing at all, the usual result is that testing is cut back or omitted completely; neither should be an acceptable option. 

Approvals – who can approve the process as complete and allow the project to proceed to the next level. At the master-plan level this may be all involved parties. Keep in mind who the audience is, and the levels and types of knowledge they hold: programmers are very technical but usually don't have an overall perception of the business process driving the project, while users may have strong business knowledge but very little technical skill. 

Just remind yourself 

Don't forget that even though there is not yet as much mobile testing as in other areas, it is not something we can ignore. The world is changing, and new challenges are coming with it. Never forget that, and include mobile testing in every project that has a UI deliverable to a customer. Also keep in mind that it follows the same process as all other testing areas (and keeps up with the development life cycle). 


