Software Engineering: Modern Challenges


Baron Ntambwe, Department of Computer Science, University of East London, London E16 2RD, United Kingdom


I: Privacy, Challenges, Techniques and Methods

A. The ethics of working on the engineering of software systems that help counter terrorism by tracking large numbers of citizens and their actions.

Terrorism is one of the most recurring scourges in the world, causing social unrest and leading to the loss of innocent human lives. For instance, the global death toll stemming from terrorist activities rose to 44,000 in 2014, compared to 8,000 in 2010, heightening public concern: more than half of the population in many parts of the world worry about unexpectedly becoming victims of terrorist attacks (Ritchie et al., 2022). Furthermore, the corrosive effect of systematic terrorism on economic growth and social well-being in many countries around the globe cannot go unnoticed. This is mostly because private investors, the catalysts of economic growth, are not inclined to invest in countries destabilized by terrorism, as the risk to the return on investment is obviously high.

Thus, many governments that have the best interests of their countries and citizens at heart would be inclined to devise strategies aimed at mitigating the nefarious effects of terrorism, if not eradicating it once and for all. Such measures may include developing a computer system for tracking the population and their respective actions. Although commendable, such a project may undermine the very essence of the laws and policies protecting the privacy of the population. In the following paragraphs, I will discuss the ethical implications of such projects in light of the five pillars of international privacy law, namely “Notice”, “Choice and Consent”, “Access and Participation”, “Integrity and Security” and “Enforcement” (Cobb, 2004), in order to gain ample understanding of the ethics involved when working on the development of this type of system.

To start, let's look at the issue of privacy with regard to the principle of “Notice”. When building a system aimed at tracking the citizens of a country, individuals should be given a formal and comprehensive notice by the organization implementing the project, specifying the intent of the system, how and when personal information will be collected, and how it will be used. Without such an explicit and clear notice, targeted individuals cannot make an informed decision. Furthermore, the subsequent principles (choice and consent, access and participation, and enforcement) are relevant only once a proper notice has been issued to an individual highlighting their rights and responsibilities with regard to the system and their personal information. As an engineer assigned to the development of this system, my main preoccupation would be to consult with the leadership team on the necessity of implementing this foundational privacy principle.

The second privacy principle the project should comply with is “Choice and Consent”. The entity in charge of the system should provide people with choices and obtain their consent regarding the collection, usage, management and storage of every piece of personal information captured by the system (Thales Group, 2021). When complying with this principle, the manner in which it is carried out matters greatly for the outcome; honesty and sensitivity, for example, are essential qualities in this context (Cobb, 2004). Techniques and ways of implementing this principle are numerous. However, ensuring that it is usable and accessible to every stratum of society, including people with disabilities, is one aspect that should not be omitted.

The third principle to be taken into consideration has to do with “Access and Participation”. Data subjects should be able to know whether an organization maintains data about them and, if so, to request access to that specific set of personal information (International Security, Trust and Privacy Alliance, 2007). Making sure the correct security tools and processes are deployed so that individuals' information can be accessed and used only by authorized entities is a vital requirement on which I should insist, as one of the key protagonists in this project. Nevertheless, the aim of this principle should be twofold: it should provide the targeted population with the capability to access any stored information about them, and also to assess the data's correctness and completeness. Appropriate mechanisms should be put in place to ensure this fundamental aspect of privacy.

This leads us to the fourth aspect, which focuses on “Integrity and Security”. This principle advocates that, after collecting and storing personal information, the organization should provide mechanisms to ensure that the data is well protected and accurate. The security aspect of this principle entails both managerial and technical means of safeguarding data against loss and unauthorized access, destruction, use or disclosure (Cobb, 2004). For high efficiency, these measures should be implemented at both the infrastructure and the process level. For instance, the Bell-LaPadula (BLP) model could be a suitable approach for enforcing access control over personal data.
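To make the BLP idea concrete, below is a minimal sketch of its two mandatory access control rules: “no read up” (the simple security property) and “no write down” (the star property). The security levels, the subject and the class names are hypothetical illustrations, not part of any cited specification.

```java
// Minimal sketch of Bell-LaPadula mandatory access control:
// "no read up" (simple security property) and "no write down" (star property).
// Levels and the example subject are hypothetical.
public class BlpAccessControl {

    enum Level { PUBLIC, CONFIDENTIAL, SECRET, TOP_SECRET }

    /** No read up: a subject may read an object only at or below its own level. */
    static boolean canRead(Level subject, Level object) {
        return subject.compareTo(object) >= 0;
    }

    /** No write down: a subject may write an object only at or above its own level. */
    static boolean canWrite(Level subject, Level object) {
        return subject.compareTo(object) <= 0;
    }

    public static void main(String[] args) {
        Level analyst = Level.SECRET;
        System.out.println(canRead(analyst, Level.CONFIDENTIAL)); // true: read down allowed
        System.out.println(canRead(analyst, Level.TOP_SECRET));   // false: no read up
        System.out.println(canWrite(analyst, Level.TOP_SECRET));  // true: write up allowed
        System.out.println(canWrite(analyst, Level.PUBLIC));      // false: no write down
    }
}
```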

Lastly, I should encourage the organization to comply with the principle of “Enforcement”. A process by which the entity responsible for the system can ensure that it adheres to the policies governing privacy should be implemented (International Security, Trust and Privacy Alliance, 2007), as these policies can vary depending on the country or region in which the system will be deployed. Ensuring that the human activity tracking system being developed, as well as the platform on which it is deployed, is tightly aligned with the regulations that enforce compliance is the very essence of the principle of “Enforcement” and the culmination of all the preceding principles.

These five principles provide us with clear insight into what needs to be done if the construction of a system for tracking citizens and their actions is to be compliant with the rules of privacy, thereby eliminating ethical concerns. However, respecting all privacy principles by the book, although it reinforces democracy and freedom of choice for citizens, can potentially undermine the main purpose of such a system, as it forges a loophole: citizens involved in terrorist activities may simply choose not to give their consent to the tracking of their actions. This constitutes one of the main flaws of such regulations, since the organization does not have much power when confronted with such a refusal; it is indeed legitimate for individuals not to consent.

In conclusion, we all undoubtedly recognize the need for building systems that can help defeat terrorist activity worldwide. However, as software engineering professionals, whenever we work on such a project, besides our ethical obligation to promote human well-being and prevent harm, we have the moral duty of contributing to the construction of such systems only if they respect people's privacy (Association for Computing Machinery, 2018). Therefore, we should always advocate for the observance of our cherished ethical core values before getting involved in any software project, or before making further progress if we are already part of one. This may include promoting ethical practices within the organization through consultations, intentional workshops or supervised training of all stakeholders on the project.

B. Key challenges facing software engineering

As one of the youngest engineering disciplines, software engineering still faces many unresolved challenges that scholars and industry experts are learning how to deal with. Some of the key ones that have been identified are the “legacy challenge”, the “heterogeneity challenge” and the “delivery challenge” (Vo & Vo, 2012). Below, I elaborate on each of them.

1. The legacy challenge

In today's software industry, not all software systems in use are newly or recently created. Most of them are systems developed many years ago on which businesses rely to execute their mission-critical tasks. As the needs of the business evolve, the software system needs to be adapted as well, through frequent updates and maintenance, in order to remain relevant to the organization. As these continuous changes accumulate, the complexity of such a legacy system, already exhibiting structural degradation, amplifies due to software entropy if not dealt with efficiently (Mannaert et al., 2012). This is the very essence of the “legacy challenge”. Besides changes in business needs, other factors that may trigger this challenge are technology stack revamping and architectural optimizations.

With regard to technology stack revamping, a large number of software systems were built using very old technologies such as VB 6.0, C, Pascal or InterBase for the backend, or JavaScript ES1, Ajax and jQuery for the frontend. Using such old technologies limits the software engineering team's ability to apply modern engineering good practices and to write cleaner, more maintainable code (not spaghetti code). It also deprives the team of the benefit of leveraging modern frameworks that take care of boilerplate code and increase developer productivity by letting developers focus on writing only high-business-value code. Some of these newer technologies and frameworks are Java Spring Boot, .NET Core, Entity Framework, Liquibase, Hibernate, Angular and ReactJS, just to name a few. In many cases, when adopting such new tech stacks, the development team may decide to rewrite the entire system in order to limit an ever-expanding software entropy phenomenon. Such a move is also salutary for recruitment and retention, since it is easier to find developers experienced with new technologies than with old ones, and easier to retain developers when they are offered the opportunity to stay up to date by learning new technologies.

The second aspect has to do with architectural optimizations. As many organizations grow, the size of their customer base can grow significantly. However, many legacy systems are not designed for scalability and high availability. To achieve these two important non-functional requirements, the engineering team may be called upon to refactor the existing system by applying modern architectural approaches and design patterns such as microservices, serverless, cloud-native, MVC and the SOLID principles. Implementing such changes on a legacy system is usually not a trivial endeavor, as business continuity must never be interrupted. Therefore, many engineering teams prefer to tackle this progressively, identifying lower-risk modules and refactoring them before moving to the more complex ones, as sketched below. To ensure the success of this progressive refactoring, integration and regression tests must be designed meticulously.
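As an illustration of this progressive approach, here is a minimal sketch of a strangler-style facade, assuming a hypothetical billing module: each call is routed either to the legacy implementation or to its refactored replacement, so modules can be migrated one at a time without interrupting the business. All class names and the migration flag are invented for the example.

```java
// Hedged sketch of progressive refactoring: a facade routes each call either to
// the legacy module or to its refactored replacement, one customer segment at a
// time, so the system keeps serving traffic throughout the migration.
import java.util.Set;

interface BillingService { String invoice(String customerId); }

class LegacyBilling implements BillingService {
    public String invoice(String customerId) { return "legacy-invoice:" + customerId; }
}

class RefactoredBilling implements BillingService {
    public String invoice(String customerId) { return "v2-invoice:" + customerId; }
}

class BillingFacade implements BillingService {
    private final BillingService legacy = new LegacyBilling();
    private final BillingService modern = new RefactoredBilling();
    // Low-risk customers are migrated first; in a real system this set would
    // grow (e.g. via configuration) as confidence in the new module grows.
    private final Set<String> migrated = Set.of("cust-42");

    public String invoice(String customerId) {
        return (migrated.contains(customerId) ? modern : legacy).invoice(customerId);
    }
}

public class StranglerDemo {
    public static void main(String[] args) {
        BillingService billing = new BillingFacade();
        System.out.println(billing.invoice("cust-42")); // routed to the refactored module
        System.out.println(billing.invoice("cust-7"));  // still served by legacy code
    }
}
```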

Software entropy being one of the major issues when dealing with a legacy system, it is important to understand its main root cause so that precautions can be taken to reduce its ripple effect. The “psychology, or culture, at work” on a legacy project is one of the major causes (Thomas & Hunt, 2019, pp. 6-8). Even when the engineering team is small, the project psychology can be a critical aspect that requires attention. Even with a sound engineering process and the best developers, a legacy project can still experience ruin and decay over the course of its lifetime due to the “broken window” phenomenon: bad designs, wrong decisions, or poor code. One broken window left unrepaired for any considerable length of time infuses in the development team a sense of abandonment, and research by psychologists shows that hopelessness can be contagious. Therefore, as a rule of thumb for mitigating the spread of software entropy, engineers should never leave “broken windows” unrepaired; they should be fixed as soon as they are discovered.

The bottom line is that, whether the legacy challenge at hand was triggered by changes to the software system or by the psychology of the team, the challenge remains that of maintaining and updating the system currently in production in a way that is cost-effective and that keeps fundamental business services delivered without interruption, in accordance with the Service Level Agreement.

2. The heterogeneity challenge

In this age of information, technology has transcended the business realm and become an integral part of our social and everyday life. Software systems are therefore expected to function as vast distributed systems spanning the world and encompassing many kinds of computers and devices (servers, mobiles, sensors, etc.) in order to deliver a variety of services to the worldwide population. The capability to elaborate the principles, techniques and tools required to architect and implement such complex software systems so that they can deal with this heterogeneous global environment reliably and flexibly is what the “heterogeneity challenge” is all about.

Although this challenge remains a moving target, many scholars and industry experts have created conceptual frameworks aimed at addressing it. Amidst such frameworks, we find recurring design principles such as “Interoperability”, “Availability”, “Modifiability”, “Performance”, “Security”, “Testability” and “Usability” (Bass et al., 2013, pp. 103-115). Although all these tenets are relevant, I would like to dive deeper into the most outstanding one in the context of the heterogeneity challenge: “Interoperability”.

Interoperability is a distributed systems design principle concerned with how a panoply of systems can usefully exchange meaningful information through interfaces in a given context. As the number of systems that can participate in such exchanges grows every year, tactical challenges arise that must be addressed. One of them has to do with the CAP theorem, which states that it is not possible to simultaneously achieve consistency and availability in a distributed system in the event of a network failure (partitioning); one of these attributes needs to be sacrificed as part of the tradeoff (Bass et al., 2013, pp. 103-115). In the context of the heterogeneity challenge, where interoperability is vital, the optimal choice could be to sacrifice consistency and guarantee unconditional availability coupled with eventual consistency. With this approach, users, devices and interconnected servers may occasionally access stale data while waiting for updated data to eventually become available.
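A minimal sketch of this availability-over-consistency choice, assuming a hypothetical key-value reader: during a partition, the read is served from a possibly stale local replica instead of failing. The class name and the partition flag are invented for the illustration.

```java
// Sketch of choosing availability plus eventual consistency: when the primary
// store is unreachable (a partition), the read falls back to a local replica
// that may be stale instead of failing outright.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class EventuallyConsistentReader {

    private final Map<String, String> localReplica = new ConcurrentHashMap<>();

    /** Simulates the remote primary; throws when the network is partitioned. */
    private String readFromPrimary(String key, boolean partitioned) {
        if (partitioned) throw new IllegalStateException("network partition");
        return "fresh-value-of-" + key;
    }

    /** Always answers (availability), possibly with stale data (eventual consistency). */
    public String read(String key, boolean partitioned) {
        try {
            String fresh = readFromPrimary(key, partitioned);
            localReplica.put(key, fresh); // keep the replica converging toward the primary
            return fresh;
        } catch (IllegalStateException partition) {
            return localReplica.getOrDefault(key, "unknown"); // stale but available
        }
    }

    public static void main(String[] args) {
        EventuallyConsistentReader reader = new EventuallyConsistentReader();
        System.out.println(reader.read("profile:42", false)); // fresh read, replica updated
        System.out.println(reader.read("profile:42", true));  // partition: stale replica served
    }
}
```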

Furthermore, interoperability in a heterogeneous ecosystem is reinforced by a wide range of communication interfaces and protocols optimized for reliable exchange of information. The most used are SOAP (Simple Object Access Protocol) and REST (Representational State Transfer). As SOAP is XML-based (very verbose) and relies on SOA (Service-Oriented Architecture) middleware (which can introduce strong coupling), many organizations are now moving towards REST, which happens to be technology agnostic (low coupling, as data flows directly over HTTP) and supports JSON (JavaScript Object Notation) as a less verbose data exchange format. This flexibility of REST allows the implementation of less coupled and more autonomous heterogeneous systems.
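To illustrate the REST style just described, here is a self-contained sketch of a plain HTTP GET exchanging JSON with no middleware in between, using Java's standard HttpClient (Java 11+). The endpoint URL and the response shown in the comments are hypothetical placeholders.

```java
// Hedged sketch of a REST exchange: the resource is addressed by a URI and the
// data flows directly over HTTP as JSON, with no SOA middleware involved.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestClientSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/customers/42")) // hypothetical endpoint
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode()); // e.g. 200
        System.out.println(response.body());       // e.g. {"id":42,"name":"Ada"}
    }
}
```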

As we can see, although the heterogeneity challenge is far from being eradicated, due to the dynamic and polymorphic nature of the problem, we can at least perceive glimmers of hope as new theories and tools are being invented to address it.

3. The delivery challenge

Software engineering being a team effort, executing and coordinating all its inherent activities in order to meet product quality requirements and deliver the final product can be very time-consuming. This impediment can turn out to be a major setback for business expansion and profitability, as it makes it hard for the organization to scale and respond in a timely fashion to rapid changes in the external world. The “delivery challenge” is therefore related to the ability of an engineering team to optimize the Software Development Life Cycle by shortening the delivery time for large and complex systems without compromising system quality. To address this issue, many organizations around the world are leveraging DevOps (Development and Operations) to increase the velocity of the Software Development Life Cycle by implementing CI-CD (Continuous Integration - Continuous Delivery) pipelines.

The CI-CD process includes major steps such as building, packaging, testing, validating, verifying infrastructure, and deploying into all necessary environments such as Dev, QA, Staging and Production. Although this innovation has proven to be an efficient solution, it introduces overhead and the need to increase the headcount of the engineering team by hiring DevOps engineers who will be in charge of building and maintaining the CI-CD pipelines.
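As a rough illustration of how such a pipeline sequences its stages, below is a sketch that runs the steps named above in order and fails fast on the first broken stage. Real pipelines are declared in a CI tool's own configuration format; the stage stubs here are hypothetical stand-ins.

```java
// Sketch of CI-CD stage sequencing: stages run in a fixed order and the
// pipeline stops at the first failure, so nothing broken is ever promoted.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

public class PipelineSketch {
    public static void main(String[] args) {
        Map<String, BooleanSupplier> stages = new LinkedHashMap<>();
        stages.put("build",                 () -> true); // compile sources
        stages.put("package",               () -> true); // produce deployable artifact
        stages.put("test",                  () -> true); // unit and integration tests
        stages.put("validate",              () -> true); // quality gates, linting
        stages.put("verify-infrastructure", () -> true); // environment health checks
        stages.put("deploy",                () -> true); // roll out to Dev/QA/Staging/Prod

        for (Map.Entry<String, BooleanSupplier> stage : stages.entrySet()) {
            System.out.println("running stage: " + stage.getKey());
            if (!stage.getValue().getAsBoolean()) {
                System.out.println("pipeline failed at: " + stage.getKey());
                return; // fail fast: later stages never run on a broken build
            }
        }
        System.out.println("pipeline succeeded: artifact promoted to production");
    }
}
```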

Nevertheless, DevOps and CI-CD keep proving their efficiency: all fundamental activities leading to the delivery of software systems (building, packaging, testing, validating, verifying infrastructure, and deploying into all necessary environments) are fully automated, cutting down operational and delivery time and considerably reducing the probability of introducing human errors.

C. Best software engineering techniques and methods

As a response to the software crisis, software engineering techniques and methods were invented to offer structured tools for planning and controlling the software development life cycle in order to achieve the desired goals. The ultimate goal of the variety of these techniques and methods is to offer “customized” software development matching the requirements and needs of software development teams.

Thus, as no method or technique is suitable for every type of project, it is clear that there is no single best software engineering technique or method. Each team should be able to analyze and choose the one that best responds to the needs of the project. For example, for a team working on a project whose specifications are likely to change during the development phase, an Agile methodology would be best, as it offers the ability to respond and adjust to change. But if the project's requirements have a very low probability of changing, a Waterfall methodology would be the better fit.

Therefore, it is incumbent upon each team to get thoroughly acquainted with the characteristics of the system to be developed, so that they are well equipped to make a sound judgment call on the software engineering techniques and methods to be used.


II: Challenges, Specialized Techniques and Certification

A. Other problems and challenges that software engineering is likely to face in the 21st century

In software engineering, besides the challenges related to heterogeneity, business and social change, and trust and security, “sustainability” is one of the newly emerging problems requiring specific attention. Sustainability bestows upon us the responsibility to meet the needs of the present without compromising the ability of future generations to meet the needs of their time. The issue of sustainability can be tackled on three fronts: social, economic and environmental. As far as software engineering is concerned, I will elaborate on the environmental aspect, because the operation of large-scale software directly impacts the environment.

Environmental sustainability has to do with protecting the planet against any harm that may result from human activities, whether industrial or domestic. Underscoring the nobleness of this cause, the software engineering community has developed a code of ethics via its two large organizations in the domain, the IEEE Computer Society and the ACM, in which we find a clause stipulating that software engineers should approve a software system only if it does not diminish the quality of life or harm the environment (Vliet, 2008, pp. 24-25).

Therefore, possible solutions to this problem would include, but not be limited to, deploying software systems on application servers operating with 100% renewable and carbon-negative energy. However, meeting this requirement could be unattainable for many software companies running all their software systems on on-premises application servers, due to a lack of the necessary resources. Their best option would be to move to the cloud, because hyper-scale cloud providers have ample financial means to extensively reduce carbon emissions and to become more energy efficient than organizations running software systems with similar workloads in their own on-premises datacenters. Research conducted by Accenture and WSP has shown that moving enterprise applications from on-premises datacenters to Microsoft cloud services (Azure) can improve carbon efficiency by up to 98% and energy efficiency by up to 93% (Accenture & WSP, 2010).

To summarize, as stipulated by the software engineering code of ethics, engaging in practices that preserve the integrity of the environment in a more sustainable manner is the responsibility of every software professional. Being conscious that complying with an environmental code of conduct is still a costly exercise, opting for cloud computing as a sustainable and environmentally friendly way of delivering software systems is beneficial for many businesses. When organizations choose a large-scale cloud provider such as Azure, Amazon or Google, they are taking positive action to reduce carbon emissions; it is one of the most compelling ways for any company to contribute to environmental preservation goals.

B. Need for specialized software engineering techniques to support design and development.

Software engineering techniques have evolved considerably over the last three decades. Most of these improvements have been characterized by the need to diversify and specialize them in order to cater for a variety of application areas. Each domain of application therefore calls for a specialized software engineering technique to guide the design and development process. Although there is a multitude of application areas, two of the most important are “Enterprise Business Applications” and “Big Data Processing Pipelines”.

The first area of application that demands a specialized software engineering technique is “Enterprise Business Applications”. These applications are among the most used in the world, providing services required by society, spanning from eCommerce, eBanking and eLearning to social networks. They are also among the most complex systems in terms of features, so using a specialized technique for designing and building them is more than important. This is why the DDD (Domain-Driven Design) technique was created. DDD is well suited to Enterprise Business Applications because it focuses on tackling business complexity by designing a domain model for each bounded context (component, module or service) that reflects an understanding of the business domain. DDD concentrates mostly on a business problem and on how to rigorously organize the logic that solves it.
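A minimal DDD-flavored sketch, assuming a hypothetical “Ordering” bounded context: a value object and an aggregate root keep the business rule (“an order cannot ship before it is paid”) inside the domain model itself. All names are invented for the example.

```java
// Sketch of DDD building blocks: a value object (Money) and an aggregate root
// (Order) that enforces its own domain invariant where the business logic lives.
import java.math.BigDecimal;

final class Money { // value object from the ubiquitous language
    final BigDecimal amount;
    Money(BigDecimal amount) {
        if (amount.signum() < 0) throw new IllegalArgumentException("negative amount");
        this.amount = amount;
    }
}

class Order { // aggregate root of the hypothetical Ordering bounded context
    enum Status { PLACED, PAID, SHIPPED }
    private Status status = Status.PLACED;
    private final Money total;

    Order(Money total) { this.total = total; }

    void pay() { status = Status.PAID; }

    void ship() { // invariant: an order cannot ship before it is paid
        if (status != Status.PAID) throw new IllegalStateException("order not paid yet");
        status = Status.SHIPPED;
    }
}

public class DddSketch {
    public static void main(String[] args) {
        Order order = new Order(new Money(new BigDecimal("99.90")));
        order.pay();
        order.ship(); // succeeds only because the invariant holds
        System.out.println("order shipped");
    }
}
```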

The second domain of application that demands a specialized software engineering technique is “Big Data Processing Pipelines”. Around 2005, the amount of data users generated through Facebook, YouTube and other online services grew exponentially, giving rise to the concept of Big Data. Big data processing pipelines are data-processing applications for manipulating data sets that are too large or complex to be dealt with by a traditional system. A well-adapted technique for designing and implementing such systems is SADT (Structured Analysis and Design Technique), as it describes systems as a hierarchy of functions. Most of these functions are related to data Extraction, Transformation and Loading (ETL). Representing them in an essentially diagrammatic notation helps the engineer describe and understand the system to be built: the ETL functions of the pipeline are the building blocks, represented using entities and activities, with a variety of arrows establishing semantic relationships between them.
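To make the ETL decomposition concrete, here is a hedged sketch of the three functions as a hierarchy: extract produces raw records, transform cleanses and normalizes them, and load writes them into a sink. The in-memory source and “warehouse” are hypothetical stand-ins for real files, queues or databases.

```java
// Sketch of an ETL pipeline as a hierarchy of functions: extract -> transform -> load.
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class EtlSketch {

    static Stream<String> extract() { // stand-in for files, queues, APIs...
        return Stream.of(" alice,34 ", "BOB,41", "", "carol,29");
    }

    static Stream<String> transform(Stream<String> raw) { // cleanse and normalize
        return raw.map(String::trim)
                  .filter(line -> !line.isEmpty())
                  .map(String::toLowerCase);
    }

    static void load(Stream<String> clean, List<String> sink) { // in-memory "warehouse"
        sink.addAll(clean.collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        List<String> warehouse = new java.util.ArrayList<>();
        load(transform(extract()), warehouse);
        System.out.println(warehouse); // [alice,34, bob,41, carol,29]
    }
}
```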

These are just two examples among many others. They show how the flexibility to choose a specialized software engineering technique that responds to the nature and needs of a given application area is at the heart of success when designing a software system.

C. Need for certification for professional software engineers

Pursuing professional certification is a ubiquitous practice in almost all regulated industries worldwide, and for many reasons. For some, certification is a token of an individual's expertise and a way for industries to set quality standards for all practitioners. For professional software engineers, however, although certification has advantages, the inconveniences might outweigh them.

Some of the advantages of introducing professional certification for engineers are establishing professional credibility, recognition and credit, and structured learning. With these advantages in place, it becomes easier to control the proficiency of engineers before they join the industry, and therefore to maintain a higher standard of engineering practice.

However, engineering being a very creative discipline, requiring certification before practicing can shut out many great talents who have a lot to contribute to the field but do not qualify for certification. This constitutes a major disadvantage. If a certification criterion were instituted, holding a university degree would probably become one of the basic requirements. Yet we know many great engineers who do not possess a university degree but are excellent at what they do; such regulations would only exclude them.

To conclude, the benefits of certification are abundant. But we are living in the age of information: knowledge is available to everyone through myriad unconventional paths of education, notably self-education. The number of individuals who have acquired valuable engineering skills without following a formal university curriculum increases every year, and tapping into such a diverse pool of talent to advance engineering is to be encouraged. Therefore, setting up certification criteria that are inclusive and preserve the diversity of candidates is what will make certification useful for the engineering field.


III: Use Cases, Emergency Changes and Confidentiality

A. Use cases as a basis for understanding the requirements: Case of an ATM system.

The ATM (Automated Teller Machine) is a terminal device connected to the bank's application servers in order to provide banking services as if they were performed by a human teller within a branch, without time constraints (available 24 hours).

Precondition

The ATM system requires that the user of the ATM possess a banking card (Visa, American Express, MasterCard, etc.) with its associated PIN code.

Use-Case UML Diagram

The figure below is a graphical representation of the use-case diagram of the ATM System.

[Figure: Use-case diagram of the ATM system]

Actors

Customer: This actor represents a person with a bank-issued card and a valid PIN code.

Bank: This actor represents the financial institution associated with the customer.

Technician: This actor represents the person responsible for maintaining the Automated Teller Machine, refilling the receipt-printing paper and replenishing cash.

Use Cases

Start Session: This use case specifies the action of getting the ATM out of sleep mode so that it can request the banking card and prompt the customer for the PIN code.

Validate User: This use case describes the action of the ATM verifying the validity of the information provided by the customer.

Make Transaction: This use case is the generalization of actions such as Withdraw Cash, Transfer Funds and Balance Enquiry.

Print Receipt: This use case extends “Make Transaction”. The customer will be given an option to print a slip after the transaction.

Startup Machine: This use case describes the action where the Technician has to start up the ATM if it is down.

Shutdown Machine: This use case describes the action where the Technician has to stop the ATM for maintenance.

Refill: This use case is the generalization of actions such as Refill Cash and Refill Paper.
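To show how such a use-case model can serve as a basis for the implementation, here is a hedged sketch mapping the customer-facing use cases above onto a service interface. The method signatures and parameter types are hypothetical; only the operation names follow the diagram.

```java
// Sketch translating the customer-facing use cases into an interface that a
// development team could implement; bodies and types are illustrative only.
public interface AtmSession {
    void startSession(String cardNumber);                    // Start Session
    boolean validateUser(String cardNumber, String pin);     // Validate User
    void withdrawCash(double amount);                        // Make Transaction (specialization)
    void transferFunds(String targetAccount, double amount); // Make Transaction (specialization)
    double enquireBalance();                                 // Make Transaction (specialization)
    void printReceipt();                                     // Print Receipt (extends Make Transaction)
}
```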

B. When emergency changes have to be made to systems, the system software may have to be modified before changes to the requirements have been approved. Below is a suggested model of a process for making these modifications that ensures the requirements document and the system implementation do not become inconsistent.

In a Software Development Life Cycle, process models are usually used to bring order to the potential chaos a software development project may experience. In our case, where we need to handle a situation in which the software system has been modified before the requirements have been approved, a more flexible process model that allows us to adapt to frequent changes will be appropriate.

By carefully analyzing the major existing models, it appears that Agile will be the most useful in this context. Agile provides room for the integration of various approaches as judged relevant to the problem being solved or the system being developed. It is good for fast delivery of results and works well in projects with undefined or changing requirements, just like the one we are dealing with.

The following diagram shows the Agile steps we will have to follow in order to ensure consistency between the requirements document and the system implementation.

[Figure: Emergency change process flowchart]

In this flowchart, we have used the standard Agile process, to which we have added the capability to deal with emergency changes that trigger the update of the requirements document a posteriori.

C. Requirement engineering and the responsibility of confidentiality to a previous employer

In the software industry, when you take up a role with a new employer, they usually expect you to bring in all your wealth of experience from previous positions in order to add value to the business. This is the key differentiator between experienced employees and less experienced ones.

However, the way employees can utilize their previous experience can be explicit or implicit, depending on whether the issue of confidentiality is at play. It is explicit when employees share unclassified information from their former employers in raw form, without any filtering or obfuscation. Such experience could relate to the process models used, the company culture (corporate vs startup), general architectures (cloud vs on-premises) and the programming languages and frameworks used. On the other hand, it is implicit when employees have to share experience related to confidential artifacts from their previous companies, such as project specifications, inventions, algorithms, and critical technologies used for core business processes. The implicit sharing of previous experience requires employees to convey knowledge in a comprehensive manner while avoiding being so specific that they divulge the source of the facts they are sharing. For example, they should not mention the name of the project, the name of the company, nor the solution itself, remaining very generic instead.

Therefore, as an employee on the development team, with the professional responsibility of using my previous experience to save my current employer from unnecessary cost increases, I will opt for the implicit approach of sharing my former employer's lessons when resolving ambiguities encountered in the interpretation of the requirements. I will achieve this by actively participating in the requirements elicitation process, raising red flags on problematic requirements to attract the attention of the team, and making suggestions on how we could reinterpret those requirements, quoting very generically some of my previous experience and forecasting the issues we could face if the interpretations are not adjusted. I could express this in the form: “In the past, I had to solve a similar problem, and to avoid x, y and z as consequences, I had to take a, b and c as actions”, knowing that a, b and c will radically change the interpretation of the specifications.

IV: Process Models and Agility

A. Process modeling and process patterns

Modeling a Process

A process is a coherent set of activities for conducting a project from inception to delivery. Any initiative aiming at modeling a process should therefore ensure that it is well defined and has a structured sequence of stages. A process is usually modeled around two basic pillars:

- The activities intended to support the execution of the project

- The order in which these activities should be executed

However, these two main pillars can be extended and refined to include more items, culminating in the following list: Activities, Control, Artifacts, Resources and Tools. In the table below, I elaborate a bit more on each of them.

[Table: Process modeling elements — Activities, Control, Artifacts, Resources, Tools]

From the table above, we can conclude that modeling a process is nothing but elaborating a global strategy for efficiently conducting the execution of projects, bringing order to the chaos that may arise in the absence of a systematic approach.

Patterns

There are many different processes in the world, but all share some patterns that are fundamental to the execution of projects. Such patterns are usually defined by these phases: Preparation, Execution, Delivery and Maintenance. All processes, no matter their field of application, follow this pattern. In the table below, I provide more details on each.

[Table: Process patterns — Preparation, Execution, Delivery, Maintenance]


B. Prescriptive process models: Strengths and weaknesses

The prescriptive process models are three in number: Waterfall, Incremental and Spiral. Below is a table presenting their strengths and weaknesses.

[Table: Process model strengths and weaknesses (part 1)]

[Table: Process model strengths and weaknesses (part 2)]

C. Agility: A watchword in modern software engineering work

The digitalization of today's society has caused every aspect of life to move faster than ever before. Markets and customer segments are becoming more and more volatile due to this unprecedented dynamism. Businesses are therefore in search of approaches that will equip them to respond faster to their ever-changing markets. This reality has made “Agility” the watchword in modern software engineering work, as almost all businesses rely on software to deliver value to their customers.

Many organizations around the world are adopting Agility to boost team performance and customer satisfaction. Companies where Agility has become part of the corporate culture are better able to respond to market volatility, giving them a competitive advantage in the success of their projects. At the team level, Agility allows teams to quickly adapt to requirements changes without jeopardizing release deadlines. Beyond that, Agile helps reduce technical debt, improve customer satisfaction and deliver a higher quality product.

D. Agile software engineering VS Traditional process models

Agile software development is an approach to developing software projects with a great ability to create and respond to change. It empowers the engineering team to deal with, and ultimately succeed in, an uncertain and turbulent software development environment.

The Agile process model is also about reflecting on how the team can understand the dynamics of the business domain they operate in, identify potential ambiguity, and devise a plan for adapting as they move through the different sprints of the project.

One of the key differences between Agile and traditional processes lies in the fact that Agile integrates various approaches to systems analysis and design as deemed appropriate to the problem being solved and the system being developed. It enables engineering teams to deliver results fast and works well in projects with undefined or changing requirements. Nevertheless, Agile requires discipline, works best in small projects and requires a lot of user input.


References

1. Cobb, S. (2004, March 15). Five Key Privacy Principles. Computerworld. https://www.computerworld.com/article/2574182/five-key-privacy-principles.html

2. Thales Group. (2021). Beyond GDPR: Data protection around the world. https://www.thalesgroup.com/en/markets/digital-identity-and-security/government/magazine/beyond-gdpr-data-protection-around-world

3. International Security, Trust and Privacy Alliance. (2007). Analysis of Privacy Principles: Making Privacy Operational. International Security, Trust and Privacy Alliance.

4. Association for Computing Machinery. (2018). ACM Code of Ethics and Professional Conduct. https://ethics.acm.org/

5. Ritchie, H., Hasell, J., Mathieu, E., Appel, C., & Roser, M. (2022). Terrorism. Our World in Data. https://ourworldindata.org/terrorism

6. Vo, T.H. & Vo, H. (2012). Software Engineering. Hung Vo. https://cnx.org/content/col10790/1.1/

7. Mannaert, H., Bruyn, P.D., & Verelst, J. (2012, February 29 - March 5). Exploring Entropy in Software Systems: Towards a Precise Definition and Design Rule [Paper presentation]. ICONS 2012, The Seventh International Conference on Systems, Saint-Gilles, Reunion Island. https://www.researchgate.net/publication/266350680

8. Thomas, D., & Hunt, A. (2019). The Pragmatic Programmer: Your Journey to Mastery, 20th Anniversary Edition (2nd ed.). Addison-Wesley Professional.

9. Bass, L., Clements, P., & Kazman, R. (2013). Software Architecture in Practice (3rd ed.). Addison-Wesley Professional.

10. Vliet, H. V. (2008). Software Engineering: Principles and Practice (3rd ed.). John Wiley & Sons Ltd.

11. Accenture & WSP. (2010). Cloud Computing and Sustainability: The Environmental Benefits of Moving to the Cloud. https://aka.ms/cloud2010

