What vs How: The Separation of Business Goals from IT Development

When you define What vs How in an IT project, you solve many common project issues. The “What” is the set of business goals. It has absolutely nothing to do with the systems that support it. As the old saying goes, “the customer doesn’t want to know how you make the sausage, they just want sausage.”

A lesser-known fact is the sausage maker doesn’t want to be told how to make the sausage by the customer, they just want to know what type of sausage the customer wants.

That’s the basic relationship between the Business and IT. The business defines the overall goals and outcome of the business processes. The Business defines the features, functions, processes, and success criteria for the systems IT will design. The business doesn’t define the solution design Technology will use to meet the success criteria. IT decides “How” to meet the success criteria of the business by designing and building a system.

Example of What vs How: The business has decided it would like an in-house video transcription service to save on costs. They like the Otter transcription service currently in use, but as the company has expanded, the subscription costs have increased dramatically. They want a similar solution from IT but at a lower cost.

The Business has provided a few requirements, aka “The What”:

1. The solution must be accessible by employees at work, at home, or while traveling.

2. The solution must utilize the existing Single Sign On login method.

3. The solution must transcribe any video or audio file up to two gigabytes.

4. The solution must transcribe any video or audio file in under two minutes.

5. The solution must have the ability to categorize the audio and video files with transcription into a business-provided taxonomy.

6. The solution must create a searchable library of the categorized audio and video files with transcription.

7. The business likes Otter and would like to know which features IT can replicate with the in-house transcription service. Identifying the speaker by name is considered a “must have.”

8. The subscription is up for renewal in seven months, and the complete rollout including training would need to take place before then.

9. The budget estimate is $250,000, and the business would prefer to use in-house IT staff to save on consulting costs.

While we would need more detailed requirements to complete the work, these details provide enough information for IT to begin a search for a solution. Further refining the requirements to create project success criteria is a joint effort between IT and the business.

When we talk to the Business, it should be in the context of What they asked for, not the technical details of the build; they don’t want to know how we make sausage. For example, requirement #2: “The solution must utilize the existing Single Sign On login method.”

1. Assume somebody in the room has no idea what Single Sign On is. Add it to the reference materials for the meeting; you can attach a document to the agenda or include it at the bottom. This can be part of a Business Glossary or done casually. Don’t review it, just show folks it’s there for reference. This will do, courtesy of the internet:

Single Sign-On (SSO) is an authentication process that allows users to log in once with a single set of credentials (such as a username and password) and gain access to multiple applications or systems without needing to log in again for each one. SSO simplifies access management for users while maintaining security and governance for administrators.

Key Features of SSO:

  • Single Authentication: Users authenticate once to access multiple applications or services.
  • Simplified User Experience: Reduces the need for multiple passwords and logins, improving user convenience and productivity.
  • Centralized Authentication Management: Centralizes user identity management, making it easier for administrators to control access.
  • Security: Often integrated with multi-factor authentication (MFA) to enhance security.
  • Federated Identity: Can integrate with external identity providers (such as Google, Microsoft, or Active Directory) using protocols like OAuth, SAML, or OpenID Connect.

It’s now much easier for the business to understand the technical terms IT uses when creating Single Sign On; they are listed for reference when discussing the topic and their business goals. It also answers questions they had but didn’t want to ask.

2. Relate the business goal to the technology used. If this were on AWS, a primary component would be AWS IAM Identity Center. Same quick task: go to the internet, get some summarized bullets. Plagiarism is our friend in Project Management.

AWS provides a comprehensive Single Sign-On (SSO) solution through its service called AWS IAM Identity Center (formerly AWS Single Sign-On). It enables centralized management of access to multiple AWS accounts and applications. Here are key features and integrations of AWS Identity Center (SSO):

AWS IAM Identity Center (AWS SSO) Key Features:

  1. Centralized Access Management: It allows users to manage SSO access to multiple AWS accounts, business applications, and custom applications from one place.
  2. AWS Account Access: Enables users to sign in once and access all AWS accounts and roles they have permissions for, simplifying multi-account management.
  3. Pre-Integrated Applications: Comes with built-in integrations for many business applications, such as Salesforce, Microsoft 365, Dropbox, Slack, and Google Workspace.
  4. Custom Applications Support: Allows integration with custom SAML 2.0 applications, making it easier to manage access to internal business apps.
  5. Active Directory (AD) Integration: Can be integrated with your on-premises Microsoft Active Directory, allowing users to authenticate using their AD credentials.
  6. Multi-Factor Authentication (MFA): Enhances security by providing MFA features to help protect access to accounts and resources.
  7. Access Control Policies: Provides fine-grained access control and enables assignment of permissions through IAM Identity Center permission sets.
  8. Audit and Logging: It integrates with AWS CloudTrail to track and log sign-in activities and access permissions for auditing and security monitoring purposes.

Now that we have added reference materials, you can begin to see tasks IT will perform cross-referenced back to the business requirements. We learn and communicate infinitely faster when we have a common frame of reference. Nobody has to struggle to communicate when you can point to or read words off a page.

When you provide project status updates, it can be a compact list of technical tasks completed by IT and a percentage complete of the requirement. This will be covered with greater depth in the scheduled article, “The Art of the Milestone.”
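As a hypothetical sketch of such a status update, the idea is to roll the technical tasks ("The How") up to the business requirements ("The What") and report a percentage per requirement. The task names and requirement IDs below are illustrative, not from a real project plan.

```python
# Illustrative: roll IT task status up to business requirements for a
# compact status update. Tasks and requirement IDs are made up.

tasks = [
    # (requirement_id, task_name, done)
    (2, "Configure SSO identity provider", True),
    (2, "Integrate app with SAML assertions", False),
    (3, "Build file upload service", True),
    (3, "Add 2 GB size validation", True),
]

def percent_complete(tasks, requirement_id):
    """Percent of IT tasks completed for one business requirement."""
    scoped = [done for req, _, done in tasks if req == requirement_id]
    if not scoped:
        return 0.0
    return 100.0 * sum(scoped) / len(scoped)

for req_id in sorted({req for req, _, _ in tasks}):
    print(f"Requirement #{req_id}: {percent_complete(tasks, req_id):.0f}% complete")
```

The business sees progress in terms of the requirements they wrote, while IT still tracks the underlying technical tasks.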

IT needs to understand the business process and ask questions to further define “The What”:

1. How many users will the system need to support under “accessible by employees”? Are there projections for the total user count over the next five years?

2. There could be sensitive information, such as personally identifiable data and key business decisions, in the video and audio files. Is every person allowed to see any video and transcript? Would that violate Data Privacy laws?

3. What do we do if a file is over two gigabytes: reject it for size and not process it?

4. Two minutes for any file? Can we set an expectation by file size instead? What happens if a file is going to take more than two minutes?

5. It says “transcribe any video or audio file”; can we agree to a list of file types? Would this list of file types meet the success criteria? Audio and video formats that can be transcribed include MP3, WAV, AAC, M4A, FLAC, OGG, AIFF, MP4, AVI, MOV, WMV, MKV, FLV, and WebM.

6. Will these be considered core business records? Is there a defined retention schedule for data archiving?

7. What is the Business criticality rating of this application for Business Continuity/Disaster Recovery? RPO (Recovery Point Objective) defines the maximum acceptable amount of data loss measured in time. It answers the question: “How much data can we afford to lose?”

8. RTO (Recovery Time Objective) defines the maximum acceptable amount of downtime after a disruption. It answers the question: “How quickly must services be restored?”

9. It says “ability to categorize the audio files & video files with transcription into a business provided taxonomy”; has the taxonomy already been created, and can we get it? What happens when topics/categories don’t fit the taxonomy? Who owns the data and the taxonomy? Could this be a Data Governance decision?
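Question 5 above, the agreed list of file types, translates directly into an intake check. Here is a minimal sketch of that check; the extension list mirrors the formats named in the question, and whether extension matching alone is sufficient (versus inspecting file headers) is an IT design decision.

```python
# Illustrative intake check for question 5: accept only file types on
# the agreed list. Extension-based matching is an assumed design choice.

ALLOWED_EXTENSIONS = {
    "mp3", "wav", "aac", "m4a", "flac", "ogg", "aiff",
    "mp4", "avi", "mov", "wmv", "mkv", "flv", "webm",
}

def is_supported(filename: str) -> bool:
    """Accept a file only if its extension is on the agreed list."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in ALLOWED_EXTENSIONS

print(is_supported("town_hall.MP4"))  # an agreed video format
print(is_supported("notes.txt"))      # not on the list
```

Writing the list down like this is exactly the point of the question: it turns “any video or audio file” into a testable success criterion.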

Information Technology must keep asking questions until it has the requirements and specifications to design a solution that meets the success criteria of the business. Engineers are not known as great communicators. There are a few engineers who are, but typically we need a body in the middle to create requirements. Most engineers would like nothing more than to be handed a list of requirements and figure out what to build. That’s what they do: build solutions that meet the requirements.

The customer-directly-to-engineer concept has a low success rate; it’s even in Office Space:

Bob Slydell : What you do at Initech is you take the specifications from the customer and bring them down to the software engineers?

Tom Smykowski : Yes, yes that's right.

Bob Porter : Well then, I just have to ask why can't the customers take them directly to the software people?

Tom Smykowski : Well, I'll tell you why, because engineers are not good at dealing with customers.

Engineers are specialists; Solution Architects, Project Managers, and Business Systems Analysts are generalists. Generalists must have a basic understanding of the technology used to support the business processes, the business processes themselves, the monetary impact of the business processes, and the regulations/compliance governing the business processes.

The project team is made up of specialists who have a limited understanding of work outside their specialty. Specialists have difficulty communicating with other types of specialists; somebody needs to bridge the knowledge gap between them. The DevOps engineer doesn’t know the Data engineer’s job or the Network engineer’s job. The six client departments involved don’t know what the other departments do. The Finance resources don’t care what any of those resources are doing; they want the 5-year revenue projections for CapEx and OpEx as well as ongoing maintenance and support costs. Data Privacy and Security teams are an entirely separate animal; compliance goals often constrain business processes.

Project managers don’t need to be able to perform the jobs of specialists, but they do need to understand the fundamental concepts of their work. A good project manager will constantly reframe one specialist’s comments for other specialists to bridge the knowledge gap. Summarize and simplify for the intended audience.

You might be thinking, “How would I know there is a 2-gigabyte file limit in Amazon Transcribe? I’m the PM, not an engineer.” You don’t have to know the exact file size limit, just that there is a limit and something bad will happen if we go over it and don’t plan for it. There is always a file size limit.

It’s the same routine project after project: where are all these files to process? Let’s analyze them and see if the file size assumptions hold up. Something will need to be done to handle files larger than the specification; eventually one will be ingested by the solution. What happens when a file is larger than specified? Maybe they skip it or develop another sub-process; it’s up to IT, they design the system.
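Whatever IT decides, the decision usually shows up as a gate in front of the pipeline. This sketch shows one possible routing; the 2 GB ceiling comes from the stated requirement, while the “split-or-reject” path is an assumed design, not the only one IT might choose.

```python
# Illustrative pre-ingestion size gate. The 2 GB limit is the business
# requirement; the routing of oversized files is an assumed design choice.

MAX_BYTES = 2 * 1024**3  # two gigabytes, per the requirement

def route_file(size_bytes: int) -> str:
    """Decide what happens to a file before it reaches transcription."""
    if size_bytes <= MAX_BYTES:
        return "transcribe"
    # Rainy day: the file the requirement said would never arrive.
    return "split-or-reject"

print(route_file(500 * 1024**2))  # a 500 MB file
print(route_file(10 * 1024**3))   # a 10 GB file
```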

Your job is to ensure there is a user story and test case for files larger than specified, and to test it. The business needs to agree to the output of the solution IT provides through the user stories and testing sign-off. We need negative user stories and test cases to handle exceptions; we always assume it’s going to be a rainy day. We start with “Sunny Day Scenarios” and then create “Rainy Day Scenarios,” aka negative user stories. Be a rainy-day PM and have a plan for rain; the sunny days take care of themselves.
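A negative test case can be this small. Here, `validate_upload` is a hypothetical helper standing in for the solution’s intake check; the point is that the rainy-day paths (oversized and empty files) get their own assertions, not just the sunny-day path.

```python
# Minimal "rainy day" test sketch. validate_upload is a hypothetical
# stand-in for the solution's real intake check.

MAX_BYTES = 2 * 1024**3  # two gigabytes, per the requirement

def validate_upload(size_bytes: int) -> bool:
    """True if the file may enter the transcription pipeline."""
    return 0 < size_bytes <= MAX_BYTES

# Sunny day: a normal file is accepted.
assert validate_upload(100 * 1024**2)

# Rainy days: oversized and empty files are rejected, not crashed on.
assert not validate_upload(10 * 1024**3)
assert not validate_upload(0)

print("rainy-day cases covered")
```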

While we often hear IT project failures stated as “IT didn’t build it right,” 90% of the time it’s that IT didn’t build the right solution because of missed requirements and specifications. That’s a joint process between IT and the business; both teams failed to gather valid success criteria.

Incorrect file size requirements are a common project misstep; you must do an exhaustive analysis of the files to be transcribed before settling on a specification. If two gigabytes is listed as the maximum file size requirement, and there are ten-gigabyte files to be transcribed, that is a “What” problem, not a “How” problem.

Not convinced it’s a “What” problem? Here is an example: the file size and processing time will impact the solution design using Amazon Transcribe (guess where I got these bullets):

1. File Size Limits: Amazon Transcribe has a limit of 4 hours or 2 GB per file for transcription. If your audio or video files exceed these limits, you’ll need to split the file into smaller segments before sending it for transcription.

2. Performance and Latency: Larger files take longer to process, which can result in higher latency. If near-real-time transcription is required, chunking the file into smaller parts and processing them in parallel can improve performance and provide results faster.

3. Cost Considerations: AWS Transcribe charges based on the length of the audio/video file (per second). Larger files incur higher costs. Reducing file sizes through compression (while maintaining audio quality) can help lower these costs, especially if some parts of the file do not need transcription.

4. File Chunking and Streaming: To handle large files, consider chunking the files into smaller sections. You can either manually break the file into smaller segments or use Amazon Transcribe Streaming, which allows for real-time, continuous transcription of an audio stream. This way, even large or live audio/video can be transcribed as it is streamed.

5. Storage and Retrieval: If you're dealing with large audio/video files, you will likely store them in Amazon S3. For large-scale workflows, consider implementing S3 lifecycle policies to manage storage costs by archiving files or moving them to Glacier after transcription.

6. Network Bandwidth and Transfer Times: Uploading large files to S3 (where Transcribe pulls the data from) can result in high network transfer times, especially with very large files. Efficiently managing the transfer process (e.g., using S3 Transfer Acceleration or breaking up the file) ensures faster uploads and processing.

7. Memory and Compute Resources: If you are pre-processing the file (e.g., splitting it into smaller chunks, enhancing audio quality), you need to consider how much memory and compute power these operations require. Large files may need more robust infrastructure for these tasks, which could influence the use of EC2, Lambda, or other AWS compute resources.

8. Resilience and Error Handling: Larger files also introduce greater chances of errors during processing (e.g., network interruptions, file corruption). Implement retry mechanisms and checkpoints to ensure that if an error occurs partway through the transcription, you don’t lose all progress and can resume where it left off.

9. Compression and Audio Quality: Compressing large files can save storage and transfer costs but may affect the audio quality. Lower-quality audio leads to less accurate transcription results. When working with large files, balance file size reduction with maintaining clear audio for the best transcription accuracy.
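To make the chunking idea in item 4 concrete, here is a sketch of the arithmetic only: computing byte ranges that keep each segment under the service limit. Real audio splitting must cut on frame or silence boundaries (e.g., with a tool like ffmpeg); this shows why a ten-gigabyte file against a two-gigabyte limit is a different pipeline, not a one-line change.

```python
# Illustrative chunk-boundary arithmetic for the splitting approach in
# item 4. Real media files must be split on valid frame boundaries.

def chunk_ranges(total_bytes: int, max_chunk: int):
    """Yield (start, end) byte offsets, each span at most max_chunk long."""
    start = 0
    while start < total_bytes:
        end = min(start + max_chunk, total_bytes)
        yield (start, end)
        start = end

# A 10 GB file against a 2 GB limit needs five segments, each of which
# becomes its own transcription job whose results must be re-stitched.
ranges = list(chunk_ranges(10 * 1024**3, 2 * 1024**3))
print(len(ranges))
```

Each extra segment means another job to submit, track, retry, and merge back in order, which is where most of the design complexity in items 2, 7, and 8 comes from.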

The file size requirement (“The What”) completely changes the solution design (“The How”).

Processing time, cost, storage requirements, batch processing considerations, file upload limits, job management complexity, network bandwidth, error handling and retries, latency in transcription results, and audio quality and performance are all impacted by the size of the file listed in the requirements. That’s not a few lines of code; it’s a completely different solution to build.

Had the success criteria listed a 10-gigabyte file, that’s what would have been designed, coded, tested, and deployed. Yes, the system is choking on the ten-gigabyte files; unfortunately, it is “working as designed” for two-gigabyte files. Nobody likes to give or receive a “working as designed” response, but it happens all too frequently. Project success criteria are quantitative, not qualitative. There should be no question the business goals were met at the end of the project.

Next article in the series, “Why do we struggle to create project schedules? It’s about setting expectations, not dates”:

What framework or model should we use to create a project schedule?
