Coding is not hard. Identifying where it will fail is. A guide to solving that.
Rohan Girdhani (The TechDoc)
I help you build profitable software that your customers can't ignore and systems that can support your next big milestone.
Coding, at its core, is not an impossible task. The real challenge lies in anticipating where your code might fail and having the skills to fix those issues. For entrepreneurs, especially those diving into the tech space, understanding this is crucial. This article explores how you can use automated code reviews, security tests, debugging, and software health metrics to meet that challenge.
When young Rohan, a commerce graduate, started learning how to code, he wanted to build his own startup. One thing he realised immediately was that online courses, and even physical classes, only teach the tools and how to use them. Real-world problems are a different tangent entirely. That was me back then.
The Real Challenge: Debugging and Fixing Code
Coding can be straightforward once you grasp the fundamentals. Focus on the core concepts and the syntax, and you will find with time that programming languages differ only in their presentation and syntax; the core concepts are, and always will be, the same. The aim is to move our community towards the realisation that making a fuss with tech jargon won't do anyone any good.
However, every piece of code has the potential to fail, and identifying those failures and fixing them is where the true challenge lies. Debugging requires patience, attention to detail, and a systematic approach to problem-solving. Recently, in a tech audit, I was afraid to tell the client that their code was merely functional and could fail at any time. It's hard to tell someone their code is going downhill when they have been with the company for more than four years and are its most trusted person. I always have to make the hard call; soft skills matter. Trust me.
The most beautiful software systems I have ever seen were the ones where every issue was tracked, and every issue could be translated into plain English.
Here are the key metrics that will help you see the flood coming your way well in advance.
1. Code Quality Tests
a. Code Coverage Tests - The measure of how much of your code is exercised by unit tests. The higher the coverage, the more stable your code. Imagine a new developer playing with your code: as soon as they push, things fall apart and you are left praying to god, clueless. Don't be that guy.
Add unit tests for 95-100% code coverage and run them after every push. The simplest way to get started is to ask ChatGPT to draft the tests for you.
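As a minimal sketch of what "full coverage" means (the function and test names here are hypothetical, not from any particular codebase), here is a function whose tests exercise both its success path and its error branch:

```python
import unittest

def divide(a, b):
    """Divide a by b, refusing division by zero."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(divide(10, 2), 5)

    def test_division_by_zero(self):
        # Covering the error branch is what pushes coverage towards 100%
        with self.assertRaises(ValueError):
            divide(1, 0)
```

Run the suite with `python -m unittest`, and run it under a coverage tool (e.g. `coverage run -m unittest` then `coverage report`) to see what percentage of lines the tests executed.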
b. Defect Density - The number of defects per thousand lines of code (KLOC).
Formula: Defect Density = Number of Defects / Size of the Software (in KLOC)
A lower defect density indicates higher software quality, whereas a higher defect density points to lower quality and potential areas of improvement.
Tools for Tracking Defect Density
Several issue-tracking tools can help you log defects and calculate defect density.
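The calculation itself is trivial; a quick sketch in Python:

```python
def defect_density(num_defects, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return num_defects / (lines_of_code / 1000)
```

For example, 30 defects in a 15,000-line codebase gives a density of 2.0 defects per KLOC.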
2. System & API Performance Tests
a. Response Time: Monitor how quickly the system responds to user requests.
b. Throughput: The number of transactions the system can process in a given time period.
c. Latency: Track the delay between request and response time.
d. API Error Rates: The frequency and type of errors returned by your APIs.
Now, my friend, what if I told you that you can do all of this with one tool, with no paid subscription? Download Postman and write all your API requests there; it's good for your documentation too.
Step 1: Run the collection.
Step 2: Define the configuration.
Step 3: Find all your results, and you are done.
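If you prefer scripting these measurements yourself, all four metrics can be collected with nothing but the standard library. This is a rough sketch, not production code: the URL and run count are placeholders, and the `fetch` parameter is injectable so the function can be exercised without a live endpoint.

```python
import time
import urllib.request
import urllib.error

def measure_endpoint(url, runs=10, fetch=None):
    """Return average response time, error rate, and rough throughput.

    By default each run performs a real GET; pass `fetch` to stub it out.
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u, timeout=5).read()
    times, errors = [], 0
    for _ in range(runs):
        start = time.perf_counter()
        try:
            fetch(url)
        except urllib.error.URLError:
            errors += 1
        # Failed requests still contribute to observed latency
        times.append(time.perf_counter() - start)
    total = sum(times)
    return {
        "avg_response_s": total / runs,
        "error_rate": errors / runs,
        "throughput_rps": runs / total if total > 0 else float("inf"),
    }
```

Latency here is measured end to end from the client's side; a dedicated tool like Postman will break it down further for you.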
3. Reliability
a. Uptime: The percentage of time the system is operational.
Amazon CloudWatch is a monitoring service that can be used to track the uptime of your AWS resources, such as EC2 instances, RDS databases, and other services. Here's how you can use CloudWatch to measure system uptime:
Here is a simple heartbeat script using Python and boto3:
import boto3

cloudwatch = boto3.client('cloudwatch')

# Send a custom heartbeat metric to CloudWatch
cloudwatch.put_metric_data(
    Namespace='MyApp/Uptime',
    MetricData=[
        {
            'MetricName': 'Uptime',
            'Dimensions': [
                {
                    'Name': 'InstanceId',
                    'Value': 'i-1234567890abcdef0'
                },
            ],
            'Value': 1.0,
            'Unit': 'Count'
        },
    ]
)
Documentation: Amazon CloudWatch
Third-party monitoring tools can also be used to track uptime.
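However you collect the data, the uptime figure itself is simple arithmetic. A sketch:

```python
def uptime_percentage(total_hours, downtime_hours):
    """Percentage of the period the system was operational."""
    return (total_hours - downtime_hours) / total_hours * 100
```

One hour of downtime in a 30-day month (720 hours) works out to roughly 99.86% uptime.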
b. Mean Time Between Failures (MTBF): The average time between system failures.
MTBF is a measure of how reliable a system or component is, calculated as the average time between failures. MTBF - Wikipedia
Formula: MTBF = Total Uptime / Number of Failures
Example: MTBF = 10,000 hours / 5 failures = 2,000 hours
c. Mean Time to Recovery (MTTR): The average time to recover from a failure.
MTTR measures the average time required to repair a system or component after a failure. MTTR - Wikipedia
Formula: MTTR = Total Downtime / Number of Failures
Example: MTTR = 20 hours / 5 failures = 4 hours
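Both formulas reduce to one-liners; here they are, checked against the article's own numbers:

```python
def mtbf(total_uptime_hours, num_failures):
    """Mean Time Between Failures: average uptime between failures."""
    return total_uptime_hours / num_failures

def mttr(total_downtime_hours, num_failures):
    """Mean Time To Recovery: average time to restore service."""
    return total_downtime_hours / num_failures
```

`mtbf(10_000, 5)` gives 2,000 hours and `mttr(20, 5)` gives 4 hours, matching the examples above.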
4. Scalability
a. Load Testing: Monitor how system performs under different load conditions.
Set Up Your Postman Collection
First, create a Postman collection that contains the API requests you want to load test. Ensure your requests are properly configured with necessary headers, parameters, and body data.
Install Node.js and Newman
Ensure you have Node.js installed on your machine. If not, download and install it from nodejs.org.
Install Newman globally using npm: npm install -g newman
Write a Script to Run Newman with Load Testing Parameters
Create a script to run your Postman collection using Newman with load testing parameters. For example, you can use a simple bash script to run the collection multiple times.
Example Script (load-test.sh):
#!/bin/bash

COLLECTION_PATH="path/to/your/postman_collection.json"
ENV_PATH="path/to/your/postman_environment.json"
NUMBER_OF_RUNS=10
REQUESTS_PER_RUN=5

# Note: Newman executes iterations sequentially; this script repeats the
# collection rather than firing requests concurrently.
for ((i=1; i<=NUMBER_OF_RUNS; i++))
do
  echo "Run #$i"
  newman run "$COLLECTION_PATH" -e "$ENV_PATH" --delay-request 1000 --iteration-count "$REQUESTS_PER_RUN"
done
That's it. Execute the script and enjoy the results:
sh load-test.sh
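The script above fires requests sequentially. If you want genuine concurrency without extra tooling, a thread pool in Python is one way to sketch it; `request_fn` here is a placeholder for whatever actually issues the request:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def _timed_call(request_fn, url):
    """Run one request, recording success and elapsed time."""
    start = time.perf_counter()
    try:
        request_fn(url)
        ok = True
    except Exception:
        ok = False
    return ok, time.perf_counter() - start

def load_test(request_fn, url, total_requests=50, concurrency=5):
    """Issue total_requests calls with up to `concurrency` in flight."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: _timed_call(request_fn, url),
                                range(total_requests)))
    successes = sum(1 for ok, _ in results if ok)
    return {
        "success_rate": successes / total_requests,
        "avg_latency_s": sum(t for _, t in results) / total_requests,
    }
```

Raise `concurrency` step by step and watch how success rate and latency degrade; that curve is your scalability profile.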
b. Horizontal and Vertical Scaling: Check the system's ability to scale out (add more machines) versus scale up (add more power to existing machines).
Steps for Testing Combined Scaling: mix the uptime monitoring from point 3 with the load-testing steps from point 4.
I told you this article would tell you everything. Now, don't let me stop you.
5. Security
a. Vulnerability Metrics - Track the number and severity of vulnerabilities detected.
Amazon Inspector: Amazon Inspector is a security assessment service that helps you identify vulnerabilities in your applications deployed on AWS. It provides automated security assessments and generates reports with the number and severity of vulnerabilities detected.
b. Incident Response Time - The time taken to respond to security incidents.
6. Deployment Metrics
a. Deployment Frequency - Track how often code is deployed to production.
b. Deployment Success Rate - Measure the percentage of successful deployments without rollback.
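The metric itself is a one-line calculation; a sketch:

```python
def deployment_success_rate(total_deployments, rollbacks):
    """Percentage of deployments that did not require a rollback."""
    return (total_deployments - rollbacks) / total_deployments * 100
```

For example, 2 rollbacks out of 40 deployments gives a 95% success rate.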
The following GitHub Actions workflow deploys your project and logs whether each deployment succeeded, which you can use to track your success rate. Save it in your repository under .github/workflows/.
name: Deploy to Production

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Set up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: '14'

      - name: Install Dependencies
        run: npm install

      - name: Build Project
        run: npm run build

      - name: Deploy to Production
        run: ./deploy.sh

      - name: Deployment Successful
        if: success()
        run: echo "Deployment was successful!" | tee -a deployment.log

      - name: Deployment Failed
        if: failure()
        run: echo "Deployment failed!" | tee -a deployment.log
7. Technical Debt
a. Code Complexity: Metrics like cyclomatic complexity to assess code maintainability.
b. Refactoring Rate: Rate at which code is refactored to reduce technical debt.
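Tools like SonarQube compute cyclomatic complexity for you, but as a rough illustration of what it measures, you can approximate it for Python code by counting branch points with the standard `ast` module. This is a simplified sketch, not SonarQube's exact algorithm:

```python
import ast

# Node types that introduce a new branch in the control flow
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                 ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCH_NODES)
                   for node in ast.walk(tree))
```

A straight-line function scores 1; every `if`, loop, or exception handler adds one more path a test suite has to cover.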
To combine tracking of code complexity and refactoring rate using GitHub, you can use GitHub Actions along with tools like SonarQube for code complexity analysis and custom scripts for refactoring rate tracking. Here’s how you can set it up:
Set Up SonarQube for Code Complexity
Here is how your GitHub workflow file should look:
name: Code Quality and Refactoring Tracker

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
  schedule:
    - cron: '0 0 * * 0' # Runs weekly

jobs:
  sonar:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          distribution: 'temurin'
          java-version: '11'

      - name: SonarQube Scan
        run: mvn sonar:sonar
        env:
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

  refactoring:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0 # full history so git log sees all commits

      - name: Count refactoring commits
        run: |
          refactor_count=$(git log --grep="^refactor:" --pretty=oneline | wc -l)
          echo "Total refactoring commits: $refactor_count"
          echo "$refactor_count" > refactor_count.txt

      - name: Upload results
        uses: actions/upload-artifact@v2
        with:
          name: refactor_count
          path: refactor_count.txt
By using SonarQube and GitHub together, you can maintain a high standard of code quality and actively manage technical debt.
8. Development Velocity
a. Lead Time for Changes: Measure the time it takes from code commit to deployment.
b. Sprint Burndown: Track the progress of tasks in each sprint to ensure timely completion.
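Lead time for changes is easy to derive if you record commit and deployment timestamps; the names here are illustrative:

```python
from datetime import datetime

def lead_time_hours(commit_time, deploy_time):
    """Hours elapsed between a commit and its deployment."""
    return (deploy_time - commit_time).total_seconds() / 3600
```

A commit made at 09:00 and deployed at 15:30 the same day has a lead time of 6.5 hours; track the average across commits to see your velocity trend.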
I will leave you to ponder on the last point.
I know I can't cover everything in one article; there is so much more. But this took me a while, so if you found it helpful, do share it with your peers. Congratulations on reaching the end, and see you next Saturday.
Last week I wrote about how you can develop your Flutter application for mobile devices using ChatGPT. You can access it here - https://www.dhirubhai.net/pulse/how-build-your-flutter-app-using-chatgpt-4o-rohan-girdhani-brfdc. Keep sharing and learning. Subscribe to The Tech Saturday - https://lnkd.in/gskQYpKx