Effective Software Troubleshooting

Introduction

Troubleshooting complex software issues is one of the most fulfilling experiences in software engineering. There’s nothing quite like having one of those ‘aha!’ moments at 3AM after grueling hours of reading code and documentation, poring over log files, stepping through a debugger, and adding debug statements. What’s even more thrilling is taking the results of that troubleshooting, fixing user-reported issues, and providing users with an improved software product and user experience.

Over the course of my career, one of my responsibilities was coaching other engineers on how to troubleshoot issues reported by users more effectively and provide quality customer service. When I noticed a support case that could have been handled more efficiently, or after I troubleshot something complex myself, I would share those thoughts with my team. I’ll share some of those same thoughts here.

This article outlines the high-level troubleshooting steps to follow when diagnosing software issues. Then it discusses some techniques to employ when feeling stuck. In a follow-up post, I will provide examples of how I applied these steps to tackle some tricky problems.

Target Audience

This article is directed at software engineers and customer support engineers. As it gets into the weeds, the troubleshooting techniques focus on diagnosing problems on Linux platforms, since that is where my experience lies. The article assumes the reader has some background in programming, some familiarity with Linux commands like ‘grep’ and ‘ssh’, and some exposure to the version control system ‘Git’. That being said, some of the techniques and tools are either applicable to other platforms or have equivalent alternatives. And even though many of the troubleshooting techniques are focused on software, the high-level steps could be used to troubleshoot any machine.

TL;DR

Does this article look too long? I understand. I can be verbose. :) If there is one thing that I want you to take away, it is that perseverance is the most important weapon against a seemingly impossible troubleshooting task. Even when I felt that I had exhausted all possible root causes for an issue, persevering has always eventually produced an insight that led to a resolution. I just needed to be patient. This doesn’t mean that you should hold onto a problem indefinitely. By all means, ask for help, especially if the issue is critical. However, if you have time, keep whittling away at the problem. Not only will you be rewarded, but you will also learn a great deal along the way.

Troubleshooting Steps

At a high level, troubleshooting involves the following steps:

  1. Describe the problem
  2. Research the problem
  3. Isolate the root cause
  4. Plan and implement a solution
  5. Test the solution
  6. Communicate with stakeholders
  7. Document the whole process

First, you collect initial information about the issue that was reported. Then you use that information to identify the problem. Next, you research the problem until you identify the component of the software that is failing or behaving unexpectedly. Further research into the problematic component should result in a set of hypotheses theorizing why the issue surfaced. By ruling out all hypotheses except one, you identify the root cause of the issue. Once you know the root cause, you can create a plan of action to address it, implement that plan, test it, communicate the results to the user, and finally document the whole process. These steps can be iterative and recursive. They are iterative in that if you reach step 3 and cannot determine the root cause, you may need to return to step 2 to collect more information and generate a new list of hypotheses. They are recursive in that step 3 may lead you to a failing component whose own issue must in turn be described, researched, and root-caused.

Even though communication and documentation are listed last, those steps are actually performed throughout the process. A problem investigation might involve many people: product managers, customer account executives, salespeople, customer support engineers, software engineers, and testers. Proper communication and documentation will prevent wasted cycles and a frustrating customer experience.

Below, the article delves into each step in more detail.

Identify and Describe the Problem

The first step in troubleshooting is to gather the information necessary to accurately describe the problem. The description of the problem should encompass things like:

  • What the user was trying to achieve
  • What software was being exercised
  • How the user was exercising the software
  • What the expected results were
  • What the actual results were

It is vitally important to understand what the user is trying to achieve. Sometimes a user will report that command X failed to accomplish task Y. However, sometimes command X was not designed to accomplish task Y, or task Y does not really achieve what the user wants. It’s true that command X may still need to be investigated. However, you do not want to spend cycles investigating it only for the user to learn that his or her goals still have not been accomplished. Good engineers build up an intuition for what users really want to accomplish, even when it is not explicitly stated in the problem report.

When describing Apple’s approach to innovation, Steve Jobs once said, “One of the things I’ve always found is that you’ve got to start with the customer experience and work backwards to the technology.” Why is this relevant here? Sometimes, when presented with a problem, an engineer is inclined to jump right into the technology. However, one doesn’t want to lose sight of the entire customer experience, which includes finding and downloading the software, installing the software, reading the documentation, requesting support, interacting with the user interface, etc. For example, say that after troubleshooting a problem, you find that the software is working as expected. Maybe the problem was caused by user error. In such cases it is easy to wipe your hands of the problem, but a good engineer will ask whether the UI or the documentation could have prevented the user error.

To ensure you have captured the problem accurately, it is sometimes useful to paraphrase the problem and repeat it back to the user, so that both of you are on the same page and expectations are managed appropriately.

At this step it might be necessary to get additional context by asking questions like:

  • Is the issue reproducible?
  • If not reproducible, approximately when did this issue happen?
  • Has this ever worked and if so, when?
  • Has anything changed in the environment?
  • Do other users have the same issue?
  • Does the functionality work on a different machine?

As you get more experience, you’ll learn the minimum level of information to solicit from users to diagnose the issue. You want to avoid asking for information irrelevant to the issue at hand, and you also want to avoid unnecessary and costly back-and-forth communications with the user. Those communications delay timely resolutions and lead to a very frustrating customer support experience.

Research the Problem

The next step is to generate a list of hypotheses, theorizing why the software behaved the way it did. Those hypotheses are derived by researching the issue with some or all of the following techniques:

  • Reading the software manuals
  • Searching for previous cases in public or private knowledge bases
  • Reproducing the issue
  • Experimentation
  • Comparison of a working case against a broken case
  • Running OS diagnostic utilities
  • Examination of the software’s outputs
  • Examination of the source code

RTFM

The first step in researching an issue is understanding what the software would have done if it worked. You could start with the source code, if you have it available, but that is costly. The software’s manuals are the best place to start.

On Linux and Unix-like platforms, the ‘man’ command is your best friend. The command displays the manual pages for things like general commands, system calls, library functions, kernel interfaces, file formats, and so on. For example, suppose you determined that the ‘mkdir()’ function was failing, and you wanted to determine all the possible reasons for the failure. The output of ‘man -S 2 mkdir’ will display the manual page for the ‘mkdir()’ system call and will list all the possible error codes.
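For instance, a minimal man session for this scenario might look like the following sketch (assuming a man-db based Linux system; the exact options accepted by ‘man’ vary slightly between platforms):

# Section 2 covers system calls; without the section number you may get the mkdir shell command instead.
man 2 mkdir
# Search manual page names and short descriptions for a keyword:
man -k mkdir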

Just remember that manuals can inadvertently omit important information or contain errors. The definitive explanation for the behavior of software lies in the source code.

Previous Cases

Sometimes, there’s enough information in the problem submission to progress to the next step. If more information is still needed, a great source is a service request system that records previously reported issues. If an error was reported, search past cases for that error. Otherwise, search for keywords that describe the reported behavior. If the software is available to the public, then search public knowledge bases and forums. More often than not, when users report issues, they are not the first to encounter those problems.

Reproduce the issue

If the root cause is not clear yet, then usually the next step is to reproduce the issue. It’s difficult to debug an issue if it is not reproducible. Additionally, before a solution is provided to the user, one will need to ensure that the solution works. To do that effectively, one must test the solution on a reproducible test case.

Reproducibility is sometimes very difficult. Sometimes, an issue is only reproducible in a user’s environment, and one must go to great lengths to simulate the user’s environment as closely as possible. In a follow-up post, I’ll describe a problem where I couldn’t reproduce an issue after typing in a user’s command manually. However, after repeatedly struggling with the issue, I reproduced the problem when I finally cut and pasted the command from the user’s original problem statement. The two commands appeared identical, but there was a very subtle difference that was not readily visible to the human eye. Imagine trying to determine why these two commands, which look identical, behave differently:

% process_changeset –some_option
and
% process_changeset -some_option        
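One way to expose such an invisible difference is to dump the raw bytes of each command line; a quick sketch using the option strings from the example above:

# Dump each string byte by byte; an en dash is three bytes in UTF-8, a plain hyphen is one.
printf '%s\n' '–some_option' | od -c
printf '%s\n' '-some_option' | od -c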

Experimentation

Another useful technique for gathering information and/or reproducing an issue is experimentation. One should try running the software on different machines or under different user IDs. Sometimes, the current working directory is an issue. Other times, running the software in a new shell gives clues as to the root cause. There are times one will want to experiment with different inputs to the program, such as arguments, data files, or environment variables. If the problem is related to the operating system, then changing the operating system or patch level gives valuable insights. For browser-based applications one might try using different browsers and/or clearing the cache of the current browser.

The important things to remember in this step are to:

  • Change one and only one thing at a time
  • Document each experiment

If you change more than one variable at a time, then it is more difficult to ascertain which variable, if any, contributed to a behavior change. Additionally, you must document each step along the way to avoid duplicate tests by yourself and by other engineers who assist with root-causing the issue.

Sometimes, the version of the software itself is the issue, and that can be discovered by experimenting with different versions of the software. Once you have a working version and a non-working version, it can be useful to use a technique called bisection to identify the version of the software where a problem was introduced. You pick a version halfway between the two and determine whether the problem exists there. If it does, then you know the problem was introduced in the earlier half of the range. Otherwise, it was introduced in the later half. Either way, you’ve effectively halved the number of versions you need to examine. Next, you recursively bisect the new, smaller list of versions until you find the actual change that introduced the problem. Once you identify the change, you can examine the actual lines of code that were changed to cause the behavior change. If your software is versioned in Git, then the ‘git bisect’ command makes this very easy.
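A typical ‘git bisect’ session looks roughly like the following sketch (the version tag and the test script are hypothetical placeholders):

git bisect start
git bisect bad                 # the currently checked-out version exhibits the problem
git bisect good v1.0.0         # a version known to work
# Git checks out a commit halfway between the two; test it, then mark it:
git bisect good                # or 'git bisect bad'; repeat until the first bad commit is reported
# Or automate the search with a script that exits non-zero when the problem is present:
git bisect run ./test_for_bug.sh
git bisect reset               # return to your original branch when finished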

In more trivial cases, bisection may not be necessary. For example, once I was reading the Neovim manual online, and I noticed that one day it looked fine and the next some of the text was weirdly jumbled together. I am not a Neovim maintainer. I am not familiar with how the online documentation was generated. However, I surmised that maybe a recent commit introduced a bug. I used ‘git log’ to peruse the past few days of commits, searching for keywords like ‘wrap’. I came across this commit:

commit ebba7ae095d9bb800c43188df848ac4f4733d167
Author: gusain71 <[email protected]>
Date:   Tue May 14 13:23:43 2024 +0200

    docs(gen_help_html.lua): wrap legacy help at word-boundary #28678

    Problem:
    On the page: https://neovim.io/doc/user/dev_vimpatch.html
    The links extend beyond the container and thus end up behind the navigation to the right.

    Solution:
    Add these lines to get_help_html.lua:

        white-space: normal;
        word-wrap: break-word;        

Without looking at the diffs of that commit, I was pretty sure this commit was the culprit. The name of the changed file, ‘gen_help_html.lua’, was a big clue that it was responsible for generating the HTML manuals, and the second clue was that the commit message mentioned word wrapping. If I had been a Neovim maintainer, I probably would have known that gen_help_html.lua was the program responsible for generating the online documentation. Thus, I would have run ‘git log’ on that file directly and arrived at the answer more quickly.
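The commands involved were roughly the following (the file path is from memory and may not match the repository layout exactly):

# Search recent commit messages for a keyword:
git log --since='3 days ago' --grep='wrap'
# Or, if you already know the responsible file, review only its history:
git log --oneline -- scripts/gen_help_html.lua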

By the way, this is a reminder that change comments in the version control system are incredibly valuable. This one was well written and allowed me to identify which commit introduced the problem. I’ve come across change comments that said something like ‘Fixed issue #1234’, and as part of the code review process, I advised the engineer to elaborate on the changes in the commit.

Output

The software’s output can give excellent clues as to what the underlying code is doing, why it is doing it, and possibly what component is failing. Relevant output includes:

  • Console output
  • Graphical and text output on the screen in a desktop GUI or web-based application
  • Logs
  • Artifacts

One trick I have employed is comparing the output of a working instance of a program with that of a broken one. If there is output in one instance that does not exist in the other, then you can use those clues to investigate which different code paths were taken.
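In a shell that supports process substitution, such as bash or zsh, that comparison can be a one-liner; a sketch, where the program name, option, and host are hypothetical:

# Capture stdout and stderr from a working run and a broken run, then diff them:
diff <(./myprog --verbose 2>&1) <(ssh brokenhost ./myprog --verbose 2>&1)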

Console

The console usually includes two types of messages: those directed to standard output and those directed to standard error. Messages printed to standard output usually comprise the main output of the program. Messages directed to standard error are usually errors. However, they can also contain diagnostic information about what the software is doing while performing its core functionality.

As an example, let’s examine the output of ‘git clone’.

ryan@Ryans-Macbook-Pro ws % git clone https://github.com/git/git.git git      
Cloning into 'git'...
remote: Enumerating objects: 352391, done.
remote: Counting objects: 100% (1095/1095), done.
remote: Compressing objects: 100% (539/539), done.
remote: Total 352391 (delta 705), reused 810 (delta 555), pack-reused 351296
Receiving objects: 100% (352391/352391), 221.82 MiB | 30.28 MiB/s, done.
Resolving deltas: 100% (264852/264852), done.        

All of these messages are directed to standard error. That’s a reminder to capture both stdout and stderr when diagnosing program output. If this command ever failed, then the error message and/or the last message printed before the failure is a clue to how far the command got in the code path before aborting.
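When saving output for later analysis, make sure both streams end up in the capture; for example, in a POSIX shell:

# Send stdout to a file and redirect stderr to the same place:
git clone https://github.com/git/git.git git > clone.log 2>&1
# Or watch the output live while also saving a copy:
git clone https://github.com/git/git.git git 2>&1 | tee clone.log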

Logs

Log files are another piece of output crucial to troubleshooting. One should become familiar with where the logs for the various components of a software system are stored. In a distributed system they could be stored on different hosts. Other times, they might be shipped to a central database. You can utilize log file messages the same way that you utilize console and trace messages. Usually, software provides switches that increase or decrease the verbosity of the log files. Log files usually include timestamps indicating when a message was printed. I use those timestamps to correlate messages in one log file with messages in another. When viewing log messages that might have been caused by a system problem, I’ll peruse the operating system logs looking for any system problems that happened at the same time. On Linux, those logs are located in ‘/var/log’.

It is a best practice, in terms of observability, to collect log information in a central place. You can use that information to proactively address problems before users report them. For example, my team has used such logs to pinpoint performance issues caused at remote sites. Software behaves in unexpected ways when it is out in the wild. Log files are your window into that craziness.
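As an example of that timestamp correlation, on a systemd-based Linux host you can pull up the system journal for the same window as a suspicious application log message (the timestamps and file name below are placeholders):

# Show kernel and service messages around the time the application logged its error:
journalctl --since '2024-05-14 09:40:00' --until '2024-05-14 09:45:00'
# On hosts that log to plain files, search the traditional locations instead:
grep -i error /var/log/syslog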

Trace Messages

Sometimes the information printed to the console isn’t sufficient to debug an issue. There might not be enough messages being printed in the problematic code to help you narrow down the scope of the problem. Most software has a mode that enables trace messages to be printed to the console. Trace messages increase the verbosity of the output, giving you a deeper look into what the software is doing. For example, to enable tracing in Git you can set environment variables like:

  • GIT_TRACE
  • GIT_CURL_VERBOSE

With GIT_TRACE set to 1 a local git clone operation results in the following output:

ryan@Ryans-Macbook-Pro tmp % git clone t.git t-backup.git
09:41:43.858699 exec-cmd.c:139          trace: resolved executable path from Darwin stack: /Library/Developer/CommandLineTools/usr/bin/git
09:41:43.859396 exec-cmd.c:238          trace: resolved executable dir: /Library/Developer/CommandLineTools/usr/bin
09:41:43.859763 git.c:460               trace: built-in: git clone t.git t-backup.git
Cloning into 't-backup.git'...
09:41:43.870815 run-command.c:655       trace: run_command: unset GIT_DIR; GIT_PROTOCOL=version=2 'git-upload-pack '\''/Users/ryan/tmp/t.git/.git'\'''
09:41:43.880924 exec-cmd.c:139          trace: resolved executable path from Darwin stack: /Library/Developer/CommandLineTools/usr/libexec/git-core/git-upload-pack
09:41:43.881542 exec-cmd.c:238          trace: resolved executable dir: /Library/Developer/CommandLineTools/usr/libexec/git-core
09:41:43.881797 git.c:460               trace: built-in: git upload-pack /Users/ryan/tmp/t.git/.git
done.        

Without the extra verbosity, the only message you get is ‘Cloning into 't-backup.git'...’. With the added verbosity, and without looking at the source code, you can discern that git clone is implemented by calling git-upload-pack. The extra verbosity gives you a more granular look into what the software is doing and which component is behaving unexpectedly.
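To enable tracing for a single invocation without changing your environment permanently, you can set the variable inline (POSIX shell syntax; the repository names are from the example above):

# Enable tracing for just this one command:
GIT_TRACE=1 git clone t.git t-backup.git
# Or export it for the remainder of the shell session:
export GIT_TRACE=1 GIT_CURL_VERBOSE=1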

If you have access to the source code, it is sometimes useful to add your own trace messages if the ones currently available are not sufficient. Suppose you suspect that the root cause of a problem lies between points A and B in a code path, and suppose those points are very far apart. One technique is to use bisection: add your own trace messages at the halfway point between A and B that give you a clue as to whether the problem occurs before or after that point. You’ve now halved the scope of your problem, and if needed you can recursively bisect the new scope until you arrive at the problematic code. When performing this exercise, evaluate whether the trace messages you add should be made permanent so that future maintainers of the code can make use of them.

Trace messages can assist you in narrowing down the scope of an issue to a failing component or lines of code. However, if that component has many different callers, it can sometimes be difficult to trace the function calls that led to the failing component. In these cases, if the capability is available, it is a good idea to have the software print out a stack trace in the module that is failing. A stack trace gives you the stack of function calls that led to the area of code that you are interested in.
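If the software cannot print its own stack trace, you can sometimes capture one externally. For a native process on Linux, a sketch using gdb (the process ID is a placeholder, and debug symbols make the trace far more readable):

# Attach to the running process, print the call stack of every thread, then exit and detach:
gdb -p 12345 -batch -ex 'thread apply all bt'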

Artifacts

As software runs, in addition to its messages, it sometimes leaves behind artifacts, little bread crumbs allowing you to trace the program’s execution. Artifacts include things like newly created files and directories, updated records in a database, the creation of background processes or threads, network connections to servers, etc. These artifacts offer proof that certain code paths were or were not taken. Later, we will peek at the source code for the ‘git clone’ command, which calls ‘mkdir()’ to create a directory for the clone. If ‘git clone’ fails, the clone directory does not exist, and an error message was not printed, then there are a couple of possibilities:

  • the code never reached the call to mkdir()
  • mkdir() was called but failed to report its error to its caller

Of course, the other information you collect will help you rule out these possibilities. The point is that the clone directory is an artifact helping you confirm how far the program got during execution.
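Checking for an artifact is usually a one-liner; for the clone example, something like the following (the directory name is taken from the example above):

# Does the clone directory exist, and with what owner and permissions?
ls -ld git
# When was it created or last modified relative to the failure?
stat git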

Operating System Tools

Sometimes, you need to use system utilities to determine what the software is doing. On Linux platforms the following utilities are invaluable:

  • ps - list the process status of processes on the system
  • strace - trace the system calls a process is making
  • lsof - list the open files of the system
  • top - view the processes consuming the most resources on the system
  • df - display free disk space
  • quota - display disk usage and limits
  • netstat - show the network status of a system
  • ping - view the latency and network stability between hosts
  • traceroute - print the route packets take to a network host
  • host - DNS lookup utility, verify hostname resolution

If a user reports that a command is slow on a particular machine, one of the first places I look is the output of the ‘top’ command to see how loaded the machine is and how much memory remains. If a user reports a hanging command, I’ll use ‘ps’ to view the process’s status. I’ll also use ‘strace’ to possibly view the system call that the command is hanging on. If I suspect that a disk is full, I’ll use ‘df’ to show the free space on the disk. In some cases, the disk might have available space but be out of available inodes. In that case, ‘df -i’ is a useful command. ‘df’ can also tell me if a directory is mounted from a remote machine via NFS, SMB, or another file sharing protocol. If I suspect a network problem, I can test the network connection between two hosts with ‘ping’. The ‘traceroute’ command can tell me the path that packets take between two machines, perhaps explaining why latency is so high.
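A first-pass triage on a Linux host might string a few of these together; a sketch, where the process ID, path, and hostname are placeholders:

top -b -n 1 | head -n 20        # snapshot of load, memory, and the busiest processes
ps -fp 12345                    # status of a specific process
strace -p 12345                 # system calls the process is making right now
df -h /data && df -i /data      # free space and free inodes on the filesystem
ping -c 4 fileserver            # basic reachability and latency to another host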

Source Code

The definitive description of the behavior of a program is the source code itself. To use it effectively, you need to apply all the information you have collected so far to determine which part of the source code to examine. The most trivial example is to search the source code for a message (error or otherwise) printed by the program before it ended. Suppose the ‘git clone’ command printed out the following:

ryan@Ryans-Macbook-Pro t % git clone https://github.com/git/git.git git
fatal: could not create work tree dir 'git': Permission denied        

You could search for that error message like so:

ryan@Ryans-Macbook-Pro git-2 % grep -r 'create work tree' *
builtin/clone.c:            die_errno(_("could not create work tree dir '%s'"),
<snip>        

Next you could look at the code surrounding that error message to determine its cause:

        if (dest_exists)
            junk_work_tree_flags |= REMOVE_DIR_KEEP_TOPLEVEL;
        else if (mkdir(work_tree, 0777))
            die_errno(_("could not create work tree dir '%s'"),
                  work_tree);        

Now you know that the ‘mkdir()’ function failed. Most seasoned engineers know what permissions are needed for ‘mkdir()’ to succeed. However, if needed, one could read the man page for ‘mkdir()’ to determine all the possible causes of a ‘Permission denied’ error.

Simplify the problem

Once you identify the failing component of the software, it is sometimes useful to test that component in isolation from the rest of the system. For example, if you find a problem in a particular module, you could write some code that exercises the module’s functions the same way production does, in a way that reproduces the problem. This is why, when designing and building software, it is important to consider modularity. The more modular a system is, the easier it might be to test a module in isolation.

Generate Hypotheses

At this point you should have enough information to generate a list of hypotheses, guessing at the root cause of the problem. If the software had issues creating a file or directory, you might surmise that the disk was full or there were permissions issues. If the file or directory was hosted on a remote machine, there might be network issues with that machine or the machine itself may be having issues. While debugging the code associated with a failing component, you may have spotted a bug. It’s important that the hypotheses that you generate are testable. If they are not, then you will have trouble ruling them out, one by one.

Isolate the root cause

If you have a list of hypotheses, the next step is to rule them out, one by one in a methodical manner, until you have one remaining that has the greatest probability of being correct.

Plan, Implement, and Test the Solution

Now that you’ve identified the root cause, the next step is to plan a solution that addresses the reported problem. The solution should be broken up into two categories:

  1. Meeting the users’ immediate requirements
  2. Preventing users from having to open similar cases in the future

Let’s take our example above regarding the ‘git clone’ failure:

ryan@Ryans-Macbook-Pro t % git clone https://github.com/git/git.git git
fatal: could not create work tree dir 'git': Permission denied        

We have looked at the source code, and we are certain of the root cause. Before responding to the user, we first reproduce the problem by trying to create a clone in a directory that does not have write permissions. If we reproduce the exact error encountered by the user, then we can rest assured that whatever we do to fix the problem on our end will also help the user. Next, we test the workaround, which is to open up the permissions of the directory or choose another parent directory. Obviously, this is a very trivial example that does not warrant this level of testing. However, I mention it because countless times I’ve seen support and software engineers provide solutions to users that have not been tested, resulting in costly back-and-forth communications and a frustrating user experience.
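Reproducing the failure and verifying the workaround takes only a few commands; a sketch (the directory names are arbitrary, and the exact error text may vary by Git version):

mkdir noperm && chmod a-w noperm && cd noperm
git clone https://github.com/git/git.git git    # expect: could not create work tree dir 'git': Permission denied
cd .. && chmod u+w noperm && cd noperm          # workaround: restore write permission on the parent directory
git clone https://github.com/git/git.git git    # should now succeed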

In this case, the problem was not a bug. The software worked as expected. However, I would also consider whether anything should be done to prevent users from reporting this issue in the future. The error message is fairly brief. Git is used mostly by software engineers, and the vast majority understand what that error message means and how to resolve it. In this case, I would leave the error message as is. However, if this tool were used by novices, and I were getting lots of reports of this issue, then I would consider making that error message much more verbose, improving the user manual, or adding the question to a FAQ. This is a reminder that the user experience encompasses much more than the technology itself.

Document the Process

Documenting the troubleshooting process is important because it:

  • Prevents wasted cycles in stakeholders requesting information from one another
  • Prevents wasted cycles from repeating troubleshooting steps
  • Allows the information in this case to be used in troubleshooting subsequent cases similar to this one

The best practice is to use an issue or support tracking system. This allows an organization to keep close tabs on its relationship with users and other stakeholders.

Communicate With Stakeholders

Throughout the troubleshooting process you should communicate any progress on the case to any interested stakeholders. It’s important to get feedback from these stakeholders, especially users, to ensure that all expectations have been met.

Feeling Stuck

At some point you may think you’ve exhausted all possible root causes, and you feel like you’re stuck in an infinite loop. That is, you’re trying to collect additional information, but you’re not coming up with any new hypotheses. Don’t worry. Here are some tips to get you out of this rut.

Rubberducking

Rubberducking is explaining the problem, and describing any problematic code line by line, in oral or written form to a third party. What does that have to do with rubber ducks? The term comes from a story in the book “The Pragmatic Programmer”, in which a programmer would debug his code by explaining it, line by line, to a rubber duck. I had no idea this technique had such a memorable name. It’s something I’ve always done intuitively. Many times, I’ve had the insight I needed while composing an email to my team describing the problem at hand. Other times, the insight came while whiteboarding with my teammates. Explaining the problem to teammates, or better yet to people outside your team, is extremely effective, because their perspectives will be the least biased. In a follow-up post, I’ll describe how rubberducking saved me more than once.

Take Breaks

Taking breaks is an important technique for coaxing your brain into developing new insights into a problem. Activities that distract your brain from the problem at hand are important. I prefer hiking, cycling, or running. It doesn’t matter what it is. All that matters is that when you return to the problem, you return with a fresh mind, hopefully a mind in which the electrical signals take a different path through your neurons, giving you insights you didn’t have before.

Change Your Perspective

Another trick is to change your perspective. One way is to rearrange your office furniture. I think I rearranged my office three times one year. :) If that seems like overkill, then try something more subtle like:

  • Rearrange the items on your desk
  • Change your desktop theme
  • Change your desktop wallpaper
  • Change your font
  • Take your work outside or to a different room

The point is that a different perspective in your physical space might trigger your mind into seeing the problem from a different perspective.

Avoid Target Fixation

What is target fixation? Target fixation is a human neurological trait in which individuals become so focused on an observed object that they increase their risk of colliding with it. This is a well-known phenomenon to fighter pilots and race car drivers. During a bombing run, a fighter pilot might get so fixated on the reward of hitting a target that they actually collide with the target. Another possibility is that a pilot might be so fixated on an enemy aircraft that they lose sight of that aircraft’s wingman closing in on their six.

What does this have to do with software debugging? Sometimes an engineer can get so focused on a particular hypothesis that they forget to revisit their list of hypotheses and ask themselves whether there might be others worth considering. As an example, once when diagnosing a problem, I was laser focused on the possibility that the code was calling a particular function that must have been failing, but either it was not returning an error or the calling code was not handling the error correctly. However, it turned out that the function was working as designed but was returning unexpected results. The solution in this case was not to handle errors better. It was to call the function a little differently so that it returned the results we were expecting. I'll discuss this issue in more detail in a follow-up post.
