Truth Happens to an Idea: Adding Rigor to Best Practices

The implementation of Best Practices is necessary to excel in any industry. Too often, however, the term Best Practice is tossed about in a casual way and applied to ideas or practices that have not experienced any type of vetting or examination. Achieving 'bestness' isn't that difficult, but it does require some rigor and research to build sufficient trust in a practice. In this article I explore four examples from the literature that dovetail into the philosophy that truth happens to an idea.

Our first paper, "Best Practices: A Methodological Guide for the Perplexed," encapsulates the appeal of Best Practices:

"Though overly simplistic, the logical appeal of a "best practice" is profound, particularly when individuals and organizations operate in competitive and hostile environments"

I have always suspected that Best Practices, as generally practiced, is exactly what the authors term it above: 'overly simplistic'. The authors assert that common sense approaches to Best Practices, while the most prevalent in the literature, are deeply flawed. This is an assertion I agree with. From the paper:

"Most... sources typically develop a common sense form of "best practices" approach to vet their information... the lack of a more rigorous approach to "best practices" results in several problems. First, a large body of knowledge in decision psychology identifies numerous problems of bias associated with the application of human judgment in forming decisions... the focus of such activities is on action, the clear and careful identification of the causal chain is usually vague and ambiguous. Typically, case-based analysis suffers from weak internal validity; therefore, many of these studies do not specifically deal with alternative explanation for success..."

The authors then go on to point out why one needs to have more rigor in the methodology behind capturing Best Practices and to widen one’s view of the problem space and incorporate a degree of systems thinking:

"in a complex... system it is often impossible to identify the causal chain in order to prescribe an action that rectifies the situation. The weakness is not in the action; it is in our poor understanding of the cause/ effect relationships at work. Thus, it is useful and important to understand how to conduct "best practices" research in order to not only identify actions but also to make sure that the action is the appropriate cause to some desired effect."

Because we all like listicles, I'd like to offer their simple definition of a Best Practice and its three characteristics:

"The term "best practice" implies that it is best when compared to any alternative course of action and that it is a practice designed to achieve some deliberative end. Hence, there are three important characteristics that are associated with a ‘best practice’

1. a comparative process,

2. an action, and

3. a linkage between the action and some outcome or goal."

This simple definition is then broken down into two more complex parts: Comparability and Linkage. Comparability is described as the need to be sure we are essentially comparing apples to apples when gathering data for a Best Practice. Linkage is described as establishing a measurable cause/effect relationship between the Best Practice and the outcome.

The authors, however, do offer some caveats to their own structure:

"While comparability is a necessary condition for identification of a "best practice" it is not sufficient. To be sufficient the cases selected for comparison must include all comparable cases for the relevant domain. The reason for this is that any successful comparative approach can only find the "best case" within the sample... Given this problem, one might think that the solution is to pull a representative sample of cases, but unfortunately, that research strategy does not solve the problem. There is no guarantee that such a sample will include the ultimate "best case." The best possible statements that can be made from a random sample will be probabilistic statements about how far the sample "best case" is from the population "best case," which will also require assumptions about the probability distribution of cases... this is essentially a form of sample inference."

That is a bit dense, so let me see if I can unpack it a bit. I think what the authors are saying here is that comparability alone cannot be the defining factor when establishing a Best Practice. I'm often asked to research published Best Practices for establishing policy or technical approaches to a problem. When I'm lucky enough to parse these out of the research, what is being said here is that a decision based on that research should not be called out as a 'Best Practice', because my sample of cases or codified Best Practices is insufficient evidence to make that assertion. The best case in my sample is almost never the best case in the whole population, and the most I can say is how far off it is likely to be. OK, I wanted to clarify the statement, but now I'm not sure I made it better, so maybe a quick simulation will.
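
Here is a minimal sketch of the sample-inference problem the authors describe. Everything in it is hypothetical: the scores, the population size, and the sample size are all made up for illustration.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 cases, each with a performance score.
population = [random.gauss(100, 15) for _ in range(10_000)]
population_best = max(population)

# A typical study only ever sees a random sample of, say, 50 cases.
sample = random.sample(population, 50)
sample_best = max(sample)

print(f"Population best: {population_best:.1f}")
print(f"Sample best:     {sample_best:.1f}")
print(f"Gap:             {population_best - sample_best:.1f}")
# Rerunning this shows the sample "best" reliably falls short of the
# population "best"; the gap can only be bounded probabilistically,
# which is exactly the authors' point about sample inference.
```

Unless the sample is the complete set of comparable cases, 'best in sample' is all a comparison can ever certify.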

The authors qualify their statement above by saying that:

"it is also necessary to have a clearly articulated causal structure to relate inputs to outputs"

Their suggested way around sample inference (and around what I read as a universal insufficiency of published Best Practices methods as a guide for establishing new ones) is the tracking and codification of a causal structure within the systems of technology and people that lead to the establishment of a Best Practice. A sketch of what such a codified causal structure might look like follows.
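
As a thought experiment, here is one minimal way to codify a causal structure as a directed graph linking inputs to outputs. The practices and effects are invented examples of my own, not taken from the paper.

```python
# Each recorded input maps to the effects it is believed to cause.
causal_map = {
    "weekly design reviews": ["fewer rework cycles"],
    "fewer rework cycles": ["shorter delivery time"],
    "dedicated QC pass": ["fewer field defects"],
    "fewer field defects": ["higher client satisfaction"],
}

def causal_chain(start: str, goal: str, seen=None) -> bool:
    """True if recorded cause/effect links connect a candidate
    practice (start) to the desired outcome (goal)."""
    seen = seen or set()
    if start == goal:
        return True
    seen.add(start)
    return any(
        effect not in seen and causal_chain(effect, goal, seen)
        for effect in causal_map.get(start, [])
    )

# A practice only earns 'best' status if its linkage can be traced:
print(causal_chain("weekly design reviews", "shorter delivery time"))  # True
print(causal_chain("dedicated QC pass", "shorter delivery time"))      # False
```

The value isn't in the code; it's that once the linkage is explicit, a claimed practice either traces to the desired outcome or it doesn't.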

Following further arguments for their approach to the subject, the authors offer us what they term the:

"two [essential] conditions that must be met

1. completeness of cases; and

2. comparability of cases.

To obtain the second condition, it is necessary to have a complete and accurate statement of the causal relationships linking inputs to outputs... [this establishes] a foundation for judging any research design that is aimed at the identification of a "best practice."

The body of the paper digs very deep into how to treat Best Practices using hardcore statistical models. While I don't think a 'common sense' approach is effective or right for implementing Best Practices, I am not saying that swinging the pendulum all the way over to MathNerd City is a good approach either. One last relevant bit from the body of the paper gets at something genuinely important when designing a Best Practice program: the scope of the data collection:

"any 'best practice' design will be, by its very nature, less generalizable than standard social science research design. While not surprising, given the applied nature of 'best practices' work, it does identify some clear suggestions. Delimiting the domain of cases in space and time to define a complete and exhaustive set best approximates completeness. This approach is preferable to the traditional social science approach of random sampling..."


The paper concludes with a fairly relatable appendix offering a checklist for 'Best Practices Research Design' that I think can be adapted to our purposes, adding a bit more rigor to the approach (I sketch one way to encode it right after the list):

"Checklist for 'Best Practices' Research Design Issues

1. How complete (or representative) is the sample?

  a. Comparisons are only as good as the cases in the group.

  b. Random representative samples cannot guarantee a 'best' outcome.

  c. Random samples can provide probability limits on how far the population 'best' is from the sample 'best.'

  d. Limiting geographic and temporal space to achieve completeness is a better strategy than random sampling.

2. How comparable (and on what basis are you determining comparability) are the units in the sample?

3. Have you identified all the major inputs and outputs for the system?

4. Have you used an appropriate structure (i.e., linear or nonlinear) for relating inputs and outputs?

5. Does the nature of the cause and effect relationship (i.e., linear or nonlinear) for the typical case remain structurally the same for the 'extreme' cases?

6. Does the set of input and output variables for the typical case remain the same for the 'extreme' cases, or are different models necessary?"
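
As promised, here is a hypothetical encoding of that checklist as a reusable review artifact. The questions are paraphrased from the authors; the structure, names, and example answer are my own invention.

```python
from dataclasses import dataclass, field

@dataclass
class DesignReview:
    """Tracks which of the paper's design questions a Best Practice
    candidate has actually answered."""
    practice: str
    answers: dict = field(default_factory=dict)

    QUESTIONS = [
        "How complete (or representative) is the sample?",
        "How comparable are the units in the sample?",
        "Have you identified all major inputs and outputs?",
        "Is the input/output structure (linear or nonlinear) appropriate?",
        "Does the causal structure hold for the 'extreme' cases?",
        "Do the input/output variables hold for the 'extreme' cases?",
    ]

    def unanswered(self) -> list:
        """Questions the review has not yet addressed."""
        return [q for q in self.QUESTIONS if q not in self.answers]

review = DesignReview("weekly design reviews")
review.answers[DesignReview.QUESTIONS[0]] = "Complete set: all 2023 projects in our region"
print(review.unanswered())  # the five questions still owed an answer
```

The point is simply that a checklist living in a shared, queryable form is harder to skip than one living in an appendix.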

This paper will take a lot of unpacking and a bucket full of metaphors to be fully relatable to our purposes, so for now, let's move on to the next one on our list.

I'd like to take a moment to note that treating Best Practices as a research problem, complete with hypotheses and scientific rigor, is, in my opinion, a much better framing of the wicked problem of implementing Best Practices.

With that context, let's peek at another paper. The first bit of "Best Practice Research and Postbureaucratic Reform" that jumped out at me is as follows:

"For most of its lifetime social science... has been struggling to improve the use of knowledge."

I like the conciseness of this statement. Knowledge Management, I would argue, has many more analogues to social science than it does to business administration. It is people, not policies, that, as the authors term it, 'improve the use of knowledge'. That is really what Best Practices is about: using knowledge in the most effective way. Understanding Best Practices in this way makes the leap to 'Best Practice Research', as it is defined in the paper, easier. From the authors:

"The most precise definition of BPR is the selective observation of a set of exemplars across different contexts in order to derive more generalizable principles and theories of management..."

The paper goes on to frame BPR [Best Practices Research], a term I am using as a synonym for Best Practices, as a system of inquiry, using inquiry as an analogue to research:

"BPR is pragmatic... Philosophical pragmatism reveals itself through the language-in-use by BPR practitioners. Pragmatists, for example, prefer the word inquiry to research, hence BPR authors often will use the term inquiry to describe their research activities and products... 

Inquiry suggests an open, social process of conflict resolution, problem solving, and social change... 

[inquiry] is willing to take anything, to follow either logic or the sense, and to count the humblest and most personal experiences"

I might actually like this better than framing it as research, even though I am a research nerd. Using the word 'inquiry' in place of research suggests to me something more active, something that moves quicker and digs deeper into the problem, while still maintaining the shape of research and rigor that we (well, me at least) are after.

The authors then drop this real gem from the philosopher William James into the discussion:

"The truth of an idea is not a stagnant property inherent in it. Truth happens to an idea. It becomes true, is made by events."

Best Practices, from a policy-driven, common sense approach, are stagnant. Someone, or a group of someones, decides this is the truth of a situation; it is written down and expected to be acted on the next time a similar situation occurs.

The sentence 'Truth happens to an idea.' is so good, it might become my first knowledge management tattoo.

This implies the method of inquiry, of research, required to imbue a practice with 'bestness'. "It becomes true, is made by events." This threads back to what was discussed in the first paper: the need to track and codify the causal relationships, to map the system that the practice came out of. Wonderful stuff.

The paper continues, dropping in some popular buzzwords, innovation and entrepreneurship:

"BPR is innovative and entrepreneurial... The BPR argument goes something like this: If it is a best practice it must be innovative and entrepreneurial; if it is innovative it must be a best practice... Entrepreneurs should innovate..."

I'm going to have to sit with this one a bit: if it is a best practice it must be innovative, and if it is innovative it must be a best practice. I'm not sure if I agree or disagree with this assertion, but I do like its connotations.

This next quote, I also found enlightening:

"BPR is positive and prescriptive. Few people like to read about the problems they encounter on their jobs, no matter how detailed and incisive the analysis may be. Most professional managers would much rather read about the solutions than the problems and thus be given hope and promise for the future. Best practice researchers respond to this sentiment by avoiding negative analysis and focusing instead on possibility and change. One strength of BPR is its ability to take complex events [and] make them appear simple... make the 'lessons learned' even more simple."

I've been looking for this connection, the one between Lessons Learned and Best Practices. To me, what the authors are talking about here is that 'Lessons Learned' are part of the causal, systems-level observation required to arrive at, in the immortal words of Stephen Colbert, the 'truthiness' of a Best Practice. Connecting this to my exploration of Lessons Learned in After Action Reviews, Kobayashi Maru, and Self-Lacing Shoes: On Practical Approaches to Lessons Learned, the Best Practice is a codified, concise, innovative, and positive action a practitioner should take. The practitioner is able to take for granted that the idea possesses truth because of the research or inquiry process she knows went into establishing it. Yeah. That sounds about right. Particularly telling is the phrase "Most professional managers would much rather read about the solutions than the problems...": Best Practices are a spoonful of sugar the knowledge manager uses to help the Lessons Learned go down. Data from the Lessons Learned process inform two-thirds of the actions used as context for establishing a Best Practice.

We then find a nice connection to what the previous paper discussed regarding the scope of data collection when building the causal map for a Best Practice:

"Many scholars have expressed concern about the limited nature of our personal and organizational learning horizons. The core of the learning dilemma is that 'we learn best from experience but we never directly experience the consequences of many of our most important decisions'"

What the authors are getting at here, I think, is that Best Practices done well need to be focused on specific spans of time, but that focus is limiting when you pull out and view the organization holistically over much greater time-depth. This isn't an argument against Best Practices, but it is a word of caution that BP is not a Philosopher's Stone for your firm. The results might be tangible and impactful, but not necessarily sustainable in a longitudinal sense.

I had to concentrate on methodological research for Best Practices because of the amount of fluff that is tagged with that term. That said, I think we've come up with some valuable tech, or at least meta-tech, related to our own utilization of Best Practices so far. The next paper will be our last look at methodology, in hopes of rounding out our investigation in this area. The author of "Learning from best practices in public and social management: A methodological proposal" begins the discussion with a clarification (from their perspective) of the term Best Practice:

"when are management practices best? Within the rich but not very systematic discussion of “best practices”... we believe that it is possible to clarify what makes them “best” from at least two points of view. One is to ask to what extent the practice achieved the proposed results or whether it achieved better results than did other alternative practices. Another is to ask how the practice worked and also why it did or did not work well. The answer to the first question clearly pertains to evaluation of results and impact. The answer to the latter questions, on the other hand, pertains to the area of analysis of practices."

The author then offers us a piece of our methodology that we were missing, one we have already called out as a problem when working with Best Practices outside of the 'common sense' approach. The quotes below focus on the act of extrapolating Best Practices outside of their original context:

"Given the focus on learning about the “best practices” in a specific context in order to transfer their contributions to another context at a later date, the reader may think that what is most important would be to evaluate the results actually obtained. But is it possible to apply what is learned about one practice in another context without a prior understanding of how and why the practice was able to develop and operate appropriately in its original context? Because the contexts are not equivalent, it does not make sense to replicate or copy a practice, which is why [our approach is to] extrapolate it... to apply our conclusions about a practice in its original context to a different context... To [extrapolate] it is essential to understand how and why the practice developed and operated in its original context, so that we can subsequently clarify (taking into account the differences in the context that receives the practice) whether it will be able to operate in a different situation."

If a successful method is put in place for applying Best Practices after the original context or time-depth is exhausted, this extrapolation, or as the author terms it, 'causal reconstruction', increases and continues the value of the knowledge object. The paper doesn't offer us a ready-made method of extrapolation that we can use, but, for me, the germ of the idea is enough.

The rest of the paper, as was the theme with most of the research I found, dives deep into Best Practices in a specific area, which isn't 100% helpful for our discussion, so I'll move on to the last paper I reviewed on this subject.

Before concluding this article, I wanted to briefly look at a concept that I discovered while looking for literature. It is probably used exclusively in software development, but it was as close as I could come to the opposite of a Best Practice. The concept / knowledge entity is called an Antipattern.

According to the authors of "Antipatterns: A compendium of bad practices in software development":

"The goal of software development is to generate products with high levels of productivity and efficiency that ensure good levels of quality."

I think we can all agree that the goals of an architecture or engineering project align with what is said above. The authors continue:

"To achieve this, it is necessary to avoid the risks introduced by bad practices... These bad practices have been labeled as antipatterns, and occur in different areas. The catalog of antipatterns is an important road map, particularly on dark paths that might be followed when precautions are not taken, and of course, that cause problems in projects."

To me, we are back to that all-important mapping of the cause and effect that leads to a Best Practice, mentioned in our first paper and reiterated when we briefly discussed extrapolation. I think that in order to create a valid map, antipatterns need to be cataloged as well. We might already be doing this work, right? When we are working through our Lessons Learned process. Mapping (and really, just naming) antipatterns in past project work as part of the methodology for codifying Best Practices will complete the picture and allow us to extrapolate better when we are operating out-of-context or too far outside the time-depth of the original inquiry.

So what causes antipatterns? From the paper:

"anti-patterns... are caused by poor abstraction and poor implementation of the theoretical approaches of software. Usually, "shortcuts" and poor analysis approaches lead to malpractice. The time factor developers always have to compete against does not allow thinking more carefully about good practices; even patterns themselves might become anti-patterns when abusing their implementation."

Placing the above into our own context, these theoretical non-software-development antipatterns would be caused by poor implementation on a project, shortcuts, and poor or nonexistent QC ('poor analysis'). We all know the crunch of a deadline and how it affects our thinking. I think we can all relate to a Best Practice being used outside of its context: a past good practice that no longer applies and is not reexamined due to its codified state. This last point is taken further in the first type of antipattern defined by the authors, the 'Top Process':

"It is common that whenever a process is needed, the first choice is to pick the in-fashion process, which is generally proposed by a large organization, a community, a research center or a person or group of people who pool their expertise to propose a rescuing formula. Generally the top process is proposed as the only silver bullet with regard to the process. However, what worked for a particular project environment does not necessarily work for every project environment... The main responsibility for achieving success lies in the process, as an essential tool... "

So, in translation, a 'Top Process' antipattern can be identified as a practice that is being used because it is popular with a particular person, administrative entity, etc., and adopted without further investigation into its applicability to the project at hand. Again, this could be a past Best Practice that was identified and worked well in context, or, and I think this is closer, a trending Best Practice from the 'common sense' camp we identified earlier.

The next antipattern that I find applicable is the 'Super Process':

"Explaining any phenomenon from all angles is an approach that can be adopted. Similarly, using complexity to explain a software process is another way; [for example] 'let us take a contemporary cloth, it uses flax, silk, cotton, and wool of various colors. For that cloth, it would be interesting to know the laws and principles concerning each of these types of fibers. However, the sum of knowledge about each of these types of fibers that form the cloth is insufficient to meet not only the new reality which is the tissue, that is, the qualities and specific properties of the texture, but also to help us understand the shape and configuration'"

In other words, the project is not just the sum of its parts. The Super Process antipattern, outside of its original context, could be the process of trying to analyze all of the moving parts and human capital in a project in an effort to identify who or what led to the mistakes. When we systems-think, it is important to be cognizant of the fact that the project has unique qualities brought about by a synergy of all the components working for and against it. A lot of time in project research can be wasted by following the Super Process antipattern and ignoring the project as a whole entity.

Not everything in this paper applies; I had to skip over a couple of antipatterns that I could not see an analogy for outside of software development. The next one that landed, however, is a big one that I see all the time in my office: the Slide Process. Again, from the authors:

"In a slide process, it is typical to start at a certain speed and finish with acceleration. In the same way, a process without rhythm starts with extended times in its initial phases and have tight schedules in development and deployment phases. A slide process does not control time, delaying projects; it also accelerates at critical stages, sacrificing product quality. These processes end up adjusting schedules, paying fines, conducting renegotiations, and making considerable losses for the organization"

I don't think I have to illustrate this further. This is a common ailment of all project work. What is unique, however, is that now we have a name that we can apply to it when creating our map.

I think a slide process antipattern is likely slithering around your office in some form, RIGHT NOW!

Another very familiar antipattern is the Domino Process:

"A development process tempered by a high interdependence between the activities that constitute its workflow will result in a domino process.

Initial activities are critical and cause exponential effects on final activities to the point that it becomes impossible to produce an activity i+1 if you have not fully completed activity i.

A domino process leads to stiffness and reduces the possibility of feedback at early stages in the workflow. A problem is detected when the cost has increased considerably, leading to elongation in the schedules, as well as to inefficient use of resources."

In other words, when a project isn't scheduled correctly, or a team doesn't have enough redundancy of skill sets, the absence of a team member, or any other hang-up that stops activity i from happening, brings the entire project to a halt. A small sketch of what a domino workflow looks like as data follows.
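
Here is a hypothetical sketch of spotting a domino workflow: a task graph where every activity strictly depends on the one before it, so any single blocked task halts everything downstream. The task names are invented for illustration, not drawn from the paper.

```python
# Each task maps to the tasks it cannot start without.
dependencies = {
    "field survey": [],
    "schematic design": ["field survey"],
    "design development": ["schematic design"],
    "construction docs": ["design development"],
}

def blocked_by(task: str, stalled: str) -> bool:
    """True if 'task' cannot proceed while 'stalled' is incomplete."""
    deps = dependencies[task]
    return stalled in deps or any(blocked_by(d, stalled) for d in deps)

# If the survey stalls, every downstream activity stalls with it:
halted = [t for t in dependencies
          if t != "field survey" and blocked_by(t, "field survey")]
print(halted)  # ['schematic design', 'design development', 'construction docs']
```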

The next antipattern I'd like to include is called the Headless Process:

"Poorly managed processes, and /or processes with leadership problems in the various disciplines, are referred to as headless processes. This type of process does not define clear functional objectives and responsibilities, there is a poor identification and assessment of the roles and therefore there is no adequate assessment of the disciplines; activities usually focus on the production of code without ensuring appropriate quality conditions; moreover, ad-hoc delegations occur. Headless processes exhibit exaggerated rotation of staff, stalling the workflow and leading to an abrupt end with unfavorable implications for the parties involved"

I think we've all experienced some degree of Headlessness at some time in our career.

This paper's conclusion helps drive some of my above points home:

"One of the advantages of having a catalog of antipatterns for software processes is to implement the catalog using automated tools, which allows timely identification of a bad practice within a process... The anti-patterns generate a vocabulary and a list of risks that can arise when using a software process. This vocabulary facilitates effective communication between the different roles of the process and contributes to failure detection and quick response whenever risks are encountered in a project."

Again, translating to a more global context, our advantage in having a catalog of antipatterns is that identifying these bad practices becomes that much easier, allowing us to innovate and find our way towards good and eventually Best Practices. I am big into linguistics and lexicons, so I know their power. I think that co-opting the antipattern terminology within the context of the Best Practice could be a very useful experiment. To make the 'automated tools' idea concrete, a small sketch follows.
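
Here is a minimal sketch, entirely my own invention, of what an automated antipattern catalog check might look like outside software: each entry pairs a named antipattern with a cheap signal drawn from hypothetical project-tracking data. The thresholds and metric names are placeholders, not recommendations.

```python
from typing import Callable

# Each named antipattern maps to a check over a project's metrics.
CATALOG: dict[str, Callable[[dict], bool]] = {
    "Slide Process": lambda p: p["pct_schedule_used"] > 0.6 and p["pct_work_done"] < 0.3,
    "Headless Process": lambda p: p["roles_unassigned"] > 0,
    "Domino Process": lambda p: p["max_dependency_chain"] >= p["task_count"],
}

project = {
    "pct_schedule_used": 0.7, "pct_work_done": 0.2,
    "roles_unassigned": 2, "max_dependency_chain": 3, "task_count": 8,
}

flags = [name for name, check in CATALOG.items() if check(project)]
print(flags)  # ['Slide Process', 'Headless Process']
```

Even a crude check like this gives the shared vocabulary the authors describe a place to live, so that 'failure detection and quick response' is a query rather than a hunch.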

What did we come away with in our investigation of Best Practices?

In summary, Best Practices need to be derived via a process that compares and codifies the causal chain, the activity within both our technology and human capital systems, over a specific period of time. This process results in an action, the Best Practice, that relates to this context.

The process should have some checks and balances, such as a metric for how complete or representative the sample of data that goes into the causal map is, where 'completeness' is related to our scope for data collection (time span, practice area, market area, or other delimiting factors).

The above process can be guided by the statement:

Truth happens to an idea.

Meaning that it cannot be considered a Best Practice until the causal map, scope, etc. are established and followed through on. The truth of the Best Practice is not an inherent quality; it has to have a proof.

Once a Best Practice and its associated data and mapping are complete, it is important to understand that it should not be taken out of context without some further investigation or extrapolation. 

And finally (theoretically), the mapping should include the identification of antipatterns. Antipatterns in project work can be stretched across different contexts and time-depths, as they are typically the result of human nature and the universal challenges of project work. Identifying and naming the antipatterns that work against, or are solved by, a Best Practice makes our overall process more efficient.

References

Bolaños Castro S J, González Crespo R and Medina García V H (2011) Antipatterns: A compendium of bad practices in software development. International Journal of Artificial Intelligence and Interactive Multimedia (1, 4) pp 42-47.

Bretschneider S, Marc-Aurele F J and Wu J (2005) "Best Practices" research: A methodological guide for the perplexed. Journal of Public Administration Research and Theory: J-PART (15, 2) pp 307-323.

Cortazar J C (2005) Learning from best practices in public and social management: A methodological proposal. Paper for the United Nations Public Administration Network. pp 1-16.

Overman E S and Boyd K J (1994) Best practice research and postbureaucratic reform. Journal of Public Administration Research and Theory (4, 1) pp 67-83.

