Quality checks in the warehouse: Leveraging sampling methods with R for informed decision-making

Introduction

Quality checking (QC) is a key process in warehouse operations, ensuring that the products received from suppliers meet the required standards before they are stored or dispatched. Effective QC methods help maintain product quality, reduce returns, and enhance customer satisfaction. In this article, we will explore various QC methods used in warehouses, with a specific focus on sampling-based QC, the most popular of them all. We’ll also dive into how R can be used to quantify the risks associated with sampling and optimise decision-making.

What are the different QC methods in warehousing?

Various methods exist for quality checks, but most typically apply to manufacturing or production environments. Here we focus specifically on methods used in a warehouse.

  1. Visual Inspection: Manually checking items for visible defects or damages. While cost-effective, it may not catch all defects.
  2. Automated Inspection: Using technologies like barcode scanners, RFID tags, and AI-powered vision systems to automate the inspection process. It’s efficient and reduces human error, but is expensive to implement and maintain.
  3. Functional Testing: Verifying that products function as expected. This method needs an in-depth technical understanding of the product.
  4. Sampling-Based Quality Control: Inspecting a representative sample from a lot rather than checking every item. This method balances inspection costs with quality assurance but introduces the risk of accepting defective lots or rejecting good ones.

Challenges Typically Faced in Sampling Methods

While sampling-based QC is a popular approach due to its cost-effectiveness and practicality, it comes with its own set of challenges:

  • Risk of misjudgement : Two types of error arise here. The first is the risk borne by the supplier, or Type I error: a good lot is incorrectly rejected based on the sample inspection. This can strain relationships with suppliers, leading to unnecessary returns and associated costs. The second is the risk borne by the warehouse, or Type II error: a defective lot is incorrectly accepted because defects present in the rest of the lot were missed in the sample, leading to quality issues downstream. Both errors are made concrete in the short sketch after this list.
  • Sample size determination : Determining the optimal sample size (n) and acceptance criteria (c) is critical. Too small a sample might not be representative, increasing the likelihood of errors, while too large a sample might negate the cost savings that sampling provides.
  • Supplier consistency : The effectiveness of sampling heavily depends on the supplier’s historical quality performance. Variability in supplier quality can make it difficult to set consistent sampling criteria, requiring continuous adjustment and monitoring.
  • Communication and Transparency : Ensuring that both the warehouse and supplier understand and agree on the sampling plan is crucial. Misalignment can lead to disputes, especially if the supplier perceives the sampling method as unfair or overly strict.
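
To make the two error types concrete, here is a minimal R sketch. The plan below, inspecting n = 200 units and accepting the lot if at most c = 5 defectives are found, is a hypothetical example chosen purely for illustration:

# Hypothetical sampling plan: inspect n = 200 units,
# accept the lot if at most c = 5 defectives are found
n <- 200
c <- 5

# Type I error (supplier's risk): a good lot (1% defective) gets rejected
1 - pbinom(c, n, 0.01)  # roughly 2%

# Type II error (warehouse's risk): a bad lot (5% defective) gets accepted
pbinom(c, n, 0.05)      # roughly 6%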


Deep dive into sampling-based QC

Let's consider a scenario where a warehouse needs to perform QC on incoming shipments from a supplier with a historical defect rate of 2%. The warehouse adopts a sampling strategy, inspecting 10% of the units in each shipment. To assess the effectiveness of this approach, we use R to simulate and quantify the risks involved.
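
Before running the full analysis, a quick sanity check with R's built-in pbinom() function shows how such a plan behaves. The acceptance number c = 25 below is a hypothetical choice, purely for illustration:

# Scenario: a 10,000-unit lot, 10% inspected, historical defect rate of 2%
n <- 1000  # sample size: 10% of the lot
c <- 25    # hypothetical acceptance number: accept if at most 25 defectives

pbinom(c, n, 0.02)  # acceptance probability at the historical 2% rate: ~0.9
pbinom(c, n, 0.04)  # drops to ~0.01 if quality slips to a 4% defect rate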

Simulating and Quantifying risks with R

We will use R to calculate and visualise the probability of accepting a lot, plotted as the Operating Characteristic (OC) curve, for different sample sizes (n), acceptance criteria (c), and defect rates (p). We also explore the supplier’s risk (Type I error), i.e. the probability of rejecting a good-quality lot, and the warehouse’s risk (Type II error), i.e. the probability of accepting a bad-quality lot.

Assume that lots from the supplier average 10,000 units and that the warehouse has the capacity to QC at most 10% of a lot, so the sample size (n) can take values up to 1,000. Let's also assume the acceptance criterion (c) can be up to 100, and that the supplier's defect rate (p) can range from 1% to 20%. The code below defines these ranges and creates all possible combinations of n, c and p values.

library(tidyverse)

# Define the ranges of n, c and p values to analyse
n_values <- seq(500, 1000, by = 100)
c_values <- seq(20, 100, by = 10)
p_values <- seq(0.01, 0.2, by = 0.01)

# All possible combinations of n, c and p
expand_grid(n = n_values, c = c_values, p = p_values)

Under this sampling method, the number of defective units found in the sample follows a Binomial distribution, so the probability of accepting the lot is the probability of finding at most c defectives among the n units inspected when the true defect rate is p. More theory can be read here - Wiki page

R has a built-in function, pbinom(), to calculate exactly these cumulative probabilities for our n, c and p values.

# Calculate the acceptance probability for each combination of n, c and p
results <-
  expand_grid(n = n_values, c = c_values, p = p_values) %>%
  mutate(prob_accept = pbinom(c, n, p))
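
As a quick check on what pbinom() computes, the same acceptance probability can be reproduced by summing the binomial probability mass function with dbinom():

# pbinom(c, n, p) is the cumulative sum of dbinom(k, n, p) over k = 0..c
pbinom(30, 500, 0.05)
sum(dbinom(0:30, 500, 0.05))  # identical value, built up term by term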

The operating characteristic (OC) curve visualises this data nicely. In the plot below, notice how the probability of accepting the lot approaches 0% as the defect rate (p) increases. Also note that, for a given acceptance criterion, larger sample sizes make the curve drop more steeply, so discriminating at small defect rates requires larger samples.

# Plot the OC curve
results %>%
  ggplot(aes(x = p, y = prob_accept, color = factor(n))) +
  geom_line(size = 1.1) +
  geom_point() +
  facet_wrap(~c, labeller = label_both) +
  scale_x_continuous(labels = scales::percent) +
  scale_y_continuous(labels = scales::percent) +
  labs(
    title = "Operating Characteristic (OC) Curve",
    subtitle = "Facets created for all values of (c)",
    x = "Defect rate of the supplier (p)",
    y = "Probability of Accepting the Lot",
    color = "Sample Size (n)") +
  theme_minimal() +
  theme(strip.background = element_rect(
    color = '#b5b1b1', fill = '#b5b1b1'),
    strip.text = element_text(
      face = "bold.italic"))
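
If a specific plan is already on the table, it can be easier to read the numbers off directly than to scan the facet grid. For example, to inspect the candidate plan n = 1000, c = 40:

# Acceptance probabilities for a single candidate plan
results %>%
  filter(n == 1000, c == 40) %>%
  select(p, prob_accept)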

Wait, this looks too dense for either the supplier or the warehouse team to base decisions on. So let's plot the supplier and warehouse risks separately, for sampling plans where the risk is limited to 10% or less. This should help in finding common ground.

# Plot the supplier risk curve, limited to supplier_risk <= 0.1 and defect rate p <= 0.04
results %>%
  mutate(supplier_risk = 1 - prob_accept,
         warehouse_risk = prob_accept) %>%
  filter(supplier_risk <= 0.1, p <= 0.04) %>%
  ggplot(aes(x = p, y = supplier_risk, color = factor(n))) +
  geom_line(size = 1.1, alpha = 0.5) +
  geom_point() +
  scale_x_continuous(labels = scales::percent) +
  scale_y_continuous(labels = scales::percent) +
  facet_grid(n ~ c, labeller = label_both) +
  theme_minimal() +
  theme(legend.position = "none") +  # facet labels already show n
  theme(strip.background = element_rect(
    color = '#b5b1b1', fill = '#b5b1b1'),
    strip.text = element_text(
      face = "bold.italic"
    )) +
  labs(title = "Supplier risk - Type I error",
       x = "Defect rate of the supplier (p)",
       y = "Probability of rejecting the lot")

Notice that the supplier's risk becomes almost zero whenever the acceptance criterion (c) is generous relative to the expected number of defects in the sample (n × p): relaxing the criterion protects the supplier, while enlarging the sample at a fixed c actually increases their risk. The warehouse team will obviously try to negotiate in the other direction.


# Plot the warehouse risk curve, limited to warehouse_risk <= 0.1 and defect rate p <= 0.08
results %>%
  mutate(supplier_risk = 1 - prob_accept,
         warehouse_risk = prob_accept) %>%
  filter(warehouse_risk <= 0.1, p <= 0.08) %>%
  ggplot(aes(x = p, y = warehouse_risk, color = factor(n))) +
  geom_line(size = 1.1, alpha = 0.5) +
  geom_point() +
  scale_x_continuous(labels = scales::percent) +
  scale_y_continuous(labels = scales::percent) +
  facet_grid(n ~ c, labeller = label_both) +
  theme_minimal() +
  theme(strip.background = element_rect(
    color = '#b5b1b1', fill = '#b5b1b1'),
    strip.text = element_text(
      face = "bold.italic"
    )) +
  theme(legend.position = "none") +  # facet labels already show n
  labs(title = "Warehouse risk - Type II error",
       x = "Defect rate of the supplier (p)",
       y = "Probability of accepting the lot")

Notice that the warehouse risk shrinks towards zero for larger sample sizes and stricter criteria, but larger samples come at additional inspection cost. So the warehouse would prefer a smaller sample size combined with stricter acceptance criteria.

From the above two plots, we can see that the sampling plan of n = 1000 and c = 40 keeps the supplier's risk below 10% for defect rates up to ~3%, and the warehouse's risk below 10% for defect rates of ~5% and above. Since the supplier's historical defect rate is 2%, this seems to be a plan that both parties can agree upon.
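
These cut-offs can be double-checked directly with pbinom():

# Verify the negotiated plan (n = 1000, c = 40)
1 - pbinom(40, 1000, 0.03)  # supplier's risk at a 3% defect rate: under 3%
pbinom(40, 1000, 0.05)      # warehouse's risk at a 5% defect rate: ~8%
pbinom(40, 1000, 0.07)      # warehouse's risk at a 7% defect rate: well below 1%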

In a practical scenario, this negotiation has to be done with multiple suppliers periodically. A simple web app to visualise this can help with real-time decision-making - App link
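
As an illustration, a minimal version of such an app can be sketched with the shiny package, reusing the same pbinom() logic. The layout and slider defaults below are assumptions for the sketch, not necessarily what the linked app does:

library(shiny)
library(tidyverse)

ui <- fluidPage(
  titlePanel("Sampling plan risk explorer"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("n", "Sample size (n)", min = 100, max = 1000, value = 500, step = 50),
      sliderInput("c", "Acceptance number (c)", min = 0, max = 100, value = 40, step = 5)
    ),
    mainPanel(plotOutput("oc_curve"))
  )
)

server <- function(input, output) {
  output$oc_curve <- renderPlot({
    # OC curve for the currently selected plan
    tibble(p = seq(0.01, 0.2, by = 0.01)) %>%
      mutate(prob_accept = pbinom(input$c, input$n, p)) %>%
      ggplot(aes(x = p, y = prob_accept)) +
      geom_line(size = 1.1) +
      scale_x_continuous(labels = scales::percent) +
      scale_y_continuous(labels = scales::percent) +
      labs(x = "Defect rate of the supplier (p)",
           y = "Probability of accepting the lot") +
      theme_minimal()
  })
}

shinyApp(ui, server)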

How Warehouse Teams Can Use This Analysis

Warehouse teams can leverage this analysis to make data-driven decisions and align with suppliers on the most appropriate sampling plans and criteria:

  1. Risk assessment and negotiation: By visualising the supplier and warehouse risks for various sampling plans, warehouse teams can have informed discussions with suppliers about the acceptable level of risk. For instance, if a supplier is consistently performing well, the warehouse might agree to a more lenient sampling plan, reducing both inspection costs and time.
  2. Customising sampling plans: The analysis allows for the customisation of sampling plans based on different defect rates, sample sizes, and acceptance criteria. This flexibility ensures that the QC process is tailored to the specific characteristics of each supplier, leading to more accurate and fair outcomes.
  3. Continuous improvement: The results of the sampling plan analysis can be used to track supplier performance over time. If a supplier shows consistent improvement in quality, the warehouse may consider adjusting the sampling plan to be less stringent, fostering a collaborative relationship focused on continuous improvement.
  4. Setting clear expectations: By sharing the analysis with suppliers, warehouses can set clear expectations and establish transparency in the QC process. This mutual understanding can help prevent disputes and ensure that both parties are aligned on quality standards.

Conclusion

Sampling-based quality control is an essential tool in warehouse operations, especially when 100% inspection is not feasible. However, the risks associated with sampling need to be carefully managed. By using R to simulate and quantify these risks, warehouses can make informed decisions that balance efficiency with quality assurance. This approach not only helps in maintaining product standards but also in optimising operational costs. Open communication and collaboration with suppliers, supported by data-driven insights, ensure that the QC process is effective and fair for all parties involved.

If you’re interested in exploring how these techniques can be implemented in your warehouse operations or have any questions about the R code provided, feel free to reach out! Refer to this GitHub repo for the complete code - repo link

Credits - The statistical application used here is based on the book Statistics for Business and Economics by Anderson and Sweeney.

