Surprised that your TCP/IP stack received more TCP data than the advertised MSS?

Introduction

Layman's explanation

If you have worked with TCP's Maximum Segment Size (MSS), you know it is defined in the TCP RFCs and that a sender should abide by the MSS advertised by its peer. However, a packet capture may surprise you: the amount of TCP data received in a single packet can exceed the advertised MSS. Is the sender buggy, or is there a valid reason for this? This article explains one common, legitimate cause.


Technical explanation

Linux network drivers, together with the kernel, optimize network data handling at the driver level, and they use several techniques to do so. One of them is coalescing consecutive incoming TCP segments, commonly known as Large Receive Offload (LRO) or Generic Receive Offload (GRO). The stack keeps the IP header of the first segment and merges the payloads of the following consecutive segments into one larger packet, saving the cost of processing the extra headers. As a result, the packet you see can be bigger than the advertised MSS.
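One way to observe this on a live system is to capture traffic on the receiving host and compare the reported packet lengths with the MSS advertised during the handshake. This is a suggested check, not part of the original article; the interface name eth0 and port 80 below are placeholders for your own interface and service.

    # Capture incoming TCP traffic and print IP total lengths (eth0 and port 80 are placeholders)
    sudo tcpdump -i eth0 -nn -v 'tcp port 80'

With GRO or LRO active, the SYN/SYN-ACK will show an MSS option of, say, 1460 bytes, while some data packets in the capture report lengths of several kilobytes, because the capture point sits after the segments have been coalesced.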


Why this optimization is needed

The MSS is constrained by the MTU (Maximum Transmission Unit). If an IP packet fits within the MTU, it avoids IP fragmentation, so the advertised MSS must also honor the MTU. Due to this restriction, the advertised MSS is usually much smaller than what the receiving host is actually capable of processing in one go.
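As a concrete example, for the common case of a 1500-byte Ethernet MTU with IPv4 and no TCP options:

    MSS = MTU - IP header - TCP header
        = 1500 - 20 - 20
        = 1460 bytes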

The driver-level coalescing described above is done to handle this situation: packets on the wire stay within the MTU, while the host's TCP/IP stack gets to process fewer, larger packets.


Caution 

Due to this optimization, a received packet may become a jumbo packet (>1500 bytes).

The OS network stack accepts this happily. However, if you rely on a custom TCP/IP stack that does not support jumbo packets, this optimization will result in packet-processing issues.


Method to disable this optimization

In Ubuntu Linux, this optimization is enabled by default (GRO is on in the kernel, while LRO depends on the driver). The ethtool utility can be used to check the current setting and disable it.
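A minimal sketch of the ethtool commands involved, assuming the interface is called eth0 (replace it with your own interface name); disabling these offloads typically needs root privileges and may reduce receive throughput:

    # Show current offload settings; look for generic-receive-offload and large-receive-offload
    sudo ethtool -k eth0

    # Turn off GRO, and LRO if the driver supports it
    sudo ethtool -K eth0 gro off
    sudo ethtool -K eth0 lro off

Note that ethtool -k (lowercase) only displays the feature list, while ethtool -K (uppercase) changes it, and the change does not persist across reboots unless you add it to your network configuration.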

