“Coding Enzymes & Digestive Algorithms”
3d-delivery-robot-working – AI-generated image designed by Freepik


I recently wrote a comparison between codecs and the human digestive system, at a very general level (https://www.dhirubhai.net/pulse/codecs-were-human-anne-mari-lummevuo-sntgf). However, since there is so much more that can be equated between these totally different systems, I decided to write an “extended version”, which is like a Director’s Cut, containing material that was not present earlier.

The starting point is the same: to explain the basic functionalities of a codec to an audience not familiar with the coding process, it could be pictured as a human being. A codec is configured to encode input data into output data, in a process with a variety of intermediate actions. Input data for a human is of course food. It is also processed into output data in a process with multiple phases. We seldom eat only one type of food; typically, the “input data” contains a variety of different elements. For example, in our lunch salad there can be lettuce, cherry tomatoes, perhaps carrot slices, pieces of cheese, and chicken. Similarly, in the video input data fed to the codec there are multiple different types of content: different colours, a variety of details, dynamic and static images, and so on.

In order for the codec to encode input data with varying content as efficiently as possible, it is beneficial to split the input data into smaller portions where the content of each individual portion is substantially similar. It is then easy to select an appropriate encoding method for the content in question, instead of applying the same method to all data. There can first be an initial split of the data, e.g. video into frames, prior to splitting into sub-blocks of a similar kind. Thinking of the “raw material” of our salad, the situation is the same. There is first the initial split of ingredients: we don’t eat the whole lettuce but divide it into parts. We don’t put the whole carrot into our salad but cut it into slices that are easier to eat. Notably, this splitting is done before the input data is fed to the system, i.e. when preparing our meal.
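To make the splitting idea concrete, here is a minimal Python sketch (not any real codec’s API; the function name and toy frame are purely illustrative) of dividing a “frame” into fixed-size square blocks, the way a codec partitions input before choosing per-block coding methods:

```python
# Illustrative sketch: split a "frame" (a 2-D list of pixel values)
# into fixed-size square blocks prior to per-block encoding decisions.

def split_into_blocks(frame, block_size):
    """Yield (row, col, block) tuples covering the frame."""
    height, width = len(frame), len(frame[0])
    for r in range(0, height, block_size):
        for c in range(0, width, block_size):
            block = [row[c:c + block_size] for row in frame[r:r + block_size]]
            yield r, c, block

# A toy 4x4 "frame" split into 2x2 blocks gives four blocks, each
# internally uniform and therefore easy to encode.
frame = [[1, 1, 2, 2],
         [1, 1, 2, 2],
         [3, 3, 4, 4],
         [3, 3, 4, 4]]
blocks = list(split_into_blocks(frame, 2))
print(len(blocks))  # → 4
```

Each of the four blocks here contains a single repeated value, which is exactly the “substantially similar content” situation where one simple encoding method per block suffices.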

Both processes can then proceed to iterative splitting, where the data is further divided into smaller parts. Codecs such as AV1, MPEG-EVC and VVC (as well as the Gurulogic proprietary codec GMVC®) divide the data into smaller portions to gain optimal coding quality. Similarly, when eating, we bite our food into pieces as small as possible, to enable better swallowing and transmission to our stomach. The right size of the individual pieces of different data can be defined after analysis of their content. Codecs analyze the data characteristics, and we humans tend to judge the size of a convenient bite based on what we are eating – is the ingredient hard or soft, what is its viscosity, and so forth.
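The iterative, content-driven splitting can be sketched as a simple recursive routine. This is a toy stand-in, assuming a quadtree-style partition like the ones AV1 and VVC use; the uniformity test (value range) is illustrative only, not a real codec metric:

```python
# Hypothetical sketch of recursive ("quadtree"-style) splitting:
# a block is split further only when its content is not uniform
# enough, mirroring how modern codecs partition adaptively.

def quadtree_split(block, min_size=1, max_range=0):
    values = [v for row in block for v in row]
    size = len(block)
    # Stop splitting when the block is uniform enough or already minimal.
    if size <= min_size or max(values) - min(values) <= max_range:
        return [block]
    half = size // 2
    quads = [
        [row[:half] for row in block[:half]],   # top-left
        [row[half:] for row in block[:half]],   # top-right
        [row[:half] for row in block[half:]],   # bottom-left
        [row[half:] for row in block[half:]],   # bottom-right
    ]
    leaves = []
    for q in quads:
        leaves.extend(quadtree_split(q, min_size, max_range))
    return leaves

flat = [[5, 5], [5, 5]]    # uniform content: kept as one piece
mixed = [[1, 9], [9, 1]]   # varied content: split into four 1x1 leaves
print(len(quadtree_split(flat)))   # → 1
print(len(quadtree_split(mixed)))  # → 4
```

The analogy holds: a soft, uniform bite needs no further chewing, while a varied or hard one gets broken down further.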

But what happens to the input data when it is coded? There are different coding algorithms that process the data into a more compressed form, in either a lossy or a lossless way. In lossy compression, data is reduced to a smaller size by omitting something, while lossless compression transforms the data into a different, compressed format without losing anything. Similarly, food is processed in our digestive system by multiple digestive enzymes, the purpose of which is to break down different elements such as proteins, carbohydrates and fats, so that they can be absorbed into the bloodstream.
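The lossless/lossy distinction can be demonstrated in a few lines of Python. Here the standard-library `zlib` stands in for a lossless coder, and a toy quantizer (purely illustrative, not an actual codec algorithm) stands in for a lossy one:

```python
import zlib

# Lossless vs lossy in miniature: zlib round-trips the bytes exactly,
# while a toy quantizer throws detail away for a more compact signal.

data = bytes(range(10)) * 20

# Lossless: compress + decompress recovers the input exactly.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# "Lossy": quantize each byte to the nearest lower multiple of 8; the
# reconstruction resembles, but no longer equals, the input.
quantized = bytes((b // 8) * 8 for b in data)
assert quantized != data
assert len(quantized) == len(data)
```

After quantization, many distinct byte values collapse onto the same ones, which is precisely why lossy pipelines compress better: there is simply less variety left to describe.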

Entropy refers to a measure of the randomness, or unpredictability, of information content. The smaller the entropy of the data, the better it compresses. Therefore, the patented GMVC® Entropy Modifiers aim to reduce the entropy before coding, since smaller entropy is associated with higher compression, which in turn results in reduced data storage requirements, saves energy and enables faster communications. In the human digestive system, the breakdown of input data starts already in our mouth, where chewing softens and warms the food for further processing by the enzymes in saliva. In fact, this kind of pre-processing of the data for better coding (i.e. digesting) could very well be equated with reducing entropy before the actual encoding phase. There is first the mechanical digestion, namely biting the data into smaller parts, and then the aforementioned chemical digestion.
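To see entropy reduction in action, here is a small sketch computing Shannon entropy (bits per symbol) of a byte sequence. The delta transform below is a generic stand-in for a pre-processing step, since the actual GMVC® Entropy Modifier techniques are not described here: smooth data becomes highly repetitive after differencing, so its entropy drops and it compresses better.

```python
import math
from collections import Counter

# Shannon entropy in bits per symbol: the theoretical lower bound on
# how compactly a memoryless coder can represent the data.

def shannon_entropy(data):
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

smooth = bytes(range(256))  # a slowly varying "signal": 0, 1, 2, ..., 255
# Pre-processing stand-in: store differences between neighbours instead.
delta = bytes((smooth[i] - smooth[i - 1]) % 256 for i in range(1, len(smooth)))

print(shannon_entropy(smooth))  # → 8.0 (every byte value distinct)
print(shannon_entropy(delta))   # → 0.0 (every difference is 1)
```

The original signal needs the full 8 bits per byte, while the differenced version is perfectly predictable – chewed into a form the encoder digests almost for free.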

In coding, the data is typically transmitted to a decoder, which outputs the decoded output data, resembling or identical to the input data depending on whether lossless or lossy compression was used. In lossy coding, the process gets rid of unnecessary details and transmits, e.g. streams, only the compressed file or information. The output data is a compressed version where some data is lost. In the human digestive system, it is a bit different. The process takes out, i.e. absorbs into the bloodstream, the most important elements of the input data, and it is the “leftovers” that constitute the output data. In other words, while in lossy coding the residual data is discarded, in the human digestive system the residue is the output.

The encoded data can also be transmitted to a storage, to wait for further transmission to the final destination. For example, a US streaming company could transmit its movies to a local database here in Europe, and when a European subscriber orders the movie in question, it is then streamed from the local database instead of all the way from the U.S. to Europe. (This is actually one use case of Gurulogic Microsystems’ patented Database Coding, US 10,255,315.) Also, in the human body there are storage locations, such as the liver, which stores for example iron and copper. If we change the input data to the human body to information that is visible to the eyes and audible to the ears, then the encoded information is retained in our brains, in either short-term or long-term memory. So, just like codecs can encode different types of data, human beings can also encode a variety of different kinds of information – and ingredients, when talking about food. But of course, the end processes differ.
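The local-database pattern above is, at its core, a cache between origin and subscriber. A minimal sketch (all names here are illustrative, not part of the patented Database Coding method) shows the mechanism:

```python
# Minimal sketch of the "local database" idea: serve a stream from
# nearby storage once it has been fetched, instead of re-transmitting
# from the origin every time.

class EdgeCache:
    def __init__(self, origin):
        self.origin = origin   # e.g. the US streaming company's library
        self.local = {}        # the European database

    def fetch(self, title):
        if title not in self.local:          # first request: pull from origin
            self.local[title] = self.origin[title]
        return self.local[title]             # later requests: served locally

origin = {"movie": b"encoded-bitstream"}
cache = EdgeCache(origin)
cache.fetch("movie")                 # travels across the Atlantic once
assert cache.fetch("movie") == b"encoded-bitstream"
assert "movie" in cache.local        # now stored near the subscriber
```

Only the first request crosses the ocean; every later one is served from storage near the subscriber, just as the liver keeps iron and copper on hand rather than demanding them fresh with every meal.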

When talking about coding data, you cannot totally leave out the aspect of securing the confidentiality of the data by encrypting it. With encryption methods, the encoded data is transformed into a format where it cannot be decoded, i.e. outputted, without decrypting it first. For example, the US streaming company might add encryption to the streams it delivers to the European database, so that only valid subscribers with the tools to decrypt the data, and no unauthorized party, can output the movie. There are multiple different encryption methods, and that would be a topic for an article of its own. In my previous article on this codec-and-human comparison, I concluded that in the human system the output data can be considered decrypted, as the input data is not revealed there.
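The encrypt-then-decrypt round trip can be illustrated with a toy one-time XOR keystream. This is strictly a demonstration of the principle; real streaming services use vetted schemes such as AES, never anything like this:

```python
import secrets

# Toy illustration of the round trip: without the key, the ciphertext
# cannot be "outputted"; with it, the original data is recovered exactly.

def xor_bytes(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"encoded movie stream"
key = secrets.token_bytes(len(plaintext))  # shared only with subscribers

ciphertext = xor_bytes(plaintext, key)     # what travels to the database
recovered = xor_bytes(ciphertext, key)     # what a valid subscriber sees

assert recovered == plaintext
```

An eavesdropper holding only `ciphertext` sees random-looking bytes; the subscriber holding `key` gets the movie back bit-for-bit, which is exactly the access control described above.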

As I mentioned information as one type of input data flowing to the human system, perhaps I’ll conclude that this article of mine is now the output data of all the information that I have gained and gathered during my years of patenting complex coding technologies, and that I have encoded it with methods such as planning, thinking, typing and writing, to mention a few. The coding methods I have used are definitely lossy ones, as so many details have been left out, but don’t worry: they are stored in my brain’s memory for further processing.
