Squeezing Bits, Securing Bytes

The human brain has an unfortunate quirk of rationalising hypotheses in a backwards direction

It is so true. I really felt it this whole week, like my brain was just making up reasons for everything after the fact. With every decision and every mistake, I kept convincing myself that I “saw it coming” when, in reality, I was just piecing things together afterward to make it all seem logical. Funny how we do that, huh?

Anyway, after nonstop disappointments, struggling to think straight, and being surrounded by multiple-choice questions where every answer seemed right but I couldn’t pick one, I’m finally sitting down to jot down a few thoughts. Some moments felt like a blur, while others lingered longer than they should have. It’s strange how the mind works, overanalysing the insignificant while brushing past what truly matters. But through all the noise, a few things stood out, and those are the ones I want to share.


The Memory Safety Debate

C and C++ have long been the backbone of software development. They’re fast, powerful, and deeply embedded in everything from operating systems to game engines.

But with great power comes great responsibility, or in this case, a big security headache. - Programmer Uncle Ben

The U.S. government, particularly agencies like CISA and the FBI, is pushing for a shift away from C and C++ to memory-safe languages like Rust. Why? Because a massive chunk of security vulnerabilities stems from memory safety issues: roughly 70% in Microsoft products, 70% in Google’s Chromium project, and 94% of Mozilla’s critical security bugs. Things like buffer overflows, use-after-free vulnerabilities, and null pointer dereferences continue to plague software security.
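To make those bug classes concrete, here’s a minimal sketch of a use-after-free in plain C (a toy example of my own, not from any real codebase):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *name = malloc(16);
        if (name == NULL) return 1;
        strcpy(name, "alice");

        free(name);  /* the buffer's lifetime ends here */

        /* Use-after-free: 'name' is now a dangling pointer. The
         * compiler accepts this without complaint; at runtime it is
         * undefined behaviour, and on an attacker-groomed heap it can
         * become arbitrary code execution. */
        printf("hello, %s\n", name);
        return 0;
    }

Nothing in the language stops you here; sanitizers can catch it at runtime, but closing this gap at the language level is exactly what the efforts below are about.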

But here’s the thing: many C and C++ developers don’t want to switch to Rust (myself included). They’re comfortable with their tools, they’ve built massive codebases, and they argue that rewriting everything in Rust is just not practical. So instead of abandoning ship, the community is looking for ways to make C and C++ safer.

1. Safe C++: Making C++ More Like Rust

A new initiative called Safe C++ aims to introduce memory safety features directly into C++. The idea is to create a “safe mode” where only secure coding practices are allowed. The proposal includes:

  • A strict safety mode with compile-time checks
  • Borrow checking to prevent use-after-free bugs (inspired by Rust)
  • Improved initialisation analysis
  • Thread safety mechanisms similar to Rust’s Send and Sync

This proposal is still in the early stages, but if implemented well, it could bring a Rust-like safety net to C++ without forcing developers to switch languages entirely.
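To get a feel for what borrow checking buys you, here’s the kind of aliasing bug it rejects at compile time, sketched in plain C since Safe C++’s exact syntax is still being worked out:

    #include <stdlib.h>

    int main(void) {
        char *buf = malloc(8);
        if (buf == NULL) return 1;

        char *alias = buf;  /* a second reference into the buffer */

        /* realloc may move the allocation, freeing the old block... */
        char *grown = realloc(buf, 4096);
        if (grown == NULL) { free(buf); return 1; }
        buf = grown;

        /* ...leaving 'alias' dangling. A Rust-style borrow checker
         * refuses to compile the write below; every mainstream C and
         * C++ compiler today accepts it silently. */
        alias[0] = 'x';  /* use-after-free via a stale alias */

        free(buf);
        return 0;
    }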

2. Fil-C: A Memory-Safe Flavour of C

Created by Filip Pizlo, a senior engineer at Epic Games, Fil-C is a new compiler that introduces memory safety into C and C++. It claims to be 100% compatible with existing code, meaning developers can compile their projects with Fil-C and get automatic memory protection.

Unlike Rust, there are no escape hatches: no “unsafe” keyword to bypass the checks. It’s designed to be truly safe by default. The only downside? It currently only works on Linux and runs about 1.5 to 5 times slower than standard C, though with optimisation the goal is to get that down to just 1.2 times slower.
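I haven’t run Fil-C myself, but going by the claims above, the pitch is that a textbook out-of-bounds write like this traps cleanly instead of silently corrupting whatever sits next to the allocation:

    #include <stdlib.h>

    int main(void) {
        int *scores = malloc(4 * sizeof(int));
        if (scores == NULL) return 1;

        /* Off-by-one: valid indices are 0..3. A standard C toolchain
         * happily emits the stray write to scores[4]; a memory-safe
         * implementation is expected to detect it and abort instead. */
        for (int i = 0; i <= 4; i++) {
            scores[i] = i * 10;
        }

        free(scores);
        return 0;
    }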

3. TrapC: A New Language Inspired by C

Instead of modifying C, some developers are taking a different approach: creating a new language that looks and feels like C but removes its unsafe aspects. Enter TrapC, a fork of C that:

  • Prevents buffer overflows and segmentation faults
  • Automates memory management
  • Includes constructors and destructors from C++
  • Removes some rarely used features like union
  • Implements automatic bounds checking

TrapC aims to be easy for C developers to adopt while making memory safety a non-issue.

What’s Next?

The battle between traditional C/C++ and Rust is far from over. But what’s clear is that developers are looking for ways to improve security without starting from scratch. Whether through enhanced C++, safer compilers, or entirely new languages, the push for memory-safe systems is growing stronger.

Will these efforts be enough to satisfy government security mandates? Or will Rust continue to gain ground? One thing is for sure: 2025 is shaping up to be an interesting year for low-level programming!

Unpopular take: for the last couple of years, Rust has been the hyped-up programming language of the moment, but as you must have also noticed, the hype has started fading, and Zig is all ready to take over (I’d guess it will in 2025). Why do I say this? My thought process: most hyped-up programming languages get about 3 to 5 years of growth and hype, and Rust is reaching the end of that window. You can see the same pattern with Ruby and Scala; each had its boom period, peaked, and faded after roughly that long, though to be fair, Rust is still maturing...

I have used two different metrics to define the currently hyped-up language: 1. the number of new Stack Overflow questions tagged with a particular language, and 2. the number of new people joining a particular language’s subreddit.
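For the curious, the Stack Overflow half of that is easy to reproduce yourself. Here’s a rough sketch in C using libcurl against the Stack Exchange API (the endpoint and filter=total are real; the date range is just an illustrative month, so swap in your own):

    /* Count new Stack Overflow questions tagged "rust" in a date range.
     * Build with: cc so_count.c -lcurl
     * Prints a tiny JSON object like {"total": 12345} to stdout. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        /* fromdate/todate are Unix timestamps; this pair covers
         * January 2025. */
        const char *url =
            "https://api.stackexchange.com/2.3/questions"
            "?tagged=rust&site=stackoverflow&filter=total"
            "&fromdate=1735689600&todate=1738368000";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl == NULL) return 1;

        curl_easy_setopt(curl, CURLOPT_URL, url);
        /* The API gzips every response; an empty string tells libcurl
         * to accept and decode whatever encodings it supports. */
        curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, "");

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n",
                    curl_easy_strerror(res));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return res == CURLE_OK ? 0 : 1;
    }

Run it per language tag over successive windows and you get a decent hype curve.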

Compression

Screw H265. All my homies hate H265 (Yes I asked them).

Video compression is always an interesting field of research. The potential savings in bandwidth costs alone are so high that there is strong justification for implementing more efficient standards.

It’s one of those rare events where technological advances are immediately noticeable on both sides of the equation: the distributors (streaming platforms) and the end users.

In many computer science domains, the impact of an efficiency improvement only translates to the business side. Sure, things like the migration from Oracle to Aurora databases reportedly saved Amazon nearly $100 million (rumoured), but as an end user, the difference was imperceptible.

To understand the video compression landscape, we first have to talk about The Codec Wars.

On one side, we have the evil Royalists, defenders of the proprietary licensing model. Their forces consist of the MPEG LA and their allies. On the other, we have the Free Streamers, proponents of royalty-free codecs. Their forces consist of the Alliance for Open Media and their sympathisers.

MPEG LA got a head start, standardising their H.265 (HEVC) codec in 2013, but it was rife with fragmented patent pools. The tiered royalty structure made licensing fees unpredictable and prohibitive for smaller companies: you’re looking at annual minimum royalties of ~$100k a year!

That said, the previous success of H.264 led hardware manufacturers to adopt HEVC decoding relatively quickly, and consumer devices broadly shipped decoders in the mid-2010s. The iPhone 6 started using H.265 for FaceTime, and NVIDIA’s 900-series featured on-device decode as well.

AV1 (the Free Streamers) started in 2015 as a consortium of various FAANG companies building a next-generation, royalty-free video compression alternative. Early implementations of the AV1 codec were significantly more computationally intensive, but optimisations have continued to push the speed over time.

In the 2020s, hardware manufacturers started to adopt AV1 decode (Intel, NVIDIA, etc), making streaming adoption more viable.

Twitch was one of the more intriguing experimental adopters; AV1 shows about 30% better compression efficiency than H.265 at the same quality. Due to the nature of the platform, essentially a “sea of individual encoders” (streamers), royalty-free options are particularly relevant to its user base.
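Some quick back-of-the-envelope maths with illustrative numbers of my own: a 1080p stream at Twitch’s ~6 Mbps cap drops to roughly 4.2 Mbps with a 30% efficiency gain. The 1.8 Mbps saved, multiplied across 100,000 concurrent viewers, is about 180 Gbps of egress shed continuously.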

The savings on royalties alone represent such a massive potential cost reduction for streaming platforms that my optimism remains high for swift adoption of AV1 in the near future. Hardware decode is quickly reaching critical mass on end-user devices; it’s only a matter of time before everyone makes the switch.

This ML-upscale example is from 2021. It’s only gotten better since.

Okay, this one seems weird, but hear me out.

Anime upscalers have gotten ridiculously good.

Anime artwork, mostly due to the flat colours and sharp linework, responds particularly well to neural-net upscalers. Many series only exist in lower resolutions, never getting a proper “remastering” treatment, often for lack of popularity.

(A rare exception: Wit Studio’s announced remake of the One Piece anime.)

The desire of fans to view their favourite shows in upscaled resolutions, combined with the rapid progress in the effectiveness of the techniques, led to a snowball effect of improvement.

Don’t get me wrong, it’s still a niche subject. But the tech is already there.

Let me attempt to convince you another way.

In 2023, Microsoft released their “Video Super Resolution” as an optional feature to reduce compression artifacts of low-bitrate videos in Edge. As of January 2025, it is still disabled by default, but the option is there. And it looks…not that bad actually.

It’s going to take some convincing before the general public wholeheartedly accepts neural video upscaling as a “default” feature, but the artifact reduction is quite impressive.

Much like how many consumer televisions implement frame generation for motion interpolation (gross), I wouldn’t be that surprised if this technique gets adopted more widely.


RISC-V

RISC-V may be the coolest ISA on the block, but at the moment, adoption by the Linux community is limited.

Lurkers of DistroWatch may (incorrectly) get the impression that Debian is the “old man” distribution, given the relatively slow release cycle of its stable channel. Contrary to public perception, Debian is significantly ahead of the curve when it comes to RISC-V adoption.


Official graph of Debian packages that build on different architectures. Yes, it really is that low-res.

Official riscv64 support on the unstable Debian branch started about 1.5 years ago, back in July of 2023. Most RISC-V SBC enjoyers will tell you the same thing: 99% of the time, the default recommended distribution is Debian unstable.

Arch, somewhat obviously, has had unofficial riscv64 releases for a while now. Fedora is still in the experimental stage (Rawhide builds only).

It’s pretty reasonable to expect official, stable releases in the near future. Debian 13’s release date is TBA, but if historical trends are anything to go by, a summer 2025 stable riscv64 build is quite likely.

Ubuntu will soon follow, considering it tracks Debian upstream and already has an “early” (limited) RISC-V port. Given the competition, I would wager Fedora joins this space soon after.


Cryptography

Current quantum computing factorisation records are…kind of pathetic. The largest number reliably factored by Shor’s algorithm is 21.

More recently, in 2024, some researchers at the University of Trento managed to factor 8,219,999.

This was computed on a D-Wave Pegasus system. Kinda cool, but not a real quantum computer.

D-Wave doesn’t want you to know their “quantum computer” is just a cat in a box

The distinction is actually quite important. D-Wave systems are quantum annealers, not gate-based quantum computers.

Train your eye to notice the difference. You’ll find yourself less swayed by “quantum” computing drivel in popular media.

On the other hand, the US Federal Government takes the quantum computing threat very seriously. Symmetric encryption standards are relatively safe: even a perfect implementation of my buddy Grover only delivers a quadratic speedup, effectively halving the key length.
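The maths, for the curious: Grover searches an unstructured key space of size N in roughly √N quantum queries, so a 256-bit key’s 2^256 possibilities still cost on the order of √(2^256) = 2^128 operations. Nobody is brute-forcing 2^128, quantum or otherwise.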

The NSA basically shrugged, recommended AES-256, and said “good enough”.

RSA isn’t so lucky.

Assuming a stable, error-corrected quantum computer with a chonky number of logical qubits (thousands) running Shor’s algorithm, it’s game over.

At the moment, “real” quantum computers (aka computers capable of running Shor’s) are significantly out of reach.

IBM has a 433-qubit system called Osprey, which you can also rent on their cloud platform for a chill $96 a minute. Who knew quantum computing was almost as easy as spinning up an EC2 box?

“That seems pretty close! If you only need thousands of qubits to break RSA, it could happen at any moment!”

Not really.

physical qubits != logical qubits        

Error-corrected logical qubits are needed to get any useful computing done. With our current technology, 433 physical qubits work out to a handful of logical qubits, at best.
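For scale, going by rough public estimates rather than my own maths: surface-code error correction costs on the order of a thousand physical qubits per logical qubit, and Gidney and Ekerå’s widely cited 2019 estimate puts factoring RSA-2048 at roughly 20 million noisy physical qubits running for about eight hours. 433 is not in the same postcode.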

So why the fear in Federal Circles? Y2K no longer permeates the nightmares of trepidatious Congresspersons. Nay, they dream of Y2Q.

In other words, cryptographic standards have to change long before theoretical attacks become remotely possible, thanks to “harvest now, decrypt later” surveillance strategies. NIST produces the cybersecurity standards referenced by most federal agencies, although there are exceptions. Its Post-Quantum Cryptography Standardisation contest began in 2016 and wrapped up in August 2024 with the announcement of the winners.

CRYSTALS-Kyber is the winner for public-key encryption, so it’s likely to replace RSA in future contexts. (Note: not all of RSA’s functions are replicated 1:1 by Kyber; signatures are likely to be handled by Dilithium.) The ciphers may be finalised, but there is much work to be done on transition plans and official mandates. NIST published an initial public draft of the transition guidance last November, but it has yet to be finalised.
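If you want to poke at Kyber today, the Open Quantum Safe project’s liboqs library exposes it behind a generic KEM interface. A minimal sketch in C, assuming a liboqs install that still carries the Kyber naming (newer releases use the finalised ML-KEM names) and with error handling trimmed for brevity:

    /* Key encapsulation with Kyber-768 via liboqs.
     * Build (roughly): cc kyber_demo.c -loqs */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <oqs/oqs.h>

    int main(void) {
        OQS_KEM *kem = OQS_KEM_new(OQS_KEM_alg_kyber_768);
        if (kem == NULL) return 1;

        uint8_t *pk = malloc(kem->length_public_key);
        uint8_t *sk = malloc(kem->length_secret_key);
        uint8_t *ct = malloc(kem->length_ciphertext);
        uint8_t *ss_enc = malloc(kem->length_shared_secret);
        uint8_t *ss_dec = malloc(kem->length_shared_secret);

        OQS_KEM_keypair(kem, pk, sk);        /* receiver publishes pk */
        OQS_KEM_encaps(kem, ct, ss_enc, pk); /* sender derives secret */
        OQS_KEM_decaps(kem, ss_dec, ct, sk); /* receiver recovers it  */

        printf("shared secrets match: %s\n",
               memcmp(ss_enc, ss_dec, kem->length_shared_secret) == 0
                   ? "yes" : "no");

        OQS_KEM_free(kem);
        free(pk); free(sk); free(ct); free(ss_enc); free(ss_dec);
        return 0;
    }

Note the shape of the API: unlike RSA, you don’t encrypt arbitrary data directly. You encapsulate a random shared secret and let symmetric crypto handle the payload, which is one reason the migration away from RSA is not a 1:1 swap.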

(incoming rant about probabilities)

The current executive administration unexpectedly pulled the previous directive regarding memory-safe languages in a federal context, which I am now factoring into the odds. The White House previously published it on their site, but the page is now gone. That said, it’s not uncommon for administration changes to result in a complete redo of the Whitehouse.gov site, so this may be an administrative fluke.


Bye for now...

