August 28, 2021

Why scrum has become irrelevant

The purpose of the retrospective is just that: to reflect. We look at what worked, what didn’t work, and what kinds of experiments we want to try. Unfortunately, what it boils down to is putting the same Post-its of “good teamwork” and “too many meetings” in the same swim lanes of “what went well,” “what went wrong,” and “what we will do better.” ... Scrum is often the enemy of productivity, and it makes even less sense in the remote, post-COVID world. The premise of scrum should not be that one cookie cutter fits every development team on the planet. A lot of teams are just doing things by rote, with zero evidence of their effectiveness. An ever-recurring nightmare of standups, sprint grooming, sprint planning and retros can only lead to staleness. Scrum does not promote new and fresh ways of working; instead, it champions repetition. Let good development teams self-organize to their context. Track what gets shipped to production, add the time it took (in days!) after the fact, and track that. Focus on reality and not some vaguely intelligible burndown chart. Automate all you can and have an ultra-smooth pipeline. Eradicate all waste.


Your data, your choice

These days, people are much more aware of the importance of shredding paper copies of bills and financial statements, but they are perfectly comfortable handing over staggering amounts of personal data online. Most people freely give their email address and personal details, without a second thought for any potential misuse. And it’s not just the tech giants – the explosion of digital technologies means that companies and spin-off apps are hoovering up vast amounts of personal data. It’s common practice for businesses to seek to “control” your data and to gather personal data that they don’t need at the time on the premise that it might be valuable someday. The other side of the personal data conundrum is the data strategy and governance model that guides an individual business. At Nephos, we use our data expertise to help our clients solve complex data problems and create sustainable data governance practices. As ethical and transparent data management becomes increasingly important, younger consumers are making choices based on how well they trust you will handle and manage their data.


How Kafka Can Make Microservice Planet a Better Place

Originally, Kafka was developed under the Apache license, but Confluent later forked it and delivered a robust version of it. In fact, Confluent delivers the most complete distribution of Kafka with Confluent Platform, which improves Kafka with additional community and commercial features designed to enhance the streaming experience of both operators and developers in production, at massive scale. You can find thousands of documents about learning Kafka. In this article, we want to focus on using it in a microservice architecture, and for that we need an important concept named the Kafka Topic. ... The final concept we should learn about before starting our stream processing project is KStream. KStream is an abstraction of a record stream of KeyValue pairs, i.e., each record is an independent entity/event. In the real world, Kafka Streams greatly simplifies stream processing from topics. Built on top of the Kafka client libraries, it provides data parallelism, distributed coordination, fault tolerance, and scalability. It deals with messages as an unbounded, continuous, and real-time flow of records.
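
To make the KStream idea concrete, here is a minimal Kafka Streams sketch in Java. The topic names ("orders" and "priority-orders"), the application id, and the broker address are assumptions made up for illustration; the API calls (StreamsBuilder, KStream#filter, KStream#to) are the standard Kafka Streams ones.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PriorityOrderStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "priority-order-router"); // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Each record on the (assumed) "orders" topic is an independent key/value event.
        KStream<String, String> orders = builder.stream("orders");

        // One microservice filters the stream and forwards high-priority orders to a
        // downstream topic that another microservice can consume independently.
        orders.filter((customerId, payload) -> payload.contains("\"priority\":\"high\""))
              .to("priority-orders");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Because each record is an independent event on a topic, any number of services can subscribe to the same stream without being coupled to the producer, which is what makes this pattern attractive in a microservice architecture.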


What Happens When ‘If’ Turns to ‘When’ in Quantum Computing?

Quantum computers will not replace the traditional computers we all use now. Instead, they will work hand-in-hand to solve computationally complex problems that classical computers can’t handle quickly enough by themselves. There are four principal computational problems for which hybrid machines will be able to accelerate solutions, building on essentially one truly “quantum advantaged” mathematical function. But these four problems lead to hundreds of business use cases that promise to unlock enormous value for end users in the coming decades. ... Not only is this approach inefficient, it also lacks accuracy, especially in the face of high tail risk. And once options and derivatives become bank assets, the need for high-efficiency simulation only grows, as the portfolio needs to be re-evaluated continuously to track the institution’s liquidity position and fresh risks. Today this is a time-consuming exercise that often takes 12 hours to run, sometimes much more. According to a former quantitative trader at BlackRock, “Brute force Monte Carlo simulations for economic spikes and disasters can take a whole month to run.”
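
For a sense of why these risk runs are so slow, here is a minimal classical Monte Carlo sketch in Java that prices a single European call option under geometric Brownian motion. All parameter values are made up for illustration; the point is that the estimate's error shrinks only as 1/sqrt(N), so tightening accuracy tenfold needs roughly a hundred times more paths, and a real portfolio repeats this across thousands of positions and scenarios.

```java
import java.util.Random;

public class MonteCarloPricingSketch {
    public static void main(String[] args) {
        // Hypothetical contract and market parameters, chosen only for illustration.
        double spot = 100.0, strike = 105.0, rate = 0.01, vol = 0.20, years = 1.0;
        int paths = 1_000_000;

        Random rng = new Random(42);
        double payoffSum = 0.0;
        for (int i = 0; i < paths; i++) {
            double z = rng.nextGaussian();
            // Terminal price of one simulated path under geometric Brownian motion.
            double sT = spot * Math.exp((rate - 0.5 * vol * vol) * years
                                        + vol * Math.sqrt(years) * z);
            payoffSum += Math.max(sT - strike, 0.0); // call option payoff on this path
        }

        // The discounted average payoff is the Monte Carlo price estimate.
        double price = Math.exp(-rate * years) * payoffSum / paths;
        System.out.printf("Estimated call price over %,d paths: %.4f%n", paths, price);
    }
}
```

This slow 1/sqrt(N) convergence, multiplied across a whole book of assets and scenarios, is the bottleneck that the hybrid quantum approaches described above aim to accelerate.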


Can companies build on their digital surge?

If digital is the heart of the modern organization, then data is its lifeblood. Most companies are swimming in it. Average broadband consumption, for example, increased 47 percent in the first quarter of 2020 over the same quarter in the previous year. Used skillfully, data can generate insights that help build focused, personalized customer journeys, deepening the customer relationship. This is not news, of course. But during the pandemic, many leading companies have aggressively recalibrated their data posture to reflect the new realities of customer and worker behavior by including models for churn or attrition, workforce management, digital marketing, supply chain, and market analytics. One mining company created a global cash-flow tool that integrated and analyzed data from 20 different mines to strengthen its solvency during the crisis. ... While it’s been said often, it still bears repeating: technology solutions cannot work without changes to talent and how people work. Those companies getting value from tech pay as much attention to upgrading their operating models as they do to getting the best tech.


Understanding Direct Domain Adaptation in Deep Learning

To fill the gap between source data (train data) and target data (test data), a concept called domain adaptation is used. It is the ability to apply an algorithm that is trained on one or more source domains to a different target domain. It is a subcategory of transfer learning. In domain adaptation, the source and target data have the same feature space but come from different distributions, while transfer learning includes cases where the target feature space is different from the source feature space. ... In unsupervised domain adaptation, the learning data contains a set of labelled source examples, a set of unlabelled source examples and a set of unlabelled target examples. In semi-supervised domain adaptation, along with the unlabelled target examples, we also take a small set of labelled target examples. And in the supervised approach, all the examples are supposed to be labelled ones. A trained neural network generalizes well when the target data is represented in the same way as the source data; to accomplish this, a researcher from King Abdullah University of Science and Technology, Saudi Arabia, proposed an approach called ‘Direct Domain Adaptation’ (DDA).
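
The closing point, that a network trained on source data generalizes when the target data is represented like the source data, can be illustrated with a deliberately simple sketch. This is not the DDA method itself, just a toy Java example of one crude way to align domains: standardizing both source and target features with the source statistics so target inputs land in the representation the model was trained on. The tiny feature matrices are invented for illustration.

```java
import java.util.Arrays;

public class DomainAlignmentSketch {
    // Standardize every sample with the supplied per-feature mean and standard deviation.
    static double[][] standardize(double[][] x, double[] mean, double[] std) {
        double[][] out = new double[x.length][x[0].length];
        for (int i = 0; i < x.length; i++)
            for (int j = 0; j < x[0].length; j++)
                out[i][j] = (x[i][j] - mean[j]) / std[j];
        return out;
    }

    static double[] columnMeans(double[][] x) {
        double[] m = new double[x[0].length];
        for (double[] row : x)
            for (int j = 0; j < row.length; j++) m[j] += row[j] / x.length;
        return m;
    }

    static double[] columnStds(double[][] x, double[] mean) {
        double[] s = new double[x[0].length];
        for (double[] row : x)
            for (int j = 0; j < row.length; j++)
                s[j] += (row[j] - mean[j]) * (row[j] - mean[j]) / x.length;
        for (int j = 0; j < s.length; j++) s[j] = Math.sqrt(s[j]) + 1e-9;
        return s;
    }

    public static void main(String[] args) {
        // Made-up feature matrices: source (training) domain and a shifted target (test) domain.
        double[][] source = {{1.0, 10.0}, {2.0, 12.0}, {3.0, 14.0}};
        double[][] target = {{5.0, 30.0}, {6.0, 32.0}};

        double[] mean = columnMeans(source);
        double[] std  = columnStds(source, mean);

        // Both domains are mapped with the SOURCE statistics, so target samples are
        // expressed in the representation the model saw during training.
        System.out.println(Arrays.deepToString(standardize(source, mean, std)));
        System.out.println(Arrays.deepToString(standardize(target, mean, std)));
    }
}
```

Real domain adaptation methods go well beyond matching simple statistics, but the goal is the same: make the target inputs look like something the trained network has already seen.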

Read more here ...
