September 06, 2023
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
The data show that not only are open source maintainers usually unaware of current security tools and standards, like software bills of materials (SBOMs) and Supply-chain Levels for Software Artifacts (SLSA), but they are largely unpaid and, to a frightening degree, on their own. A study released in May by Tidelift found that 60% of open source maintainers would describe themselves as “unpaid hobbyists.” And 44% of all maintainers said they are the only person maintaining a project. “Even more concerning than the sole maintainer projects are the zero maintainer projects, of which there are a considerable amount as well that are widely used,” Donald Fischer, CEO and co-founder of Tidelift, told The New Stack. “So many organizations are just unaware because they don’t even have telemetry, they have no data or visibility into that.” ... An even bigger threat to continuity in open source project maintenance is the “boss factor,” according to Fischer. The boss factor, he said, emerges when “somebody gets a new job, and so they don’t have as much time to devote to their open source projects anymore, and they kind of let them fall by the wayside.”
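For readers who have not met the standards named above, a minimal sketch of what an SBOM records may help. The Python snippet below assembles a CycloneDX-style document; the component name, version and purl are hypothetical placeholders, not taken from any real project.

import json

# Minimal CycloneDX-style SBOM: one library dependency of an application.
# The component name, version and purl below are hypothetical placeholders.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-http-client",   # hypothetical dependency
            "version": "2.4.1",
            "purl": "pkg:pypi/example-http-client@2.4.1",
        }
    ],
}

with open("sbom.json", "w") as handle:
    json.dump(sbom, handle, indent=2)

In practice SBOMs are generated by tooling rather than written by hand, but even this stub shows the core idea: a machine-readable inventory of exactly which components ship in a piece of software.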
Recovering multi-master databases requires specialist skills and understanding to prevent problems around concurrency. In effect, this means having one agreed list of transactions rather than multiple lists that contradict each other. Similarly, you have to ensure that any recovery brings back the right data rather than corrupted records. Planning ahead makes this process much easier, but it also requires the skills and experience to ensure that DR processes will work effectively. Alongside this, any DR plan has to be tested to prove that it will work, and work consistently, when it is most needed. Any plan around data has to take three areas into account: availability, restoration and cost. Availability planning covers how much work the organisation is willing to do to keep services up and running, while restoration covers how quickly service must be brought back and how much data must be recoverable in the event of a disaster. Lastly, cost covers the budget available for these two areas, and how much has to be spent to meet those requirements.
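To make the interplay of those three areas concrete, here is a minimal sketch that checks candidate DR strategies against availability (RTO), restoration (RPO) and cost targets. The strategy names, figures and thresholds are hypothetical illustrations, not drawn from the article.

from dataclasses import dataclass

@dataclass
class DrStrategy:
    name: str
    rto_minutes: float      # time to bring service back (availability)
    rpo_minutes: float      # maximum tolerable data loss (restoration)
    annual_cost_usd: float  # budget required (cost)

def meets_targets(s: DrStrategy, max_rto: float, max_rpo: float, budget: float) -> bool:
    # A strategy is acceptable only if it satisfies all three areas at once.
    return (s.rto_minutes <= max_rto
            and s.rpo_minutes <= max_rpo
            and s.annual_cost_usd <= budget)

strategies = [
    DrStrategy("nightly-backups", rto_minutes=240, rpo_minutes=1440, annual_cost_usd=5_000),
    DrStrategy("async-replication", rto_minutes=15, rpo_minutes=5, annual_cost_usd=40_000),
]

for s in strategies:
    ok = meets_targets(s, max_rto=60, max_rpo=15, budget=50_000)
    print(f"{s.name}: {'meets' if ok else 'fails'} targets")

The point of the exercise is that the three constraints trade off against each other: cheap backup schemes tend to fail the RTO/RPO checks, and tight RTO/RPO targets tend to fail the budget check.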
Cybercriminals never sleep; they’re always conniving and corrupting. “When it comes to IT security strategy, a very direct conversation must be held about the new nature of cyber threats,” suggests Griffin Ashkin, a senior manager at business management advisory firm MorganFranklin Consulting. Recent experience has demonstrated that cybercriminals are now moving beyond ransomware and into cyberextortion, Ashkin warns. “They’re threatening the release of personally identifiable information (PII) of organization employees to the outside world, putting employees at significant risk for identity theft.” ... The meetings and conversations should lead to the development or update of an incident response plan, he suggests. The discussions should also review mission-critical assets and priorities, assess an attack’s likely impact, and identify the most probable attack threats. By changing the enterprise’s risk management approach from matrix-based measurement (high, medium, or low) to quantitative risk reduction, you base potential impact on as many variables as needed, Folk says.
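To see what the shift from a high/medium/low matrix to quantitative risk looks like in practice, here is a minimal sketch using annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence), a standard quantitative risk measure; the threat names and figures are hypothetical.

# Quantitative risk scoring via annualized loss expectancy (ALE).
# ALE = SLE (cost of one incident) * ARO (expected incidents per year).
# The threats and numbers below are hypothetical illustrations.

threats = {
    "ransomware":      {"sle_usd": 500_000,   "aro": 0.3},
    "pii_extortion":   {"sle_usd": 2_000_000, "aro": 0.1},
    "phishing_breach": {"sle_usd": 150_000,   "aro": 1.5},
}

# Rank threats by expected annual loss rather than a subjective label.
for name, t in sorted(threats.items(),
                      key=lambda kv: kv[1]["sle_usd"] * kv[1]["aro"],
                      reverse=True):
    ale = t["sle_usd"] * t["aro"]
    print(f"{name}: ALE = ${ale:,.0f}/year")

Unlike a three-cell matrix, this kind of model can absorb as many variables as the organization can estimate, and the output is a dollar figure that can be compared directly against the cost of a mitigation.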
As malicious actors gain the upper hand, we could potentially find ourselves stepping into a new era of espionage, where the most resourceful and innovative threat actors thrive. The introduction of AI brings about a new level of creativity in various fields, including criminal activities. The crucial question remains: How far will malicious actors push the boundaries? We must not overlook the fact that cybercrime is a highly profitable industry with billions at stake. Certain criminal organizations operate similarly to legal corporations, having their own infrastructure of employees and resources. It is only a matter of time before they delve into developing their own deepfake generators (if they haven’t already done so). With their substantial financial resources, it’s not a matter of whether it is feasible but rather whether it will be deemed worthwhile. And in this case, it likely will be. What preventative measures are currently on offer? Various scanning tools have emerged, asserting their ability to detect deepfakes.
Scrum thrives in scenarios where the project’s requirements might evolve or where customer feedback is crucial, because of its short sprints. It works well when a team can commit to the roles, ceremonies, and iterative nature of the framework. When there is a need for clear accountability and communication among team members, stakeholders, and customers, Scrum works better than Kanban, which takes a less rigid approach to task allocation. The problem is the scale at which Scrum is used. While there is some consensus on the strengths of the methodology, it is not applicable to all projects. One common situation engineers face: in teams that build multiple applications, individuals can’t start a new story until all the ongoing stories are complete. Team members who have finished remain idle until everyone else has finished their stories, which is entirely inefficient. Long meetings are another pain point for users: there is a substantial investment in planning and ceremonies, and significant time is allocated to discussing stories that sometimes require only 30 minutes to complete.
Some growth will be powered by new technologies; CIOs and other technology leaders can demonstrate how emerging technologies create specific growth opportunities. Instead of pitching random acts of metaverse or blockchain, which require radical changes in life or trade to matter, technology leaders can iterate on new technologies and infuse ideas from these into their own products. ... Outcomes of all kinds can always be improved — AI is just the newest tool in the improvement toolkit, joining analytics, automation and software. Personalization at scale is a good example of amplifying growth. Technology leaders should collaborate with marketing colleagues and mine databases to find better purchase signals that improve offers and outreach. They can also automate processes to streamline onboarding and improve revenue recognition. ... No technology leader and no company will do this alone. They will work with technology and service providers to build and operate the new capabilities, including those powered by generative AI.
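As a toy illustration of mining data for purchase signals, the sketch below ranks customers by a weighted combination of behavioral signals; the signal names, weights and data are hypothetical, not taken from the article.

# Toy purchase-signal scoring: combine a few behavioral signals into a
# single propensity score used to prioritize offers and outreach.
# Signal names, weights and customer data are hypothetical.

WEIGHTS = {"recent_visits": 0.5, "cart_adds": 1.5, "support_tickets": -0.5}

def propensity(signals: dict[str, float]) -> float:
    # Weighted sum; unknown signals contribute nothing.
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in signals.items())

customers = {
    "cust_a": {"recent_visits": 8, "cart_adds": 2, "support_tickets": 1},
    "cust_b": {"recent_visits": 1, "cart_adds": 0, "support_tickets": 3},
}

for cid, sig in sorted(customers.items(), key=lambda kv: propensity(kv[1]), reverse=True):
    print(f"{cid}: score = {propensity(sig):.1f}")

Real personalization systems would learn such weights from historical conversion data rather than hard-code them, but the shape of the pipeline is the same: signals in, ranked outreach list out.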