May 16, 2022
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
As you integrate OAuth into your applications and APIs, you will find that the authorization server you choose is a critical part of your architecture, enabling solutions for your security use cases. Using up-to-date security standards keeps your applications aligned with security best practices. Many of these standards map to common company use cases, some of which are essential in certain industry sectors. APIs must validate JWT access tokens on every request and authorize them based on scopes and claims. This mechanism scales to arbitrarily complex business rules and spans multiple APIs in your cluster. Similarly, you must be able to implement best practices for web and mobile apps and use multiple authentication factors. The OAuth framework provides building blocks rather than an out-of-the-box solution. Extensibility is thus essential for your APIs to deal with identity data correctly. One critical area is the ability to add custom claims from your business data to access tokens. Another is the ability to link accounts reliably so that your APIs never duplicate users when they authenticate in a new way, such as with a WebAuthn key.
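To make the validate-on-every-request pattern concrete, here is a minimal sketch of how an API might verify a JWT access token and authorize on scopes and custom claims. It uses the PyJWT library; the issuer URL, audience, scope name, and the customer_tier claim are illustrative assumptions, not details from the article or any particular authorization server.

    # Minimal sketch: JWT access token validation with scope and claim checks (PyJWT).
    # Issuer, audience, scope, and the custom claim below are hypothetical values.
    import jwt
    from jwt import PyJWKClient

    ISSUER = "https://idp.example.com"           # assumed authorization server
    AUDIENCE = "https://api.example.com/orders"  # assumed API identifier
    REQUIRED_SCOPE = "orders:read"               # assumed scope for this endpoint

    jwks_client = PyJWKClient(f"{ISSUER}/.well-known/jwks.json")

    def authorize(token: str) -> dict:
        """Validate the JWT on every request and enforce scope and claim checks."""
        signing_key = jwks_client.get_signing_key_from_jwt(token)
        claims = jwt.decode(
            token,
            signing_key.key,
            algorithms=["RS256"],
            audience=AUDIENCE,
            issuer=ISSUER,
        )
        # Scope check: the token must grant access to this endpoint.
        scopes = claims.get("scope", "").split()
        if REQUIRED_SCOPE not in scopes:
            raise PermissionError("insufficient scope")
        # Claim check: business rules can key off custom claims issued from business data.
        if claims.get("customer_tier") == "suspended":  # hypothetical custom claim
            raise PermissionError("account suspended")
        return claims

The same decode-then-check structure can be repeated in every API in the cluster, which is what lets scope- and claim-based authorization scale across services.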
It goes without saying that external clients of an application calling the same API version — the same endpoint — with the same input parameters expect to see the same response payload over time. End users' need for such certainty is once again understandable, but it stands in stark contrast to the requirements of the distributed application (DA) itself. In order for distributed applications to evolve and grow at the speed required in today's world, the autonomous development teams assigned to each constituent component need to be able to publish often-changing, forward-and-backward-compatible payloads as a single event to the same fixed endpoints, using a technique I call "version-stacking." ... A key concern of architects when exposing their applications to external clients via APIs is — quite rightly — security. Those APIs allow external users to effect changes within the application itself, so they must be rigorously protected, requiring many and frequent authorization steps. These security steps have obvious implications for performance, but they do seem necessary regardless.
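One plausible reading of "version-stacking" is an event that carries several schema versions of its payload side by side, so old and new consumers can each read the shape they understand from the same fixed endpoint. The sketch below illustrates that reading only; the field names, version labels, and event type are assumptions, not the author's definitive design.

    # Hedged sketch of a "version-stacked" event: the producer publishes every
    # supported payload version in one event, so consumers pick the newest shape
    # they understand. Names and versions here are illustrative assumptions.
    import json

    def build_order_created_event(order_id: str, total_cents: int) -> str:
        event = {
            "type": "OrderCreated",
            "payloads": {
                "v1": {"orderId": order_id, "total": total_cents / 100},   # legacy shape
                "v2": {"order_id": order_id, "total_cents": total_cents},  # current shape
            },
        }
        return json.dumps(event)

    def read_order_total(event_json: str) -> int:
        """A consumer reads the newest version it knows, falling back if needed."""
        payloads = json.loads(event_json)["payloads"]
        if "v2" in payloads:
            return payloads["v2"]["total_cents"]
        return int(payloads["v1"]["total"] * 100)

The endpoint and event type never change; only the stacked payload versions inside the event evolve, which is what keeps the contract forward- and backward-compatible for external clients.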
The best guarantor of open source security has always been the open source development process. Even with OpenSSF’s excellent plan, this remains true. The plan, for example, promises to “conduct third-party code reviews of up to 200 of the most critical components.” That’s great! But guess what makes something a “critical component”? That’s right—a security breach that roils the industry. Ditto “establishing a risk assessment dashboard for the top open source components.” If we were good at deciding in advance which open source components are the top ones, we’d have fewer security vulnerabilities, because we’d find ways to fund them so that the developers involved could better look after their projects’ security. Of course, the developers responsible for “top open source components” often don’t want a full-time job securing their software. It varies greatly between projects, but the developers involved tend to have very different motivations for their involvement. No one-size-fits-all approach to funding open source development works ...
Recently, the Securities and Exchange Commission (SEC) made a welcome move for cybersecurity professionals. In proposed amendments to its rules to enhance and standardize disclosures regarding cybersecurity risk management, strategy, governance, and incident reporting, the SEC outlined requirements for public companies to report any board member’s cybersecurity expertise. The change reflects a growing belief that disclosure of cybersecurity expertise on boards is important as potential investors consider investment opportunities and shareholders elect directors. In other words, the SEC is encouraging U.S. public companies to beef up cybersecurity expertise in the boardroom. Cybersecurity is a business issue, particularly now as the attack surface continues to expand due to digital transformation and remote work, and cyber criminals and nation-state actors capitalize on events, planned or unplanned, for financial gain or to wreak havoc. The world in which public companies operate has changed, yet the makeup of boards doesn’t reflect that.
With a comprehensive asset inventory in place, Salesforce SVP of information security William MacMillan advocates taking the next step and developing an “obsessive focus on visibility” by “understanding the interconnectedness of your environment, where the data flows and the integrations.” “Even if you’re not mature yet in your journey to be programmatic, start with the visibility piece,” he says. “The most powerful dollar you can spend in cybersecurity is to understand your environment, to know all your things. To me that’s the foundation of your house, and you want to build on that strong foundation.” ... To have a true vulnerability management program, multiple experts say organizations must make someone responsible and accountable for its work and ultimately its successes and failures. “It has to be a named position, someone with a leadership job but separate from the CISO because the CISO doesn’t have the time for tracking KPIs and managing teams,” says Frank Kim, founder of ThinkSec, a security consulting and CISO advisory firm, and a SANS Fellow.
One option is to use so-called “immutable” backups. These are backups that, once written, cannot be changed. Backup and recovery suppliers are building immutable backups into their technology, often targeting it specifically as a way to counter ransomware. The most common method for creating immutable backups is through snapshots. In some respects, a snapshot is always immutable. However, suppliers are taking additional measures to prevent these backups from being targeted by ransomware. Typically, this is by ensuring the backup can only be written to, mounted or erased by the software that created it. Some suppliers go further, such as requiring two people to use a PIN to authorise overwriting a backup. The issue with snapshots is the volume of data they create, and the fact that those snapshots are often written to tier one storage, for reasons of speed and to lessen disruption. This makes snapshots expensive, especially if organisations need to keep days, or even weeks, of backups as a protection against ransomware. “The issue with snapshot recovery is it will create a lot of additional data,” says Databarracks’ Mote.
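The article focuses on supplier snapshot features, but as one generic illustration of the write-once idea, here is a hedged sketch that writes a backup object under AWS S3 Object Lock using boto3, so it cannot be overwritten or deleted until its retention date passes. This is a substitute mechanism chosen for illustration, not the suppliers' approach; the bucket name, key, and retention period are assumptions, and the bucket would need Object Lock enabled at creation.

    # Hedged illustration of an "immutable" (WORM) backup via AWS S3 Object Lock.
    # Bucket name, key, and retention period below are hypothetical.
    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")

    def write_immutable_backup(bucket: str, key: str, data: bytes, retain_days: int = 30) -> None:
        """Write a backup object that cannot be overwritten or deleted until the
        retention date passes (requires a bucket created with Object Lock enabled)."""
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=data,
            ObjectLockMode="COMPLIANCE",  # retention cannot be shortened, even by the root account
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=retain_days),
        )

    # Example usage (hypothetical names):
    # write_immutable_backup("backups-worm", "db/2022-05-16.dump", open("db.dump", "rb").read())

The retention window plays the same role as keeping days or weeks of snapshots: within that window, ransomware that compromises the environment still cannot alter or erase the backup copies.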