June 10, 2021
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Tracing: Why Logs Aren’t Enough to Debug Your Microservices
Traces complement logs. While logs provide information about what happened inside a service, distributed tracing tells you what happened between services/components and captures their relationships. This is extremely important for microservices, where many issues are caused by failed integration between components. Logs are also a manual developer tool that can be used at any level of activity – a specific low-level detail or a high-level action – which is why there are so many logging best practices available for developers to learn from. Traces, on the other hand, are generated automatically, providing a more complete picture of the architecture. Distributed tracing is tracing adapted to a microservices architecture: it is designed to enable request tracking across autonomous services and modules, providing observability into cloud native systems. ... Distributed tracing provides observability and a clear picture of the services. This improves productivity because developers spend less time locating and debugging errors, as the answers are presented to them more clearly.
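To make the "generated automatically" point concrete, here is a minimal sketch of a service emitting trace spans. The article names no tool; the OpenTelemetry Python SDK, the "checkout-service" name, and the console exporter below are all illustrative assumptions, not the article's prescription:

```python
# Minimal distributed-tracing sketch using the OpenTelemetry Python SDK.
# Assumption: spans are printed to the console rather than shipped to a
# real backend such as Jaeger or Zipkin.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider that exports finished spans to stdout.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("checkout-service")  # hypothetical service name

def handle_checkout(order_id: str) -> None:
    # Each unit of work becomes a span; spans emitted by downstream
    # services share the same trace ID, which is what reveals what
    # happened *between* services, not just inside one.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        reserve_inventory(order_id)

def reserve_inventory(order_id: str) -> None:
    # A child span: in a real system this would be a call to another
    # service, with trace context propagated in the request headers.
    with tracer.start_as_current_span("reserve_inventory"):
        pass  # call the inventory service here

handle_checkout("ord-42")
```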
How SAML 2.0 Authentication Works and Why It Matters
At its core, Security Assertion Markup Language (SAML) 2.0 is a means to exchange authorization and authentication information between services. SAML is frequently used to implement internal corporate single sign-on (SSO) solutions, where the user logs into a service that acts as the single source of identity, which then grants access to a subset of other internal services. ... Generally, SAML authentication solves three important problems: SAML offers a significant improvement to user experience, since users only have to remember their credentials with a single identity provider and no longer have to worry about usernames and passwords for every application they use; SAML allows application developers to outsource identity management and authentication to external providers rather than implementing it themselves; and, perhaps most importantly, SAML dramatically reduces the operational overhead of managing access within an organization. If an employee leaves or transfers to another team, their access is automatically revoked or downgraded across all applications connected to the identity provider.
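As a rough illustration of the exchange, here is a sketch of the first step of an SP-initiated SAML 2.0 flow: the service provider builds an AuthnRequest and encodes it for the HTTP-Redirect binding. All URLs and entity IDs are hypothetical placeholders, and a real deployment would use a vetted SAML library rather than hand-built XML:

```python
# Sketch of an SP-initiated SAML 2.0 AuthnRequest encoded for the
# HTTP-Redirect binding (raw DEFLATE + base64 + URL-encoding).
# Assumption: all endpoints and entity IDs below are placeholders.
import base64
import datetime
import urllib.parse
import uuid
import zlib

IDP_SSO_URL = "https://idp.example.com/sso"        # hypothetical
SP_ENTITY_ID = "https://sp.example.com/metadata"   # hypothetical
SP_ACS_URL = "https://sp.example.com/acs"          # hypothetical

def build_authn_request() -> str:
    issue_instant = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"""<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_{uuid.uuid4().hex}" Version="2.0"
    IssueInstant="{issue_instant}"
    Destination="{IDP_SSO_URL}"
    AssertionConsumerServiceURL="{SP_ACS_URL}">
  <saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>
</samlp:AuthnRequest>"""

def redirect_url() -> str:
    xml = build_authn_request().encode()
    # Stripping zlib's 2-byte header and 4-byte checksum yields the raw
    # DEFLATE stream the SAML redirect binding expects.
    deflated = zlib.compress(xml)[2:-4]
    saml_request = base64.b64encode(deflated).decode()
    # The browser is redirected here; the IdP authenticates the user and
    # posts a signed assertion back to the SP's ACS URL.
    return f"{IDP_SSO_URL}?SAMLRequest={urllib.parse.quote(saml_request)}"

print(redirect_url())
```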
How to build Data Science capabilities within an organization
Signing up for a data science program is half the battle won. But only a strong, steady commitment and effort will take it to completion and yield amazing results. You as an organization may be clear on the ‘why’ of the whole endeavor. You know that more self-sufficiency and expertise will bring in more revenue. But without communicating the benefits learning data science has for your employees, you are unlikely to see genuine involvement. You can encourage buy-in from employees by showcasing the future career path, the rewards of upskilling, higher payouts for working on advanced projects, or even the fear of being left out (I hate to say this, but that is how the cookie crumbles). Of course, senior leadership in your organization needs to weigh the pros and cons of such a transformation and accordingly roll out the mandate to selected groups, as there may be employees who are not sold on the idea of building data science skills at all. ... A great deal of time, energy, and effort is saved by the wide variety of platforms that provide tools and services for data science monitoring. They track and test employees' progress during the data science program, which can keep your employees on their toes.
New identities are creating opportunities for attackers across the enterprise
The adoption of cloud services, third parties, and remote access has dissolved the traditional network perimeter and made security a far more complex equation than before. Identity security is quickly emerging as the primary line of defence for most organisations, because it allows security teams to tailor each user’s access proportionately to the needs of their job role. Underpinning this model is Zero Trust – the practice of treating all accounts with the same minimal level of access until authenticated. In cloud environments, for example, any human or machine identity can be configured with thousands of permissions to access cloud workloads containing critical information. User, group, and role identities are typically assigned permissions depending on their job functions. Providing each identity with its own unique permissions allows users to access what they need, when they need it, without putting company assets at risk of breach. In combination with Zero Trust, it ensures each identity is only able to gain that access once it is authenticated. The increasing recognition of Zero Trust as security best practice has led its stock to rise significantly, so much so that 88% of those we researched categorised it as either ‘important’ or ‘very important’ in tackling today’s advanced threats.
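The deny-by-default pattern the article describes can be sketched in a few lines. The roles and permissions below are hypothetical examples, not from the article; the point is only that an identity gets nothing until it authenticates, and then only what its role maps to:

```python
# Illustrative least-privilege, zero-trust access check: no identity
# gets any access until it authenticates, and then only the permissions
# mapped to its role. Role names and permissions are hypothetical.
from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "billing-analyst": {"invoices:read"},
    "deploy-bot": {"workloads:deploy"},  # machine identities too
}

@dataclass
class Identity:
    name: str
    role: str
    authenticated: bool = False  # zero trust: start unauthenticated

def authorize(identity: Identity, permission: str) -> bool:
    # Deny by default: unauthenticated identities all have the same
    # minimal (empty) level of access, regardless of role.
    if not identity.authenticated:
        return False
    return permission in ROLE_PERMISSIONS.get(identity.role, set())

alice = Identity("alice", "billing-analyst")
print(authorize(alice, "invoices:read"))    # False: not yet authenticated
alice.authenticated = True                  # e.g. after SSO/MFA succeeds
print(authorize(alice, "invoices:read"))    # True: her role grants it
print(authorize(alice, "workloads:deploy")) # False: outside her role
```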
What to Know About Updates to the PCI Secure Software Standard
The PCI Council made several clarifications to controls within the standard, added guidance to a couple of sections, and introduced a new module, Module B: Terminal Software Requirements. This module focuses on software intended for deployment and execution on payment terminals or PCI-approved PIN Transaction Security (PTS) point-of-interaction (POI) devices. In total, the new section adds 50 controls covering five control objectives. ... Similar to Terminal Software Attack Mitigation, Terminal Software Security Testing clearly calls out the need to ensure software is "rigorously" tested for vulnerabilities prior to each release. The software developer is expected to have a documented process that is followed to test software for vulnerabilities prior to every update or release. The control tests in this objective continue to highlight secure software development best practices – testing for unnecessary ports or protocols, identifying insecure transmissions of account data, and identifying default credentials, hard-coded authentication credentials, test accounts or data, and/or ineffective software security controls.
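One small, automatable slice of that pre-release testing is scanning for hard-coded or default credentials. The following is a hypothetical check a developer might wire into a release pipeline; the patterns and file selection are illustrative and nothing here is mandated by the standard itself:

```python
# Hypothetical pre-release scan inspired by the Terminal Software
# Security Testing objective: flag hard-coded credentials and default
# test accounts before every release. Patterns are illustrative only.
import pathlib
import re
import sys

SUSPICIOUS_PATTERNS = [
    re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE),
    re.compile(r"\b(admin|test)\s*/\s*(admin|test)\b", re.IGNORECASE),
]

def scan(root: str) -> list[str]:
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            for pattern in SUSPICIOUS_PATTERNS:
                if pattern.search(line):
                    findings.append(f"{path}:{lineno}: {line.strip()}")
    return findings

if __name__ == "__main__":
    hits = scan(sys.argv[1] if len(sys.argv) > 1 else ".")
    for hit in hits:
        print(hit)
    # Fail the release pipeline if anything suspicious was found.
    sys.exit(1 if hits else 0)
```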
Reawakening Agile with OKRs?
The approach I found works best is to lead with OKRs - what the team wants to do. So throw your backlog away and adopt a just-in-time requirements approach. Stop seeing "more work than we have money and time to do" as a sign of failure and see it as a sign of success. Every time you need to plan work, return to your OKRs and ask: What can we do now, in the time we have, to move closer to our OKRs? Stop worrying about burning down the backlog and put purpose first; remember why the team exists, and ask: Right here, right now, how can we add value? Used in a traditional MBO style, one might expect top managers to set OKRs which then cascade down the company, with each team being given its own small part to undertake. That would require a Gosplan-style planning department and would rob teams of autonomy and real decision-making power. (Gosplan was the agency responsible for creating five-year plans in the USSR, and everyone knows how that story ended.) Instead, leaders should lead. Leaders should stick to big things. They should emphasise the purpose and mission of the organization, and they should set large goals for the whole organization, but those goals should not be specific to teams.
Read more here ...