August 22, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
The roots of the practice of data ethics can be traced back to the mid-20th century, when concerns about privacy and confidentiality began to emerge alongside the growing use of computers for data processing. The development of automated data collection systems raised questions about who had access to personal information and how it could be misused. Early ethical discussions primarily revolved around protecting individual privacy rights and ensuring the responsible handling of sensitive data. One pivotal moment came with the formulation of the Fair Information Practice Principles (FIPPs) in the United States in the 1970s. These principles, which emphasized transparency, accountability, and user control over personal data, laid the groundwork for modern data protection laws and influenced ethical debates globally. ... Ethical guidelines such as those embodied in the European Union’s General Data Protection Regulation (GDPR) emphasize the importance of informed consent, limiting the collection of data to its intended use, and data minimization. All these concepts are part of an ethical approach to data and its usage.
As a design practice fascinated by the practical deployment of AI, we can’t help but be reminded of the early days of the personal computer, which also had a major impact on the design of the workplace. Back in the 1980s, most computers were giant, expensive mainframes that only large companies and universities could afford. But then a few visionary companies started putting computers on desktops, first in workplaces, then in schools and finally in homes. Suddenly, computing power was accessible to everyone, but it needed different spaces. ... As with any powerful new tool, AI also brings with it profound challenges and responsibilities. One significant concern is the potential for AI to perpetuate or even amplify biases present in the data it is trained on, leading to unfair or discriminatory outcomes. AI bias is already prevalent, and it is crucial that we learn how to teach AI to discern bias. Not so easy. AI could also be used maliciously, e.g. to create deepfakes or spread misinformation. There are also legitimate concerns about the impact of AI on jobs and the workforce, but equally about how it can improve and inspire that workforce.
Corporate legal departments will continue to draft voluminous contracts packed with fine-print provisions and disclaimers. CIOs can’t avoid this, but they can make a case for clearly presenting to users of websites and services how and under what conditions data is collected and shared. Many companies are doing this, and are also providing "Opt Out" mechanisms for users who are uncomfortable with the corporate data privacy policy. That said, taking these steps can be easier said than done. There are the third-party agreements that upper management makes that include provisions for data sharing, and there is also the issue of data custody. For instance, if you choose to store some of your customer data on a cloud service, you no longer have direct custody of that data; if the cloud provider then experiences a breach that compromises your data, whose fault is it? Once again, there are no ironclad legal or federal mandates that address this issue, but insurance companies do tackle it. “In a cloud environment, the data owner faces liability for losses resulting from a data breach, even if the security failures are the fault of the data holder (cloud provider),” says Transparity Insurance Services.
First, organizations should map or inventory their data to understand what they have. By mapping and inventorying data, organizations can better visualize, contextualize and prioritize risks. And, by knowing what data you have, you can not only manage current privacy compliance risks but also be better prepared to respond to new requirements. As an example, those data maps can show the flows you have in place where you are sharing data – a key to accurately reviewing your third-party risks. In addition to preparing you for existing and new privacy laws, mapping also lets you identify your data flows and minimize the risk of exposure or compromise by better understanding where you are distributing your data. Secondly, companies should think through how to operationalize priority areas and embed them in the business. This might be done by training privacy champions and adopting technology to automate privacy compliance obligations, such as implementing an assessments program that helps you better understand data-related impact.
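To make the first step more concrete, here is a minimal sketch of what a data inventory entry and a third-party flow report might look like; the field names, example assets and the "cloud CRM vendor" are illustrative assumptions, not details from the article.

from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str                                              # e.g. "customer_profiles"
    categories: list[str]                                  # kinds of personal data held
    purpose: str                                           # intended use, supporting data minimization
    shared_with: list[str] = field(default_factory=list)   # third parties receiving the data

def third_party_flows(inventory: list[DataAsset]) -> list[tuple[str, str]]:
    # Every flow where data leaves the organization, the starting point for third-party risk reviews.
    return [(asset.name, party) for asset in inventory for party in asset.shared_with]

if __name__ == "__main__":
    inventory = [
        DataAsset("customer_profiles", ["name", "email"], "account management",
                  shared_with=["cloud CRM vendor"]),
        DataAsset("web_analytics", ["IP address"], "product improvement"),
    ]
    print(third_party_flows(inventory))  # [('customer_profiles', 'cloud CRM vendor')]

Keeping even a lightweight structure like this consistent across teams makes it far easier to answer what you hold, why you hold it, and who you share it with when a new privacy requirement lands.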
End-to-end testing is really where the rubber meets the road, and we get the most reliable tests when sending in requests that actually hit all dependencies and services to form a correct response. Integration testing at the API or frontend level using real microservice dependencies offers substantial value. These tests assess real behaviors and interactions, providing a realistic view of the system’s functionality. Typically, such tests are run post-merge in a staging or pre-production environment, often referred to as end-to-end (E2E) testing. ... What we really want is a realistic environment that can be used by any developer, even at an early stage of working on a PR. Achieving the benefits of API and frontend-level testing pre-merge would save effort on writing and maintaining mocks while still testing real system behaviors. This can be done using canary-style testing in a shared baseline environment, akin to canary rollouts but in a pre-production context. To clarify that concept: we want to run a new version of code in a shared staging environment in such a way that the experimental code won’t break staging for all the other development teams, the same way a canary deploy can go out, break in production, and still not take down the service for everyone.
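One way to picture this canary-style setup is a thin routing layer in the shared environment that sends only explicitly tagged requests to the experimental version; the header name, service URLs and PR identifier below are illustrative assumptions, not details from the article.

# Route requests in a shared staging environment: untagged traffic always hits the
# stable baseline, while a developer can tag their own requests to exercise an
# experimental, per-PR deployment without affecting other teams.
BASELINE = "http://orders.staging.internal:8080"
EXPERIMENTS = {
    "pr-1234": "http://orders-pr-1234.staging.internal:8080",  # hypothetical per-PR deployment
}

def resolve_backend(headers: dict[str, str]) -> str:
    # Only requests carrying the canary header reach the experimental version.
    return EXPERIMENTS.get(headers.get("X-Canary-Version", ""), BASELINE)

if __name__ == "__main__":
    print(resolve_backend({}))                               # other teams -> shared baseline
    print(resolve_backend({"X-Canary-Version": "pr-1234"}))  # PR author -> experimental build

If the experimental build misbehaves, only requests tagged with that header are affected, which is exactly the property that makes a shared staging environment safe for pre-merge testing.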
Neurotechnology has long been used in the field of medicine. Perhaps the most successful and well-known example is the cochlear implant, which can restore hearing. But neurotechnology is now becoming increasingly widespread. It is also becoming more sophisticated. Earlier this year, tech billionaire Elon Musk’s firm Neuralink implanted the first human patient with one of its computer brain chips, known as “Telepathy”. These chips are designed to enable people to translate thoughts into action. More recently, Musk revealed a second human patient had one of his firm’s chips implanted in their brain. ... These concerns are heightened by a glaring gap in Australia’s current privacy laws – especially as they relate to employees. These laws govern how companies lawfully collect and use their employees’ personal information. However, they do not currently contain provisions that protect some of the most personal information of all: data from our brains. ... As the Australian government prepares to introduce sweeping reforms to privacy legislation this month, it should take heed of these international examples and address the serious privacy risks presented by neurotechnology used in workplaces.