Responsible Innovation: Regulations Will Not Stifle Innovation

We have all benefited from the fruits of technological innovation over the centuries. But in some cases, innovation has brought harm to humanity.

In the past decade, innovation in non-deterministic, algorithmic technologies has outpaced the scope of existing regulations, specifically where those technologies are used to process personal data for automated decision-making and profiling.

Where regulations were intended to safeguard civil society within jurisdictions from discrimination and harm, they were not effective in preventing citizens from being adversely impacted by automated decision-making and profiling, and/or ‘Generative AI’ capabilities in Socio-Technical Systems (STS) deployed by Corporations.

Consequently, we have seen discriminatory and harmful outcomes as recorded in repositories and websites maintained by industry bodies such as the AI Incidents Database, AlgorithmWatch, the AIAAIC Repository, as well as the GDPR Enforcement Tracker.

There is a variety of reasons why we are where we are today.

Naomi Klein shared her views in the Guardian, and Anjana Ahuja shared hers in the FT. Meredith Whittaker offers her views in this clip.

Historian and philosopher Yuval Noah Harari offers his views in this article in the Economist.

Some of you might agree that a combination of the following is also a contributor to the adverse outcomes that we have seen and are likely to continue to see unless there is a fundamental change in the innovation culture within Corporations that deploy these STS:

  • A lack of regulation specific to how ‘AI’ should be developed and deployed, and a lack of enforcement around data protection, data privacy and the processing of personal data for automated decision-making and profiling, and/or ‘Generative AI’ capabilities, where regulations do exist.
  • A “Build Fast, Fail Fast” mantra among those driving the adoption of non-deterministic algorithmic technologies, releasing products and services to market quickly with no accountability and without prioritising risk management, compliance, governance and careful consideration of ethical choices when deploying STS with automated decision-making and profiling, and/or ‘Generative AI’ capabilities; and
  • Innovation involving STS being driven by technology, data, profit and revenue growth, rather than by purpose, core values, inclusivity of diverse inputs and multi-stakeholder feedback, with humans and their fundamental rights front and centre in all decision-making.

If you believe that regulations stifle innovation and have proceeded to leverage non-deterministic algorithmic technologies to deploy automated decision-making and profiling, and/or ‘Generative AI’ capabilities in Socio-Technical Systems within your enterprise and to the public, just because you could, we encourage you to take a close look at the Amendments to the draft of the incoming EU AI Act, which were agreed by the European Parliament on 11 May 2023.

Where leaders in Corporations decided to deprioritise the consideration of ethics and the implementation of operational safeguards to mitigate downside risks to recipients of automated decision-making and profiling, and/or ‘Generative AI’ capabilities in Socio-Technical Systems, because there was no regulatory driver to do so, there is now a reason for their CEOs and Boards to think differently.

There are many who don’t believe that the introduction of the EU AI Act, along with other incoming regulations across the EU and the US, is a good idea. Perhaps an understanding of why these laws are being introduced might put the intentions of the Regulators into perspective: there have been too many instances of harm (and the list continues to grow) experienced by the recipients of automated decision-making and profiling capabilities from these non-deterministic algorithmic technologies deployed in Socio-Technical Systems for Regulators not to act.

While there is a perception that Regulators have taken too long to act, the inclusion of ‘Generative AI’ in the Amendments to the draft EU AI Act demonstrates the opposite.

In the US, four federal regulators recently issued their Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems and announced:

“America’s commitment to the core principles of fairness, equality, and justice is deeply embedded in the federal laws that our agencies enforce to protect civil rights, fair competition, consumer protection, and equal opportunity. These established laws have long served to protect individuals even as our society has navigated emerging technologies. Responsible innovation is not incompatible with these laws. Indeed, innovation and adherence to the law can complement each other and bring tangible benefits to people in a fair and competitive manner, such as increased access to opportunities as well as better products and services at lower costs.”

Even Sam Altman, CEO of OpenAI, has called on “US lawmakers to regulate artificial intelligence (AI).”

What remains is enforcement across these jurisdictions to effect change in the behaviours that have introduced these societal and systemic risks.

How will the EU AI Act apply to you?

[All references below are from the DRAFT Compromise Amendments associated with the ‘Proposal for a regulation of the European Parliament and of the Council on harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts’]

Providers vs Deployers

The approved Amendments introduced the reference to ‘Deployers’. This provides a clear distinction between two key stakeholders in the value chain that supply ‘AI systems’ to the recipients of automated decision-making and profiling, and/or ‘Generative AI’ capabilities.

We will illustrate the difference between a ‘Provider’ and a ‘Deployer’ in the following example:

  • An organisation procures an Automated Employment Decision Tool (AEDT) to conduct an online personality assessment as part of its recruitment process.
  • The software company that produces and releases that AEDT is the ‘Provider’.
  • The organisation that decides to license that AEDT from the software company to conduct online personality assessments on candidates is the ‘Deployer’.

There are now clear obligations attributable to ‘Deployers’ in the amended draft EU AI Act [an example is on Page 26 (58 a)]; consequently, any organisation that has licensed third-party applications and/or platforms with embedded automated decision-making and profiling, and/or ‘Generative AI’ capabilities needs to understand what its obligations are if impacted.

We believe that Deployers have a critical role to play and a responsibility to ensure that the STS they decide to deploy with automated decision-making and profiling, and/or ‘Generative AI’ capabilities comply with all regulatory requirements: that they are safe, secure, fit for purpose, fair and explainable, preserve human rights, are provisioned with feedback mechanisms, are contestable, and provide ease of access to the right to redress.

Risk-based Classification of Use Cases

The EU AI Act takes a risk-based approach to regulation. It classifies the types of use cases according to the following categories:

  • Prohibited

Article 5 [Page 128] lists all use cases which are prohibited under the amended draft EU AI Act, on the basis that they “contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.” [Page 126 (15)]

The legal text describes the outcomes of use cases that are deemed to contradict the Union values as outlined.

If the deployers of STS that leverage non-deterministic algorithms processing personal data have not considered the social (‘socio’) elements in their design and deployment, the chances are they will not have conducted any human rights impact or human risk assessments in the process.

  • High-risk

Annex III lists all use cases that are considered to be high-risk. Whilst there are 8 types of use cases listed, we highlight the following that are likely to be operational within and deployed by Corporations:

“1. Biometric and biometrics-based systems

(a) AI systems intended to be used for biometric identification of natural persons, with the exception of those mentioned in Article 5;

(aa) AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5;

Point 1 shall not include AI systems intended to be used for biometric verification whose sole purpose is to confirm that a specific natural person is the person he or she claims to be.”

“4. Employment, workers management and access to self-employment:

(a) AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests;

(b) AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behavior of persons in such relationships.”

“5. Access to and enjoyment of essential private services and public services and benefits:

(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;

(ba) AI systems intended to be used for making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance;”

“8. Administration of justice and democratic processes:

(aa) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda

This does not include AI systems whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure political campaigns from an administrative and logistic point of view.

(ab) AI systems intended to be used by social media platforms that have been designated as very large online platforms within the meaning of Article 33 of Regulation EU 2022/2065, in their recommender systems to recommend to the recipient of the service user-generated content available on the platform.”

Article 29 [Page 41] spells out the ‘Obligations of deployers of high-risk AI systems’, which apply to any Corporation that has procured applications and/or platform solutions from third-party providers that are within the scope of Annex III (High-risk) and deployed internally to their employees, or externally to their customers or the consumers of their digital services.

There will be many high-risk use cases already present in large organisations that have invested in STS over the past few years. Boards and CEOs of these Corporations need to identify and understand the capability requirements and the gaps that exist within their organisations to comply with the obligations laid out in Article 29, and set forth a plan to address them urgently, as this will take considerable time, resources and investment.

It may not be immediately apparent to the Boards and CEOs of Corporations whether their STS (typically deployed as B2C platforms, and as third-party applications and platforms deployed internally) share any characteristics described in any of the prohibited or high-risk use case descriptions. Hence, an independent review and advice may be required.

  • Foundation Models

Foundation models and their accessibility through APIs are described in the Amendments [Page 28 (60e) to Page 29 (60h)].

Article 28b [Page 39] is a new addition to the draft EU AI Act, introduced through the Amendments to address the popularity of ‘Generative AI’. It spells out a set of obligations which the providers of foundation models will need to adhere to.

Article 28 [Page 37] outlines the “Responsibilities along the AI value chain of providers, distributers, importers, deployers or other third party.”

Annex VIII [Page 23] sets out the requirements for information relating to ‘high-risk AI systems’ and foundation models to be submitted for registration. Providers of foundation models [Page 24 (Section C)] will need to note that they are required to provide the:

“Description of the capabilities and limitations of the foundation model, including the reasonably foreseeable risks and the measures that have been taken to mitigate them as well as remaining non-mitigated risks with an explanation on the reason why they cannot be mitigated”?

Incidentally, we highlighted the need to understand emerging and foreseeable risks as part of the need to think critically and think holistically. This requires deployers to have embedded the discipline of risk management into their innovation process and culture. Risk management here is not confined to model risk management; crucially, it extends to risk management at the enterprise level.

We are concerned about Socio-Technical Systems deployed with automated decision-making and profiling, and/or ‘Generative AI’ capabilities that directly and instantly impact people. These will be found in the Prohibited and High-risk categories.

If Corporations assess their deployed STS against the principles outlined in the UK Government’s ‘A pro-innovation approach to AI regulation’ White Paper, how many of the following will be evident?

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance;
  • Contestability and redress.
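
One way to make such an assessment concrete is to require documented evidence against each principle, rather than a bare yes/no. The sketch below is our own minimal illustration, assuming Python and hypothetical evidence records; only the five principle names are taken from the White Paper list above.

# Minimal, hypothetical self-assessment against the five principles listed
# above. The evidence fields and scoring are illustrative, not prescribed by
# the White Paper or the EU AI Act.

PRINCIPLES = [
    "Safety, security and robustness",
    "Appropriate transparency and explainability",
    "Fairness",
    "Accountability and governance",
    "Contestability and redress",
]

def assess(evidence: dict[str, list[str]]) -> dict[str, str]:
    """Mark a principle as 'evident' only if at least one piece of documented
    evidence (e.g. a test report, audit trail or redress procedure) exists."""
    return {
        p: "evident" if evidence.get(p) else "no documented evidence"
        for p in PRINCIPLES
    }

# Example: an STS with safeguards documented for only two of the principles.
for principle, status in assess({
    "Fairness": ["bias testing report, 2023-Q1"],
    "Contestability and redress": ["customer appeal and redress workflow"],
}).items():
    print(f"{principle}: {status}")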

We are concerned that the majority of STS with automated decision-making and profiling, and/or ‘Generative AI’ capabilities have not been deployed with operational safeguards that incorporate transparency obligations, nor were they deployed with provisions for feedback or redress.

Scope

You might be wondering if the incoming EU AI Act will apply to your organisation.

Article 2 of the approved amended draft of the EU AI Act outlines its scope:

“1. This Regulation applies to:

(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union;

(ca) providers placing on the market or putting into service AI systems referred to in Article 5 outside the Union where the provider or distributor of such systems is located within the Union;

(cb) importers and distributors of AI systems as well as authorised representatives of providers of AI systems, where such importers, distributors or authorised representatives have their establishment or are located in the Union;

(cc) affected persons as defined in Article 3(8a) that are located in the Union and whose health, safety or fundamental rights were adversely impacted by the use of an AI system that was placed on the market or put into service in the Union;”

Regardless of where you are domiciled, is your STS caught within the scope of the EU AI Act as defined in Article 2?

What are the penalties for breaches?

Some of the penalties, as listed [Page 75], are:

“3. Non compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 40 000 000 EUR or, if the offender is a company, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher;

3a. Non-compliance of the AI system with the requirements laid down in Article 10 and 13 shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is the higher

4. Non-compliance of AI system or foundation model with any requirements or obligations under this Regulation, other than those laid down in Articles 5, and 10 and 13, shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 5 000 000 EUR or, if the offender is a company, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher”

If your organisation has deployed STS that fall within the prohibited, high-risk or foundation model categories, the cost of non-compliance will need to be provisioned for.
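
To make the ‘whichever is higher’ mechanism concrete, here is a minimal sketch, assuming the fine caps quoted above from the draft Amendments; the function and category names are our own, purely illustrative, and this is not legal advice.

# Illustrative only: maximum administrative fine exposure under the draft
# Amendments quoted above. Each entry pairs a fixed cap in EUR with a
# percentage of total worldwide annual turnover; the applicable maximum is
# whichever of the two is higher.

FINE_CAPS = {
    "article_5_prohibited_practices": (40_000_000, 0.07),
    "articles_10_and_13":             (20_000_000, 0.04),
    "other_obligations":              (10_000_000, 0.02),
    "incorrect_information":          (5_000_000, 0.01),
}

def max_fine(breach: str, annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap."""
    fixed_cap, pct = FINE_CAPS[breach]
    return max(fixed_cap, pct * annual_turnover_eur)

# Example: a company with EUR 2 billion worldwide annual turnover breaching
# Article 5 faces exposure of up to EUR 140 million, since 7% of turnover
# exceeds the EUR 40 million fixed cap.
print(max_fine("article_5_prohibited_practices", 2_000_000_000))  # 140000000.0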

There is also a wider cost to non-compliance beyond the penalties listed above: less tangible costs attributable to loss of reputation, erosion of trust, declining credibility and potential liabilities from civil and class action lawsuits, all of which limit your potential to sustain, let alone grow, your digital business. Consider these alongside the 1-10-100 Rule to determine your Return on Investment (ROI) from investing in responsible innovation.
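
As commonly stated, the 1-10-100 Rule holds that a unit of cost spent on prevention avoids roughly ten units spent on correction and a hundred units spent on failure. A rough illustration, with entirely hypothetical figures:

# Hypothetical illustration of the 1-10-100 Rule applied to responsible
# innovation. The 1/10/100 multipliers are the rule as commonly stated; the
# monetary figure below is invented for the example.

unit_cost = 50_000  # EUR: assumed cost of preventing one class of issue up front

prevention = unit_cost * 1     # safeguards built in before deployment
correction = unit_cost * 10    # remediating the issue after deployment
failure    = unit_cost * 100   # fines, lawsuits and lost trust after harm occurs

print(f"Prevent now:   {prevention:>12,} EUR")
print(f"Correct later: {correction:>12,} EUR")
print(f"Fail in use:   {failure:>12,} EUR")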

Are you certain that you are compliant?

Although the regulators’ focus is on ‘AI’, we must not forget the data that fuels these non-deterministic algorithms.

STS that are deployed with automated decision-making and profiling, and/or ‘Generative AI’ capabilities that impact citizens directly and instantly will be processing a significant amount of personal data. This is the realm of the General Data Protection Regulation (GDPR), which is currently in force in the EU and the UK. A number of the articles within the GDPR apply in instances where automated decision-making and profiling, and/or ‘Generative AI’ capabilities are operational.

How certain are you that your organisation is compliant in these scenarios?

We highlighted in our last article that:

“Algorithms that are non-deterministic in nature and optimised to infer outcomes statistically, based on historical data that may not be representative of its recipients, often leave under-represented, underserved, vulnerable and marginalised members of Civil Society excluded.
Further they may be disadvantaged, discriminated against and/or harmed perpetually unless the algorithm is changed or they have access to the means for redress. This typically occurs when autonomous agents in STS deliver automated decisions and profiling, where recipients do not have agency and are not afforded the opportunity by the deploying organisation for redress.”

The nature of STS with automated decision-making and profiling by autonomous artificial agents has now been recognised by regulators. They have advised how they intend to shape new regulations, while enforcing those that already exist. Deployers of these capabilities can no longer hide behind the technology their leaders decided to invest in and leverage.

Furthermore, the nature of the impact from these STS, specifically on humans who are recipients of automated decision-making and profiling, and/or ‘Generative AI’ capabilities, has put the focus on the “fundamental rights of persons in general, including in the workplace, protection of consumers”. Additionally, the Amendments now extend protection to “the environment, public security, or democracy or the rule of law and other public interests” [Page 102, Article 65(1)], with the reference to democracy being inserted as a result of public access to ‘Generative AI’ capabilities and their potential to generate misinformation and disinformation.

The Regulators have responded to the adverse impacts and threats from the use of STS and ‘Generative AI’ technologies.

It is unlikely that the European Union’s values of “respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child” would have been taken into consideration when your STS was designed, developed and deployed.

If you are a Board Member or CEO of an organisation that has deployed STS in the high-risk use cases outlined in the EU AI Act – either directly or through your third-party solution/service provider – how certain are you that they do not infringe the fundamental rights of the recipients of automated decisions and profiling, and/or ‘Generative AI’ capabilities?

How certain are you that the outcomes of your Digital Transformation initiatives comply with the incoming EU AI Act?

Conformity Assessments

Conformity Assessments or Independent Audits will be required under the draft EU AI Act as well as many of the other new regulations.

In a digital world that is filled with misinformation and disinformation, independent verification of compliance can act as an enabler of trust.

How these independent verifications are carried out will determine the level of trustworthiness. The key elements that define independent audits, as stipulated in the draft EU AI Act, are drawn from the financial services industry.

The critical aspect of independence revolves around which audit criteria are used and how they were defined. The perspectives from which they were derived matter. Audit criteria designed to favour beneficial outcomes for deployers at the expense of fundamental human rights can negate the intent of regulations designed to protect the citizens consuming services from the STS deployed.

The ForHumanity audit criteria that form its Independent Audit Certification Schemes, mapped to Regulations, are unique in that they are crowdsourced, enabling diverse members of civil society to provide direct input into the intent of each criterion, thereby enabling their fundamental rights to be respected and incorporated into the audit criteria.

What will you need to do to comply with the regulations?

New regulations will be introduced to complement existing regulations. Regulations are needed to safeguard citizens, civil society, the environment and democracies from the potential harms that can be realised through the unregulated deployment of these technologies.

We believe that these regulations will drive responsible innovation rather than stifle it. Enforced regulations provide guidance on the appropriate governance of these powerful systems, and the much-needed guardrails within which producers, providers, importers, distributors and deployers of these technologies can operate; intentionally innovating with public safety in mind as a matter of strategic priority, while ensuring that innovation delivers positive, beneficial and valuable outcomes for all stakeholders across the value chain.

Since it will no longer be a matter of choice, what will organisations deploying STS with automated decision-making and profiling and/or ‘Generative AI’ need to change and add to their capabilities, competencies, and capacity to comply with the incoming regulations such as the EU AI Act, if applicable?

Throughout the Go Digital and Responsible Innovation series of articles, we have touched upon elements that are the key components of the draft EU AI Act and other regulatory requirements. These elements also make up our Responsible Innovation Framework.

If your organisation has deprioritised, or has not integrated, any or all of the following, along with diverse inputs and multi-stakeholder feedback, considerable effort and investment will be required to become compliant with existing and new regulations. Here are some key factors for your Board and CEO to consider:

  • Ethics, Data Ethics and Algorithm Ethics
  • Data Protection, Data Privacy and Data Governance
  • Risk Management, Compliance and Assurance
  • Governance, Oversight and Accountability
  • Transparency Obligations, including documentation
  • Explainability and Interpretability (if applicable)
  • Quality Management
  • Security, Robustness and Resilience
  • Continuous Monitoring, including proactive controls
  • Processes, Procedures and feedback mechanisms for contestability and the right to redress

Let’s look at a new recital in the Amendments to the draft EU AI Act that clearly describes what a Deployer needs to do [Page 26 (58 a)]:

Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from how such AI systems are used. Deployers of high-risk AI system therefore play a critical role in ensuring that fundamental rights are protected, complementing the obligations of the provider when developing the AI system. Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge of the context of use, the people or groups of people likely to be affected, including marginalised and vulnerable groups.

Deployers should identify appropriate governance structures in that specific context of use, such as arrangements for human oversight, complaint handling procedures and redress procedures, because choices in the governance structures can be instrumental in mitigating risks to fundamental rights in concrete use-cases.

In order to efficiently ensure that fundamental rights are protected, the deployer of high-risk AI systems should therefore carry out a fundamental rights impact assessment prior to putting it into use. The impact assessment should be accompanied by a detailed plan describing the measures or tools that will help mitigating the risks to fundamental rights identified at the latest from the time of putting it into use.

If such plan cannot be identified, the deployer should refrain from putting the system into use. When performing this impact assessment, the deployer should notify the national supervisory authority and, to the best extent possible relevant stakeholders as well as representatives of groups of persons likely to be affected by the AI system in order to collect relevant information which is deemed necessary to perform the impact assessment and are encouraged to make the summary of their fundamental rights impact assessment publicly available on their online website.
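
Reading the recital as a sequence of obligations, a deployer’s pre-deployment workflow might be sketched as follows. This is our own illustrative reading of the recital above, not a prescribed procedure; the class, field and function names are hypothetical.

# Hypothetical sketch of the deployer workflow described in the recital above:
# carry out a fundamental rights impact assessment (FRIA) before putting the
# system into use, attach a mitigation plan, refrain from deployment if no plan
# can be identified, notify the national supervisory authority and consult
# likely-affected groups, and (as encouraged) publish a summary.

from dataclasses import dataclass, field

@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    context_of_use: str
    affected_groups: list[str]          # including marginalised and vulnerable groups
    identified_risks: list[str]
    mitigation_plan: list[str] = field(default_factory=list)  # measures or tools per risk
    authority_notified: bool = False
    stakeholders_consulted: bool = False
    summary_published: bool = False     # encouraged in the recital, not gating

def may_put_into_use(fria: FundamentalRightsImpactAssessment) -> bool:
    """Per the recital: if no mitigation plan can be identified, the deployer
    should refrain from putting the system into use; notification of the
    authority and stakeholder consultation accompany the assessment."""
    if not fria.mitigation_plan:
        return False
    return fria.authority_notified and fria.stakeholders_consulted

# Example: the assessment exists, but the authority has not yet been notified.
fria = FundamentalRightsImpactAssessment(
    system_name="AEDT personality screening",
    context_of_use="candidate shortlisting in recruitment",
    affected_groups=["job applicants"],
    identified_risks=["indirect discrimination against protected groups"],
    mitigation_plan=["human review of all rejections", "periodic bias audit"],
)
print(may_put_into_use(fria))  # False until notification and consultation are recorded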

We believe that an organisation that is capable of innovating responsibly and complying with the regulatory requirements specified in the EU AI Act and other regulations will be in a strong position to innovate for growth. It will continue to successfully attract and retain the best talent, suppliers and customers, thereby meeting its obligations towards shareholders, investors and the broader groups of stakeholders that lend support and legitimacy to the business.

As we said earlier, regulations do not stifle innovation. They serve to enhance corporate responsibility and accountability, and provide guidance in fulfilling your fiduciary duties towards all your stakeholders and the wider society for the safety, health and wellbeing of everyone. In practical terms, they serve to protect market value and strengthen business resilience.

Is the value of Trust worth considering?

The value of trustworthiness cannot be assumed, nor should it be underestimated.

Your brand may be trustworthy at present, based on a historical reputation earned through outstanding customer service and product or service quality, but adverse outcomes from undesirable digital interactions and experiences can quickly erode that trust. Regulatory fines and/or lawsuits can certainly accelerate that erosion.

Delegating digital interactions with customers and consumers of your digital services to autonomous artificial agents that you cannot control – agents that infer decisions by means of an algorithm searching for regularities in patterns, based on statistical analysis and probabilistic outputs (which may proffer spurious correlations) – should be considered carefully.

Corporations ought to proceed with caution, making sure the machines have been calibrated successfully towards the desired outcomes, with continuous monitoring in place throughout the model lifecycle, from inception through to decommissioning, particularly if the task being performed is different from the one specified at the outset.

Profiling that might not be explainable, based on historical data and personal data possibly collected without informed consent through third parties, is likely to be a risk that your Board and CEO may not wish to absorb, given the sanctions specified under the EU AI Act, EU Digital Services Act, EU AI Liability Act and other regulations when introduced.

The cumulative effects of cascading risks could be disastrous for the Corporation in terms of financial losses and reputational damage, which may be irreversible, let alone for trustworthy compliance with Quality Assurance, Modern Slavery, Employment, Anti-discrimination, Privacy and other related laws across the different jurisdictions.

CEOs and their Boards must act locally, but think globally across the value chain, particularly where third- and fourth-party suppliers may be involved. We have consistently said that trust is the currency for engagement in the digital world.

As Civil Society becomes better informed about the limitations and downside risks of STS deployed with automated decision-making and profiling, and/or ‘Generative AI’ capabilities, there is also increased awareness of the adverse outcomes that people could be susceptible to if these STS have not been deployed with appropriate operational safeguards and are not compliant with new legislation and regulations, such as the EU AI Act.

Perhaps it is time for the Boards and CEOs of organisations innovating with these powerful (as yet immature) emerging technologies to reconsider their innovation culture and existing ‘playbooks’, and then decide how best they can innovate within, rather than outside, the boundaries of the regulations, to earn the valuable trust required for engagement and growth in the digital world.

As a member of Civil Society, would you trust an STS that is compliant with the EU AI Act more than one that is not compliant?

The sooner Corporations mature their capabilities, competencies and capacity to comply, the easier it will be for them to meet the regulatory requirements in the jurisdictions they operate within and innovate responsibly.

‘Artificial Intelligence’ is a technology that by its nature cannot be contained or localised. A global legal and regulatory framework will be necessary to ensure public health and safety across all countries. In the meantime, Corporations should endeavour to apply the most stringent quality standards to conduct their operations safely and successfully across borders, to the satisfaction of their shareholders, investors, diverse stakeholders and the wider society.

While some organisations have started their responsible innovation journey towards trustworthiness, why wait to start differentiating through trustworthiness?


Chris Leong ?is a Fellow, Certified Auditor (FHCA) and a Member of the Ethics Committee at ForHumanity and the Director of Leong Solutions Limited, a UK-based Management Consultancy and Licensee of the ForHumanity Independent Audit of AI Systems, helping you succeed in your digital business transformation through Responsible Innovation and Differentiate Through Trustworthiness.

Maria Santacaterina ?is a Fellow, Certified Auditor (FHCA) and a Member of the Ethics Committee at ForHumanity, CEO and Founder of SANTACATERINA, a UK-based Global Strategic Leadership & Board Executive Advisory, helping you revitalise your Core Business Strategy, Create Enduring Value and Build a Sustainable Digital Future.
