Digital health ethics

Morals are a person's standards of behavior or beliefs concerning what is and is not acceptable for them to do. Ethics is defined as a moral philosophy or code of morals practiced by a person or group of people. An example of ethics is the code of conduct set by a business or profession. The first known business code of ethics was the Code of Hammurabi, written in the 18th century B.C.

Within an ethical context, it is important to discuss how the commercialization of medicine has distorted the emphasis among the basic tenets of medical ethics (autonomy, justice, beneficence, and non-maleficence), and how this unbalanced emphasis has created serious barriers to improving the health care system.

What is unethical is not necessarily illegal. It is therefore important for digital health entrepreneurs and users to stay current with digital health law and regulatory updates, which are changing rapidly as a result of COVID-19.

Here is the digital health alliance code of ethics, which includes: protect patients' rights to privacy, consent, and knowledge of data use; be transparent and accountable about how patient-generated data is being used, stored, and shared; and do not sell identified or de-identified patient data without the patient's explicit knowledge and consent.
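The consent requirement above can be enforced programmatically at the point where data leaves a system. Below is a minimal sketch, with hypothetical names (`PatientRecord`, `release_data`), of a consent-gated release check that refuses to share patient-generated data for any purpose the patient has not explicitly approved:

```python
from dataclasses import dataclass, field


@dataclass
class PatientRecord:
    """Hypothetical patient-generated data record with per-purpose consent."""
    patient_id: str
    data: dict
    consented_purposes: set = field(default_factory=set)  # e.g. {"treatment"}


def release_data(record: PatientRecord, purpose: str) -> dict:
    """Return the data only if the patient explicitly consented to this purpose.

    Sharing or selling without consent is refused, whether or not the
    data has been de-identified.
    """
    if purpose not in record.consented_purposes:
        raise PermissionError(
            f"No explicit consent from {record.patient_id} for purpose: {purpose}"
        )
    return record.data


record = PatientRecord("p-001", {"daily_steps": 8200}, {"treatment"})
release_data(record, "treatment")   # allowed: consent was given
# release_data(record, "sale")      # would raise PermissionError
```

The design choice here is that consent is opt-in per purpose: the default is refusal, which mirrors the "explicit knowledge and consent" language of the code of ethics.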

The ethical practice of AI is now front and center. The Bosch code of ethics states:

  1. All Bosch AI products should reflect our “Invented for life” ethos, which combines a quest for innovation with a sense of social responsibility.
  2. AI decisions that affect people should not be made without a human arbiter. Instead, AI should be a tool for people.
  3. We want to develop safe, robust, and explainable AI products.
  4. Trust is one of our company’s fundamental values. We want to develop trustworthy AI products.
  5. When developing AI products, we observe legal requirements and orient to ethical principles.

Senator Maria Cantwell (D-WA) and Democratic colleagues have proposed a sweeping data privacy bill that would require covered entities to audit certain "algorithmic decision-making" systems that use machine learning (ML) and other forms of artificial intelligence (AI) to facilitate important decisions about consumers, such as credit or employment decisions. Unveiled in November, the Consumer Online Privacy Rights Act (COPRA) would force companies to conduct annual impact assessments of any covered AI/ML systems in an effort to mitigate bias and other potentially negative consequences of automated decision-making.
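COPRA does not prescribe how an impact assessment is computed, but a common first step in auditing automated decisions for bias is comparing selection rates across groups. The sketch below (a hypothetical illustration, not the bill's methodology) computes the disparate impact ratio, where a value below 0.8 is the traditional "four-fifths rule" flag for adverse impact worth investigating:

```python
from collections import defaultdict


def disparate_impact_ratio(decisions):
    """Ratio of the lowest to the highest group selection rate.

    `decisions` is a list of (group, approved) pairs, e.g. drawn from a
    credit-scoring model's output log over the audit period.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())


# Toy annual audit: group A approved 8 of 10, group B approved 4 of 10
sample = ([("A", True)] * 8 + [("A", False)] * 2 +
          [("B", True)] * 4 + [("B", False)] * 6)
ratio = disparate_impact_ratio(sample)  # 0.4 / 0.8 = 0.5, below the 0.8 flag
```

A real assessment would go further (statistical significance, intersectional groups, proxy features), but even this single ratio makes the audit obligation concrete.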

Fortunately, the conversation about what should be in an AI Code of Ethics has started.

Eleven overarching ethical values and principles have emerged from this content analysis of published AI ethical guidelines from around the world. Listed by the number of sources in which they were featured, these are: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity.

The WHO guidance outlines six main ethical principles for developers, governments, and society to ensure AI tools benefit the public.

Here is a list of AI principles from the Asilomar conference:

Ethics and Values

6) Safety: AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

7) Failure Transparency: If an AI system causes harm, it should be possible to ascertain why.

8) Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.

9) Responsibility: Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.

10) Value Alignment: Highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.

11) Human Values: AI systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.

12) Personal Privacy: People should have the right to access, manage and control the data they generate, given AI systems' power to analyze and utilize that data.

13) Liberty and Privacy: The application of AI to personal data must not unreasonably curtail people's real or perceived liberty.

14) Shared Benefit: AI technologies should benefit and empower as many people as possible.

15) Shared Prosperity: The economic prosperity created by AI should be shared broadly, to benefit all of humanity.

16) Human Control: Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.

17) Non-subversion: The power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.

18) AI Arms Race: An arms race in lethal autonomous weapons should be avoided.

This team itemizes eight tenets of sound ethics to guardrail the development and use of AI in pathology and lab medicine, most if not all of which seem readily generalizable to other medical specialties.

Here is a primer on digital medicine. Digital medicine describes a field concerned with the use of technologies as tools for measurement and intervention in the service of human health.

Silicon Valley has changed from a source of hope to a source of immoral hype and unethical practices. The central challenge ethics owners are grappling with is negotiating between external pressures to respond to ethical crises at the same time that they must be responsive to the internal logics of their companies and the industry. On the one hand, external criticisms push them toward challenging core business practices and priorities. On the other hand, the logics of Silicon Valley, and of business more generally, create pressures to establish or restore predictable processes and outcomes that still serve the bottom line.

BIG DIGITAL HEALTH, its offspring, runs the same risk. Ethical issues abound in digital health as they apply to:

  1. personal health data
  2. artificial intelligence
  3. facial recognition
  4. reconciling the ethics of business with the ethics of medicine
  5. manipulating patients and doctors for profit
  6. conflict of interest and busted trust
  7. personal health information ownership rights
  8. social media mining for health applications
  9. biometric technologies
  10. Alexa spying
  11. AI ethics
  12. Telebehavioral health ethics

The bottom line: Your data is a valuable asset and is for sale online.

Here are the reasons why it is so hard to do the right thing, particularly if you are a physician entrepreneur.

There are many reasons why people fear physician entrepreneurs:

1. Because they are afraid they will place the profit motive above patient interests.

2. Because they don't trust "businesspeople" and, when it comes to medicine, "money is dirty" and the root of all evil.

3. Because they think entrepreneurship is about creating a business.

4. Because they think entrepreneurs are dishonest.

5. Because they think it corrupts the professionalism of medicine and encourages conflicts of interest.

6. Because they think it attracts the wrong kind of person into medicine.

7. Because they think it is a waste of a medical school education and has no place in the curriculum.

8. Because they are fed up with "high priced suits" who rip off the system without adding value.

9. Because they don't think doctors can do both and should stick to medicine.

10. Because they think doctors are innately lousy business people and should just pay attention to taking care of patients.

For physician entrepreneurs, the challenge is to reconcile the ethics of business with the ethics of medicine and their personal ethical codes by practicing compassionate capitalism. When there are conflicts between personal, medical, and business ethics, moral injury can result.

Both research and real life have shown that overly loyal people are more likely to participate in unethical acts to keep their jobs and to be exploited by their organizations. What can you do to harness the benefits of loyalty while mitigating the risks?

  • First, if you see something, say something. Although your loyalty to your organization may lead you to worry about "rocking the boat," remember that silence is often what enables wrongdoing to continue.
  • Don't compete — collaborate. When workplaces get competitive, people start to lose sight of what is ethical and unethical. Seeking out ways to collaborate with coworkers can increase the chances of behaving ethically.
  • Shift your perspective. When you find yourself in a fraught, loyalty-influenced situation, try taking a step back and changing how you think about it. For example, step back and think about the situation you are facing from a distanced, third-person (vs. first-person) perspective.

Here are some ideas on how to address the inequities in SickcareUSA, Inc.

Here are some articles addressing the ethics of artificial intelligence in sickcare.

Here are the ethical challenges and opportunities of performing decentralized clinical trials.

This movement takes the emphasis on the agent (who should I be) and the receiver of my actions (the environment or patient) and focuses it on the relationship between the two entities. So, it’s not about me; it’s not about you – but it’s about the relationship that is between you and me.

In other words, it’s no longer about either Juliet or Romeo, but it’s about their love. It’s not about this party or another party but about politics. It’s not about citizens, but it’s about citizenship.

Critics are justified in exposing those who violate that social contract that places the interests of patients first and profits second. Beyond that, marginalizing and stigmatizing physician entrepreneurs is unjustified and will interfere with us innovating our way out of the US sick care mess.

Arlen Meyers, MD, MBA is the President and CEO of the Society of Physician Entrepreneurs (@SoPEOfficial on Twitter) and Co-editor of Digital Health Entrepreneurship.

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Updated 5/2023

Aaron Hattaway, MD, MBA, CPE

Physician Leader | Executive | Entrepreneur | Change agent with proven success creating innovative data-driven solutions

5y

Whether it is physicians, "high priced suits", tech engineers, or a kid with a computer in his garage, someone has to lead us out of our current broken system. Of all the options, I believe physicians are best suited to keep the patient squarely focused at the forefront of the vision for change.

Tom Davis MD FAAFP

Value-Based Care Expert | Founding Partner of First VBC Health System | Helping Professionals Leverage VBC to Improve Patient Care

5y

Arlen, well said. Someone's going to curate the patient's care and the system that delivers it. Who better than the clinician? These negative connotations are used to keep clinicians out of the decision-making process, which is exactly where we should be. https://www.tomdavisconsulting.com/shame/

Bob Mogue

Managing Director at Concept2Exit (C2E) Group

5y

Well stated, Arlen! We need physicians to help us innovate our way out of the U.S. sick care mess.
