
April 23, 2024

Navigating the Ethical Minefield of AI 

By IT Revolution

As a business leader, you know that artificial intelligence (AI) is no longer just a buzzword—it’s a transformative force that is reshaping every industry, redefining customer experiences, and unlocking unprecedented efficiencies. 

In his groundbreaking book, Adaptive Ethics for Digital Transformation, Mark Schwartz shines a light on the moral challenges that arise as companies race to harness the power of AI. He argues that our traditional, rule-based approaches to business ethics are woefully inadequate in the face of the complexity, uncertainty, and rapid change of the digital age.

So, as a leader, how can you ensure that your organization is wielding AI in a way that is not only effective but also ethical? 

Here are some key takeaways from Schwartz’s book that can help you navigate this new terrain:

Cultivate a Culture of Ethical Awareness and Accountability

Too often, discussions about AI ethics are siloed within technical teams or relegated to an afterthought. Schwartz stresses that ethical considerations must be woven into the fabric of your organization’s culture. This means actively encouraging all employees, from data scientists to business leaders, to raise ethical questions and concerns.

Foster an environment where it’s not only acceptable but expected to hit the pause button on an AI initiative if something doesn’t feel right. Celebrate those who have the courage to speak up, even if it means slowing down progress in the short term. By making ethics everyone’s responsibility, you can catch potential issues early before they spiral out of control.

Embrace Humility and Adaptability

One of the most dangerous traps in the realm of AI is overconfidence. We may be tempted to believe that we can anticipate and control every possible outcome of the intelligent systems we create. But as Schwartz points out, the reality is that we are often venturing into uncharted territory.

Instead of clinging to a false sense of certainty, Schwartz advises embracing humility and adaptability. Approach AI initiatives as ongoing ethical experiments. Put forward your best hypotheses for how to encode human values into machine intelligence, but be prepared to continuously test, learn, and iterate.

This means building mechanisms for regular ethical review and course correction. It means being willing to slow down or even shut down an AI system if unintended consequences emerge. In a world of constant change, agility is not just a technical imperative, but an ethical one.

Make Transparency and Interpretability a Priority

One of the biggest risks of AI is the “black box” problem—the tendency for the decision-making logic of machine learning models to be opaque and inscrutable. When we can’t understand how an AI system arrives at its conclusions, it becomes nearly impossible to verify that it is operating in an ethical manner.

Schwartz emphasizes the importance of algorithmic transparency and interpretability. Strive to make the underlying logic of your AI systems as clear and understandable as possible. This may require investing in tools and techniques for explaining complex models, or even sacrificing some degree of performance for the sake of transparency.
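One widely used, model-agnostic starting point for this kind of interpretability work is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The sketch below is purely illustrative, not a technique from the book; the `black_box` function and toy data are invented stand-ins for a real model and dataset.

```python
import random

# A stand-in "black box": any callable mapping a feature vector to a score.
# This hypothetical model secretly depends almost entirely on feature 0.
def black_box(x):
    return 3.0 * x[0] + 0.1 * x[1]

def permutation_importance(model, X, y, n_features):
    """Estimate each feature's importance by shuffling it and measuring
    how much the model's squared error grows relative to a baseline."""
    def mse(X_):
        return sum((model(x) - t) ** 2 for x, t in zip(X_, y)) / len(y)

    baseline = mse(X)
    rng = random.Random(0)  # fixed seed so the audit is reproducible
    importances = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [list(x) for x in X]
        for i, v in enumerate(col):
            X_perm[i][j] = v
        importances.append(mse(X_perm) - baseline)
    return importances

# Toy data generated from the same relationship the model encodes.
X = [[i, (i * 7) % 5] for i in range(20)]
y = [black_box(x) for x in X]

imp = permutation_importance(black_box, X, y, n_features=2)
# imp[0] should dwarf imp[1], making the model's reliance on
# feature 0 visible without opening the black box itself.
```

Even this crude audit surfaces which inputs actually drive a model's decisions, which is the raw material for the stakeholder conversations Schwartz describes. Production systems would reach for mature tooling rather than a hand-rolled loop.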

The goal is to create AI systems that are not just high-performing, but also accountable and auditable. By shining a light into the black box, you can build trust with stakeholders and ensure that your AI is aligned with your organization’s values.

Keep Humans in the Loop

Another key ethical principle that Schwartz stresses is the importance of human oversight and accountability. Even as AI becomes more sophisticated, it is critical that we resist the temptation to fully abdicate decision-making to machines.

Establish clear protocols for human involvement in AI-assisted decisions, especially in high-stakes domains like healthcare, criminal justice, and financial services. Create mechanisms for human review and override of AI recommendations.
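In code, such a protocol often takes the shape of a confidence gate: the model acts alone only when it is confident in either direction, and everything near the boundary is escalated to a person. The sketch below is a minimal illustration under assumed names and thresholds; nothing here is prescribed by the book.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    decided_by: str  # "model" or "human"

def decide(score: float, review_queue: list,
           high: float = 0.9, low: float = 0.1) -> Decision:
    """Auto-decide only when the model is confident either way;
    borderline cases go to a human reviewer who can override."""
    if score >= high:
        return Decision("approve", "model")
    if score <= low:
        return Decision("deny", "model")
    review_queue.append(score)  # the human review-and-override point
    return Decision("pending_review", "human")

queue = []
confident = decide(0.95, queue)   # decided_by == "model"
borderline = decide(0.50, queue)  # decided_by == "human"
```

The thresholds (0.9 and 0.1 here are arbitrary) are themselves an ethical choice: widening the review band trades throughput for oversight, and in high-stakes domains that band should be generous.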

Importantly, Schwartz cautions against using AI as a scapegoat for difficult decisions. We must be careful not to simply “blame the algorithm” when thorny ethical trade-offs arise. At the end of the day, it is human leaders who bear the responsibility for the AI systems they choose to deploy and the outcomes they generate.

Use AI as a Mirror to Examine Societal Biases

One of the most powerful ideas in Schwartz’s book is the notion of using AI as a tool for ethical introspection. Because AI models are trained on historical data, they often reflect and amplify the biases and inequities that are embedded in our society.

Rather than seeing this as a flaw to be ignored or minimized, Schwartz encourages leaders to seize it as an opportunity. By proactively auditing your AI systems for bias, you can surface uncomfortable truths about the way your organization and society operate. This can spark much-needed conversations about fairness, inclusion, and social responsibility.
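A first-pass version of such an audit can be as simple as comparing selection rates across groups, a rough proxy for demographic parity. The sketch below uses an invented decision log; real audits involve richer metrics and, crucially, domain experts interpreting the gaps.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per group from a log of (group, approved) pairs:
    a crude demographic-parity check, not a full fairness analysis."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical audit log of model decisions.
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(log)
gap = max(rates.values()) - min(rates.values())
# A large gap is the "uncomfortable truth" worth surfacing: it flags
# a disparity to investigate, not a verdict on its cause.
```

The point of the exercise is exactly what Schwartz describes: the number itself settles nothing, but it forces the conversation about where the disparity comes from and what the organization owes in response.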

In this way, AI can serve as a catalyst for positive change. By holding up a mirror to our collective blind spots, AI can challenge us to confront long-standing injustices and build a more equitable future.


As you embark on your own digital transformation journey, the insights from Adaptive Ethics for Digital Transformation provide an invaluable roadmap for navigating the ethical challenges of AI. By cultivating a culture of ethical awareness, embracing humility and adaptability, prioritizing transparency and human oversight, and using AI as a tool for introspection, you can harness the power of this transformative technology in a way that upholds your values and benefits society as a whole.

The path forward won’t always be clear or easy. But with the right ethical framework and a commitment to ongoing learning and adaptation, you can lead your organization confidently into the age of AI—and create a future that you can be proud of.

About The Authors

IT Revolution

Trusted by technology leaders worldwide. Since publishing The Phoenix Project in 2013, and launching DevOps Enterprise Summit in 2014, we’ve been assembling guidance from industry experts and top practitioners.
