
Nuno Paiva

  NOS
Why Does Responsible AI Matter?

Why does Responsible AI (RAI) matter?

“The most important thing is not life, but the good life.” This quote, attributed to Socrates, resonates profoundly in today's rapidly evolving technological landscape, where our pursuit of a good moral life is increasingly intertwined with the tools we develop and use. Technology is not ethically neutral; it significantly shapes our values, behaviours, and societal norms.

AI can make decisions that impact our lives, from recommending products to making healthcare or hiring decisions. This is why organisations must consider ethics—not just their business goals—when developing AI systems. Yet, many companies struggle to balance their ethical values with their day-to-day practices.

The AI Incident Database[1] highlights the challenges posed by AI systems by tracking instances where they have caused harm or near-harm. In 2023, 123 incidents were recorded, a 32% increase over 2022, continuing a steady rise in reported cases over recent years. While high-stakes applications, such as the predictive AI tools developed during the COVID-19 pandemic for patient diagnosis and triage[3], exemplify the potential consequences of poorly implemented AI systems, these concerns are not limited to critical situations. For instance, the case of Staples[2], which varied online prices based on user location and demographics, caused reputational damage and illustrates how AI-driven practices can lead to significant public backlash. Notably, this incident was not reported in the AI Incident Database, underscoring the issue of underreporting. This upward trend highlights the urgent need for organisations to prioritise RAI practices to mitigate risks and prevent further harm.

However, defining what constitutes RAI is not straightforward. Recent research[4] points out the challenges of inconsistent terminology and overlapping concepts like "trustworthy AI" and "ethical AI." These terms are often used interchangeably, creating confusion and making it harder for organisations to understand and implement RAI principles effectively. Simply promoting trust in AI systems is insufficient. To truly matter, RAI must be rooted in ethical considerations that align with societal values and legal norms.

A clearer definition of RAI, proposed through an analysis of 254 papers, emphasises a human-centred approach—prioritising the well-being, rights, and needs of individuals affected by AI systems. This approach ensures user trust by promoting ethical decision-making that is fair, accountable, and consistent with societal laws and norms. It also includes sustainability, ensuring that AI systems consider long-term societal and environmental impacts. Additionally, RAI ensures that automated decisions are explainable to users and that their privacy is preserved through secure implementations.

What lies beneath Responsible AI?

While the importance of RAI is clear, it is underpinned by a robust framework designed to integrate both technical and ethical considerations. This research-proposed framework[4], visually represented in Figure 1, emphasises the interdependence of the technical and ethical pillars of RAI. Together, these pillars are managed through responsible governance, resulting in systems perceived by stakeholders as trustworthy. To illustrate what’s at stake for each pillar, we will use a hiring system as the running use case, demonstrating how these principles apply in practice.

Figure 1 – Research-proposed RAI Framework

Technical Pillars

Ethical Pillars

Trustworthiness and Human-Centred Design

The RAI framework ensures that the technology not only operates adequately but does so ethically and safely. It integrates technical and ethical pillars to create systems that are explainable, secure, fair, and compliant with regulations. The use of real-world processes, such as in a hiring system, highlights how these pillars work together to foster trust and accountability, illustrating that RAI is not just about technology—it’s about its impact on people and society.

What are the essential RAI Governance Frameworks?

As AI continues to evolve, governments, companies, and researchers are developing frameworks to ensure the responsible use of AI systems. Here are some key types of documents related to RAI governance:

Given the diversity of RAI governance frameworks, organisations must acknowledge that a "one-size-fits-all" approach is inadequate. To effectively implement RAI, organisations need to assess their specific contexts, capabilities, and objectives. This assessment is crucial for selecting and adapting frameworks that align with their ethical commitments and operational realities.

For example, a start-up developing an AI-driven hiring platform may prioritise implementing the OECD AI Principles to ensure fairness, transparency, and accountability in its algorithms. By focusing on these ethical design principles, the start-up can build trust with users and differentiate itself in the marketplace.

In contrast, a multinational corporation using the start-up's hiring system must navigate more complex compliance challenges. This organisation is likely to emphasise compliance with the EU AI Act, which mandates thorough accountability and risk management for high-risk AI applications. The multinational's concerns may include ensuring robust reporting, bias detection, and comprehensive data governance to mitigate reputational risks and comply with stringent regulations.

Thus, while both the start-up and the multinational are engaged in algorithmic hiring, their different priorities lead them to adopt distinct RAI frameworks. The start-up focuses on ethical design principles to foster trust, whereas the multinational favours compliance and risk management to safeguard operations. This example illustrates the importance of tailoring RAI frameworks to meet the unique needs and challenges of various organisations.

How Mature is Your RAI Approach?

Over the past four years, McKinsey, a consultancy company, has identified "AI High Performers"[13], companies that derive over 20% of their EBIT from AI, excelling in strategy, talent, and technology. Their 2022 Digital Trust survey [14] shows a strong correlation between RAI practices and business performance; organisations prioritising digital trust see annual revenue and EBIT growth of at least 10% more often than their peers.

Although we don't have research to establish whether EBIT drives RAI programmes or vice versa, studies suggest that organisations demonstrating strong performance also tend to be better prepared, with faster and more robust practices. Whether it is RAI pushing EBIT or the other way around, these organisations are clearly better positioned for success. For example, 70% of leaders in digital trust have adopted automated models that prevent failures, compared to less than 40% of others. Additionally, the increasing focus on fairness, explainability, and security demands new skills from AI teams, which are developing greater expertise in validating and debugging models and discovering new knowledge as they deploy them.

Despite progress among top performers, advancements in addressing AI risks have been slow across the industry. According to McKinsey's State of AI 2022 survey[15], only 22% of companies adequately explain their AI decision-making processes.

A survey by BCG [16], another leading consulting firm, highlights a gap between intention and execution—while 42% of organisations perceive AI as a strategic priority, only 19% have fully implemented RAI programmes. Among the 16% of "RAI Leaders," RAI is integrated into their broader corporate responsibility strategies, which increasingly align with sustainable development goals, such as reducing carbon footprints and promoting environmental stewardship.

A relevant example in the telecommunications sector is NOS Comunicações, a leading company in Portugal. The organisation recently undertook a project to enhance mobile network performance by deactivating network elements during periods of low usage, using customer data to optimise this process while maintaining service quality. While this initiative is a commendable way to contribute to the SDGs and represents a commitment to RAI principles, it is also a less contentious use case, since it leads to cost savings for the company through improved operational efficiency and reduced energy consumption. Nonetheless, the prioritisation and effective execution of such projects contribute significantly to building a culture of responsibility and innovation, reinforcing the importance of integrating RAI into the core business strategy.

This integration is further addressed in key survey findings that show significant differences between RAI Leaders and other organisations:

Analogous to Tom Davenport’s work on analytics [17], where the adoption of advanced analytics capabilities provided companies with a competitive edge, today’s leading organisations are similarly gaining advantages by embedding RAI into their core values. Just as analytics leaders thrived by prioritising data-driven decision-making, those that emphasise RAI are positioning themselves for sustainable long-term success.

Essential Recommendations for Aspiring RAI Leaders: Insights from a Data Science Manager

BCG’s survey also identifies the organisational constraints that limit organisations’ ability to implement RAI initiatives, which can be grouped into two main reasons:

Despite these challenges, there is always potential for progress, particularly through the proactive role of data science managers. In my experience, I have seen how AI initiatives can transform organisations; equally important is fostering RAI practices that meet business goals and ethical standards. As leaders, we must set the tone by demonstrating commitment to RAI, being transparent about ethical challenges, and involving our teams in crafting solutions. This approach cultivates accountability and shared responsibility.

Here are actionable recommendations for aspiring RAI leaders:

1. Focus on Training and Upskilling:

Technical teams in AI/ML must acknowledge that their decisions can significantly impact society. Although organisations may take time to fully establish RAI practices, early preparation is essential to accelerate this journey. Training is crucial for raising awareness; however, quality hands-on RAI education is limited. For instance, research [18] found that only 22 of 186 machine learning courses at leading U.S. universities included ethics-related content, exposing a gap in technical training.

To address this, leveraging existing research [19], we developed a hands-on course aimed at improving the ethical use of AI in decision-making, with a focus on enhancing AI explainability. Having identified the challenges of providing users with clear information, we blend techniques from explainable AI (XAI) with user experience (UX) design. The course introduces a Question-Driven Design Process that aligns user needs with the selection and implementation of XAI techniques, fostering collaboration between designers and AI engineers. Through practical applications, participants learn to tackle the design challenges of AI systems following the methodology described in Figure 2, using a practical loan application case study. For instance, designing explanations for a risk manager who needs to compare similar loan applications calls for a contrastive explainability method, while for the end user (the customer), counterfactual explanations can provide actionable insights that improve the likelihood of loan approval (e.g., how much a clean default history can change the loan decision). The course is now available as an elective in a partner university's master's programme, expanding its impact on future professionals.

Figure 2 – XAI Question-Driven Design Methodology
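The counterfactual idea from the loan case study can be illustrated with a minimal sketch: a brute-force search over one feature of a toy, glass-box scoring rule. The weights, threshold, and feature names below are illustrative assumptions, not the course's actual model.

```python
def approve(applicant):
    """Toy glass-box rule: approve when the weighted score clears a threshold.
    Weights and threshold are illustrative assumptions."""
    score = (
        0.5 * (applicant["income"] / 1000)   # monthly income
        - 2.0 * applicant["defaults"]        # number of past defaults
        + 0.1 * applicant["years_employed"]
    )
    return score >= 1.0

def counterfactual(applicant, feature, candidates):
    """Return the first candidate change to one feature that flips the decision."""
    for value in candidates:
        changed = {**applicant, feature: value}
        if approve(changed) != approve(applicant):
            return changed
    return None

applicant = {"income": 2000, "defaults": 2, "years_employed": 3}
print(approve(applicant))                             # False: rejected
print(counterfactual(applicant, "defaults", [1, 0]))  # flips only with a clean default history
```

In this toy rule, the counterfactual tells the customer that clearing past defaults, rather than a marginal income change, is what flips the decision: exactly the kind of actionable insight the course targets.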

2. Applying RAI Principles to Real-World Projects:

The next step was to identify projects for applying RAI principles. While fairness often appears in clear use cases like loan approvals or hiring, where unbiased decisions across sensitive attributes (e.g., gender, age) are critical, real-world ethical concerns can be less straightforward.

Understanding that fairness is vital for sustainable business outcomes requires it to be framed appropriately within the business context. For example, in a call-centre project, our initial focus on short-term optimisation led to experienced operators handling more calls, creating an imbalance. By reframing the challenge as an opportunity to train operators through balanced call assignments, we achieved a fairer workload distribution. This not only addresses fairness but also fosters long-term workforce sustainability and improves operator well-being.
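The reframed call-assignment policy described above can be sketched as a simple routing rule: send each call to the least-loaded operator, breaking ties toward less experienced operators so they accumulate practice. The operator attributes and tie-breaking rule are illustrative assumptions, not the project's actual system.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    experience: int   # years of experience
    handled: int = 0  # calls assigned so far today

def assign_call(operators):
    """Route to the lightest load; on ties, prefer the junior operator
    so training volume is distributed rather than concentrated."""
    chosen = min(operators, key=lambda op: (op.handled, op.experience))
    chosen.handled += 1
    return chosen

team = [Operator("Ana", experience=10), Operator("Rui", experience=1)]
for _ in range(6):
    assign_call(team)
print([(op.name, op.handled) for op in team])  # balanced: 3 calls each
```

A purely short-term-optimal policy would route every call to Ana; this rule trades a little immediate efficiency for balanced workloads and on-the-job training.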

3. Fostering Team Collaboration for RAI:

RAI initiatives must extend beyond data science and AI teams to include legal, compliance, and corporate social responsibility (CSR) departments. Engaging these teams ensures balance between AI projects and broader company policies, creating opportunities to enhance processes toward RAI-centric practices.

For instance, during project initiation, we carry out a thorough risk assessment in collaboration with the compliance team. This assessment can be improved by integrating RAI-related technical knowledge. A key question to consider is, “How do you justify the model's complexity for the specific use case?”

While this question may seem straightforward, a lack of understanding regarding glassbox and blackbox models can lead to vague justifications. This may result in deploying overly complex models, wasting time on post-hoc explanations that can be misleading[20]. By collaboratively addressing these questions, teams foster shared knowledge and responsibility, ultimately improving RAI outcomes.
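One way to make the complexity question concrete is to fit a glass-box baseline first and accept a black-box model only if it clearly outperforms it. This is a minimal sketch assuming scikit-learn is available, a synthetic dataset, and an arbitrary two-point accuracy threshold; none of these are a prescribed policy.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular decision problem.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glassbox = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
blackbox = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc_glass = glassbox.score(X_te, y_te)
acc_black = blackbox.score(X_te, y_te)

# Require a material gain (here 2 points, an example threshold) before
# paying the interpretability cost of the black-box model.
gain = acc_black - acc_glass
choice = "blackbox" if gain > 0.02 else "glassbox"
print(f"glassbox={acc_glass:.3f} blackbox={acc_black:.3f} -> {choice}")
```

Recording this comparison during the risk assessment gives the compliance team a concrete answer to the complexity question instead of a vague justification.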

As managers, we must bridge the gap between technical teams and senior leadership, advocating for resources and commitment to RAI. Ethical AI is not just a “nice-to-have”—it’s essential for reducing risk, improving trust, and aligning with long-term business goals.

By demonstrating how RAI practices like fairness and transparency drive both ethical outcomes and business success, we ensure AI systems benefit not just the company but also society. Leading with this mindset positions our organisations as RAI leaders, ensuring sustainable growth and positive societal impact.

References: