(1) INESC TEC & Faculty of Engineering of the University of Porto
(2) INESC TEC
Artificial Intelligence (AI) is a constantly evolving scientific domain, currently going through a phase of rapid growth and development, and it is hard to predict what the future holds for this kind of technology. What is clear is that AI is changing our lives in significant ways, shifting the way we work, live, and make decisions.
A future with responsible AI must be collaborative: it will require close cooperation between states, companies, and society. In the coming years, AI is set to have an ever-increasing impact on our lives, in areas such as healthcare, education, work, and mobility.
Looking back, remarkable technological advances have repeatedly helped address the pressing issues of their time. Today, we face issues with major social impact: climate change and global warming, population growth and food production, desertification and water control. AI is contributing to, and will play a key role in, developing sustainable solutions to these problems. The changes associated with AI are likely to be far more impactful than those of any previous technological revolution in human history.
Depending on the direction this revolution takes, AI will strengthen our ability to make more informed choices or reduce human autonomy; it will create new forms of human activity or make certain jobs redundant; it will help ensure well-being for the many or increase the concentration of power and wealth in the hands of a few; it will expand democracy in our societies or endanger it. Understanding and addressing these tensions is of particular importance at a time when Europe perceives AI as a window of opportunity to overcome innovation and productivity gaps, while preserving the objectives of equity and cohesion as outlined in the Draghi report.
The choices we face today relate to fundamental ethical questions about the impact of AI on society; in particular, how it affects work, social interactions, healthcare, privacy, justice, and safety. The ability to make the right choices requires new solutions to fundamental scientific questions in AI and human-machine interaction, and these are choices that must be made today.
This issue of the magazine provides multiple outlooks on the opportunities and challenges associated with AI, and on how to support more informed choices.
The growing impact of AI on society reinforces the importance of responsible AI, as all the authors point out. Nuno Paiva states that "technology is not ethically neutral; it significantly shapes our values, behaviours, and societal norms." However, opinions differ on how responsible AI should be achieved.
Virginia Dignum notes that "current AI systems, particularly Generative AI (GenAI), are designed to prioritize immediate performance over long-term maintainability and ethical considerations." She adds: "Adopting an ethical relational approach to AI, combined with a structured, engineering-focused compositional paradigm, will lead to the development of AI systems that are not only powerful and efficient but also aligned with human values and societal needs."
Pedro Saleiro states that "there’s no shortage of research on fairness, safety, robustness, explainability, and privacy in AI, but there’s still a big gap between research work and real-world practice." Pedro stresses the need to regulate, test, and evaluate not only the results of AI tools but also the processes that lead to these results. "Testing is the only way to ensure that AI behaves reliably in a range of environments, including edge cases that could present significant risks. A good analogy is to look at mission-critical industries like aerospace or nuclear energy, where failure is not an option. In these sectors, thorough testing is built into every stage of the development process, from initial design to final implementation. AI should be no different." The European Commission's recent initiatives to regulate AI (namely the AI Act), while broadly positive, carry certain risks in terms of effective implementation, especially if they focus "more on legalistic compliance - producing mountains of paperwork rather than ensuring that AI systems are thoroughly and comprehensively tested".
Nuno Paiva emphasises that responsible AI should be grounded in ethical considerations aligned with society's values and legal norms. Drawing on existing research, he proposes a framework for "Trustworthiness and Human-Centred Design". According to Nuno, "trust is built when users perceive the system as fair, transparent, and reliable." These ethical considerations should not be perceived as a burden, or as a mere need for ‘compliance’. Encouragingly, he points to a positive relationship between performance and the adoption of responsible AI practices by leading AI companies.
Several articles in this issue address the transformative opportunities of AI. Diana Viegas and Nuno Cruz illustrate the multiple ways AI can expand our capacity to explore the deep sea, including real-time monitoring and autonomous operation, allowing robotic systems to stay underwater for longer periods and reach greater depths. In the power and energy domain, Ricardo Bessa highlights the potential of AI for the energy transition, by supporting decision-making in energy systems with a high share of renewables, where flexibility is key, and by optimising the operation of energy communities and electric vehicle charging.
Moreover, harnessing the potential of AI requires tailored strategies. Pedro Amorim and Gonçalo Figueira propose five pillars for organisations to effectively turn AI's potential into reality: matching tasks with the appropriate AI tools; exploring different AI approaches; using explainable AI methods; enabling different modes of interaction with humans; and ensuring AI literacy within the organisation.
Based on his experience using ChatGPT in teaching, José Nuno Oliveira highlights the need to develop critical thinking in the use of Large Language Models (LLMs), and the importance of these skills in the jobs of the future. Finally, António Batista and António Lucas Soares draw attention to the need to consider not only the benefits but also the negative effects of AI, namely its global energy consumption and climate impact, as well as its impact on the workforce and the associated social pressures.
Again, these tensions require reflection and action, and an ability to make the right choices by weighing different perspectives. As João Claro and Arlindo Oliveira note, the opportunities and challenges of AI require leadership that balances innovation and responsibility, going beyond efficiency and competitiveness to build a vision focused on societal challenges and grounded in ethical and collaborative principles. This leadership is fundamental to making the choices needed to foster an AI built on European values, so that it becomes a key driver of innovation and competitiveness.