
Published 2025-01-20
Copyright (c) 2024 INESC TEC Science&Society

This work is licensed under a Creative Commons Attribution 4.0 International License.
Abstract
AI is advancing at an incredible pace, and it is still accelerating, backed by trillions of dollars in investment. It is reshaping industries and even the way we work and live. However, as AI becomes more influential, the importance of Responsible AI becomes more obvious.
As researchers, engineers, and industry leaders, we need to ensure our AI systems are not just theoretically sound: they must prove trustworthy in everyday use. There is no shortage of research on fairness, safety, robustness, explainability, and privacy in AI, yet a wide gap remains between that research and real-world practice. With the EU AI Act on the horizon and ongoing research in this field, we must ask ourselves: are we truly prepared to implement these principles effectively?