
Let me start with a promise: this is not another article about how to use AI in your projects.
Honestly, I’m still waiting to see a real, embedded case study that goes beyond theory.
There’s no shortage of conversations about what AI could do — but far fewer practical examples of what it actually does in daily project life.
So no, this isn’t another “AI in Project Management” article.
And here’s my second confession: I have mixed feelings about calling it Artificial Intelligence.
Not because I’m afraid of it — quite the opposite — I fully embrace this revolution.
But calling it “intelligent” gives it a little too much credit, too early.
Yesterday, for example, I found myself arguing with Steevena — yes, that’s what I named my “Innovation Assistant” — because it didn’t know that Pope Francis had passed away or what happens at the Davos World Economic Forum.

Words matter. And for me, the word intelligence implies something human:
the ability to manifest intentions and to distinguish right from wrong.
When (and if) machines reach that point, it won’t be a milestone for progress — it will mark the beginning of the end of humankind.
But that’s another discussion, not the topic of this article.
AI — or more precisely, Generative AI — has become the unavoidable conversation of our time. Everywhere we look — on LinkedIn, in articles, in videos — we’re surrounded by new tools, bold promises, and endless debates. Everyone talks about it, many claim to use it. Few can show real implementations in their daily work.
AI can make our jobs faster and, in some cases, even more efficient, but it also brings new challenges. We now live in a world where it’s increasingly difficult to see where human effort ends and machine output begins.
And in that blur between man and machine, one truth stands out clearly:
transparency is no longer optional — it’s essential.
Being open about how we use AI is not a matter of compliance. It’s a matter of professional respect — respect for our readers, our clients, our colleagues, and our profession.
I’m a big admirer of Ricardo Vargas and Antonio Nieto-Rodriguez, two pioneers of modern project management. While listening to one of Ricardo’s five-minute podcasts about the World Economic Forum in Davos, something caught my attention. He mentioned that misinformation and disinformation, powered by AI, had been identified by world leaders as the top global risk. That made me curious — so I dug deeper.
The Global Risks Report 2025 confirmed it: AI-generated fake content has become the greatest short-term threat to social trust and stability.

With deepfake videos, synthetic voices, and machine-written articles flooding our feeds, truth itself is becoming negotiable. And that’s not only a media issue — it’s a professional challenge.
As project managers, our success depends on our capacity to deliver value through our projects.
But that value cannot exist without authenticity.
Our ability to build genuine relationships remains at the core of our profession.
Yet many professionals risk creating a “doped” version of themselves online — one that looks impressive for a moment but quickly backfires.
Because credibility, once lost, is hard to regain.
At the end of the day, our influence doesn’t come from how we appear, but from how we act with honesty and consistency.
Project managers already operate at the crossroads of technology, people, and process.
That unique position makes us the ideal professionals to lead by example in the age of AI.

Just as we manage scope, risk, and change, we must now learn to manage AI ethics and transparency — not because someone tells us to, but because our leadership credibility depends on it.

If we want to earn the trust of stakeholders, sponsors, and teams, we must show that technology doesn't replace judgment — it enhances responsibility. When we show integrity in how we use AI, we set a new standard for others to follow.

To make this principle practical, I propose the AI Engagement Transparency Matrix. It helps professionals clearly document which parts of their work are performed by humans and which are supported by AI.
Here is an example of the AI Engagement Transparency Matrix I used during my latest keynote speech.
| Area of Application | Performed By |
| --- | --- |
| Literature Research | Author |
| Literature Processing | Author |
| Reference Formatting | ChatGPT |
| Concept Clarification | Author |
| Text Editing | Author |
| Text Refinement | ChatGPT |
| Visual Aids (via SORA) | ChatGPT |
| Fact and Citation Verification | Author |
This table is not about compliance or self-defense — it’s about trust. It’s a transparent statement that says: “I use AI, but I remain fully accountable for my work.”
Including this matrix at the end of an article, a report, or a presentation builds credibility.
It invites others to do the same and raises the overall ethical standard of our profession.
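For professionals who publish regularly, the matrix can even be kept as structured data and rendered automatically into each article or report. The following is a minimal Python sketch of that idea; the function and field names are my own illustration, not part of any standard:

```python
# Minimal sketch: an AI Engagement Transparency Matrix kept as
# structured data and rendered as a markdown table. The helper
# name and fields are illustrative, not a standard.

def render_transparency_matrix(entries):
    """Render (area, performed_by) pairs as a markdown table."""
    lines = [
        "| Area of Application | Performed By |",
        "| --- | --- |",
    ]
    for area, performer in entries:
        lines.append(f"| {area} | {performer} |")
    return "\n".join(lines)

# Example entries, mirroring the matrix shown above.
matrix = [
    ("Literature Research", "Author"),
    ("Reference Formatting", "ChatGPT"),
    ("Text Refinement", "ChatGPT"),
    ("Fact and Citation Verification", "Author"),
]

print(render_transparency_matrix(matrix))

# A quick view of which areas were AI-supported.
ai_rows = [area for area, performer in matrix if "ChatGPT" in performer]
```

Keeping the matrix as data rather than a hand-edited table makes it easy to append the same disclosure block to every deliverable, which is exactly the habit the matrix is meant to encourage.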
As project managers, we can integrate this mindset into everyday work — in ways that are practical, measurable, and visible.

Through these small actions, we turn transparency from a personal choice into an organizational habit.
If you’re a project manager, you already have what it takes to lead this transformation.
You understand systems, you manage change, and you value integrity.
By doing this, you’re not just being transparent.
You’re building a culture of trust — one that will define the future of the project management profession.
The next generation of leaders won't be judged by how perfectly they hide AI, but by how openly and ethically they use it.

AI will not replace project managers, because in the end, technology can automate tasks — but only humans can build trust.
Let’s make that our differentiator. Let’s ensure that in a world filled with artificial voices,
ours remains authentic, credible, and human.
AI Engagement Transparency Matrix
| Area of Application | Performed By |
| --- | --- |
| Literature Research | Author |
| Literature Processing | Author |
| Reference Formatting | ChatGPT |
| Editing | Author |
| Refinement | ChatGPT |
| Visual Aids | Author (scripts in ChatGPT) |
| Fact and Citation Checking | Author |
Article Word Cloud

#ProjectAbility #AI #EthicalAI

Hello Paolo, I fully agree with your opinion and your proposal. Your idea of an AI transparency matrix could be really useful for understanding what we are reading and how much trust we should place in the text and the author (assuming they exist :-)).
It also reminds me of the requirements traceability matrix that we should all be familiar with, and of the intention to keep track of where it all began, so that we can have a clear overview of what to take into account.
Thank you Davide, I really appreciate your comment because it highlights that connection: the common principle of traceability is indeed what builds trust. Nevertheless, I like to think of the two as distinct: the RTM aims to ensure completeness and alignment between requirements and deliverables.
The AI Transparency Matrix instead focuses on authorship and accountability — clarifying which parts of a work were generated, supported, or validated by AI vs by a human.
In other words, one traces the “what” (requirements → outputs), while the other traces the “who and how” behind content creation.
Really appreciate you taking the time to stop by and share your thoughts — glad to have your perspective!