By
Sergio Carvajal-Leoni – Ph.D. Student, Texas State University; Erasmus+ Student, University of Padova

Turns out the cloud is not made of clouds.
It is made of plastic and metal.
Of rare earth materials pulled from the skin of the planet by children with no childhood.
It is a sleepless machine, electric and hungry,
demanding more power than we can ethically generate,
just to make sense of the traces we leave behind.
This image is one of many I created for the ETHTECH project, an interuniversity initiative that seeks to move beyond the shallow, binary debates that currently define AI discourse in higher education. Instead of asking, “Is AI good or bad?”, ETHTECH poses deeper questions: Who pays the price for this power? Who benefits from this automation? What kind of humans do we become when our learning is shaped by algorithms we didn’t choose and barely understand?
I admit I am one of those who find the subject of Artificial Intelligence extremely annoying, especially within higher education. Even though this computational technology has been developing for decades, we’ve gone from hardly mentioning AI to hearing about it constantly, perhaps far more than we should. The conversation tends to be simplistically polarized. On one side are those who embrace it eagerly and push for widespread adoption without much thought. On the other are those who detest it, counting the days to retirement with no intention of engaging. Having to choose between these extremes is counterproductive. Yet this is often the state of discourse on university campuses, particularly in the United States, a dynamic that risks erasing the agency we still have to shape our technological futures.
Fortunately, projects like ETHTECH exist to redirect the conversation. Getting to know this initiative has been a breath of fresh air in my ongoing efforts to understand the impact AI is having and could have on higher education. Funded by the European Union and led by Dr. Juliana Raffaghelli from the University of Padova, ETHTECH focuses on the ethics of AI use. It invites participants to look critically at this technology, without shying away from it, and without embracing it blindly.
I attended one of ETHTECH’s workshops at the Rovigo campus of the University of Padova in the summer of 2025. The experience was interactive and designed to bring together people from diverse backgrounds and areas of expertise. We worked through real-life scenarios in which AI is already being used in adult education, identifying the ethical tensions that emerge. The project is grounded in a solid theoretical framework and draws from the 2022 European Union guidelines on the ethical use of AI and data in education. It also uses active learning to encourage participants to examine their assumptions and better understand the multiple layers involved in our engagement with AI. The goal is to support better-informed, ethically grounded decisions moving forward.
As someone who maintains a critical but hands-on relationship with AI, discovering ETHTECH was a game changer. One of the most impactful aspects of the experience was being introduced to the concept of post-digital positionality. That term alone helped me articulate ideas I had been struggling with, offering a powerful way to locate myself and my work within this emerging reality. As an artist, I also appreciated the interdisciplinary nature of the project, which allows space for voices beyond academic texts and recognizes media and artistic production as valuable educational tools. I submitted a series of AI-generated artworks designed to help instructors prompt ethical discussions in the classroom. One of those images (the cloud not made of vapor but of circuitry and electricity) was meant to provoke reflection on the environmental and human costs of the infrastructure behind so-called cloud computing.
ETHTECH is the kind of initiative academia urgently needs, one that treats AI not just as a technical innovation but as an ethical, cultural, and pedagogical issue. Its strength lies in its interdisciplinary structure and its commitment to active engagement. I plan to carry many of the lessons I learned back to Texas, where I currently reside and work. In the United States, AI adoption is often treated as inevitable. But in universities, where critical thinking is still (at least theoretically) a core mission, we need to pause. We need to ask better questions. AI adoption, especially in educational contexts, must be rooted in ethical reflection. Otherwise, we surrender control of our futures to systems we did not build and do not fully understand.
Some of the most important questions ETHTECH raises center on data. AI systems do not function in a vacuum. They are powered by data, our data. Our thoughts, ideas, creative works, conversations, and actions are the raw material for training large language models. Understanding where that data comes from, how it is collected, who owns it, how it is stored, and what environmental toll it takes to process it is not a side issue. These questions are foundational. They are the pillars of ETHTECH’s mission and part of what makes the project feel so necessary right now.
As I wrapped up my two-month Erasmus+ research visit at the University of Padova, I left feeling energized, challenged, and motivated to help shape the future of AI in education. I owe that to a remarkable group of scholars and visionaries who are using the ETHTECH platform to help others see beyond the techno-hype and toward a more just, critical, and human future. This is not about trendy terms or clever gimmicks. It is about making sure that technologies, however powerful, serve human dignity, and not the other way around.
AI has to make our lives better. Otherwise, why are we using it?