Humor is often overlooked as a tool in technology design, yet I believe its potential to transform user experiences is immense. From fostering adoption by breaking down barriers of intimidation, to building trust through authentic, lighthearted interactions, and even easing stress in high-pressure scenarios, humor and gamification offer unique avenues for crafting emotionally intelligent systems. Let’s delve into how these elements can bridge the gap between cold technology and warm human connection.
Published: February 13, 2025
Recently, I posted on LinkedIn congratulating the French government—and, by extension, the EU—for making a strong bet on AI. You can read my full thoughts there.
For this blog entry, I want to focus on a key concern I have with AI agents: the challenge of fostering meaningful, empathetic interactions.
An AI agent is basically a piece of software that makes decisions or performs tasks for us based on the data it observes or learns from.
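To make that definition concrete, here is a minimal sketch of the observe-decide-act loop I have in mind. The `Thermostat` example and its method names are my own illustration, not any particular framework's API:

```python
# A toy agent: it observes the world (a temperature reading) and decides
# what to do about it. Real agents observe text, images, or API responses,
# but the observe -> decide -> act loop is the same idea.
from dataclasses import dataclass


@dataclass
class Thermostat:
    target_celsius: float = 21.0

    def observe(self, sensor_reading: float) -> float:
        # Gather data from the environment.
        return sensor_reading

    def decide(self, observation: float) -> str:
        # The "policy": map what was observed to an action on our behalf.
        if observation < self.target_celsius - 0.5:
            return "heat_on"
        if observation > self.target_celsius + 0.5:
            return "heat_off"
        return "do_nothing"


agent = Thermostat()
for reading in [18.0, 21.2, 23.5]:
    print(f"{reading} C -> {agent.decide(agent.observe(reading))}")
```

Everything interesting about agents, including the empathy question below, lives in that `decide` step.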
While AI agents cannot truly ‘feel’ empathy, many also come across as overly dry and lifeless, which can limit genuine engagement. My work—and that of others studying human–AI interaction—aims to develop strategies that make AI responses more relatable and supportive, without blurring the line between real empathy and simulated understanding. By refining language models, user experience design, and ethical considerations, we can encourage more human-friendly AI interactions while remaining transparent about what these systems can—and cannot—truly feel.
Hugging Face is offering a course on AI agents, and I’ve signed up. So far, I’m really enjoying it. I appreciate their approach—their culture shines through both in their media presence and in the way they engage during the course. It’s a fantastic opportunity for anyone looking to get started in AI.
I’ll share more insights as I work through the modules.
Published: January 17, 2025
The year has started strong, and I have been consuming books, podcasts, and documentaries, taking advantage of the holiday break.
One conversation that I found particularly interesting is Jordan Peterson's discussion with Marc Andreessen of Andreessen Horowitz (EP. 515). In it, Andreessen makes the following point:
“The single biggest fight is going to be: what are the values of the AIs? … What will the AIs tell you when you ask them anything that involves values, social organization, politics, philosophy, or religion?”
Spot on! If you’ve been following my work, you’ll know that this is precisely why I’m getting involved—to influence the development of AI toward a more humanistic and human-centered approach.
This conversation is well worth a listen.
Published: January 1, 2025
Happy New Year 2025! I’m excited to dive deeper into research, continue learning and discovering, and share my findings with all of you. This year, I’m also looking forward to expanding my network and connecting with inspiring individuals—outliers, innovators, dreamers, and the courageous minds shaping the future. To keep my imagination thriving, I’ve added several sci-fi books to my reading list for the new year. Here’s to a year of curiosity, creativity, and growth!
Published: December 23, 2024
The Future of AI: Reflections on Trust from the ADIA Lab Symposium 2024
Attending the ADIA Lab Symposium 2024 in Abu Dhabi offered profound insights into how artificial intelligence is shaping our ability to address some of humanity’s most pressing challenges. From navigating complex market dynamics to fostering sustainable decision-making, the symposium emphasized the importance of balancing technological advancements with ethical priorities.
As I reflect on these discussions, one concept stood out to me as foundational: trust. Trust is more than a technical challenge; it is a human necessity. Professor Shafi Goldwasser’s presentation, Constructing and Deconstructing Trust, delved into the critical role trust plays in AI systems. Her exploration highlighted both the fragility and the potential of these systems, asking: Can we trust the tools we create to operate reliably in real-world conditions?
Trust and Its Dimensions
The notion of trust in AI, as explored during the ADIA Lab Symposium 2024, operates on multiple levels:
• System Resilience: AI must perform reliably, even under adversarial conditions.
• Ethical Frameworks: Fairness, transparency, and accountability are paramount in cultivating trust.
• Human Perception: For AI to be adopted widely, users must feel confident in the systems they engage with.
Why Trust Matters
Building trust in AI systems is not just about mitigating risks; it’s about ensuring these technologies genuinely serve humanity. As Goldwasser discussed, trust is a dynamic process, not a static state. It must be earned and continually reinforced through thoughtful design and robust safeguards.
Trust and Humor
Humor isn’t just entertainment—it’s a psychological tool that can subtly build trust and shape perceptions, especially when applied to emerging technologies like AI and robotics. Here’s how:
The Mere-Exposure Effect
The mere-exposure effect is a psychological phenomenon where repeated exposure to something makes us more likely to accept it. Companies like Boston Dynamics leverage this effect by showcasing robots in lighthearted, humorous scenarios, such as their holiday videos featuring robots dressed as Santa or dancing to pop songs. These portrayals create a sense of familiarity and friendliness, subtly easing public fears about robotic integration in everyday life.
Fun Fact: Studies have shown that people rate unfamiliar faces as more trustworthy after repeated exposure, even without direct interaction—a principle that extends to AI and robotics.
Humor and Perceived Intent
Humor signals benign intent. When a robot or AI interacts with humor, it conveys a sense of approachability, reducing the “uncanny valley” effect where machines feel unsettlingly human-like.
Behavioral Modification Through Humor
Humor can also subtly influence behavior. Positive reinforcement through humorous interactions encourages users to engage more frequently with AI systems, fostering trust over time. For instance, AI chatbots with humorous personalities turn mundane tasks into enjoyable exchanges, making users more likely to return.
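As a hedged illustration of what such a personality might look like in practice, here is a small sketch. The `build_system_prompt` helper and the prompt wording are hypothetical, not any real product's configuration:

```python
# Hypothetical sketch: configuring a chatbot persona with a light touch of
# humor. The helper and prompt text are illustrative assumptions, not a
# specific framework's API.
def build_system_prompt(task: str, humor: bool = True) -> str:
    base = f"You are an assistant helping the user to {task}."
    if humor:
        base += (
            " Keep the tone light: an occasional gentle joke is welcome,"
            " but never at the user's expense, and never when the user"
            " seems stressed or is reporting a serious problem."
        )
    return base


print(build_system_prompt("file an expense report"))
```

Note that the guardrails in the prompt matter as much as the joke itself, which leads directly to the next point.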
Ethical Implications
While humor can build trust, it’s important to question its intent. If used to mask deeper ethical concerns—like data privacy or biases in AI systems—it risks becoming manipulative. Transparency about how humor is employed is key to maintaining genuine trust.
At a conference dominated by technical rigor, my focus on humor and gamification made me stand out, reaffirming my belief that AI isn’t just about data—it’s about people. I remain skeptical of building ‘human-like’ AI; instead, I champion human-complementary AI, one that amplifies our creativity, empathy, and emotional intelligence rather than attempting to replicate them. As I learn more, I remain open to where these discussions will lead.
Published: December 2, 2024
Framing Intelligence: Lessons from Homo Deus
As I read Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari, several questions keep surfacing.
Viewing AI through a human lens risks missing its intrinsic potential: its ability to form unique bonds with humans, much as animals or insects do, while retaining its own identity. But will AI ever truly be human? Perhaps the real question is whether AI could fool us into treating it as human. While this may one day be possible, I find myself reluctant to want AI to be human.
Instead, I envision AI as an extension of humanity, something we can bring into our lives and disconnect from with equal ease. For AI to integrate seamlessly with humanity, it would need to function as we do: flawed, vulnerable, and open to risk. But this raises critical concerns. Could AI become a kind of armor, shielding us from the world but stifling our ability to learn, grow, and make mistakes? Consider children: when should they have access to AI? Should we wait until they’ve developed a sense of independence?
This brings me to a broader point: intelligence is not, and should not be, confined to human terms. By framing ‘algorithms’ as the decision-making processes shared by humans, animals, insects, and AI, we can foster a more nuanced understanding of intelligence without reducing AI to an imitation of humanity.
For instance, animals exhibit intelligence that isn’t human but is no less remarkable. Bees navigate using celestial cues, and jumping spiders demonstrate striking problem-solving abilities. These forms of intelligence are deeply functional, but they don’t compete with human intellect; they complement it. Similarly, AI doesn’t need to replicate human thought or emotions to be valuable.
Instead of asking what AI can become, perhaps we should ask: what do we bring to AI? In a future where AI may surpass us in speed and logic, it’s our creativity, empathy, and emotional intelligence that remain irreplaceable. These human qualities are not liabilities; they are our contributions to a partnership with technology.
This framing matters because it sets the stage for how we develop and integrate AI into our lives. As Harari suggests, technology shapes civilizations, but it’s our responsibility to shape technology. By embedding humor, play, and human-centered design into AI, we can ensure that it uplifts rather than isolates us, fostering connection and resilience in a world of rapid change.
In the coming months, I’ll delve deeper into these ideas through Project Jolly Ride, an exploration of how humor and gamification can transform driver assistance systems into tools that are not just functional but also profoundly human-centered. This is my starting point, but the questions raised here will remain central to my journey.
References
Harari, Y. N. (2017). Homo Deus: A Brief History of Tomorrow. Harper Perennial.