Have you ever tried a new app only after the hype died down? That was me with AI. While my LinkedIn feed was flooded with "life-changing ChatGPT prompts" and "10 ways AI will make you rich" for months on end, I kept scrolling. It was just too much, and I hoped people would talk about something else for a change.
But earlier this year, when our studio started taking on bigger communication projects, I finally caved. Turns out, AI tools are not that bad. Let me share my experiences with the big three AI assistants and how I actually use them in my work. No hype.
Which one?
What’s the difference between all the AI assistants you keep hearing about? Good question, and I was wondering the same thing. After testing the main contenders, here are my two cents.
Claude (my favourite). Quick to understand, structured in its responses, and transparent about its process—it even shows you the code behind its generations. Bonus: they just added data analysis and visualisation features that I'm curious to try.
ChatGPT. Decent all-rounder. Great at multilingual conversations and can mirror your writing style for edits. However, it sometimes stumbles on seemingly simple tasks (like generating city coordinates—more on that in a minute).
Gemini. I had high hopes for it as it comes free with the Google enterprise account, but it's been... frustrating. It often provides vague answers and tons of unsolicited suggestions. Can’t say I recommend it.
What for?
Now that we've picked a tool, is AI ready to replace data storytelling folks? I don’t think so. But it can be a good (and cheap) assistant for some of your daily tasks.
Copy editing (not writing). I don't let AI write anything for me from scratch—that usually results in vague, overly pompous content. Instead, I use it for:
Tightening up existing text
Finding alternative phrasings
Brainstorming different tones
Getting quick feedback on clarity
If you try that, always make sure to re-read the output. AI suggests, you decide.
Code debugging. When our D3.js developer was off last month, AI helped me make minor tweaks to an interactive dataviz project. But there's a catch: while it's great for spotting simple bugs and suggesting fixes, some problems still need a human touch. When you start going in circles with AI, it's probably time to call a real developer.
Basic data tasks. AI can speed up simple data generation and analysis. Example: for this recent project (launched just two days ago—will talk more about process soon!), we used it to generate random global city coordinates for a globe visualisation. Though we ended up fine-tuning the locations manually, it gave us a solid starting point. But it's not perfect: when we asked ChatGPT to remove coordinates that fell in oceans, it couldn't handle it. 🤷🏻‍♀️
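If you want to reproduce that kind of starting point yourself, here's a minimal sketch of the "random global coordinates" step (the function name, seed, and rounding are my own illustrative choices, not what we actually used):

```python
import math
import random

def random_global_points(n, seed=42):
    """Return n random (lat, lon) pairs spread roughly evenly over the globe.

    Sampling latitude uniformly in [-90, 90] would crowd points toward
    the poles; taking asin of a uniform value in [-1, 1] corrects that.
    """
    rng = random.Random(seed)  # seeded so the "random" cities are reproducible
    points = []
    for _ in range(n):
        lat = math.degrees(math.asin(rng.uniform(-1.0, 1.0)))
        lon = rng.uniform(-180.0, 180.0)
        points.append((round(lat, 4), round(lon, 4)))
    return points

print(random_global_points(3))
```

Note that the ocean problem is a different beast: deciding whether a point falls on land genuinely requires a land-polygon dataset (Natural Earth is a common choice), which is exactly the part a chat model can't do reliably from memory.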

So that’s my journey with AI so far—hope it answers some of your questions if you’re still debating how to use it. I’ll report back once I try Claude’s new dataviz functionalities!
Thank you for reading The Plot.
See you in two weeks,
—Evelina
P.S. Something I've noticed on the topic of AI: we all have different ways of talking to it. Some people go straight-to-business with commands (after all, it is a machine), while others—like me—throw in "good mornings" and "thank yous" as if chatting with a colleague. So I'm curious: are you team "execute task" or team "friendly chat"?
Learn data storytelling. In person.
I’m leading several training sessions before the year ends, but only one is open to the public. Join us in Utrecht this November for two days of data storytelling fun! With my online trainings on hold for now, this could be your last chance to attend a public workshop for a while.
Let me build upon your notes by sharing my experience. I started with Generative AI about 2.5 years ago—doing the homework, so to speak, by studying LLMs and learning Prompt Engineering. In truth, since these were tools released as public betas, there was little to study and much to experiment with. Just consider how prompt engineering is going to become obsolete (?) now that new models from OpenAI or Perplexity autonomously apply the "chain of thought" without our instruction. So, I'll attempt to report my experience in bullet points (for work, I create content and handle communication, so my experience is less tied to development and more to storytelling—occasionally data storytelling as well).
1. RESEARCH So far, more frustration than enlightenment. In the past, PERPLEXITY.AI gave me factually incorrect or crude answers. However, even just to initiate a search—to be rigorously continued by other means—I'd say generative search should be kept on the radar, awaiting whoever will make it take the real leap forward.
2. COPY EDITING Now indispensable, both with the LLMs included in my Notion notebook—currently Claude and ChatGPT—and with Gemini (less so). I tidy up notes, adjust style, check spelling and grammar, and, like you, use it for brainstorming, especially around formats. For example, when I ask ChatGPT to help convert an article into a format compatible with a narrative podcast, providing detailed specifications of the desired result, the output is solid (far more so than what Gemini produces today, even with its brand-new NotebookLM app).
3. TRANSLATION Where I've seen the greatest progress, and where, in truth, we could see the light at the end of the tunnel even five years ago. In Germany, where I live, I was among the first users of DeepL, which has a proprietary model and was built vertically around complex knowledge domains. So, in addition to the triad of ChatGPT, Claude, and (to a lesser extent) Gemini, DeepL is part of my daily routine. Here, however, a different approach is needed for each language: the so-called minor languages (that is, less widely spoken ones) have clearly had to rely on local models to achieve valid results — I'm thinking of Hebrew, which I'm familiarizing myself with now, and which brings me to the fourth point of my experience.
4. LEARNING Particularly language learning: LLMs, used well and with a grain of salt, can radically change the world of learning and teaching. The examples are nearly infinite for teachers who want to experiment with developing syllabi, exercises, games, and so on. I use ChatGPT in particular as support for my own learning—as a complement to the lessons I receive via Zoom from REAL HUMAN teachers. I've joyfully discarded apps like Duolingo. Why? Because just as I've grown tired of social media's manipulative design — and in fact, I'm here on Substack as a quiet harbor that has restored my pleasure in communicating — I've also grown weary of intrusive apps.
5. I also do AUDIOVISUAL communication. This topic deserves a longer discussion, but I'll be brief: prosumer design tools that have implemented AI, like CANVA, and more specialized ones like RUNWAY, don't reinvent creativity. But for those who need to create decent content in reasonable timeframes, they've put previously unimaginable power in the hands of me and my colleagues. And then, of course, there's the photographic and audio editing side. I have my small list of tools that I use almost daily—e.g., CAPCUT (yes, the one made by ByteDance!) or ELEVENLABS for voices and sounds, ADOBE FIREFLY and Runway for some image creation/editing/motion graphics.
CONCLUSION You're right. NEVER use AI to create from scratch. Never ask a black box to replace your mind or your creativity. But for those who have always had a hands-on approach and love to produce as much of their own work as possible, generative intelligence is truly a superpower. Using it to create relevant and valuable content, or junk... well, it depends, as always, on us and our free will. I hope these notes help and are the beginning of an illuminating thread.
Thanks, Evelina! Maybe I'm wrong to be so late, but I'm still in the antechamber of generative AI. Hype always scares me (the tech doesn't). For data work, I'm not really sure it will help me save time: generative AI outputs seem to require too much fine-tuning. For the moment, I prefer to devote that time to adjusting my Python scripts.