In the Author Spotlight series, TDS Editors chat with members of our community about their career path in data science and AI, their writing, and their sources of inspiration. Today, we’re thrilled to share our conversation with Sabrine Bendimerad.
Sabrine is an applied math engineer who has spent the last 10 years working as a Senior AI Engineer, managing projects from the very first idea all the way to production.
Her journey has taken her through very different worlds, from analyzing satellite images for big European utility companies to her current role as a researcher in medical imaging at Neurospin. Today, she works on brain images to help stroke patients recover.
Sabrine is also a mentor and the founder of Dataiilearn. She loves to write not only about code, but also about how to build a real career and how to ensure data science projects reach the final stage where they have real impact.
A few months ago, you tackled an urgent question facing data professionals today: “Is it still worth it?” Why did you decide to address it, and has your position evolved in the meantime?
Actually, my article “Data Science in 2026: Is It Still Worth It?” triggered an avalanche of messages on LinkedIn. I expected juniors to be worried about this question, but I was surprised to see that people with years of experience were also questioning the future.
I have been in AI for 10 years now, and it’s true that in the beginning, just knowing Python and statistics/math made you a unicorn. Today, the market is saturated with new data scientists, and new tools based on AI agents are taking over the manual, simple tasks we used to do.
So my position is still the same or maybe even stronger today: AI and data science are still worth it, but the “generalist data scientist” is a dying species. To survive, you must evolve beyond just models in a notebook. You need to master deployment, LLMs, RAG, and, most importantly, domain knowledge that helps data interpretability. If we build basic models in a notebook, of course our tasks could be done by agents. The jobs aren’t disappearing; they are just different. You need to build skills that adapt to this new market.
You’ve written quite a lot about careers in data science and AI. How has your own journey shaped the insights you share with your readers?
From the beginning, my journey was never just about the code. I realized early on that solving real-world problems is something you don’t learn in a university or a bootcamp. You learn it by being in the trenches with real teams. In my years working with satellite images for energy and water companies, I learned that to create a real solution, you have to think “end-to-end.” If a model stays in a notebook, it has zero impact. This is why I write so much about MLOps — how to manage, deploy, and monitor models in production.
Moving into the medical area added a new layer to my thinking. In the utility sector, if you make a mistake, you handle financial loss. But in medical imaging, you handle human lives. This shift taught me that AI can generate code, but it cannot understand the weight of a human decision. This is exactly why I’ve started to write about things like RAG, LLMs, and their impact. It’s not just a trendy topic for me; it’s about how difficult it is to make these tools reliable enough for a human to trust them 100%.
My insights come from this bridge: I have the industrial background of building for production, but I also have the research background where the methodology must be perfect. I write to share these technical skills, but also to help people navigate their own journeys. I want to show them the possibilities they have in this field, how to manage their path, and how to handle complex projects. I want my readers to see that a career in data is not always a straight line, and that’s okay.
What are the most noticeable differences you observe between starting out now compared to your own early years in the field? How different is the playbook for early-career practitioners these days?
The game has been totally rewritten. When I started, we were builders, and we spent weeks just cleaning data and setting up servers. Today, you have to be an AI Orchestrator. You can build a system in days that used to take months. I wouldn’t say it’s more difficult now, but it is definitely difficult if you try to start a career using the trendy skills from 10 years ago.
Juniors today have so many options to get ready for the market. We have a goldmine of information on YouTube and on blogs. The real challenge now is filtering out the garbage. The ones who survive are those who monitor and understand the market to adapt quickly. Of course, you need to understand the theoretical side of AI, but the real skill today is flexibility.
It is not a good idea to aim to be an expert in only one specific tool. Ten years ago, we were talking about switching from R to Python or from statistics to deep learning. Today, we are talking about switching to generative AI and agents. The foundations stay the same, but you need the flexibility to understand a new trend quickly, implement it, and meet your stakeholders’ needs. Flexibility has always been the “secret” skill of a data scientist, whether 10 years ago or today.
Your articles usually balance high-level information with hands-on insights. What do you hope your audience gains from reading your work?
When I write, I always keep in mind that I am sharing experiences to help people build their own expertise. For example, when I write about MLOps, I try to bridge the gap between the big picture of production and the practical technical steps needed to get there. I still hesitate every time I start a new article! Usually, I discuss topics with my students or colleagues to see what interests them, and then I link that to what I see myself in the industry. My goal is for the reader to walk away with practical guidelines, not just a concept.
I try to reach different audiences depending on the topic. Sometimes it is a very technical article, like how to deploy a model in the cloud using Docker and FastAPI, and other times it is a “big picture” piece explaining what “production” actually means for a business. I find it harder today to write only about specific tools, because they evolve so quickly. Instead, I try to share feedback on the things that slowed me down, or the real challenges I faced in implementing a specific project (like my article about RAG systems). I want my audience to learn from my mistakes so they can go faster.
In your own professional life, what impact has the rise of LLMs and agentic AI had? Do you sense the trend has been positive, negative, or something more nuanced?
In my day-to-day, I use LLMs as an experienced colleague, someone to brainstorm with or to quickly prototype and debug a script. With the rise of agent deployments, I have also started to use vibe coding and automation for basic tasks, but for deep research I am much more guarded. I currently work with medical data, where there is literally zero space for error. I might use AI to reshape a thought or refine my methodology, but for the complex tasks, I have to keep full control of my code.
I’m not against the use of LLMs and agentic AI, but if you let the AI do all the thinking, you lose your intuition. For example, when I’m working with brain imaging, I have to be annoyingly manual with my core logic, because an LLM doesn’t understand the pathology I am trying to predict. Every brain is different; human anatomy changes from one subject to another. An AI agent sees a pattern, but it doesn’t understand the “why” of the disease.
I also see the impact of AI agents on the work of my interns. AI agents are a huge boost for their productivity, but they can be a disaster for human learning. They can generate in an afternoon a mountain of code that used to take months, and it’s hard to master a topic if you never make the mistakes that force you to understand the system. We must keep the human at the center of the logic, or we’re just building black boxes we don’t actually control.
Finally, what developments in the field are you hoping to see in the next year or so, and what topics do you hope to cover next in your writing?
I would really like to see the conversation shift away from constantly chasing new tools, and move toward better science and more meaningful applications of AI.
We’re in a phase where new tools, frameworks, and models are emerging very quickly. While that’s exciting, I think what’s often missing is transparency and a deeper focus on impact. I’d like to see more work that not only augments human productivity, but also contributes to areas like healthcare, education, and accessibility in a tangible way.
Of course, LLMs and agentic AI will continue to evolve, and I’m very interested in exploring what that actually means in practice. Beyond the hype, I’d like to better understand and write about questions like:
- Are these tools truly changing how we think, or just how fast we execute?
- Do they genuinely improve the quality of our work?
- What kind of impact do they have across different fields?
In my upcoming writing, I’d like to focus more on these reflections, combining technical perspectives with a deeper look at how AI is shaping not just our tools, but our way of working and thinking.
To learn more about Sabrine’s work and stay up-to-date with her latest articles, you can follow her on TDS.
Parts of this Q&A were edited for length and clarity.