TL;DR:

I’ve been seriously using AI tools for about eight months now. I’m measurably more productive - multiple times over. But the question that keeps me up isn’t “am I better?” - it’s “am I better enough?” The tools move faster than anyone can learn them, the landscape shifts weekly, and the only honest benchmark I’ve found is: be better than yesterday. If you’re feeling the same way - you’re not alone, and this post is for you.


Introduction

I want to talk about something that isn’t a vulnerability, a framework, or a conference talk. I want to talk about what the last eight months of using AI have actually felt like - not the polished “10x developer” narrative, but the real thing. The productivity gains, the frustration, the constant feeling of being behind, and the uncomfortable realization that when something goes wrong, it’s almost always my fault.

I’m not writing this as a guide. There’s no “5 steps to AI mastery” here. This is what it actually looks like on a Sunday morning when you’re leading a research team, trying to ship work, and also trying to figure out if you’re squeezing enough out of tools that seem to evolve faster than you can learn them.

If you’re a security professional (or really, anyone in tech) trying to figure out your relationship with AI right now - I get it. Let me share what I’ve learned so far.


The tools and how I actually use them

I started casually about a year ago. Playing around, testing things, seeing what the hype was about. About eight months ago it became serious - integrated into my actual workflow, my team’s workflow, the way we plan, build, and deliver.

I’ve tried all the big ones. ChatGPT, Claude, Claude Code, Cursor, Gemini, NotebookLM, and a handful of smaller tools along the way. Here’s what stuck and why:

ChatGPT is my everyday tool. Brainstorming, quick questions, general tasks. It feels conversational and casual, and for the kind of “think out loud with me” work, it does the job well.

Claude is where I go when I need depth. Deep research, creative planning, bouncing ideas - both technical and non-technical. When I need a debugging buddy or when I need to build a plan of action, Claude is where I land. It also produces the most thorough deep research I’ve seen - it spends the most time, pulls more resources, cross-references them, and queries more before coming back with an answer.

Claude Code and Cursor are my coding tools. Bug fixes, platform improvements, building things from scratch. Claude Code became my main driver for one simple reason: effectiveness. The connection between how I explain things and what it produces just clicks. The technical outcomes are consistently what I need them to be. It’s also what we use at work, so it gets the most daily mileage.

NotebookLM is the most underrated tool in the stack. I use it for learning and organizing research. I’ll take the deep research output from Claude or ChatGPT, pass it into NotebookLM, and create learning materials - audio, visual, or text - to quickly get up to speed on something new or dive deep into a specific niche.

The key realization: there isn’t one tool that does everything best. Each one has a sweet spot. Knowing which tool fits which problem is a skill in itself - and one that took time to develop.


I’m 5x more productive. Is that enough?

Here’s something that troubled me for quite some time:

I can measure it. I track my own tasks, time to deliverables, the amount of work I can fit into a day, a week, a sprint. The numbers are clear - I’m doing significantly more, in less time, at the same or better quality. By any reasonable standard, that’s a win.

So why doesn’t it feel like enough?

Because the moment you realize you’re 3x or 5x more effective, the next thought is: should I be 8x? 10x? Are others getting more out of this than I am? Am I using these tools well enough? Do I even know what I don’t know about them?

There’s no ceiling to compare against. No benchmark that says “congratulations, you’ve arrived.” It’s just an open-ended question with no clear answer. And that can be genuinely uncomfortable.

The partial answer I’ve landed on - and I’ll be honest, it’s still evolving - is this: the goal isn’t a number. The goal is being better than yesterday.

I block time to review what I’ve done, look at what worked, what could be extended, what could be added. I look at what others are doing that helps them and ask myself if any of it is relevant for me and what I do. Not everything is. But some things are, and those compound over time.

It’s not a satisfying answer. But I think it’s the honest one.


The impossible pace

I’ve been in this field for over a decade. I’ve lived through new frameworks, new attack techniques, new compliance requirements, shifts in the industry - the usual. None of it felt like this.

The pace of AI is unlike anything I’ve experienced. You learn something on Sunday. By Thursday you’re only starting to understand it. Then on Friday you see twenty new things that just dropped - things you haven’t even heard of yet, things that might change how you use everything you just learned. So now you need to learn those before you’ve even finished learning what you started with.

It’s not just fast. It’s compounding. Every new thing builds on or changes the thing before it. And the gap between “I just heard about this” and “I understand how to use this effectively” is real.

I don’t have a system for filtering the noise yet. I’m still figuring it out. If someone tells you they have it all figured out, they’re either lying or they’re not paying close enough attention.

My best guess is that this is what it felt like when the internet arrived. A wave so big that nobody could fully grasp it while standing inside it. And just like the internet, the people who will benefit most aren’t the ones who learned everything - they’re the ones who learned the right things and applied them well.

For now, all I can do is stay curious, be honest about what I don’t know, and keep moving.


What it did for my team

I lead the research team at Mitiga. We publish research and present it at cybersecurity conferences. That means tracking conferences, CFP deadlines, submission topics, who submitted what, what got accepted and what didn’t, reviewing each other’s submissions, improving what failed, and - when something gets accepted - preparing decks, booking flights and hotels.

For a long time, each person had their own system. We tried shared Confluence pages, Google Sheets, Excel. All of it was “fine” but far from comfortable or complete.

In a single evening - about five hours - I used Claude to build a full application. Designed it, developed it, deployed it to one of our cloud environments, put everything behind our SSO.

The app has everything we needed: personal tracking per researcher, auto-updating conference details (locations, dates, CFP windows), LLM-powered auto-review for submission abstracts, tracking of submission and acceptance rates, a dashboard for team leads to see the full picture across the team, and - one of the highlights - Slack notifications: new conferences, approaching deadlines, and direct messages to people who got accepted about the tasks they need to complete. Five hours. Fully operational. Multiple rounds of security audits included.
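To give a flavor of the deadline-notification piece: none of the app’s actual code is reproduced here, but a minimal sketch of how a CFP reminder like that can work - where the conference records, field names, and the `SLACK_WEBHOOK_URL` variable are all my own illustrative assumptions - looks something like this:

```python
import os
import json
import urllib.request
from datetime import date, timedelta

# Hypothetical conference records - the real app pulls these from its database.
CONFERENCES = [
    {"name": "ExampleCon", "cfp_deadline": date.today() + timedelta(days=5)},
    {"name": "SampleSec", "cfp_deadline": date.today() + timedelta(days=40)},
]

def approaching_deadlines(conferences, within_days=14, today=None):
    """Return conferences whose CFP closes within the given window."""
    today = today or date.today()
    return [
        c for c in conferences
        if today <= c["cfp_deadline"] <= today + timedelta(days=within_days)
    ]

def build_message(conference, today=None):
    """Format a Slack-style reminder for one conference."""
    today = today or date.today()
    days_left = (conference["cfp_deadline"] - today).days
    return f":calendar: CFP for {conference['name']} closes in {days_left} days"

def notify(webhook_url, text):
    """Post to a Slack incoming webhook (standard {"text": ...} payload)."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

if __name__ == "__main__":
    url = os.environ.get("SLACK_WEBHOOK_URL")  # only send if configured
    for conf in approaching_deadlines(CONFERENCES):
        msg = build_message(conf)
        print(msg)
        if url:
            notify(url, msg)
```

The point isn’t the code itself - it’s that the hard parts (scheduling, data, message formats) are small enough that describing them clearly to the AI is most of the work.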

That would have taken weeks before. Maybe longer with the back-and-forth of requirements, development, testing, deployment. It’s not a hypothetical productivity gain - it’s a thing that exists and my team uses every day.

But adoption wasn’t instant. There wasn’t resistance exactly - more reluctance. It’s hard to let go of old ways, hard to build trust in the output. There’s a steep learning curve: how to use the tools effectively, how to get the most out of them, which tools exist, and which tool fits which problem. And, to be honest, a lot of worry about cost.

What I’ve learned about driving adoption: we need to constantly share with each other. Tools, tricks, skills, resources. When one person gets better, everyone should get better. That’s the only way this works at a team level.


Three things in parallel

Before AI, context switching cost me 20 to 40 minutes every time. That’s 20 to 40 minutes to sink into something deeply enough to be useful. And any small interruption could reset that timer completely.

Now I work on three things in parallel across different windows. I have my own tricks - different terminal colors for different workstreams, different screens on my computer for different projects. But the fundamental shift isn’t about tricks. It’s about what AI changed about the nature of focus itself.

I don’t need to “sink” into something anymore. I need to “refocus” - which is a much lighter operation. Check the current status, remember where I left off, give it the next task, and move to the next window. When I start something new, I use the AI to build a plan with milestones that I can always reference, and have the AI keep me in check against that plan.
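To make that “refocus” operation concrete, here is a tiny sketch of what such a milestone plan can look like - the plan contents and field names are invented for illustration, not my actual setup:

```python
# Hypothetical plan - in practice, one small record like this per workstream.
PLAN = {
    "goal": "Ship the conference-tracking app",
    "milestones": [
        {"name": "Schema + auth", "done": True},
        {"name": "CFP auto-update job", "done": True},
        {"name": "Slack notifications", "done": False},
        {"name": "Team dashboard", "done": False},
    ],
}

def refocus(plan):
    """Answer the two questions a context switch actually needs:
    where am I, and what's next?"""
    done = [m["name"] for m in plan["milestones"] if m["done"]]
    todo = [m["name"] for m in plan["milestones"] if not m["done"]]
    return {
        "goal": plan["goal"],
        "progress": f"{len(done)}/{len(plan['milestones'])}",
        "next": todo[0] if todo else None,
    }

print(refocus(PLAN))
```

Whether the plan lives in a file, a ticket, or the AI’s own context, the shape is the same: goal, milestones, current position. That’s all a refocus needs.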

The deep focus that used to be required - holding the entire problem space in your head, every variable, every edge case - the AI holds that for you. You just need enough focus to know what you’re trying to achieve and what the current state is. That’s it.

It’s a different way of working. Not necessarily better in some philosophical sense. But for getting things done across multiple fronts without losing quality - it’s a significant upgrade.


When it goes wrong, it’s (almost) always me

This is the part that took the longest to accept.

When the AI gives you garbage output, the instinct is to blame the tool. “It doesn’t understand.” “It hallucinated.” “It went off track.” And sometimes, sure, that’s true. But if I’m being honest - and this post is about being honest - it’s my fault about 99% of the time.

It comes down to three things:

Being lazy with prompts. Vague instructions, not listing everything, not being specific. Letting the LLM figure things out on its own when I should be spelling it out. The AI is powerful, but it’s not a mind reader. When I take the time to be detailed and specific with what I ask, I get better results. Every single time.

Not breaking things down. Throwing the entire problem at the AI in one go instead of decomposing it into parts, sections, logical steps. This is the fastest way to get fragmented, messy output that you’ll spend more time fixing than it would have taken to do it properly from the start.

Not knowing what I actually want. This is the big one. When I haven’t spent the time thinking through what the end goal looks like - when I’m unclear in my own head about what I’m trying to achieve - the AI spirals. It starts creating technical debt, fixing things that shouldn’t have been built that way, producing fragmented content that doesn’t tie together. And you end up in a loop of fixes on top of fixes.
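Those three habits can even be enforced mechanically. As an illustrative sketch - the helper and its field names are mine, not any particular tool’s API - a prompt builder that refuses vague, undecomposed asks:

```python
def build_prompt(goal, context, constraints, steps):
    """Assemble a structured prompt. Forcing every field to be
    filled in is the point: no lazy, one-line asks."""
    if not all([goal, context, constraints, steps]):
        raise ValueError("spell everything out - no empty sections")
    lines = [f"Goal: {goal}", "", "Context:", context, "", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", "Do this in order:"]
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    return "\n".join(lines)

# Hypothetical example of filling it in before going to the AI.
prompt = build_prompt(
    goal="Add CFP-deadline Slack alerts to the tracker",
    context="Flask app, Postgres, deployed behind SSO",
    constraints=["no new dependencies", "alerts fire once per deadline"],
    steps=[
        "Add a nightly job that scans deadlines",
        "Build the message format",
        "Post via the existing webhook helper",
    ],
)
print(prompt)
```

Filling in those four fields is exactly the translation step described above: converting what’s in your head into something explicit. If you can’t fill them in, you don’t know what you want yet - and no prompt will fix that.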

The main difference between working with AI and working in your own head is this: when you think through a problem yourself, you already have all the details, the plan, the context - it’s all in there. But when you need to tell the AI, you have to convert your thoughts into explanations. That translation step takes time and energy. And when you skip it or rush it, you pay for it later.

This realization is humbling. But it’s also empowering - because it means the bottleneck isn’t the tool. It’s me. And I can improve me.


You’re not alone

If you’re reading this and nodding - if you feel like you’re more productive but not productive enough, if the pace of change gives you anxiety, if you’ve blamed the AI for output that was really your fault - I want you to know you’re not alone.

This space moves fast. Faster than anything I’ve seen in over a decade of doing this work. We need to stay on top of it. But we can’t have all of it in our heads at any given moment. Nobody can.

The goal isn’t to know everything. The goal isn’t to hit some theoretical maximum of productivity. The goal is simple: be better than yesterday. Learn something this week that you didn’t know last week. Share it with your team. Review what worked and what didn’t. And keep going.

That’s it. That’s the whole strategy.

It’s not glamorous. It won’t make a good LinkedIn post. But it’s honest, it’s sustainable, and it works.


“You don’t have to see the whole staircase. Just take the first step.” – Martin Luther King Jr.
