Are First Principles Becoming a Distraction?
I'd like to explore the uncertainty and indecision many of us are grappling with as we weigh fundamental understanding against the immediate results enabled by AI tools. For clarity's sake, know that in this post I use the terms "first principles" and "fundamentals" interchangeably.
Lifelong learning is essential for a lasting infosec career. For the past year or so, as I've worked toward expanding my knowledge and skill set, I've dealt with a persistent internal dilemma.
I like to know the bare truth behind systems: how they work, and how individual first principles come together to make a larger whole. On its face, this doesn't seem like a problem. The first principles—the fundamentals—are the most important, no? At least, that's what my teachers did their best to hammer home. Lately, however, that lesson keeps being called into question.
In cybersecurity—and similarly, in many IT-related fields—practical necessity often forces us to learn tools rather than the underlying first principles. The industry is results-oriented: knowing how to quickly secure systems, respond to incidents, or manage infrastructure often matters more immediately than grasping the theoretical fundamentals behind every tool or practice. This pragmatic approach isn’t inherently flawed—it's effective—but it creates tension when considering long-term knowledge retention and adaptability. As the availability and sophistication of AI-powered tools grow, it becomes even easier to focus purely on immediate outcomes, potentially at the expense of deeper understanding.
The fact remains: first principles are of extreme importance. It's necessary to have some understanding of an underlying system and its processes before you can truly validate its state—its safety. That being said, there is no such thing as a "perfect" security attestation. Security is fluid. Ensuring security requires ongoing vigilance. Failing to remain vigilant inevitably invites risk. And so, we rely on our tools to get the job done. We have to—we often have little time for much else.
More and more, we're outsourcing our understanding of a domain (in this case, information security) to the LLM. To the tool. What use is there in cognitively front-loading all the fundamentals when you can just have the model worry about them? And if the model doesn't have the capability, can't you just fill in the gaps as necessary?
Students and knowledge workers, especially those in IT-oriented fields, face this dilemma as the age of AI progresses. The value proposition of putting effort into learning first principles, versus simply relying on outsourced intelligence to solve your problems, is genuinely difficult to calculate on your own. And that value is always in flux as model capabilities enable ever more sophisticated agentic behavior.
On one hand, AI tooling can help you see the bigger picture. It can just as easily augment your ability to understand the fundamentals. It's an amazing learning tool.
On the other hand, it hands you an intelligent "cudgel" to smash your problems with. You can tell it to do Z without having to know X and Y yourself. Because it already has all the information it needs—usually.
It's hard to avoid sentiments online that young and future generations could be at a career disadvantage as the capability and dynamism of AI agents continue to outpace the rate at which humans can acquire similar skill sets. Optimists counter that AI agents will merely automate the boring parts, freeing up cognitive space for humans to focus on what only humans are good at. There's no telling which outcome will actually come to pass. And therein lies the dilemma: at this point, it's impossible to know.
I don't mean to portray AI tools negatively. They've been of great use to me. Still, whenever I'm studying on my own—typically the fundamentals of some IT-related domain, programming language, etc.—I find myself asking a series of questions: "Are first principles becoming distractions? Am I wasting time mastering them while others leap into AI tools and find rapid success? Will diving directly into results-oriented projects with these tools offer a faster, equally robust understanding? How does their usage align with personal privacy concerns—especially since these models often handle sensitive data behind closed doors, without clear transparency or guarantees of trust?"
It's a real rabbit hole. And though I remain resolved in my beliefs that an understanding of first principles will always be important and that some measure of human intervention in our systems will always be necessary, I grow uncertain as to how practical it will remain to prioritize those fundamentals as time passes.
Are these tools changing the way you learn? The way you approach problems? Do you feel over-reliant at all, or do you not see it that way? How do you feel about the bigger picture? What else is on your mind?