CASE STUDY

The Rise of the Pseudo Tech Professional

For years, both the legitimate cybersecurity world and the criminal one shared an uncomfortable but useful truth: competence mattered.

Whether you were building defensive architecture, conducting incident response, developing malware, reverse engineering code, or running a network intrusion, there was an expectation that the person doing the work had actually spent the time. They had learned the trade. They had made mistakes. They had debugged broken systems at 2 a.m. They had stared at packet captures, logs, code, shell output, and malformed data until pattern recognition became instinct.

Not to be too indelicate, but there was usually a difference between a practitioner and a pretender.

In FBI cybercrime work we had a name for one category of criminal pretender: the script kiddie. Someone with tools, attitude, and ambition, but not much underlying skill. Even in the Bureau we had our own versions: the political appointee, the nepotism hire, the executive favourite, the person with the title but not the technical depth. They existed, but they were still constrained by one hard reality. At some point, the technology exposed them.

That is what is changing.

AI, large language models, and now “vibe coding” have begun to erode one of the oldest friction points in cyber operations: the need to actually know what you are doing before you can look like you know what you are doing.

That distinction matters.

I am not arguing that AI has made expertise irrelevant. It has not. In fact, real experts are often becoming more effective because they know how to interrogate outputs, validate assumptions, test code, and identify subtle failure points. The problem is that AI has also produced something else: a growing class of pseudo tech professionals who can generate the appearance of expertise without the substance that used to sit behind it.

  • They can write code they do not understand.
  • They can produce architecture diagrams they could not defend under questioning.
  • They can draft threat assessments assembled from AI summaries, without the ability to evaluate source quality, analytic confidence, or operational plausibility.
  • They can speak in fluent technical prose while lacking the judgment that comes only from experience.

In other words, we are watching the rise of the cyber impersonator.

And yes, I mean that deliberately.

Because what AI increasingly enables is not simply assistance. It enables technical impersonation. A person with modest knowledge can now present as a strategist, analyst, engineer, malware developer, intelligence professional, or security architect largely because the machine can supply the language, the formatting, the code, and even the confidence.

Historically, bad actors in cybercrime needed either technical skill, access to skilled collaborators, or time to develop capability. Defenders and employers could take some comfort in the fact that credibility usually tracked, at least loosely, with competence.

Today, that relationship is breaking down.

The same phenomenon is appearing across the legitimate workforce. We are seeing people whose principal research method is asking an LLM. Their fact-finding is synthetic. Their coding is synthetic. Their analysis is synthetic. Their expertise is performative. When challenged, they often cannot explain why a control works, why a vulnerability matters, why an architecture is insecure, why a detection logic fails, or why an adversary would behave a certain way.

But they can sound convincing. That may be the most important change of all.

Cybersecurity has always had to deal with incompetence. What is new is the scale, speed, and plausibility with which incompetence can now be packaged as expertise. The old script kiddie was often obvious. The new pseudo tech professional may arrive with polished documentation, AI-generated code repositories, slick presentations, confident terminology, and a personal brand built entirely on machine-accelerated output.

The result is a market signal problem.

Leaders may struggle to distinguish the practitioner from the prompt operator. Recruiters may mistake fluency for competence. Clients may buy strategy from people who cannot validate their own recommendations. Teams may inherit brittle code, shallow analysis, insecure automations, and false confidence, all wrapped in professional formatting and consultant language.

This is not simply annoying. It is a security problem.

In cybersecurity, bad judgment scales badly. Poorly understood code becomes vulnerable code. Poorly understood architecture becomes fragile architecture. Poorly understood intelligence becomes misleading intelligence. And poorly understood AI output, when delivered by someone pretending to be an expert, can move from embarrassing to dangerous very quickly.

For cybercrime investigators, coding signatures may be replicated across dozens, if not hundreds, of malware samples, making profiling and hacker identification more difficult. Digital forensics indicators, once reliable signposts for examiners, may now be deliberately obfuscated as part of forensic countermeasures. While AI may improve investigative analysis, it can also make identification harder.

For industry, mitigation and verification become critical. The industry therefore needs to recover something it has been too willing to outsource: technical discernment.

Companies wishing to hire cybersecurity experts run the risk of getting AI-driven expertise with little regard for the organisation’s actual needs. Choosing a provider becomes harder when there is no guarantee that the solutions rest on genuine skills, knowledge, and abilities. We have seen this happen recently with some large cybersecurity providers. Companies seeking expertise must now verify contractor skills and previous successful contracts, and negotiate how much AI will be used in fulfilling those contracts, not to mention quality assurance. Moreover, we as an industry must be better.

  • We need to test for depth, not just delivery.
  • We need to value verification over velocity.
  • We need to stop confusing AI-assisted production with expertise.

And we need to recognise that the newest insider risk to many organisations may not be the malicious actor, but the unqualified one who has learned to cosplay as a cyber professional.

The script kiddie has not disappeared.

He has evolved.

Now he may wear a blazer, call himself an AI consultant, produce “research” with a chatbot, generate insecure code on demand, and present himself as a cyber strategist on LinkedIn.

And that should concern all of us.
