Flying Through the Fear of AI in Bug Bounty Hunting
WEB HACKING
ChatGPT and DeepSeek are two examples of generative AI that have changed today's landscape. These models can now reason and answer questions with far more complexity, and even more ambitious products are being built or rumored, from robots to AI-translating earbuds. With these advancements comes a pressing question: "How will AI take over my career?"
This post explores that question, along with the reality of the job market and how to avoid being replaced by AI.
Focusing on Ethical Pentesters
Information Technology is a broad career field with many specialized paths within it. Because of this, it is difficult to know what my next steps will look like after college. As of now, I have become fond of learning "bug bounty" hunting, or pentesting: an ethical and legal way to hack companies and help keep them safe from vulnerabilities in exchange for a reward. It is a symbiotic relationship, because hackers do not have to work outside the law and can be rewarded with money or points on the platform.
How could AI take over?
Through this new interest, generative AI has been helpful for learning by generating guides and templates and even answering questions. AI has been useful for creating custom methods to explore or crawl web applications, and LLMs are effective at producing summaries or templates that make it easy to collect thoughts into a testable method. However, I know not to download random AI tools, because some are built to mimic and record researchers' behavior in order to build their own tooling. This puts researchers at some risk, but not too much, because AIs are sometimes vulnerable too (for now).
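To make the "custom methods to explore or crawl web applications" concrete, here is a minimal sketch of the kind of small recon helper an LLM can draft quickly for a hunter to review. The page HTML and `example.com` host are made up for illustration; a real crawler would fetch pages over the network and respect the program's scope rules.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect same-host links from a page, a first step in mapping
    a web application's attack surface."""

    def __init__(self, base_url):
        super().__init__()
        self.base = base_url
        self.host = urlparse(base_url).netloc
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                absolute = urljoin(self.base, value)
                # Stay in scope: only keep links on the target host
                if urlparse(absolute).netloc == self.host:
                    self.links.add(absolute)

# Hypothetical page content for demonstration
page = """
<a href="/login">Login</a>
<a href="https://example.com/admin">Admin</a>
<a href="https://other.site/out">External</a>
"""
parser = LinkExtractor("https://example.com/")
parser.feed(page)
print(sorted(parser.links))
```

The human still decides what to do with the output; the AI just saves the time of writing boilerplate like this.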
Given the current state of things, I believe it is key for anyone in the technology field to understand how AI works and adapt to it. I cannot remember whether it was from a video we watched or a reading, but I recall hearing that we will not see AI's full potential until the underlying hardware catches up. When hardware matches the AI software, that will be the game changer across the whole technology industry. This leads to my hypothesis: as AI develops, it will disrupt the entire bug bounty job market.
The Reality
Although there is a fear that AI could take over areas of the job market, that takeover is not in the near future. This is because generative AI, despite now having reasoning, is tasked with answering a problem rather than performing a job. As of now, the main task of common LLMs like ChatGPT and DeepSeek is general intelligence: they answer your requests but do not hold a job of their own, which reduces the chance of complete replacement in the near future.
Building on the previous paragraph, creativity is essential. Right now, LLMs are tasked with answering our questions, which means they are essentially recalling information previously deemed factual in their training. As the research puts it: "While GenAI offers many advantages in pentesting, it is crucial not to become overly reliant on these technologies. Human oversight remains essential for ensuring accurate and effective results, as well as identifying and addressing any false positives or negatives generated by the AI." (Hilario...)
Another argument against an AI takeover of the pentesting job market is that ethical hackers can now hack companies that run LLM models themselves. On common platforms such as HackerOne and Bugcrowd, there are programs that invite researchers to test these AI systems, which cuts against the idea of machines taking over.
AI can be helpful! As the article by Faroa mentions, AI tools can be extremely useful to hackers. Generative AI can help create outlines for reporting vulnerabilities or answer questions; however, as of now, the hacking itself is still done by a human working alongside AI.
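As a small illustration of the reporting help mentioned above, an LLM can draft a reusable report skeleton that the hunter then fills in with real findings. The section names and the sample XSS finding below are hypothetical, not taken from any specific platform's required format:

```python
# Hypothetical report skeleton an LLM might draft; a hunter fills in
# the real finding details before submitting to a bug bounty program.
REPORT_TEMPLATE = """\
# {title}
**Severity:** {severity}

## Summary
{summary}

## Steps to Reproduce
{steps}

## Impact
{impact}
"""

report = REPORT_TEMPLATE.format(
    title="Reflected XSS in search parameter",
    severity="Medium",
    summary="User input in the `q` parameter is echoed without encoding.",
    steps="1. Visit /search?q=<script>alert(1)</script>\n2. Observe the alert.",
    impact="An attacker can run arbitrary JavaScript in a victim's browser.",
)
print(report)
```

The structure comes from the AI; the finding, the proof, and the judgment about severity still come from the human.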
Take action against AI
The fear of AI taking over the career field may be daunting, but research suggests there is no near-future takeover of pentesting careers. That does not mean it will never come, however, which makes it necessary to understand the functionality behind AI models. Doing so helps you keep an edge as new AI models come out and understand how they work.
In addition, using AI to assist with methodology or small questions is not a bad thing; however, the creative work of finding vulnerabilities for companies still needs human minds to discover those critical new classes of bugs.