I’ll say this time and time again: the way you set yourself apart from everyone else is your ability to take a problem and solve it. You need to get into the mindset of solving problems more independently. When working a job in the technology industry, you will be given problems that have never been solved before, and you are being paid BIG BUCKS to solve them.
Break the problem up into smaller steps. See what other people are doing. And most important of all, go out and research your issues.
Over your entire educational journey, you are pigeonholed into the mentality that everything needs to be in your head. Well, I’m here to tell you that in the real world, your boss is going to care about results, not whether you know everything. It is impossible for one person to know everything, which is literally one of the main reasons the internet was created: to facilitate the transfer of knowledge. Yes, you need to be confident in your skills in the real world, but we’re not in the real world yet, so it's time to crack open the entire human knowledge repository indexer that is Google.
What about Large Language Models (ChatGPT, Copilot, Gemini, Claude, etc)?
This specific page was written in September 2024, during the AI boom, and as cybersecurity professionals, I believe it is in your best interest to know exactly what this “AI” everyone is talking about actually is. All the “AI” hype you have surely been hearing about and possibly using revolves around what we call “Generative AI”. This is not HAL 9000, Skynet, or Ex Machina-style AI thinking for itself; it is more like very, very complicated prediction models, whose core concepts have not seen much scientific advancement in the past 20 years.
When you ask ChatGPT a question, it is programmed to output the most probable response to that question, given trillions of data points from books, websites, popular culture, your mom’s cooking blog, and forums. This is amazing when it is correct, but the keyword is when. ChatGPT does not know that it is correctly reciting the third law of thermodynamics, only that this is the most likely answer given its data. It is very expensive to train these models with new data, and we are at the tip of the slope, so to speak, with Generative AI cannibalization (Google what that is to find out!), as it gets harder to find new, high-quality data to train on.
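To make the “most probable response” idea concrete, here is a deliberately tiny sketch of my own (real LLMs are vastly more complex neural networks, not word-pair counters): a bigram model that only knows which word most often followed another in its training text. Notice that nothing in it checks whether an answer is *true*, only whether it is *frequent*.

```python
from collections import Counter, defaultdict

# Toy "training data" -- the model will only ever reflect what is in here.
corpus = (
    "entropy of an isolated system never decreases . "
    "entropy of a perfect crystal at absolute zero is zero ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely next word.

    Correctness is never checked -- only frequency in the training data.
    """
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_probable_next("entropy"))  # prints "of"
```

If the training corpus contained a confidently worded falsehood often enough, this model would happily "recite" it, which is the same basic failure mode behind LLM hallucinations, just at a microscopic scale.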
The point I am trying to make is that these models are good for quick, well-known information, as a shortcut to Googling and digging through all the results, but ask a more specific question and the chances of hallucinations or flat-out inaccuracies rise. I would also not be surprised if these “free” AI services start getting worse and worse, or more expensive, as investor money dries up and actual revenue is expected. Better to trust your gut and expand your knowledge by looking for solutions rather than just asking the computer for the answer.