Generative AI (GenAI) has taken the world by storm, and not just in tech: it has infiltrated virtually every industry, with billions of dollars being invested to unlock its hidden potential.
I am sure many of you have already experimented with some aspect of GenAI, whether that is chat interfaces like OpenAI's ChatGPT and Google Bard, or the impressive text-to-image generation tools like OpenAI's DALL-E, Midjourney, and Stability AI's Stable Diffusion, to name just a few.
I use ChatGPT/Bard on a regular basis, from debugging cryptic Linux error messages to crafting complex regular expressions to generating PowerShell snippets for automating various tasks; the possibilities, even for IT administrators, are pretty endless. My workflow typically includes ChatHub, an all-in-one chatbot browser plugin that lets me use ChatGPT and Bard simultaneously to compare their responses and identify the best possible answer.
Until recently, solutions like ChatGPT were trained only on data up to September 2021, but even beyond this constraint, the biggest issue plaguing all of these AI models is hallucination. An AI hallucination occurs when a model simply makes up a response and presents it as factual, and while the broader industry is working on this problem, it certainly makes it difficult to trust an answer without validating it yourself. I have seen this first hand when asking ChatGPT to generate code: I would say the output is correct roughly 60% of the time, and I often have to verify and re-prompt when I know the syntax or answer is completely wrong.
While using these platforms, I had been thinking about a personal use case of mine, and I was curious whether other bloggers, or even some of my readers, might be able to relate.