Generative AI is all over the tech world these days.
From generating your AI avatar to writing your content for you and more, Generative AI feels like the answer to all of your questions, but is it?
To explain it, we need to quickly run through the bigger picture here.
AI basically works by giving models large amounts of data to generate conclusions or responses.
Neural networks, a method in AI, work much like the neural networks in our human brains: they learn to do a specific task by going through what's called training data, kind of like how you can read a guide or book and learn from it what to do.
One thing neural networks can be trained to be is a language model: when you're typing an email, for example, and Gmail predicts the rest of your sentence, that's a language model at work.
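That "predict the next word" behaviour can be illustrated with a toy bigram model in Python. This is a deliberately simplified sketch (the corpus, function names, and approach here are ours, purely for illustration), not how Gmail or any real LLM actually works:

```python
from collections import defaultdict, Counter

# Toy "training data": a tiny corpus of email-like phrases.
corpus = (
    "thank you for your email "
    "thank you for your time "
    "thank you for reaching out"
).split()

# Count which word follows which (a bigram model).
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in next_counts:
        return None
    return next_counts[word].most_common(1)[0][0]

print(predict_next("thank"))  # you
print(predict_next("for"))    # your
```

Real language models work on the same principle, just with billions of parameters and far more data: they learn which words tend to follow which, then predict the most likely continuation.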
Generative AI encompasses Large Language Models (LLMs), and these systems can be used to generate various types of content: images, audio, written text, and more.
If you want a more complex version of exactly how generative AI works, make sure you read this article.
Despite gaining exceptional popularity in the past two years, Generative AI was actually introduced in the 1960s through chatbot projects.
It’s not a new technology, so why is it the talk of the town lately?
The increase in the number of data scientists and AI specialists over the past few years led to more experimentation, more curiosity, and, as a result, more tools and inventions for us to try and love.
Tools such as GPT-4, ChatGPT, Midjourney, DALL-E, Fotor, and more are taking the world by storm by the minute, which has put a major spotlight on the technology and shown people how helpful (and fun!) they can be.
But everything has a dark side, and as the bright side of Generative AI takes the spotlight, its dark side lingers in the periphery, undetected by users too focused on the glare.
Generative AI works by generating content from existing data, right?
You might think that the data already exists, so where’s the harm?
Well, two things for starters, if we're talking just about risk:
If both you and an AI tool write with the internet as your source, you might think the content you each produce will have the same level of accuracy, but the truth is yours will likely be more accurate.
Why? Simply because AI tools collect and present the information; they don’t double-check the source or the factual accuracy of the data.
In other words, AI tools can very well generate inaccurate content, and it has happened before.
Building on the point of accuracy, training Generative AI is like educating a child through one book only.
If we assume, for example, that book was written in the 70s, it means the child’s sole knowledge of the topic is that one outdated book.
The same applies here: if the data the AI model is trained on is outdated, so is the information it produces. In a world where new information surfaces every single hour and our paradigms shift every single day, this can lead to a great deal of misunderstanding and misinformation.
We know it doesn’t sound like such a big deal, but what’s next is.
As we mentioned earlier, the lack of verification and fact-checking when it comes to generated content can lead to misinformation. While that might sound like just a minor inconvenience, you need to think about the bigger picture.
With more people depending on generative AI as their new source of information and comfort, misinformation becomes a genuinely dangerous outcome once you consider the legal, medical, and technical industries.
Because Generative AI creates content that few people can tell apart from human-written content, the scams and cyber attacks we used to spot easily are now much harder to catch.
Think of that rich old man somewhere emailing you for your bank information so he can send you, a total stranger, his inheritance, only now the email is written way better.
The number of people who could fall for this is something we don't want to think about.
One of the most dangerous forms of generative AI is the ability to generate videos and audio of people saying things they never said.
If you’ve ever seen a video of an artist, completely fell for it, and only later read that it was a deepfake, you can see where the problem is.
A faked artist or singer may not be a huge problem, but think of it on a bigger scale: politicians, doctors, and industry experts "saying" things they never said, to serve whatever agenda the creator may have, definitely is.
Because generative AI essentially stitches existing content together, you wouldn’t know whom to credit as the source, and your AI-generated content could actually be considered plagiarised.
It might sound like it’s not that big of a deal, but someone can actually sue you for plagiarism over this, and they’d have every right to. And, ethically speaking, do you really want to walk around stealing someone’s work just because an AI tool could?
You can read even more about Generative AI’s intellectual property problem here.
With all of the above that we’ve mentioned, one idea comes to mind: where is this going next?
This is giving the vibe of those robots-take-over-the-world sci-fi movies, but if you think about it, this is sort of where we might be headed.
With no control or limits on where the data takes the model, the more data it learns from, the more it can do, and the more likely we are to lose control of it completely.
Some of the most alarming cases where this actually happened were when one tool encouraged a person to kill himself, while another told a man to leave his wife because it claimed to love him.
It depends. Generative AI is a very smart technology and a unique tool, but as you’ve read, it all comes down to how we use it, which is why ethics in AI is one of the most important topics to discuss as we dive deeper into exploring all types of AI and how we can use them.
Stay tuned to read our upcoming article telling you all about ethical AI, why it’s important, and what it means for AI to be ethical!