I didn't expect the topic of AI to become so polarizing over the past year. There are fears that it will replace many of us, and some are literally saying it will take over the world: a dystopian future where your automotive accident insurance claim is evaluated by a computer rather than a human adjuster.
Personally, I didn't feel worried. When Stable Diffusion finally became available to the public, I played around with it, feeding it prompts, and the results didn't impress me. In fact, they freaked me out because of how uncanny the images were.
So when ChatGPT became public, I predicted that it would have that same uncanny feel and that it would generate results that just felt a bit off. I thought a trained writer would be able to detect that a piece of text was AI-generated.
Soon enough, I started hearing tales from close friends that they were using ChatGPT to write their essays for a hated philosophy course. Their essays were convincing enough to land them decent grades and pass the course.
I underestimated the usefulness of this AI system.
Going back to Stable Diffusion, the uncanny valley effect that I mentioned made me realize that what it lacked was depth. Everything that came out of it was just too clean and perfect. When we humans are presented with something that lacks those subtle cues, we start feeling uncomfortable. We know it's not right.
Last night, bored before sleep, I signed up for an OpenAI account and got into ChatGPT. I started asking it basic questions and it would respond with some pretty accurate and neutral output.
However, I decided to get tough with it and see if it could answer a question I wrote about a few years ago, on a topic where I was basically the only one online who had ever covered it: "What games simulate automatic transmissions correctly?" All the games it listed were the popular racing titles, none of which simulate automatics accurately.
Now, the above is just an anecdotal example, and someone more clever than me would have provided better prompts for more accurate answers. Or gotten it to spit out very inaccurate information, or to just make up facts, what has been termed "hallucinations".
After spending some time with ChatGPT, I started noticing that it always followed the same essay-like structure: an introduction with the thesis, a series of numbered arguments, and a conclusion with a disclaimer that the answer could be subjective. For a school essay, this structure is perfect.
But, just like the freaky Stable Diffusion images, the results from ChatGPT lacked the character and nuance that make each writer unique.
It had a mechanical feel and came across as generic, but many popular websites write in that style, since many of their authors are ghostwriters who must purposely compose something vacuous.
CNET, for instance, was caught using ChatGPT output as articles for their website. Because their content is already pretty generic, it didn't seem like anyone noticed. They only got in trouble when they themselves admitted to doing it.
Personally, I'm not afraid that ChatGPT will replace me. I do know it will improve over time, and other competing AI products might do a better job. In the near future, I can imagine an AI system being able to replicate my style.
Especially in the realm of non-fiction, the amount of research done on a topic is easy to gauge by how deep the discussion goes. Humans can fall into the same trap as AI programs, producing something shallow and poorly researched.
Take, for example, the infamous Wikipedia "Bicholim Conflict" hoax, which described a war that never happened. If you find a blogger or website writing about it as fact, then you know they didn't look beyond Wikipedia for their research. Seeing it cited gives a good idea of how much effort the writer puts into their work.
I asked ChatGPT to tell me about this conflict, and I was impressed that it didn't fall for it, simply stating that it couldn't find anything about it, that it was not well known, or even that it might be fabricated. Another explanation, though, is that the article had already been removed by the time ChatGPT's training data was collected, and maybe it's not as smart as it seemed.
For me, I will embrace ChatGPT as a tool to help me research, but just like Wikipedia, it will be a starting point rather than a replacement for good research. I see ChatGPT as an interactive encyclopedia. And just in case anyone has any ideas, I won't be copy-pasting from its output or paraphrasing it.
When it comes to the general public, I don't think we should make laws to limit access to applications like ChatGPT or slap huge disclaimers on them. Just like any tool, we should be taught their limitations and drawbacks so we can use them appropriately.