I finally broke down and downloaded Ask Ai or whatever and I gotta admit, it's rather impressive. I never expected it to reach this level of human-esque authenticity and even scholarly credibility.
Indeed - though I haven't been fooled (yet) in a blind test to differentiate between a human paragraph and an AI paragraph - I've asked the software some pretty tricky questions and, I'll admit, some of the answers reflect nuance and ring of thorough, in-depth evaluation.
All that said, I AIN'T WORRIED! At least not yet. Because I have yet to see or read anything that would lead me to believe the AI could take my job. Why? Because, as far as I've been able to discern, though AI can now approximate human detail and nuance, it remains incapable of 'thinking for itself,' so to speak. Basically, the machines can duplicate or even surpass our best, but they cannot match our 'worst.' Corny as it sounds, our capacity for flaws and errors still gives us the advantage (I'll explain how in a bit).
Last year, it caused something of a national uproar when an AI-generated illustration won a prestigious art competition against human competitors. (A few) people seemed outraged, terrified, or both because, after all, it's one thing if a computer can calculate better than us... but art?? That's our thing, right?
Nah.
Humanity actually began its harrowing journey toward obsolescence decades ago. Personally, I date the timeline of robot conquest back to 1997.
Mankind's ultimate downfall in the face of machines began way back then, when IBM's Deep Blue chess computer defeated reigning world champion Garry Kasparov at his own game.
Kasparov had previously boasted that he would never lose to a machine because, as he explained, chess is not merely a game of raw logic; there's an intuitive human element, and it was that element, the Russian mastermind argued, which machines simply could not duplicate. Of course, as it turned out, Kasparov was wrong. During their fateful bout, the Russian frequently exploded in emotional outbursts after his algorithmic opponent made its moves. The exasperated world champion went so far as to accuse IBM of cheating because he recognized Deep Blue's moves as those of an intuitive and very human mind, not merely those of an oversized calculator.
Ultimately, the faceless Deep Blue eked out a narrow win over the animated Garry Kasparov in 1997 and, with it, the machines claimed their first real victory over mankind. From that point on, we no longer lived in a world where the John Henrys among us could still defeat the machines at their own game. Alas, these days, numerous chess-playing supercomputers and even advanced software programs can easily defeat even the most brilliant players, Magnus Carlsen included. I read somewhere that, if we included chess machines and advanced software within the world rankings, Carlsen (mankind's best contender) would barely crack the world's top twenty-five.
What does this mean? Put simply, if machines can beat us at chess, they can and will (eventually) beat us at everything.
This all sounds very sci-fi, but spending not even a full hour tinkering around with Ask Ai reaffirmed my belief that, not only is such a statement not science fiction, it's not even the distant, dystopian future. We're quite literally living in a world where a basic, widely accessible app currently downloaded on my phone can nearly pass the fabled 'Turing Test.' Again, this ain't some advanced Ex Machina deal; this is a basic app that took scarcely a minute to download on a standard Android phone.
All that said, to get back to my original point, I AIN'T WORRIED!
Why? Because I'm wrong with far greater regularity than Ask Ai.
And yes, in this case, at least regarding my profession, such a reality still gives me the advantage.
I submitted a series of increasingly difficult questions to Ask Ai and, as mentioned, I was rather impressed by its thoroughness, nuance, and credibility. That's all great.
But here's the thing...
Ask Ai has yet to offer an answer that did not fall, as perfectly as a puzzle piece, within the realm of correct consensus and contemporary scholarship. Its answers are simply a little TOO right. TOO on the nose. It shows no capacity to disagree, to push back, to evaluate on its own terms or, in short, to teach.
You remember that scene from Good Will Hunting where he's talking to the stuck-up Harvard student whose opinions all came straight from the book without reflecting any deeper thought?
Ask Ai is... speaking of on the nose... it's JUST like that.
It's capable of approximating a very human-sounding answer to complex questions.
But, ultimately, it's still just super-advanced copy-and-paste software. It remains bereft of the critical human element that an apt scholar still needs.
Basically, ChatGPT software hasn't Deep Blued me yet. In fact, it's still not close. This thing is clearly still not human, nor even a close approximation of actual humanity, for the simple reason that it cannot disagree, cannot offer a 'wrong' answer, and, therefore, cannot truly think.
After all, what is thinking if not disagreeing? Our ability to be wrong demonstrates our ability to think for ourselves.
I know what you're thinking:
(You disagree with everyone and argue to the point that it's annoying. So what if ChatGPT software can't approximate your frequently aggravating capacity for raw contrarianism? Being insufferably disagreeable all the time doesn't make you any more 'human' than anyone else, and, even if it does, I'm glad the software doesn't follow in your irritating footsteps.)
That's fair.
Still, though, I take reassurance in the fact that I remain human enough to disagree and push against even the most solid consensus, while knowing my Ask Ai app remains incapable of duplicating my imperfection, at least for the time being.