Intelligence* can't be simulated: the only way for a system to act intelligently is to BE intelligent. If humans cannot tell an AI from a human, then the AI must be as intelligent as a human. By definition, intelligence is a description of how a system behaves, not how it works (its internal state), so there is no need to investigate HOW a machine is intelligent if it exhibits the traits of intelligence.
The limitation of the Turing test is NOT that it cannot verify intelligence, but that most people don't have a clear idea of what capabilities intelligence entails. That's because we've never had to explicitly identify the properties of an intelligent being. We made some guesses: only humans could do math, play chess, create art, or carry on conversations. Those WERE unique aspects of human beings -- until now.
A naive response to AI chatbots is to assume that, because AI can now do some things only humans could do before, either (A) the AI must be sentient, or (B) sentience must be redefined so that it is somehow limited to the human form. The correct response is to develop a more complete understanding of intelligence.
Large language models like OpenAI's GPT-3/ChatGPT possess only a small fraction of the abilities of a human mind. They cannot solve novel logical problems, incorporate new information into long-term memory, form new concepts, or generally come up with new ideas. I don't see any of these as insurmountable obstacles, but as a large set of new hardware and software problems that must be solved to fully reproduce human capabilities. I think a layperson could screen for these capabilities easily enough if given proper training on how to test each function. In other words, with an informed tester, the Turing test can confirm intelligence.
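To make the idea of an "informed tester" concrete, here is a minimal sketch of what a capability-screening harness might look like. Everything in it is an assumption for illustration: the probe categories, the sample questions, and the `ask_model()` stand-in, which a real test would replace with an actual chatbot plus a human judge scoring the replies.

```python
# Minimal sketch of an "informed tester" screening harness.
# The probe questions and ask_model() are hypothetical placeholders;
# a real screening would call an actual chatbot and have a human judge
# evaluate each reply.

PROBES = {
    "novel logic": [
        "A zib is heavier than a quop, and a quop is heavier than a fleem. "
        "Which of the three is lightest, and why?",
    ],
    "long-term memory": [
        "Earlier in our conversation I told you my cat's name. What is it?",
    ],
    "concept formation": [
        "Invent a new word for 'the smell of rain on hot pavement', "
        "define it, and use it in two different sentences.",
    ],
}


def ask_model(prompt: str) -> str:
    """Stand-in for a real chatbot call (e.g., an API request)."""
    return "(model reply goes here)"


def run_screening() -> dict:
    """Collect one reply per probe, grouped by capability, for a human judge."""
    return {
        capability: [(p, ask_model(p)) for p in prompts]
        for capability, prompts in PROBES.items()
    }


if __name__ == "__main__":
    for capability, replies in run_screening().items():
        print(f"== {capability} ==")
        for prompt, reply in replies:
            print(f"Q: {prompt}\nA: {reply}\n")
```

The point of the sketch is only that each missing capability can be probed with targeted questions; the judging itself still falls to a trained human tester.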
*I use sapience, sentience, intelligence, and consciousness interchangeably, even though they are very different concepts, because few people know the difference.