AI is really a conceptual problem.
What is "human intelligence"? I think for most people it's a fuzzy concept combining self-awareness, reasoning, information processing, problem solving, symbol manipulation, insight, etc. In other words, not clear enough to make good sense when thinking about machine intelligence.
What is "thinking"? ChatGPT seems to think, but all it does it string together words and phrases and sentences, based on some probability calculations that it made during its training. I've tried it several times, and what I notice about it is that it uses vast amounts of cliche. Which is not at all surprising, since cliches by definition are more likely to occur in text than original tropes. Its output makes sense, but it's yawn-inducing boring.
On the other hand, I think all those processes, plus processes not yet understood or recognised, are necessary for sentience and self-awareness. Will machines get there? Maybe. The real danger is that we will confuse their making sense with wisdom, and rely on them to do things only humans should do. Such as judging guilt and innocence.
Footnote: The most common imagery of robots shows them as humanoid. But almost all robots currently at work are machines that look nothing like human beings. They're basically arms.