This is what I always think after I read a comment from a computer person along the lines of "this is nothing like a human brain, this is just an extremely complicated network of connections reacting to input by searching for patterns in that network." Before I decide whether or not LLMs can ever become human-like, I'll need to hear the opinion of someone who's an expert in computers AND neuroscience.
I'm only a neurobiologist with some Python knowledge, but the following quote from sci-fi writer Charles Stross feels plausible enough to me: "What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull." (Check out the entire keynote; it's amazingly prescient for being six years old.)
More relevant to your point about expertise: if you're into podcasts, I'd very much recommend the Brain Inspired podcast. It features long-form, in-depth discussions at the intersection of experimental neuroscience, theoretical neuroscience, and AI research, with actual researchers from those fields:
https://braininspired.co/podcast/