Well, folks, if you're talking about modelling photorealistic characters, it's essentially two problems: 1) capturing every. single. subtlety. of a person's physical features, and 2) throwing enough computation cycles at algorithms that emulate them.
Now 1) is a pretty large problem. You're looking at things like subsurface scattering to emulate the diffusion of light through semi-translucent surfaces, like skin. After all, a character isn't going to look real if its skin looks like cardboard in sunlight. This is computationally expensive, but also easy to model in the sense that it's a solved algorithmic problem - if you have the algorithm, you can implement it. What isn't so easy to model is proportions. It's not as simple as slapping a photo in a viewport and tracing a 3D model out of it. There's a slight asymmetry to most human faces, there are blemishes and imperfections, and it's hard to get these things absolutely right because CG modelling tends to make things too perfect.
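If you want the flavour of subsurface scattering in code: production renderers use measured BSSRDF/dipole diffusion profiles, but a single-exponential falloff shows the basic shape of the idea. Everything here (the `sigma_tr` value especially) is an assumed illustrative number, not production data:

```python
import math

def sss_profile(distance_mm, sigma_tr=0.5):
    """Toy diffusion profile: light that enters the skin scatters around
    inside it and re-emerges some distance from the entry point, dimming
    exponentially with the distance travelled. sigma_tr is an assumed
    effective extinction coefficient in 1/mm (made up for illustration)."""
    return math.exp(-sigma_tr * distance_mm)

# A hard opaque surface ("cardboard") would re-emit nothing away from the
# entry point; skin still re-emits a good fraction of light 1-2 mm away,
# which is what gives it that soft glow instead of a flat look.
samples = [sss_profile(d) for d in (0.0, 1.0, 5.0)]
```

The point isn't the exact curve, just that re-emitted light falls off smoothly with distance rather than cutting off at the surface.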
And even if you do get past that block, as many have, here's the real issue: even if you make an actual honest-to-goodness 100% replica of the human muscle/skeletal system in your model, you still get an unemotive android that tips into the Uncanny Valley by virtue of one thing: interaction and emotional response. It's the small things - how you move your eyes when you speak, how often you blink, the curl of your lip when you sneer or smirk, the ever-so-slight wrinkling of the skin at the corners of your eyes when you grin, and a million other things like body language and gait and posture. It's easy to fuck these things up if you're not doing full-body mocap.
That's what Avatar tried to do with its performance capture. It succeeded to an extent, but the outcome wasn't perfectly photorealistic - though of course, we don't have a real-world reference to compare a photorealistic cat-thing to, so that's a bit moot. The problem with Avatar, of course, is that the CG stands out because it's mixed with live-action, and as good as CG artists might be, fusing the two seamlessly isn't going to happen until we've modelled every sort of interaction of light and materials and physics, and implemented algorithms for all of them.
Which brings us to 2) - we've been able to implement an impressive number of algorithms in CGI, from the aforementioned subsurface scattering to radiosity to caustics (by emulating light as beams of photons), plus soft/hard-body physics and skeletal rigging and all that stuff. But are we close to simulating an objective reality in a viewport with all the laws of physics in tow? Not quite.
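As a flavour of that photon-emulation idea: photon-mapping style renderers fire photons from the lights and bounce them around the scene, killing each one off probabilistically (Russian roulette) based on surface reflectivity. A toy sketch, with all parameters assumed purely for illustration:

```python
import random

def trace_photons(n_photons=100_000, albedo=0.8, max_bounces=8, seed=1):
    """Toy Russian-roulette photon walk: each photon keeps bouncing with
    probability `albedo` (assumed surface reflectivity) until it's
    absorbed or hits the bounce cap. Returns the average number of
    bounces survived - a stand-in for the per-photon work a
    radiosity/caustics renderer has to do."""
    rng = random.Random(seed)
    total_bounces = 0
    for _ in range(n_photons):
        bounces = 0
        while bounces < max_bounces and rng.random() < albedo:
            bounces += 1
        total_bounces += bounces
    return total_bounces / n_photons

avg_bounces = trace_photons()
```

With an albedo of 0.8 the average works out to a bit over 3 bounces per photon, and every one of those bounces means intersection tests and shading - which is why photon counts, not cleverness, are usually the bottleneck.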
We're emulating that stuff as much as we can, but there's always going to be something slightly off when you're fusing CGI on top of live-action footage, because we can't simulate all 70 trillion photons in that real-world scene hitting our CGI character unless we have a supercomputing Dyson Sphere doing the goddamn calculations, and we can't easily fake the small environmental interactions either, like grass underfoot getting trampled by our fake character's feet.
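To put a very rough number on why brute-forcing that is off the table, take that 70-trillion-photon figure and some assumed per-photon costs (every input below is a back-of-envelope assumption, not a measured figure):

```python
# Back-of-envelope, all inputs assumed for illustration:
photons = 70e12          # the photon figure from the text
bounces = 10             # assumed bounces traced per photon
ops_per_bounce = 100     # assumed intersection + shading ops per bounce

total_ops = photons * bounces * ops_per_bounce   # 7e16 operations per frame

# At an assumed sustained 1e12 ops/sec of compute:
seconds_per_frame = total_ops / 1e12
hours_per_frame = seconds_per_frame / 3600       # roughly 19 hours, for ONE frame
```

And that's before any cloth, fluid, or rigid-body physics gets a look in - hence the Dyson Sphere crack.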
So yeah, there's always going to be some element of the uncanny valley to a feature like Avatar. But if you model the entire scene from scratch, eventually the fidelity of the simulation will be just good enough for most people to ignore the slightly uncanny parts. We aren't there yet, I don't think - even Pixar, with all its research, knows well enough to make its characters stylised rather than photoreal.
It's a problem that Avatar acknowledged far better than TSW (which was mostly keyframed, IIRC): capture as high-fidelity a version of the human performances as we currently can, then transpose them onto CG models. But mixing that with live-action was the wrong way to go about it, because it's too easy to pick out the differences between CG and real-world scenes when the fidelity of our physical simulation just isn't there yet.
What QD is doing with Kara is pretty much the way the issue's going to be tackled in the future, in real-time or with off-line rendering: an extremely high-resolution performance capture, but with the entire scene existing purely in an artist's renderbox somewhere.
Sorry for the braindump. The topic's something I've always been interested in.