All you need is an Intel RealSense 3D camera, some strain sensors, and some time.
[url=http://meincmagazine.com/civis/viewtopic.php?p=29062187#p29062187]araemo said:[/url]
So, I recently read Snow Crash for the first time, and one of the things stressed as part of the success of its persistent, avatar-inhabited virtual world was good code for interpreting facial expressions and transmitting them to your avatar. Basically, allowing almost proper non-verbal communication online. We can technically do that over Skype/videoconferencing... if the connection speed is good. But perhaps a compressed 'facial expression description' would take less bandwidth?
Even better, latency was generally low, with the researchers measuring 3ms for facial feature detection, 5ms for blend shape optimisation, and 3ms for the mapping in software.
I wonder where the discrepancy that adds another 22ms comes from. Are those first numbers averages that can vary a lot, or do they exclude the actual render time, which runs in series with the facial detection rather than in a pipelined fashion?

Unsurprisingly, it currently needs a rather powerful rig to run well: powered by a Core i7-4820K, 32GB of RAM, and a GTX 980, the system renders at a steady 30 FPS.
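The commenter's arithmetic is easy to check. The article's three measured stages sum to 11ms, while a steady 30 FPS implies a frame budget of roughly 33ms, leaving about 22ms unaccounted for (the stage figures come from the article; treating the 30 FPS frame time as the total latency is the commenter's assumption):

```python
# Per-stage latencies reported in the article, in milliseconds.
stages = {
    "facial feature detection": 3,
    "blend shape optimisation": 5,
    "mapping in software": 3,
}

pipeline_ms = sum(stages.values())   # measured processing: 11 ms
frame_budget_ms = 1000 / 30          # ~33.3 ms per frame at 30 FPS

# If the stages run in series with rendering rather than pipelined,
# the unexplained gap is the frame budget minus the measured stages.
gap_ms = frame_budget_ms - pipeline_ms
print(f"measured: {pipeline_ms} ms, budget: {frame_budget_ms:.1f} ms, gap: {gap_ms:.1f} ms")
```

That recovers the roughly 22ms gap the commenter is asking about, so the numbers are at least internally consistent with a serial, non-pipelined interpretation.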
Oculus Rift hack transfers your facial expressions onto your virtual avatar
[url=http://meincmagazine.com/civis/viewtopic.php?p=29062739#p29062739]nehinks said:[/url]
The examples seem to be hitting the uncanny valley pretty hard, IMHO. Is that just me?
Still, it is a good advance; it can only get better from here.
[url=http://meincmagazine.com/civis/viewtopic.php?p=29062187#p29062187]araemo said:[/url]
But perhaps a compressed 'facial expression description' would take less bandwidth?

In theory, yes, it should take less bandwidth: an image contains all of that information plus extra data, such as colour, so a minimal representation of just the expression is less data. In practice... that remains to be seen, but it's plausible, depending on how many points they're tracking.
[url=http://meincmagazine.com/civis/viewtopic.php?p=29063245#p29063245]lewax00 said:[/url]
In theory, yes, it should take less bandwidth... it's plausible, depending on how many points they're tracking.

Yeah, basically: faces only move in so many ways. If the software can identify how your face is moving (i.e., where the muscles pull from and in what direction), that description could be sent to the remote display, which can run a 3D 'rig' of a face with muscles in the same locations, pulling in the same directions.
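The bandwidth intuition is easy to put rough numbers on. As a sketch (the blend-shape count, frame rate, and video bitrate below are illustrative assumptions, not figures from the article), sending one weight per facial blend shape each frame is far smaller than a video stream:

```python
# Hypothetical "facial expression description": one float per blend shape.
NUM_BLENDSHAPES = 48      # illustrative; real face rigs vary
BYTES_PER_WEIGHT = 4      # float32
FPS = 30

# Bits per second for the expression stream vs. a modest video call.
expr_bps = NUM_BLENDSHAPES * BYTES_PER_WEIGHT * FPS * 8
video_bps = 1_000_000     # ~1 Mbps, a typical low-end video-call bitrate

print(f"expression stream: {expr_bps / 1000:.1f} kbps")
print(f"video stream:      {video_bps / 1000:.0f} kbps")
print(f"video is ~{video_bps / expr_bps:.0f}x larger")
```

Even with these conservative assumptions the expression stream is well under 50 kbps, an order of magnitude below the video bitrate, and it compresses further if weights are quantised or only deltas are sent.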
[url=http://meincmagazine.com/civis/viewtopic.php?p=29062485#p29062485]ElectricBlue said:[/url]
Unless you have some ridiculous cartoony avatar, I can see this leading to the uncanny valley real quick.
http://www.gamesajare.com/2.0/wp-conten ... 324495.jpg
[url=http://meincmagazine.com/civis/viewtopic.php?p=29063759#p29063759]Quiet Desperation said:[/url]
I always wonder at the difference between people who want their online/onscreen avatars to look just like them and those that want something different.
[url=http://meincmagazine.com/civis/viewtopic.php?p=29064047#p29064047]NateH said:[/url]
This has real potential to be very helpful for teaching those with autism spectrum disorders to interact with people more normally. Learning to read and respond facially can be very difficult for some of them.

I'm not sure how this would help; all it does is take those same expressions and put them on a digital face. That seems like saying putting words on a screen instead of paper should help people with dyslexia.
[url=http://meincmagazine.com/civis/viewtopic.php?p=29064105#p29064105]lewax00 said:[/url]
I'm not sure how this would help; all it does is take those same expressions and put them on a digital face. That seems like saying putting words on a screen instead of paper should help people with dyslexia.

More like: putting words on a screen where you can dynamically change their representation as the learner gains skill could help people with <limitation x>.
[url=http://meincmagazine.com/civis/viewtopic.php?p=29064175#p29064175]araemo said:[/url]
More like: putting words on a screen where you can dynamically change their representation as the learner gains skill could help people with <limitation x>.

I'm not sure I see where real-time matching to an actual human is necessary in that scenario. It seems like pre-recorded expressions could be used just as well, and we could already do that before this research.
So, you COULD do that with paper, but you would need lots of different versions of the same books, each with different types of enhancements for dyslexia... or you could just let the display adjust as the student learns.

Same thing with facial expressions: you could program the 'game' to over-exaggerate them just enough, then slowly reduce the exaggeration as the student learns to identify them correctly in real time. So yes, I think it has potential. Potential does not equal success, but it might be worth the attempt in case it works.
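The adaptive-exaggeration idea above can be sketched as a simple scaling of blend-shape weights, where the boost shrinks as the learner's recognition accuracy improves. All names, the decay rule, and the 1.5x cap here are hypothetical, not anything from the research:

```python
def exaggerate(weights, accuracy, max_boost=1.5):
    """Scale blend-shape weights: full boost at 0% accuracy, none at 100%.

    weights   -- list of blend-shape activations, each in [0, 1]
    accuracy  -- learner's recent recognition accuracy, in [0, 1]
    max_boost -- scaling factor applied to a complete beginner
    """
    factor = 1.0 + (max_boost - 1.0) * (1.0 - accuracy)
    # Clamp so exaggerated weights remain valid activations.
    return [min(1.0, w * factor) for w in weights]

# A beginner (30% accuracy) sees expressions boosted by 1.35x;
# an expert (100% accuracy) sees them unmodified.
print(exaggerate([0.2, 0.6], accuracy=0.3))
print(exaggerate([0.2, 0.6], accuracy=1.0))
```

In a real trainer the accuracy input would come from the student's recent answers, so the display backs off automatically as identification improves, which is exactly the "slowly reduce the exaggeration" loop described above.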