“Believe nothing you hear, and only one half that you see.” Edgar Allan Poe wrote those words over a century ago, yet if he were alive today he might opt for the darker: “Believe nothing you hear and nothing you see.” Over the past decade, advances in graphics technology have given visual effects artists the ability to create fantastical new worlds on film and to populate those worlds with people, all with an astounding degree of realism.
Of particular interest in this post is the ability of this technology to create realistic digital replicas of actors that can be manipulated like puppets to deliver new cinematic performances without the need for any further input from the actor—such as when the late Peter Cushing was digitally recreated in order to reprise the character of Grand Moff Tarkin in Rogue One: A Star Wars Story.
Can you tell the real person from the digital replica?
This technology has raised a host of legal issues regarding how best to protect an actor (or his or her estate) against unauthorized use of digital replicas. For celebrities, the solution has been to use a combination of right of publicity statutes and agreements to control the commercial use of their name, image and/or likeness.
However, this protective framework was developed in an industry where the ability to create a realistic digital replica is currently limited to a select number of players and requires an immense amount of resources, money, and time to do properly. This won’t always be the case, though, and protecting oneself from becoming a digital puppet won’t be a problem limited solely to the ranks of professional actors.
New developments in software technology have the potential to make such digital puppetry accessible to the masses. For starters, researchers at Stanford University are currently working on real-time face capture and reenactment software (called Face2Face) that allows a user to alter the facial expressions of a person in a previously recorded video in order to mimic the user’s own expressions in real-time.
Additionally, Adobe has been working on software that would act like Photoshop for audio (called Project VoCo) and could allow users to create new audio dialog that mimics a person’s voice.
Taking those two pieces of technology together, it is easy to see how in a few years you could be watching a YouTube video of yourself saying things you never said and doing things you never did—it may be your face and your voice, but they are no longer controlled by you. In fact, the Radiolab podcast has already used some of this technology to create a fake video of President Obama as an illustration of the power (and ethical implications) of this technology.
So what legal recourse do you have to ensure that you don’t become a digital puppet? Well, the solution may be found in the way that the courts have dealt with memes—which Tim Rawson previously discussed in more depth on this blog. In short, for objectionable uses that are commercial in nature, the strongest option may be to rely primarily upon a right of publicity claim.
And if the use is not commercial in nature, then the stronger options may instead be a copyright claim on the source (reference) material or a traditional tort claim (e.g., defamation and/or false light). However, memes and digital replicas differ in one important aspect: digital replicas are designed to fool the viewer into thinking that the filmed subject actually did the things shown in the video.
Thus, for a false light or defamation claim, it may be easier for a plaintiff to assert that the inherent realism of using a digital replica supports two arguments that are not as readily available to meme-based claims:
1) that an altered video featuring a digital replica of the plaintiff conveys a falsehood (i.e., that the plaintiff actually did or said what is depicted in the video) that is intended to be believed by an audience, and
2) that use of such technology amounts to a reckless disregard for the truth if not placed in the proper context (e.g., with proper disclaimers that the original video has been altered or features a digital replica).
However, if such technology continues to spread, it is likely that the hurdles to controlling one’s likeness will only become higher. In the end, netizens will simply have to be skeptical of all that they see and hear on the internet.