This post is also available in: Español
The uses of artificial intelligence are evolving as fast as the technology itself. Unsurprisingly, deep learning, a machine learning technique initially conceived for legitimate purposes such as fraud detection or financial analysis, has begun to be used to forge content. We are talking about deepfakes.
A deepfake is the result of using artificial intelligence to generate human appearance and voice. With the help of software, this technique can learn voice patterns and gestures and simulate events that never occurred. Statements by politicians or compromising situations involving public figures tend to be the scenes recreated.
We are undoubtedly witnessing the evolution of fake news, which not only reports false information, but also backs up this ‘news’ with apparently real audiovisual proof.
Various techniques and technologies are being tested to detect fake content, including (i) the use of algorithms similar to those employed by deepfakers themselves, and (ii) blockchain. The algorithms aim to detect deviations or inaccuracies that do not conform to previously learned patterns; essentially, they look for errors that would not occur in a genuine image or video. There is no denying their effectiveness, although critics note that they also drive the improvement of deepfake algorithms, canceling out the gains made in tracking them. The second method, in contrast, focuses not so much on the potentially fake content as on its origin. It works to prevent the dissemination of deepfakes by blocking publication of content that cannot be traced to a secure source.
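The provenance approach can be illustrated with a minimal sketch. This is not any specific platform's implementation; it simply assumes a registry of cryptographic fingerprints published by verified sources (which, in a blockchain scheme, would live in an immutable on-chain record). Content whose fingerprint is not in the registry cannot be traced to a secure source. The registry contents and function names here are hypothetical.

```python
import hashlib

# Hypothetical registry of content fingerprints published by verified sources.
# In a blockchain-based scheme this would be an immutable on-chain record.
verified_hashes = {
    hashlib.sha256(b"original broadcast footage").hexdigest(),
}

def is_from_verified_source(content: bytes) -> bool:
    """Return True only if the content's fingerprint matches one
    registered by a trusted source."""
    return hashlib.sha256(content).hexdigest() in verified_hashes

print(is_from_verified_source(b"original broadcast footage"))  # True
print(is_from_verified_source(b"manipulated footage"))         # False
```

Note that this checks only origin, not truthfulness: a manipulated file fails the check because its fingerprint was never registered, regardless of how convincing the content looks.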
To incorporate these methods into the system of content supply and demand, the participation of information providers is vital, for example: (i) including in their platforms' terms and conditions specific prohibitions, penalties, and similar remedies regarding the use of deepfakes, which would justify their removal; (ii) setting up channels to report such content; (iii) implementing mechanisms for analyzing content before publication; and (iv) applying those analytical mechanisms to previously published content so that it may be removed if necessary. Such a move would certainly be advisable because criminalizing, or placing legal conditions on, the creation and dissemination of deepfakes would require the collaboration of information society service providers if they are to avoid sanctions under the liability regime established by Spanish Act 34/2002, of June 11, on information society services and e-commerce.
Beyond exposing and banning deepfakes, let us consider the rights and interests they infringe.
From the perspective of the person affected, whether featured in or in any way related to a deepfake, legislation in Spain and elsewhere provides for image rights. Article 7, paragraphs six and seven, of Act 1/1982, of May 5, regarding civil protection of the right to honor, personal and family privacy, and one's own image ("Act 1/1982") lists the following, among others, as illegitimate invasions of privacy: "the use of the name, voice or image of a person for advertising, commercial or similar purposes," and "the dissemination of expressions or facts concerning a person when these defame or discredit that person in the eyes of others." These acts could also constitute a criminal offense if the defamation or fraud is sufficiently serious.
From the perspective of public order and the public interest, it is difficult to ignore how this new type of fraud can promote violence, exacerbate social conflict, or even influence the voting intentions of a certain segment of the population. We are talking about deepfakes as a vehicle for intimidation and for the entrenchment of erroneous gender roles or political ideas. This would certainly take us into the realm of criminal and administrative offenses.
Some US states are already passing laws that criminalize or ban deepfakes. Time will tell how Europe reacts. However, as a bare minimum, consumers of content must be made aware of the existence, falseness and risk of deepfaking, to reduce its personal, social and economic impact.
Finally, let us not forget that applying artificial intelligence to generate human appearance and voice also has legitimate uses, so suspending its development would be unwise. We must, however, be able to redirect its use away from the violation of people's rights and the manipulation of reality to the detriment of recognized fundamental freedoms.