Human rights and algorithms


On April 8, 2020, the Council of Europe issued its Recommendation on the human rights impacts of algorithmic systems. It comes amid the crisis triggered by COVID-19, in which States have stepped up their use of algorithmic systems for greater accuracy, enhanced diagnostics, and research into treatments and vaccines, and it also considers the potential of algorithmic systems to drive innovation and economic development in other fields, such as communication, education and transportation.

The Committee of Ministers acknowledges the impact of algorithms, positive or negative, on individual rights and freedoms. Therefore, it recalls the commitment of member States to secure the rights and freedoms enshrined in the Convention for the Protection of Human Rights and Fundamental Freedoms, adopted by the Council of Europe in 1950, in force since 1953, and signed by Spain on November 24, 1977. Regardless of technological advancement, member States must ensure that any design and development of algorithmic systems, whether from the public or private sector, abides by this commitment.

The Council of Europe defines “algorithmic systems” as applications that often use mathematical optimization techniques to perform one or more tasks such as collecting, combining, cleaning, sorting, classifying and inferring data, based on which the systems can select, prioritize and make recommendations or decisions. These algorithmic systems automate activities, making it possible to create adaptive services at scale and in real time.

Algorithmic systems offer many advantages, such as enhancing service performance through increased precision and consistency, providing new solutions, and increasing the efficiency and effectiveness of task performance. These advantages have enabled advances in fields such as medical diagnostics, transportation and global cooperation. However, algorithmic systems also pose risks and challenges for member States in areas such as the right to a fair trial; privacy and data protection; freedom of thought, conscience and religion; freedom of expression; and equal treatment. The Council of Europe highlights the negative impact of algorithmic systems due to (i) large-scale data processing; (ii) errors in the form of false positives and false negatives, which sometimes reinforce discrimination or stereotypes; (iii) inaccurate simulations and the adoption of general rules that do not take all information into account; and (iv) the lack of transparency in optimization criteria and techniques. Moreover, algorithmic systems sometimes prioritize certain values over others in a non-transparent or uncertain manner.

Within this context, the Council of Europe issued a set of guidelines for member States and private players. Below is an overview of the main recommendations, which affect both States and private entities to varying degrees:

  • Reviewing the legislative frameworks and policies applicable to the design, development and ongoing deployment of algorithmic systems so that they (i) comply with the Council of Europe guidelines, and (ii) are transparent, proactive and inclusive; as well as promoting the implementation of these frameworks and policies, assessing their effectiveness and ensuring compliance across all sectors.
  • Ensuring that algorithmic systems comply with the applicable legislation and human rights law through legislative, regulatory and supervisory mechanisms.
  • Engaging in regular dialogue and cooperation with all public and private stakeholders.
  • Assessing regularly and continuously (due diligence), throughout the entire life cycle of algorithmic systems, (i) the impact on human rights of data processing through these systems; (ii) any potential discriminatory effects or biases (based on gender, race, religion, political opinion or social origin); (iii) their possible uses; and (iv) the risk of re-identifying individuals. The relevant public or private players must also take appropriate measures to prevent or mitigate these effects.
  • Providing the competent national authorities with the necessary materials and resources to investigate and oversee compliance with the applicable legislation in line with these recommendations.
  • Facilitating the development of safe infrastructure alternatives that ensure high-quality data processing.
  • Ensuring that (i) testing and experimenting only occur after evaluating algorithmic systems from a human rights perspective and considering all factors, contexts and potential uses; (ii) all necessary safety and privacy guarantees and measures are applied through certification schemes and standards from the design stage, and (iii) the necessary datasets are used to prevent discrimination or unrepresentative results.
  • Establishing high levels of transparency and disclosure so that individuals can understand algorithmic systems: their functioning, behavior, decision-making, impact and risks. The aim is for data subjects to learn (i) their rights; (ii) how to request rectification; and (iii) how to use these technologies appropriately, enjoying their benefits while minimizing risk exposure.
  • Taking the necessary measures to prevent the risks attached to algorithmic systems. For example, adopting standards and shared guidelines applicable to all market players, or having trained staff assess the impact of these systems on human rights to promptly remedy any violations and their potential effects.
  • Using incentives to promote the development of algorithmic systems that encourage compliance with human rights and with internationally agreed social and environmental goals.
  • Supporting, performing and publishing studies that monitor the implications, effects and benefits of algorithmic systems on human rights in an independent and impartial manner.

These guidelines are consistent with the European Commission’s approach both in the Communication on a European strategy for data, published on February 19, 2020, and in the White Paper on AI, discussed in this blog entry.

The massive use of new algorithm-based technology, software and big data processing, with increasingly close public-private cooperation, is at its peak and accelerating, especially at a time when public and private players race against the clock to contain the pandemic. Hopefully, the rush will not irreversibly affect human rights and individual freedoms.

By Sergi Gálvez and Adaya Esteban
