
Artificial Intelligence and Policy-Making Developments – Council of Europe’s CAHAI

Athina Fragkouli

5 min read


In the first of a series of articles exploring how policy-making institutions are reacting to the challenges posed by Artificial Intelligence (AI), we look at how the Council of Europe is examining the feasibility of a legal framework for the development, design and application of AI.


Over the last couple of years, policy-making institutions have been putting greater focus on the study of various aspects of Artificial Intelligence (AI). This doesn't come as a surprise. The use of AI is expanding, becoming so dominant as to affect not only our everyday lives but also the global economy. Accordingly, there's a debate going on about the benefits and risks of AI.

AI systems analyse huge amounts of (personal) data and predict behaviours. Such systems may also produce legally binding decisions based on algorithms, the logic of which is not necessarily explainable. Inevitably, the trustworthiness and accountability of AI systems come into question.

The main concern is whether the existing legal framework is adequate or whether there is a need for specific legislation or policies. Policy-making institutions want to create an environment in which there is legal protection for users and citizens, and legal certainty for the parties that create and apply AI systems.

This discussion is particularly important for network operators. Data related to IP addresses, routing data or RPKI data may be used to profile and predict behaviours. AI applications are being developed at the network level so that networks become more autonomous and automated, for example in detecting IP hijacking. Given the tendency of some governments to regulate parts of the Internet at the local level, we may well see national policies in the future that affect routing decisions based on AI predictions, which network operators will have to comply with. It's important to ensure that any such AI systems, and any national regulation based on them, do not cause mismatches that lead to fragmentation and thus risk the integrity of the global Internet.
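To give a flavour of what "automated detection" at the network level can look like, here is a deliberately simplified, hypothetical sketch: it flags BGP announcements whose origin AS deviates from a prefix's historical baseline. All prefixes, AS numbers and thresholds are made up for illustration, and this is not how any real (or AI-based) hijack-detection system referenced above actually works.

```python
# Hypothetical, minimal sketch of anomaly-based origin checking for BGP announcements.
# Real detection systems are far more sophisticated; this only illustrates the idea
# of learning a baseline from routing data and flagging deviations.

from collections import Counter, defaultdict

# Baseline: how often each origin AS has announced each prefix historically
history = defaultdict(Counter)

def observe(prefix: str, origin_asn: int) -> None:
    """Record a historical (prefix, origin AS) observation."""
    history[prefix][origin_asn] += 1

def origin_anomaly_score(prefix: str, origin_asn: int) -> float:
    """Score a new announcement: 0.0 = expected origin, 1.0 = never-seen origin."""
    seen = history[prefix]
    total = sum(seen.values())
    if total == 0:
        return 1.0  # no baseline at all: maximally suspicious
    return 1.0 - (seen[origin_asn] / total)

# Build a toy baseline, then score an announcement from an unexpected origin
for _ in range(100):
    observe("192.0.2.0/24", 64500)       # usual origin (documentation ASN)
observe("192.0.2.0/24", 64501)            # rare but previously seen origin

score = origin_anomaly_score("192.0.2.0/24", 64999)  # never-seen origin
print(f"anomaly score: {score:.2f}")      # 1.00 -> worth investigating
```

Even this toy example shows why policy matters here: what counts as "anomalous", and what action follows from a high score, are design decisions with real consequences for how traffic is routed.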

The RIPE NCC is observing these discussions with great interest. Currently we are following discussions taking place in the Council of Europe, the OECD and the EU. The goal is to raise awareness in the technical community so that it gets involved at an early stage and helps identify the impact of developments in this area. It is important that policy-making decisions are informed by appropriate technical input, and policy makers are making efforts to gather it: we see them including the technical community in their dialogue and launching public consultations on the matter.

The Council of Europe

Our focus in this first article is the work performed by the Council of Europe. The Council of Europe should not be confused with the Council of the European Union. In fact, the Council of Europe has far more member states than the European Union (47 and 27 respectively). Although the Council of Europe cannot make binding laws, it does create legal instruments that serve as guidelines for governments developing relevant legislation. One such legal instrument, for example, is the Convention on Cybercrime (also known as the Budapest Convention), which provides guidelines for national legislation against cybercrime and a framework for international cooperation between the state parties to the treaty.

We decided to start with the Council of Europe for two reasons. First, because the RIPE NCC's collaboration with the Council of Europe was made official on 6 February 2020, when the RIPE NCC joined a partnership with the Council of Europe to develop recommendations for its member states on the development, use and regulation of digital technologies. As part of this partnership, the RIPE NCC has been able to attend Council of Europe meetings that focus on AI. Second, and more practically, the Council of Europe has launched a "multistakeholder consultation" based on a questionnaire available on a page dedicated to the consultation.

The Council of Europe Initiative on AI

In 2019, the Council of Europe set up the Ad Hoc Committee on Artificial Intelligence (CAHAI). The aim of CAHAI is to:

“...examine the feasibility and potential elements on the basis of broad multi-stakeholder consultations, of a legal framework for the development, design and application of artificial intelligence, based on Council of Europe’s standards on human rights, democracy and the rule of law” (source)

As a first step, CAHAI conducted a feasibility study examining why the new challenges posed by AI make an adequate legal framework necessary to protect human rights, democracy and the rule of law.

In 2021, CAHAI is considering the main elements of such a framework, based on the Council of Europe's standards on human rights, democracy and the rule of law. These elements include the values and principles on which the design, development and application of AI should be based, the areas where more safeguards are needed, and the kinds of policies and solutions that need to be adopted for AI systems to respect the Council of Europe's values.

For a decision to be made on these elements, CAHAI has stressed the importance of a broad debate and has decided to launch a multi-stakeholder consultation. Representative institutional actors (not single individuals), such as government representatives and public administrations, international organisations, business, civil society, academia and the technical community, are invited to participate by filling out a questionnaire (available here). The deadline for the public consultation is 29 April.

RIPE Community Engagement

We encourage anyone from the RIPE community with an interest or experience in this field to share their thoughts, either by responding to the public consultation directly or by sharing them with the RIPE NCC so that we can submit a response accordingly.

As stated above, this is the first in a series of articles in which we'll seek to further explain the work of policy-making institutions on AI and keep the community updated on relevant discussions on a regular basis.



About the author

I am the Chief Legal Officer at the RIPE NCC, responsible for all legal aspects of the organisation.
