Ethics and Artificial Intelligence – The 5 Key Points

The House of Lords Select Committee on Artificial Intelligence (“Lords”) has published a report that puts ethics at the centre of the use and development of Artificial Intelligence (“AI”) within the UK.

AI continues to develop in the UK, driven by the growth of available data, increased computer-processing power and improved techniques such as deep learning. The Lords has concluded that, whilst the UK should realise and harness the potential benefits of AI, its potential threats and risks also need to be minimised.

In order to achieve this balance, the Lords has set out an ethical framework to guide the development and application of AI.

The five-point framework

Each point of the framework considers the social impact of AI on the public.

The five main principles of the framework are as follows:

The development of AI should focus on the “common good of humanity”, with broad implementation in the public sector;

AI should operate on principles of “intelligibility and fairness”. This can be supported by encouraging greater diversity in the training and recruitment of AI specialists, and by developing new approaches to auditing datasets so that past prejudices are not built into automated systems;

The data rights and privacy of individuals, families and communities should be upheld. This can be achieved by using established concepts, such as open data and data protection legislation, as well as new mechanisms, such as data portability and data trusts. Transparency in automated decision-making is particularly important. Industry and experts should establish a UK AI Council, which should include a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions;

AI education should be the right of all citizens, with significant government investment in skills and training, to mitigate the social impact of technology on human employment. The national curriculum at UK schools should incorporate AI into children’s education;

AI should not have the “autonomous power to hurt, destroy or deceive human beings”. When the Cabinet Office’s Cyber Security Science and Technology Strategy is finalised, it should thoroughly consider the role of AI, with particular regard to data sabotage.

How does this fit in with the current UK legal system?

The report considers how far the UK legal system will need to adapt in order to regulate the risks of AI. It recommends that the Competition and Markets Authority review “the use and potential monopolisation of data by big technology companies operating in the UK” with a view to ensuring a fair marketplace for AI products, services and use of data. The report also recommends that the Law Commission investigate whether existing liability law will be sufficient when AI systems malfunction or cause harm to users.

Addressing criticisms

The UK’s pre-eminence in the global development of AI has been widely discussed in recent months. The UK is home to leading AI companies, world-leading research establishments and a trusted legal and regulatory system.

However, there has been criticism of the lack of specific resources available to implement the five-point framework. Doubts have also been raised about the UK’s bargaining power and ability to enforce data protection legislation post-Brexit. The Government should work closely with regulators such as the Information Commissioner’s Office to keep the social impact of AI at the forefront of future policy.
