The UK government has published a framework for the use of automated decision-making systems in the public sector, setting out how to deploy the technology safely, sustainably and ethically.
Jointly developed by the Cabinet Office, the Central Digital and Data Office (CDDO) and the Office for Artificial Intelligence, the Ethics, Transparency and Accountability Framework for Automated Decision-Making is designed to improve general literacy across government around the use of automated or algorithmic decision-making, and is intended for use by all civil servants.
The framework applies both to automated decision-making, where no human judgment is involved, and to automated assisted decision-making, and sets out best practice for each.
It consists of seven core principles, which include testing the system to avoid unintended consequences, delivering fair services to all users and citizens, being clear about who is responsible for operating the system, and handling data safely in a way that protects citizens. For each principle, the framework contains practical steps on how to achieve it.
For example, to test a system for unintended consequences, the framework recommends that organisations adopt "red team testing", which works on the assumption that "all algorithmic systems can do some harm".
The framework also emphasises the need for organisations to carry out data protection impact assessments and equality impact assessments, both of which are required for compliance with UK law.
It comes after the Government Digital Service (GDS) released an updated Data Ethics Framework in September 2020, having found that awareness of previous versions across government was "almost non-existent".
Other principles of the framework include helping citizens and users understand how the system affects them, ensuring compliance with the law, and building something that is future-proof.
"Under data protection legislation, for any fully automated process, you must provide the individual with specific information about the process," the framework states. "Process owners must also introduce simple ways for affected people to request human intervention or to challenge a decision."
"Where an automated or algorithmic system assists an accountable decision, it should be possible to explain in plain English how the system reached, or suggested, that decision."
The framework also explicitly acknowledges that "algorithms are not the solution to every policy problem", and that public authorities should consider whether the use of automated systems is appropriate in a particular context before proceeding with deployment.
"Scrutiny must be applied to all automated and algorithmic decision-making. It should not be relied on as a solution to the most complex and difficult problems, because of the high risk involved," the framework says, adding that the risks associated with automated decision-making systems depend heavily on the policy area and the context in which they are used.
"Senior owners should perform a thorough risk assessment, considering all options, and must be confident that the policy intent, specification or outcome is best achieved by an automated or algorithmic decision-making system," it says.
It adds that if a public body works with a third party, that third party must also adhere to the framework, and early engagement is required to ensure this is built into commercial arrangements.
Although the framework includes examples of how both fully and partially automated decision-making systems are used in the workplace, for instance to determine the amount an employee is paid, the principles themselves do not directly address the impact of such systems on workplace dynamics.
"The new framework is a major step forward in providing clear guidance on the use of automated decision-making. We very much welcome the clear statement that algorithms are not the answer to every question, especially given the growth of digital surveillance and HR management," said Andrew Pakes, research director at Prospect, a union representing science, technology and other professionals.
"With the rise of AI [artificial intelligence] and people analytics software during Covid, it is very disappointing that the framework does not recognise the need to consult and involve workers in deciding how technology will affect us. If government wants to build trust in how technology is used in the workplace, it needs much better rules on how technology is used in the workplace."
In November 2020, the review of algorithmic bias published by the Centre for Data Ethics and Innovation (CDEI) said the UK government should oblige public authorities to be more transparent about their use of algorithms in making "life-affecting" decisions about individuals.
"Government should conduct a project to more precisely define the scope of this obligation and pilot an approach to implementing it, but it should require the proactive publication of information on how the decision to use an algorithm was made, the type of algorithm and how it is used, and the steps taken to ensure the fair treatment of individuals within the overall decision-making process," it said.
"The Cabinet Office and Crown Commercial Service should update model and framework contracts for public sector procurement to incorporate a set of minimum standards for the ethical use of AI, with a particular focus on expected levels of transparency, and continuous testing for accountability and fairness."
It added that transparency-related publications must be easy to find, understand and use, and that using them for narrow communication purposes, or to deliberately manipulate an audience, must be strictly avoided.
The CDEI's review of algorithmic bias is cited in multiple parts of the framework as a relevant resource that public authorities should consider when deploying automated decision-making systems.
"Actions, processes and data can be made visible by publishing information about the project in a complete, open, understandable, easily accessible and free format," the framework says.
Source: UK government publishes framework on automated decision-making, Computer Weekly — https://www.computerweekly.com/news/252501088/UK-government-publishes-framework-on-automated-decision-making