Specifying algorithmic responsibility
Abstract
In seeking to specify algorithmic responsibility, the aim is to classify protective actions against the impacts of artificial intelligence. The article describes the problems caused by artificial intelligence and reviews evaluation models and their components in order to guide best practices and methods for specifying the algorithmic footprint. The analysis of four evaluation models shows that the strongest are those concerned with risk and legal responsibility. Good evaluation practice seeks quantitative expressions of qualitative aspects, although the conclusions warn of the difficulty of building standardized formulas. The metrics behind these quantitative expressions must incorporate weights, based on the number of areas affected, and grade severity into four levels of impact, risk or damage. This permits a reciprocal mapping onto four protective actions: prohibiting some systems, ensuring damage repair, promoting impact mitigation, and establishing risk prevention.
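The abstract does not publish a formula, only its ingredients: a weight derived from the number of affected areas, four severity levels, and four reciprocal protective actions. As a hedged illustration of how those pieces could fit together, the sketch below scores a system's algorithmic footprint and pairs each severity level with one action. All names, the multiplicative weighting, and the one-to-one pairing of levels to actions are assumptions for illustration, not the article's method.

```python
from enum import Enum


class Action(Enum):
    """The four protective actions named in the abstract,
    ordered here from least to most severe (an assumption)."""
    RISK_PREVENTION = 1
    IMPACT_MITIGATION = 2
    DAMAGE_REPAIR = 3
    PROHIBITION = 4


def algorithmic_footprint(severity: int, affected_areas: int) -> int:
    """Hypothetical metric: severity (one of the four levels, 1-4)
    weighted by the number of areas the system affects.
    The multiplicative form is an illustrative assumption."""
    if not 1 <= severity <= 4:
        raise ValueError("severity must be one of the four levels (1-4)")
    if affected_areas < 1:
        raise ValueError("a system must affect at least one area")
    return severity * affected_areas


def protective_action(severity: int) -> Action:
    """Map each severity level to a protective action, reading the
    abstract's 'reciprocity' as a one-to-one pairing (an assumption)."""
    return Action(severity)


# Example: a level-3 (damage-causing) system affecting two areas.
print(algorithmic_footprint(3, 2))   # 6
print(protective_action(3))          # Action.DAMAGE_REPAIR
```

A one-to-one pairing is only the simplest reading of reciprocity; a real assessment would more plausibly compare the weighted footprint score against thresholds, which is exactly where the conclusions' warning about standardized formulas would bite.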