Input Talks

Deployment Stage - Input Talks with a Focus on Governance

Drawn graphic showing different people exchanging ideas, symbolised by speech bubbles filled with symbols such as light bulbs and scales.
© Ezequiel Hyon

Thu, 29.09.2022 9:30 AM - 11:00 AM

Online



Schedule


09:30 Giovanni De Gregorio (University of Oxford): European Values in the Artificial Intelligence Act
The proposal for the European Artificial Intelligence Act has been welcomed as a fundamental step towards "shaping the digital future of Europe". However, many questions still surround the proposal and its ability to provide a clear legal framework that ensures not only the objectives of the single market but also the protection of European values, such as the rule of law and fundamental rights. The Artificial Intelligence Act would represent another step in European digital constitutionalism, which has led the Union to move from a neoliberal approach to a constitutional democratic strategy. The proposed regulation may offer an alternative, not only for Europe, to illiberal and neoliberal approaches to artificial intelligence technologies. This approach would constitute a European third way based on the enhancement of humankind and fundamental rights. However, the Artificial Intelligence Act does not reflect the human-centric approach suggested, among others, by the High-Level Expert Group on AI. Unlike the Digital Services Act or the GDPR, the proposal does not provide a system of individual safeguards and remedies. The presence of algorithmic technology in the technological future of Europe raises the question of whether the regulation is part of the Union's path in the era of digital capitalism.


09:50 Christoph Benzmüller (University of Bamberg): Trusted AI through Ethico-Legal Governors?   
There are critical applications where the naïve use of modern AI technology could cause significant harm. Pleas for transparency and explanation often do not offer convincing solutions in such critical application contexts due to their post-hoc character, as they are hardly suitable to prevent disasters in the first place; they might even be understood as an invitation to adversarial attacks. However, it is in society's interest to invest in preventive measures.
The development of intelligent artificial agents with genuine moral competencies could be seen as an alternative, but I strongly doubt that we have made significant progress in this direction in recent years. In fact, recent research seems to focus primarily on mimicking human competencies (learned from large data sets) rather than exploring and demonstrating how moral competencies or consciousness of their own might emerge in intelligent artificial agents. The potential bias and fragility associated with merely mimicking moral competencies leave me concerned about the prospects of such an option, especially in applications of the highest criticality.
I therefore share the opinion of several colleagues who argue for the development of ethico-legal governors that evaluate, justify, and legitimise decisions for critical actions by an intelligent artificial agent before an action execution is granted. In this context, the ethico-legal constraints are specified by humans, formulated declaratively in symbolic language, and implemented top-down. The envisioned governor technology requires the development of explicit, value-sensitive ethico-legal reasoning capabilities in intelligent artificial agents so that, for example, verification of the agent's compliance with these ethico-legal constraints can be reliably performed; however, this approach does not require nor assume that the agents themselves become moral entities.
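The governor architecture described above can be sketched in a few lines of code. This is a deliberately minimal, hypothetical illustration (class and constraint names are invented, not taken from any existing system): human-specified constraints are held declaratively, and every proposed action must pass all of them before execution is granted.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of an ethico-legal governor: constraints are
# human-specified, declarative predicates over proposed actions, and the
# governor must approve an action before the agent may execute it.

@dataclass
class Action:
    name: str
    context: dict = field(default_factory=dict)

# A constraint pairs a human-readable name with a predicate on the action.
Constraint = tuple[str, Callable[[Action], bool]]

class EthicoLegalGovernor:
    def __init__(self, constraints: list[Constraint]):
        self.constraints = constraints

    def evaluate(self, action: Action) -> tuple[bool, list[str]]:
        """Return (permitted, names of violated constraints)."""
        violated = [name for name, holds in self.constraints
                    if not holds(action)]
        return (len(violated) == 0, violated)

# Example top-down constraints (labels and rules invented for illustration).
constraints: list[Constraint] = [
    ("no_harm_to_humans",
     lambda a: not a.context.get("risk_of_injury", False)),
    ("requires_consent",
     lambda a: a.context.get("consent_given", True)),
]

governor = EthicoLegalGovernor(constraints)
permitted, reasons = governor.evaluate(
    Action("administer_dose", {"risk_of_injury": True}))
# permitted is False; reasons lists the violated constraint
```

Because the constraints are explicit and symbolic rather than learned, each refusal comes with the names of the violated rules, so a decision can be justified and audited after the fact.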
My team's ongoing research therefore focuses on providing flexible and expressive symbolic means for representing and arguing with normative theories. To this end, we have developed the LogiKEy formal framework, methodology, and associated tool support. LogiKEy supports the design and development of ethical reasoners, normative theories, and deontic logics in a highly flexible manner, and also provides a fruitful link between different research communities, including knowledge representation and reasoning in AI, the deduction systems community, and formal ethics. In particular, LogiKEy enables the application of interactive and automated theorem proving techniques for classical higher-order logic (HOL) in ethico-legal reasoning.
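As a simplified sketch of how such an embedding of a deontic logic into HOL can look, the obligation operator of standard deontic logic can be encoded via its Kripke semantics: formulas become predicates on possible worlds, and "it is obligatory that $\varphi$" holds at a world $w$ iff $\varphi$ holds at every world deontically accessible from $w$ (notation here is illustrative, not LogiKEy's actual syntax):

\[
\lfloor \mathbf{O}\,\varphi \rfloor \;=\; \lambda w.\; \forall v.\, \big( R\, w\, v \rightarrow \varphi\, v \big)
\]

where $R$ is the deontic accessibility relation, and a formula is valid iff it holds at all worlds. Encodings in this style let off-the-shelf HOL theorem provers and model finders check, for instance, whether an agent's intended action is consistent with a given normative theory.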


10:10 Sarah Spiekermann (Vienna University of Economics and Business): Value-based Engineering with IEEE 7000
Value-based Engineering (VbE) is a new and highly practical approach to human-centred engineering. It makes organisations aware of the ethical challenges in their IT systems and allows them to change the narrative of their IT innovations towards more social wellbeing. It provides a structured and transparent method to ensure that technical units work towards stakeholder value. The core of VbE is standardised in the IEEE 7000™ Model Process for Addressing Ethical Concerns During System Design, forthcoming as ISO/IEC/IEEE 24748-7000. VbE embeds the majority of this standard's best-practice activities, tasks, concepts, definitions and recommendations.
This talk will give an overview of the IEEE 7000 standard, its process phases, its value ontology and terminology, and how it succeeds in guiding companies to build better IT systems.


10:30 Q&A