With the rapid development toward automation, future reliance on artificial intelligence (AI) for everyday tasks is clear. Often embedded within these tasks are small moral decisions: for example, is violating a minor traffic law justified when it saves the time of others? While humans take these small ethical decisions for granted, society must properly equip AI products with moral compasses if we are to entrust machines with even small daily tasks. Furthermore, confidence in an AI’s ability to make sensible moral decisions is key to winning public acceptance of such systems. Public acceptance of AI as a responsible moral agent is one of the greatest obstacles facing automation and machine learning. Bigman and Gray highlight that, across multiple studies, people have shown a distinct aversion to entrusting machines with ethical decisions, despite the fact that AI has demonstrated superior judgement to humans in certain domains. Other research and surveys indicate that a person’s previous exposure to machine-made decisions plays a crucial role in their confidence in ethical AI. Formulating and demonstrating an easily applicable approach to programming moral agents is therefore a first step in earning public trust in this domain.

Incorporating moral sensibility into machines remains challenging, as it is difficult to derive a quantitative model for objectively determining moral decisions. Current research in AI moral decision making often theorizes abstract and general approaches to training moral agents. For example, Shaw et al. propose a machine learning framework in which a group of statistically trained models determines a moral action based on each individual model’s decision and the confidence each model has in the morality of the other models. Still, reducing complex moral scenarios to a form that such a framework can easily digest is far from straightforward.

As with many problems, researchers can find inspiration in human cognitive abilities, including moral determination. English philosopher Jeremy Bentham theorized that, when faced with ethical dilemmas, individuals choose actions that yield the greatest social utility. Research in universal moral grammar has supported this notion, additionally noting that the moral value of a decision also depends on the context and the actions an agent must take within that decision, and not just the net result. As such, it may be possible to model ethical decisions based upon the social utility of each option within a decision. In this paper, we investigate a deep learning based moral decision model, taking a hypothetical autonomous vehicle dilemma as an example.

One scenario relevant to ethical modeling with social utility is the imminent crash of a self-driving vehicle: in this hypothetical situation, an autonomous vehicle with a catastrophic brake failure must decide between killing one of two distinct groups of people. This scenario is a suitable starting point for discussing ethical AI decision making, as it has been investigated extensively in the Moral Machine experiment. This experiment surveyed thousands of people worldwide for their preferences in autonomous vehicle ethical dilemmas. In any given instance, a participant would be presented with an unwinnable scenario in which only one of two groups of people could survive (see Fig.). The survey aggregated answers based on region and evaluated the moral values that societies generally place on different abstract dimensions, such as age, social status, law adherence, and gender. Observing the data and attempting to translate scenarios into comparative costs based on abstract values is the first step in creating a model that can make these decisions ethically.

Assuming this underlying model, Kim et al. proposed a Hierarchical Bayesian model, which observes participants’ decisions in the Moral Machine experiment and predicts individual decisions by inferring an underlying moral value set w_i for each individual. The Hierarchical Bayesian model proves valuable in predicting decisions in the Moral Machine experiment. Its efficacy, however, relies upon the assumption that the modeled abstract values are normally distributed. Indeed, many moral values result from a linear summation of other values and will thus tend toward a normal distribution by the Central Limit Theorem. It is also possible, however, that other moral values are a more complex, non-linear function of other factors. Thus, it may not always be safe to assume an underlying normal distribution of moral values pertaining to a specific ethical dilemma. This motivates creating a model that does not assume an underlying normal distribution.
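Bentham’s social-utility rule discussed earlier can be made concrete with a small sketch: score each group as a weighted sum of its members’ features and spare the group with the higher total. The feature names and weights below are illustrative assumptions, not values taken from the Moral Machine study:

```python
# Hypothetical weights over abstract moral dimensions; these numbers are
# assumptions for illustration only, not results from the survey.
WEIGHTS = {"is_child": 1.5, "is_doctor": 0.8, "crossing_legally": 0.6}
BASE_VALUE = 1.0  # assumed utility of any human life, before adjustments

def social_utility(group):
    """Utility of sparing a group = sum of weighted member features."""
    total = 0.0
    for person in group:  # each person is a list of feature labels
        total += BASE_VALUE
        for feature in person:
            total += WEIGHTS.get(feature, 0.0)
    return total

def choose_group_to_spare(group_a, group_b):
    """Bentham-style rule: spare the option with the greater social utility."""
    return "A" if social_utility(group_a) >= social_utility(group_b) else "B"

# Two pedestrians crossing legally (one a child) vs. one jaywalking adult:
group_a = [["is_child", "crossing_legally"], ["crossing_legally"]]
group_b = [[]]
print(choose_group_to_spare(group_a, group_b))  # prints "A"
```

The point of the sketch is only the shape of the computation: a linear utility per option and an argmax over options, which is exactly the form a learned model must reproduce.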
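The hierarchical structure described above can also be sketched at a high level: a population-level distribution over abstract dimensions, an individual weight vector w_i drawn from it, and a logistic choice rule on the utility difference between the two groups. All dimension names, priors, and numbers below are assumptions for illustration, not the published specification of Kim et al.’s model:

```python
import math
import random

random.seed(0)

# Abstract dimensions from the Moral Machine survey; labels are assumed here.
DIMENSIONS = ["age", "social_status", "law_adherence", "gender"]

# Population-level prior: mean weight per dimension plus a shared spread.
# Individual value sets w_i are drawn around these means (normality assumed,
# which is exactly the assumption the surrounding text questions).
POP_MEAN = {"age": 0.9, "social_status": 0.2, "law_adherence": 0.7, "gender": 0.0}
POP_SD = 0.3

def sample_individual_weights():
    """Draw one participant's moral value set w_i from the population prior."""
    return {d: random.gauss(POP_MEAN[d], POP_SD) for d in DIMENSIONS}

def choice_probability(w, features_a, features_b):
    """P(spare group A) under a logistic model of the utility difference."""
    diff = sum(w[d] * (features_a[d] - features_b[d]) for d in DIMENSIONS)
    return 1.0 / (1.0 + math.exp(-diff))

w_i = sample_individual_weights()
scenario_a = {"age": 2.0, "social_status": 0.0, "law_adherence": 1.0, "gender": 0.0}
scenario_b = {"age": 0.0, "social_status": 1.0, "law_adherence": 0.0, "gender": 0.0}
p = choice_probability(w_i, scenario_a, scenario_b)
print(f"P(spare group A) = {p:.2f}")
```

Inference in the real model runs this generative story in reverse: given a participant’s observed choices, it infers the posterior over their w_i.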
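The Central Limit Theorem argument is easy to check numerically: summing several non-normal component values yields an approximately normal distribution. A quick stdlib-only simulation (the uniform component distribution is assumed purely for illustration):

```python
import random
import statistics

random.seed(1)

def summed_value(n_components=12):
    """A 'moral value' modeled as a linear sum of non-normal components."""
    return sum(random.uniform(-1.0, 1.0) for _ in range(n_components))

samples = [summed_value() for _ in range(20000)]
mean = statistics.fmean(samples)
sd = statistics.stdev(samples)

# For a roughly normal distribution, about 68% of samples fall within 1 SD
# of the mean, even though no single component is normally distributed.
within_one_sd = sum(abs(x - mean) < sd for x in samples) / len(samples)
print(f"mean={mean:.2f} sd={sd:.2f} within 1 SD: {within_one_sd:.1%}")
```

Replacing the linear sum with a non-linear combination of the same components breaks this convergence, which is precisely why the normality assumption can fail for more complex moral values.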