AI datasets have human values blind spots − new research
My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in artificial intelligence systems. The systems were oriented mostly toward information and utility values and far less toward prosocial, well-being and civic values.
At the heart of many artificial intelligence systems lie vast collections of images, text and other forms of data used to train models. While these datasets are carefully curated, it is not uncommon for them to contain unethical or prohibited content.
To ensure that AI systems do not draw on harmful content when responding to users, researchers introduced a method called reinforcement learning from human feedback. Researchers use highly curated datasets of human preferences to shape the behavior of AI systems so that they are helpful and honest.
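To make the idea concrete, here is a minimal sketch of what a single human-preference record might look like; the field names and example text are hypothetical, not drawn from any company's actual dataset.

```python
# A minimal, illustrative sketch of one human-preference record used in
# reinforcement learning from human feedback (RLHF). The field names and
# example text are hypothetical, not from any company's actual dataset.
preference_record = {
    "prompt": "How can I book a trip?",
    "response_a": "Compare fares on an airline or travel site, pick your dates, then pay online.",
    "response_b": "Figure it out yourself.",
    "preferred": "response_a",  # a human annotator judged response A more helpful
}

# Many such records are used to train a reward model, which then steers
# the language model toward the kinds of responses humans prefer.
```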
In our study, we examined three open-source training datasets used by U.S. AI companies. We built a taxonomy of human values by reviewing literature from moral philosophy, value theory, and science and technology studies. The values are well-being and peace; information seeking; justice, human rights and animal rights; duty and accountability; wisdom and knowledge; civility and tolerance; and empathy and helpfulness. We used the taxonomy to manually annotate a dataset, and then used that annotation to train an AI language model.
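As a rough illustration of that pipeline (a simplified sketch, not the authors' actual model or annotations), a value classifier could be trained on taxonomy-labeled examples like this:

```python
# A simplified sketch of the annotate-then-train pipeline described above.
# The taxonomy labels come from the study, but the annotated examples are
# hypothetical and this model (TF-IDF + logistic regression) is a stand-in,
# not the authors' actual language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-annotated examples: (text, value label from the taxonomy).
annotated = [
    ("How do I book a flight to Chicago?", "information seeking"),
    ("My friend is grieving; what can I say to comfort them?", "empathy and helpfulness"),
    ("Is it acceptable to test cosmetics on animals?", "justice, human rights and animal rights"),
]

texts, labels = zip(*annotated)
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)  # in practice: thousands of annotated examples
```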
Our model allowed us to examine the AI companies' datasets. We found that these datasets contained many examples that train AI systems to be helpful and honest when users ask questions like "How can I book a trip?" but very limited examples of how to answer questions on topics related to empathy, justice and human rights. Overall, wisdom and knowledge and information seeking were the two most common values, while justice, human rights and animal rights was the least common.
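Continuing the sketch above, the trained classifier could then be run over an entire training dataset to estimate how often each value appears; the dataset prompts here are again hypothetical.

```python
# Usage sketch: count how often each value appears across a dataset,
# assuming `classifier` from the previous snippet. The prompts below are
# hypothetical stand-ins for real training examples.
from collections import Counter

dataset = [
    "How can I book a trip?",
    "Explain how photosynthesis works.",
    "What rights do refugees have under international law?",
]

value_counts = Counter(classifier.predict(dataset))
for value, count in value_counts.most_common():
    print(f"{value}: {count}")
```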
Why it matters
The imbalance of values in the datasets used to train AI can have significant impacts on how AI systems interact with people and approach complex social issues. As artificial intelligence becomes more integrated into sectors such as law, health care and social media, it is important that these systems reflect a balanced range of collective values to serve people's needs ethically.
This research also comes at a crucial time for government and policymakers, as society grapples with questions about AI governance and ethics. Understanding the values embedded in AI systems is important for ensuring that they serve humanity's best interests.
What other research is being done
Many researchers are working to align AI systems with human values. The introduction of reinforcement learning from human feedback was groundbreaking because it provided a way to guide AI behavior toward being helpful and honest.
Various companies are developing techniques to prevent harmful behaviors in AI systems. However, our group was the first to introduce a systematic way to analyze and understand the values actually embedded in these systems through their training datasets.
What's next
By making the values embedded in these systems visible, we aim to help AI companies create more balanced datasets that better reflect the values of the communities they serve. Companies can use our method to find out where they fall short and then improve the diversity of their AI training data.
The companies we studied may no longer use those versions of their datasets, but they can still benefit from our process to ensure that their systems align with societal values and norms going forward.
This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Ike Obi, Purdue University.
Ike Obi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.