Ethics is an important consideration when building intelligent machines. As we (Cognitive Science & Solutions, CogSci) transition from proof-of-concept demos to products, we recognize how critical it is to address ethical issues from the beginning rather than applying band-aids late in the development cycle. Consequently, CogSci is ensuring that ethics is integrated into all relevant design and testing phases.
There are three well-known ethical challenges in the Artificial Intelligence community: protecting individual privacy, algorithmic discrimination, and autonomous decision-making. These are discussed, in turn, below.
Protecting Individual Privacy
The United States Constitution affords privacy protections to citizens, but those protections apply only to government actions. Laws governing private entities are less restrictive, and we therefore frequently see reports of companies abusing their customers' data. In fact, a network of corporations and data brokers is busy exchanging the Personally Identifiable Information (PII) of individuals who are generally unaware that their identities, likes, dislikes, and habits are being traded as a commodity.
Even without PII, it is often possible to identify individuals through statistical analysis. According to the National Academy of Sciences, 87% of the U.S. population is uniquely identified by just three data items: ZIP code, gender, and birth date. Together, these attributes yield billions of possible combinations, far more than the roughly 350 million people in the United States, so most combinations map to at most one person. Accounting for collisions (the remaining 13% of people who share all three data points with someone else), roughly 305 million people can be uniquely identified by those three attributes alone. This is why privacy experts caution us to be careful about sharing even this seemingly innocuous information: it can pick you out of an enormous crowd.
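The re-identification idea above can be made concrete with a short sketch. The snippet below is illustrative only (the dataset and field names are invented for the example); it counts what fraction of records in a table are uniquely identified by a chosen set of quasi-identifiers:

```python
from collections import Counter

def unique_fraction(records, keys):
    """Fraction of records whose quasi-identifier combination is unique."""
    combos = Counter(tuple(r[k] for k in keys) for r in records)
    unique = sum(1 for r in records if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(records)

# Toy dataset: two people collide on all three attributes.
people = [
    {"zip": "20500", "gender": "F", "dob": "1980-01-01"},
    {"zip": "20500", "gender": "F", "dob": "1980-01-01"},  # collision
    {"zip": "10001", "gender": "M", "dob": "1975-06-15"},
    {"zip": "94105", "gender": "F", "dob": "1990-12-31"},
]

print(unique_fraction(people, ("zip", "gender", "dob")))  # 0.5
```

Run against a real dataset, a result near the cited 87% would confirm how little anonymity three demographic fields actually provide.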
CogSci is aware of these challenges and while we seek to use our technology to perform similar queries and make inferences, we are sensitive to these privacy concerns.
Algorithmic Discrimination
Closely related is the potential for decision-making algorithms to discriminate against protected classes. Discriminatory decision-making is often prohibited by law, but correlations drawn by many legacy AI systems can amount to the same thing, and the practice is often hidden by the complexity of the machine. For instance, a business may claim not to discriminate by religion, gender, or race, but an algorithm designed to offer higher insurance or mortgage rates to individuals living in high-crime neighborhoods can indirectly discriminate against minority populations or other groups statistically more likely to reside in inner-city areas where crime rates are higher. If an inference engine hides how its decisions are made, there is no clear indication of discrimination or culpability despite the harm rendered.
While our platform can also make such inferences, CogSci provides diagnostic tools to discover the exact patterns that led to the conclusions drawn. This enables customers to discover whether ethical boundaries have been breached. We can provide transparency in decision-making.
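As one illustration of the kind of check such diagnostics can support (a generic sketch, not CogSci's actual tooling), the "four-fifths" rule of thumb from the EEOC's Uniform Guidelines compares approval rates across groups; a protected group's rate below 80% of the reference group's is a red flag for disparate impact:

```python
def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's.
    Values below 0.8 fail the 'four-fifths' rule of thumb."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Invented example: group A approved 8/10, group B approved 5/10.
outcomes = ([("A", True)] * 8 + [("A", False)] * 2
            + [("B", True)] * 5 + [("B", False)] * 5)
print(disparate_impact_ratio(outcomes, protected="B", reference="A"))  # 0.625
```

A ratio of 0.625 here would fail the four-fifths test even if the algorithm never saw a protected attribute directly, which is exactly the proxy-discrimination scenario described above.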
Autonomous Decision-making
Autonomous decision-making is a third well-known ethical concern and the one most applicable to Artificial General Intelligence (AGI). At the extreme, it applies to armed conflict. Just War Theory is the doctrine of restraint in the conduct of war. While machines without a human in the loop can often make faster and sometimes better decisions, the United States has ethically restrained itself from full automation: to prevent the loss of innocent life, a human always makes the life-or-death decisions.
The same principle applies to less dramatic circumstances, such as autonomous vehicles. Their primary purpose is transporting people or goods, but just as with an autonomous military weapon, these machines can also injure or kill. Ethical consideration must go into deciding how much control is removed from human hands.
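A minimal sketch of this human-in-the-loop principle (the function name and threshold value are invented for illustration): the machine acts autonomously only when its confidence clears a bar set by policy, and everything else is escalated to a person.

```python
def route_decision(confidence, threshold=0.90):
    """Decide who acts: the machine proceeds autonomously only when its
    confidence meets the policy threshold; otherwise a human decides."""
    return "machine" if confidence >= threshold else "human"

# Raising the threshold shifts control back toward the human.
print(route_decision(0.99))                  # machine
print(route_decision(0.99, threshold=1.0))   # human
```

The threshold is the ethical dial: setting it to 1.0 keeps a human in every decision, while lowering it trades oversight for speed.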
Artificial General Intelligence
Because AGI provides technological power far beyond that of conventional AI, it is not surprising that its ethical concerns transcend those of AI. The most widely acknowledged is the “Singularity”: the point at which machines can design new, better, and smarter machines, and the cycle of improvement accelerates, eventually producing machines that are not just smarter than humans but much, much smarter. What will be the impact on society when our toasters are not only smarter than us but routinely discuss Einstein’s Theory of General Relativity with the microwave oven?
While we may have decades before we face many of these AGI concerns, CogSci is committed to being proactive. We will continue to study the full spectrum of ethical concerns and explore issues that few are yet examining. We will also begin formulating our corporate processes for the ethical governance of our technology, and as we progress in these efforts, we plan to keep our followers informed and involved.
(Note: Cognitive Science & Solutions is looking for the right individual to join our team as Ethics Officer to ensure that ethics are integrated into all relevant design and testing phases. This person will report directly to the C-Suite and will be free to voice their concerns about any ethical issue or project. If you are interested in this crucial, relevant, and senior position, please reach out to us at info@cogscisol.com)