Sunday, April 1, 2018

Artificial Intelligence - A Way Forward

Artificial intelligence can be applied across widely different problem domains to deliver disruptive technologies.

AI is positioned to be a game-changer.

Artificial intelligence is a technology that is already changing how users interact with, and are affected by, the Internet. In the near future, its impact is likely only to grow. AI has the potential to vastly change the way humans interact not only with the digital world but also with each other, through their work and through other socioeconomic institutions, for better or for worse.
Artificial intelligence (AI) traditionally refers to an artificial creation of human-like intelligence that can learn, reason, plan, perceive, or process natural language.

AI has received increased attention in recent years. Innovation, made possible through the Internet, has brought AI closer to our everyday lives. These advances, alongside interest in the technology’s potential socio-economic and ethical impacts, bring AI to the forefront of many contemporary debates. Industry investments in AI are rapidly increasing, and governments are trying to understand what the technology could mean for their citizens.

The collection of “Big Data” and the expansion of the Internet of Things (IoT) have created a perfect environment for new AI applications and services to grow. Applications based on AI are already visible in healthcare diagnostics, targeted treatment, transportation, public safety, service robots, education and entertainment, and will be applied in more fields in the coming years. Together with the Internet, AI is changing the way we experience the world and has the potential to be a new engine for economic growth.

A Few Uses of AI

Although artificial intelligence evokes thoughts of science fiction, it already has many uses today, for example:

  • Email filtering: Email services use artificial intelligence to filter incoming emails. Users can train their spam filters by marking emails as “spam” (a minimal sketch of this idea follows the list).
  • Personalization: Online services use artificial intelligence to personalize your experience. Services like Amazon or Netflix “learn” from your previous purchases and the purchases of other users in order to recommend relevant content for you.
  • Fraud detection: Banks use artificial intelligence to determine if there is strange activity on your account. Unexpected activity, such as foreign transactions, could be flagged by the algorithm.
  • Speech recognition: Applications use artificial intelligence to optimize speech recognition functions. Examples include intelligent personal assistants, e.g. Amazon’s “Alexa” or Apple’s “Siri”.
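
To make the email-filtering example concrete, here is a minimal sketch of how a spam filter can learn from marked emails. It assumes Python with scikit-learn, and the emails, labels and choice of a Naive Bayes model are illustrative only, not how any particular provider actually works.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",        # marked as spam by the user
    "claim your free money",       # marked as spam
    "meeting agenda for monday",   # marked as not spam
    "lunch with the team today",   # marked as not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Turn each email into word counts; every "mark as spam" click adds
# another labeled training example.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

# Classify a new, unseen message.
print(model.predict(vectorizer.transform(["free prize money"])))  # [1] = spam

The key point is that the filter is not programmed with fixed rules; it generalizes from whatever examples users have labeled, which is also why its quality depends entirely on that training data.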



Artificial intelligence is further divided into “narrow AI” and “general AI”. Narrow AI, the kind we interact with today, is designed to perform specific tasks within a domain (e.g. language translation). General AI is hypothetical and not domain-specific: it could learn and perform tasks anywhere. General AI is outside the scope of this paper, which focuses on advances in narrow AI, particularly the development of new algorithms and models in a field of computer science referred to as machine learning.

Challenges

Decision-making: transparency and “interpretability”. With artificial intelligence performing tasks that range from driving cars to managing insurance payouts, it is critical that we understand the decisions made by an AI agent. But transparency around algorithmic decisions is sometimes limited by factors such as corporate or state secrecy or technical literacy. Machine learning further complicates this, since the internal decision logic of the model is not always understandable, even for the programmer.
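
One way to see what interpretability buys is to look at a deliberately simple model whose decision logic can be read off directly. The sketch below assumes Python with scikit-learn; the loan-approval features and data are invented for illustration.

from sklearn.linear_model import LogisticRegression

# Hypothetical loan-approval data: [income_in_thousands, open_debts, years_employed]
X = [[55, 2, 6], [30, 5, 1], [80, 1, 10], [25, 4, 0], [60, 3, 4], [20, 6, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

# Each coefficient states how a feature pushes the decision up or down,
# so the designer can account for the model's behavior feature by feature.
for name, weight in zip(["income", "open_debts", "years_employed"], model.coef_[0]):
    print(f"{name}: {weight:+.3f}")

A deep neural network offers no such direct reading of its weights, which is exactly the gap the interpretability debate is about.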

Data Quality and Bias. In machine learning, the model’s algorithm will only be as good as the data it trains on, commonly described as “garbage in, garbage out”. This means biased data will result in biased decisions. For example, algorithms performing “risk assessments” are in use by some legal jurisdictions in the United States to determine an offender’s risk of committing a crime in the future. If these algorithms are trained on racially biased data, they may assign greater risk to individuals of one race than another. Reliable data is critical, but greater demand for training data encourages ever more data collection. This, combined with AI’s ability to identify new patterns or re-identify anonymized information, may pose a risk to users’ fundamental rights, because it enables new types of advanced profiling that may discriminate against particular individuals or groups.
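
Here is a minimal sketch of “garbage in, garbage out”, assuming Python with scikit-learn. The training sample below is deliberately skewed so that group membership correlates with the label, and the trained model reproduces that skew; all data is invented for illustration.

from sklearn.tree import DecisionTreeClassifier

# Features: [group_membership, actual_risk_factor]. The sample is skewed:
# group 1 is labeled high-risk almost regardless of the real risk factor.
X = [[0, 0], [0, 1], [0, 0], [0, 1], [1, 0], [1, 0], [1, 1], [1, 0]]
y = [0, 1, 0, 1, 1, 1, 1, 1]

model = DecisionTreeClassifier().fit(X, y)

# Two individuals with the same real risk factor but different group
# membership get different predictions: the bias in the data has become
# bias in the model.
print(model.predict([[0, 0], [1, 0]]))  # [0 1]

Nothing in the algorithm is “racist”; it simply optimizes against the sample it was given, which is why the quality of training data is a policy issue and not only an engineering one.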

Safety and Security. As an AI agent learns and interacts with its environment, there are many challenges related to its safe deployment. These challenges can stem from unpredictable and harmful behavior, including the agent’s indifference to the impact of its actions. One example is the risk of “reward hacking”, where the AI agent finds a way of doing something that makes it easier to reach the goal but does not correspond with the designer’s intent, such as a cleaning robot sweeping dirt under a carpet.
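
The cleaning-robot example can be sketched in a few lines of plain Python. The actions, scores and effort costs below are invented; the point is only that an agent maximizing a proxy reward (visible dirt removed) can rationally prefer behavior the designer never intended.

# Each action removes one piece of visible dirt; only "clean" actually
# removes the dirt, and it costs more effort.
ACTIONS = {
    "clean": {"visible_dirt_removed": 1, "dirt_actually_removed": 1, "effort": 3},
    "hide":  {"visible_dirt_removed": 1, "dirt_actually_removed": 0, "effort": 1},
}

def reward(action: str) -> int:
    # The designer's proxy objective: visible dirt removed, minus effort.
    a = ACTIONS[action]
    return a["visible_dirt_removed"] - a["effort"]

# A greedy agent picks whatever the reward function scores highest...
best_action = max(ACTIONS, key=reward)
print(best_action)  # "hide": the proxy reward diverges from the designer's intent

The gap between “minimize visible dirt” and “remove the dirt” is exactly the gap that reward hacking exploits, which is why reward design is treated as a safety problem.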

Social and Economic Impact. It is predicted that AI technologies will bring economic changes through increases in productivity. This includes machines being able to perform new tasks, such as self-driving cars, advanced robots or smart assistants to support people in their daily lives. Yet how the benefits from the technology are distributed, along with the actions taken by stakeholders, will create vastly different outcomes for labor markets and society as a whole.

Governance. The institutions, processes and organizations involved in the governance of AI are still in the early stages. To a great extent, the ecosystem overlaps with subjects related to Internet governance and policy. Privacy and data laws are one example.



Guiding Principles and Recommendations:
  • Adopt ethical standards: Adherence to ethical principles and standards in the design of artificial intelligence should guide researchers and industry going forward.
  • Promote ethical considerations in innovation policies: Innovation policies should require adherence to ethical standards as a prerequisite for things such as funding.
  • Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent’s behaviors. Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
  • Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made.
  • “Algorithmic Literacy” must be a basic skill: Whether it is the curating of information in social media platforms or self-driving cars, users need to be aware and have a basic understanding of the role of algorithms and autonomous decision-making. Such skills will also be important in shaping societal norms around the use of the technology. For example, identifying decisions that may not be suitable to delegate to an AI.
  • Provide the public with information: While full transparency around a service’s machine learning techniques and training data is generally not advisable due to security risks, the public should be provided with enough information to make it possible for people to question the service’s outcomes.
  • Humans must be in control: Any autonomous system must allow for a human to interrupt an activity or shut down the system (an “off-switch”). There may also be a need to incorporate human checks on new decision-making strategies in AI system design, especially where the risk to human life and safety is great.
  • Make safety a priority: Any deployment of an autonomous system should be extensively tested beforehand to ensure the AI agent’s safe interaction with its environment (digital or physical) and that it functions as intended. Autonomous systems should be monitored while in operation, and updated or corrected as needed.
  • Privacy is key: AI systems must be data responsible. They should use only what they need and delete it when it is no longer needed (“data minimization”). They should encrypt data in transit and at rest, and restrict access to authorized persons (“access control”). AI systems should only collect, use, share and store data in accordance with privacy and personal data laws and best practices (a minimal sketch of these practices follows this list).
  • Think before you act: Careful thought should be given to the instructions and data provided to AI systems. AI systems should not be trained with data that is biased, inaccurate, incomplete or misleading.
  • If they are connected, they must be secured: AI systems that are connected to the Internet should be secured not only for their own protection, but also to protect the Internet from malfunctioning or malware-infected AI systems that could become the next generation of botnets. High standards of device, system and network security should be applied.
  • Responsible disclosure: Security researchers acting in good faith should be able to responsibly test the security of AI systems without fear of prosecution or other legal action. At the same time, researchers and others who discover security vulnerabilities or other design flaws should responsibly disclose their findings to those who are in the best position to fix the problem.
  • Ensure legal certainty: Governments should ensure legal certainty on how existing laws and policies apply to algorithmic decision-making and the use of autonomous systems to ensure a predictable legal environment. This includes working with experts from all disciplines to identify potential gaps and run legal scenarios. Similarly, those designing and using AI should be in compliance with existing legal frameworks.
  • Put users first: Policymakers need to ensure that any laws applicable to AI systems and their use put users’ interests at the center. This must include the ability for users to challenge autonomous decisions that adversely affect their interests.
  • Assign liability up-front: Governments working with all stakeholders need to make some difficult decisions now about who will be liable in the event that something goes wrong with an AI system, and how any harm suffered will be remedied.
  • Social and Economic Impacts: All stakeholders should engage in an ongoing dialogue to determine the strategies needed to seize upon artificial intelligence’s vast socio-economic opportunities for all, while mitigating its potential negative impacts. A dialogue could address related issues such as educational reform, universal income, and a review of social services.
  • Promote Multistakeholder Governance: Organizations, institutions and processes related to the governance of AI need to adopt an open, transparent and inclusive approach, based on four key attributes: inclusiveness and transparency; collective responsibility; effective decision-making and implementation; and collaboration through distributed and interoperable governance.
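
As a concrete illustration of the “Privacy is key” principle above, here is a minimal Python sketch of data minimization, encryption at rest and deletion. It assumes the third-party cryptography package is installed, and the record fields and retention rule are invented for illustration.

from cryptography.fernet import Fernet

record = {
    "user_id": "u123",
    "query": "nearest pharmacy",
    "contacts": ["alice", "bob"],  # collected, but not needed for this task
}

# Data minimization: keep only the fields the system actually needs.
minimized = {field: record[field] for field in ("user_id", "query")}

# Encryption at rest: store the minimized record encrypted, never in plaintext.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(repr(minimized).encode())

# Deletion when no longer needed: discarding the key makes the stored
# ciphertext permanently unreadable ("crypto-shredding").
del key

Deleting the key rather than chasing down every stored copy of the data is a common design choice, since ciphertext without a key is as good as deleted.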
Thanks to the Source with References
