The idea of machines that operate on the principles of the human brain has been around for more than fifty years. However, for most of the history of artificial intelligence, progress has been measured by how well machines solve particular problems, such as playing chess, driving cars, or passing the Turing Test. Relatively few artificial intelligence and machine learning techniques are based on an understanding of how the brain works and how it solves problems.

The impact of intelligent machines will rival and likely surpass the impact of computers operating under traditional principles, i.e. computers with pre-programmed rules, rather than learning systems. This endeavor will involve many people and many companies around the world.

Machine Intelligence: A Focus on Flexibility

To be intelligent, a brain or machine must take in a stream of sensory data, automatically find patterns, adapt to changing conditions, make predictions about future events, and be able to act as required to get desired outcomes. Essentially, this automated pattern finding, learning, and behavior is what the brain does and what intelligent machines need to do.
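As a rough illustration of this sense-learn-predict-act loop, consider the following minimal Python sketch. It maintains a running estimate of a sensor stream, predicts each next reading, and “acts” when reality diverges sharply from prediction. The exponential moving average and the alert threshold are illustrative assumptions on our part, not a model of how the brain does this.

    # Minimal sketch of a sense -> learn -> predict -> act loop.
    # The moving-average "model" and alert threshold are illustrative
    # assumptions, not a claim about how the brain learns.
    def run_agent(sensor_stream, alpha=0.1, threshold=3.0):
        estimate = None                     # learned estimate of the stream
        for reading in sensor_stream:
            if estimate is None:
                estimate = reading          # first observation: adopt it
                continue
            prediction = estimate           # predict the next reading
            error = reading - prediction    # compare prediction with reality
            if abs(error) > threshold:      # act when the world surprises us
                print(f"act: unexpected reading {reading:.2f} "
                      f"(predicted {prediction:.2f})")
            estimate += alpha * error       # learn: adapt the estimate

    run_agent([10.0, 10.2, 9.9, 10.1, 15.5, 10.0])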

Today’s computers operate on entirely different principles. In a simple sense, we can think of them as “programmed machines” whereas brains are “learning machines”. Since learning machines are often implemented on programmed computers, it is worth clarifying this distinction. To us, a “programmed computer” or just “computer” is one that executes a series of instructions where the programmer knows in advance what problem he or she is trying to solve and the algorithms for solving it. A learning machine, on the other hand, does not know in advance exactly how to solve a problem; it has to learn from data. If a learning machine is implemented on a computer, the software is not solving the problem directly but instead implementing the learning rules and methods. A learning machine always has to be trained, whereas a programmed computer does not.
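To make the distinction concrete, here is a small, hedged sketch in Python. The “programmed” classifier encodes a rule the programmer knew in advance; the “learning” one (built with scikit-learn, our choice of library, and toy invented data) has to be trained on examples before it can answer at all.

    # A "programmed" solution: the rule is known in advance.
    def programmed_classifier(message: str) -> bool:
        return "free money" in message.lower()           # hand-written rule

    # A "learning" solution: the rule is induced from training data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_messages = ["free money now", "claim your free money",
                      "meeting at noon", "lunch tomorrow?"]
    train_labels = [1, 1, 0, 0]                          # 1 = spam (toy labels)

    learned_classifier = make_pipeline(CountVectorizer(), LogisticRegression())
    learned_classifier.fit(train_messages, train_labels) # must be trained first

    print(programmed_classifier("Free money inside!"))        # True
    print(learned_classifier.predict(["free money offer"]))   # likely [1]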

Programmed computers have many strengths. They can be programmed to execute any algorithm, they are fast, and they are reliable. The result is great performance for applications where the inputs and desired outcomes are known.

But programmed computers are unable to do many tasks that our brains perform easily, such as understanding language, analyzing a complex visual scene, planning, moving through a world filled with obstacles, or learning new solutions as the world changes.

Intelligent machines will accomplish tasks that humans cannot do. For example, intelligent machines can directly ingest data from non-human sensors such as GPS or radar. An intelligent machine using the same learning principles as the brain could automatically find patterns in a scanning radar data stream, make predictions, and identify anomalies. The explosion of sensors in every area of human endeavor will require automated learning systems in order to understand and make use of that data.

Throughout the evolution of programmed computers, no one could imagine which applications would be important even ten years in the future. Similarly, we expect there will be important applications for intelligent machines that we can’t imagine today. This unclear future argues for flexibility as an essential component of machine intelligence. Intelligent machines designed around flexibility offer the promise of solving any problem where we have large amounts of data, a need for individualized models, and a need to understand data in a rapidly changing environment.

Finally, another important reason to have a flexible, general-purpose architecture is the notion of “network effects”. If each problem has a custom-built solution, the learning involved in solving that problem cannot be easily applied to the next problem. Moreover, the costs of crafting individual solutions to every problem are high, and rely on the availability of a small cadre of highly skilled data scientists. A universal, highly flexible approach will attract the greatest talent and resources. The accumulated value of shared applications, algorithms, utilities, tools and knowledge will enable the work to progress faster. Ultimately, this approach will yield lower-cost solutions for a broader range of problems.
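Returning to the radar example above, here is a minimal, hedged sketch of what automated anomaly detection on a sensor stream can look like. The rolling z-score method, the window size and the threshold are illustrative assumptions; a real radar deployment would demand far more sophistication.

    from collections import deque
    import statistics

    def stream_anomalies(readings, window=50, z_threshold=4.0):
        """Yield (index, value) for readings far outside recent history."""
        history = deque(maxlen=window)
        for i, value in enumerate(readings):
            if len(history) >= 10:                   # wait for some context
                mean = statistics.fmean(history)
                stdev = statistics.pstdev(history) or 1e-9
                if abs(value - mean) / stdev > z_threshold:
                    yield i, value                   # flag the anomaly
            history.append(value)

    # Toy "radar" stream: steady returns with one spurious spike.
    stream = [1.0 + 0.01 * (i % 5) for i in range(200)]
    stream[120] = 9.0
    print(list(stream_anomalies(stream)))            # expect index 120 flagged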

The Brain as a Blueprint

If you talk to someone outside of the field of artificial intelligence or machine learning and suggest that the path to create intelligent machines is to first understand how the human brain works and then build machines that work on the same principles, they will invariably say “that makes sense.”

However, this view is not held by everyone inside the fields of artificial intelligence and machine learning. A typical response you might hear is that “airplanes don’t flap their wings”, suggesting that it doesn’t matter how brains work, or worse, that by studying the brain you will go down the wrong path — like the people who tried to build planes with flapping wings. This analogy is mistaken. The Wright brothers understood well the difference between the aerodynamics of lift and the need for a method of propulsion. In fact, Orville Wright’s motivating question was, “if birds can glide for long periods of time, then…why can’t I?” Bird wings and airplane wings work on the same principles of lift. Those principles had to be understood before the Wright brothers could build an airplane. Wing flapping is a means of propulsion and there are several ways to create propulsion; the specific method used doesn’t matter that much. By analogy, we need to understand the principles of intelligence before we can build intelligent machines. We might find that we deviate from the brain in some of our methods, but since the only example we have of a truly intelligent system is the brain, and since the principles of intelligence are not obvious, it is wise to first understand these principles before attempting to build intelligent machines.

What does AI mean today?

Today, given the scale of the challenge, research scientists still lack a federated digital environment capable of fully sustaining AI. Because of this, they typically resort to shortcuts (such as heuristic approaches), developing algorithms that represent their best bet at localizing and managing views of specific intelligent logic. This takes the form of a centralized model that attempts to represent the intelligent logic researchers would like to use, with as much information and data as possible connected to it.

Many large companies and universities are applying more and more computing power to more and more data. However, this strengthens their analytics capability rather than building what we understand as intelligence; the focus is on automation, not autonomy. So far, the focus on hard-wired approaches such as semantic or logical representation, or neural mimicry and learning, has not produced a solution that genuinely represents intelligent logic. These approaches do not replicate the natural order within the universe, and they are unlikely to efficiently yield the expected benefit from AI, at least in the medium term. Of course, in time this will be achieved; a natural organisation will take place. However, we will get there faster if we understand the basic patterns of how intelligence evolves in nature.

AI in governance

Deep learning, a branch of AI, can be employed to tackle the issues of scale often prevalent in the execution of government schemes. It supports pattern recognition, image analysis and natural language processing (NLP) by modelling high-level abstractions in data, so that content can be compared conceptually rather than by purely rule-based methods. Take, for instance, the Clean India Initiative directed towards the construction of toilets in rural India. Public servants are tasked with uploading images of these toilet constructions to a central server for sampling and assessment. Image-processing AI can be used to flag photographs that do not resemble completely built toilets.
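As a hedged sketch of how such an image check might be wired up, the fragment below uses PyTorch/torchvision (our choice of framework) to repurpose a pretrained CNN as a two-class “complete / incomplete” classifier. The class convention and file paths are invented; the model would first have to be fine-tuned on audited photographs before its flags mean anything.

    # Sketch: repurpose a pretrained CNN as a binary "complete toilet?" check.
    # Framework choice, class convention and paths are our assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(weights="IMAGENET1K_V1")   # generic pretrained CNN
    model.fc = nn.Linear(model.fc.in_features, 2)      # classes: 0 done, 1 not

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406],    # ImageNet statistics
                             [0.229, 0.224, 0.225]),
    ])

    def flag_if_incomplete(photo_path: str) -> bool:
        """True when the model thinks the construction looks unfinished."""
        model.eval()
        with torch.no_grad():
            x = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
            return model(x).argmax(dim=1).item() == 1

    # The model must first be fine-tuned on audited photographs;
    # untrained, its flags are meaningless.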

Image recognition capabilities can also be used to identify whether the same official appears in multiple images, or whether photos have been uploaded from a location other than the intended site. Given the scale of the initiative, being able to check every image rather than a small sample will materially increase its effectiveness. Further, AI can be applied with varying effect to the Prime Minister’s other initiatives, such as the Digital India Initiative, Skill India and Make in India. Applications of AI in such large-scale public endeavors range from crop insurance schemes and tax fraud detection to detecting subsidy leakage and informing defense and security strategy.
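One plausible, hedged approach to the re-upload problem is perceptual hashing, sketched below with the Pillow and imagehash libraries (our choice; matching the same official across photos would additionally need a face-recognition model, omitted here). The distance threshold is an illustrative assumption.

    # Sketch: flag near-duplicate uploads via perceptual hashing.
    # Library choice and threshold are assumptions for illustration.
    from PIL import Image
    import imagehash

    def find_near_duplicates(photo_paths, max_distance=5):
        """Return pairs of paths whose perceptual hashes nearly match."""
        hashes = {p: imagehash.phash(Image.open(p)) for p in photo_paths}
        paths = list(hashes)
        pairs = []
        for i, a in enumerate(paths):
            for b in paths[i + 1:]:
                if hashes[a] - hashes[b] <= max_distance:  # Hamming distance
                    pairs.append((a, b))
        return pairs

    # e.g. find_near_duplicates(["site1.jpg", "site1_reupload.jpg"])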

The Make in India and Skill India initiatives can be heavily augmented as well as disrupted by AI adoption in the short term. While the former is aimed at building the nation-wide capabilities required to make India a self-sustaining hub of innovation, design, production and export, the latter seeks to aggressively build and enhance human capital.

However, the point to consider is that if investments are made in the two initiatives without due cognizance of how Industry 4.0 (the next industrial revolution, driven by robotic automation) may evolve with respect to workforce size and skill sets, we may end up with capital-intensive infrastructure and assets that are not optimized for automated operations, and a large workforce skilled in areas that increasingly require no manual intervention.

AI can also be applied in traditional industries such as agriculture. The Department of Agriculture, Cooperation and Farmers Welfare, Ministry of Agriculture, runs the Kisan Call Centers across the country to respond to issues raised by farmers instantly and in their local language. An AI system could assist these centers by linking the various sources of available information. For example, it could pick up soil reports from government agencies and link them to the environmental conditions prevalent over the years, using data from remote-sensing satellites. It could then advise on the optimal crop to sow in that land pocket. The same information could be used to determine the crop’s susceptibility to pests, so that pre-emptive measures, such as supplying the required pesticides to that land pocket and notifying farmers about the risk, can be taken. Given a high level of connectivity, this is a feasible, ready-to-deploy solution that uses AI to augment the existing system.
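A hedged sketch of the data-linking step described above: the function below joins a hypothetical soil report with hypothetical historical rainfall figures to suggest a crop. Every threshold and crop choice is an invented placeholder; real advisories would come from agronomic models and local expertise.

    # Toy decision support: link soil and rainfall data to a crop suggestion.
    # All thresholds and crop choices are invented for illustration.
    from statistics import fmean

    def suggest_crop(soil_ph: float, rainfall_mm_by_year: list) -> str:
        avg_rain = fmean(rainfall_mm_by_year)
        if 5.5 <= soil_ph <= 7.0 and avg_rain > 1000:
            return "rice"        # tolerates heavy rain, mildly acidic soil
        if 6.0 <= soil_ph <= 8.0 and avg_rain > 400:
            return "wheat"
        return "millet"          # hardy fallback for dry or poor soils

    # Soil report from a government lab + satellite-derived rainfall:
    print(suggest_crop(soil_ph=6.3, rainfall_mm_by_year=[1100, 950, 1200]))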

Ethical considerations

One of the major concerns in any conversation involving AI is the topic of ethical, legal and societal norms. AI research needs to be based on a sound understanding of the implications of any innovation, and to ensure alignment with rules and norms. A common concern is the breach of privacy that can arise when attackers exploit AI solutions to collect private and sensitive information.

A bigger threat is the misuse of ML algorithms by hackers to develop autonomous techniques that jeopardize the security and safety of vital information.

There is a need to define what ‘acceptable behavior’ for an AI system means in its respective application domain. This should ideally drive design considerations, engineering techniques and reliability. Due diligence is needed to ensure that AI technologies perform in an easy-to-understand manner and that the outcomes of their application are in line with perceptions of fairness, equality and local cultural norms, so as to secure broad societal acceptance.

AI development will hence need the involvement of experts from multidisciplinary fields such as computer science, the social and behavioural sciences, ethics, biomedical science, psychology, economics, law and policy research.

AI algorithms might, by design, be subject to errors that lead to unfair outcomes for particular racial and economic groups; for example, citizen profiling that uses demographics to estimate the probability of committing crimes or defaulting on financial obligations. AI system actions should therefore be transparent and easily understandable by humans. Deep learning algorithms that are opaque to users could create hurdles in domains such as healthcare, where diagnosis and treatment need to be backed by a solid chain of reasoning to earn patient trust. Trustworthy AI systems are built around the following tenets (a short sketch illustrating the transparency tenet follows the list):

  • Transparency (operations visible to user)
  • Credibility (outcomes are acceptable)
  • Auditability (efficiency can be easily measured)
  • Reliability (AI systems perform as intended)
  • Recoverability (manual control can be assumed if required)
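As promised above, here is a minimal, hedged illustration of the transparency tenet: an inherently interpretable model can expose its entire chain of reasoning. The library choice (scikit-learn), the loan-screening framing and the data are all invented for illustration.

    # Sketch: an interpretable model whose reasoning can be printed in full.
    # Features, data and library choice are illustrative assumptions.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy loan-screening data: [income_lakhs, existing_loans]
    X = [[2, 3], [8, 0], [5, 1], [1, 4], [9, 1], [3, 2]]
    y = [0, 1, 1, 0, 1, 0]                    # 1 = approve (toy labels)

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["income_lakhs", "existing_loans"]))
    # The printed rules ARE the model: every decision can be audited,
    # unlike the internals of an opaque deep network.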

Because their interpretation is vague and context-dependent, ethical standards pose a challenge when encoded into AI systems. Architectural frameworks widely cited to counter this challenge include:

  • An architecture in which the operational AI is kept distinct from a monitor agent responsible for the legal and ethical supervision of its actions (a minimal sketch of this pattern follows the list)
  • A framework to ensure that AI behavior is safe for humans and implemented through a set of logical constraints on AI system behavior
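A hedged sketch of the first pattern listed above: an operational agent proposes actions, and a separate monitor vets each one against explicit constraints before anything is executed. The class names and the privacy constraint are invented to show the shape of the architecture, not a standard API.

    # Sketch of the "operational AI + separate ethics/legal monitor" pattern.
    # Names and constraints are invented; the separation is the point.
    class OperationalAI:
        def propose_action(self, situation: dict) -> dict:
            # Placeholder policy: always propose sharing the full record.
            return {"type": "share_data", "fields": situation["fields"]}

    class MonitorAgent:
        """Supervises actions against explicit legal/ethical constraints."""
        FORBIDDEN_FIELDS = {"religion", "caste", "medical_history"}

        def approve(self, action: dict) -> bool:
            if action["type"] == "share_data":
                return not (set(action["fields"]) & self.FORBIDDEN_FIELDS)
            return True

    def act(situation: dict) -> None:
        action = OperationalAI().propose_action(situation)
        if MonitorAgent().approve(action):
            print("executing:", action)
        else:
            print("blocked by monitor:", action)

    act({"fields": ["name", "income"]})            # executed
    act({"fields": ["name", "medical_history"]})   # blocked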

The real challenges

To build a truly empowering AI, we need to take a fundamentally new and broader perspective: one that builds a digital environment (including its infrastructure and social organisation) in which digital intelligence can flourish. This means adopting the following set of resolutions:

  • To build AI platforms that are additionally based on the laws of complexity and thereby exploit approaches linked to natural evolutionary models
  • To evolve our existing cloud infrastructures to be more cortex-oriented
  • To use the Internet of Things (IoT) as a consistent platform to create an interface that could work like a digital membrane in our physical world

Creating basic systems that follow these principles will bring us closer to the moment at which much more of our world can be understood and even predicted, by optimizing our capacity to be aware of the complexity of our world and our day-to-day activities.

Both physical and digital worlds stand on the same fundamental building blocks: particles, atoms and matter. It is apparent that both will follow the same pattern of evolving to generate increasingly complex structures. Understanding and exploiting this will enable us to build AI in a way that is more efficient and empowering for the human race.

In order to build improved augmented intelligence or AI systems we need to leverage the complexity and natural structure of our own world and existence. We need to create more socially oriented and communicative systems that interact with us as individuals and groups in a more standard and universally structured way. We also need to build a supporting digital environment in which AI systems can navigate and operate separately and together in clusters.

With this in mind, by exploiting both digital content and the IoT we can build digital membranes in our own physical and human world. Of course, this is not a trivial task, and we will therefore need to build on what has already been discovered and created.

Looking ahead

The field of AI has awed researchers and users alike over time. From Alan Turing’s 1950 paper to science-fiction films, there has been continual debate about what AI can do and how human beings will be affected by it. In many ways, this speculation is not surprising; it is typical of any evolving field about which complete knowledge has yet to be obtained. The difference is that AI will constantly evolve, so foreseeing the next change becomes a big ask.
