The Evolution of Machine Learning

Machine Learning is the branch of computer science concerned with developing programs that learn and improve on their own. According to Arthur Samuel, an American pioneer in computer gaming and artificial intelligence, Machine Learning is the subfield of computer science that “gives the computer the ability to learn without being explicitly programmed.” It allows developers to build algorithms that improve automatically by finding patterns in existing data, without explicit instructions from a human. Machine Learning relies entirely on data: the more data available, the more effective it becomes.

The Machine Learning development approach involves learning from data inputs, then evaluating and optimizing the model’s results. Machine Learning is widely used in data analytics as a method for developing algorithms that make predictions on data, and it draws on probability, statistics, and linear algebra.
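
To make this learn-evaluate-optimize loop concrete, here is a minimal sketch using scikit-learn; the library, dataset, and hyperparameter grid are assumptions chosen for illustration, since the text prescribes none.

```python
# A minimal learn/evaluate/optimize loop (scikit-learn is an assumed
# choice; the article names no specific library or dataset).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)

# Learn from data inputs: hold out a test set for honest evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Optimize: search a small hyperparameter grid with cross-validation.
search = GridSearchCV(
    LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}, cv=5
)
search.fit(X_train, y_train)

# Evaluate the optimized model on data it has never seen.
print("test accuracy:", search.score(X_test, y_test))
```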

Machine Learning is broadly classified into three categories, depending on the nature of the learning ‘signal’ or ‘feedback’ available to the learning system; the first two categories are contrasted in a short sketch after the list.

  1. Supervised learning: The computer is presented with example inputs and their desired outputs. The goal is to learn a general rule that maps inputs to outputs.
  2. Unsupervised learning: The computer is presented with inputs but no desired outputs; the goal is to find structure in the inputs.
  3. Reinforcement learning: A computer program interacts with a dynamic environment and must achieve a certain goal without a guide or teacher.
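
The following minimal sketch runs a supervised classifier and an unsupervised clustering algorithm on the same data; scikit-learn and the iris dataset are assumptions made purely for illustration.

```python
# Supervised vs. unsupervised learning on the same inputs
# (scikit-learn and the iris dataset are assumed for illustration).
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Supervised: inputs AND desired outputs (labels) are given; the model
# learns a general rule mapping inputs to outputs.
clf = KNeighborsClassifier().fit(X, y)
print("predicted label:", clf.predict(X[:1]))

# Unsupervised: only the inputs are given; the model looks for structure
# (here, three clusters) without any labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:10])
```

Reinforcement learning is omitted from the sketch because it additionally requires an environment that responds to the program’s actions.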

Machine Learning takes advantage of the ability of computer systems to learn from correlations hidden in data, an ability that can be further exploited by developing intelligent and efficient Machine Learning algorithms. While Machine Learning may seem new, it existed long before it was recognized as a popular technology. It has evolved to solve real problems of human life and to automate processes in industries such as banking, healthcare, telecom, and retail. Software developed using Machine Learning can learn from its dynamic environment and adapt to changing requirements.

In contrast to traditional software, the lessons learned by Machine Learning algorithms can be scaled and transferred across multiple applications. Machine Learning naturally accounts for the large number of variables that influence results or observations, which is valuable in both science and business. Because of these features and advantages, today’s software is built for automated decision making and more innovative solutions, which makes investment in Machine Learning a natural evolution of technology.

Evolution over the years

Machine Learning technology has existed since 1952. It has evolved drastically over the last decade, after several transition periods in the mid-90s. The data-driven approach to Machine Learning emerged during the 1990s. From 1995 to 2005, the focus was largely on natural language, search, and information retrieval, and the Machine Learning tools of that era were simpler than those in use today. Neural networks, a subset of Machine Learning comprising computer systems modeled on the human brain and nervous system, were popular in the 1980s and started making a comeback around 2005; they have become one of the trending technologies of the current decade. According to Gartner’s 2016 Hype Cycle for Emerging Technologies, Machine Learning is among the technologies at the peak of inflated expectations and is expected to reach mainstream adoption in the next 2–5 years. Technological capabilities such as infrastructure and technical skills must also advance to keep up with the growth of Machine Learning.

Machine Learning has been one of the most active and rewarding areas of research due to its widespread use in many fields. It has brought a monumental shift in technology and its applications. Some of the developments that have made a huge positive impact on real-world problem solving are highlighted in the following sections.

Natural Language Processing

Natural language processing (NLP) is the method of connecting computer systems with natural languages such as English. NLP helps computer systems perform tasks and automate manual processes based on human input, whether spoken or written: for example, shopping by voice through speakers such as Amazon’s Alexa, or automatically building a user’s preference list on a web page based on the user’s interests.

NLP is applied widely to characterize, interpret, and understand the information content of free-form text and other unstructured data. An estimated 80% of the world’s data is unstructured, so NLP is essential for handling such data and drawing valuable insights from it. It allows computer systems to learn from data such as email, social media responses, audio, and video, which helps them understand human interactions, human responses, and other associated events or activities in an environment.

Unlike the older generation of NLP algorithms, which involved manual categorization of text, modern NLP algorithms are mainly based on statistical Machine Learning. Machine Learning algorithms automatically learn the rules for categorizing text by analyzing a corpus (a set of text documents). Many different classes of Machine Learning algorithms, such as decision trees, support vector machines, and Naïve Bayes, have been applied to NLP tasks; a brief classifier sketch follows the steps below. The process of NLP is explained in the following steps:

  1. Lexical analysis: This step identifies and analyzes the structure of words. After this step, the whole chunk of raw text is divided into paragraphs, sentences, and words.
  2. Syntactical analysis (parsing): This step analyzes the words in a sentence for grammar and arranges them in a way that shows the relationships among them. A sentence such as “The school goes to the boy” is rejected by an English syntactic analyzer.
  3. Semantic analysis: This step focuses on drawing dictionary meaning from the text and discards sentences that are not meaningful, such as “hot ice cream.”
  4. Discourse integration: The meaning of a word or sentence is analyzed in light of the words or sentences that precede it. Phrases such as “looking for a great product” and “not looking like a great product” are categorized differently (as positive and negative feedback) to draw meaning out of the text.
  5. Pragmatic analysis: In this phase, the results of semantic analysis are interpreted with respect to a context or environment. For example, the sentences “The large cat chased the rat.” and “The large cat is Felix.” are further interpreted to identify the large cat as Felix.
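
As a sketch of the statistical approach described above, the following example trains a Naïve Bayes text classifier with scikit-learn; the tiny corpus and its labels are invented here purely for illustration.

```python
# Statistical text categorization with Naive Bayes (scikit-learn is an
# assumed choice; the four-document corpus is invented for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

corpus = [
    "looking for a great product",       # positive
    "this product works great",          # positive
    "terrible and broke after one day",  # negative
    "would not recommend this",          # negative
]
labels = ["pos", "pos", "neg", "neg"]

# Lexical analysis happens inside CountVectorizer (it splits the raw
# text into words); Naive Bayes then learns categorization rules from
# the word counts of the corpus.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(corpus, labels)

print(model.predict(["a great product"]))  # expected: ['pos']
```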

Deep Learning

Deep Learning is part of a broader family of Machine Learning methods and is also called deep structured learning, hierarchical learning, or Deep Machine Learning. Deep Learning is a rebranding of a Machine Learning technique called the artificial neural network.

Artificial neural networks are a class of computing systems inspired by the structure and function of the brain. They are built from very simple processing nodes formed into a network. They are fundamentally pattern recognition systems and tend to be most useful for tasks that can be described in terms of pattern recognition. They are ‘trained’ by feeding them datasets of inputs with known outputs.

Deep Learning is an extension of neural networks, which have existed since the 1960s. According to Jeff Dean, an American computer scientist involved with the Google Brain project and the development of the large-scale Deep Learning systems DistBelief and TensorFlow, Deep Learning is essentially large deep neural networks. He notes that neural networks scale well: results improve with more data and larger models, which in turn require more computation to train.
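
To show what a small deep neural network looks like in practice, here is a minimal sketch using Keras from TensorFlow (mentioned above); the XOR task and the network shape are assumptions chosen for illustration.

```python
# A tiny deep neural network learning the XOR pattern, a classic toy
# pattern-recognition problem that a single-layer model cannot solve.
# (Keras/TensorFlow and the task are assumed for illustration.)
import numpy as np
import tensorflow as tf

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([0, 1, 1, 0], dtype="float32")  # XOR outputs

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="relu"),   # stacked hidden layers
    tf.keras.layers.Dense(8, activation="relu"),   # are the "deep" part
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)

print(model.predict(X).round().ravel())  # usually converges to [0. 1. 1. 0.]
```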

Cognitive Computing

Many enterprises are evolving and incorporating new technologies to keep pace with modern business. Many technologies that were once limited to research or to narrow industry niches are now being considered for mainstream adoption. One such technology that is gaining popularity is cognitive computing (or cognitive intelligence).

Cognitive computing is the simulation of human thought processes in a computerized model. It develops self-learning systems that use data mining and Machine Learning techniques such as pattern recognition and Natural Language Processing to mimic the way the human brain works. The goal of cognitive computing is to create automated IT systems that can solve problems without requiring human assistance or guidance.

Cognitive computing is a new kind of computing aimed at very complex problems, able to draw meaningful conclusions from diverse sources. IBM began a research project called Watson with the intent of building a system that can learn, think, and understand like a human; it was developed specifically to answer questions on the quiz show “Jeopardy!” in 2011. Watson was built by combining NLP, Machine Learning, and knowledge representation. Given a question, Watson searched its repository for information, developed and analyzed hypotheses, and produced answers, also in natural-language form.
