For Deep Blue, the harder problem turned out to be grabbing the pieces and moving them on the board without knocking them over!
- Autonomy: the ability to perform tasks in complex environments without constant guidance by a user.
- Adaptivity: the ability to improve performance by learning from experience.
For there to be AI, there must be a decision and an unforeseen external context (the autonomous-adaptive pair). Processing based only on stored information and predefined formulas is not AI.
Machine learning can be said to be a subfield of AI, which itself is a subfield of computer science.
Computer science is a relatively broad field that includes AI but also other subfields such as distributed computing, human-computer interaction, and software engineering.
Machine learning enables AI solutions that are adaptive, i.e., able to learn from their context. These are systems that improve their performance in a given task as they accumulate more experience or data (the adaptive side).
Deep learning is a subfield of machine learning related to complex mathematical models.
Robotics means building and programming robots so that they can operate in complex, real-world scenarios (the autonomous side). In a way, robotics is the ultimate challenge of AI since it requires a combination of virtually all areas of AI, for example: computer vision, natural language processing or reasoning under uncertainty.
A system can respond or act in a "coherent" way without guidance only if it can understand the context and continue learning from it.
What is a robot? In brief, a robot is a machine comprising sensors (which sense the environment) and actuators (which act on the environment) that can be programmed to perform sequences of actions. Any kind of vehicle that has at least some level of autonomy and includes sensors and actuators also counts as robotics.
In contrast, software-based solutions such as a customer service chatbot, even if they are sometimes called software robots, are not counted as robotics: they have neither sensors nor actuators.
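To make the sensor/actuator distinction concrete, here is a deliberately simplified sense-decide-act loop in Python. All names and numbers are invented for illustration; a real robot would read hardware sensors and drive motors.

```python
# A toy sense-decide-act loop (all names are invented for illustration).
class ToyRobot:
    def sense(self):
        # In a real robot this would read e.g. a distance sensor.
        return {"obstacle_distance_m": 0.4}

    def decide(self, observation):
        # Programmed rule: stop if an obstacle is closer than half a metre.
        return "stop" if observation["obstacle_distance_m"] < 0.5 else "forward"

    def act(self, command):
        # In a real robot this would send the command to the actuators (motors).
        print(f"actuators received command: {command}")

robot = ToyRobot()
robot.act(robot.decide(robot.sense()))   # one iteration of the loop
```

A chatbot, by comparison, has only the middle step: it maps text to text, with nothing that senses or acts on the physical environment.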
Definition of AI
A set of tools that enables a mechanism to read/recognize, interpret, decide/choose, and act based on combined information coming from its own memory and the real-time context.
This definition does not confine AI within the limits of computing as we know it so far; it takes into account the need to understand and interpret the context in real time; and, if it is correct, an AI system would have the autonomy to adaptively and progressively increase its knowledge.

Statistics and deep learning
We need to introduce concepts like uncertainty and probability (statistics) to approach real-world AI instead of simple puzzles and games.
Probability (statistics) has turned out to be the best approach for reasoning under uncertainty, and almost all current AI applications are based, to at least some degree, on probabilities.
Probability allows us to quantify uncertainty and to automate reasoning about it.
For AI, uncertainty is not beyond the scope of rational thinking and discussion, and probability provides a systematic way of reasoning about it. Probably the easiest way to represent uncertainty is through odds.
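As a small worked example of the relationship between odds and probabilities: odds of 3:1 (three favorable outcomes for every unfavorable one) correspond to a probability of 3/(3+1) = 0.75. The sketch below shows the conversion in both directions; the function names are my own.

```python
def odds_to_probability(favorable, unfavorable):
    """Convert odds of 'favorable:unfavorable' into a probability."""
    return favorable / (favorable + unfavorable)

def probability_to_odds(p):
    """Return the odds ratio (favorable/unfavorable) implied by probability p."""
    return p / (1 - p)

print(odds_to_probability(3, 1))   # 0.75
print(probability_to_odds(0.75))   # 3.0, i.e. odds of 3:1
```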
If the adaptivity side of AI is based on ML, then the classic methods of statistics are the foundation on which AI builds. The most common are:
- Linear Regression. Well-known examples of this method are: prediction of click rates in online advertising, prediction of retail demand for products, prediction of box-office revenue of Hollywood movies, prediction of software cost, prediction of insurance cost, prediction of crime rates, prediction of real estate prices (a minimal code sketch follows this list).
- Logistic Regression
- Decision Trees
- Random Forests
- Support Vector Machines (SVM)
- Naive Bayes
- K-Nearest Neighbors (KNN)
- Principal Component Analysis (PCA)
- Clustering Algorithms: Methods like K-Means, Hierarchical Clustering, and DBSCAN group similar data points together based on their similarity or distance metrics.
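As the sketch promised above, here is a minimal linear regression example using scikit-learn. The data is synthetic and invented purely for illustration; the two features could stand for, say, floor area and number of rooms, and the target for a real estate price.

```python
# A minimal linear regression sketch with scikit-learn (synthetic, invented data).
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[50, 2], [70, 3], [90, 3], [120, 4], [150, 5]])  # features
y = np.array([150_000, 210_000, 250_000, 320_000, 400_000])    # target prices

model = LinearRegression()
model.fit(X, y)                       # learn the weights from data (the adaptive side)

print(model.coef_, model.intercept_)  # learned coefficients and intercept
print(model.predict([[100, 3]]))      # predict the price of an unseen example
```

The same fit/predict pattern applies to most of the other methods in the list (logistic regression, decision trees, SVMs, and so on); what changes is the model class and the shape of the decision it makes.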
It should be said that with advancements in deep learning and neural networks, many traditional statistical methods have been complemented or replaced by more complex models.
In fact, deep learning is a subfield of ML that focuses on the development and training of neural networks, particularly deep neural networks.
Neural networks are computational models inspired by the structure and function of biological neural networks in the human brain. They consist of interconnected nodes, called artificial neurons or "units," organized in layers. The connections between these units are assigned weights that determine the strength and influence of the signals being passed through the network.
Deep neural networks are characterized by having multiple hidden layers between the input and output layers, allowing them to learn and represent complex patterns and relationships in the data.
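To make the "layers of weighted units" idea concrete, here is a minimal NumPy-only sketch of a forward pass through a small deep network: an input layer, two hidden layers, and an output layer. The weights are random here; in practice they would be learned from data. Layer sizes and names are chosen arbitrarily for illustration.

```python
# A toy forward pass through a small deep network (weights are random, not learned).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)          # a common activation function

# Layer sizes: 4 inputs -> 8 hidden units -> 8 hidden units -> 1 output
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)           # first hidden layer
    h2 = relu(h1 @ W2 + b2)          # second hidden layer
    return h2 @ W3 + b3              # output layer (no activation)

x = rng.normal(size=(1, 4))          # one example with 4 input features
print(forward(x))
```

Training consists of adjusting the weight matrices (W1, W2, W3) so that the network's outputs match the desired outputs on the training data.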
To summarize, deep learning is a subset of machine learning that focuses on training deep neural networks to learn complex patterns and make predictions or decisions based on the learned representations.