AI (Artificial Intelligence) Words You Need To Know
In 1956, John McCarthy set up a ten-week research project at Dartmouth College focused on a new concept he called “artificial intelligence.” The event included many of the researchers who would become giants in the emerging field, such as Marvin Minsky, Nathaniel Rochester, Allen Newell, O.G. Selfridge, Raymond Solomonoff, and Claude Shannon.
Yet the reaction to the phrase artificial intelligence was mixed. Did it really explain the technology? Was there a better way to word it?
Well, no one could come up with something better, and so AI stuck.
Since then, the field has coined plenty of terms, which often describe complex technologies and systems. The result is that it can be tough to understand what is being discussed.
So to help clarify things, let’s take a look at the AI words you need to know:
Algorithm
From Kurt Muehmel, who is a VP of Sales Engineering at Dataiku:
A series of computations, from the simplest (long division using pencil and paper) to the most complex. For example, machine learning uses an algorithm to process data, discover rules hidden in the data, and encode those rules in a "model" that can be used to make predictions on new data.
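To make the definition concrete, the pencil-and-paper example above can itself be written out as a short program. This Python sketch (our own illustration, not from Dataiku) walks the dividend's digits left to right, just as one would on paper:

```python
def long_division(dividend: int, divisor: int) -> tuple[int, int]:
    """Schoolbook long division: walk the digits left to right, as on paper."""
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient = quotient * 10 + remainder // divisor
        remainder = remainder % divisor
    return quotient, remainder

print(long_division(7825, 31))   # (252, 13), since 252 * 31 + 13 == 7825
```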
Machine Learning
From Dr. Hossein Rahnama, who is the co-founder and CEO of Flybits:
Traditional programming involves specifying a sequence of instructions that dictate to the computer exactly what to do. Machine learning, on the other hand, is a different programming paradigm wherein the engineer provides examples comprising what the expected output of the program should be for a given input. The machine learning system then explores the set of all possible computer programs in order to find the program that most closely generates the expected output for the corresponding input data. Thus, with this programming paradigm, the engineer does not need to figure out how to instruct the computer to accomplish a task, provided they have a sufficient number of examples for the system to identify the correct program in the search space.
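As a toy illustration of this paradigm (our own Python sketch, not Flybits code), the snippet below supplies input/output examples and exhaustively searches a small space of candidate "programs", here linear functions with small integer coefficients, for the one that most closely reproduces the expected outputs:

```python
# Hidden rule the examples were generated from: y = 2*x + 3.
examples = [(1, 5), (2, 7), (3, 9), (4, 11)]

# The "search space": every linear program a*x + b with small integer coefficients.
candidates = [(lambda x, a=a, b=b: a * x + b, f"{a}*x + {b}")
              for a in range(-5, 6) for b in range(-5, 6)]

def total_error(program):
    return sum(abs(program(x) - y) for x, y in examples)

# Pick the program that most closely generates the expected outputs.
best_program, best_name = min(candidates, key=lambda c: total_error(c[0]))
print(best_name)          # 2*x + 3, recovered from the examples alone
print(best_program(10))   # 23, a prediction on unseen input
```

Real machine learning systems search far richer program spaces with far cleverer strategies than brute force, but the principle is the same: the engineer supplies examples, and the system finds the program.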
Neural Networks
From Dan Grimm, who is the VP and General Manager of Computer Vision at RealNetworks:
Neural networks are mathematical constructs that mimic the structure of the human brain to summarize complex information into simple, tangible results. Much as we train the human brain, for example, to learn to control our bodies in order to walk, these networks also need to be trained with significant amounts of data. Over the last five years, there have been tremendous advancements in the layering of these networks and in the compute power available to train them.
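Here is a minimal sketch of that structure in Python with NumPy: layers of weighted sums followed by nonlinearities, turning a complex input into a simple result. The weights below are random placeholders; training on data is what would adjust them:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden layer: 3 inputs -> 4 units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output layer: 4 units -> 1 result

def forward(x):
    h = np.tanh(W1 @ x + b1)                    # weighted sums + nonlinearity
    return 1 / (1 + np.exp(-(W2 @ h + b2)))     # squash to a single value in (0, 1)

# Complex input in, simple tangible result out; training would tune W and b.
print(forward(np.array([0.5, -1.0, 2.0])))
```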
Deep Learning
From Sheldon Fernandez, who is the CEO of DarwinAI:
Deep Learning is a specialized form of Machine Learning, based on neural networks that emulate the cognitive capabilities of the human mind. Deep Learning is to Machine Learning what Machine Learning is to AI: not the only manifestation of its parent, but generally the most powerful and eye-catching version. In practice, deep learning networks capable of performing sophisticated tasks are (1) many layers deep, with millions, sometimes billions, of inputs (hence the ‘deep’); and (2) trained using real-world examples until they become proficient at the prevailing task (hence the ‘learning’).
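As a hedged sketch of what "many layers deep" looks like in code, here is a stack of layers in Keras; the layer sizes, the 784-dimensional input (e.g., flattened 28x28 images), and the 10 output classes are arbitrary choices for illustration:

```python
import tensorflow as tf

# Eight hidden layers stacked in sequence, then a classification head.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(128, activation="relu") for _ in range(8)]
    + [tf.keras.layers.Dense(10, activation="softmax")]
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.build(input_shape=(None, 784))
model.summary()   # prints the depth and the (often huge) parameter counts
```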
Explainability
From Michael Beckley, who is the CTO and founder of Appian:
Explainability is knowing why AI rejects your credit card charge as fraud, denies your insurance claim, or confuses the side of a truck with a cloudy sky. Explainability is necessary to build trust and transparency into AI-powered software. The power and complexity of AI deep learning can make predictions and decisions difficult to explain to both customers and regulators. As our understanding of potential bias in data sets used to train AI algorithms grows, so does our need for greater explainability in our AI systems. To meet this challenge, enterprises can use tools like Low Code Platforms to put a human in the loop and govern how AI is used in important decisions.
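One simple route to explainability, sketched below with invented data, is to use an interpretable model whose learned weights can be read off directly. This is a generic illustration, not Appian's approach, and the transaction features are hypothetical; deep networks typically require additional tooling:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical transaction features; data is invented for illustration.
features = ["amount", "foreign_merchant", "night_time"]
X = np.array([[20, 0, 0], [15, 0, 1], [900, 1, 1], [850, 1, 0],
              [30, 0, 0], [700, 1, 1], [25, 0, 1], [950, 1, 0]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 1 = flagged as fraud

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    # The sign and relative size of each weight hint at why a charge gets
    # flagged (note that weights also depend on feature scale).
    print(f"{name}: {weight:+.3f}")
```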
Supervised, Unsupervised and Reinforcement Learning
From Justin Silver, who is the manager of science & research at PROS:
There are three broad categories of machine learning: supervised, unsupervised, and reinforcement learning.

In supervised learning, the machine observes a set of cases (think of “cases” as scenarios like “The weather is cold and rainy”) and their outcomes (for example, “John will go to the beach”) and learns rules with the goal of being able to predict the outcomes of unobserved cases (if, in the past, John usually has gone to the beach when it was cold and rainy, in the future the machine will predict that John will very likely go to the beach whenever the weather is cold and rainy).

In unsupervised learning, the machine observes a set of cases, without observing any outcomes for these cases, and learns patterns that enable it to classify the cases into groups with similar characteristics (without any knowledge of whether John has gone to the beach, the machine learns that “The weather is cold and rainy” is similar to “It’s snowing” but not to “It’s hot outside”).

In reinforcement learning, the machine takes actions towards achieving an objective, receives feedback on those actions, and learns through trial and error to take actions that lead to better fulfillment of that objective (if the machine is trying to help John avoid those cold and rainy beach days, it could give John suggestions over a period of time on whether to go to the beach, learn from John’s positive and negative feedback, and continue to update its suggestions).
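The three categories can be sketched compactly in Python, reusing the beach example. The data, labels, and rewards below are invented, and the reinforcement part uses a simple epsilon-greedy strategy purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

# Supervised: cases (temperature, is_rainy) with observed outcomes (went to beach?).
X = np.array([[5, 1], [7, 1], [30, 0], [28, 0], [6, 1], [31, 0]])
y = np.array([1, 1, 0, 0, 1, 0])                 # John goes when cold and rainy
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[4, 1]]))                     # predicts 1: cold and rainy -> beach

# Unsupervised: the same cases with no outcomes; group similar weather together.
print(KMeans(n_clusters=2, n_init=10).fit_predict(X))

# Reinforcement: trial and error with feedback (epsilon-greedy over two suggestions).
rng = np.random.default_rng(0)
values, counts = np.zeros(2), np.zeros(2)        # estimated reward per suggestion
for _ in range(300):
    a = rng.integers(2) if rng.random() < 0.2 else int(values.argmax())
    reward = rng.normal(1.0 if a == 1 else 0.2)  # hidden truth: suggestion 1 is better
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]
print(values.round(2))                           # suggestion 1's estimate should end higher
```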
Bias
From Mehul Patel, who is the CEO of Hired:
While you may think of machines as objective, fair, and consistent, they often adopt the same unconscious biases as the humans who built them. That’s why it’s vital that companies recognize the importance of normalizing data (adjusting values measured on different scales to a common scale) to ensure that human biases aren’t unintentionally introduced into the algorithm. Take hiring as an example: if you give a computer a data set with 100 female candidates and 300 male candidates and ask it to predict the best person for the job, it is going to surface more male candidates simply because the data set contains three times as many men as women. Building technology that is fair and equitable may be challenging, but it will ensure that the algorithms informing our decisions and insights are not perpetuating the very biases we are trying to undo as a society.
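The volume effect described above is easy to reproduce with synthetic data. The sketch below is our own illustration, not Hired's methodology; it also shows one common mitigation, weighting each group equally:

```python
import numpy as np

rng = np.random.default_rng(0)
n_male, n_female = 300, 100
group = np.array(["M"] * n_male + ["F"] * n_female)
skill = rng.normal(size=n_male + n_female)   # identical skill distribution for both groups

# "Predict the best 100 people": a shortlist built from this pool surfaces
# roughly three times as many men, purely because of volume.
shortlist = group[np.argsort(skill)[-100:]]
print((shortlist == "M").sum(), (shortlist == "F").sum())   # roughly 75 vs 25

# One common mitigation: weight each group equally (e.g., pass these as
# sample_weight when fitting a scikit-learn model) so volume alone cannot dominate.
weights = np.where(group == "M", 1 / n_male, 1 / n_female)
```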
Backpropagation
From Victoria Jones, who is the Zoho AI Evangelist:
Backpropagation algorithms allow a neural network to learn from its mistakes. The technique traces an event backwards from the outcome to the prediction and analyzes the margin of error at different stages to adjust how the network will make its next prediction. Around 70% of the features of our AI assistant, Zia, use backpropagation, including Zoho Writer's grammar-check engine and Zoho Notebook's OCR technology, which lets Zia identify objects in images and make those images searchable. This technology also allows Zia's chatbot to respond more accurately and naturally. The more a business uses Zia, the more Zia understands how that business is run. This means that Zia's anomaly detection and forecasting capabilities become more accurate and personalized to any specific business.
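As a generic illustration (not Zoho's implementation), here is a minimal backpropagation loop in NumPy: a one-hidden-layer network learns XOR by propagating the output error backwards through the layers and nudging each weight against its gradient:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)        # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)        # output layer weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the current prediction.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: track the error from the outcome back through the layers.
    d_out = (out - y) * out * (1 - out)              # error at the output
    d_h = (d_out @ W2.T) * h * (1 - h)               # error pushed back to the hidden layer
    # Adjust every weight a little against its share of the error.
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0] as training proceeds
```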