
Artificial Intelligence

By Cecilia Caselli
Published in Fundamentals
February 25, 2023
4 min read

After reading yet another article about Artificial Intelligence, I realized how little I knew about the topic. Why was everyone so enthusiastic when ChatGPT came out? Why did it raise so many discussions across the Internet? Why are people saying AI could be a threat to many current jobs? The list of questions could go on.

In recent years, Artificial Intelligence has rapidly grown in popularity. This has inevitably led non-specialized newspapers, media outlets, and the general public to mention AI more and more often, along with the role it may play in society, with special attention to the future of the job market. However, as often happens when a topic becomes socially "hot", the search for sensationalistic stories can distort reality, and AI is no exception.

To keep it simple, Artificial Intelligence is what enables machines to learn from experience and from the information they receive. As a result, you get software capable of performing tasks while imitating human logic.

While it is commonly believed that AI aims to replace human activity, the truth is different. Its primary goal is to build software that can serve as a support for specific tasks.

Currently, modern AI is used in many sectors, particularly in those that offer digital assistance applications.

Before digging into what AI is and what its functionalities are, let's get to know the story behind it.

Can Machines Think?

By the fifties, the idea of Artificial Intelligence was already something that mathematicians, scientists and, to a certain extent, the general public were familiar with. A few years earlier, films like The Wizard of Oz and Metropolis had already brought the concept of artificially intelligent robots to the screen.

In particular, one British mathematician would go on to shape the history of computer science and of Artificial Intelligence, and his name was Alan Turing. In one of his most famous papers - Computing Machinery and Intelligence (1950) - he explored the possibility that machines could one day imitate human logic, and proposed the later famous question: "Can machines think?".

A few years later, this concept was explored by Allen Newell, Cliff Shaw, and Herbert Simon, who took part in the Dartmouth Summer Research Project on Artificial Intelligence (1956), hosted by the American computer scientist John McCarthy. There, they presented the Logic Theorist, a program designed to mimic human problem-solving skills, considered by many to be the first AI program in history.

In the following years, the computing and AI field grew significantly. Computers became faster, cheaper, and more accessible, while the quality of Machine Learning algorithms improved as well. These results caught the attention of government agencies - among them, the Defense Advanced Research Projects Agency (DARPA) - which started to fund AI research.

In the 1980s, AI went through an expansion of its algorithmic toolkit and an increase in funding. For example, Edward Feigenbaum introduced expert systems, programs aimed at imitating the decision-making of a human expert, and the Japanese government invested over $400 million in the field as part of its Fifth Generation Computer Project (FGCP).

Between the 1990s and 2000s, in spite of a lack of government funding and public interest, AI made huge progress in mimicking human behavior. In 1997, the world chess champion Garry Kasparov lost to IBM's Deep Blue, a chess-playing computer. Moreover, computers seemed to be starting to understand and display emotions, as the robot Kismet did.

What has changed over the years to take AI to another level? Put simply, computers' storage capacity and computing power have increased enormously.

How Does AI Work?

AI makes intensive use of data and intelligent algorithms to build models and extract trends, so that the software can learn from them.
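To make this concrete, here is a minimal, self-contained sketch of "learning a trend from data": fitting a straight line to a handful of points and using it to predict an unseen case. The numbers are invented purely for illustration.

```python
import numpy as np

# Hypothetical data: hours of study vs. exam score (made up for the example).
hours = np.array([1, 2, 3, 4, 5, 6], dtype=float)
scores = np.array([52, 57, 66, 70, 78, 83], dtype=float)

# "Learn" the trend: fit a degree-1 polynomial (a straight line) to the data.
slope, intercept = np.polyfit(hours, scores, 1)

# Use the learned trend to make a prediction for a case never seen before.
predicted = slope * 7 + intercept
print(f"learned trend: score ~ {slope:.1f} * hours + {intercept:.1f}")
print(f"predicted score for 7 hours of study: {predicted:.1f}")
```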

AI is a broad discipline that encompasses multiple theories and methods. The main technologies are:

  1. Machine Learning (ML);

  2. Artificial Neural Networks (ANNs);

  3. Deep Learning (DL).

Additionally, there are other technologies that support AI, including:

  • Computer Vision. It trains computers to understand and interpret the visual world. Facial recognition is a typical example.

  • Natural Language Processing (NLP). It enables computers to read and understand human language, interpret speech, and analyze sentiment; a naive sketch of sentiment analysis follows this list. Amazon Alexa is one of the best-known showcases of this technology.

  • Graphics processing units (GPUs);

  • Internet of Things (IoT).
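To make the sentiment-analysis idea above concrete, here is a deliberately naive, dictionary-based sketch. Real NLP systems rely on statistical models rather than hand-made word lists; the lexicon here is invented purely for illustration.

```python
# Toy lexicon-based sentiment scorer; the word lists are illustrative only.
POSITIVE = {"great", "good", "love", "pleasant", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "awful", "terrible"}

def sentiment(text: str) -> str:
    """Classify text as positive, negative, or neutral by counting cue words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("The service was awful"))      # negative
```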

Reasons AI Is Important

  1. Automation of common and high-volume tasks. For this kind of automation, human intervention is still needed to set up the system and ask the right questions.

  2. Improvement of existing software. It adds new functionalities that make the user experience more pleasant. This is what Apple did with its products by integrating Siri.

  3. Intelligent algorithms capable of improving continuously. From the large flow of data, these algorithms acquire the ability to classify and/or predict. The same kind of algorithm can learn to play chess or give personalized recommendations.

  4. It provides high accuracy within its field of application. For example, AI can detect cancer in magnetic resonance images with an accuracy comparable to that of a trained doctor.

  5. It turns data into a competitive advantage. Intelligent algorithms can transform data into intellectual property; in today's market, whoever has the best data wins.

Machine Learning

Machine Learning trains computers to learn from data and to improve with experience. For this reason, ML software gains accuracy as more data becomes available. Algorithms are trained to extract trends and correlations from large pools of data; using these algorithms, AI processes the data at its disposal to execute tasks and/or make predictions.
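As a minimal sketch of this "more data, more accuracy" behaviour, one can train the same classifier on increasingly large slices of a standard dataset. This uses scikit-learn's bundled handwritten-digits dataset; the exact accuracy figures will vary, but the upward trend is the point.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A small, built-in dataset of 8x8 images of handwritten digits.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train on growing portions of the data: accuracy tends to improve with size.
for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> test accuracy {acc:.2f}")
```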

ML software is an integral part of human existence, more than we might imagine: at work, at home, at the supermarket, in entertainment, and in healthcare.

Deep Learning

Deep Learning is a subset of Machine Learning. Its job is, indeed, to train computers to carry out human-like activities such as speaking, forecasting, and identifying images.
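As a small sketch of the "identifying images" part, here is a neural network with two hidden layers classifying the same handwritten-digit images as above. scikit-learn's MLPClassifier stands in for a full deep learning framework; genuinely deep models simply stack many more layers.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# A network with two hidden layers of artificial neurons.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

print(f"test accuracy identifying digit images: {net.score(X_test, y_test):.2f}")
```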

Sources

https://www.sas.com/it_it/insights/analytics/what-is-artificial-intelligence.html

https://www.sap.com/italy/products/artificial-intelligence/what-is-machine-learning.html

https://www.sas.com/it_it/insights/articles/analytics/five-ai-technologies.html

https://www.simplilearn.com/data-science-vs-data-analytics-vs-machine-learning-articl

https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/

