Deep learning: how it works

Expert.ai Team - 22 March 2016

Deep Learning definition

There are different ways to define deep learning, but in general, without becoming too technical, deep learning is a class of machine learning algorithms that can be applied to structured or unstructured data. If we focus on unstructured information, text mining is the typical task performed by a deep learning solution.

There are several algorithms in the machine learning world for performing text mining, in other words, for extracting knowledge from text, but they all share a similar statistical core (the distribution and frequency of keywords) and rely on co-occurrences. The three most common applications of text mining solutions are automatic categorization, entity and concept extraction, and natural language question answering.

How a deep learning solution works

In simple terms, a deep learning solution, for example one for automatic categorization, requires many documents for training (the more the better). In order for the algorithm to identify and store what's included in the content and link the specific content to the related tags, these documents must already be analyzed and described (tagged) by people.

For example, for a deep learning system to recognize documents about finance, it must be trained to do so. This means that the system will have to process a lot of finance-related documents tagged as “finance” as well as lots of other content tagged as “non-finance”. While analyzing these documents, the system keeps track of all the patterns, keywords and sequences it finds in the documents. A deep learning solution related to entity or concept extraction and natural language question answering works in a similar fashion.
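The training step described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not the actual system: the documents, tags, and keyword-counting approach are hypothetical stand-ins for the patterns, keywords and sequences a real solution would track.

```python
from collections import Counter

# Hypothetical tagged training documents, as described in the article:
# some tagged "finance", others tagged "non-finance".
training_docs = [
    ("the bank raised interest rates on loans", "finance"),
    ("quarterly earnings beat market forecasts", "finance"),
    ("the team won the championship game", "non-finance"),
    ("a new recipe uses fresh basil and tomatoes", "non-finance"),
]

# One keyword-frequency profile per tag -- a toy version of the
# "stored data" the trained system keeps after processing documents.
profiles = {"finance": Counter(), "non-finance": Counter()}
for text, tag in training_docs:
    profiles[tag].update(text.split())

print(profiles["finance"]["interest"])   # how often "interest" appeared in finance docs
```

Processing more tagged documents simply updates these counts, which is why performance improves with more training data but eventually plateaus.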

Once all of this information is stored, the deep learning system is said to be trained. In reality, the training phase could continue almost indefinitely, but beyond a certain number of documents processed the improvement in performance is limited.

How to measure the performance of deep learning solutions

The performance of a deep learning solution related to automatic categorization is measured in terms of precision and recall.
Precision is the number of documents correctly assigned to a category, divided by the total number of documents assigned to that category.
Recall is the number of documents correctly assigned to a category, divided by the number of documents in the sample that should have been assigned to that category.
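The two definitions above translate directly into code. The sample below is hypothetical: it assumes a small evaluation set where the true category of each document is known.

```python
# Hypothetical evaluation sample: true tags vs. the system's predictions.
true_tags      = ["finance", "finance", "finance", "non-finance", "non-finance"]
predicted_tags = ["finance", "finance", "non-finance", "finance", "non-finance"]

# Documents the system assigned to "finance" (correctly or not).
assigned = sum(1 for p in predicted_tags if p == "finance")

# Documents correctly assigned to "finance".
correct = sum(1 for t, p in zip(true_tags, predicted_tags)
              if t == p == "finance")

# Documents that should have been assigned to "finance".
relevant = sum(1 for t in true_tags if t == "finance")

precision = correct / assigned   # 2 / 3
recall    = correct / relevant   # 2 / 3
```

Note the trade-off the two metrics capture: assigning every document to "finance" would maximize recall but hurt precision, while assigning only the most obvious documents would do the reverse.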

When a trained deep learning system processes a new document, it compares the data (keywords, frequency and distribution) extracted from this document to the stored data. If the new document contains a pattern that is similar to the known pattern of the documents coded as “finance,” the system will recognize that the new document is talking about finance.
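The comparison step can be sketched as follows. The stored profiles here are hypothetical keyword counts, as if produced by a training phase; a real system would compare far richer patterns than simple word overlap.

```python
from collections import Counter

# Hypothetical stored keyword-frequency data from a training phase.
profiles = {
    "finance": Counter({"bank": 3, "interest": 2, "rates": 2, "earnings": 1}),
    "non-finance": Counter({"team": 2, "game": 2, "recipe": 1}),
}

def classify(text):
    """Assign the tag whose stored profile best matches the new document."""
    words = text.split()
    # Score each tag by how often the new document's words appear in its profile.
    scores = {tag: sum(profile[w] for w in words)
              for tag, profile in profiles.items()}
    return max(scores, key=scores.get)

print(classify("the bank cut interest rates"))  # finance
```

Because the new document's keywords overlap with the pattern stored for "finance", the system recognizes it as a finance document.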

Therefore, the level of accuracy (precision and recall) of a trained deep learning solution will vary based on the number of documents used in the training phase, and the coverage of topic-specific jargon contained in those documents.

Pure deep learning systems have significant implementation costs and limits due to their black box approach. Their text mining performance tends to become insufficient as the task grows more sophisticated. Therefore, organizations in sectors such as Finance, Energy, Intelligence, Law Enforcement and Life Sciences today tend to look to semantic-based systems to reach the level of performance required.