Deep Learning Software – Taking Us to New Frontiers

Google’s computing resources and technological innovations have been nothing short of astounding over the past several years, and perhaps nowhere more so than in the area of AI called deep learning. Deep-learning software tries to replicate the activity of neurons in the brain in an effort to recognize patterns in digital representations of sound, images and other data.

Thanks to advances in mathematical techniques and increasingly powerful computers, computer scientists can now build many more layers of virtual neurons than ever before, which means the idea of machines reading people’s minds isn’t as far-fetched as one might think.

Speech and image recognition have improved dramatically as a result. According to a report from the Massachusetts Institute of Technology (MIT), a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image-recognition effort at identifying objects. Google also used the technology to cut the error rate of speech recognition in its latest Android mobile software.

Other leading technology companies such as Microsoft have also made great strides in deep learning. Microsoft chief research officer Rick Rashid astounded an audience in China last year with a demonstration of speech software that transcribed his spoken words into English text with an error rate of less than 7 per cent. The system then translated that text into Chinese and, using a simulation of his own voice, spoke the result aloud in Mandarin.

Google has without doubt become a leader in deep learning. Last year the company behind the world’s largest search engine (among many other things) bought a startup co-founded by Geoffrey Hinton, a University of Toronto computer science professor. Hinton, who will split his time between the university and Google, plans to take ideas out of this field and apply them to real problems such as image recognition, search and natural-language understanding.

Concepts that were once dismissed as mere science fiction are now becoming reality before our disbelieving eyes. Humans have developed computing machines to the point where they are transforming communications, manufacturing, medicine and transportation, to name a few fields. Fans of the television show Jeopardy! will recall IBM’s Watson computer, which relies on deep-learning techniques; the advanced AI methodology allowed the computer not only to answer questions but to win. Microsoft has made use of deep learning in its Windows Phone and Bing voice search.

The incredible part is that computer scientists maintain they have only begun to scratch the surface of what deep learning can and will make possible. Over time its applications will extend well beyond speech and image recognition, but getting there will require far greater conceptual and software innovation. Nor should we forget that as the software grows mind-bogglingly complex, the need for more advanced hardware to handle the added processing must also be addressed.

The field has come a long way from its earliest building blocks: simple software neurons that each produce a mathematical output between 0 and 1 in response to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a sound.
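To make that concrete, here is a minimal sketch of a single software neuron, written in Python purely for illustration (the features, weights and bias are made up, and this is not any company’s actual system): it weights a few digitized input features and squashes the sum into a value between 0 and 1.

```python
import math

def neuron(inputs, weights, bias):
    """A single software 'neuron': a weighted sum of digitized
    input features squashed into a value between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic (sigmoid) squashing

# Hypothetical digitized features, e.g. edge strength, a shade of blue, an energy level
features = [0.8, 0.2, 0.5]
weights = [1.5, -0.7, 0.3]
print(neuron(features, weights, bias=-0.4))  # prints a value between 0 and 1 (about 0.69 here)
```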

By the mid-1980s there was a revival of interest in so-called neural networks, which in effect made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label the data before feeding it to the network.
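A toy sketch of what that human involvement looks like, with invented data and a single sigmoid neuron rather than a real 1980s network: every example carries a hand-supplied label, and the training loop nudges the weights toward those labels.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Every training example must carry a human-supplied label (1 or 0).
labeled_data = [([0.9, 0.1], 1), ([0.2, 0.8], 0), ([0.7, 0.3], 1), ([0.1, 0.9], 0)]

weights, bias, rate = [0.0, 0.0], 0.0, 0.5
for _ in range(200):                        # repeated passes over the labeled set
    for features, label in labeled_data:
        output = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = label - output              # how far the prediction is from the human label
        weights = [w + rate * error * x for w, x in zip(weights, features)]
        bias += rate * error                # nudge the weights toward the labels
```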

Within the past few years, researchers have made a number of fundamental conceptual breakthroughs.

Big Data

By most accounts, about 80 per cent of the recent advances in AI can be attributed to the availability of more computer power. That is why Google has invested so much in its data centers since about 2009. Deep learning has also benefited from the company’s method of splitting computing tasks among many machines so they can be done much more quickly.
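The principle behind that splitting is simple even if Google’s infrastructure is not: divide a large computation into chunks, process the chunks in parallel, then combine the partial results. The sketch below uses local Python processes as stand-ins for many machines; it illustrates the pattern, not Google’s actual data center software.

```python
from multiprocessing import Pool

def heavy_step(chunk):
    """Stand-in for an expensive per-chunk computation."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool(processes=4) as pool:              # four workers stand in for many machines
        partials = pool.map(heavy_step, chunks)  # each worker handles a slice of the task
    print(sum(partials))                         # combine the partial results
```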

Big Data consists of data sets so large that commonly used software tools cannot capture, curate, manage and process them within a tolerable elapsed time.

Big data is a constantly moving target in terms of size and content, ranging from a few dozen terabytes to many petabytes in a single data set. Many companies find it difficult to process with on-hand database management tools or traditional data-processing applications, and so struggle to meet the full expectations of their enterprise. Cloud computing alleviates much of that strain by hosting immense volumes of data from a centralized location.

By early 2014, deep learning had noticeably improved voice search on smartphones. Until last year, Google’s Android software used a method that misunderstood many words, but in preparation for a new release of Android in July 2013 the old system was replaced with one based on deep learning. The gain in accuracy was quickly apparent, with errors dropping by almost 25 per cent.

But despite the many undeniable advances in computer technology, by no means is everyone convinced that deep learning will ultimately rival human intelligence. One of the main criticisms from skeptics is that deep learning and AI don’t capture all of the brain’s activity, leaving gaps that will invariably skew the results of artificial-intelligence computing.

Perhaps the most common criticism is that deep learning fails to account for the concept of time. Brains process streams of sensory data, and human learning depends on our ability to recall sequences of patterns. Heaping more data onto a sophisticated program will help lessen the gap, but not erase it entirely.
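To see what “accounting for time” means in practice, compare a stateless prediction with one that carries a running state from earlier inputs. The sketch below is a hand-rolled illustration of that distinction with made-up weights, not a description of how any production system works.

```python
import math

def stateless(x, w=1.2, b=-0.3):
    """Feedforward: each input is judged in isolation."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

def with_memory(stream, w_in=1.2, w_state=0.8, b=-0.3):
    """A minimal recurrent update: the state carries the sequence's history."""
    state = 0.0
    outputs = []
    for x in stream:
        state = math.tanh(w_in * x + w_state * state + b)  # past inputs shape the present
        outputs.append(state)
    return outputs

stream = [0.1, 0.9, 0.9, 0.1]
print([round(stateless(x), 2) for x in stream])    # the same input always gets the same answer
print([round(s, 2) for s in with_memory(stream)])  # identical inputs get different outputs over time
```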

It’s hard to say where future software applications will gain the most traction. Chris O’Neill has gone on record as saying he believes wearable computing devices will become more prominent in the very near future. From Google’s perspective, stronger image search would help the likes of YouTube, for example. Perhaps we will soon reach the point where advances in image recognition enhance the efficiency of self-driving vehicles.

One thing is for sure: the surface of this frontier has barely been scratched.
