Differences Between AI, Machine Learning, and Deep Learning
Neural Networks: You've Got It So Easy
Neural networks are all the rage right now with increasing numbers of hackers, students, researchers, and businesses getting involved. The last resurgence was in the 80s and 90s, when there was little or no World Wide Web and few neural network tools. The current resurgence started around 2006. From a hacker's perspective, what tools and other resources were available back then, what's available now, and what should we expect for the future? For myself, a GPU on the Raspberry Pi would be nice.
The 80s and 90s

[Image: Neural network 80s/90s books and mags]

For the young'uns reading this who wonder how we old geezers managed to do anything before the World Wide Web, hardcopy magazines played a big part in making us aware of new things. And so it was Scientific American magazine's September 1992 special issue on Mind and Brain that introduced me to neural networks, both the biological and artificial kinds.
Back then you had the option of writing your own neural networks from scratch or ordering source code from someone else, which you'd receive on a floppy diskette in the mail. I even ordered a floppy from The Amateur Scientist column of that Scientific American issue. You could also buy a neural network library that would do all the low-level, complex math for you. There was also a free simulator called Xerion from the University of Toronto.
Keeping an eye on the bookstore Science sections did turn up the occasional book on the subject. The classic was the two-volume Explorations in Parallel Distributed Processing, by Rumelhart, McClelland et al. A favorite of mine was Neural Computation and Self-Organizing Maps: An Introduction, useful if you were interested in neural networks controlling a robot arm.
There were also short courses and conferences you could attend. The conference I attended in 1994 was a free two-day event put on by Geoffrey Hinton, then of the University of Toronto and a leader in the field both then and now. The most highly regarded annual conference at the time was the Neural Information Processing Systems (NIPS) conference, still going strong today.
And lastly, I recall combing the libraries for published papers. My stack of conference papers and course handouts, photocopied articles, and handwritten notes from that period is around 3″ thick.
Then things went relatively quiet. While neural networks had found use in a few applications, they hadn't lived up to their hype and from the perspective of the world, outside of a limited research community, they ceased to matter. Things remained quiet as gradual improvements were made, along with a few breakthroughs, and then finally around 2006 they exploded on the world again.
The Present Arrives

We're focusing on tools here, but briefly, those breakthroughs were mainly: new techniques for training networks many layers deep, starting around 2006, big speedups from doing the training on GPUs, and the availability of large labeled datasets for training.
There are now numerous neural network libraries, usually called frameworks, available for free download under various licenses, many of them open source. Most of the more popular ones let you run your neural networks on GPUs, and are flexible enough to support most types of networks.
Here are most of the more popular ones. They all have GPU support except for FANN.
TensorFlow
Languages: Python; C++ support is in the works
TensorFlow is Google's latest neural network framework. It's designed for distributing networks across multiple machines and GPUs. It can be considered a low-level framework, offering great flexibility but also a steeper learning curve than high-level ones like Keras and TFLearn, both covered below. However, they are working on producing a version of Keras integrated into TensorFlow.
We've already seen this one in a hack on Hackaday, in a hammer and beer bottle recognizing robot, and we even have an introduction to using TensorFlow.
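To give you a taste of the low-level API, here's a minimal sketch in the TensorFlow 1.x style that was current at the time of writing. It fits a single weight so that y = 2x using gradient descent; the training values are made up for illustration.

```python
import tensorflow as tf

# Placeholders are fed with data at run time; the weight is trainable.
x = tf.placeholder(tf.float32, shape=[None])
y = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)

loss = tf.reduce_mean(tf.square(w * x - y))  # mean squared error
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_step, feed_dict={x: [1.0, 2.0, 3.0],
                                        y: [2.0, 4.0, 6.0]})
    print(sess.run(w))  # converges toward 2.0
```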
Theano
Languages: Python
This is an open source library for doing efficient numerical computations involving multi-dimensional arrays. It's from the University of Montreal, and runs on Windows, Linux, and OS X. Theano has been around for a long time; version 0.1 was released in 2009.
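Theano's flavor is symbolic: you build up an expression, ask Theano for its gradient, and compile the whole thing into a callable function. A minimal sketch, with a toy expression of our own choosing:

```python
import theano
import theano.tensor as T

x = T.dvector('x')     # a symbolic vector of doubles
y = T.sum(x ** 2)      # a symbolic expression built from it
grad = T.grad(y, x)    # Theano derives the gradient symbolically

f = theano.function([x], [y, grad])  # compile to efficient code
print(f([1.0, 2.0, 3.0]))            # 14.0 and [2., 4., 6.]
```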
Caffe
Languages: Command line, Python, and MATLAB
Caffe is developed by Berkeley AI Research and community contributors, with command line, Python, and MATLAB interfaces. You define your model in a plain text file, give details on how to train it in a second plain text file called a solver, and then pass these to the caffe command line tool, which trains the neural network. You can then load the trained net from a Python program and use it to do something, image classification for example.
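Here's a sketch of that workflow. The file names are placeholders, and we're assuming the net's output blob is named 'prob', as it is in many of the example models:

```python
# Training happens on the command line, for example:
#   caffe train --solver=solver.prototxt
# Afterwards the trained net can be loaded from Python:
import numpy as np
import caffe

caffe.set_mode_cpu()  # or caffe.set_mode_gpu()
net = caffe.Net('deploy.prototxt',     # model definition (plain text)
                'trained.caffemodel',  # weights learned during training
                caffe.TEST)

# Feed input shaped to match the net's 'data' blob, then classify.
net.blobs['data'].data[...] = np.zeros(net.blobs['data'].data.shape)
out = net.forward()
print(out['prob'].argmax())  # index of the most likely class
```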
CNTK
Languages: Python, C++, C#
This is the Microsoft Cognitive Toolkit (CNTK) and runs on Windows and Linux. They're currently working on a version to be used with Keras.
Keras
Languages: Python
Written in Python, Keras uses either TensorFlow or Theano underneath, making those frameworks easier to use. There are also plans to support CNTK. Work is underway to integrate Keras into TensorFlow, resulting in a separate TensorFlow-only version of Keras.
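A minimal sketch of what Keras code looks like: a small fully connected classifier, with the backend's graph-building details hidden away:

```python
from keras.models import Sequential
from keras.layers import Dense

# Two layers: a hidden layer and a 10-class softmax output.
model = Sequential([
    Dense(32, activation='relu', input_shape=(784,)),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=5)  # train on your labeled data
```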
TFLearn
Languages: Python
Like Keras, this is a high-level library built on top of TensorFlow.
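For comparison, the same small classifier sketched in TFLearn:

```python
import tflearn

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 32, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net)  # attaches the loss and optimizer

model = tflearn.DNN(net)
# model.fit(x_train, y_train, n_epoch=5)  # train on your labeled data
```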
FANN
Languages: Supports over 15 languages, no GPU support
This is a high-level open source library written in C. It's limited to fully connected and sparsely connected neural networks. However, it's been popular over the years, and has even been included in Linux distributions. It's recently shown up here on Hackaday in a robot that learned to walk using reinforcement learning, a machine learning technique that often makes use of neural networks.
Torch
Languages: Lua
This is an open source library written in C. Interestingly, they say on the front page of their website that Torch is embeddable, with ports to iOS, Android, and FPGA backends.
PyTorch
Languages: Python
PyTorch is relatively new; their website says it's in early-release beta, but there seems to be a lot of interest in it. It runs on Linux and OS X and uses Torch underneath.
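PyTorch builds its graph on the fly as the code runs, which makes experimenting pleasant. A minimal sketch of one training step, with dummy data standing in for a real dataset:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 32),
    nn.ReLU(),
    nn.Linear(32, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(8, 784)         # a dummy batch of 8 inputs
y = torch.randint(0, 10, (8,))  # dummy class labels
loss = loss_fn(model(x), y)     # forward pass
optimizer.zero_grad()
loss.backward()                 # autograd computes the gradients
optimizer.step()                # update the weights
```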
There are no doubt others that I've missed. If you have a particular favorite that's not here then please let us know in the comments.
Which one should you use? Unless the programming language or OS is an issue, the main factor to keep in mind is your skill level. If you're uncomfortable with math or don't want to dig deeply into a neural network's nuances, then choose a high-level one. In that case, stay away from TensorFlow, where you have to learn more about the API than with Keras, TFLearn, or the other high-level ones. Frameworks that emphasize their math functionality usually require you to do more work to create the network. Another factor is whether or not you'll be doing basic research. A high-level framework may not let you access the innards enough to start making crazy networks, perhaps with connections spanning multiple layers or within layers, and with data flowing in all directions.
Online Services

Are you looking to add something a neural network would offer to your hack, but don't want to take the time to learn the intricacies of neural networks? For that, there are services you can use by connecting your hack to the internet.
We've seen countless examples making use of Amazon's Alexa for voice recognition. Google also has its Cloud Machine Learning Services, which include vision and speech. Its vision service has shown up here using Raspberry Pis for candy sorting and reading human emotions. The Wekinator is aimed at artists and musicians; we've seen it used to train a neural network to respond to various gestures for turning things on and off around the house, as well as for making a virtual world's tiniest violin. Not to be left out, Microsoft also has its Cognitive Services APIs, including vision, speech, language, and others.
GPUs and TPUs

[Image: Iterating through a neural network]

Training a neural network requires iterating through it, forward and then backward, each pass improving the network's accuracy. Up to a point, the more iterations you can do, the better the final accuracy will be when you stop. The number of iterations can be in the hundreds or even thousands. With 1980s and 1990s computers, achieving enough iterations could take an unacceptable amount of time. According to the article Deep Learning in Neural Networks: An Overview, in 2004 a 20x speedup was achieved with a GPU for a fully connected neural network. In 2006, a 4x speedup was achieved for a convolutional neural network. By 2010, training was as much as 50 times faster on a GPU than on a CPU. As a result, accuracies were much higher.
[Image: Nvidia Titan Xp graphics card. Image Credit: Nvidia]

How do GPUs help? A big part of training a neural network involves doing matrix multiplication, something done much faster on a GPU than on a CPU. Nvidia, a leader in making graphics cards and GPUs, created an API called CUDA which neural network software uses to take advantage of the GPU. We point this out because you'll see the term CUDA a lot. With the spread of deep learning, Nvidia has added more APIs, including cuDNN (the CUDA Deep Neural Network library), a library of finely tuned neural network primitives, and another term you'll see.
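To make that concrete, here's a sketch of the same matrix multiplication run on the CPU and then on the GPU, using PyTorch as one framework that wraps CUDA (and cuDNN) for you:

```python
import torch

a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

c_cpu = a @ b                  # matrix multiply on the CPU

if torch.cuda.is_available():  # needs an Nvidia card with CUDA
    a_gpu, b_gpu = a.cuda(), b.cuda()
    c_gpu = a_gpu @ b_gpu      # the same multiply, dispatched to the GPU
```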
Nvidia also has its own single board computer, the Jetson TX2, designed to be the brains for self-driving cars, selfie-snapping drones, and so on. However, as our [Brian Benchoff] has pointed out, the price point is a little high for the average hacker.
Google has also been working on its own hardware acceleration in the form of its Tensor Processing Unit (TPU). You might have noticed the similarity to the name of Google's framework above, TensorFlow. TensorFlow makes heavy use of tensors (think of single and multi-dimensional arrays in software). According to Google's paper on the TPU, it's designed for the inference phase of neural networks. Inference refers not to training neural networks but to using a neural network after it's been trained. We haven't seen it used by any frameworks yet, but it's something to keep in mind.
Using Other People's Hardware

Do you have a neural network that'll take a long time to train but no supported GPU, or don't want to tie up your own resources? In that case there's hardware you can use on other machines, accessible over the internet. One such service is FloydHub which, for an individual, costs only pennies per hour with no monthly payment. Another is Amazon EC2.
Datasets

[Image: Training a neural network with labeled data]

We said that one of the breakthroughs in neural networks was the availability of training data containing large numbers of samples, in the tens of thousands. Training a neural network using a supervised training algorithm involves giving the data to the network's inputs, but also telling the network what the expected output should be; that means the data has to be labeled. If you give an image of a horse to the network's inputs and its outputs say it looks like a cheetah, then it needs to know that the error is large and more training is needed. The expected output is called a label, and the data is 'labeled data'.
Many such datasets are available online for training purposes. MNIST is one such dataset, for handwritten digit recognition. ImageNet and CIFAR are two different datasets of labeled images. Many more are listed on this Wikipedia page. Many of the frameworks listed above have tutorials that include the necessary datasets.
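Loading one of these datasets is often a one-liner in the frameworks' APIs. In Keras, for example:

```python
from keras.datasets import mnist

# 60,000 labeled training images and 10,000 labeled test images.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape)  # (60000, 28, 28): 28x28 pixel images
print(y_train[0])     # the label for the first image, a digit 0-9
```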
That's not to say you absolutely need a large dataset to get respectable accuracy. The walking robot we mentioned previously, which used the FANN framework, used servo motor positions as its training data.
Other Resources

Unlike in the 80s and 90s, while you can still buy hardcopy books about neural networks, there are now numerous ones online. Two online books I've enjoyed are Deep Learning by the MIT Press and Neural Networks and Deep Learning. The frameworks listed above all have tutorials to help you get started. And then there are countless other websites and YouTube videos on any topic you search for. I find YouTube videos of recorded lectures and conference talks very useful.
The Future

[Image: Raspberry Pi 7 with GPU]

Doubtless the future will see more frameworks coming along.
We've long seen specialized neural chips and boards on the market, but none have ever found a big market, even back in the 90s. However, those weren't designed to serve the real growth area: the neural network software that everyone's working on. GPUs do serve that market. As neural networks with millions of connections for image and voice processing, language, and so on make their way into smaller and smaller consumer devices, the need for more GPUs or processors tailored to that software will hopefully result in something that can become a new component on a Raspberry Pi or Arduino board. Though there is the possibility that processing will remain an online service instead.

EDIT: It turns out there is a GPU on the Raspberry Pi — see the comments below. That doesn't mean all the above frameworks will make use of it though. For example, TensorFlow supports Nvidia CUDA cards only. But you can still use the GPU for your own custom neural network code. Various links are in the comments for that too.
There is already competition for GPUs from ASICs like the TPU, and it's possible we'll see more of those, possibly ousting GPUs from neural networks altogether.
As for our new computer overlords, neural networks as a part of our daily life are here to stay this time, but the hype around artificial general intelligence will likely quiet down until someone makes a significant breakthrough, only to explode onto the scene once again, but for real this time.
In the meantime, which neural network framework have you used and why? Or did you write your own? Are there any tools missing that you'd like to see? Let us know in the comments below.
Harnessing The Power Of AI: ProfileTree's Cutting-Edge Solutions For Business Transformation
In recent years, artificial intelligence (AI) has emerged as one of the most transformative forces in the business world. As companies across industries grapple with ever-increasing amounts of data and complexity, AI offers a powerful set of tools for driving efficiency, insight, and innovation.
At ProfileTree, we've been at the forefront of this revolution, developing a suite of cutting-edge AI solutions designed to help our clients thrive in the age of intelligent automation. With a decade of experience delivering digital transformation for leading brands and organizations across the UK and Ireland, our team has the expertise and vision to guide companies through this new frontier.
Understanding AI: A Primer for Business Leaders
Before diving into ProfileTree's specific solutions, let's take a step back and define what we mean by artificial intelligence. At its core, AI refers to the development of computer systems that can perform tasks that typically require human-like intelligence, such as recognizing patterns, making predictions, and learning from experience.
Under the broad umbrella of AI, there are several key approaches and technologies that power these capabilities:
Machine Learning: This involves training algorithms to identify patterns and make decisions based on data, without being explicitly programmed. Just as humans learn from experience, machine learning allows systems to continuously improve their performance as they are exposed to more data.
Neural Networks: Inspired by the structure of the human brain, neural networks are a type of machine learning that uses interconnected nodes to process information. Deep learning, which uses multi-layered neural networks, has driven breakthroughs in areas like image and speech recognition.
Natural Language Processing (NLP): This branch of AI focuses on enabling computers to understand, interpret, and generate human language. NLP powers applications like chatbots, sentiment analysis, and language translation.
So how does AI actually work? At a high level, AI systems ingest vast amounts of data, use algorithms to identify patterns and insights within that data, and then learn and adapt based on feedback and new inputs. This allows them to make predictions, automate complex tasks, and provide intelligent recommendations.
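A toy sketch of that loop, using scikit-learn (our illustration; the data and the choice of library are hypothetical, not a description of ProfileTree's stack):

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up historical data: [hour of visit, pages viewed] -> did they buy?
X = [[9, 2], [13, 8], [20, 1], [15, 12], [11, 3], [18, 9]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)  # learn the pattern from data
print(model.predict([[14, 10]]))            # predict for a new visitor
```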
The potential applications of AI in business are vast and varied. Some of the key areas where AI is driving transformation include:
Process Automation: AI can automate repetitive and time-consuming tasks, freeing up human workers to focus on higher-level activities. This includes everything from data entry and scheduling to customer service and supply chain optimization.
Predictive Analytics: By analyzing historical data, AI algorithms can forecast future trends and outcomes with a high degree of accuracy. This enables businesses to make smarter decisions about resource allocation, demand planning, and risk mitigation.
Personalization: AI allows companies to tailor products, services, and experiences to the unique preferences and needs of individual customers. By analyzing user behavior and sentiment, AI systems can deliver highly targeted content, recommendations, and offers.
Fraud Detection: AI's ability to identify anomalies and patterns in vast datasets makes it a powerful tool for detecting and preventing fraudulent activities. Financial institutions, for example, are using AI to monitor transactions and flag potential fraud in real-time.
These are just a few examples of how AI is reshaping the business landscape. As the technology continues to advance, its impact will only grow more profound and far-reaching.
ProfileTree's AI-Powered Solutions: Driving Measurable Results
At ProfileTree, we've harnessed the power of AI to develop a range of innovative solutions that are helping our clients achieve transformative results. Let's take a closer look at some of the key areas where our AI expertise is driving value:
Advanced Analytics for Smarter Decisions
We leverage cutting-edge machine learning algorithms to help businesses make sense of their data and uncover actionable insights. By processing and analyzing massive datasets in real time, our solutions enable companies to spot trends, predict outcomes, and optimize performance with unprecedented speed and accuracy.
By analyzing one client's customer behavior and purchase patterns, we were able to identify key factors driving cart abandonment and develop targeted strategies to improve conversion rates. The power of AI-driven analytics is clear: according to a recent survey by McKinsey, companies that have fully absorbed AI into their analytics workflows are seeing a 6% increase in revenue on average, compared to those that have not. With ProfileTree's support, businesses can gain a significant competitive advantage by harnessing the power of predictive intelligence.
Streamlined Content Creation and Management
In the digital age, content is king - but creating high-quality, engaging content at scale can be a significant challenge. That's where ProfileTree's AI-assisted content suite comes in. By leveraging natural language processing and generation technologies, our tools streamline the content creation process and help businesses produce more effective content in less time.
One of our clients, a global B2B software company, was struggling to keep up with the demands of creating content for multiple markets and personas. By implementing our AI-powered content tools, they were able to automate much of the research, ideation, and drafting process. As a result, they increased their content output by 200% while maintaining quality and consistency across all channels.
Our AI content suite also includes powerful optimization and distribution features. By analyzing factors like user engagement, search rankings, and conversion rates, our algorithms can continuously refine and adapt content to maximize its impact. This enables businesses to get more mileage out of every piece of content they create.
The benefits of AI-driven content are significant. According to research by Accenture, companies that have fully committed to AI-generated content have seen a 4x increase in content efficiency and a 2x increase in content effectiveness. ProfileTree's suite makes it easy for businesses to tap into these gains.
Hyper-Personalization for Superior Customer Experiences
Personalization has become a critical imperative for businesses looking to stand out in an increasingly competitive marketplace. But delivering truly personalized experiences at scale requires a level of data analysis and real-time adaptation that is beyond human capabilities. That's where ProfileTree's AI-driven personalization engine comes in.
By analyzing vast amounts of customer data - from demographics and purchase history to browsing behavior and social media activity - our algorithms create detailed profiles of individual users and their preferences. These profiles enable our platform to deliver hyper-targeted content, recommendations, and offers across all customer touchpoints.
We recently implemented our personalization solution for a leading fashion retailer, with impressive results. By dynamically adapting the products, styling tips, and promotions shown to each visitor based on their unique profile, the site was able to increase conversion rates by 75% and average order value by 40%.
The power of AI-driven personalization is backed up by hard data. According to research by BCG, companies that implement advanced personalization strategies see a 6-10% increase in revenue, 2-3 times greater customer satisfaction rates, and 10-30% higher marketing spend efficiency. In other words, personalization pays off - and AI is the key to achieving it at scale.
At ProfileTree, we're pushing the boundaries of what's possible with AI-powered personalization. Our platform goes beyond simple segmentation to create truly individual experiences for each user. This includes:
Dynamically optimizing website layouts, content, and calls-to-action based on user behavior and inferred preferences
Tailoring email and push notification campaigns to individual user profiles and contexts
Providing intelligent product and content recommendations based on user affinities and predicted interests
Personalizing customer service interactions based on sentiment analysis and interaction history
By making each touchpoint feel uniquely relevant and valuable to the individual customer, our clients are able to foster deeper engagement, loyalty, and spend.
Addressing Potential Concerns about AI
While AI offers tremendous potential for business transformation, it's natural to have questions or concerns. Here at ProfileTree, we understand these concerns and want to assure you that AI is here to augment human capabilities, not replace them. Our focus is on using AI to automate repetitive tasks, freeing up human employees to focus on higher-level strategic thinking and creative problem solving.
Data Privacy and Security: ProfileTree takes data privacy and security very seriously. We adhere to the strictest industry standards and regulations to ensure your data is always protected.
Transparency and Explainability: We believe in transparent and explainable AI. Our solutions are designed to be clear about how they arrive at decisions, fostering trust and user confidence.
Case Studies: Empowering Small Businesses with AI
At ProfileTree, we're passionate about empowering businesses of all sizes to leverage the power of AI. Here are a couple of examples showcasing how we've supported small businesses in key sectors:
ConnollyCove: Personalization and Automation Drive Conversions
ConnollyCove, a popular travel website, partnered with ProfileTree to personalize its user experience. We implemented an AI-powered recommendation engine that analyzes website data and analytics. This allows ConnollyCove to suggest tailored experiences, from recommending nearby attractions to offering spa treatments based on visitor profiles.
LearningMole: Advanced AI Learning Enhances Children's Education
LearningMole is a leading provider of e-learning platforms for children. They approached ProfileTree to explore how AI could personalize the learning experience for each child. We implemented an AI-powered adaptive learning system that monitors a child's progress and adjusts the difficulty and pace of learning materials in real-time. This personalized approach ensures children are neither bored nor overwhelmed, keeping them engaged and motivated to learn. LearningMole has reported a dramatic increase in student engagement and learning outcomes since implementing our AI solution.
These are just a few examples of how ProfileTree is helping small businesses harness the power of AI to achieve remarkable results. If you're a small business owner curious about how AI can benefit your company, contact ProfileTree today for a free consultation.
Proactive Cybersecurity with AI
As businesses become increasingly digital, cybersecurity has emerged as one of the most critical challenges they face. With cyber threats growing in volume and sophistication, traditional security approaches are struggling to keep up. That's where ProfileTree's AI-powered cybersecurity solutions come in.
Our platform leverages machine learning algorithms to continuously monitor network activity, user behavior, and system logs for signs of potential threats. By analyzing patterns and anomalies in real-time, our AI can identify and alert on cyber attacks with a speed and accuracy that would be impossible for human security teams.
We recently deployed our AI cybersecurity solution for a major financial services client. Within the first month, our platform identified and blocked a series of advanced phishing attempts that had gone undetected by the client's existing security tools. By preventing a potential data breach, we helped the client avoid millions in financial and reputational damage.
The benefits of AI in cybersecurity are well-established. According to the Capgemini Research Institute, 69% of organizations believe AI is necessary to respond to cyberattacks and 74% say AI-enabled automation is improving the efficiency of threat detection and response. With ProfileTree's solutions, businesses can stay one step ahead of even the most sophisticated cyber threats.
At ProfileTree, we're committed to helping our clients not just navigate this change, but thrive in it. Our team is constantly pushing the boundaries of what's possible with AI, from exploring new frontiers like deep learning and computer vision to pioneering best practices in AI ethics and governance.
By partnering with ProfileTree, businesses can tap into this cutting-edge expertise and stay ahead of the curve in an increasingly AI-driven world. Whether you're looking to optimize your operations, transform your customer experiences, or secure your digital assets, our team has the knowledge and tools to help you succeed.
The AI revolution is here - and with ProfileTree as your partner, you can be at the forefront of it. Contact us today to learn more about how our AI solutions can help you achieve your business goals and unlock new levels of growth and innovation.
ProfileTree is a leading digital services agency based in Northern Ireland. Founded by Ciaran Connolly, we specialize in helping businesses harness the power of technology to drive transformation and growth. With a team of experts in AI, software development, web design, and digital marketing, we provide comprehensive solutions that deliver measurable results.
Disclaimer: The above is a sponsored post, the views expressed are those of the sponsor/author and do not represent the stand and views of Outlook Editorial.
Apple Researchers Open-source OpenELM Language Model Series
Apple Inc. researchers today open-sourced a series of small language models, OpenELM, that can outperform similarly sized neural networks.
OpenELM's debut comes a day after Microsoft Corp. introduced a small language model lineup of its own. The first neural network in that series, Phi-3 Mini, features 3.8 billion parameters. Microsoft says the AI can generate more accurate prompt responses than Llama 2 70B, a large language model with 70 billion parameters.
Apple's OpenELM series comprises four models with varying capabilities. The smallest model features 270 million parameters, while the most advanced packs about 1.1 billion. Apple trained the four neural networks on a dataset with about 1.8 trillion tokens, units of data that each contain a few characters.
The OpenELM series is based on a neural network design known as the decoder-only Transformer architecture. It's also the basis of Microsoft's newly debuted Phi-3 Mini model, as well as many larger LLMs. A neural network based on the architecture can take into account the text that precedes a word when trying to determine its meaning, which boosts processing accuracy.
A language model is made up of interconnected building blocks called layers. The first layer takes the prompt provided by the user, performs some of the processing necessary to generate a response and then sends the processing results to the second layer. This workflow is then repeated multiple times until the input reaches the last AI layer, which outputs a prompt response.
In models based on the decoder-only Transformer architecture, all the layers are usually based on the same common design. Apple says that its OpenELM model series takes a different approach.
The manner in which an AI layer goes about processing user prompts is determined by configuration settings called parameters. Those settings are responsible for, among other tasks, determining which data points a language model takes into account when making a decision. An AI layer's behavior is determined not only by the type of parameters it includes but also by the number of those parameters.
In contrast with more traditional language models, OpenELM's layers aren't based on an identical design; rather, each includes a different mix of parameters. Apple's researchers determined that this arrangement helps optimize the quality of responses. In an internal test, the most capable version of OpenELM managed to outperform a slightly larger model that was trained on twice as much data.
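Here's a sketch of that idea in PyTorch: rather than giving every layer an identical parameter budget, each layer gets its own width. The layer type and the widths below are our illustration, not OpenELM's actual configuration (OpenELM is decoder-only, and Apple varies other per-layer settings as well):

```python
import torch.nn as nn

embed_dim = 512
ffn_widths = [512, 768, 1024, 1536]  # hypothetical per-layer widths

layers = nn.ModuleList(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8,
                               dim_feedforward=w, batch_first=True)
    for w in ffn_widths
)
for i, layer in enumerate(layers):
    n = sum(p.numel() for p in layer.parameters())
    print(f"layer {i}: {n:,} parameters")  # each layer's count differs
```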
Alongside OpenELM, Apple today open-sourced several tools designed to help developers more easily incorporate the model series into their software projects. One of those tools is a library that makes it possible to run the models on iPhones and Macs. The library makes use of MLX, a framework Apple open-sourced in December to ease the task of optimizing neural networks for its internally-designed chips.
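For developers who'd rather load the checkpoints directly, a sketch using the Hugging Face transformers library; the model and tokenizer IDs here are our assumptions, not from Apple's announcement:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub IDs; OpenELM reuses the Llama 2 tokenizer, which is
# gated behind Meta's license on the Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```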