Real-Time Analysis In Artificial Intelligence
One of the lesser-realized but very important elements of artificial intelligence is real-time adaptation and decision-making. Where is this important, one might ask? The ability to process information as it arrives and then to make informed decisions without significant delay is an area where AI can be quite valuable.
Familiar applications of real-time adaptation include command-and-control environments, security situations, traffic control or monitoring environments and in autonomous driving (autopilots). Every one of these situations requires intelligent systems to be able to make adjustments in response to dynamic situations and—in most cases—in real time.
Adapt in Real Time
Any one of us can probably imagine the number of decisions that must be made in a self-driving vehicle solution. The ability of any system to "adapt in real time" is becoming essential in this fast-moving world—and AI is a primary element in those advancements.
Today, companies must be able to act quickly on data-driven insights to be more agile, proactive and to seize emerging opportunities or respond to sudden market shifts. Amazon is a good example of a business that must be able to move in a certain direction without being burdened by "legacy" components that bind it to restrictive methods that cannot react to sudden changes in marketplace demands.
For time-based analysis, the AI-driven environment might depend on the following: (1) continuous time analysis and (2) discrete time analysis. Each of these methodologies in mathematics, networks and analysis has subcomponents applicable to many elements in media, business/financial forecasting, signal and process management and system modeling (both complexity and accuracy).
Signal processing involves elements such as signal data visualization techniques, preprocessing and filtering techniques, plus physical-based time-domain and frequency-domain analysis (especially in real time), and the use or application of the data derived from signal processing.
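To make those elements concrete, here is a small, hypothetical sketch (not tied to any particular product) that moves a noisy signal between the time domain and the frequency domain using NumPy; the sampling rate, tone and noise level are invented for illustration:

```python
import numpy as np

fs = 1000                              # sampling rate in Hz (illustrative)
t = np.arange(0, 1, 1 / fs)            # one second of samples
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 5 * t)      # a 5 Hz tone
signal = clean + 0.5 * rng.standard_normal(t.size)   # add measurement noise

# Frequency-domain analysis: locate the dominant frequency.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
dominant = freqs[np.argmax(np.abs(spectrum))]

# Preprocessing/filtering: zero the bins above 10 Hz, then return
# to the time domain for further use of the cleaned-up data.
spectrum[freqs > 10] = 0
filtered = np.fft.irfft(spectrum, n=t.size)
print(dominant)   # → 5.0 (the tone stands out despite the noise)
```

The same round trip (analyze in frequency, act in time) underlies much of the real-time filtering described above.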
Broadly defined, signal processing is a fundamental discipline in data science that deals with the extraction, analysis and manipulation of signals and time-series data. The depth of this science can get very complex and highly domain-dependent, which is why we may see the "data scientist-engineer" profession grow rapidly in the workplace.
Reading the Signals
In data science, a signal is defined as a gesture, action, element or sound used to convey information or instructions. In other words, signals "transmit" information (such as instructions) by gesture, action or element/component, including audio/visual elements such as sound, light or even temperature changes in the environment.
When placed into the context of signal processing, a "signal" can be any form of information that varies over time or space. Such signals may take many familiar forms, ranging from audio waveforms and temperature readings to financial market data and sensor-activity measurements. AI operations work by categorizing such signal data and learning the variations or changes in real environments prompted by stimuli from external sources, including humans, the climate and physical alterations to the environment.
Neural Network
According to IBM, a neural network "is a machine-learning program, or model, that makes decisions in a manner like the human brain." In our case, these networks are specifically computer systems modeled on the human brain and nervous system, i.e., the "ideal" AI environment. By using processes that mimic how biological neurons work together to identify phenomena, the model can weigh options and arrive at conclusions.
Conclusions are generally reached by using a series of training exercises which in turn "machine-learn" to improve their accuracy over time. As these successive training exercises are fine-tuned for accuracy (and application), they become powerful tools in data and computer science and, in turn, support artificial intelligence. The results are that tasks, such as image or speech recognition, can take seconds compared to the hours a human might require using manual identification methods.
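As a minimal illustration of such successive training exercises improving accuracy over time, here is a toy perceptron (a single artificial neuron) trained on invented, linearly separable data; none of the numbers come from the article:

```python
import numpy as np

# Invented 2-D data, labeled by which side of the line x1 + x2 = 0
# each point falls on; points too close to the boundary are dropped
# so the classes are cleanly separable.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
keep = np.abs(X[:, 0] + X[:, 1]) > 0.2
X = X[keep]
y = (X[:, 0] + X[:, 1] > 0).astype(int)

w, b, lr = np.zeros(2), 0.0, 0.1
accuracy = []
for epoch in range(10):                    # successive "training exercises"
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (yi - pred) * xi         # adjust weights on mistakes only
        b += lr * (yi - pred)
    accuracy.append(float(np.mean((X @ w + b > 0).astype(int) == y)))

print(accuracy)   # accuracy climbs toward 1.0 over the passes
```

Each pass fine-tunes the weights, so later passes classify more of the data correctly—the same feedback loop, at vastly larger scale, that powers image and speech recognition.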
One of the best-known examples of a neural network is Google's PageRank (PR) search algorithm, used to rank web pages in its search-engine results. We note that PR is named after both the term web page and Google co-founder Larry (Lawrence) Page, who founded the company with Sergey Brin.
ANNs and SNNs
Neural networks are sometimes cataloged as artificial neural networks (ANNs) or simulated neural networks (SNNs). There are several types or forms of neural networks, two of which are discussed here.
Fig. 1: An ANN training process uses a set of unit cells (or artificial neurons), depicted by the circles, arranged in an input layer, one or more hidden layers and an output layer. Each neuron is connected to those neurons in the neighboring layers via adaptive weights. (Image credit: Karl Paulsen)
Artificial neural networks (ANNs) are a type of machine-learning algorithm that employs artificial neurons—a network of interconnected nodes (see Fig. 1 for a conceptual node diagram). These nodes attempt to model the human brain's neural network. Each individual node acts like its own linear regression model—composed of weights, a bias (i.e., a "threshold") and an output. Linear regression models predict the value of a variable based on the value of another variable (see Fig. 2 for the equation).
Fig. 2: Linear regression in machine learning is a statistical method used to model the relationship between a dependent variable and one or more independent variables. The aim is to find a linear equation that best describes this relationship, allowing the system to make predictions based on new data. (Image credit: Karl Paulsen)
Linear regression fits a straight-line model (or surface) that minimizes the discrepancies between predicted values and actual output values—the approach used in the training models behind artificial intelligence solutions—and it looks much like the slope equation from Algebra 1 (y = mx + b). For more information on the mathematics behind these principles, follow up on the details of linear equations, least-squares methods and predictive coefficients.
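A minimal sketch of such a least-squares fit, using invented data points that happen to lie exactly on y = 2x + 1:

```python
import numpy as np

# Invented data on the line y = 2x + 1 (slope m = 2, intercept b = 1).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# Least squares: choose [m, b] minimizing the squared discrepancies
# between the predicted values m*x + b and the actual outputs y.
A = np.column_stack([x, np.ones_like(x)])
(m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
print(m, b)   # → 2.0 1.0 (up to floating-point rounding)
```

With noisy real-world data the recovered slope and intercept would only approximate the underlying relationship, which is exactly what a training process iterates to improve.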
Typically, ANNs will be used to solve complex problems—for example, facial recognition or document summary processing. Essentially, the theory behind the ANN is the teaching of computers to process data in methodologies that mimic the human brain.
Disadvantages of ANNs may include: (1) they are computationally expensive and require massive numbers of training cycles to reach acceptable accuracy; and (2) it can be difficult for them to perfect predictions or categorize data. At this point, generative AI (one of the more familiar AI activities) can be considered "experimental" at best—with improvements being made as applications mature and libraries of training models grow.
A simulated neural network (SNN) is simply another name for an artificial neural network (ANN), considered a subset of machine learning. In summary, ANNs are made up of connected nodes, or artificial neurons, that are loosely based on the neurons in the brain.
Discrete and Continuous
In our AI category, "continuous time analysis" refers to studying systems where changes occur smoothly over an uninterrupted time interval. Its counterpart, "discrete time analysis," examines systems where changes are only observed at specific, discrete points in time, essentially treating time as a series of intervals rather than a continuous flow.
The nature of the problem to be solved and the amount, type and frequency of the data being analyzed are the determining factors when choosing between the discrete- and continuous-time analysis approaches.
Discrete time models are often the preferred choice due to computational ease; however, continuous time models can provide a more accurate representation of certain real-world phenomena when applicable (e.g., in self-driving vehicles or other autonomous activities).
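A toy comparison of the two approaches on one simple system (exponential decay, dx/dt = -x, chosen purely for illustration): the continuous-time analysis gives an exact closed form, while the discrete-time analysis steps the system at fixed intervals and trades accuracy for computational ease.

```python
import math

x0, t_end = 1.0, 1.0     # initial state and horizon (illustrative values)

def continuous(t):
    # Continuous-time analysis: the exact solution x(t) = x0 * exp(-t).
    return x0 * math.exp(-t)

def discrete(steps):
    # Discrete-time analysis: observe and update only at fixed intervals
    # (forward Euler steps of size dt).
    dt = t_end / steps
    x = x0
    for _ in range(steps):
        x += dt * (-x)
    return x

exact, coarse, fine = continuous(t_end), discrete(10), discrete(10000)
print(exact, coarse, fine)   # finer steps approach the continuous answer
```

Shrinking the step size closes the gap with the continuous model, which is why safety-critical applications may justify the extra computation.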
AI Art and Approaches
AI allows leaders of organizations to make better decisions by using the built-in methodologies employed in many conditioned AI approaches to problems (such as linear regression techniques).
Furthermore, better insights may come from uncovering patterns and relationships that others had previously seen and thought they already understood. Fundamentally, this is how AI is utilized in the generation or modification of art and images—referred to as "AI art."
AI art is "any kind of image, text, video, audio or other kind of digital artwork produced by generative AI tools." Such tools leverage millions of written, visual or aural content samples in reference to the prompts or known images employed when creating AI-generated art. AI art is currently integrated into many, if not all, of the major products from companies including Adobe, Microsoft, Google and more.
Google Wants To Simulate The World With AI
Apparently not content with its grip on this world, Google is in the process of staffing up its DeepMind research lab to build generative models that are capable of simulating the physical world. The project—which will be headed up by Tim Brooks, one of the leads who helped build OpenAI's video generator, Sora—will be a critical part of the company's attempt to achieve artificial general intelligence, according to job listings related to the new team.
Brooks, who joined DeepMind after fleeing from OpenAI back in October, and his team have "ambitious plans to make massive generative models that simulate the world." According to the role descriptions, the effort to build world models will "power numerous domains, such as visual reasoning and simulation, planning for embodied agents, and real-time interactive entertainment." If you're willing to take on one of these roles, maybe you can figure out what those vagaries mean and get back to us.
A world model, put as simply as possible, typically seeks to simulate how the world actually works. Generative models like Sora can replicate things they have seen in their training data, but they don't have any real understanding of why those things happen. Such a model can successfully generate a video of a person throwing a baseball without any grasp of the physics involved. World models aim to arm the machine with enough information to actually parse how an action happens and its likely outcome.
Meta's chief AI scientist Yann LeCun described world models this way during a speech at Hudson Forum earlier this year: "A world model is your mental model of how the world behaves…You can imagine a sequence of actions you might take, and your world model will allow you to predict what the effect of the sequence of action will be on the world."
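LeCun's description can be sketched as a toy predict-before-acting loop. The "world" below is a made-up one-dimensional position/velocity system—nothing that DeepMind or Meta has described—and the point is only the structure: imagine a sequence of actions, roll the model forward, compare the predicted outcomes.

```python
def world_model(state, action):
    """Predict the next state given an action ('accelerate' or 'brake')."""
    position, velocity = state
    velocity += 1.0 if action == "accelerate" else -1.0
    velocity = max(velocity, 0.0)     # this toy world can't move backward
    return (position + velocity, velocity)

def imagine(state, actions):
    """Roll the model forward over an imagined sequence of actions."""
    for action in actions:
        state = world_model(state, action)
    return state

# Before acting, compare two candidate plans purely in imagination.
start = (0.0, 0.0)
plan_a = imagine(start, ["accelerate", "accelerate", "brake"])
plan_b = imagine(start, ["accelerate", "brake", "brake"])
print(plan_a, plan_b)   # → (4.0, 1.0) (1.0, 0.0)
```

A real world model would have to learn `world_model` from data, which is where the compute and training-data problems discussed below come in.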
World models are difficult to build for a number of reasons, including the massive amount of compute needed to run a model and the lack of sufficient training data to create an accurate model, resulting in most world models working only for limited and specific contexts.
DeepMind's team seems intent on taking the world model wider. The plan is to build "real-time interactive generation" tools on top of the models and potentially look into how they could integrate their world model into Google's large language model Gemini.
One likely area that DeepMind will try to tackle is video games. The job description for the new team notes that it will collaborate with the Veo and Genie teams at Google. Veo is Google's Sora-like video generator, and Genie is an existing world model that can simulate 3D environments in real time. The video game industry is already keen to adopt AI tools, displacing thousands of workers. A CVL Economics survey found that more than 86% of all gaming firms have already adopted generative AI tools and nearly 15% of all gaming jobs could be disrupted by 2026.
Maybe improving this world would be a better use of time than modeling it.
Where AI Meets The Physical World: The Rise Of Humanoid Robots
Greg Ombach, Head of Disruptive Research & Technology, Senior Vice President at Airbus.
A New Era In Robotics
Artificial intelligence (AI) is moving beyond the digital space and significantly impacting the physical world. At the Web Summit in Lisbon, Cristiano Amon, CEO of Qualcomm, highlighted how small language models (SLMs) enable AI to operate at the edge. These models let robots and devices process generative AI locally, eliminating delays, enhancing responsiveness and improving energy efficiency. This innovation is a breakthrough for real-time applications and robotics.
At the same event, Peggy Johnson, CEO of Agility Robotics, introduced Digit, a humanoid robot designed for industrial tasks like moving goods and handling repetitive processes. Meanwhile, at Slush in Helsinki, Bernt Øivind Bornich, CEO of 1X Robotics, showcased EVE, a humanoid robot tailored for customer-facing roles in retail and healthcare.
These two examples illustrate the diverse directions in robotics development: Digit prioritizes industrial applications, while EVE focuses on commercial environments.
With my background in driving innovation across industries, I see humanoid robots spearheading a new era in robotics. By combining cutting-edge AI with physical functionality, they are set to transform productivity and reshape collaboration in diverse sectors.
How AI Meets The Physical World
The rise of humanoid robots like Digit and EVE highlights a transformative shift—AI is no longer confined to software. These robots combine cognitive intelligence with physical actions, enabling them to perform tasks in real-world environments.
At the core of this transformation is edge AI, where data is processed locally on devices rather than relying on the cloud. This eliminates latency, enhances energy efficiency and allows robots to adapt quickly to dynamic situations. Whether sorting goods in a warehouse or assisting customers in a store, this ability to act in real time makes humanoid robots indispensable across industries.
Revolutionizing Industrial Applications With Humanoid Robots
Humanoid robots like Digit and EVE are revolutionizing industrial processes across sectors such as automotive and aerospace by combining AI-driven intelligence with physical capabilities. In the automotive industry, robots transform production lines by assembling components, handling heavy materials and adapting to dynamic workflows. Often referred to as "AI Getting Physical," this transition enables robots to navigate human-centric environments seamlessly, thanks to reinforcement learning and edge AI. Their contributions enhance productivity and allow human workers to focus on higher-value tasks, fostering collaboration between humans and machines.
Similarly, humanoid robots streamline manufacturing and logistics processes in the aerospace sector. AI-driven robotics ensure consistent quality while accelerating production rates in assembly lines, reducing errors and minimizing material waste. These robots also handle logistics tasks like sorting, loading and transporting goods, operating efficiently without requiring significant infrastructure changes. Their ability to adapt to complex, dynamic environments ensures uninterrupted operations, making them valuable assets for the aerospace industry.
By integrating reinforcement learning and edge AI, humanoid robots in both sectors improve efficiency and scalability and demonstrate how physical AI can solve challenges in demanding industrial environments.
Commercial Applications
With its focus on customer-facing roles, EVE exemplifies how humanoid robots enhance the commercial sector. In retail, robots can guide shoppers, answer questions and assist with checkouts, improving customer satisfaction while allowing staff to manage more complex responsibilities.
In healthcare, humanoid robots assist with physically demanding tasks such as lifting patients, delivering supplies and monitoring vital signs. This collaboration alleviates labor shortages and enhances the quality of care, particularly in elder care settings where demand continues to grow.
The Role Of Edge AI In Robotics
The game-changing role of edge AI lies in its ability to make robots faster, more energy-efficient and more responsive. By processing data locally, robots can make real-time decisions and adapt to new environments without delays.
Reinforcement learning further enhances this adaptability, allowing robots to learn from experience and refine their performance. For instance, a robot in a warehouse can adjust to inventory changes or new layouts without reprogramming, ensuring uninterrupted productivity. These advancements are critical for scaling robotics across industries.
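A toy illustration of that kind of adaptation—emphatically not a real warehouse controller, with every number invented—is tabular Q-learning on a five-state corridor. The same learning loop relearns its route when the goal is moved, with no reprogramming:

```python
import random

def train(goal, episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on states 0..4 with moves -1/+1; reward 1 at the goal."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = rng.randrange(5)                  # start anywhere ("exploring starts")
        for _ in range(20):
            if rng.random() < eps:            # occasional random exploration
                a = rng.choice((-1, 1))
            else:                             # otherwise act greedily
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), 4)        # walls clamp movement
            r = 1.0 if s2 == goal else 0.0
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == goal:
                break
    return q

def greedy_path(q, goal):
    """Follow the learned policy from state 0 until the goal (or 10 steps)."""
    s, path = 0, [0]
    for _ in range(10):
        s = min(max(s + max((-1, 1), key=lambda act: q[(s, act)]), 0), 4)
        path.append(s)
        if s == goal:
            break
    return path

q_first = train(goal=4)
q_moved = train(goal=2)    # the "layout change": same loop, goal relocated
print(greedy_path(q_first, goal=4), greedy_path(q_moved, goal=2))
```

Real robots learn over continuous sensor streams rather than a lookup table, but the principle is the same: experience updates the policy, so a changed environment just produces new experience.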
Challenges Ahead
Despite their potential, humanoid robots face significant challenges:
• Cost: The high price of advanced robotics limits access for small and medium-sized enterprises. As production scales, these costs are expected to decrease.
• Regulations: Clear frameworks are needed to address ethical concerns, safety and privacy issues.
• Training: Workers must be equipped to collaborate effectively with robots, ensuring a smooth workplace transition.
Overcoming these challenges is essential for humanoid robots to gain widespread acceptance and become a seamless part of daily operations.
Looking To The Future
Humanoid robots are rapidly evolving, blending cognitive and physical capabilities to complement humans. The next generation of robotics will focus on seamless collaboration, allowing robots to handle repetitive tasks while humans lead with creativity and problem-solving.
As these advancements converge, including the emergence of edge AI, we are on the brink of a robotics boom in both commercial and industrial applications.
Humanoid robots like Digit and EVE mark the beginning of this transformation. They represent a new era of innovation by automating repetitive tasks, enhancing productivity and fostering human-robot collaboration. As we embrace these changes, the synergy between AI and robotics will redefine industries, reshape how we work and live and demonstrate the true potential of AI in the physical world.