What is AI? Everything to know about artificial intelligence




NASA Appears To Step Back From The Term 'artificial General Intelligence'

The terminology NASA once used to refer to artificial general intelligence has changed, the space agency said in response to questions from FedScoop about emails obtained through a public records request, signaling the ways that science-focused federal agencies might be discussing emerging technologies in the age of generative AI. 

Building artificial general intelligence — a powerful form of AI that could theoretically rival humans — is still a distant goal, but remains a key objective of companies like OpenAI and Meta. It's also a topic that remains hotly contested and controversial among technology researchers and civil society, and one that some feel could end up distracting from more immediate AI risks, like bias, privacy, and cybersecurity. 

NASA is one of the few government agencies that's expressed any particular interest in AGI issues. Many federal agencies remain focused on more immediate applications of AI, such as using machine learning to process documents. Jennifer Dooren, NASA's deputy news chief, said in a statement to FedScoop that the agency is "committed to formalizing protocols and processes for AI usage and expanding efforts to further AI innovations across the agency."

A framework for the ethical use of artificial intelligence published by the space agency in April 2021 made reference to both artificial general intelligence and artificial super intelligence. In response to FedScoop questions about the status of this work, NASA said "the terminology of AI" has changed, pointing to the agency's handling of generative artificial intelligence, which typically includes the kind of large language models that fuel systems like ChatGPT. (Whether systems like ChatGPT eventually serve as a foundation for AGI remains up for debate among researchers.) 

"NASA is looking holistically at Artificial Intelligence and not just the subparts," Dooren said. "The terms from this past framework have evolved. For example, the terms AGI and ASI could now be viewed as generative AI (genAI) today."

The agency also highlighted a new working group focused on ethical artificial intelligence and NASA's work to meet goals outlined in President Joe Biden's AI executive order from last October. The space agency also hosted a public town hall on its AI capabilities last month. 

But the apparent retreat from the term artificial general intelligence is notable, given some of the futuristic concerns outlined in the 2021 framework. One goal outlined in the document, for instance, was to "set the stage for successful, peaceful, and potentially symbiotic coexistence between humans and machines." The framework noted that while AGI had not yet been achieved, there was growing belief that there could be a "tipping point" in AI capabilities that would fundamentally change how humans interact with technology. 

Experts sometimes split artificial intelligence into several categories: artificial narrow intelligence, or AI designed with specific applications in mind, and artificial general intelligence, referring to AI systems that could match the capabilities of humans. NASA's framework also refers to artificial super intelligence, which would represent AI capabilities that "surpass" human capabilities.

The document stipulated that NASA should be an "early adopter" of national and global best practices in regard to these advanced technologies. It noted that many AI systems won't advance to the level of AGI or ASI, but still encouraged NASA to consider the potential impacts of these technologies. Many of the considerations outlined in the report appear to be far off, ranging from analyzing the possibility of encoding morality in advanced AI systems (a potentially impossible task) to "merging" astronauts with artificial intelligence.

"Creating a perfect moral code that works in all cases is still an elusive task and must be pursued by NASA experts in conjunction with other national or global experts," the document stated. "As humans pursue long term space flight, technology may advance to a point where it would be necessary to consider the benefits and impacts of melding humans and AI machines, most notably adaptations that allow survivability during long duration space flight, but challenges if returning to Earth."

NASA's interest in studying the ramifications of AGI, as part of this framework, was also discussed in an email obtained by FedScoop earlier this year.

Written by Rebecca Heilweil. Rebecca Heilweil is an investigative reporter for FedScoop. She writes about the intersection of government, tech policy, and emerging technologies. Previously she was a reporter at Vox's tech site, Recode. She's also written for Slate, Wired, the Wall Street Journal, and other publications. You can reach her at rebecca.heilweil@fedscoop.com. Message her if you'd like to chat on Signal.

In The Race To Artificial General Intelligence, Where's The Finish Line?

To hear companies such as ChatGPT's OpenAI tell it, artificial general intelligence, or AGI, is the ultimate goal of machine learning and AI research. But what is the measure of a generally intelligent machine? In 1970 computer scientist Marvin Minsky predicted that soon-to-be-developed machines would "read Shakespeare, grease a car, play office politics, tell a joke, have a fight." Years later the "coffee test," often attributed to Apple co-founder Steve Wozniak, proposed that AGI will be achieved when a machine can enter a stranger's home and make a pot of coffee.

Few people agree on what AGI is to begin with—never mind achieving it. Experts in computer and cognitive science, and others in policy and ethics, often have their own distinct understanding of the concept (and different opinions about its implications or plausibility). Without a consensus it can be difficult to interpret announcements about AGI or claims about its risks and benefits. Meanwhile, though, the term is popping up with increasing frequency in press releases, interviews and computer science papers. Microsoft researchers declared last year that GPT-4 shows "sparks of AGI"; at the end of May OpenAI confirmed it is training its next-generation machine-learning model, which would boast the "next level of capabilities" on the "path to AGI." And some prominent computer scientists have argued that with text-generating large language models, it has already been achieved.

To know how to talk about AGI, test for AGI and manage the possibility of AGI, we'll have to get a better grip on what it actually describes.

On supporting science journalism

If you're enjoying this article, consider supporting our award-winning journalism by subscribing. By purchasing a subscription you are helping to ensure the future of impactful stories about the discoveries and ideas shaping our world today.

General Intelligence

AGI became a popular term among computer scientists who were frustrated by what they saw as a narrowing of their field in the late 1990s and early 2000s, says Melanie Mitchell, a professor and computer scientist at the Santa Fe Institute. This was a reaction to projects such as Deep Blue, the chess-playing system that bested grandmaster Garry Kasparov and other human champions. Some AI researchers felt their colleagues were focusing too much on training computers to master single tasks such as games and losing sight of the prize: broadly capable, humanlike machines. "AGI was [used] to try to get back to that original goal," Mitchell says—it was coinage as recalibration.

But viewed in another light, AGI was "a pejorative," according to Joanna Bryson, an ethics and technology professor at the Hertie School in Germany who was working in AI research at the time. She thinks that the term arbitrarily divided the study of AI into two groups of computer scientists: those deemed to be doing meaningful work toward AGI, who were explicitly in pursuit of a system that could do everything humans could do, and everyone else, who was assumed to be spinning their wheels on more limited—and therefore frivolous—aims. (Many of these "narrow" goals, such as teaching a computer to play games, later helped advance machine intelligence, Bryson points out.)

Other definitions of AGI can seem equally wide-ranging and slippery. At its simplest, it is shorthand for a machine that equals or surpasses human intelligence. But "intelligence" itself is a concept that's hard to define or quantify. "General intelligence" is even trickier, says Gary Lupyan, a cognitive neuroscientist and psychology professor at the University of Wisconsin–Madison. In his view, AI researchers are often "overconfident" when they talk about intelligence and how to measure it in machines.

Cognitive scientists have been trying to home in on the fundamental components of human intelligence for more than a century. It's generally established that people who do well on one set of cognitive questions tend to also do well on others, and many have attributed this to some yet-unidentified, measurable aspect of the human mind, often called the "g factor." But Lupyan and many others dispute this idea, arguing that IQ tests and other assessments used to quantify general intelligence are merely snapshots of current cultural values and environmental conditions. Elementary school students who learn computer programming basics and high schoolers who pass calculus classes have achieved what was "completely outside the realm of possibility for people even a few hundred years ago," Lupyan says. Yet none of this means that today's kids are necessarily more intelligent than adults of the past; rather, humans have amassed more knowledge as a species and shifted our learning priorities away from, say, tasks directly related to growing and acquiring food—and toward computational ability instead.

"There's no such thing as general intelligence, artificial or natural," agrees Alison Gopnik, a professor of psychology at the University of California, Berkeley. Different kinds of problems require different kinds of cognitive abilities, she notes; no single type of intelligence can do everything. In fact, Gopnik adds, different cognitive abilities can be in tension with each other. For instance, young children are primed to be flexible and fast learners, allowing them to make many new connections quickly. But because of their rapidly growing and changing mind, they don't make great long-term planners. Similar principles and limitations apply to machines as well, Gopnik says. In her view, AGI is little more than "a very good marketing slogan."

General Performance

Moravec's paradox, first described in 1988, states that what's easy for humans is hard for machines, and what humans find challenging is often easier for computers. Many computer systems can perform complex mathematical operations, for instance, but good luck asking most robots to fold laundry or twist doorknobs. When it became obvious that machines would continue to struggle to effectively manipulate objects, common definitions of AGI lost their connections with the physical world, Mitchell notes. AGI came to represent mastery of cognitive tasks and then what a human could do sitting at a computer connected to the Internet.

In its charter, OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." In some public statements, however, the company's founder, Sam Altman, has espoused a more open-ended vision. "I no longer think [AGI is] like a moment in time," he said in a recent interview. "You and I will probably not agree on the month or even the year that we're like, 'Okay, now that's AGI.'"

Other arbiters of AI progress have drilled down into specifics instead of embracing ambiguity. In a 2023 preprint paper, Google DeepMind researchers proposed six levels of intelligence by which various computer systems can be graded: systems with "No AI" capability at all, followed by "Emerging," "Competent," "Expert," "Virtuoso" and "Superhuman" AGI. The researchers further separate machines into "narrow" (task-specific) or "general" types. "AGI is often a very controversial concept," lead author Meredith Ringel Morris says. "I think people really appreciate that this is a very practical, empirical definition."

To come up with their characterizations, Morris and her colleagues explicitly focused on demonstrations of what an AI can do instead of how it can do tasks. There are "important scientific questions" to be asked about how large language models and other AI systems achieve their outputs and whether they're truly replicating anything humanlike, Morris says, but she and her co-authors wanted to "acknowledge the practicality of what's happening."

According to the DeepMind proposal, a handful of large language models, including ChatGPT and Gemini, qualify as "emerging AGI," because they are "equal to or somewhat better than an unskilled human" at a "wide range of nonphysical tasks, including metacognitive tasks like learning new skills." Yet even this carefully structured qualification leaves room for unresolved questions. The paper doesn't specify what tasks should be used to evaluate an AI system's abilities, the number of tasks that distinguishes a "narrow" from a "general" system, or the way to establish comparison benchmarks of human skill level. Determining the correct tasks to compare machine and human skills, Morris says, remains "an active area of research."

Yet some scientists say answering these questions and identifying proper tests is the only way to assess if a machine is intelligent. Here, too, current methods may be lacking. AI benchmarks that have become popular, such as the SAT, the bar exam or other standardized tests for humans, fail to distinguish between an AI that regurgitates training data and one that demonstrates flexible learning and ability, Mitchell says. "Giving a machine a test like that doesn't necessarily mean it's going to be able to go out and do the kinds of things that humans could do if a human got a similar score," she explains.

General Consequences

As governments attempt to regulate artificial intelligence, some of their official strategies and policies reference AGI. Variable definitions could change how those policies are applied, Mitchell points out. Temple University computer scientist Pei Wang agrees: "If you try to build a regulation that fits all of [AGI's definitions], that's simply impossible." Real-world outcomes, from what sorts of systems are covered under emerging laws to who holds responsibility for those systems' actions (is it the developers, the training data compilers, the prompter or the machine itself?) might be altered by how the terminology is understood, Wang says. All of this has critical implications for AI safety and risk management.

If there's an overarching lesson to take away from the rise of LLMs, it might be that language is powerful. With enough text, it's possible to train computer models that appear, at least to some, like the first glimpse of a machine whose intelligence rivals that of humans. And the words we choose to describe that advance matter.

"These terms that we use do influence how we think about these systems," Mitchell says. At a pivotal 1956 Dartmouth College workshop at the start of AI research, scientists debated what to call their work. Some advocated for "artificial intelligence" while others lobbied for "complex information processing," she points out. Perhaps if AGI were instead named something like "advanced complex information processing," we'd be slower to anthropomorphize machines or fear the AI apocalypse—and maybe we'd agree on what it is.


How Artificial Intelligence Is Taking Over The World

Artificial intelligence is everywhere. It would be nearly impossible to go through life without using it: the software in your smartphone, in ATMs, in cars and in robots is all intelligence of a kind. So what is artificial intelligence? Broadly, AI is a field of computer science that attempts to simulate characteristics of human intelligence, such as learning, reasoning, and adapting. On its own, AI is like a newborn baby: it lacks the ability to learn independently, cannot think outside its code, and depends on the information given to it. That's where machine learning comes in; it enables an AI system to improve beyond what was explicitly coded.

Machines are given tasks, and as they work through them they adapt and learn. It requires computer scientists to come up with algorithms that enable the AI to learn and adapt to more than one task. In biology, neurons, or brain cells, control the activities of the human body; in AI, artificial neural networks, which imitate those brain cells, use mathematics and computer science to mimic some of the activities of the human brain.
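The brain-cell analogy above can be sketched in a few lines of code. This is a minimal illustration, not any particular library's implementation: a single artificial "neuron" that takes a weighted sum of its inputs and squashes it through a sigmoid function, loosely mimicking how a real neuron fires more strongly for some input patterns than others. All the numbers are made up for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    passed through a sigmoid 'activation' to give a 0-to-1 firing rate."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A neuron weighted to respond strongly to its first input:
# total = 1.0*2.0 + 0.0*(-1.0) - 1.0 = 1.0, and sigmoid(1.0) ≈ 0.731
print(neuron([1.0, 0.0], weights=[2.0, -1.0], bias=-1.0))
```

A full neural network is just many of these units wired in layers, with the weights adjusted automatically during training rather than set by hand.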

Artificial intelligence can be classified into weak and strong AI. A weak AI is trained for a single task, while a strong AI has human-like cognitive abilities: when presented with an unfamiliar task, it has enough intelligence to find a solution. AI is commonly divided into three types:

1. Artificial Narrow Intelligence (ANI): intelligence that excels at a single task. Examples include speech recognition systems (which can only recognize speech) and voice assistants such as Cortana and Alexa (which act only on voice commands).

2. Artificial General Intelligence (AGI): roughly as intelligent as the human brain; it can learn and improve itself.

3. Artificial Super Intelligence (ASI): more sophisticated than human intelligence; it would surpass human capabilities.

These systems are built into our electronic devices: the speech recognition that unlocks your phone, the face lock on iPhones, and much more. AI has many benefits:

1. Health care: AI can deliver faster diagnoses and reduce human error. In hospitals, AI can record patient documents and medical history via face recognition and help improve accuracy in surgeries and medicine administration.

2. Business: chatbots have been integrated into websites, and many companies use computer intelligence to calculate the odds of closing a deal. Betting companies, for example, use it to calculate the odds of a player winning or losing a match, and international companies use it to gather information on what their customers want.

3. Social media: nearly everyone is connected to the internet, and artificial intelligence is used to give online users a better experience; Facebook, for instance, uses AI to manage user information.

4. Cyborg technology: the human limit is the body itself. This is a future advancement that would merge the human body with machines; the technology is not yet available.

People don't like to admit it, but artificial intelligence watches them every second: the selfie camera on your phone, the CCTV cameras in malls and tech companies, the camera on your laptop. Fiction has dramatized this fear with all-seeing surveillance AIs such as Pandora, Samaritan, and The Machine. If people dwelt on how closely they are watched, many would feel insecure and disadvantaged.

Just as gods can lose control in myth, computers can too. But let's look on the bright side: AI will help us do far more than we expect, and some systems may one day even gain something like consciousness. Are we all ready to await the awakening of the machine?

Many companies have showcased their artificial intelligence to the world. Google uses voice typing commands on our phones, and the robot Sophia now holds citizenship of Saudi Arabia. The iPhone 4S shipped with Siri in 2011. What many people don't know is that Siri began as a project developed at the SRI International Artificial Intelligence Center; its speech recognition engine was provided by Nuance Communications, and Siri uses advanced machine learning to perform its functions. Armed with AI, voice assistants will grow ever more capable of scouring the web, helping us shop, and providing directions. This voice technology is also expected to power in-home assistants that help care for the elderly, among countless other examples of AI voice recognition.

Advanced computer systems are already used in many smart homes. At Two Rivers Mall in Nairobi, Kenya, for example, new homes have computer systems that let the homeowner switch on the lights, control the sound output, open the curtains automatically, and turn on the shower. There are now Android televisions that let viewers sign in to social media such as Instagram and Facebook. To understand AI, we must know how it works: broadly, AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data.
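That phrase, "learn automatically from patterns in the data," can be made concrete with a tiny sketch. Here the parameters of a straight line are derived from a handful of made-up data points by ordinary least squares, rather than hand-coded; the dataset and numbers are purely illustrative.

```python
# Hypothetical observations (x, y) invented for this example.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]

# "Learning from patterns": fit y = a*x + b by ordinary least squares,
# so the slope and intercept come from the data, not from the programmer.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def predict(x):
    """Use the learned pattern to estimate y for a new x."""
    return a * x + b

print(round(a, 2), round(b, 2), round(predict(5), 2))
```

Real machine-learning systems fit millions of parameters instead of two, but the principle is the same: the program's behavior comes from patterns extracted from data.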


Those who have a Tesla can attest that artificial intelligence is already here. With its driver-assistance features, the car can reportedly sense when the driver feels sleepy, using sensors in the seats and facial monitoring. Its self-driving mode uses cameras with a 3D view of the road and can detect traffic lights, reducing accidents compared with human error, the main cause of most road accidents.

Telecommunications networks likewise rely on advanced computer systems that operate on if-then rules. These companies use mainframe computers to store customer data, which must be analyzed so that each customer can store money, buy data bundles, and even be provided with home WiFi or MiFi. This is where artificial narrow intelligence comes in: it performs a single task, subdivided into sub-tasks.
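The "if-then" style of system described above can be sketched as a toy rule-based handler. The command names, balance rules, and data-bundle rate here are all invented for illustration; real telecom platforms are vastly more complex, but the branching structure is the same.

```python
def handle(account, command, amount=0):
    """A toy rule-based ('if-then') account handler.

    Every behavior is an explicit hand-written rule; nothing is
    learned from data, which is what makes such systems 'narrow'.
    """
    if command == "deposit":
        account["balance"] += amount
    elif command == "buy_bundle":
        # Assumed rate for this sketch: 10 MB of data per currency unit.
        if account["balance"] >= amount:
            account["balance"] -= amount
            account["data_mb"] += amount * 10
    return account

acct = {"balance": 0, "data_mb": 0}
handle(acct, "deposit", 50)
handle(acct, "buy_bundle", 20)
print(acct)  # balance 30 remaining, 200 MB of data purchased
```

Because every case must be anticipated by the programmer, a system like this can do exactly one job; that narrowness is the defining trait of ANI.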

Few people realize that much of the software around them embodies artificial intelligence. These programs are given the ability to control data usage and information; each electronic device has its own complicated software, which becomes the admin of the device.

Like all inventions, AI has both a negative and a positive side. On the bright side, it will improve human life and even boost astronomical studies. Right now, in Kenya and other nations, junior students in grades 1 to 3 are being taught using special-purpose tablets; this, too, is artificial intelligence, though some of us may deny it. Artificial intelligence is an artificial life form crawling through the code in our software and phones, like a baby gathering information before it steps from virtual reality into the real world. Humanity is open to all possibilities as technology changes each day. A journey starts with a single step and eventually progresses to miles; the question is, will you be there?





