Five Important Subsets of Artificial Intelligence




AI 'breakthrough': Neural Net Has Human-like Ability To Generalize Language

A version of the human ability to apply new vocabulary in flexible ways has been achieved by a neural network. Credit: marrio31/Getty

Scientists have created a neural network with the human-like ability to make generalizations about language [1]. The artificial intelligence (AI) system performs about as well as humans at folding newly learned words into an existing vocabulary and using them in fresh contexts, which is a key aspect of human cognition known as systematic generalization.

The researchers gave the same task to the AI model that underlies the chatbot ChatGPT, and found that it performs much worse on such a test than either the new neural net or people, despite the chatbot's uncanny ability to converse in a human-like manner.

The work, published on 25 October in Nature, could lead to machines that interact with people more naturally than do even the best AI systems today. Although systems based on large language models, such as ChatGPT, are adept at conversation in many contexts, they display glaring gaps and inconsistencies in others.

The neural network's human-like performance suggests there has been a "breakthrough in the ability to train networks to be systematic", says Paul Smolensky, a cognitive scientist who specializes in language at Johns Hopkins University in Baltimore, Maryland.

Language lessons

Systematic generalization is demonstrated by people's ability to effortlessly use newly acquired words in new settings. For example, once someone has grasped the meaning of the word 'photobomb', they will be able to use it in a variety of situations, such as 'photobomb twice' or 'photobomb during a Zoom call'. Similarly, someone who understands the sentence 'the cat chases the dog' will also understand 'the dog chases the cat' without much extra thought.

But this ability does not come innately to neural networks, a method of emulating human cognition that has dominated artificial-intelligence research, says Brenden Lake, a cognitive computational scientist at New York University and co-author of the study. Unlike people, neural nets struggle to use a new word until they have been trained on many sample texts that use that word. AI researchers have sparred for nearly 40 years as to whether neural networks could ever be a plausible model of human cognition if they cannot demonstrate this type of systematicity.

To attempt to settle this debate, the authors first tested 25 people on how well they deploy newly learnt words to different situations. The researchers ensured the participants would be learning the words for the first time by testing them on a pseudo-language consisting of two categories of nonsense words. 'Primitive' words such as 'dax,' 'wif' and 'lug' represented basic, concrete actions such as 'skip' and 'jump'. More abstract 'function' words such as 'blicket', 'kiki' and 'fep' specified rules for using and combining the primitives, resulting in sequences such as 'jump three times' or 'skip backwards'.

Participants were trained to link each primitive word with a circle of a particular colour, so a red circle represents 'dax', and a blue circle represents 'lug'. The researchers then showed the participants combinations of primitive and function words alongside the patterns of circles that would result when the functions were applied to the primitives. For example, the phrase 'dax fep' was shown with three red circles, and 'lug fep' with three blue circles, indicating that fep denotes an abstract rule to repeat a primitive three times.

Finally, the researchers tested participants' ability to apply these abstract rules by giving them complex combinations of primitives and functions. They then had to select the correct colour and number of circles and place them in the appropriate order.
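To make the task concrete, here is a minimal sketch, in Python, of an interpreter for such a pseudo-language, in the spirit of the examples above. The colour assigned to 'wif' and the handling of unknown words are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of the pseudo-language described above (illustrative only).
# The mappings follow the article's examples: 'dax' -> red, 'lug' -> blue, and
# 'fep' repeats the preceding primitive three times. The colour assigned to
# 'wif' and the error handling are assumptions, not details from the study.

PRIMITIVES = {"dax": "red", "wif": "green", "lug": "blue"}  # 'wif' colour assumed


def interpret(phrase: str) -> list[str]:
    """Translate a phrase of primitive and function words into a sequence of circle colours."""
    circles: list[str] = []
    for word in phrase.split():
        if word in PRIMITIVES:
            circles.append(PRIMITIVES[word])
        elif word == "fep" and circles:
            # 'fep' is an abstract rule: repeat the preceding primitive three times.
            circles.extend([circles[-1]] * 2)  # the first instance is already emitted
        else:
            raise ValueError(f"unknown or misplaced word: {word}")
    return circles


print(interpret("dax fep"))  # ['red', 'red', 'red']
print(interpret("lug fep"))  # ['blue', 'blue', 'blue']
```

Producing exactly these kinds of circle sequences for novel combinations they had never seen is what was asked of the participants, and later of the networks.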

Cognitive benchmark

As predicted, people excelled at this task; they chose the correct combination of coloured circles about 80% of the time, on average. When they did make errors, the researchers noticed that these followed a pattern that reflected known human biases.

Next, the researchers trained a neural network to do a task similar to the one presented to participants, by programming it to learn from its mistakes. This approach allowed the AI to learn as it completed each task rather than using a static data set, which is the standard approach to training neural nets. To make the neural net human-like, the authors trained it to reproduce the patterns of errors they observed in humans' test results. When the neural net was then tested on fresh puzzles, its answers corresponded almost exactly to those of the human volunteers, and in some cases exceeded their performance.
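As a rough illustration of the contrast between this episode-by-episode training and the standard static-data-set approach, the sketch below updates a placeholder model after every new task it attempts. The tiny network, loss function and simulated task stream are assumptions for illustration only, not the method or architecture reported in the paper.

```python
# Rough illustration of episode-by-episode training ("learning from its mistakes"
# on each new task) versus one pass over a fixed data set. The tiny model, loss
# and simulated task stream are placeholders, not the study's method.
import torch
from torch import nn, optim

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))  # placeholder net
optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()


def episode_stream(n_episodes: int):
    """Yield (inputs, targets) for freshly generated tasks, standing in for the
    study's procedure of presenting new word-combination puzzles one at a time."""
    for _ in range(n_episodes):
        x = torch.randn(16, 8)           # encoded instructions (placeholder)
        y = torch.randint(0, 4, (16,))   # correct responses (placeholder)
        yield x, y


for x, y in episode_stream(1000):
    optimizer.zero_grad()
    prediction = model(x)
    loss = loss_fn(prediction, y)  # how wrong the model was on this episode
    loss.backward()                # learn from those mistakes...
    optimizer.step()               # ...before the next episode arrives
```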

By contrast, GPT-4 struggled with the same task, failing, on average, between 42 and 86% of the time, depending on how the researchers presented the task. "It's not magic, it's practice," Lake says. "Much like a child also gets practice when learning their native language, the models improve their compositional skills through a series of compositional learning tasks."

Melanie Mitchell, a computer and cognitive scientist at the Santa Fe Institute in New Mexico, says this study is an interesting proof of principle, but it remains to be seen whether this training method can scale up to generalize across a much larger data set or even to images. Lake hopes to tackle this problem by studying how people develop a knack for systematic generalization from a young age, and incorporating those findings to build a more robust neural net.

Elia Bruni, a specialist in natural language processing at the University of Osnabrück in Germany, says this research could make neural networks more-efficient learners. This would reduce the gargantuan amount of data necessary to train systems such as ChatGPT and would minimize 'hallucination', which occurs when AI perceives patterns that are non-existent and creates inaccurate outputs. "Infusing systematicity into neural networks is a big deal," Bruni says. "It could tackle both these issues at the same time."


Humans Absorb Bias From AI—And Keep It After They Stop Using The Algorithm

Artificial intelligence programs, like the humans who develop and train them, are far from perfect. Whether it's machine-learning software that analyzes medical images or a generative chatbot, such as ChatGPT, that holds a seemingly organic conversation, algorithm-based technology can make errors and even "hallucinate," or provide inaccurate information. Perhaps more insidiously, AI can also display biases that get introduced through the massive data troves that these programs are trained on—and that are undetectable to many users. Now new research suggests human users may unconsciously absorb these automated biases.

Past studies have demonstrated that biased AI can harm people in already marginalized groups. Some impacts are subtle, such as speech recognition software's inability to understand non-American accents, which might inconvenience people using smartphones or voice-operated home assistants. Then there are scarier examples—including health care algorithms that make errors because they're only trained on a subset of people (such as white people, those of a specific age range or even people with a certain stage of a disease), as well as racially biased police facial recognition software that could increase wrongful arrests of Black people.

Yet solving the problem may not be as simple as retroactively adjusting algorithms. Once an AI model is out there, influencing people with its bias, the damage is, in a sense, already done. That's because people who interact with these automated systems could be unconsciously incorporating the skew they encounter into their own future decision-making, as suggested by a recent psychology study published in Scientific Reports. Crucially, the study demonstrates that bias introduced to a user by an AI model can persist in a person's behavior—even after they stop using the AI program.

"We already know that artificial intelligence inherits biases from humans," says the new study's senior researcher Helena Matute, an experimental psychologist at the University of Deusto in Spain. For example, when the technology publication Rest of World recently analyzed popular AI image generators, it found that these programs tended toward ethnic and national stereotypes. But Matute seeks to understand AI-human interactions in the other direction. "The question that we are asking in our laboratory is how artificial intelligence can influence human decisions," she says.

Over the course of three experiments, each involving about 200 unique participants, Matute and her co-researcher, Lucía Vicente of the University of Deusto, simulated a simplified medical diagnostic task: they asked the nonexpert participants to categorize images as indicating the presence or absence of a fictional disease. The images were composed of dots of two different colors, and participants were told that these dot arrays represented tissue samples. According to the task parameters, more dots of one color meant a positive result for the illness, whereas more dots of the other color meant that it was negative.

Throughout the different experiments and trials, Matute and Vicente offered subsets of the participants purposefully skewed suggestions that, if followed, would lead them to classify images incorrectly. These suggestions were presented as coming from a "diagnostic assistance system based on an artificial intelligence (AI) algorithm," the researchers explained in an email. The control group received a series of unlabeled dot images to assess. In contrast, the experimental groups received a series of dot images labeled with "positive" or "negative" assessments from the fake AI. In most instances, the label was correct, but in cases where the number of dots of each color was similar, the researchers introduced intentional skew with incorrect answers. In one experimental group, the AI labels tended toward offering false negatives. In a second experimental group, the slant was reversed toward false positives.
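The sketch below shows one way such skewed suggestions could be generated: the true label follows the majority dot color, while near-balanced samples are pushed toward one error type. The color names, ambiguity margin and dot counts are assumptions for illustration, not the parameters used by Matute and Vicente.

```python
# Illustrative sketch of the skewed "AI" suggestions described above: the true
# label follows the majority dot color, but near-balanced samples are pushed
# toward one error type. Color names, the ambiguity margin and the dot counts
# are assumptions for illustration, not the study's parameters.


def true_label(orange: int, teal: int) -> str:
    """More orange dots means the fictional disease is present."""
    return "positive" if orange > teal else "negative"


def biased_ai_label(orange: int, teal: int, bias: str, margin: int = 3) -> str:
    """Return the fake AI's suggestion.

    bias: 'false_positive' or 'false_negative', the direction of skew applied
    only when the two colors are nearly balanced (within `margin` dots).
    """
    if abs(orange - teal) <= margin:
        return "positive" if bias == "false_positive" else "negative"
    return true_label(orange, teal)


# A nearly balanced sample gets mislabeled in the biased direction.
orange, teal = 26, 24
print(true_label(orange, teal))                        # positive
print(biased_ai_label(orange, teal, "false_negative")) # negative (skewed suggestion)
```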

The researchers found that the participants who received the fake AI suggestions went on to incorporate the same bias into their future decisions, even after the guidance was no longer offered. For example, if a participant interacted with the false positive suggestions, they tended to continue to make false positive errors when given new images to assess. This observation held true despite the fact that the control groups demonstrated the task was easy to complete correctly without the AI guidance—and despite 80 percent of participants in one of the experiments noticing that the fictional "AI" made mistakes.

A big caveat is that the study did not involve trained medical professionals or assess any approved diagnostic software, says Joseph Kvedar, a professor of dermatology at Harvard Medical School and editor in chief of npj Digital Medicine. Therefore, Kvedar notes, the study has very limited implications for physicians and the actual AI tools that they use. Keith Dreyer, chief science officer of the American College of Radiology Data Science Institute, agrees and adds that "the premise is not consistent with medical imaging."

Though not a true medical study, the research offers insight into how people might learn from the biased patterns inadvertently baked into many machine-learning algorithms—and it suggests that AI could influence human behavior for the worse. Ignoring the diagnostic aspect of the fake AI in the study, Kvedar says, the "design of the experiments was almost flawless" from a psychological point of view. Both Dreyer and Kvedar, neither of whom were involved in the study, describe the work as interesting, albeit not surprising.

There's "real novelty" in the finding that humans might continue to enact an AI's bias by replicating it beyond the scope of their interactions with a machine-learning model, says Lisa Fazio, an associate professor of psychology and human development at Vanderbilt University, who was not involved in the recent study. To her, it suggests that even time-limited interactions with problematic AI models or AI-generated outputs can have lasting effects.

Consider, for example, the predictive policing software that Santa Cruz, Calif., banned in 2020. Though the city's police department no longer uses the algorithmic tool to determine where to deploy officers, it's possible that—after years of use—department officials internalized the software's likely bias, says Celeste Kidd, an assistant professor of psychology at the University of California, Berkeley, who was also not involved in the new study.

It's widely understood that people learn bias from human sources of information as well. The consequences when inaccurate content or guidance originate from artificial intelligence could be even more severe, however, Kidd says. She has previously studied and written about the unique ways that AI can shift human beliefs. For one, Kidd points out that AI models can easily become even more skewed than humans are. She cites a recent assessment published by Bloomberg that determined that generative AI may display stronger racial and gender biases than people do.

There's also the risk that humans might ascribe more objectivity to machine-learning tools than to other sources. "The degree to which you are influenced by an information source is related to how intelligent you assess it to be," Kidd says. People may attribute more authority to AI, she explains, in part because algorithms are often marketed as drawing on the sum of all human knowledge. The new study seems to back this idea up in a secondary finding: Matute and Vicente noted that participants who self-reported higher levels of trust in automation tended to make more mistakes that mimicked the fake AI's bias.

Plus, unlike humans, algorithms deliver all outputs—whether correct or not—with seeming "confidence," Kidd says. In direct human communication, subtle cues of uncertainty are important for how we understand and contextualize information. A long pause, an "um," a hand gesture or a shift of the eyes might signal a person isn't quite positive about what they're saying. Machines offer no such indicators. "This is a huge problem," Kidd says. She notes that some AI developers are attempting to retroactively address the issue by adding in uncertainty signals, but it's difficult to engineer a substitute for the real thing.

Kidd and Matute both claim that a lack of transparency from AI developers on how their tools are trained and built makes it additionally difficult to weed out AI bias. Dreyer agrees, noting that transparency is a problem, even among approved medical AI tools. Though the Food and Drug Administration regulates diagnostic machine-learning programs, there is no uniform federal requirement for data disclosures. The American College of Radiology has been advocating for increased transparency for years and says more work is still necessary. "We need physicians to understand at a high level how these tools work, how they were developed, the characteristics of the training data, how they perform, how they should be used, when they should not be used, and the limitations of the tool," reads a 2021 article posted on the radiology society's website.

And it's not just doctors. In order to minimize the impacts of AI bias, everyone "needs to have a lot more knowledge of how these AI systems work," Matute says. Otherwise we run the risk of letting algorithmic "black boxes" propel us into a self-defeating cycle in which AI leads to more-biased humans, who in turn create increasingly biased algorithms. "I'm very worried," Matute adds, "that we are starting a loop, which will be very difficult to get out of."


Augmented Human: How Generative AI Can Be An Extension Of Ourselves

Chief technology officer and co-founder of LUCID, working to transform music into medicine. Mental health advocate.


Built on large language models (LLMs), ChatGPT has become a sensation, bridging the worlds of media, pop culture and the AI research community. It's rare for a single technology to unite these diverse spheres, but ChatGPT has done precisely that.

The rapid advent of this technology and its considerable "wow factor" trigger the usual sensationalist narrative: "It's going to take over the world." To me, this reaction underscores the urgency of integrating AI more comprehensively and efficiently into our lives.

As John M. Culkin wrote in an article explaining the work of his friend, media theorist Marshall McLuhan, "We shape our tools, and thereafter our tools shape us." The importance of careful advancement is key, but like McLuhan, I view technology as "extensions of ourselves."

A Universe Of Data

One of the primary reasons LLMs are so powerful is that they can learn from so much data. This type of deep-learning architecture allows us to feed in massive datasets that train models on the world they need to operate within.

Before LLMs and generative AI, processing such immense quantities of data was unthinkable. Traditional models were limited by computational constraints and lacked the complexity to understand intricate patterns. The advent of LLMs has dramatically changed this landscape, unlocking new horizons in data analysis and interpretation and offering endless possibilities for innovation.

Now, with LLMs, we can explore dense datasets like biosignals, which measure the functions of the human body. We may be on the verge of building models that not only grasp natural language through text and images but also delve into the very signals that make up our minds and consciousness.

I Was Mute, But Now I Speak

For example, LLMs are emerging as potent tools for mental health interventions, aiding in recovery from trauma and PTSD and assisting with challenges like suicidal ideation, depression and anxiety.

Real-world applications are burgeoning. Woebot, an AI-powered mental health chatbot, has been utilized to provide immediate, cost-effective psychological support. Another remarkable example is Cass, an AI-driven mental health coach that personalizes conversations based on users' emotions and needs, offering compassionate care.

Beyond therapy, LLMs are beginning to find their way into educational settings, supporting students while they learn new skills. Duolingo's conversational agents are providing personalized learning pathways that tailor content to the needs of each student.

These examples epitomize the profound impact that LLMs can make in enhancing our lives, acting as powerful extensions of ourselves. They embody the technological evolution that is reshaping mental healthcare, making it more accessible, personalized and effective.

Our Expanding Senses

The frontier of biosignals and generative AI is rich with possibilities. Imagine measuring massive amounts of EEG data to develop implants that stimulate the right cortices in the brain, restoring lost senses. Envision external sensors, like cameras and microphones, interfacing with generative AI to recreate necessary EEG activity, allowing someone to "sense" those signals anew. The implications of such technology are profound, extending our sensory capabilities and redefining what it means to experience the world.

Speculating further, we could build a baseline generative AI model, akin to an LLM, to gain a profound understanding of the brain. This insight could lead to the visualization of thought, a concept as dramatic as it is game-changing. The ability to see thoughts materialize could revolutionize therapy, communication and artistic expression.

The advancement of LLMs and generative AI has brought us to a threshold where we can interpret and simulate complex biological processes. By mapping and mimicking the neural patterns of the brain, we are not merely exploring technology but pioneering a new era of human augmentation.

Responsibility By Design

While the prospect of blending biosignals with generative AI is thrilling, it's important to pause and consider the hurdles and ethical concerns that lie ahead.

The first challenge is technological. Biosignals like EEG (electroencephalography) and PPG (photoplethysmography) are complex and nuanced, and the science of interpreting them is still emerging. Even as AI becomes more advanced, translating this raw data into meaningful and actionable insights is a monumental task.

The ethical stakes are also high. As we venture into the realm of reading brain signals and potentially modifying them, questions about autonomy, consent and the potential for misuse become increasingly pressing. Regulatory frameworks are not yet fully equipped to deal with the fast-paced development of this technology. Biased algorithms, unauthorized data usage and unintended psychological effects are all risks that we must thoughtfully navigate.

Several experts, myself included, advocate for "responsibility-by-design," a nascent yet pivotal principle for ethically deploying AI-based solutions. This approach mandates that ethical considerations be woven into the technology's design and development from the beginning.

Responsibility-by-design requires responsible AI training for all stakeholders, equipping them with the tools to address potential concerns and build algorithms that respect human values. By adhering to ethical guidelines and scientific rigor, responsibility-by-design serves as a safeguard for ensuring this transformative technology advances in humanity's best interests while maintaining ethical integrity.

The Perfect Extension Of Ourselves

At the end of the day, generative AI is built on artificial neural networks, which are designed to mimic the biological neural networks in our brains. In this way, it's the perfect extension of ourselves. Where our biology limits us, these systems can bridge the gap. Whether it's processing unfathomable amounts of data, aiding in mental health care or unlocking new frontiers in brain-computer interfaces, generative AI stands as a symbol of human ingenuity and aspiration.

The journey into the universe of data, the empowerment of voice and the expansion of our senses through AI paints a future filled with possibilities. It's a future where technology isn't an alien or foreign entity but a seamless part of our existence.

As we continue to explore and harness the power of LLMs and generative AI, we are not only shaping the tools but allowing them to shape us, to elevate us. In embracing these technologies, we are taking confident steps toward a future where we reclaim and redefine what it means to be human.






