
The Field Of Natural Language Processing Is Chasing The Wrong Goal

My colleagues and I at Elemental Cognition, an AI research firm based in Connecticut and New York, see the growing angst in the NLP community as justified. In fact, we believe the field needs a transformation, not just in system design, but in a less glamorous area: evaluation.

The current NLP zeitgeist arose from half a decade of steady improvements under the standard evaluation paradigm. Systems' ability to comprehend has generally been measured on benchmark data sets consisting of thousands of questions, each accompanied by passages containing the answer. When deep neural networks swept the field in the mid-2010s, they brought a quantum leap in performance. Subsequent rounds of work kept inching scores ever closer to 100% (or at least to parity with humans).
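To make that paradigm concrete, here is a minimal sketch of how such benchmarks are typically scored, using the exact-match and token-overlap F1 metrics popularized by SQuAD. The tiny benchmark entry and the `predict` function are invented stand-ins for a real dataset and model:

```python
# Minimal sketch of SQuAD-style benchmark scoring (exact match and token F1).
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD convention)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> float:
    return float(normalize(prediction) == normalize(gold))

def f1(prediction: str, gold: str) -> float:
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Hypothetical benchmark entry: each question is paired with a passage containing the answer.
benchmark = [
    {"passage": "The Amazon is the largest rainforest.",
     "question": "What is the largest rainforest?", "answer": "the Amazon"},
]

def predict(passage: str, question: str) -> str:
    return "The Amazon"  # stand-in for a real reading-comprehension model

em = sum(exact_match(predict(x["passage"], x["question"]), x["answer"]) for x in benchmark) / len(benchmark)
f1_avg = sum(f1(predict(x["passage"], x["question"]), x["answer"]) for x in benchmark) / len(benchmark)
print(f"Exact match: {em:.1%}, F1: {f1_avg:.1%}")
```

Leaderboard scores are simply these averages over thousands of such entries, which is exactly what makes them easy to chase and easy to game.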

So researchers would publish new data sets of even trickier questions, only to see even bigger neural networks quickly post impressive scores. Much of today's reading comprehension research entails carefully tweaking models to eke out a few more percentage points on the latest data sets. "State of the art" has practically become a proper noun: "We beat SOTA on SQuAD by 2.4 points!"

But many people in the field are growing weary of such leaderboard-chasing. What has the world really gained if a massive neural network achieves SOTA on some benchmark by a point or two? It's not as though anyone cares about answering these questions for their own sake; winning the leaderboard is an academic exercise that may not make real-world tools any better. Indeed, many apparent improvements emerge not from general comprehension abilities, but from models' extraordinary skill at exploiting spurious patterns in the data. Do recent "advances" really translate into helping people solve problems?

Such doubts are more than abstract fretting; whether systems are truly proficient at language comprehension has real stakes for society. Of course, "comprehension" entails a broad collection of skills. For simpler applications—such as retrieving Wikipedia factoids or assessing the sentiment in product reviews—modern methods do pretty well. But when people imagine computers that comprehend language, they envision far more sophisticated behaviors: legal tools that help people analyze their predicaments; research assistants that synthesize information from across the web; robots or game characters that carry out detailed instructions.

Today's models are nowhere close to achieving that level of comprehension—and it's not clear that yet another SOTA paper will bring the field any closer.

How did the NLP community end up with such a gap between on-paper evaluations and real-world ability? In an ACL position paper, my colleagues and I argue that in the quest to reach difficult benchmarks, evaluations have lost sight of the real targets: those sophisticated downstream applications. To borrow a line from the paper, NLP researchers have been training to become professional sprinters by "glancing around the gym and adopting any exercises that look hard."

To bring evaluations more in line with the targets, it helps to consider what holds today's systems back.

A human reading a passage will build a detailed representation of entities, locations, events, and their relationships—a "mental model" of the world described in the text. The reader can then fill in missing details in the model, extrapolate a scene forward or backward, or even hypothesize about counterfactual alternatives.
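As a toy illustration (not a description of any particular system's design), a world model of this kind might be represented as explicit entities, events, and causal links that a reader can query or extrapolate:

```python
# Toy "mental model": explicit entities, events, and their relationships.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    location: str | None = None  # spatial facts

@dataclass
class Event:
    description: str
    time: int                                        # temporal ordering
    participants: list[str] = field(default_factory=list)
    causes: list[str] = field(default_factory=list)  # causal links to earlier events

@dataclass
class WorldModel:
    entities: dict[str, Entity] = field(default_factory=dict)
    events: list[Event] = field(default_factory=list)

    def events_before(self, t: int) -> list[Event]:
        """Extrapolate backward: everything known to have happened before time t."""
        return [e for e in self.events if e.time < t]

# Reading "Maria dropped the glass, and it shattered" might yield:
model = WorldModel()
model.entities["glass"] = Entity("glass", location="floor")
model.events.append(Event("Maria dropped the glass", time=1, participants=["Maria", "glass"]))
model.events.append(Event("the glass shattered", time=2, participants=["glass"],
                          causes=["Maria dropped the glass"]))
print(model.events_before(2))  # -> [the drop event]
```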

This sort of modeling and reasoning is precisely what automated research assistants or game characters must do—and it's conspicuously missing from today's systems. An NLP researcher can usually stump a state-of-the-art reading comprehension system within a few tries. One reliable technique is to probe the system's model of the world, which can leave even the much-ballyhooed GPT-3 babbling about cycloptic blades of grass.

Imbuing automated readers with world models will require major innovations in system design, as discussed in several Theme-track submissions. But our argument is more basic: however systems are implemented, if they need to have faithful world models, then evaluations should systematically test whether they have faithful world models.

Stated so baldly, that may sound obvious, but it's rarely done. Research groups like the Allen Institute for AI have proposed other ways to harden the evaluations, such as targeting diverse linguistic structures, asking questions that rely on multiple reasoning steps, or even just aggregating many benchmarks. Other researchers, such as Yejin Choi's group at the University of Washington, have focused on testing common sense, which pulls in aspects of a world model. Such efforts are helpful, but they generally still focus on compiling questions that today's systems struggle to answer.

We're proposing a more fundamental shift: to construct more meaningful evaluations, NLP researchers should start by thoroughly specifying what a system's world model should contain to be useful for downstream applications. We call such an account a "template of understanding."

One particularly promising testbed for this approach is fictional stories. Original stories are information-rich, un-Googleable, and central to many applications, making them an ideal test of reading comprehension skills. Drawing on cognitive science literature about human readers, our CEO David Ferrucci has proposed a four-part template for testing an AI system's ability to understand stories.

  • Spatial: Where is everything located and how is it positioned throughout the story?
  • Temporal: What events occur and when?
  • Causal: How do events lead mechanistically to other events?
  • Motivational: Why do the characters decide to take the actions they take?
By systematically asking these questions about all the entities and events in a story, NLP researchers can score systems' comprehension in a principled way, probing for the world models that systems actually need. (A toy sketch of such scoring follows below.)
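Here is a hypothetical sketch of what scoring against such a template could look like: probe questions are grouped by the four dimensions above, and a system is scored per dimension rather than on a single aggregate number. The probes, gold answers, and `system_answer` stand-in are all invented for illustration:

```python
# Hypothetical "template of understanding" scorer: per-dimension probes.
from collections import defaultdict

probes = [
    ("spatial",      "Where is the glass at the end of the story?", "it shatters"),
    ("temporal",     "What happens after Maria drops the glass?",   "it shatters"),
    ("causal",       "Why does the glass shatter?",                 "because maria dropped it"),
    ("motivational", "Why did Maria let go of the glass?",          "she was startled"),
]

def system_answer(question: str) -> str:
    return "it shatters"  # stand-in for a real story-comprehension system

scores: dict[str, list[float]] = defaultdict(list)
for dimension, question, gold in probes:
    correct = float(system_answer(question).strip().lower() == gold)
    scores[dimension].append(correct)

# Report comprehension per dimension, exposing exactly where the world model fails.
for dimension, results in scores.items():
    print(f"{dimension:>12}: {sum(results) / len(results):.0%}")
```

The point of the per-dimension breakdown is diagnostic: a system that aces temporal probes but fails every motivational one has a visibly incomplete world model, which a single leaderboard number would hide.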

    It's heartening to see the NLP community reflect on what's missing from today's technologies. We hope this thinking will lead to substantial investment not just in new algorithms, but in new and more rigorous ways of measuring machines' comprehension. Such work may not make as many headlines, but we suspect that investment in this area will push the field forward at least as much as the next gargantuan model.

    Jesse Dunietz is a researcher at Elemental Cognition, where he works on developing rigorous evaluations for reading comprehension systems. He is also an educational designer for MIT's Communication Lab and a science writer.


    NLP Evolution: Changes Coming To The Ways We Interact With Technology


    Natural language processing technology refines our internet search results, helps voice assistants such as Siri understand our questions and commands, keeps spam out of our email inboxes, and more. The eruption of ChatGPT and other generative artificial intelligence tools introduced the world to multifunctional assistants that, with a simple query, can help with research, writing and generating images—even creating code.

    With all that, NLP hasn't reached its pinnacle yet. As the technology continues to evolve, experts expect it to play a role in everything from Web surfing to business automation (and much, much more). Below, 20 members of Forbes Technology Council discuss some of the big developments they see coming to NLP and how its evolution will change the ways we interact with technology.

    1. NLP's Accuracy, Breadth And Capabilities Will Improve

    The accuracy, breadth and overall capabilities of NLP will greatly improve, making it cheaper, easier to deploy and better integrated with the applications and workflows we use daily. Advances in reasoning capabilities will take us beyond text generation and question answering to more complex tasks, including planning, integrating resources and reacting appropriately to changes and surprises. - David Talby, John Snow Labs

    2. Online Interactions Will Become Extremely Personalized

    Large language models will allow all online interactions to become extremely personalized. Each page on a website can have its content (text, images and video) customized to the specific task you are performing at that specific moment. The entire structure of websites will change from being a set of linked pages to an interactive agent that partners with you to solve high-level needs. - Ari Kahn, Bridgeline Digital


    3. Cross-Domain Thinking Will Be Enabled

    Natural language processing will evolve and expand its impact through cross-domain thinking, which entails taking a concept from one field and applying it in a seemingly very different domain to enable new insights and innovation. Cross-domain thinking will boost interdisciplinary creativity and help create new mental pathways, expand perspectives and enhance problem-solving abilities. - Victor Shilo, EastBanc Technologies

    4. Complex Interactions Will Be Replaced By Simpler Commands

    The human experience with technology can be greatly improved with natural language as an extra modality of interaction. Today, we click, touch, type and issue basic voice commands, but as natural language is woven into every experience—and I'm not talking about chatbots—it will unlock a powerful tool where a very long sequence of clicks, swipes and touches can be replaced by a phrase. - Lucas Persona, CI&T

    5. Computers Will More Intelligently Interact With Humans

    AI using NLP will power the new user interface. We are in a world where humans work with computers in a language computers understand, using devices such as keyboards, mice and monitors. When computers can intelligently interact with humans in our language and environment, it will impact every aspect of technology. For the most part, technology devices will dissolve into everyday things we use. - Vishwas Manral, Cloud Security Alliance

    6. Companies Will Know Much More About Their Customers

    Companies and applications will know more about us and help make all our experiences more personalized; the experience we have now with applications such as Netflix and Spotify will be the experience we have across the Web. Foundational models will be trained on more data and become a lot more intuitive. NLP-powered applications will become much easier to use, as prompts won't need to be as technical, and they will be much better at predictive analytics. - Shayan Hamidi, Rechat

    7. NLP Models Will Become Highly Specialized

    AI will evolve at breakneck speed, with changes that once took years happening in weeks or days. NLP will be one of a myriad of GenAI-adjacent technologies, each growing more specialized and powerful as we leverage high-quality data for high-trust applications. Future models will get master's degrees in the fields for which they're used, making them far more reliable and useful for businesses. - Carolyn Parent, Conveyer

    8. 'Cooperative' LLMs Will Emerge

    The power and limits of monolithic large language models will become clearer, with a possible shift toward smaller, multimodal models with curated datasets and deeper levels of detail. I'd expect traditional systems of record (such as SAP, Oracle and ServiceNow) to lead in these efforts. These new "cooperative" LLMs will combine to bring more intelligent systems of engagement to market. - Shawn Rosemarin, Pure Storage

    9. Software Will Become A Partner, Not Just A Tool

    NLP technology has been around for decades, recently receiving much attention as it was integrated with large language models and generative AI. In the next five years, NLP will transform software from being a mere tool, as it is today, to becoming a partner that works alongside humans in both advisory and action-oriented capacities. NLP will become adaptable, making AI seem human and resulting in increased adoption. - Simon Bennett, AVEVA

    10. Digital Twins Will Develop From Regular Interactions

    The NLP field is moving toward ultra-personalization through the development of digital twins, created via continual learning from the everyday text and speech interactions of their physical counterparts. This evolution will facilitate the creation of highly personalized and context-aware digital entities such as assistants, mentors or therapists that are tailored to meet individual needs and preferences. - Ivan Tankoyeu, AI Superior GmbH

    11. Digital Assistants Will Become More Sophisticated

    NLP technology will evolve significantly. It will enhance and personalize user experiences and become more adept at understanding and interpreting user preferences. Digital assistants will become more sophisticated and capable of handling complex and nuanced conversations. NLP will also evolve to understand context and semantics more effectively, leading to more accurate and context-aware search results. - Amit Jain, Roadz

    12. NLP Will Accept More Varied Inputs And Sustain Richer Conversations

    Improved NLP models will better understand and maintain context during conversations and may become more adept at recognizing and responding to human emotions in text. Additionally, multimodal NLP models will integrate multiple modes of communication—such as text, images and video—enabling them to understand queries better and generate higher-quality content. - Syl Omope, Elekta

    13. Applications Will Become More Comprehensive And Context-Aware

    Natural language processing will evolve by integrating multimodal capabilities to better understand and generate content across different media, leading to more comprehensive and context-aware applications. In the healthcare industry, such multimodal NLP models could analyze both textual triage notes and relevant medical images or charts to provide a more holistic understanding of a patient's condition. - Kanishk Agrawal, Judge Group

    14. We'll Build Our Own Online Experiences

    I believe natural language processing, coupled with developments in large language models, will transform how we interact online. Currently, we heavily rely on others to build our online experiences. I think we will be able to flip that around and build our own experiences without tracking or surveillance technology such as cookies. - Caroline McCaffery, ClearOPS

    15. Digital Customer Service Will Expand And Improve

    NLP technologies are expected to enhance their capabilities to generate more effective, conversational customer service responses and manage more complicated inquiries. This will allow businesses to automate tasks such as responding to repetitive customer inquiries, freeing up employees to tackle more complex activities while providing a better customer experience. - Rick Watkin, KUBRA

    16. Conversational Histories Will Lead To More 'Natural' Interactions

    References to conversational history often serve as a shortcut to evoke context. AI agents with NLP will make use of the ever-growing accumulation of conversations to increase the "naturalness" and personal tone of interactions. - Viktor Trón, Swarm
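A minimal sketch of that mechanism: the agent keeps a running log of turns and replays it as context on every call. The `generate` function here is a hypothetical stand-in for any language model:

```python
# Sketch of a history-aware agent: accumulated turns are fed back as context.
def generate(prompt: str) -> str:
    return "Sure, same time as last week?"  # stand-in for a real model reply

class Agent:
    def __init__(self) -> None:
        self.history: list[tuple[str, str]] = []  # (speaker, utterance)

    def reply(self, user_utterance: str) -> str:
        self.history.append(("user", user_utterance))
        # Replay the entire conversation so far so the model can evoke prior context.
        context = "\n".join(f"{who}: {what}" for who, what in self.history)
        answer = generate(context + "\nagent:")
        self.history.append(("agent", answer))
        return answer

agent = Agent()
print(agent.reply("Can we schedule our usual call?"))
```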

    17. Language Barriers Will Be Overcome

    In the next five years, natural language processing will have a major impact on multilingual capabilities and addressing language diversity challenges. NLP will focus on bridging cultural and language barriers to facilitate global communication. For global businesses, this plays a crucial role in bridging linguistic and cultural gaps, enabling more inclusive communication across the globe. - Joe McCunney, Scalar Labs

    18. Automation Initiatives Will Become Simpler And Less Costly

    The past decades have seen impressive efficiency gains inside large enterprises and governments through the automation of tasks traditionally performed by back-office workers, such as data entry and claims processing. Digitization projects typically cost millions of dollars, require external consultants and take years to complete, but LLMs will make the digitization process much easier and cheaper. - Patricia Thaine, Private AI

    19. NLP-Enabled Cybersecurity Solutions Will Detect Subtle Warning Signs

    NLP is already having a significant impact on the field of cybersecurity, where its ability to recognize patterns in communication has proven invaluable in the war against advanced phishing. By identifying patterns in users' day-to-day communications, NLP-enabled security solutions can then detect the subtle deviations indicative of impersonation, account takeover and other types of fraud. - Eyal Benishti, IRONSCALES
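As a simplified sketch of the idea (real products use far richer models), one could profile a user's past messages and flag new ones whose wording deviates sharply from that baseline. The similarity threshold and sample data here are invented:

```python
# Sketch of text-based anomaly detection: compare new messages to a user's baseline.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

past_messages = [
    "hi team, attaching the weekly report",
    "can we move the standup to 10am?",
]
baseline = vectorize(" ".join(past_messages))  # the user's day-to-day writing profile

incoming = "URGENT wire transfer needed immediately click this link"
similarity = cosine(baseline, vectorize(incoming))
if similarity < 0.2:  # invented threshold; would be tuned on real data
    print(f"possible impersonation (similarity {similarity:.2f})")
```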

    20. NLP Technology Will Extend To Nonverbal Communication

NLP evolves continually as unstructured data accumulates, so let's be bold in predicting its possibilities over the next five years. As data grows, NLP, which has so far focused on human language, will extend to nonverbal communication. NLP and biometrics will enable human-like robots to see, touch, hear and speak like humans. - Paula Kennedy Garcia


    The Rise Of NLP Startups


    Artificial intelligence has been the king of recent tech advancements. Breakthrough after breakthrough, individuals, small and medium-sized businesses, and enterprises have been the lucky beneficiaries of the current wave of AI madness – from automation to generative AI. 

Today's AI high is just the beginning. In fact, experts anticipate the AI market to grow at an annual rate of roughly 37% from 2023 through 2030, signaling a promising era of innovation, disruption, and transformative possibilities in the years ahead.

One of the most notable AI subfields is Natural Language Processing (NLP). Sometimes described as linguistic AI, NLP focuses on the interaction between computers and human language. Since its emergence, a remarkable number of NLP startups have taken over the scene, overhauling how people communicate and making technology more accessible, intuitive, and responsive to human needs.

    A Brief History of NLP

While NLP has only seemingly emerged in recent years, its history dates back to the 1950s, when Alan Turing introduced what became known as the "Turing test" in his article "Computing Machinery and Intelligence." This test has since become widely recognized as a benchmark for assessing AI's capacity for human-like intelligence, including linguistic capabilities.

During the 1980s, IBM emerged as a prominent player in advancing NLP through the development of several successful statistical models. This shift toward statistical methods represented a departure from rigid rule-based systems and paved the way for more flexible, data-driven approaches in the field of NLP.

    Welcoming Siri

In 2011, Apple introduced Siri, marking a significant milestone as one of the world's first mainstream NLP-powered AI assistants. Siri's innovative system was among the earliest to achieve widespread success. Its functionality revolves around a sophisticated automated speech recognition module that translates the user's spoken words into digital concepts. Its voice-command system then matches these concepts with predefined commands, thereby initiating specific actions. This capability heralded a new era in human-computer interaction, making technology more accessible and user-friendly for a broader audience.
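A drastically simplified sketch of that pipeline (not Apple's actual implementation): a transcript from a speech recognizer is matched against predefined commands, which then trigger actions. Both `recognize_speech` and the command table are invented stand-ins:

```python
# Sketch of a voice-assistant pipeline: speech recognition -> command matching -> action.
def recognize_speech(audio: bytes) -> str:
    return "set a timer for ten minutes"  # stand-in for a real ASR module

COMMANDS = {
    "set a timer": lambda text: print(f"starting timer: {text}"),
    "play music":  lambda text: print("opening the music player"),
}

def dispatch(transcript: str) -> None:
    """Match the transcript against predefined commands and run the matching action."""
    for trigger, action in COMMANDS.items():
        if trigger in transcript.lower():
            action(transcript)
            return
    print("sorry, I didn't understand that")

dispatch(recognize_speech(b""))
```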

This breakthrough became one of the catalysts for the multitude of NLP startups now revolutionizing the global business ecosystem.

    Disrupting Global Industries

    The worldwide market for NLP is anticipated to undergo substantial expansion, with a projected increase from $24.10 billion in 2023 to a significant $112.28 billion by the year 2030. This growth trajectory reflects a robust CAGR of 24.6%, signifying the increasing significance and adoption of NLP technologies across various industries and applications.
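As a quick sanity check on those figures, the compound annual growth rate implied by growing from $24.10 billion to $112.28 billion over seven years can be computed directly:

```python
# Verify the CAGR implied by the cited 2023 and 2030 market figures.
start, end, years = 24.10, 112.28, 7
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~24.6%, matching the cited figure
```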

Today, NLP startups are working to overhaul business processes, streamline workflows, and optimize organizational revenue generation. One of today's key players is Novacy, a behavioral intelligence platform that uses NLP to help B2B revenue teams close more deals by deciphering prospects' non-verbal cues, understanding their underlying perceptions, and decoding the things that are often left unsaid. The company aims to turn psychology into technology and empower humans to understand humans. Moreover, Novacy cuts 85% of call analysis time through AI-powered summaries, transcripts, call snippets, and seller insights.

    Another NLP capacity is to convert speech to text with unmatched accuracy. Deepgram is leveraging this power to make voice intelligence available to all with faster, more accurate, and more scalable speech recognition made through end-to-end deep learning.

    The healthcare industry has seen unprecedented improvements in patient care and treatment plans with the rise of NLP. One startup that leads this movement is Marigold Health, which employs innovative NLP techniques to seamlessly merge text-based peer support groups with substance use and behavioral health care services. This groundbreaking approach provides individuals facing stigmatization with round-the-clock access to tailored care. Simultaneously, it empowers existing care managers and peer coaches, enabling them to effectively handle a tenfold increase in patient capacity.

    Conclusion

    The momentum that NLP startups have created these past several years is just the tip of the iceberg. With the pace of these breakthroughs, progressive founders are poised to continue reshaping human interaction with technology – from streamlining processes to addressing long-standing challenges – promising a future marked by sustained innovation and transformative possibilities.





