Parsing in Natural Language Processing

5 Natural Language Processing Libraries To Use

Natural language processing (NLP) is important because it enables machines to understand, interpret and generate human language, which is the primary means of communication between people. By using NLP, machines can analyze and make sense of large amounts of unstructured textual data, improving their ability to assist humans in various tasks, such as customer service, content creation and decision-making.

Additionally, NLP can help bridge language barriers, improve accessibility for individuals with disabilities, and support research in various fields, such as linguistics, psychology and social sciences.

Here are five NLP libraries that can be used for various purposes, as discussed below.

NLTK (Natural Language Toolkit)

One of the most widely used programming languages for NLP is Python, which has a rich ecosystem of libraries and tools for NLP, including the NLTK. Python's popularity in the data science and machine learning communities, combined with the ease of use and extensive documentation of NLTK, has made it a go-to choice for many NLP projects.

NLTK is a widely used NLP library in Python. It offers machine-learning tools for tokenization, stemming, tagging and parsing. NLTK is great for beginners and is used in many academic courses on NLP.

Tokenization is the process of dividing a text into more manageable pieces, such as individual words, phrases or sentences. It gives the text a structure that makes programmatic analysis and manipulation easier, and it is a frequent pre-processing step in NLP applications such as text categorization and sentiment analysis.
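
As a minimal sketch of tokenization with NLTK (this assumes the library is installed and the "punkt" tokenizer models have been downloaded via nltk.download), splitting a text into sentences and words might look like this:

```python
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

# one-time setup (downloads the tokenizer models):
# nltk.download("punkt")

text = "NLP is useful. It powers chatbots, search and translation."

print(sent_tokenize(text))  # ['NLP is useful.', 'It powers chatbots, search and translation.']
print(word_tokenize(text))  # ['NLP', 'is', 'useful', '.', 'It', 'powers', ...]
```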

Stemming reduces words to their base or root form. For instance, "run" is the root of the terms "running," "runner," and "run." Tagging involves identifying each word's part of speech (POS) within a document, such as a noun, verb or adjective. In many NLP applications, such as text analysis or machine translation, where knowing the grammatical structure of a phrase is critical, POS tagging is a crucial step.
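
A brief sketch of stemming and POS tagging with NLTK (this assumes the "averaged_perceptron_tagger" model has been downloaded; note that a heuristic stemmer like Porter leaves some derived words, such as "runner", unchanged):

```python
import nltk
from nltk.stem import PorterStemmer

# one-time setup: nltk.download("averaged_perceptron_tagger")

stemmer = PorterStemmer()
print([stemmer.stem(w) for w in ["running", "runner", "runs"]])
# ['run', 'runner', 'run'] - heuristic stemming is approximate

tokens = ["the", "dog", "runs", "fast"]
print(nltk.pos_tag(tokens))
# roughly [('the', 'DT'), ('dog', 'NN'), ('runs', 'VBZ'), ('fast', 'RB')]
```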

Parsing is the process of analyzing the grammatical structure of a sentence to identify the relationships between the words. Parsing involves breaking down a sentence into constituent parts, such as subject, object, verb, etc. Parsing is a crucial step in many NLP tasks, such as machine translation or text-to-speech conversion, where understanding the syntax of a sentence is important.
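
Full syntactic parsing is a large topic, but NLTK's RegexpParser supports a simple shallow form of it, called chunking. In this sketch, the chunk grammar is an illustrative assumption that groups an optional determiner, any adjectives and a noun into a noun phrase:

```python
import nltk

# assumes: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
sentence = "The quick brown fox jumps over the lazy dog"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))

# shallow parse (chunking): optional determiner + adjectives + noun => NP
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN>}")
print(chunker.parse(tagged))  # a Tree with NP chunks like (NP The/DT quick/JJ brown/JJ fox/NN)
```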

Related: How to improve your coding skills using ChatGPT?

SpaCy

SpaCy is a fast and efficient NLP library for Python. It is designed to be easy to use and provides tools for entity recognition, part-of-speech tagging, dependency parsing and more. SpaCy is widely used in the industry for its speed and accuracy.

Dependency parsing is a natural language processing technique that examines the grammatical structure of a sentence by determining the syntactic and semantic dependencies between words, then building a parse tree that captures these relationships.
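
As an illustrative sketch (assuming the small English model en_core_web_sm has been installed with python -m spacy download en_core_web_sm), dependency parsing and named entity recognition in spaCy look roughly like this:

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in London.")

# each token points to its syntactic head; together these links form the dependency tree
for token in doc:
    print(token.text, token.dep_, "->", token.head.text)

# named entities detected in the same pass
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Apple" ORG, "London" GPE
```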

Stanford CoreNLP

Stanford CoreNLP is a Java-based NLP library that provides tools for a variety of NLP tasks, such as sentiment analysis, named entity recognition, dependency parsing and more. It is known for its accuracy and is used by many organizations.

Sentiment analysis is the process of analyzing and determining the subjective tone or attitude of a text, while named entity recognition is the process of identifying and extracting named entities, such as names, locations and organizations, from a text.
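
CoreNLP itself is a Java library, but Stanford's NLP group also publishes Stanza, a Python package from the same research lineage. As a rough sketch (model download assumed; exact entity labels and sentiment scales vary by model), NER and sentence-level sentiment might look like:

```python
import stanza

# one-time setup: stanza.download("en")
nlp = stanza.Pipeline("en", processors="tokenize,ner,sentiment")

doc = nlp("Stanford University is in California. I love this library!")

for ent in doc.ents:
    print(ent.text, ent.type)         # e.g. "Stanford University" ORG

for sent in doc.sentences:
    print(sent.text, sent.sentiment)  # 0 = negative, 1 = neutral, 2 = positive
```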

Gensim

Gensim is an open-source library for topic modeling, document similarity analysis and other NLP tasks. It provides tools for algorithms such as latent Dirichlet allocation (LDA) and word2vec for generating word embeddings.

LDA is a probabilistic model used for topic modeling, where it identifies the underlying topics in a set of documents. Word2vec is a neural network-based model that learns to map words to vectors, enabling semantic analysis and similarity comparisons between words.
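
A toy sketch of both algorithms in Gensim (the corpus and hyperparameters below are illustrative assumptions; real topic models need far more text to produce meaningful topics):

```python
from gensim import corpora
from gensim.models import LdaModel, Word2Vec

# tiny pre-tokenized "documents" - purely illustrative
texts = [
    ["cat", "dog", "pet", "vet"],
    ["dog", "puppy", "pet", "walk"],
    ["stock", "market", "trade", "price"],
    ["market", "price", "invest", "stock"],
]

# LDA: bag-of-words corpus -> latent topics
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(doc) for doc in texts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=20)
print(lda.print_topics())

# word2vec: words -> dense vectors, enabling similarity queries
w2v = Word2Vec(sentences=texts, vector_size=32, min_count=1, epochs=100)
print(w2v.wv.most_similar("dog"))
```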

TensorFlow

TensorFlow is a popular machine-learning library that can also be used for NLP tasks. It provides tools for building neural networks for tasks such as text classification, sentiment analysis and machine translation. TensorFlow is widely used in industry and has a large support community.

Classifying text into predetermined groups or classes is known as text classification. Sentiment analysis examines a text's subjective tone to ascertain the author's attitude or feelings. Machine translation converts text from one language into another. While all three tasks use natural language processing techniques, their objectives are distinct.
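
A minimal text-classification sketch with TensorFlow's Keras API (the tiny dataset and hyperparameters are illustrative assumptions; a real model needs thousands of labeled examples):

```python
import tensorflow as tf

# toy sentiment data: 1 = positive, 0 = negative
texts = ["great product", "loved it", "terrible service", "awful experience"]
labels = [1.0, 1.0, 0.0, 0.0]

# map raw strings to integer token ids
vectorize = tf.keras.layers.TextVectorization(max_tokens=1000, output_sequence_length=8)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    vectorize,                                       # strings -> token ids
    tf.keras.layers.Embedding(1000, 16),             # token ids -> dense vectors
    tf.keras.layers.GlobalAveragePooling1D(),        # average over the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "positive"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=20, verbose=0)

print(model.predict(tf.constant(["really great service"])))
```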

Can NLP libraries and blockchain be used together?

NLP libraries and blockchain are two distinct technologies, but they can be used together in various ways. For instance, text-based content on blockchain platforms, such as smart contracts and transaction records, can be analyzed and understood using NLP approaches.

NLP can also be applied to creating natural language interfaces for blockchain applications, allowing users to communicate with the system using everyday language. The integrity and privacy of user data can be guaranteed by using blockchain to protect and validate NLP-based apps, such as chatbots or sentiment analysis tools.

Related: Data protection in AI chatting: Does ChatGPT comply with GDPR standards?


What Is Natural Language Processing?


Natural language processing (NLP) is a branch of artificial intelligence (AI) that focuses on enabling computers to work with speech and text in a manner similar to human understanding. This area of computer science relies on computational linguistics—typically based on statistical and mathematical methods—that model human language use.

NLP plays an increasingly prominent role in computing—and in the everyday lives of humans. Smart assistants such as Apple's Siri, Amazon's Alexa and Microsoft's Cortana are examples of systems that use NLP.

In addition, various other tools rely on natural language processing. Among them: navigation systems in automobiles; speech-to-text transcription systems such as Otter and Rev; chatbots; and voice recognition systems used for customer support. In fact, NLP appears in a rapidly expanding universe of applications, tools, systems and technologies.

In every instance, the goal is to simplify the interface between humans and machines. In many cases, the ability to speak to a system or have it recognize written input is the simplest and most straightforward way to accomplish a task.

While computers cannot "understand" language the same way humans do, natural language technologies are increasingly adept at recognizing the context and meaning of phrases and words and transforming them into appropriate responses—and actions.

Also see: Top Natural Language Processing Companies

Natural Language Processing: A Brief History

The idea of machines understanding human speech extends back to early science fiction novels. However, the field of natural language processing began to take shape in the 1950s, after computing pioneer Alan Turing published an article titled "Computing Machinery and Intelligence." It introduced the Turing Test, which provided a basic way to gauge a computer's natural language abilities.

During the ensuing decade, researchers experimented with computers translating novels and other documents across spoken languages, though the process was extremely slow and prone to errors. In the 1960s, MIT professor Joseph Weizenbaum developed ELIZA, which mimicked human speech patterns remarkably well. Over the next quarter century, the field continued to evolve. As computing systems became more powerful in the 1990s, researchers began to achieve notable advances using statistical modeling methods.

Dictation and language translation software began to mature in the 1990s. However, early systems required training and were slow, cumbersome to use and prone to errors. It wasn't until the introduction of supervised and unsupervised machine learning in the early 2000s, and then the introduction of neural nets around 2010, that the field began to advance in a significant way.

With these developments, deep learning systems were able to digest massive volumes of text and other data and process it using far more advanced language modeling methods. The resulting algorithms became far more accurate and useful.

Also see: Top AI Software 

How Does Natural Language Processing Work?

Early NLP systems relied on hard-coded rules, dictionary lookups and statistical methods to do their work. They frequently supported basic decision-tree models. Eventually, machine learning automated many of these tasks while improving results.

Today's natural language processing frameworks use far more advanced—and precise—language modeling techniques. Most of these methods rely on deep neural networks, most prominently transformer architectures, to study language patterns and develop probability-based outcomes.

For example, a method called word vectors applies complex mathematical models to weight and relate words, phrases and constructs. Another method, Recognizing Textual Entailment (RTE), classifies relationships between words and sentences through the lens of entailment, contradiction or neutrality. For instance, the premise "a dog has paws" entails that "dogs have legs" but contradicts "dogs have wings" while remaining neutral toward "all dogs are happy."
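
To make the word-vector idea concrete, here is a toy sketch: the three-dimensional vectors are hand-made assumptions (real models learn hundreds of dimensions from data), and cosine similarity is the standard way to measure how related two vectors are:

```python
import numpy as np

# hand-made toy vectors; real embeddings are learned, with hundreds of dimensions
vectors = {
    "dog":   np.array([0.80, 0.30, 0.10]),
    "puppy": np.array([0.75, 0.35, 0.15]),
    "car":   np.array([0.10, 0.90, 0.70]),
}

def cosine(a, b):
    # 1.0 = same direction (closely related), near 0.0 = unrelated
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["dog"], vectors["puppy"]))  # high: related words
print(cosine(vectors["dog"], vectors["car"]))    # lower: unrelated words
```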

A key part of NLP is word embedding, which refers to establishing numerical weightings for words in a specific context. The process is necessary because many words and phrases can mean different things in different contexts (go to a club, belong to a club or swing a club). Words can also be pronounced the same way but mean different things (through and threw, or witch and which). There's also a need to understand idiomatic phrases that do not make sense literally, such as "you are the apple of my eye" or "it doesn't cut the mustard."

Today's models are trained on enormous volumes of language data—in some cases several hundred gigabytes of books, magazine articles, websites, technical manuals, emails, song lyrics, stage plays, scripts and publicly available sources such as Wikipedia. As deep learning systems parse through millions or even billions of combinations—relying on hundreds of thousands of CPU or GPU cores—they analyze patterns, connect the dots and learn the semantic properties of words and phrases.

It's also often necessary to refine natural language processing systems for specific tasks, such as a chatbot or a smart speaker. But even after this takes place, a natural language processing system may not always work as billed. Even the best NLP systems make errors. They can encounter problems when people misspell or mispronounce words, and they sometimes misunderstand intent and translate phrases incorrectly. In some cases, these errors can be glaring—or even catastrophic.

Today, prominent natural language models are available under licensing models. These include OpenAI Codex, Google's LaMDA, IBM Watson and software development tools such as Amazon CodeWhisperer and GitHub Copilot. In addition, some organizations build their own proprietary models.

How is Natural Language Processing Used?

There is a growing array of uses for natural language processing. These include:

Conversational AI. The ability of computers to recognize words introduces a variety of applications and tools. Personal assistants like Siri, Alexa and Microsoft Cortana are prominent examples of conversational AI. They allow humans to make a call from a mobile phone while driving or switch lights on or off in a smart home. Increasingly, these systems understand intent and act accordingly. For example, chatbots can respond to human voice or text input with responses that seem as if they came from another person. What's more, these systems use machine learning to constantly improve.

Machine translation. There's a growing use of NLP for machine translation tasks. These include language translations that replace words in one language for another (English to Spanish or French to Japanese, for example). Google Translate and DeepL are examples of this technology. But machine translation can also take other forms. For example, NLP can convert spoken words—either in the form of a recording or live dictation—into subtitles on a TV show or a transcript from a Zoom or Microsoft Teams meeting. Yet while these systems are increasingly accurate and valuable, they continue to generate some errors.

Sentiment analysis. NLP has the ability to parse through unstructured data—social media analysis is a prime example—extract common word and phrasing patterns and transform this data into a guidepost for how social media and online conversations are trending. This capability is also valuable for understanding product reviews, the effectiveness of advertising campaigns, how people are reacting to news and other events, and various other purposes. Sentiment analysis finds things that might otherwise evade human detection.

Content analysis. Another use case for NLP is making sense of complex systems. For example, the technology can digest huge volumes of text data and research databases and create summaries or abstracts that relate to the most pertinent and salient content. Similarly, content analysis can be used for cybersecurity, including spam detection. These systems can reduce or eliminate the need for manual human involvement.

Text and image generation. A rapidly emerging part of natural language processing focuses on text, image and even music generation. Already, some news organizations produce short articles using natural language processing. Meanwhile, OpenAI has developed a tool that generates text and computer code through a natural language interface. Another OpenAI tool, DALL-E 2, creates high-quality images through an NLP interface. Type the words "black cat under a stairway" and an image appears. GitHub Copilot and Amazon CodeWhisperer can auto-complete and auto-generate computer code through natural language.

Also see: Top Data Visualization Tools 

NLP Business Use Cases

The use of NLP is increasingly common in the business world. Among the top use cases:

Chatbots and voice interaction systems. Retailers, health care providers and others increasingly rely on chatbots to interact with customers, answer basic questions and route customers to other online resources. These systems can also connect a customer to a live agent, when necessary. Voice systems allow customers to verbally say what they need rather than push buttons on the phone.

Transcription. As organizations shift to virtual meetings on Zoom and Microsoft Teams, there's often a need for a transcript of the conversation. Services such as Otter and Rev deliver highly accurate transcripts—and they're often able to understand foreign accents better than humans. In addition, journalists, attorneys, medical professionals and others require transcripts of audio recordings. NLP can deliver results from dictation and recordings within seconds or minutes.

International translation. NLP has revolutionized interactions between businesses in different countries. While the need for translators hasn't disappeared, it's now easy to convert documents from one language to another. This has simplified interactions and business processes for global companies while simplifying global trade.

Scoring systems. Natural language processing is used by financial institutions, insurance companies and others to extract elements and analyze documents, data, claims and other text-based resources. The same technology can also aid in fraud detection, financial auditing, resume evaluation and spam detection. In fact, the latter represents a type of supervised machine learning that connects to NLP.

Market intelligence and sentiment analysis. Marketers and others increasingly rely on NLP to deliver market intelligence and sentiment trends. Semantic engines scrape content from blogs, news sites, social media sources and other sites in order to detect trends, attitudes and actual behaviors. Similarly, NLP can help organizations understand website behavior, such as search terms that identify common problems and how people use an e-commerce site. This data can lead to design and usability changes.

Software development. A growing trend is the use of natural language for software coding. Low-code and no-code environments can transform spoken and written requests into actual lines of software code. Systems such as Amazon's CodeWhisperer and GitHub's CoPilot include predictive capabilities that autofill code in much the same way that Google Mail predicts what a person will type next. They also can pull information from an integrated development environment (IDE) and produce several lines of code at a time.

Text and image generation. OpenAI Codex can generate entire documents based on a basic request. This makes it possible to generate poems, articles and other text. OpenAI's DALL-E 2 generates photorealistic images and art through natural language input. This can aid designers, artists and others.

Also see: Best Data Analytics Tools 

What Ethical Concerns Exist for NLP?

Concerns about natural language processing are heavily centered on the accuracy of models and ensuring that bias doesn't occur. Many of these deep learning algorithms are so-called "black boxes," meaning that there's no way to understand how the underlying model works and whether it is free of biases that could affect critical decisions about lending, healthcare and more.

There is also debate about whether these systems are "sentient." The question of whether AI can actually think and feel like a human has been explored in films such as 2001: A Space Odyssey and Star Wars. It also reappeared in 2022, when then-Google engineer Blake Lemoine published human-to-machine discussions with LaMDA and claimed that the system had gained sentience. However, numerous linguistics experts and computer scientists countered that a silicon-based system cannot think and feel the way humans do; it merely parrots language in a highly convincing way.

In fact, researchers who have experimented with NLP systems have been able to generate egregious and obvious errors by inputting certain words and phrases. Getting to 100% accuracy in NLP is nearly impossible because of the nearly infinite number of word and conceptual combinations in any given language.

Another issue is ownership of content—especially when copyrighted material is fed into the deep learning model. Because many of these systems are built from publicly available sources scraped from the Internet, questions can arise about who actually owns the model or material, or whether contributors should be compensated. This has so far resulted in a handful of lawsuits along with broader ethical questions about how models should be developed and trained.

Also see: AI vs. ML: Artificial Intelligence and Machine Learning

What Role Will NLP Play in the Future?

There's no question that natural language processing will play a prominent role in future business and personal interactions. Personal assistants, chatbots and other tools will continue to advance. This will likely translate into systems that understand more complex language patterns and deliver automated but accurate technical support or instructions for assembling or repairing a product.

NLP will also lead to more advanced analysis of medical data. For example, a doctor might input patient symptoms and a database using NLP would cross-check them with the latest medical literature. Or a consumer might visit a travel site and say where she wants to go on vacation and what she wants to do. The site would then deliver highly customized suggestions and recommendations, based on data from past trips and saved preferences.

For now, business leaders should follow the natural language processing space—and continue to explore how the technology can improve products, tools, systems and services. The ability for humans to interact with machines on their own terms simplifies many tasks. It also adds value to business relationships.

Also see: The Future of Artificial Intelligence


Natural Language Processing: Crash Course Computer Science #36

Hi, I'm Carrie Anne, and welcome to Crash Course Computer Science!

Last episode we talked about computer vision - giving computers the ability to see and understand visual information.

Today we're going to talk about how to give computers the ability to understand language.

You might argue they've always had this capability.

Back in Episodes 9 and 12, we talked about machine language instructions, as well as higher-level programming languages.

While these certainly meet the definition of a language, they also tend to have small vocabularies and follow highly structured conventions.

Code will only compile and run if it's 100 percent free of spelling and syntactic errors.

Of course, this is quite different from human languages - what are called natural languages - containing large, diverse vocabularies, words with several different meanings, speakers with different accents, and all sorts of interesting word play.

People also make linguistic faux pas when writing and speaking, like slurring words together, leaving out key details so things are ambiguous, and mispronouncing things.

But, for the most part, humans can roll right through these challenges.

The skillful use of language is a major part of what makes us human.

And for this reason, the desire for computers to understand and speak our language has been around since they were first conceived.

This led to the creation of Natural Language Processing, or NLP, an interdisciplinary field combining computer science and linguistics.

There's an essentially infinite number of ways to arrange words in a sentence.

We can't give computers a dictionary of all possible sentences to help them understand what humans are blabbing on about.

So an early and fundamental NLP problem was deconstructing sentences into bite-sized pieces, which could be more easily processed.

In school, you learned about nine fundamental types of English words: nouns, pronouns, articles, verbs, adjectives, adverbs, prepositions, conjunctions, and interjections.

These are called parts of speech.

There are all sorts of subcategories too, like singular vs. plural nouns and superlative vs. comparative adverbs, but we're not going to get into that.

Knowing a word's type is definitely useful, but unfortunately, there are a lot of words that have multiple meanings - like "rose" and "leaves", which can be used as nouns or verbs.

A digital dictionary alone isn't enough to resolve this ambiguity, so computers also need to know some grammar.

For this, phrase structure rules were developed, which encapsulate the grammar of a language.

For example, in English there's a rule that says a sentence can be composed of a noun phrase followed by a verb phrase.

Noun phrases can be an article, like "the", followed by a noun or they can be an adjective followed by a noun.

And you can make rules like this for an entire language.

Then, using these rules, it's fairly easy to construct what's called a parse tree, which not only tags every word with a likely part of speech, but also reveals how the sentence is constructed.
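
A small sketch of this idea using NLTK (the grammar below is a toy assumption covering just the episode's example sentence, not a full grammar of English):

```python
import nltk

# toy phrase structure rules: a sentence is a noun phrase followed by a verb phrase
grammar = nltk.CFG.fromstring("""
S  -> NP VP
NP -> Det N
VP -> V NP | V
Det -> 'the'
N  -> 'mongols' | 'leaves'
V  -> 'rose'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the mongols rose".split()):
    tree.pretty_print()  # prints the parse tree, tagging each word's role
```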

We now know, for example, that the noun focus of this sentence is "the Mongols", and we know it's about them doing the action of "rising" from something, in this case, "leaves".

These smaller chunks of data allow computers to more easily access, process and respond to information.

Equivalent processes are happening every time you do a voice search, like: "where's the nearest pizza".

The computer can recognize that this is a "where" question, knows you want the noun "pizza", and the dimension you care about is "nearest".

The same process applies to "what is the biggest giraffe?"

or "who sang thriller?"

By treating language almost like Lego, computers can be quite adept at natural language tasks.

They can answer questions and also process commands, like "set an alarm for 2:20" or "play T-Swizzle on Spotify".

But, as you've probably experienced, they fail when you start getting too fancy, and they can no longer parse the sentence correctly, or capture your intent.

Hey Siri... Methinks the Mongols doth roam too much, what think ye on this most gentle mid-summer's day?

Siri: I'm not sure I got that.

I should also note that phrase structure rules, and similar methods that codify language, can be used by computers to generate natural language text.

This works particularly well when data is stored in a web of semantic information, where entities are linked to one another in meaningful relationships, providing all the ingredients you need to craft informational sentences.

Siri: Thriller was released in 1983 and sung by Michael Jackson.

Google's version of this is called Knowledge Graph.

At the end of 2016, it contained roughly seventy billion facts about, and relationships between, different entities.

These two processes, parsing and generating text, are fundamental components of natural language chatbots - computer programs that chat with you.

Early chatbots were primarily rule-based, where experts would encode hundreds of rules mapping what a user might say, to how a program should reply.

Obviously this was unwieldy to maintain and limited the possible sophistication.

A famous early example was ELIZA, created in the mid-1960s at MIT.

This was a chatbot that took on the role of a therapist, and used basic syntactic rules to identify content in written exchanges, which it would turn around and ask the user about.

Sometimes, it felt very much like human-human communication, but other times it would make simple and even comical mistakes.

Chatbots, and more advanced dialog systems, have come a long way in the last fifty years, and can be quite convincing today!

Modern approaches are based on machine learning, where gigabytes of real human-to-human chats are used to train chatbots.

Today, the technology is finding use in customer service applications, where there's already heaps of example conversations to learn from.

People have also been getting chatbots to talk with one another, and in a Facebook experiment, chatbots even started to evolve their own language.

This experiment got a bunch of scary-sounding press, but it was just the computers crafting a simplified protocol to negotiate with one another.

It wasn't evil, it was efficient.

But what about if something is spoken - how does a computer get words from the sound?

That's the domain of speech recognition, which has been the focus of research for many decades.

Bell Labs debuted the first speech recognition system in 1952, nicknamed Audrey - the automatic digit recognizer.

It could recognize all ten numerical digits, if you said them slowly enough.

5... 9... 7?

The project didn't go anywhere because it was much faster to enter telephone numbers with a finger.

Ten years later, at the 1962 World's Fair, IBM demonstrated a shoebox-sized machine capable of recognizing sixteen words.

To boost research in the area, DARPA kicked off an ambitious five-year funding initiative in 1971, which led to the development of Harpy at Carnegie Mellon University.

Harpy was the first system to recognize over a thousand words.

But, on computers of the era, transcription was often ten or more times slower than the rate of natural speech.

Fortunately, thanks to huge advances in computing performance in the 1980s and 90s, continuous, real-time speech recognition became practical.

There was simultaneous innovation in the algorithms for processing natural language, moving from hand-crafted rules, to machine learning techniques that could learn automatically from existing datasets of human language.

Today, the speech recognition systems with the best accuracy are using deep neural networks, which we touched on in Episode 34.

To get a sense of how these techniques work, let's look at some speech, specifically, the acoustic signal.

Let's start by looking at vowel sounds, like "aaaaa" and "eeeeee".

These are the waveforms of those two sounds, as captured by a computer's microphone.

As we discussed in Episode 21 - on Files and File Formats - this signal is the magnitude of displacement, of a diaphragm inside of a microphone, as sound waves cause it to oscillate.

In this view of sound data, the horizontal axis is time, and the vertical axis is the magnitude of displacement, or amplitude.

Although we can see there are differences between the waveforms, it's not super obvious what you would point at to say, "ah ha!

this is definitely an eeee sound".

To really make this pop out, we need to view the data in a totally different way: a spectrogram.

In this view of the data, we still have time along the horizontal axis, but now instead of amplitude on the vertical axis, we plot the magnitude of the different frequencies that make up each sound.

The brighter the color, the louder that frequency component.

This conversion from waveform to frequencies is done with a very cool algorithm called a Fast Fourier Transform.

If you've ever stared at a stereo system's EQ visualizer, it's pretty much the same thing.

A spectrogram is plotting that information over time.
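
A short sketch of that pipeline with NumPy and SciPy (the signal here is a synthetic stand-in for a recorded vowel; the frequencies are assumptions chosen to mimic a fundamental plus formant-like overtones):

```python
import numpy as np
from scipy import signal

fs = 16_000                                   # sample rate in Hz
t = np.linspace(0, 1.0, fs, endpoint=False)

# synthetic "vowel": 200 Hz fundamental plus two formant-like overtones
wave = (np.sin(2 * np.pi * 200 * t)
        + 0.5 * np.sin(2 * np.pi * 800 * t)
        + 0.3 * np.sin(2 * np.pi * 2400 * t))

# FFT-based spectrogram: power per frequency bin (rows) per time window (columns)
freqs, times, Sxx = signal.spectrogram(wave, fs=fs, nperseg=512)

# the brightest bands should sit near 200, 800 and 2400 Hz
print(freqs[Sxx.mean(axis=1).argsort()[-3:]])
```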

You might have noticed that the signals have a sort of ribbed pattern to them - that's all the resonances of my vocal tract.

To make different sounds, I squeeze my vocal cords, mouth and tongue into different shapes, which amplifies or dampens different resonances.

We can see this in the signal, with areas that are brighter, and areas that are darker.

If we work our way up from the bottom, labeling where we see peaks in the spectrum - what are called formants - we can see the two sounds have quite different arrangements.

And this is true for all vowel sounds.

It's exactly this type of information that lets computers recognize spoken vowels, and indeed, whole words.

Let's see a more complicated example, like when I say: "she... was... happy". We can see our "eee" sound here, and "aaa" sound here.

We can also see a bunch of other distinctive sounds, like the "shh" sound in "she", the "wah" and "sss" in "was", and so on.

These sound pieces, that make up words, are called phonemes.

Speech recognition software knows what all these phonemes look like.

In English, there are roughly forty-four, so it mostly boils down to fancy pattern matching.

Then you have to separate words from one another, figure out when sentences begin and end... and ultimately, you end up with speech converted into text, allowing for techniques like we discussed at the beginning of the episode.

Because people say words in slightly different ways, due to things like accents and mispronunciations, transcription accuracy is greatly improved when combined with a language model, which contains statistics about sequences of words.

For example "she was" is most likely to be followed by an adjective, like "happy".

It's uncommon for "she was" to be followed immediately by a noun.

So if the speech recognizer was unsure between, "happy" and "harpy", it'd pick "happy", since the language model would report that as a more likely choice.
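
A tiny sketch of such a language model (the corpus is an illustrative assumption; real models are trained on vast text collections and use more sophisticated smoothing):

```python
from collections import Counter

# toy training text; real language models use enormous corpora
corpus = "she was happy . she was tired . he was happy .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(prev, word):
    # P(word | prev) with add-one smoothing so unseen pairs aren't impossible
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + len(unigrams))

# the recognizer is unsure between two acoustically similar words:
for candidate in ("happy", "harpy"):
    print(candidate, bigram_prob("was", candidate))  # "happy" scores higher
```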

Finally, we need to talk about Speech Synthesis, that is, giving computers the ability to output speech.

This is very much like speech recognition, but in reverse.

We can take a sentence of text, and break it down into its phonetic components, and then play those sounds back to back, out of a computer speaker.

You can hear this chaining of phonemes very clearly with older speech synthesis technologies, like this 1937 hand-operated machine from Bell Labs.

Say, "she saw me" with no expression.

She saw me.

Now say it in answer to these questions.

Who saw you?

She saw me.

Who did she see?

She saw me.

Did she see you or hear you?

She saw me.

By the 1980s, this had improved a lot, but that discontinuous and awkward blending of phonemes still created that signature, robotic sound.

Thriller was released in 1983 and sung by Michael Jackson.

Today, synthesized computer voices, like Siri, Cortana and Alexa, have gotten much better, but they're still not quite human.

But we're so, so close, and it's likely to be a solved problem pretty soon.

Especially because we're now seeing an explosion of voice user interfaces on our phones, in our cars and homes, and maybe soon, plugged right into our ears.

This ubiquity is creating a positive feedback loop, where people are using voice interaction more often, which in turn, is giving companies like Google, Amazon and Microsoft more data to train their systems on...

Which is enabling better accuracy, which is leading to people using voice more, which is enabling even better accuracy... And the loop continues!

Many predict that speech technologies will become as common a form of interaction as screens, keyboards, trackpads and other physical input-output devices that we use today.

That's particularly good news for robots, who don't want to have to walk around with keyboards in order to communicate with humans.

But, we'll talk more about them next week.

See you then.





