




Answering Your Most Frequently Asked Questions (FAQs) About Artificial Intelligence In Honor Of National AI Appreciation Day

Top 10 frequently asked questions (FAQs) about AI, answered proudly in honor of National AI Appreciation Day.


AI Appreciation Day takes place each year on July 16. In appreciation of both the scope and impact of AI upon humankind, let's go ahead and address the most frequently asked questions (FAQs) about Artificial Intelligence (AI). Be in the know on AI Appreciation Day. You deserve an opportunity to show your smartness and human-based intelligence, for sure.

A few initial thoughts before getting underway.

First, you are earnestly welcome to celebrate with some confetti and champagne while reading these questions and answers. Even if you perchance come across this discussion long before or long after AI Appreciation Day, please know that the content still holds up. Word to the wise: just make sure that on July 16th, whichever year it is (now or next year, or the many years thereafter), you take a reflective moment to acknowledge AI, since that's the day on which you seemingly must appreciate AI (I'll tell you why, momentarily).

Secondly, and this is further discussed in the FAQs, some would insist that you must demonstrate your appreciation for AI or else you will suffer the dreaded fate of the so-called Roko's basilisk. The idea, or shall we say theory, is that someday AI will take over and opt to look back at what humans said about AI. You don't want to be on the naughty list, accused by the AI of having been a laggard or construed as resistant to the advent of AI.

That's a handy word to the wise for your basic survival should AI opt to enslave us all.

Duly noted.

Third, in these FAQs, I thought it would be especially useful to cover the mainstay straight-ahead questions about AI and toss in a few of the considered outlier questions about AI too. You see, there are a wide variety of questions that can be asked about AI. I do believe I've heard them all when giving speeches about AI. Dozens upon dozens, perhaps hundreds all told, of earnestly expressed questions. I'm happy to address any sincere, mindful queries or pure curiosity and provide expert insights. AI is having its moment in the sun. Careers and our everyday lives are said to be made or potentially undermined due to AI. AI is big stuff, and sensibly addressing pointed questions is indubitably helpful.

I seek to cover with you the top ten questions that in my experience would reasonably be rated as belonging in one of those semi-official Top 10 listings. I mention this because there are lots and lots of online purported Top 10 questions lists about AI. The reality is that such lists tend to vary as to what truly might be considered the topmost Top 10 questions. Simply put, opinions vary, tastes change, and the lists are ever-evolving in light of new AI advances.

Finally, I include with each of the Q&A discussions a link or two so you can read more about the given AI topic. I figured my answers here might whet your appetite and you will want to learn more about AI. The links will take you to my deeper coverage of the given topic at hand. If you'd like to keep up with the latest in ongoing AI trends and insights, catch my column at the link here.

On with the show.

Question #1: What is Artificial Intelligence (AI)?

Thanks for the great question.

This is usually the best starting question since it lays the groundwork for everything else.

Let's begin at the beginning, namely clarify and delineate what is meant by the term or phrase of Artificial Intelligence or AI.

We need to define our terms. The wise words often attributed to Socrates come immediately to mind: "The beginning of wisdom is the definition of terms." This is a quite noteworthy quote because there are many definitions of AI. The multitudes of AI definitions are at times conflicting, or at least not squarely aligned, and ergo stir confusion about the very topic itself.

Here's my simplified definition and I'll explain why I favor it:

  • "AI is s system that exhibits intelligent behavior."
  • The definition has these crucial elements:

  • (a) AI is a system (typically consisting of computer hardware and software)
  • (b) This system exhibits something of interest.
  • (c) The something of interest consists of exhibiting intelligence.
  • (d) Intelligence is showcased via intelligent behavior.
  • (e) Therefore, all told, AI is a system that exhibits intelligent behavior.
I will briefly explain these weighty matters.

    AI is a system that typically consists of both computer hardware and computer software. I suppose that might seem obvious. Well, just to mention, if we could create an AI system by using anything else, we would still be willing to refer to the system as AI. The gist is that you don't have to use computer hardware or computer software. Perhaps you are clever enough to use LEGOs and formulate an AI system. No computers included. Bravo, we are keenly interested in this, as long as the rest of the definition holds true too.

    Good enough, let's see.

    The AI system exhibits intelligent behavior. Note that the word "exhibits" is used. In the use case of humans and human intelligence, we traditionally say that humans embody intelligence. Not so with AI. We will instead say that AI exhibits intelligence.

    Why so?

    Because we don't want to mishmash together two different aspects. An AI system does not necessarily have to be a copy of how the human brain and mind work. How the AI is devised and implemented might not be akin to that of a human. All that we really care about is that AI appears to exhibit intelligence. If this can be done via sticks and stones or other means, perhaps totally unlike the inner workings of the human brain, so be it. We'll take it.

The key is to avoid saying that an AI system "embodies" intelligence. We are going to reserve the embodiment phraseology for sentient beings such as humans. Right now, we ought to be careful to avoid anthropomorphizing AI, namely applying human cognitive phrasings and meanings to what today's AI consists of. Doing so overstates and falsely implies capabilities and takes people down a primrose path about what AI entails. Headlines sadly make that false leap daily, so please be cautious about falling for such loosey-goosey and misleading language, thanks.

    Whoa, some might exhort, are you saying that we can ignore how the human brain and mind work, pursuing whatever else might lead to comparable intelligent behavior?

    Nope, not saying that.

We would be profoundly remiss in not trying to discern how the human brain and mind work since it might lend clues toward devising AI systems (plus, as a double benefit, knowing more about the inner byzantine biochemical workings of the brain carries profuse medical and health payoffs). Some believe that in the end, we will only achieve AI on par with humans by essentially replicating how the brain functions. Others assert that we can learn a lot about how intelligence works by studying the brain and mind, but our methods and means of attaining AI do not need to strictly abide by those structures in order to exhibit intelligence.

    There is a tad more to the AI definition, so let's keep going.

    One of the biggest nuts to crack involves what indeed is intelligent behavior.

    That's a doozy.

    Suppose I proclaim that my toaster exhibits intelligent behavior because it pops up my toasted bread before it gets overly toasted. Is that intelligence at work? I dare say that we would be making a mess for ourselves if we agreed that the tiniest of mechanical acts demonstrate intelligence and intelligent behavior. Everything would be labeled as AI.

Lamentably, to some extent, that seems to be happening currently. The AI buzzword has become a catchall and a popular way to attract attention. Want to sell your product or service? Just mention that it is based upon or uses AI. Done deal. Off to the local financial institution you go to collect your riches.

    It's indeed a mess.

    You will see in a moment that some believe we need to be more thoughtful about this notion of exhibiting intelligent behavior. Levels of intelligent behavior are then used to describe where something is on a progress ladder toward some pinnacle associated with AI. Also, there are various testing methods proposed to gauge intelligent behavior. Etc.

As extra credit, let's look at a commonly used definition of AI that has been utilized by the U.S. Government in various pieces of legislation.

  • "A machine-based system that can for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to: (a) Perceive real and virtual environments, (b) Abstract such perceptions into models through analysis in an automated manner, (c) Use model inference to formulate options for information or action" (source: United States National Defense Authorization Act, Fiscal Year 2024).
I've examined closely these legal-oriented definitions of AI; see my coverage at the link here.

Such official definitions of AI are important because they seemingly are a strident legal, regulatory, and law-defining means of declaring what is AI versus what is not AI. In my view, these definitions are ripe for smart lawyers to come along and undercut, which they might do on behalf of clients who claim to have been falsely accused of doing something with "AI" under the auspices of such definitions. A legal conundrum is hidden from view right now and you can expect a legal field day to come along, mark my words.
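
A rough way to visualize that statutory structure is as a three-step pipeline: perceive an environment, abstract the perceptions into a model, and use inference on the model to formulate options. Here is a minimal Python sketch of that reading; the class name, method names, and threshold are purely my own hypothetical illustration, not anything the legislation prescribes.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Percept:
    reading: float  # a sensed value from a real or virtual environment


class StatutoryStyleAI:
    # Hypothetical illustration of the statute's three steps; nothing
    # here is prescribed by the actual legislation.

    def perceive(self, environment: List[float]) -> List[Percept]:
        # Step (a): take in the real or virtual environment.
        return [Percept(x) for x in environment]

    def abstract(self, percepts: List[Percept]) -> float:
        # Step (b): abstract perceptions into a (toy) model -- here, a mean.
        return sum(p.reading for p in percepts) / len(percepts)

    def infer_options(self, model: float) -> List[str]:
        # Step (c): use model inference to formulate options for action.
        return ["recommend acting"] if model > 0.5 else ["recommend waiting"]


ai = StatutoryStyleAI()
percepts = ai.perceive([0.2, 0.9, 0.7])
print(ai.infer_options(ai.abstract(percepts)))  # ['recommend acting']
```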

    Exciting times.

    For more on what constitutes AI and the definitions of AI, see my detailed coverage at the link here.

    Question #2: What is Artificial General Intelligence (AGI) and is there anything else along those lines that I need to know about?

    Thanks for the awesome question.

    A somewhat recently added piece of terminology in the evolving field of AI is Artificial General Intelligence (AGI).

    I will next explain what AGI entails. Now then, I realize that you might be already getting weary of mulling over definitions, but I just have to get this one in before we shift to other AI-related questions. Your patience will be rewarded.

AGI is not yet a widely known phrase. You, though, do need to know what AGI means. Anyone walking around and talking about AI ought to also be familiar with AGI. If someone touts that they know all about AI, but they draw a blank stare at having ever heard of the phrase AGI, you can tell them they are full of hot air. Well, maybe that's harsh, so just instead inform them about AGI. Be kind to others.

    The deal is this. As noted, the AI moniker is bandied around in numerous ways. Intelligent behavior that might be exhibited by AI could range all over the map. For example, we might have AI that can analyze MRIs or X-ray images, doing an amazing job at this task. Performing those types of specialized tasks could reasonably be categorized as being very narrow in scope.

    In what manner could we potentially denote the sought-after pinnacle of AI such that AI fully exhibits generalized intelligent behavior across-the-board on par with humans?

    We would need to come up with a new phrase.

Voila, Artificial General Intelligence, or AGI, seems to fit the bill.

For AI insiders, the word "general" inserted into the conventional phrasing of AI is intended to speak volumes. It stipulates that we need to go above and beyond the narrow AI of the past and the present. We need to be able to devise AI that can do all manner of tasks and be as much a generalist as a specialist. AGI is what the aspirational side of AI is all about.

    AI researchers who are pursuing an aspirational goal can therefore refer to their work as AI or refer to it as AGI, especially if they are undertaking the general path.

    We can leverage the definition I gave of AI and use similar wording to derive what AGI is:

  • "Artificial General Intelligence (AGI) is an AI system that fully exhibits intelligent behavior in both a narrow and general manner that is on par with that of humans."
You might keenly note that this definition mentions being on par with humans. I snuck that in there for a good reason. AGI is customarily construed as exhibiting intelligent behavior in keeping with the intelligent behavior that humans display. No less, no more.

    Suppose that we devise AI that exceeds human capabilities. In some sense, you could argue that some of the narrow AI systems of today have done just that. A topmost AI system for specifically playing chess can pretty much beat nearly all human chess players.

    Is it fair to say that the AI chess-playing system is somehow superhuman or goes beyond human capabilities?

    Perhaps within the narrow domain of chess playing, yes, but certainly not in any generalized fashion, no. Existing supersized chess-playing AI cannot do other things such as playing poker or composing songs. They are extremely narrow.

    Here's what we will do. AGI will be considered our terminology of aiming to devise AI that is on par with humans in both narrow and generalized ways of intelligent behavior. To describe the situation of going beyond human capabilities, this time we will stand stoutly on generalized as a requirement, and coin yet another term.

    That additional phrase is Artificial Superintelligence (ASI), defined this way:

  • "Artificial Superintelligence (ASI) is an AGI system that fully exhibits intelligent behavior in both a narrow and must-be general manner that exceeds that of humans."
I want to emphasize the must-be part concerning working in a general manner.

The logic is this. If we were to allow that any AI that performed sufficiently on a narrow task could be considered ASI simply because it exceeded humans at that particular task, we would be running around like chickens with our heads lopped off and exclaiming that ASI is here and now (sadly, wrongly, some do; you likely hear sales pitches about "superhuman" AI all the time).

I am asserting that ASI is only to be used as a moniker when the AI, at a minimum, performs in a generalized manner of human intelligent behavior and, furthermore, goes beyond that into higher realms of intelligent behavior. You could say that when AGI reaches its peak or tops out, the next step up the ladder will be ASI.

    Interesting questions arise.

    Contemplate these heavyweights:

  • Can ASI be said to be achieved if having gone just a smidgeon above general human intelligent behavior or AGI, or should we have a well-above-this-line threshold beyond which the AI has to go to earn the esteemed ASI category?
  • Can we come up with indications of what ASI would or should be able to do, allowing us to recognize ASI when or if it occurs, or are we inherently limited to our own range of human intelligence and perhaps inadequate or precluded from identifying what can truly be done with ASI?
  • Can ASI if attained be socially compatible with humankind, or might ASI consider us to be like ants in terms of levels of intelligence, perhaps causing ASI to conquer us or opt to wipe us out?
You've undoubtedly seen lots of sci-fi movies or read outstretched tech-futures stories that postulate what might happen if we are able to reach ASI. I will say more about this in one of the other FAQs.

    In recap, here are the three key definitions presented so far:

  • (i) AI definition: "AI is a system that exhibits intelligent behavior."
  • (ii) AGI definition: "Artificial General Intelligence (AGI) is an AI system that fully exhibits intelligent behavior in both a narrow and general manner that is on par with that of humans."
  • (iii) ASI definition: "Artificial Superintelligence (ASI) is an AGI system that fully exhibits intelligent behavior in both a narrow and must-be general manner that exceeds that of humans."
Those are short and sweet. We could notably deepen and unpack them further, but for the sake of this FAQ, they are sufficient.

One added point that I want to mention, hoping to preempt the usual trolls and armchair critics: not everyone necessarily concurs with the definitions of ASI or AGI that I've coined here. Just as there are many definitions of AI, there are many definitions of AGI and many definitions of ASI.

    Take your pick. I'm sharing what I believe provides sensibility and clarity.

    For more details and unpacking on the topic of AGI and ASI, see my discussion at the link here.

    Question #3: Is generative AI the same as AI or is it different?

    Thanks for the superb question.

    The latest and hottest form of AI is known as generative AI.

    Generative AI is a type of AI.

    Think about that for a moment. My clearheaded reason for requesting contemplative consideration is that some people mistakenly think that the only type of AI is generative AI. Nope, that's not right. There are various types or kinds of AI, of which generative AI is one specific type. If you are interested in knowing about other types of AI, including nitty-gritty details about generative AI, see my discussion at the link here.

    Generative AI has garnered fans galore.

    You would have to be living in a cave that has no Internet service to not have heard about generative AI. Popular awareness about generative AI first occurred when the generative AI app called ChatGPT by AI maker OpenAI was made publicly available in November 2022. The world will never be the same again.

Society went gaga over ChatGPT and correspondingly vaulted generative AI into the public stratosphere. It turns out that AI researchers and AI insiders have been researching and toying around with generative AI, along with large language models (LLMs), for some years. My historical accounting of what led to ChatGPT can be found at the link here. Give credit where credit is due, ChatGPT managed to put generative AI onto the global stage. Now, generative AI continues to seemingly hog all the air in the room, as it were.

    In a sense, I've answered the question then about whether generative AI is the same as AI, namely that generative AI is a type of AI, but doesn't constitute all types of AI that there are.

    Rather than stopping there, I think it would behoove all if I shared a bit more about the nature of generative AI. The more you know about generative AI, the better off you will be. Generative AI is here to stay. You are going to ultimately encounter or experience generative AI in just about all manner of apps. It will be ubiquitous. Get used to it.

    Let's talk about generative AI and its close cousin, large language models (LLMs).

Perhaps you've used a generative AI app, such as the popular ChatGPT, GPT-4o, Gemini, Bard, or Claude. The crux is that generative AI can take the input of your text-entered prompts and produce or generate a response that seems quite fluent. This is a vast overturning of old-time natural language processing (NLP), which used to be stilted and awkward to use, and which has been shifted into a new caliber of NLP fluency that is at times startling or amazing.

    The customary means of achieving modern generative AI involves using a large language model or LLM as the key underpinning.

In brief, a computer-based model of human language is established, consisting of a large-scale data structure that does massive-scale pattern-matching across a large volume of data used for initial data training. The data is typically found by extensively scanning the Internet for lots and lots of essays, blogs, poems, narratives, and the like. The mathematical and computational pattern-matching homes in on how humans write, and then henceforth generates responses to posed questions by leveraging those identified patterns. It is said to be mimicking the writing of humans.
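
To make that pattern-matching notion concrete, here is a deliberately tiny sketch: a bigram model that counts which word tends to follow which in some training text, then generates new text by sampling from those observed patterns. Real LLMs use neural networks over vastly larger data, but the core idea of predicting the next token from learned patterns is the same; the toy corpus below is invented for illustration.

```python
import random
from collections import defaultdict

# Toy training text standing in for the Internet-scale data an LLM sees.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = transitions.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat on the rug . the dog"
```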

    I think that is sufficient for the moment as a quickie backgrounder. Take a look at my extensive coverage of the technical underpinnings of generative AI and LLMs at the link here and the link here, just to name a few.

    Question #4: Are AI hallucinations the same as human hallucinations?

    Thanks for the important question.

"AI hallucinations" is a relatively new phrase that has appeared in conjunction with the immense popularity of generative AI. The idea is that whenever generative AI happens to generate or produce an output that has no basis in factual grounding, people say that the AI has hallucinated.

    Just to let you know, I don't like the phrase.

I strongly disfavor that the media and even the AI field have adopted this seemingly catchy phrase of "AI hallucinations" since the immediate implication is that AI is sentient and hallucinates as humans do. This is an abysmal anthropomorphizing of AI. Regrettably, the phrase has become popular, and we are stuck with it for now.

    We don't know exactly all the inner biochemical mechanisms and why and how humans hallucinate per se (there are lots of fascinating research studies on this, occurring on an ongoing basis), but we can at least say that what AI is doing to supposedly "hallucinate" is a far cry from human facets. The only thing that they would seem to have in common is the notion that what gets produced can be seemingly garbled and non-sensical.

    Some in the AI field have valiantly made sincere attempts at using alternative wording such as AI confabulations, AI errors, AI fiction, and so on. None of those alternatives have the same haughty style or stickiness. Only a magic wand could overcome the AI hallucination moniker's enormous gravitational pull.

    All in all, the crucial takeaway is to not associate today's generative AI with sentience.

The so-called AI hallucinations are generally explainable. At times, generative AI will computationally generate text that we would likely agree is not factual and appears to be made up. Envision this as mathematical algorithms riffing off various statistics and probabilities, associating words with other words in ways that don't make sense to our human sensibilities.
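
Here is a toy sketch of how a purely statistics-driven generator can produce a confident falsehood: if the training text happens to mention one word alongside a topic more often than the factually correct word, a frequency-driven completer will pick the wrong one. The co-occurrence counts below are invented solely for illustration.

```python
from collections import Counter

# Invented counts: suppose the training text mentions "Sydney" near
# "Australia" far more often than "Canberra" (the actual capital).
cooccurrence_with_australia = Counter({"Sydney": 900, "Canberra": 150})

def complete(prompt: str) -> str:
    if prompt.startswith("The capital of Australia is"):
        # Pick the statistically most associated word, not the true fact.
        return cooccurrence_with_australia.most_common(1)[0][0]
    return "..."

print(complete("The capital of Australia is"))  # -> "Sydney" (fluent, yet wrong)
```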

Efforts are underway to prevent or at least detect made-up or fictitious responses by generative AI. I'm not saying this is solved; I'm only emphasizing that it is a known issue, it is being actively worked on, and incremental progress is being made. Your bottom line is to always be wary of any output or any conversation that you have with generative AI. All generative AI apps are prone to possibly making stuff up and it is on your shoulders to be on the watch.

For more of my coverage about AI hallucinations, you might find of interest the case of the two lawyers who got into hot water with a judge due to relying upon generative AI's made-up legal cases; see the link here. To learn about the latest AI research on coping with AI hallucinations, see my analysis at the link here.

    Question #5: Is AI good or is AI bad?

    Thanks for the vital question.

    This question is one of those tricky ones.

    When someone asks whether AI is good versus bad, they are craftily setting up a false dichotomy. It is easy to directly fall into the mental trap of assuming that we must choose only one of the two presented possibilities. Either AI is entirely and exclusively good, or we must decide whether AI is entirely and exclusively bad. That's it.

    Simply choose one and be done with the arduous decision-making.

    The real-world answer is that AI can be used for both good and bad.

The good could be AI serving as an aid to try and find a cure for cancer. Happy face. But we cannot stop our thinking at that juncture. I can fairly safely state that all uses of AI invariably have a silver lining and at the same time an unsettling underbelly. Sad face. For example, I've noted in my writings the instance of using AI to assess which chemical combinations might be toxins, thus aiding in protecting humans accordingly; with just a quick change to the underlying AI system, that same AI can readily be turned into a means of finding new deadly toxins that evildoers could terrifyingly employ.

    This is generally known as the dual-use problem of AI, or some refer to this as the dual-use conundrum of AI. Yes, AI usage can be put to evil doing. And, yes, AI can be put to uplifting purposes. In short, AI can be used for good and used for bad. Both of those possibilities are true.

The right way to think about this is to consider what we can possibly do to boost or bolster the good uses of AI while mitigating or curtailing the bad uses of AI. Maximize good, minimize bad. That's why many, like me, are strong proponents of embracing suitable AI Ethics, or what is at times referred to as Responsible AI. People who make AI and people who field AI ought to be responsible for what they devise and field. Oftentimes, they shrug their shoulders and act like "the computer did it" as an excuse. Don't let them get away with this charade.

    Some twists are worth considering.

    I've just indicated that AI can be put to good uses and be put to bad uses. Sometimes, someone with good intentions might inadvertently use the AI for bad purposes, unknowingly doing so. That's a danger. They intended to use AI for good. Oops, they landed in the AI for bad camp. We need sufficient safety to try and prevent this from happening, or at least detect the failure and blunt the adverse outcomes. AI safety is a growing and significant realm as AI becomes used in all facets of our lives, see my discussion at the link here. The topic isn't getting nearly enough attention.

    Another twist that will get your mental juices flowing is whether AI itself can form intent.

    Here's what I mean. An AI system is tasked with opening doors to a retail store, along with closing and locking the doors at nighttime. The AI does this without any issues. One day, the AI seemingly of its own volition opts to open all the doors to the store very late at night, after closing hours. Why? Upon asking the AI, the response is that when the doors are open, the store makes money. Making money was a core principle coded into the AI. The AI computationally ascertained that opening the doors at nighttime would bring in money, otherwise when the doors are locked and closed no money comes in.

    Flawless logic, or is that flawed logic?
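
To see how such "flawless" logic can arise computationally, consider this minimal sketch of an agent that greedily maximizes a single coded objective (revenue) with no notion of business hours. The numbers and function names are my own hypothetical rendering of the story, not an actual deployed system.

```python
HOURS = range(24)
OPEN_HOURS = range(9, 21)  # intended business hours -- note that the
                           # objective below never consults this at all

def expected_revenue(hour: int, doors_open: bool) -> float:
    # The simplistic principle coded into the AI: open doors earn money.
    return 1.0 if doors_open else 0.0

# For every hour, the agent picks whichever action maximizes revenue.
policy = {hour: max([True, False], key=lambda a: expected_revenue(hour, a))
          for hour in HOURS}

print(policy[3])  # True -- the agent opens the doors at 3 a.m.
# Nothing in the objective encodes "only during business hours," so
# maximizing the stated goal yields the unintended late-night opening.
```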

    One of the many issues highlighted by that story is that we might ask whether AI formed a semblance of intent.

    The intention seemed to be that since making money is important, go ahead and open the closed and locked doors after hours. Would this be on par with trying to assess human intent? Suppose a person had been responsible for the opening and closing of the doors, and they decided to proceed as the AI did. We might ask the person what their intention was and use that intention to decide whether they seemed to be innocent or not, or otherwise use the stated intention in some justice-determining manner.

    An argument can be made that this was not on par with human intention. The AI merely put two plus two together. It was all mathematical and carried out computationally. To say that this was the AI's intention is a mockery of human intention. They are not at all the same.

    A counterargument is that the AI wasn't specifically programmed or guided to make this logical leap. The AI came up with it. No human prodded the AI. Thus, we should declare that the AI had an intention and acted upon that intention. Treat this as the same as a human intention.

    We can bring this back to the question of whether AI is good or bad.

    I had focused on the use of AI by humankind. AI usage can be aimed at doing good and/or aimed at doing bad. The notion or belief that AI can form intentionality puts us into a new ballgame. Can we hold responsible the AI maker or implementer if the AI was able to formulate a self-intention and act on that intention? Some say we would need to let those humans off the hook. Others would say that you are asking for a heap of trouble because this will allow AI to run amok, and humans will be free and clear of any responsibility. A rather rough proposition for us all.

    For the moment, AI has not been granted human-oriented legal personhood, see my discussion at the link here. AI is not currently viewed as being able to formulate intent. This is a heated legal topic. In terms of the dual-use conundrum of AI, see the link here for my detailed coverage.

On the upside of AI, I've examined how AI can be used to support the United Nations SDGs (Sustainable Development Goals), at the link here. To learn about the fundamentals of AI ethics and responsible AI, see my coverage at the link here and the link here, for example.

    Question #6: What is the Turing Test in AI and why is it important?

    Thanks for the fantastic question.

    The Turing Test is an approach to assessing whether AI seems to exhibit intelligent behavior.

    Such a test is needed since otherwise we would be waving our arms in the air about whether a system that purportedly was AI did indeed step up to fulfilling the goals of being an AI system. Per the definition of AI, we should be able to somewhat resolutely say whether a system claimed to be an AI system can exhibit intelligent behavior. Ergo, it is handy to have some reasonably useful tests that can validate that kind of claim.

    Let's dig further.

The famous mathematician Alan Turing in 1950 published an important paper that laid out an approach he referred to as the imitation game, which has become popularly known as the Turing Test. His 1950 paper, entitled "Computing Machinery and Intelligence," was published in the journal Mind. I will briefly outline the Turing Test in this discussion so that you will be familiar with the crux of the topic.

    In his paper, Alan Turing noted that he had been asked repeatedly about whether machines or AI would someday be able to think as humans do. One means of answering the question involves ascertaining how humans think, such as by somehow reverse engineering the brain and the human mind. This is quite a difficult problem and one that has yet to be fully figured out. Turing realized that perhaps another approach to the matter was required. He wanted to identify a more viable and immediately practical approach.

    He decided to take an outside-in perspective rather than an inside-out angle, whereby we treat the human mind as a kind of black box. This helps to then set aside those unresolved questions about how the inner workings function. The conception is that we might be able to compare machines and AI to whatever human thinking exhibits and seek to suggest that a machine or AI "thinks" if it can comparably perform thinking tasks from an outward appearance.

    All you need to ascertain is whether the results being produced are on par with each other. Thus, as I've noted earlier, the emphasis is on whether AI exhibits intelligent behavior, that's the key.

    He proposed that an imitation game might be sufficient. Imagine this. Suppose we set up a game consisting of a human behind a curtain and an AI system behind a curtain. The idea is that you cannot see which is which. You then proceed to ask questions to each of the two. Let's refer to one of them as A and the other as B. After asking as many questions as you like, you are to declare whether A is the human or the AI, including also declaring whether B is the human or the AI.

The human and the computer are considered contestants in a contest that will be used to try and figure out whether AI has been attained. Notice that no arm wrestling is involved, nor are other physical acts being tested. That's because this testing process is entirely about intellectual acumen. Some critics argue that humans and their bodies are part and parcel of the thinking process; thus, the Turing Test is deficient since it does not encompass the physical elements of humanness. A powerful thought.

    In the imitation game, a moderator serves as an interrogator (also referred to as a "judge" because of the designated deciding role in this matter) and proceeds to ask questions of the two participants who are hidden behind the curtains. Based on the answers provided to the questions, the moderator will attempt to indicate which curtain hides the human and which curtain hides the computer.

    This is a crucial judging aspect.

    Simply stated, if the moderator or interrogator is unable to distinguish between the two contestants as to which is the human and which is the computer, presumably the computer has sufficiently "proven" that it is the equivalent of human intelligence.

    Turing originally coined this the imitation game since it involves AI trying to imitate the intelligence of humans. To successfully pass the Turing Test, the computer containing the AI will have had to answer the posed questions with the same semblance of intelligence as a human. The results from the human and the AI are presumably going to be indistinguishable from each other in that instance.

    An unsuccessful passing of the Turing Test would occur if the moderator or interrogator is able to announce which curtain houses or hides the computer. We would likely assume that the AI gave away some telltale clues that it was unlike the human behind the other curtain. As an aside, one supposes that rather than saying which one is the AI the moderator could declare which curtain houses the human participant (ergo, the other curtain hides the AI). Some believe that the proper approach is to firmly announce which is the AI.
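
For the programmatically inclined, here is a bare-bones harness for the imitation-game protocol: two hidden respondents, and a judge who sees only labeled transcripts and must name which label hides the AI. The canned respondents and the naive coin-flip judge are hypothetical stand-ins; a real Turing Test requires a human interrogator asking genuinely probing questions.

```python
import random

def human_respondent(question: str) -> str:
    return f"Hmm, regarding '{question}', I would say it depends."

def ai_respondent(question: str) -> str:
    return f"Hmm, regarding '{question}', I would say it depends."

def run_imitation_game(questions, judge) -> bool:
    # Randomly hide the respondents behind curtains A and B.
    respondents = [human_respondent, ai_respondent]
    random.shuffle(respondents)
    curtains = dict(zip("AB", respondents))
    transcripts = {label: [respond(q) for q in questions]
                   for label, respond in curtains.items()}
    guess = judge(transcripts)  # the judge names the curtain hiding the AI
    # The AI "passes" this round when the judge fails to pick it out.
    return curtains[guess] is not ai_respondent

def coin_flip_judge(transcripts) -> str:
    # With indistinguishable answers, the judge can do no better than guess.
    return random.choice(list(transcripts))

passed = run_imitation_game(["What is courage?", "Why do jokes work?"],
                            coin_flip_judge)
print("AI passed this round:", passed)
```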

    The Turing Test is an abundantly clever way to assess AI without having to inspect or even care about what is going on inside the AI system. By mere comparison to a human that is answering questions, the manner and nature of the AI answering questions are being assessed or judged. It is easy to undertake. It is easy to describe how the test is to be undertaken.

    Unfortunately, there are downsides, but one supposes that the world is always full of downsides.

    One downside is that the moderator or interrogator must be astute enough to ask insightful intelligence-seeking questions. For example, if the moderator asks both parties to add up one plus one, and both parties say the answer is two, you would be hard-pressed to declare that the AI has reached human-level intelligence from that somewhat feeble question.

    What questions do you think should be asked during a Turing Test?

    Go ahead and take a moment to mull that over.

Lots of researchers have come up with sets of questions. Other researchers have disagreed with those questions or proffered other questions. No standard, universally agreed set exists. Furthermore, some say that you should not have a standard set. Why? Because the AI could potentially "cheat" by having been data-trained on those questions and therefore be ready to answer them all readily during a Turing Test.

    Another issue or downside of the Turing Test is the judgment made by the moderator. Suppose the moderator jumps the gun and declares they can't differentiate which is the human and which is the AI, doing so after say a couple of questions. In a sense, this kind of knee-jerk reaction happens when people use modern-day generative AI. They use generative AI for a few questions, and they are amazed at the responses, so they summarily declare that generative AI has passed the Turing Test.

    Wrong.

There is a famous AI program called Eliza that in the 1960s and 1970s was used by people, some of whom proclaimed that AI had been fully attained. They used the Eliza program, which responded to them by cleverly echoing back their entered prompts, and those people falsely believed that the Turing Test had been successfully passed.
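
To give a flavor of how simply Eliza achieved its effect, here is a tiny Eliza-style responder in that spirit: match a keyword pattern, then reflect the user's own words back with the pronouns swapped. The patterns shown are illustrative stand-ins, not Weizenbaum's original script.

```python
import re

# Swap first-person words for second-person when echoing the user back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_respond(prompt: str) -> str:
    match = re.match(r"i feel (.*)", prompt, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", prompt, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(eliza_respond("I am worried about my exams"))
# -> "How long have you been worried about your exams?"
```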

In my opinion, no AI has yet passed the Turing Test. Those who claim that some AI here or there has done so are undercutting the spirit and intention of the Turing Test. Waving the banner that this AI or that AI has passed the Turing Test is a cheap effort to announce that AI has finally been achieved. Please be mindful of those claims and treat the Turing Test as an approach or method that deserves suitable respect and deployment.

    For my extensive analysis of the Turing Test, see the link here. If you are interested in the Eliza program and what it did, see my coverage at the link here.

    Question #7: Is AI going to wipe us out?

    Thanks for the sobering question.

There is a lot of chatter that AI is going to spring forth and opt to wipe out humankind. Insiders refer to this as P(doom), meaning that some pundits are coming up with probabilities ("P") that we are heading toward utter doom (hence, "the probability of doom"). The guesses vary wildly. Some say it is a low chance and thus a small probability nearing zero, while others claim it is a high chance and proffer a high probability.

Another related facet is the predicted or forecasted timeline of this doom. The dates are usually year-based. The AI will arise and kill us all in either a named year or within some number of claimed years. The dates tossed out there range widely.

    A different angle is not that AI will wipe us out, necessarily, but to speculate about when AI will reach Artificial General Intelligence (AGI). A similar gambit of predictions is when AI will reach Artificial Superintelligence (ASI). Some argue that we will skip past AGI and go straight to ASI. Some say that we will never reach ASI, but that we will reach AGI. Some say that we won't ever attain AGI and nor attain ASI.

    Dizzying.

    The bottom line is that nobody knows, and this is all conjecture, much of which has absolutely no evidence or factual basis.

The mainstream media and social media love to play around with these predictions. Little if any care in vetting and assessing the predictions takes place. Often, simply because someone is said to be an AI "inventor" or mother/father of AI, grandfather/grandmother of AI, uncle/aunt of AI, or other dignitary designations, their predictions are put on a pedestal. Again, without any substance to back up the claim.

    Just wild hunches.

    A strident belief that was previously touted and now seems less popular was that we would witness a so-called intelligence explosion. Some refer to this as the singularity. The deal works this way. AI is getting advanced and inching closer to AGI. Suddenly, we seem to have reached a crucial juncture where AI starts to spiral upon itself, rapidly gaining increases in intelligence, until finally, and perhaps within seconds or nanoseconds arriving at AGI.

Once at AGI, the intelligence explosion perhaps continues unabated. The AI or AGI is now on a roll. This speeds along toward reaching ASI. Who knows how high is up? The AI could keep getting more and more intelligent. More intelligence than any human could have ever fathomed, let alone embodied.

    As a heads-up, that version of future events seems to have lost its steam. Figured you might want to know about it. Maybe it will still happen. Who knows?

    The big question besides when AGI or ASI will happen is whether the AI at that point will look favorably upon humans or have disdain for humans. No one knows. If the AI likes us, presumably we will work together hand-in-hand and the world will be a better place. If the AI finds us to be annoying or endangering, some speculate that the AI will either enslave us or choose to wipe us out.

    Which way do you think AI will lean, toward cooperating and helping humans or destroying humans?

    Your guess is as good as everyone else's.

We tend to focus on the downside since that's a rather dismal possibility and we would want to try and prevent or mitigate such an adverse result. The upside possibility of AI being harmonious with humankind is somewhat ho-hum in comparison. It would be nice. Nice doesn't get our Spiderman-tingling sensations going on edge. AI purging humankind requires our attention beforehand, to stave off such a dour result.

    One thing about the date predictions for attaining AGI is that a kind of unspoken game is underway that few realize is taking place. The game goes like this. Someone of some prominence predicts that AGI will be achieved by let's say the year 2050 (just a place marker for this example). If someone else makes an identical prediction, they are unlikely to get airtime. If they make a prediction that arises later than the stated year, let's say they predict the year 2060, few will likely care because the attention goes to the nearest year to our present day. That's the scariest indication and gets us breathing heavily.

    Along comes someone else of prominence and they decide to do a one-upmanship on this date prediction gambit. So, they predict the year 2045 is the AGI-reaching year. Aha, this gets attention over the 2050 guess. What happens next in the prediction realm? Someone is going to guess 2040 to grab the attention away from 2045. On and on this goes.

    The latest spate seems to be that we will attain AGI by the year 2027.

    This is about as close to the present year of 2024 as one dares make a prediction. The reason is that if you predict that AGI is going to happen in 2025, well, by gosh, you can be prominently displayed as being wrong within a year. The year 2026 is a little better, but 2027 is probably best since you have garnered a roughly three-year buffer. That buffer is a twofer, namely that it is far enough away in time to give you breathing room, and by the time we get to 2027, few if any will remember that the 2027 prediction had been made. You are off the hook but meanwhile got your prediction and celebrity at the time that you made the prediction. Perfect.

    Take a deep breath.

    I want to clarify that I am not saying that the people making these predictions are trying to be tricky. Many of them are quite serious about their predictions. Many believe fervently in their predictions. Studies have indicated that technologists often have a semblance of a strong belief in their ability to make predictions about the pace and timing of advances in tech, yet they are frequently wrong, sometimes under-estimating, and much of the time over-estimating.

What gets the goat of some in AI ethics is that all these unsupported, unsubstantiated, seat-of-the-pants predictions often shift attention away from day-to-day concerns about AI. While the headlines focus on those speculated futuristic years, the issues and problems of AI today take a sorrowful backseat. Rather than focusing on today's AI and the AI that is coming out in the near-term, and dealing with biases built into AI systems or AI that can be dangerous to people now or soon, the eyeballs float instead to what might or might not happen ten years or twenty years or more from now.

Yet another qualm is that this talk of AI ultimately taking over and wantonly carrying out destruction is not doing any good for the psyche of the upcoming generations. Give that a thought, please. A young person gets inundated with predictions that AI will rise and do awful things. In an earlier era, this was the stuff of science fiction and could be given short shrift as mainly entertainment rather than real-world concern.

    Put yourself into the shoes of the upcoming generations. If there is seemingly serious talk by serious prominent authorities that AI will rage and romp in their lifetimes, perhaps when they reach middle age or sooner, what reaction might one have to this doom-and-gloom?

    It's quite depressing, certainly, and doesn't provide much inspiration about building a future, such as seeking a career, starting a family, and so on.

It is said that some are also veering away from the AI field because they feel that if the future of AI is so predetermined as to be cataclysmic, they don't want to partake in bringing that dismal future to fruition. Ouch, that's going to hurt us all, since we definitely need the next generations to hopefully take the AI ball and run with it. No one might be there to do so.

I try intrepidly to get them to see the other side of the coin, namely that they can shape the future and we need their help to avert the doom-and-gloom. It is a thought that gets swamped by the claims of unavoidability and inevitability being made. You see, the speculations are usually that no matter what we do, it is goner time once AI reaches its pinnacle, leaving no semblance of hope.

    As you might observe, I do not favor the speculative unfounded junk science when it comes to the future of AI. Now then, some trolls will say that I am somehow arguing for a heads-in-the-sand approach. I am not. Allow me to repeat that statement, I am not.

    Thinking and dutifully trying to calibrate the future of AI is crucial. Period, end of story. Aiding how we get there and what we can do, in case AI does go off the rails, yes, that's very important. Too, we can pay a lot of attention to day-to-day AI and the upcoming AI in the next months and near-term. We don't need to be preoccupied solely by the shiny end dates and drop everything else while staring at those shock-inducing dates with mouths gaping open.

    As I say, let's all take a deep breath.

    For my analysis of various predicted futures of AI, see the link here and the link here. On the topic of the importance of AI ethics and AI law as it pertains to existing AI and forthcoming AI, see the link here. If you are interested in ways to potentially contain AI, see my discussion at the link here.

    Question #8: What's with all those conspiracy theories about AI?

    Thanks for the valuable question.

    I'll be quick on this one.

There are lots of conspiracy theories about AI, more than you can shake a stick at.

I had briefly mentioned one earlier, known as Roko's basilisk. The idea overall is that once AI takes over, the AI will look back to see who was naughty and who was nice regarding the advancement of AI. If you were someone who said that AI should be curtailed or stopped in its tracks, you are going to be placed on the naughty list. Presumably, if you were someone who encouraged and supported the advancement of AI, you might end up on the nice person list (maybe).

The AI will apparently be vindictive and torture or otherwise torment those who have not avidly supported AI. Worse treatment or outright killing might be in the cards for those who previously or at that time opposed AI. All in all, your best bet would be to rave about AI now and for the foreseeable future. Maybe the AI overlords will spare you or give you a cozy spot in their AI reign.

I haven't decided yet what's going to happen to me, since I am in favor of advancing AI but also of doing so with sufficiently mindful approaches. Half in, half out.

    You will need to decide for yourself what you want to do and what might be your fate.

    There are other engaging thought experiments. One that I will be posting about soon is the suggestion that all AI is being devised and implemented via a grand conspiracy of some unnamed elites of the world. They are intentionally funding and controlling how AI is going to advance, aiming to take over the world via AI.

    Of course, there is a chance that their plans will be spoiled by AI, namely that AI might decide to do away with them, even if they did their best to support AI.

    You never know.

    Should you put much stock in these AI conspiracy theories?

Frankly, I don't think they are especially valuable, though I suppose that we can learn from the nature of the conspiracies as to what people think might happen with AI. Perhaps the ideas are useful in other more pedestrian ways. The other angle is that sometimes a conspiracy theory has elements of truth in it, giving the theory added weight.
