Guidance for generative AI in education and research




Drink The Kool-Aid All You Want, But Don't Call AI An Existential Threat

A friendlier HAL from "2001: A Space Odyssey." Gif by Thomas Gaulkin / Cryteria (CC-BY)

In a 2012 essay for Aeon, physicist David Deutsch wrote of artificial intelligence: "I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough."

The self-confident prophesying Deutsch wrote of has since continued, except that it is now accompanied by warnings that a rogue artificial intelligence could end life on Earth as we know it and that the technology should be categorized as an existential threat.

Artificial intelligence has come into the spotlight since 2022's release of OpenAI's ChatGPT, which was followed by several other text and image generation models. The applications have already been incorporated into businesses, healthcare settings, and even military operations. When given prompts, these systems respond by statistically predicting words or other forms of output based on the massive amounts of data they were trained on. In other words, they cannot think or rival human cognition. They are, at least for now, just mathematical models.
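For readers unfamiliar with what "statistically predicting words" means in practice, the toy sketch below builds a bigram word model over a tiny made-up corpus and samples a likely next word. It is a deliberately minimal stand-in for illustration only; real generative models use neural networks trained on vastly larger corpora, and nothing here refers to any actual system.

```python
from collections import Counter, defaultdict
import random

# A tiny made-up corpus; real models train on trillions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # e.g. "cat" (most frequent), or "mat"/"fish"
```

The point of the sketch is only that the output is driven by frequencies in the training data, not by understanding; scaling the same idea up does not, by itself, introduce thought.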

While generative AI can wreak havoc in many ways—such as amplifying existing biases or dispensing misinformation at scale—it's not an existential threat any more than computer code is. It's merely a tool that could, like many other tools, facilitate a bad outcome. It is, therefore, imperative that the public and policymakers understand this distinction to devise appropriate regulations that address the real risks of these applications and not fictional ones.

The thermonuclear bomb would never have been invented had it not been for an innovation in computing called the von Neumann architecture, whereby a computer could store both its data and its program in memory. Putting that innovation to work, a single computation on a computer at Los Alamos National Laboratory, ironically named MANIAC, ran non-stop for 60 days. Which, then, was the weapon of mass destruction, or the existential threat: the thermonuclear bomb, or the innovation in computing that brought it into existence? Obviously, it was the former.

Even in the case of lethal autonomous weapons or AI-controlled drone swarms, the designation belongs to the weapon, not to the tech that helped create, or control, it. But when the discussion centers on AI as an existential risk, the proponents are not speaking of an AI-controlled weapons platform. They're speaking of a superintelligence—an entity whose cognitive abilities surpass human intelligence—that could wipe out humanity simply by doing what it was asked to do.

According to AI philosopher Nick Bostrom and computer scientist Geoffrey Hinton, the capabilities of a superintelligence would be exponentially greater than a human being's in every way. If life were a game like the one in the television show Survivor, an artificial general intelligence (a system that could rival human intelligence) would outthink, outwit, outlast, or outright kill humans in short order because of two theoretical dilemmas: misalignment and instrumental convergence. Misalignment is what happens when an AI completes a task or operation in a way that is detrimental to humans or the environment. Instrumental convergence theorizes that an AI, when given a task, would devote all available resources to its completion.

One famous example of misaligned AI and instrumental convergence is the paperclip maximizer thought experiment that Bostrom came up with in 2003. "Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans."


Another of Bostrom's ideas, the Orthogonality Thesis (a term he borrowed from mathematics and applied to artificial intelligence), holds that an AI's level of intelligence is independent of its final goals; paired with instrumental convergence, it implies that distinct AIs with different goals would nonetheless converge on similar instrumental values such as self-preservation, cognitive enhancement (software), technical enhancement (hardware), and resource acquisition (time, space, matter, and free energy).

These thought experiments and theories recall HAL, the antagonist and superintelligence featured in 2001: A Space Odyssey, a collaboration between iconic film director Stanley Kubrick and writer Arthur C. Clarke.

Kubrick, who used a familiar science fiction trope—that humans can no longer control the technology they created—provided some background to HAL in a 1969 interview, the year after the movie was released: "One of the things we were trying to convey in this part of the film is the reality of a world populated—as ours soon will be—by machine entities who have as much, or more, intelligence as human beings, and who have the same emotional potentialities in their personalities as human beings."

In the movie, astronaut Dave Bowman, the protagonist, asks HAL to open the pod bay doors. To that, the supercomputer eerily responds, "I'm sorry, Dave. I'm afraid I can't do that…this mission is too important for me to allow you to jeopardize it."

That science fiction template of a runaway machine found its way into science as well. In 1964, RAND, a non-profit policy think tank, hired Hubert Dreyfus, a professor of philosophy at MIT, to review the work of AI researchers Allen Newell and Herbert A. Simon. Newell was one of AI's earliest scientists, and in 1957 he predicted that within 10 years a digital computer would accomplish three remarkable achievements: become a world champion in chess, discover a new mathematical theorem, and write music that critics would acclaim. That level of optimism energized his work at RAND and, since his and Simon's research presumably had defense implications, the think tank was looking for an objective review by an outside expert. In 1965, Dreyfus shared his findings in Alchemy and Artificial Intelligence, refuting the thesis that human and computer intelligence operate in similar ways. Both Newell and Simon called the paper nonsense and fought against its release. They eventually lost that battle, and Dreyfus's analysis became one of RAND's most popular papers.

Artificial intelligence research has come a long way since then. Today, the world is transfixed by what AI can do. Google DeepMind AlphaGo's momentous victory over world Go champion Lee Sedol in 2016 was instrumental in Elon Musk and Sam Altman's decision to launch OpenAI with the intention of achieving the benefits of artificial general intelligence safely while mitigating the existential risks. ChatGPT became the fastest-growing app in history, reaching 100 million active users just two months after its release. Soon, AI startups popped up everywhere as venture capitalists poured billions of dollars into new players, including Anthropic, started by former OpenAI employees, which raised $7.3 billion in its first year. Everyone was, and still seems to be, drinking the Kool-Aid, and the AI innovation race is on.

Despite all that investment and progress, there is no evidence AI is closer to achieving superintelligence than it was in 1965 when Dreyfus wrote: "An alchemist would surely have considered it rather pessimistic and petty to insist that, since the creation of quicksilver, he had produced many beautifully colored solutions but not a speck of gold; he would probably have considered such a critique extremely unfair. But if the alchemist had stopped poring over his retorts and pentagrams and had spent his time looking for the true structure of the problem, things would have been set moving in a more promising direction."


Predictions of future catastrophe rely upon the ability of AI to apply knowledge it learned in one area to problem solving in a completely different area that it hasn't been trained on. This broad generalization capability is the key ingredient to achieving human-level intelligence. That not only hasn't happened, but there's no clear path to achieving it.

The AI must also be strongly goal-oriented, to the point where it will take independent action to prevent any interference in the achievement of its objective. There's no evidence that such a thing is even feasible.

Additionally, any suggestion of a rogue AI showing ill-intent requires that it be conscious and self-aware. That is the stuff of alchemy and other esoteric magical traditions, not science.

In Why general artificial intelligence will not be realized, Norwegian physicist and philosopher Ragnar Fjelland points out that Dreyfus's critiques still hold today: human intelligence is partly transmitted in ways that are not explicitly taught but are communicated tacitly, ways that cannot be duplicated with algorithms or code.

Turing Award winner Yann LeCun, the Chief AI Scientist at Meta and the Silver Professor at New York University's Courant Institute of Mathematical Sciences, explains the problem this way: "A 4-year-old child has seen 50x more information than the biggest LLMs [large language models] that we have." If you do the math, that's 20 megabytes per second through the optic nerve for 16,000 waking hours, and that doesn't include the data received through the other senses.
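As a rough sanity check, the back-of-the-envelope calculation below works through the article's numbers; the assumed LLM training-corpus size (about 10^13 tokens at roughly 2 bytes per token) is an illustrative assumption, not a figure quoted from LeCun.

```python
# Back-of-the-envelope check of the "50x" figure using the article's numbers.
optic_nerve_bytes_per_second = 20e6      # 20 megabytes per second through the optic nerve
waking_hours_by_age_four = 16_000        # roughly 16,000 waking hours by age four
child_visual_bytes = optic_nerve_bytes_per_second * waking_hours_by_age_four * 3600
print(f"Visual input by age four: {child_visual_bytes:.2e} bytes")  # ~1.2e15, about a petabyte

# Assumed LLM training corpus: ~1e13 tokens at ~2 bytes per token (illustrative only).
llm_training_bytes = 1e13 * 2
print(f"Ratio: {child_visual_bytes / llm_training_bytes:.0f}x")     # ~58x, consistent with "50x"
```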

Considering the lack of any substantive evidence supporting the existential risk theory, it's puzzling why many in the fields of computer science and philosophy appear to believe AI is an existential threat. It turns out, there aren't that many who have bought into the theory. A recent poll of more than 2,000 working artificial intelligence engineers and researchers by AI Impacts put the risk of human extinction by AI at only five percent.

Also, despite the fact that thousands of experts signed petitions on AI risk, it doesn't necessarily mean they agree with those statements, as was discovered by two MIT students who took a closer look at the Future of Life Institute's 2023 open letter to pause AI. The undergraduate students, Isabella Struckman and Sofie Kupiec, discovered that "very few claimed to sign because they agreed with all or even most of the letter."

These concerns might have gained traction because AI safety is an extremely well-funded segment of the tech industry, and developers have an interest in securing a voice on AI policy in the United States and European Union, especially when it comes to future regulation of the industry. Fear mongering tends to be more successful and work faster than objective reasoning in convincing an underinformed group of people, which legislators can be, to adopt policy recommendations. In this case, the AI safety lobby hopes governments regulate the use, rather than the development, of artificial intelligence. In other words, the lobby wants to put the onus on the user, not on the developer. If that sounds familiar, it's because the tech industry has been beating the same drum for the past 40 years: "If you regulate us, you hurt innovation. We'll just have customers accept an end-user licensing agreement and all will be well."

The specter of a rogue superintelligence is just a distraction by the tech industry to stave off the regulations that they know are coming their way.


Books In The Shadow Of Artificial Intelligence


We are currently facing an almost unstoppable onslaught of artificial intelligence, and its presence is already being felt by many parties. People have come to rely on it and to take advantage of the opportunities it creates, including in the world of writing.

Artificial intelligence (AI) can be instructed to produce text the way a writer would. At the same time, we are forced to reinterpret the true meaning of creativity and the uniqueness of human thought. Friedrich Nietzsche, in Beyond Good and Evil, left a message that applies here: "It is not the strength, but the duration of great sentiments that makes men great."

That message is relevant when evaluating AI's capabilities in writing: machines may be able to create complex and informative texts, but they often fail to sustain the emotional nuance and depth of human thought.

This reminds us that, behind the efficiency offered by advanced technology, authenticity and emotional depth remain irreplaceable qualities found only in human works. But is that true?


In his book AI Superpowers, Kai-Fu Lee discusses how the presence of artificial intelligence in the world of writing has raised a series of profound questions about the future of human creativity.

Although the use of AI in writing offers efficiency and innovation, many criticisms have emerged, especially from academics and writers. One of the major concerns is the loss of the human nuance and analytical depth typically found in writing produced by people.

Artificial intelligence (AI), despite its extraordinary sophistication, often produces texts that lack empathy and a deep understanding of social and cultural context, which is crucial in many aspects of writing, especially in literary and academic works. Additionally, there is concern about the potential for AI to diminish the authenticity of work by producing uniform content, which could reduce intellectual diversity and creativity.

A further question: if writing can be done with AI, what becomes of the books we will publish in the future?


As technology continues to advance, books as a medium of knowledge and entertainment may undergo drastic changes in the way they are produced, distributed, and consumed. There are also concerns that the use of AI in writing and publishing books may exacerbate issues such as privacy, surveillance, and copyright.

This anxiety is reinforced by our difficulty in distinguishing AI-generated from human-generated work. AI, with its tendency to produce text using algorithms that prioritize efficiency, has the potential to produce homogeneous works, losing the uniqueness and depth that can only be achieved through the human touch.

This not only threatens intellectual and creative diversity in literature but also endangers the uniquely deep experience that books provide. Works produced by AI may meet certain standards of quantity and accessibility, but they often fall short in quality and emotional resonance, diluting the role of books as mirrors of authentic human experience and thought.


Moreover, the relationship between human writers and AI algorithms in the creation of literary works and other forms of writing raises serious questions about the future of language and creative expression. Books are not only containers of information but also a medium for transmitting culture, emotion, and deep philosophies of life.

As AI begins to dominate this creative space, we risk losing the texture and resonance that only human engagement with language, cultivated over the long history of civilization, can provide. The implications of this phenomenon may be more than a change in the way we write; it may be a shift in the way we think and feel, affecting the essence of humanity itself.

In an effort to ensure that books remain a source of intellectual and emotional wealth, it is important for the public and policymakers to formulate strategies that limit the use of AI in creative writing. We must advocate for responsible limitations, ensuring that this technology is used as a complementary tool in the creative process, rather than a replacement for it.


Furthermore, in academia the challenges presented by AI in writing become even more complex and consequential. There, the accuracy and authenticity of writing are not only important for scientific integrity but also essential to advancing knowledge.

Artificial intelligence used to write papers or scientific articles may produce text that appears convincing on the surface, but often lacks a deep understanding of the theory and context necessary for truly meaningful analysis.

This issue arises mainly in disciplines that require critical reasoning and abstract thinking, such as philosophy and the social sciences, where nuances of argument and theoretical connections cannot be fully replicated by algorithms.


The presence of AI in academic writing also poses risks to intellectual integrity, including plagiarism. In this context, it is important for academic institutions to establish clear guidelines on the use of AI in scientific writing, ensuring that any work involving AI is acknowledged transparently and that the technology is used ethically to support, not replace, human researchers.

Therefore, in a world that is increasingly digitized and dominated by data, we need to revisit classic books that offer an escape into a world where human values and interpersonal interaction are still prominent. Reading classic books becomes a kind of spiritual journey that takes us back to an era when narratives were built through deep perception and thoughtful contemplation rather than through structured and calculated algorithms.

Books by earlier writers can remind us of the importance of introspection and internal dialogue, the components that make us human. They teach us to appreciate the beauty of language, the power of rhetoric, and the subtlety of metaphor, aspects that are often overlooked in texts produced by AI.

Wawan Kurniawan, Social Psychology Researcher at the Political Psychology Laboratory, Faculty of Psychology, University of Indonesia

Twitter: wnkurn






