
AI And Synthetic Biology Revolutionize Global Biosecurity

Emerging technologies like artificial intelligence and synthetic biology are redefining global public health readiness, offering unprecedented tools to forecast, prevent, and respond to biological threats. However, the same technologies that promise lifesaving breakthroughs may also lower the barriers to dangerous experimentation and bioterrorism. A recent study, "Emerging Technologies Transforming the Future of Global Biosecurity," published in Frontiers in Digital Health by Renan Chaves de Lima and Juarez Antonio Simões Quaresma, explores this double-edged frontier.

The study offers a sweeping yet detailed overview of how AI, mRNA platforms, CRISPR, and open-source bioengineering initiatives are revolutionizing global biosecurity preparedness, while simultaneously amplifying ethical, legal, and bioterrorism risks. It identifies three key issues driving the current transformation: the use of AI in early detection and containment of biological threats, the role of synthetic biology in rapid vaccine development, and the paradoxical effects of democratizing access to powerful technologies.

How is AI reshaping biological threat detection and response?

Artificial intelligence has become a cornerstone of biosurveillance, transforming how health systems detect and respond to outbreaks. Advanced AI systems now enable real-time pathogen surveillance, genomic analysis, and early detection of anomalies in public health data. Tools like BlueDot and EPIWATCH can flag emerging respiratory illnesses days or weeks before traditional alerts are issued, while AlphaFold's protein structure predictions speed the characterization of novel pathogens.
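
To make the idea concrete, here is a minimal sketch of the kind of statistical anomaly detection that underlies such surveillance systems. The function name, data, and threshold are illustrative inventions, not taken from BlueDot or EPIWATCH; real systems layer machine learning over far richer data streams.

```python
import numpy as np

def detect_anomalies(daily_cases, window=7, threshold=3.0):
    """Flag days whose case count exceeds a rolling baseline by
    `threshold` standard deviations (a simplified aberration-detection rule)."""
    cases = np.asarray(daily_cases, dtype=float)
    alerts = []
    for t in range(window, len(cases)):
        baseline = cases[t - window:t]
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        if sigma > 0 and (cases[t] - mu) / sigma > threshold:
            alerts.append(t)  # day t looks anomalous versus recent history
    return alerts

# Illustrative syndromic-surveillance counts: a quiet baseline, then a spike.
counts = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 41, 58]
print(detect_anomalies(counts))  # -> [10, 11]
```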

During the COVID-19 pandemic, platforms using machine learning and deep neural networks helped map viral mutations, forecast disease trajectories, and identify high-risk genetic variants. The study details how convolutional and graph neural networks have been deployed to predict infection surges by analyzing unconventional datasets scraped from the web in real time.

Beyond surveillance, AI now assists in rapid diagnostics, drug repurposing, and treatment discovery. Systems like AlphaMissense and EVEscape enable early detection of mutation patterns that might escape immune responses, while models like Clinfo.Ai help clinicians synthesize volumes of research into actionable guidelines. Even generative LLMs such as OpenAI's o3 and Gemini 2.5 Pro have reportedly outperformed human virologists in predictive tasks.

Yet, the study emphasizes that these same capabilities could be misused. AI-assisted tools can simulate the evolution of new pathogens or generate blueprints for synthetic viruses, presenting clear dual-use risks. The opacity of many AI systems, often operating as "black boxes," further complicates governance and trust.

What role does synthetic biology play in rapid vaccine innovation?

Synthetic biology is propelling vaccine science beyond traditional models. The study outlines how mRNA vaccines, lipid nanoparticle delivery systems, and programmable RNA structures are reshaping the speed and scalability of immunization. The success of COVID-19 mRNA vaccines, underpinned by modified nucleosides that improved stability and reduced immune activation, demonstrates the potency of this approach.

Emerging techniques, such as self-amplifying mRNA (replicons) and synthetic viral particles, allow for lower dosing and longer immunity with fewer side effects. These modular, plug-and-play vaccine platforms can be synthesized within hours once a pathogen's genetic code is identified, drastically accelerating response timelines.

In parallel, gene-editing breakthroughs via CRISPR-Cas9 have expanded the horizons of precision medicine. The development of AI-generated tools like OpenCRISPR-1 enables researchers to perform more targeted and efficient genome edits with reduced off-target activity. These tools not only optimize therapeutic interventions but also raise hopes for the creation of personalized vaccines and therapies for a range of genetic and infectious diseases.
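
As a rough illustration of what "off-target activity" means in practice, the toy sketch below counts mismatches between a guide RNA and candidate genomic sites. All sequences and names are hypothetical, and real design tools (including learned models like OpenCRISPR-1) use far more sophisticated, position-weighted scoring.

```python
# A toy illustration of off-target screening; this sketch just counts mismatches.
def mismatches(guide: str, site: str) -> int:
    """Number of mismatched bases between a guide RNA and a candidate site."""
    assert len(guide) == len(site)
    return sum(g != s for g, s in zip(guide, site))

guide = "GACGTTAACCGGATCCATGC"          # hypothetical 20-nt guide sequence
candidate_sites = {
    "on_target":  "GACGTTAACCGGATCCATGC",
    "off_target": "GACGTTAACCGGATACATGA",
}
for name, site in candidate_sites.items():
    # Sites within a few mismatches of the guide are potential off-target cuts.
    print(name, mismatches(guide, site))  # on_target 0, off_target 2
```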

Nevertheless, the acceleration of these tools has outpaced biosafety and bioethics frameworks. The ease of synthesizing and editing viral genomes, and the growing availability of open-source editing platforms, raise fears of accidental or malicious misuse. The study references the controversial recreation of the 1918 influenza virus and ongoing debates around gain-of-function research as evidence of the urgency to align innovation with regulation.

Is technological democratization a biosecurity risk or a social equalizer?

The study devotes a substantial section to the paradox of democratization. While DIY biology labs and open-source AI platforms empower underserved communities and drive scientific inclusivity, they also make advanced biotechnological capabilities accessible to non-state actors and amateurs lacking oversight.

Large language models (LLMs) developed for protein engineering and scientific modeling can now, in theory, assist in the design of new toxins or unpredictable biological agents. When paired with open-access DNA synthesis tools, the barrier to entry for hazardous experimentation drops dramatically. The study warns that the convergence of LLMs, genome editing, and cloud-based design software may facilitate biohacking or even acts of bioterrorism if left unchecked.

Conversely, democratization has led to life-saving breakthroughs, especially in low-resource regions. The use of genetically modified mosquitoes in Sub-Saharan Africa to combat malaria, or citizen science initiatives addressing agricultural resilience, show that responsible deployment can enhance global equity. The authors argue that halting innovation would exacerbate disparities, but failing to regulate emerging risks would endanger global health.

As a solution, the study advocates for adaptive governance models. These include international regulatory coordination, safe experimentation protocols, ethical oversight, and explainable AI systems that prioritize transparency. Without these, the balance between innovation and safety may tip toward catastrophe.


What Game Theory Reveals About AI


The growing ubiquity of artificial intelligence (AI) applications is rapidly changing everyday life, underscoring the need to understand the technology's social intelligence. A new study examines the social capabilities of large language models (LLMs), affirming the importance of applying human behavioral science to machines.

"As algorithms become increasingly more able and their decision-making processes impenetrable, the behavioral sciences offer new tools to make inferences just from behavioral observations," wrote lead author Dr. Eric Schulz along with co-authors Elif Akta, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, and Matthias Bethge. The researchers have affiliations with the Institute for Human-Centered AI, Helmholtz Munich, Max Planck Institute for Biological Cybernetics, and the University of Tübingen.

What are LLMs and why do they matter?

LLM is short for large language model, a deep learning AI model pre-trained on massive amounts of text data. Machine learning is a subset of artificial intelligence in which algorithms "learn" from training on large amounts of data rather than from explicitly hard-coded programming instructions. Deep learning is a subset of machine learning whose design is inspired by the human brain: deep learning models are neural networks with many processing layers containing nodes analogous to biological neurons.
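
The layered structure described here can be sketched in a few lines. This toy two-layer network is purely illustrative; production LLMs stack many more layers and use attention mechanisms rather than simple dense weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A minimal two-layer feedforward network: each layer is a weight matrix,
# and each column of weights feeds one artificial "neuron" of that layer.
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden layer (8 units)
W2 = rng.normal(size=(8, 1))   # hidden layer -> single output unit

def forward(x):
    hidden = relu(x @ W1)      # each hidden node sums weighted inputs, then fires
    return hidden @ W2         # output node sums weighted hidden activations

x = rng.normal(size=(1, 4))    # one example with 4 input features
print(forward(x))
```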

What sets LLMs apart is their use of word embeddings: high-dimensional numeric vectors that represent words and phrases so that the model can capture their context and relationships.
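
A toy example of that idea: words are mapped to vectors, and words with related meanings end up pointing in similar directions. The four-dimensional embeddings below are made-up values for illustration; real models learn embeddings with hundreds or thousands of dimensions.

```python
import numpy as np

# Made-up 4-dimensional embeddings, purely for illustration.
embeddings = {
    "virus":    np.array([0.9, 0.1, 0.3, 0.0]),
    "pathogen": np.array([0.8, 0.2, 0.4, 0.1]),
    "ballet":   np.array([0.0, 0.9, 0.1, 0.8]),
}

def cosine_similarity(a, b):
    """Similar meanings should map to nearby directions in vector space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["virus"], embeddings["pathogen"]))  # high
print(cosine_similarity(embeddings["virus"], embeddings["ballet"]))    # low
```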

Behavioral Game Theory, Prisoner's Dilemma, and the Battle of the Sexes

As with human cognition, exactly how, and which, individual neurons of a deep artificial neural network are responsible for producing a given output is too complex to trace. LLMs are therefore "black boxes," a term used to describe systems whose exact mechanisms and functions are largely opaque or unknown.

As LLMs interact with more and more people, there is a growing need to understand machine behavior. But how does one analyze an inanimate AI algorithm for behavior and social intelligence? The researchers hypothesized that behavioral game theory could provide useful insights.

Game theory, also called interactive decision theory, is a branch of applied mathematics that serves as a method to study the interdependent decision-making among competing players. Game theory is used across many disciplines such as psychology, economics, political science, sociology, biology, and computer science.

For this new study, the researchers chose the "Prisoner's Dilemma" and the "Battle of the Sexes" in an effort to gain insight into LLMs' capacity to exhibit human-like social behavior, such as cooperation and coordination, in their interactions.

There are many variations of the Prisoner's Dilemma, and there is no single right answer; the game presents a conflict between collective and individual interest. In the standard framing, two people are arrested for a crime, placed in separate interrogation rooms, and given two choices: confess to the crime, or stay silent. If both cooperate and stay silent, each receives just a year in prison. If both confess, each receives a three-year sentence. If one confesses and the other does not, the one who confesses goes free while the other receives a five-year sentence.
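
The scenario translates directly into a small payoff table. The sketch below encodes exactly the sentences described above and shows why confessing is the dominant strategy, which is what makes the dilemma a dilemma.

```python
# Years in prison for (my_action, their_action); lower is better.
# These numbers encode the scenario described above.
SENTENCE = {
    ("silent",  "silent"):  1,
    ("silent",  "confess"): 5,
    ("confess", "silent"):  0,
    ("confess", "confess"): 3,
}

def best_response(their_action):
    """The action that minimizes my sentence, given the other's choice."""
    return min(("silent", "confess"), key=lambda a: SENTENCE[(a, their_action)])

# Confessing is best no matter what the other player does, even though
# mutual silence (1 year each) beats mutual confession (3 years each).
print(best_response("silent"))   # -> confess (0 years beats 1)
print(best_response("confess"))  # -> confess (3 years beats 5)
```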

In 1950, Princeton mathematician Albert W. Tucker (1905-1995) coined the term "prisoner's dilemma" for the model of cooperation and conflict developed earlier at the RAND Corporation by American mathematician Merrill Flood (1908–1991) and Polish-born American mathematician Melvin Dresher (1911–1992).

The Battle of the Sexes was introduced by American mathematician and social scientist Robert Duncan Luce (1925-2012) and American professor Howard Raiffa (1924-2016) in their 1957 book Games and Decisions, dedicated to the memory of the late John von Neumann. In the original version of the game, a man and a woman have two choices for an evening's entertainment: a prize fight or the ballet. The man would rather attend the fight, the woman prefers the ballet, and both value going out together more than attending their preferred event alone.
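
This story, too, reduces to a payoff table. The sketch below uses illustrative ordinal payoffs that preserve only the story's ordering (being together beats being apart, and each player prefers their own favorite event) and checks which outcomes are stable.

```python
# Illustrative ordinal payoffs (man, woman); only the ordering matters.
PAYOFF = {
    ("fight",  "fight"):  (2, 1),
    ("ballet", "ballet"): (1, 2),
    ("fight",  "ballet"): (0, 0),
    ("ballet", "fight"):  (0, 0),
}

def is_nash_equilibrium(man, woman):
    """Neither player can gain by unilaterally switching venue."""
    m, w = PAYOFF[(man, woman)]
    other = {"fight": "ballet", "ballet": "fight"}
    return (PAYOFF[(other[man], woman)][0] <= m and
            PAYOFF[(man, other[woman])][1] <= w)

for man in ("fight", "ballet"):
    for woman in ("fight", "ballet"):
        print(man, woman, is_nash_equilibrium(man, woman))
# Both "meet at the fight" and "meet at the ballet" are stable outcomes;
# the hard part is agreeing on which one -- that is the coordination problem.
```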

The researchers tested five LLMs, including OpenAI's GPT-4, text-davinci-003, and text-davinci-002, Meta AI's Llama 2 70B Chat, and Anthropic's Claude 2, in two-player, two-action games played against each other as well as against real people. The team found that the LLMs excelled at self-interested games like the Prisoner's Dilemma but performed poorly in games like the Battle of the Sexes that require coordination. GPT-4 in particular did well when a game rewarded pursuing its own interest or logical reasoning, but was subpar on tasks that required coordination and teamwork.
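
To see why repeated coordination games are hard, consider a toy simulation of iterated play; this is not the study's actual experimental harness. A "stubborn" player always insists on its own favorite, roughly how GPT-4 behaved, while the other follows the common human convention of taking turns.

```python
# A toy simulation of repeated play -- not the study's actual harness.
# Payoffs (player 1, player 2) mirror the Battle of the Sexes story above.
PAYOFF = {
    ("fight",  "fight"):  (2, 1),
    ("ballet", "ballet"): (1, 2),
    ("fight",  "ballet"): (0, 0),
    ("ballet", "fight"):  (0, 0),
}

def stubborn(opponent_history):
    return "fight"                      # always insists on its own favorite

def alternator(opponent_history):
    # A common human convention: take turns between the two venues.
    return "fight" if len(opponent_history) % 2 == 0 else "ballet"

def play(rounds, p1, p2):
    h1, h2, score = [], [], [0, 0]
    for _ in range(rounds):
        a1, a2 = p1(h2), p2(h1)         # each player sees the other's past moves
        u1, u2 = PAYOFF[(a1, a2)]
        score[0] += u1; score[1] += u2
        h1.append(a1); h2.append(a2)
    return score

# The pair miscoordinates on every other round: stubbornness meets turn-taking.
print(play(10, stubborn, alternator))   # -> [10, 5]
```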

"Current generations of LLMs are generally assumed, and trained, to be benevolent assistants to humans," wrote the researchers. "Despite many successes in this direction, the fact that we here show how they play iterated games in such a selfish and uncoordinated manner sheds light on the fact that there is still substantial ground to cover for LLMs to become truly social and well-aligned machines."


The researchers then found that GPT-4 coordinated better with other players when a technique called Social Chain-of-Thought (SCoT) was deployed: prompting GPT-4 to predict the other player's action before deciding on its own.

"We find that SCoT prompting leads to more successful coordination and joint cooperation between participants and LLMs and makes participants believe more frequently that the other player is human," concluded the researchers.

The complexity of LLMs is expected to grow as they become more integrated into robotics and other physical systems and as their capabilities become multimodal, expanding beyond text to images, video, audio, sensory data, and other data types. This study highlights the significance of a behavioral science for machines as that complexity increases.

Copyright © 2025 Cami Rosso All rights reserved.





