AI Weekly: This machine learning report is required reading




Artificial Intelligence May Amplify Bias, But Fights It As Well

AI as a force for good. Getty

AI doesn't create bias; it only amplifies it. Yet it can also serve as part of the solution. Organizations need to better understand that the problem isn't AI itself, but the human foibles behind it.

That's the word from Tomas Chamorro-Premuzic, author of I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Chamorro-Premuzic, a psychologist by training, chief innovation officer at ManpowerGroup, and professor at University College London, opines that "AI could become the biggest reality-check weapon in the history of technology but is instead co-opted into a reality-distortion tool. To the degree that AI can help us confirm our own interpretations of reality or make us look good, we will embrace it. Failing that, we should regard AI as a failed experiment."

In other words, AI, used judiciously, is the best tool to come along for showing us where bias is occurring within our businesses and interactions. As explored in previous posts here at Forbes, we spoke with workplace equity advocates about the power AI and related technologies bring to opening opportunities for minorities and women in corporations. AI is shining a light on where discrimination and bias are occurring.

"Most high-profile cases of AI horror stories, or transfer human decision-making to machines, are akin to 'shooting the messenger,'" Chamorro-Premuzic points out. "The very algorithms that are indispensable for exposing the bias of a system, organization, or society are lambasted for being biased, racist, or sexist, just because they do a terrific job replicating human preferences or decision-making."

If only AI "could convert people into more open-minded versions of themselves by showing them what they don't want to hear, it would certainly do that," he continues. "If only AI alone would present to hiring managers people who are categorically different from those they have hired in the past and change managers' preferences."

If only. "Then we would not talk about open-minded AI or ethical AI, but open-minded humans or ethical, intelligent, curious humans. It's the same for the reverse, which is the real world we live in."

Chamorro-Premuzic pulls no punches when it comes to pointing out how AI is not to blame, but is amplifying our worst human traits. "The most notable thing about AI is not AI itself, let alone its intelligence, but its capacity for reshaping how we live, particularly through its ability to exacerbate certain human behaviors, turning them into undesirable or problematic tendencies," he says. "Irrespective of the pace of technological advancement, and how rapidly machines may be acquiring something akin to intelligence, we are as a species exhibiting some of our least desirable character traits, even according to our own low standards."

AI may be able to help us on the bias front, Chamorro-Premuzic states. "One of its biggest potential utilities is to reduce human biases in decision-making, which is something modern society appears to be genuinely interested in doing. AI has been successfully trained to do what humans generally struggle to do, namely, to adopt and argue from different perspectives, including taking a self-contrarian view or examining counterarguments in legal cases."

In general, he continues, "you can think of AI as a pattern-detection mechanism, a tool that identifies connections between causes and effects, inputs and outputs. Furthermore, unlike human intelligence, AI has no skin in the game; it is by definition neutral, unprejudiced, and objective. This makes it a powerful weapon for exposing biases – a key advantage of AI that is rarely discussed."

AI and machine systems are only as good as their inputs. "And if the data we use as input is biased or dirty, the outputs – the algorithm-based decisions – will be biased, too. Worse, in some scenarios, including data-intensive technical tasks, we trust AI over other humans. This problem also highlights the biggest potential AI has for de-biasing our world. But it does require an understanding – and willingness to acknowledge – that the bias is not the product of AI, but rather only exposed by AI."
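His garbage-in, garbage-out point is easy to demonstrate. The following is a minimal sketch in Python, using entirely synthetic data and hypothetical variable names rather than any real hiring system: a model trained on historically biased decisions reproduces the bias, because the bias lives in the human-made labels, not in the learning algorithm.

```python
# Minimal sketch with synthetic data: a model trained on biased historical
# hiring decisions reproduces that bias. The bias lives in the labels
# (human decisions), not in the learning algorithm itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
skill = rng.normal(0.0, 1.0, n)        # genuinely job-relevant signal
group = rng.integers(0, 2, n)          # hypothetical protected attribute

# Biased historical labels: past managers discounted group-1 candidates.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates, one from each group:
probe = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(probe)[:, 1])  # group-1 candidate scores visibly lower
```

Shelving the model would not remove the disparity; it sits in the historical decisions themselves, which is exactly the point Chamorro-Premuzic makes next.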

"If you don't use AI or algorithms to recommend to online dating users whom they should date, their preferences may still be biased," Chamorro-Premuzic illustrates. "Refraining from using AI to select and recruit candidates that fit a certain mold – say, middle-aged white male engineers – will not stop people who fit in with that tribe from succeeding in the future. If the bias does not go away just because you don't use AI, then you can see where the bias actually lies – in the real world, human society, or the system – and it can be exposed through the use of AI."


New Artificial Intelligence Tool Can Accurately Identify Cancer

Doctors, scientists and researchers have built an artificial intelligence model that can accurately identify cancer in a development they say could speed up diagnosis of the disease and fast-track patients to treatment.

Cancer is a leading cause of death worldwide. It results in about 10 million deaths annually, or nearly one in six deaths, according to the World Health Organization. In many cases, however, the disease can be cured if detected early and treated swiftly.

The AI tool designed by experts at the Royal Marsden NHS foundation trust, the Institute of Cancer Research, London, and Imperial College London can identify whether abnormal growths found on CT scans are cancerous.

The algorithm performs more efficiently and effectively than current methods, according to a study. The findings have been published in the Lancet's eBioMedicine journal.


"In the future, we hope it will improve early detection and potentially make cancer treatment more successful by highlighting high-risk patients and fast-tracking them to earlier intervention," said Dr Benjamin Hunter, a clinical oncology registrar at the Royal Marsden and a clinical research fellow at Imperial.

The team used CT scans of about 500 patients with large lung nodules to develop an AI algorithm using radiomics. The technique can extract vital information from medical images not easily spotted by the human eye.
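As an illustration of the underlying idea (a toy sketch on a synthetic image, not the study's actual pipeline), radiomics reduces a segmented region of a scan to quantitative features. Real pipelines extract hundreds of shape, intensity, and texture descriptors; the sketch below computes just a few first-order ones:

```python
# Illustrative sketch only: first-order radiomic-style statistics over a
# segmented nodule region in a synthetic CT slice (values in Hounsfield units).
import numpy as np

rng = np.random.default_rng(1)
ct_slice = rng.normal(-700, 80, size=(64, 64))   # lung background, ~-700 HU
ct_slice[24:40, 24:40] += 750                    # denser, nodule-like region

mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True                        # segmentation of the nodule
voxels = ct_slice[mask]

hist, _ = np.histogram(voxels, bins=32)
p = hist[hist > 0] / hist.sum()                  # intensity distribution

features = {
    "mean_hu": voxels.mean(),                    # average density
    "std_hu": voxels.std(),                      # heterogeneity
    "skewness": ((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3,
    "entropy": float(-(p * np.log2(p)).sum()),   # disorder of intensities
}
print(features)
```

Features like these, computed at scale, are what the algorithm learns from, capturing texture and density patterns that are not easily spotted by eye.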

The AI model was then tested to determine if it could accurately identify cancerous nodules.

The study used a measure called area under the curve (AUC) to see how effective the model was at predicting cancer. An AUC of 1 indicates a perfect model, while 0.5 would be expected if the model was randomly guessing.
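For readers unfamiliar with the metric, here is a quick illustration using made-up labels and scores rather than the study's data, assuming scikit-learn is available:

```python
# Toy illustration of AUC (area under the ROC curve) on synthetic data.
# 1.0 means the model ranks every cancer above every non-cancer; 0.5 is chance.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
is_cancer = rng.integers(0, 2, 200)          # made-up ground-truth labels

# An informative-but-noisy score: separation plus noise keeps AUC below 1.0.
scores = is_cancer * 0.8 + rng.normal(0.0, 0.5, 200)
print(roc_auc_score(is_cancer, scores))      # well above chance, below perfect

# A random score, by contrast, hovers around 0.5:
print(roc_auc_score(is_cancer, rng.random(200)))
```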

The results showed the AI model could identify each nodule's risk of cancer with an AUC of 0.87. The performance improved on the Brock score, a test currently used in clinic, which scored 0.67. The model also performed comparably with the Herder score – another test – which had an AUC of 0.83.

"According to these initial results, our model appears to identify cancerous large lung nodules accurately," Hunter said. "Next, we plan to test the technology on patients with large lung nodules in clinic to see if it can accurately predict their risk of lung cancer."

The AI model may also help doctors make quicker decisions about patients with abnormal growths that are currently deemed medium-risk.

When combined with Herder, the AI model was able to identify high-risk patients in this group. It would have suggested early intervention for 18 out of 22 (82%) of the nodules that went on to be confirmed as cancerous, according to the study.

The team stressed that the Libra study – backed by the Royal Marsden Cancer Charity, the National Institute for Health and Care Research, RM Partners and Cancer Research UK – was still at an early stage. More testing will be required before the model can be introduced in healthcare systems.

But its potential benefits were clear, they said. Researchers hope the AI tool will eventually be able to speed up the detection of cancer by helping to fast-track patients to treatment, and by streamlining the analysis of CT scans.

"Through this work, we hope to push boundaries to speed up the detection of the disease using innovative technologies such as AI," said the Libra study's chief investigator, Dr Richard Lee.

The consultant physician in respiratory medicine at the Royal Marsden and team leader at the Institute of Cancer Research said lung cancer was a good example of why new initiatives to speed up detection were urgently needed.

Lung cancer is the biggest worldwide cause of cancer mortality, and accounts for a fifth (21%) of cancer deaths in the UK. Those diagnosed early can be treated much more effectively, but recent data shows more than 60% of lung cancers in England are diagnosed at either stage three or four.

"People diagnosed with lung cancer at the earliest stage are much more likely to survive for five years, when compared with those whose cancer is caught late," said Lee.

"This means it is a priority we find ways to speed up the detection of the disease, and this study – which is the first to develop a radiomics model specifically focused on large lung nodules – could one day support clinicians in identifying high-risk patients."


What Are The Dangers Of AI? Find Out Why People Are Afraid Of Artificial Intelligence

Here's a look at how powerful artificial intelligence can be


Many experts worry that the rapid development of artificial intelligence may have unforeseen disastrous consequences for humanity. 

Machine learning technology is designed to assist humans in their everyday life and provide the world with open access to information. 

However, the unregulated nature of AI in its current state could lead to harmful consequences for its users and the world as a whole. Read below to find out the risks of AI.


The emergence of artificial intelligence has led to feelings of uncertainty, fear, and hatred toward a technology that most people do not fully understand. AI can automate tasks that previously only humans could complete, such as writing an essay, organizing an event, and learning another language. However, experts worry that the era of unregulated AI systems may create misinformation, cyber-security threats, job loss, and political bias. 

For instance, AI systems can articulate complex ideas coherently and quickly because they draw on large data sets. However, the information used by AI to generate responses can be incorrect, because these systems cannot reliably distinguish valid data from false or outdated sources. The open-access usage of these AI systems may further spread this misinformation in academic papers, articles, and essays. 


In addition, the algorithms that compose the operational capabilities of artificial intelligence are built by humans with certain political and social biases. If humanity becomes reliant on AI to seek out information, then these systems could skew research in a way that benefits one side of the political aisle. Certain AI chat programs, such as ChatGPT, have faced allegations of operating with a liberal bias by refusing to generate information about Hunter Biden's laptop scandal. 

Artificial intelligence offers many advantages to humans, including streamlining simple and complex everyday tasks, and it can act as a ready-to-go 24/7 assistant; however, AI does have the potential to get out of control. One of the dangers of AI is its ability to be weaponized by corporate entities or governments to restrict the rights of the public. For example, AI can use facial recognition data to track the location of individuals and families. China's government regularly uses this technology to target protesters and those advocating against regime policies. 

Moreover, artificial intelligence offers a wide range of advantages to the financial industry by advising investors on market decisions. Companies use AI algorithms to help build models that predict future market volatility and when to buy or sell stocks. However, algorithms do not use the same context that humans use when making market decisions and do not understand the fragility of the everyday economy. 

Companies using artificial intelligence to filter out applicants during the hiring process may lead to discrimination. REUTERS/Dado Ruvic/Illustration


AI could complete thousands of trades within a day to help boost profits but may contribute to the next market crash by scaring investors. Financial institutions need to have a deep understanding of the algorithms of these programs to ensure there are safety nets to stop AI from overselling stocks. 

Religious and political leaders have also noted how the rapid development of machine learning technology can lead to a degradation of morals and cause humanity to become completely reliant on artificial intelligence. Tools such as OpenAI's ChatGPT may be used by college students to forge essays, thus making academic dishonesty easier for millions of people. Meanwhile, jobs that once gave individuals purpose and fulfillment, as well as a means of living, could be erased overnight as AI continues to accelerate in public life. 

Artificial intelligence can lead to invasion of privacy, social manipulation, and economic uncertainty. But another aspect to consider is how the rapid, everyday use of AI can lead to discrimination and socioeconomic struggles for millions of people. Machine learning technology collects a trove of data on users, including information that financial institutions and government agencies may use against you.

A common example is a car insurance company raising your premiums based on how many times an AI program has tracked you using your phone while driving. In the employment arena, companies may use AI hiring programs to filter applicants for the qualities they want in candidates. This may exclude people of color and individuals with fewer opportunities. 

Over the last few years, the use and popularity of AI has grown rapidly across the world. iStock

The most dangerous element to consider with artificial intelligence is that these programs do not make decisions based on the same emotional or social context as humans. Although AI may be used and created with good intentions, it could lead to the unforeseen dangers of discrimination, privacy abuse, and rampant political bias.  





