How artificial intelligence is transforming the world
Businesses Can't Escape The AI Revolution – So Here's How To Build A Culture Of Safe And Responsible Use
In November 2023, the estates of two now-deceased policyholders sued the US health insurer UnitedHealthcare for deploying what they allege is a flawed artificial intelligence (AI) system to systematically deny patient claims.
The issue – they claim – wasn't just how the AI was designed. It was that the company allegedly also limited the ability of staff to override the system's decisions, even if they thought the system was wrong.
They allege the company even went so far as to punish staff who failed to act in accordance with the model's predictions.
Regardless of the eventual outcome of this case, which remains before the US court system, the claims made in the suit highlight a critical challenge facing organisations.
While artificial intelligence offers tremendous opportunities, its safe and responsible use depends on having the right people, skills and culture to govern it properly.
Getting on the front foot
AI is pervading businesses whether they like it or not. Many Australian organisations are moving quickly on the technology. Far too few are focused on proactively managing its risks.
According to the Australian Responsible AI Index 2024, 78% of surveyed organisations claim their use of AI is in line with the principles of responsible AI.
Yet, only 29% said they had implemented practices to ensure it was.
AI applications range from easily accessible general-use chatbots to highly specialised software.
Sometimes visible, sometimes not
In some cases, AI is a well-publicised selling point for new products, and organisations are making positive decisions to adopt it.
At the same time, these systems are increasingly hidden from view. They may be used by an upstream supplier, embedded as a subcomponent of a new product, or inserted into an existing product via an automatic software update.
Sometimes, they're even used by staff on a "shadow" basis – out of sight of management.
AI is increasingly becoming embedded in all kinds of systems, making it hard to know where and how we rely on it.
The pervasiveness – and often hidden nature – of AI adoption means that organisations can't treat AI governance as merely a compliance exercise or technical challenge.
Instead, leaders need to focus on building the right internal capability and culture to support safe and responsible AI use across their operations.
What to get right
Research from the University of Technology Sydney's Human Technology Institute points to three critical elements that organisations must get right.
First, it's absolutely critical that boards and senior executives have sufficient understanding of AI to provide meaningful oversight.
This doesn't mean they have to become technical experts. But directors need to have what we call a "minimum viable understanding" of AI. They need to be able to spot the strategic opportunities and risks of the technology, and to ask the right questions of management.
If they don't have this expertise, they can seek training, recruit new members who have it or establish an AI expert advisory committee.
Clear accountability
Second, organisations need to create clear lines of accountability for AI governance. These should place clear duties on specific people with appropriate levels of authority.
A number of leading companies are already doing this, by nominating a senior executive with explicitly defined responsibilities. This is primarily a governance role, and it requires a unique blend of skills: strong leadership capabilities, some technical literacy and the ability to work across departments.
Third, organisations need to create a governance framework with simple and efficient processes to review their uses of AI, identify risks and find ways to manage them.
Above all, building the right culture
Perhaps most importantly, organisations need to cultivate a critically supportive culture around AI use.
What does that mean? It's an environment where staff – at all levels – understand both the potential and the risks of AI and feel empowered to raise concerns.
Telstra's "Responsible AI Policy" is one case study of good practice in a complex corporate environment.
To ensure the board and senior management would have a good view of AI activities and risks, Telstra established an oversight committee dedicated to reviewing high-impact AI systems.
The committee brings together experts and representatives from legal, data, cyber security, privacy, risk and other teams to assess potential risks and make recommendations.
Importantly, the company has also invested in training all staff on AI risks and governance.
Appropriate AI training is necessary at every level of an organisation.
Bringing everyone along
The cultural element is particularly crucial because of how AI adoption typically unfolds.
Our previous research suggests many Australian workers feel AI is being imposed on them without adequate consultation or training.
This doesn't just create pushback. It can also mean organisations miss out on important feedback on how their staff actually use AI to create value and solve problems.
Ultimately, our collective success with AI depends not so much on the technology itself, but on the human systems we build around it.
This is important whether you lead an organisation or work for one. So, the next time your colleagues start discussing an opportunity to buy or use AI in a new way, don't just focus on the technology.
Ask: "What needs to be true about our people, skills and culture to make this succeed?"
This School Will Have Artificial Intelligence Teach Kids (With Some Human Help)
Artificial intelligence is at the forefront of an Arizona online charter school slated to open in the fall, with teachers taking on the role of guides and mentors rather than content experts.
Unbound Academy was approved by the state school board last month, with enrollments beginning in January. The school's model, which will prioritize AI in its delivery of core academics, is part of a continuing evolution of using AI technology in classrooms.
But as the technology becomes more prevalent, so too does the challenge of determining how schools can use it to enhance offerings and reduce workloads without replacing teachers. The nation's two largest teachers' unions have already begun to grapple with AI's growing involvement in the nation's classrooms, issuing guidance and guardrails around its use.
In this case, humans are still an important part of the equation, according to the founders of the school. Still, it marks a movement toward embracing AI as a collaborator—something schools are more readily doing now, said Marcelo Worsley, an associate professor of computer science and learning sciences at Northwestern University's school of education and social policy.
"I think COVID kind of pushed us more into that space as more students were getting connected to one-to-one technology experiences, and people were looking for the resources and tools that students could use—especially when they don't have continuous access to an instructor, or when they're not always certain that they're going to be in person in a classroom setting," Worsley said.
'You cannot get rid of the human in the classroom'
Unbound Academy aims to enroll roughly 200 students in its first year, and will serve students in grades 4 to 8 initially. School leadership told the school board in December they hoped to expand to kindergarten through 3rd grade eventually.
The program is affiliated with private schools in Texas and Florida, but this will be the founders' first foray into public schools.
The school will prioritize AI in its content delivery model, with students working at their own pace through math, reading, and science for the first two hours of their day.
AI, the founders say, will adapt to address what students are excelling in—ratcheting up the instruction to match the student's knowledge and skills to keep things challenging—while tempering other lessons if a student isn't grasping material. The goal is for fine-tuned personalization: One 5th grade student could be reading at an 8th grade level, while starting math at a 3rd grade level.
The school's charter school application to the state school board says the "AI rigorously analyzes comprehensive student data—response accuracy, engagement duration, and emotional feedback via webcam—to ensure lessons are appropriately challenging."
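As a rough illustration only, the kind of mastery-based pacing described above might look something like the sketch below. The thresholds, step sizes and function names here are invented for the example and are not taken from Unbound Academy's actual system.

```python
# Hypothetical sketch of mastery-based difficulty adjustment.
# Nothing here reflects Unbound Academy's real implementation.

def adjust_level(level: float, recent_accuracy: float, step: float = 0.5) -> float:
    """Raise the lesson level when a student excels; temper it when they struggle."""
    if recent_accuracy >= 0.9:          # consistently correct: make it harder
        return level + step
    if recent_accuracy <= 0.6:          # not grasping the material: ease off
        return max(1.0, level - step)
    return level                        # appropriately challenging: hold steady

# One student can sit at very different levels per subject, like the
# 5th grader reading at an 8th grade level but starting math at 3rd grade:
reading_level = adjust_level(7.5, recent_accuracy=0.95)   # -> 8.0
math_level = adjust_level(3.5, recent_accuracy=0.50)      # -> 3.0
```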
The curriculum will utilize third-party providers—online programs such as IXL or Math Academy, among others—along with the school's own apps, including the AI tutor, which monitors how students are learning and where they're struggling.
Meanwhile, teachers—known as "guides"—will monitor the students' progress. Mostly, the guides will serve as motivators and emotional support, said MacKenzie Price, a cofounder of Unbound Academy. She also founded 2 Hour Learning, which focuses on having two hours of academics a day followed by four hours of personal projects, the model Unbound Academy will employ.
"You cannot get rid of the human in the classroom. That is the whole connection," Price said. "But what we can do is provide a better model. Instead of a teacher having to try to meet 20-plus different students who are all at totally varied levels of understanding where they're at academically—that is such an impossible hill, in traditional models, to climb—we're allowing them to really do what they're able to do really well: connecting with students."
The guides, who will be "well-compensated" according to the school's application, will be charged with connecting with students throughout the day, including in a group session in the morning before students begin their coursework.
Price said the teachers will be certified according to Arizona's requirements, though at the private brick-and-mortar schools employing the same model in Texas, previous teaching experience is not required for the guides, according to NBC's reporting. The application projects a ratio of one guide to 33 students.
Guides will hold one-on-one meetings with students throughout each week. They will be able to see how students are progressing and learning, will assist if there are challenges with the material, and contact families if students aren't doing coursework.
"Our teachers are looking at the motivation, how kids are learning, if they're learning effectively and efficiently through the system—but they're not teaching math," Price said.
Guides will lead "life skills" workshops in the afternoon, where students learn "practical, real-world experiences," such as financial literacy, public speaking, goal setting, and more, according to the application. If students work together on a specific project such as a simulation of defusing a bomb, Price said, the guides will help teach communication, teamwork, and leadership.
Schools are more readily embracing AI
There has been a decades-long movement toward intelligent tutoring systems, said Worsley, the professor from Northwestern: software that identifies what students know, what they don't, and whether they've demonstrated mastery of a topic. The original models relied more on human input, but now the technology is more advanced, he said.
Public schools now often incorporate AI-powered resources like IXL or Khan Academy into their instruction, Worsley said.
And for years now, some schools have used online learning programs to fill hard-to-staff vacancies—students learn from the software with oversight from an in-person facilitator. AI could make those models more effective.
Still, teachers and their associations remain wary of AI taking over classroom duties. In the National Education Association's July 2024 guidance, the teachers' union emphasized that AI should never replace human interaction. The American Federation of Teachers also highlighted the importance of humans in a June report.
Unbound Academy's model, of having AI take over instruction with a human touch, is an outlier for now—but it might show up more frequently, Worsley said.
"The reality is that aspects of AI are being built into many of the tools that school districts were using beforehand, or recently adopted, as a result of the pandemic, or just the general explosion and excitement around AI that's happening right now," he said.
5 Big Advances Last Year In Artificial Intelligence
Perhaps the new year is a good time to look back on the old year, and see where we've come within the annual cycle.
There will never be another year like 2024 for artificial intelligence.
Throughout the year, obscure product demos became household names. People started to really zero in on using autonomous AI agents for problems like climate change. We also saw radical changes in the infrastructure behind these models.
I was looking at some of the roundups that are out there as we launch into the new year. This one is fairly detailed, and has several dozen points, many of which I've covered. But here are some of the big ones that stand out to me as I look back through the last 365 days.
AGI is Closer
One of the overarching ideas that comes back, time and time again, is that we're closer to artificial general intelligence (AGI) than we thought we were at the beginning of last year.
Here's a survey that I did with a variety of people close to the industry in January. You can see those different time frame predictions, balanced against each other.
Now, though, many of the cognoscenti think that we're on the cusp of AGI itself. So a good number of those forecasts are going to be substantially revised.
AI Can Solve Language
Toward the end of the year, we also found out that we actually have the power right now to build real-time translation into our consumer products.
That mainly came about through the demos of Meta's AI Ray-Ban glasses just weeks ago. When Mark Zuckerberg interviews people with the AI engine translating his questions into other languages in real time, we see this technology at work.
Language is important, too.
I was looking at this interview with Lex Fridman from last February, where he was talking about the importance of applying AI to different world languages. We can't take for granted, he explained, that people speak English.
"Anything where there's interaction going on with a product, all of that should be captured, all that should be converted into data," he said at the time. "And that's going to be the advantage - the algorithms don't matter … you have to be able to fine-tune it to each individual person, and do that, not across a single day or single interaction, but across a lifetime, where you share memories, the low, the highs, and the lows, with your large language model."
I've consistently applied the analogy of the Tower of Babel story to the process of figuring out how to use AI to communicate. It's a "reverse Tower of Babel", in which various language speakers come together to celebrate their new ability to understand one another without the use of a human translator.
The Transformer is the Engine, but It's Also Replaceable
As 2024 wore on, I covered the use of transformers in new language model systems.
Experts describe the transformer in terms of its "attention mechanism", which allows the program to focus on the things that matter more - to it - and to the human user.
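To make that concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer. The shapes and names are illustrative, not any particular model's code: each query scores every key, and those scores become weights over the values.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: a weighted mix of values, where the
    weights say which inputs 'matter more' to each query."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over keys
    return weights @ V

# Toy self-attention over 4 tokens with 16-dimensional embeddings
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 16))
out = attention(tokens, tokens, tokens)                # shape (4, 16)
```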
But 2024 also brought glimmers of brand-new concepts to replace the transformer, ideas that move toward the realm of quantum computing and super powerful processing of information that's not gated by a traditional logic structure.
Which brings me to my next point.
Revolutionizing Neural Network Capacity
Another thing we saw grow in prominence was liquid neural networks.
Now is the time to add the usual Disclaimer: I have consulted on liquid neural network projects tackled by the MIT CSAIL lab group under director Daniela Rus - so I have some personal affiliation with this trend.
Liquid neural networks change the essential structure of the digital organism, in order to allow for much more powerful AI cognition on fewer resources.
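For readers who want to see what "changing the essential structure" means in practice, here is a minimal sketch of one liquid time-constant (LTC) cell update, following the fused semi-implicit Euler step described in the published LTC work from MIT CSAIL. All parameter names, sizes and values below are illustrative assumptions, not production code.

```python
import numpy as np

def ltc_step(x, inputs, W_x, W_i, b, tau, A, dt=0.1):
    """One update of the hidden state x. The effective time constant of each
    neuron depends on its current input - the 'liquid' part - which is what
    lets small networks capture rich dynamics."""
    f = np.tanh(W_x @ x + W_i @ inputs + b)             # input-dependent gate
    # Fused solver step for the ODE  dx/dt = -(1/tau + f) * x + f * A
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Toy usage: 8 neurons driven by a 3-dimensional input signal
rng = np.random.default_rng(0)
n, m = 8, 3
x = np.zeros(n)
W_x = rng.normal(size=(n, n)) * 0.1
W_i = rng.normal(size=(n, m)) * 0.1
b, tau, A = np.zeros(n), np.ones(n), np.ones(n)
for _ in range(100):
    x = ltc_step(x, rng.normal(size=m), W_x, W_i, b, tau, A)
```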
That's, to a large extent, the type of thing that's been useful in allowing people to put powerful LLMs on edge devices like smartphones. It's probably a deciding factor in Google's ability to roll out Gemini on personal devices late last year. So now we're able to "talk to our pockets" quite literally, and that's a big difference. Part of the acceptance of AI itself is going to be in its ubiquity – where we encounter it, and how it impacts our lives.
AI is Winning at Multimedia
Here's one more big overarching premise of the work that people have done with AI in 2024. It has to do with media.
I looked back, and it turns out I covered an early notice on OpenAI's Sora in February. And sure enough, late last year we saw an early version roll out. I used it personally to create some interesting and whimsical little film clips, all without any casting or shooting or production at all. It was pretty amazing.
That's not to mention the groundbreaking text-to-podcast model where you can actually plug in a PDF or some resource info sheet, and have two non-human "people" gabbing about your chosen topic, sounding exactly like a couple of traditional disc jockeys. (Also: check out the brand-new blizzard of stories about Scarlett Johansson protesting the use of a Scarlett-esque voice for the now-pulled Sky assistant.)
This is another example of personal use of AI to bring home the point that we're in a new era now. As you listen to these people talk, or even interact with them in conversation, you have to ask yourself: are these people real? And how do I know that? They're responding to me personally in real time – how do they do that if they don't exist?
You could call this a "deep Turing test," and it's clear that the systems are passing with flying colors.
Anyway, that's my roundup for 2024. There's a lot more, of course, from genomics to publishing, and everything in between, but now that we're past the Auld Lang Syne, people are asking themselves what's to come in 2025. We'll see, pretty soon.