3 Leaders Share How They're Approaching The AI Revolution
When the world was introduced to the internet in the 1990s, plenty of professionals weren't sure whether it was friend or foe. Today, the same is happening with the emergence of artificial intelligence solutions.
It's clear that AI isn't going anywhere—despite warnings that we need a temporary AI moratorium. AI is moving quickly and changing the economic and social landscape. Without thoughtful approaches, we may find ourselves in a pickle of our own making.
Yet, it can be hard to know where to begin. Each day, it seems like there are new AI developments. We can't afford to lollygag when it comes to AI adoption. At the same time, we can't afford to leave it all to chance; we must be strategic.
Navigating the evolution of AI
I turned to a few leaders who have been using AI to get some on-the-ground advice. Interestingly, all three of the executives and founders I interviewed acknowledged similar concerns about AI's challenges as well as hope for a future buoyed by AI's best promises.
For instance, ethics was a common thread. Perhaps AI products should come with blanket warnings, because privacy isn't something AI has mastered. Products like ChatGPT require enormous data sets to keep learning, yet they can't tell whether a use case is ethical unless some kind of boundary is in place.
Like ethics, bias is another issue with AI. When studying one AI image-generating program, Bloomberg researchers found that it produced shockingly biased images because it had been trained on biased data. When asked to produce images of doctors, the program depicted women just 7% of the time, even though real-world data suggest women make up 39% of doctors.
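The arithmetic behind an audit like Bloomberg's is simple: tally how often a group appears in generated images and compare that share against a real-world baseline. A minimal sketch of that comparison is below; the labels are hypothetical stand-ins for whatever classifier or human annotation produced the counts.

```python
# Minimal sketch of bias-audit arithmetic: compare the observed share
# of a group in generated images against a real-world baseline.
# The label data below is hypothetical, sized to match the 7% figure.
from collections import Counter

def representation_gap(labels, group, baseline):
    """Return (observed share of `group`, gap versus `baseline`)."""
    counts = Counter(labels)
    observed = counts[group] / len(labels)
    return observed, observed - baseline

# e.g. 100 generated "doctor" images, 7 labelled as women,
# measured against the real-world 39% baseline cited above.
labels = ["woman"] * 7 + ["man"] * 93
observed, gap = representation_gap(labels, "woman", 0.39)
print(f"observed {observed:.0%}, gap {gap:+.0%}")  # observed 7%, gap -32%
```

The point of the gap number is that "7% women" means little on its own; it only becomes evidence of bias when set against the 39% real-world share.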
All of this shows AI isn't perfect. However, it isn't some darkness descending upon humanity. On the contrary, many AI programs afford organizations tremendous opportunities, such as increased productivity and augmented creativity. Workers can finally be free of repetitive tasks and begin to explore their creative, innovative talents.
This is hardly terrible news. It points to a future when people can unlock more of the brain's power and burned-out managers and employees can focus on tasks that give them meaning. But to make this happen, companies must be strategic. Here are three ways to engage with AI according to industry experts.
1. Take a front-seat approach.
Subha Tatavarti, chief technology officer at Wipro, wants leaders to drive AI development within their organizations. Rather than watching and waiting, she recommends implementing strategies that direct the internal course of AI. This way, as AI evolves and progresses, those businesses that have taken a front-seat approach can harness the technology's best aspects: productivity, revenue, and innovation gains.
"For CTOs to understand how to use AI to innovate and grow their organizations, igniting creativity and simplifying the business down to its building blocks is crucial," she says. "CTOs should focus on what their business is about at its core and then investigate how the team can use AI to enhance these foundational blocks. The art of being a CTO is finding the right problem to focus on. Once you have that, you can use your creativity to find the right tech to overcome that problem."
Tatavarti and her company have set up an AI council. The council works on firming up standards for AI development and usage, setting ethical guidelines for mitigating biased algorithms, ensuring fairness, and preventing discriminatory outcomes. Wipro has also partnered with several leading companies across a wide variety of industries to develop centers of excellence, drawing on its consulting expertise and on foundational research from academic partnerships. The measures that Tatavarti is focused on—regular audits, employee training, etc.—should allow AI capabilities to unfold in ways that better humanity.
2. Start small and contained.
There isn't a day that Diana Bald, president of Blue Orange Digital, doesn't use AI for something. Designing employee and client onboarding plans. Formulating meeting and workshop agendas. Creating job descriptions and customizing career development programs. AI touches practically every element of what she does, and she's always looking for more ways to leverage AI models.
"AI is expanding the capabilities of data science," Bald says. "Capable of analyzing massive data sets, AI enables us to perform more complex types of data analysis, including natural language processing, image recognition, and deep learning (a form of AI that mimics the human brain). AI excels at handling unstructured data (such as images and voice) and extracting meaningful insights—tasks that were previously challenging or time-consuming."
However, she doesn't feel that businesses need to jump into the mix all at once. Bald reports that Blue Orange Digital's solutions architects, data scientists, and engineers start with pilot projects. The "start small" mindset allows the team to test, learn, and iterate AI-based technologies prior to scaling up. When pilots show promise, Bald gives the green light to incrementally initiate AI across the company's operations. This ensures a seamless transition and a slow, controlled familiarization of AI across all the people in the business.
Once people grasp the fundamentals of how to use AI, they are encouraged to find optimal use cases and models for specific tasks. This is all done within an environment that's already invested in cybersecurity. For leaders very new to figuring out where AI belongs in their ecosystem, Bald suggests starting with automating routine tasks or adding chatbots rather than jumping headfirst into AI-enhanced customer behavior forecasting, internal talent discovery, or logistics optimization.
3. Approach innovations with curiosity.
Caution is always needed when evaluating new technology. At the same time, Michael Scharff, CEO and cofounder of Evolv AI, doesn't want leaders to lose their feelings of curiosity toward AI innovations. He sees time and again that those who work in harmony with AI—while proceeding at a speed they can handle—come out on top.
"Brands and companies that adopt AI for experimentation and testing will have a more significant competitive advantage," asserts Scharff. "It is essential to experiment with generative AI now. Like when the internet disrupted a wide range of industries in the early 2000s, the AI revolution forces companies to adapt or be left behind. Likely, your employees are already using generative AI at work. Get curious and see what happens."
What does curiosity look like from a practical standpoint? For Evolv AI, it means constant experimentation when it comes to customer experience. Scharff and his team frequently lean on AI to help organize notes and meeting agendas, effectively personalize content, and write complex code for variants they want to demo with prospective clients.
By asking questions about AI, leaders can take away the mystery. This urges everyone to keep the conversation going and share their thoughts and solutions.
AI is becoming embedded in business and the fabric of society at large. Is this reality somewhat intimidating? Yes. Nevertheless, companies can't just pretend it doesn't exist. Now may be the best moment to take the lead in an industry.
How Can Humans Best Use AI?
Often a little stress can sharpen the mind. A recent journey by train from Paris to Oxford was disrupted first by a cancelled train and then, predictably, by a delayed one. This complicated an otherwise pleasant day because I was supposed to be sitting in front of my laptop participating in the aperture 4X4 discussion forum on AI (artificial intelligence). Instead, I found myself nearly hanging out of the train window trying to get good phone reception as I spoke at the forum.
In order to compensate for the poor connection, I felt obliged to say something colourful and interesting, and thus put forward the view that the best comparison for understanding how humanity can use AI is the TV programme 'One Man and his Dog'.
One Man and his Dog
One Man and his Dog was a very popular, though quirky, BBC programme based on sheepdog trials across Great Britain and Ireland, which at its peak in the 1980s drew some 8 million viewers (a version still runs on BBC Alba). In very simple terms, it is a sheepdog trial: farmers herding sheep with the help of their sheepdogs, or in technical terms, humans performing a complex task, under pressure, with the aid of a trained, intelligent non-human.
While the comparison of AI with 'One Man and his Dog' was initially speculative, the more I think about it, the more apt I consider it as a framework for understanding how humans should use AI. I have not herded sheep, but I imagine it can be as difficult as sorting data, or more so, since unlike data, sheep have minds of their own. The combination of (wo)man and dog as a very productive team illustrates how the best uses of AI are beginning to emerge: doctors, soldiers, and scientists deploying AI to second-guess and bolster their own decision-making.
In addition, like AI, dogs can be trained to attack and defend. But while dogs make valuable companions, I struggle to see how AI/robots can fulfil this function. There is a persuasive argument for how this could happen in the book The LoveMakers, and in the behaviour of many people who find the metaverse an appealing place to 'live' (I am worried by the appearance of the LOVOT family robot in Japan and by the growing use of the AI relationship app Replika).
While dogs can sense our emotions and perhaps intuit what we are thinking, the increasingly alarming aspect of artificial intelligence is that it can determine what we are thinking. A recent edition of the journal Nature described how AI can be used to analyse human brain activity and translate it accurately into words and images.
If the analogy of sheepdogs and AI is less eccentric than readers might have initially thought, it does, I hope, highlight the need for society to have frameworks and rules of thumb to parse the use and impact of AI.
Economically, AI is already leading to a repricing of the role of people, like software engineers, whom it can replace, but also to a reappraisal of the training and role of those who can use it to be more productive. I suspect that such is the rate of deployment of AI that, in time, many of the products and solutions it creates will quickly become commoditised.
Regulate AI
The practical aspects of this phenomenon are gathering speed – in the last week alone, KPMG has announced a partnership with Microsoft to drive the use of AI in its businesses, and the role of Palantir on the side of the Ukrainian military is becoming clearer. Also, actors and screenwriters in Hollywood are striking over the prospect that some of their work could be replaced by AI.
In addition, China has announced measures to control the use of generative AI by requiring firms producing these tools to be licensed by the government when the tools are targeted at 'the general public'.
This move is consistent with my broad thesis that the US, EU and China will increasingly tackle new trends (notably technologies) in very different ways. America has the AI stock bubble (see Nvidia), the EU has its recent AI Act and now China is controlling the production points of generative AI.
In the US, regulators are beginning to catch up with international counterparts. The Federal Trade Commission is investigating whether OpenAI's ChatGPT produces false information. More broadly, the OECD has warned of the negative effects of AI on labour markets.
My view is that if they want to see how humans and robots should work together, One Man and his Dog is a good place to start.
Australians' AI Adoption Hesitation: Concerns Over Impact And Regulation, Says KPMG
A recent study conducted by KPMG presents a somewhat surprising revelation – a significant portion of Australians are exhibiting resistance to adopting artificial intelligence (AI) technology.
The study found that approximately 60% of Australians expressed concerns about AI's long-term societal impact. Additionally, a third of the population believes that current regulation does not suffice.
The Deep-Rooted Concerns
Australia's resistance stands out in the era of technological advancement, where the whole world is racing towards AI integration.
This sentiment echoes an underlying fear of AI technologies getting ahead of legislation, potentially leading to a 'Wild West' situation with AI at the helm.
Many believe it highlights a crucial gap between technological progress and its societal reception — a gap that needs urgent attention.
This KPMG study's findings indicate a significant shift in Australians' perceptions. It reveals a deep-rooted concern for the larger, possibly unforeseen, consequences of the unchecked proliferation of AI technologies.
This trepidation isn't confined to just a small section of society; it envelops a staggering 71% of the population. However, the worry doesn't stop at the long-term implications of AI.
An additional third of Australians expressed apprehension about AI technology. They underscored their belief that the existing regulatory framework is insufficient to monitor and control AI applications effectively.
A Call For Responsible AI Integration
According to industry experts, the study's revelation emphasizes the urgent need for constructive dialogue among policy-makers, technology leaders, and the public.
This dialogue should focus on ensuring comprehensive regulations, ethical considerations, and effective controls over AI use. On the other hand, it is important to remember that AI adoption and integration is not a zero-sum game.
While it is crucial to mitigate the potential risks and address public concerns, it is equally important to harness the benefits of AI.
AI has the potential to improve many sectors, including healthcare, education, and transport, to name just a few. However, to realize this potential, Australians need to feel confident about the technology's responsible use.
This confidence will only come when citizens are assured that robust, thoughtful regulation is in place to protect their interests.
To uphold competitiveness, Australia needs to find a balanced approach to AI adoption.
Experts urge the Australian government, industry leaders, and regulatory bodies to work collaboratively to address these concerns. They also highlight the need for drafting and enforcing clear and comprehensive regulations.
In addition, public engagement initiatives should be encouraged to demystify AI and its implications for everyday life. In the race for technological advancement, the goal should not revolve around AI adoption alone.
With proactive measures and public engagement, this resistance can be transformed into acceptance and confidence, propelling the country towards a balanced, productive future with AI.
