Artificial Intelligence
Artificial intelligence (AI), sometimes known as machine intelligence, refers to the ability of computers to perform human-like feats of cognition including learning, problem-solving, perception, decision-making, and speech and language.
Early AI systems had the ability to defeat a world chess champion, map streets, and compose music. Thanks to more advanced algorithms, growing data volumes, and greater computing power and storage, AI evolved and expanded to include more sophisticated applications, such as self-driving cars, improved fraud detection, and "personal assistants" like Siri and Alexa.
Today, researchers are using AI to improve predictions, diagnoses, and treatments for mental illnesses. The intersection of machine learning and computational psychiatry is rapidly creating more precise, personalized mental health care.
You Have No Idea What Artificial Intelligence Really Does
WHEN SOPHIA THE ROBOT first switched on, the world couldn't get enough. It had a cheery personality, it joked with late-night hosts, it had facial expressions that echoed our own. Here it was, finally — a robot plucked straight out of science fiction, the closest thing to true artificial intelligence that we had ever seen.
There's no doubt that Sophia is an impressive piece of engineering. Parents-slash-collaborating-tech-companies Hanson Robotics and SingularityNET equipped Sophia with sophisticated neural networks that give Sophia the ability to learn from people and to detect and mirror emotional responses, which makes it seem like the robot has a personality. It didn't take much to convince people of Sophia's apparent humanity — many of Futurism's own articles refer to the robot as "her." Piers Morgan even decided to try his luck for a date and/or sexually harass the robot, depending on how you want to look at it.
"Oh yeah, she is basically alive," Hanson Robotics CEO David Hanson said of Sophia during a 2017 appearance on Jimmy Fallon's Tonight Show. And while Hanson Robotics never officially claimed that Sophia contained artificial general intelligence — the comprehensive, life-like AI that we see in science fiction — the adoring and uncritical press that followed all those public appearances only helped the company grow.
But as Sophia became more popular and people took a closer look, cracks emerged. It became harder to believe that Sophia was the all-encompassing artificial intelligence that we all wanted it to be. Over time, articles that might have once oohed and ahhed about Sophia's conversational skills became more focused on the fact that they were partially scripted in advance.
Ben Goertzel, CEO of SingularityNET and Chief Scientist of Hanson Robotics, isn't under any illusions about what Sophia is capable of. "Sophia and the other Hanson robots are not really 'pure' as computer science research systems, because they combine so many different pieces and aspects in complex ways. They are not pure learning systems, but they do involve learning on various levels (learning in their neural net visual systems, learning in their OpenCog dialogue systems, etc.)," he told Futurism.
But he's interested to find that Sophia inspires a lot of different reactions from the public. "Public perception of Sophia in her various aspects — her intelligence, her appearance, her lovability — seems to be all over the map, and I find this quite fascinating," Goertzel said.
Hanson said it's unfortunate when people think Sophia is capable of more or less than she really is, but also that he doesn't mind the benefits of the added hype, which, again, has been bolstered by the two companies' repeated publicity stunts.
"Sophia and the other Hanson robots are not really 'pure' as computer science research systems..."
Highly-publicized projects like Sophia convince us that true AI — human-like and perhaps even conscious — is right around the corner. But in reality, we're not even close.
The true state of AI research has fallen far behind the technological fairy tales we've been led to believe. And if we don't treat AI with a healthier dose of realism and skepticism, the field may be stuck in this rut forever.
NAILING DOWN A TRUE definition of artificial intelligence is tricky. The field of AI, constantly reshaped by new developments and changing goalposts, is sometimes best described by explaining what it is not.
"People think AI is a smart robot that can do things a very smart person would — a robot that knows everything and can answer any question," Emad Mousavi, a data scientist who founded a platform called QuiGig that connects freelancers, told Futurism. But this is not what experts really mean when they talk about AI. "In general, AI refers to computer programs that can complete various analyses and use some predefined criteria to make decisions."
Among the ever-distant goalposts for human-level artificial intelligence (HLAI) are the ability to communicate effectively — chatbots and machine learning-based language processors struggle to infer meaning or to understand nuance — and the ability to continue learning over time. Currently, the AI systems with which we interact, including those being developed for self-driving cars, do all their learning before they are deployed and then stop forever.
"They are problems that are easy to describe but are unsolvable for the current state of machine learning techniques," Tomas Mikolov, a research scientist at Facebook AI, told Futurism.
Right now, AI doesn't have free will and certainly isn't conscious — two assumptions people tend to make when faced with advanced or over-hyped technologies, Mousavi said. The most advanced AI systems out there are merely products that follow processes defined by smart people. They can't make decisions on their own.
In machine learning, which includes deep learning and neural networks, an algorithm is presented with boatloads of training data — examples of whatever it is that the algorithm is learning to do, labeled by people — until it can complete the task on its own. For facial recognition software, this means feeding thousands of photos or videos of faces into the system until it can reliably detect a face from an unlabeled sample.
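To make that loop concrete, here is a minimal sketch of supervised learning in Python. Everything in it is invented for illustration: the three-number "feature vectors" stand in for whatever a real system would extract from thousands of annotated photos, and the classifier choice is arbitrary.

```python
# A minimal, illustrative sketch of supervised learning: fit a model on
# human-labeled examples, then ask it to judge a new, unlabeled one.
# The data and the choice of classifier are made up for this example.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a feature vector extracted from
# an image; each label records whether a human annotator saw a face.
X_train = [
    [0.9, 0.8, 0.7],  # face
    [0.8, 0.9, 0.6],  # face
    [0.1, 0.2, 0.1],  # not a face
    [0.2, 0.1, 0.3],  # not a face
]
y_train = [1, 1, 0, 0]  # labels supplied by people

model = LogisticRegression()
model.fit(X_train, y_train)  # "learning" here means fitting a statistical model

# Once deployed, the model only applies the pattern it fit at training time.
print(model.predict([[0.85, 0.75, 0.65]]))  # -> [1], i.e. "face"
```

Everything the model will ever "know" is baked in at the `fit` step; at prediction time it simply applies a frozen statistical pattern.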
Our best machine learning algorithms are generally just memorizing and running statistical models. To call it "learning" is to anthropomorphize machines that operate on a very different wavelength from our brains. Artificial intelligence is now such a big catch-all term that practically any computer program that automatically does something is referred to as AI.
If you train an algorithm to add two numbers, it will just look up or copy the correct answer from a table, Mikolov, the Facebook AI scientist, explained. But it can't generalize a better understanding of mathematical operations from its training. After learning that five plus two equals seven, you as a person might be able to figure out that seven minus two equals five. But if you ask your algorithm to subtract two numbers after teaching it to add, it won't be able to. The artificial intelligence, as it were, was trained to add, not to understand what it means to add. If you want it to subtract, you'll need to train it all over again — a process that notoriously wipes out whatever the AI system had previously learned.
"It's actually often the case that it's easier to start learning from scratch than trying to retrain the previous model," Mikolov said.
These flaws are no secret to members of the AI community. Yet, all the same, these machine learning systems are often touted as the cutting edge of artificial intelligence. In truth, they're actually quite dumb.
Take, for example, an image captioning algorithm. A few years back, one of these got some wide-eyed coverage because of the sophisticated language it seemed to generate.
"Everyone was very impressed by the ability of the system, and soon it was found that 90 percent of these captions were actually found in the training data," Mikolov told Futurism. "So they were not actually produced by the machine; the machine just copied what it did see that the human annotators provided for a similar image so it seemed to have a lot of interesting complexity." What people mistook for a robotic sense of humor, Mikolov added, was just a dumb computer hitting copy and paste.
"It's not some machine intelligence that you're communicating with. It can be a useful system on its own, but it's not AI," said Mikolov. He said that it took a while for people to realize the problems with the algorithm. At first, they were nothing but impressed.
WHERE DID WE GO so off course? The problem is that our present-day systems, which are so limited, are marketed and hyped up to the point that the public believes we have technology that we have no goddamn clue how to build.
"I am frequently entertained to see the way my research takes on exaggerated proportions as it progresses through the media," Nancy Fulda, a computer scientist working on broader AI systems at Brigham Young University, told Futurism. The reporters who interview her are usually pretty knowledgeable, she said. "But there are also websites that pick up those primary stories and report on the technology without a solid understanding of how it works. The whole thing is a bit like a game of 'telephone' — the technical details of the project get lost and the system begins to seem self-willed and almost magical. At some point, I almost don't recognize my own research anymore."
"At some point, I almost don't recognize my own research anymore."
Some researchers themselves are guilty of fanning this flame. Reporters who lack technical expertise and don't look behind the curtain are complicit. Worse still, some journalists are happy to play along and add hype to their coverage.
Other problem actors: people who build an AI algorithm and then present their own back-end work as that algorithm's creative output. Mikolov calls this a dishonest practice akin to sleight of hand. "I think it's quite misleading that some researchers who are very well aware of these limitations are trying to convince the public that their work is AI," Mikolov said.
That's important because where people think AI research is headed determines whether they want money allocated to it, and this unwarranted hype could be preventing the field from making real, useful progress. Financial investments in artificial intelligence are inexorably linked to the level of interest (read: hype) in the field. That interest level, and the corresponding investments, fluctuate wildly whenever Sophia has a stilted conversation or some new machine learning algorithm accomplishes something mildly interesting. That makes it hard to establish a steady, baseline flow of capital that researchers can depend on, Mikolov suggested.
Mikolov hopes to one day create a genuinely intelligent AI assistant, a goal that he told Futurism is still a distant pipe dream. A few years ago, Mikolov and his colleagues at Facebook AI published a paper outlining how this might be possible and the steps it might take to get there. But when we spoke at the Joint Multi-Conference on Human-Level Artificial Intelligence held in August by Prague-based AI startup GoodAI, Mikolov mentioned that many of the avenues people are exploring to create something like this are likely dead ends.
One of these likely dead ends, unfortunately, is reinforcement learning. Reinforcement learning systems, which teach themselves to complete a task through trial and error-based experimentation instead of using training data (think of a dog fetching a stick for treats), are often oversold, according to John Langford, Principal Researcher for Microsoft AI. Almost anytime someone brags about a reinforcement-learning AI system, Langford said, they actually gave the algorithm some shortcuts or limited the scope of the problem it was supposed to solve in the first place.
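For readers unfamiliar with the term, the trial-and-error loop at the heart of reinforcement learning fits in a few lines. In this toy sketch (the "levers" and their payout probabilities are invented), an epsilon-greedy agent learns which of three options pays best purely by trying them, with no labeled training data at all.

```python
# A toy reinforcement-learning loop: an epsilon-greedy agent discovers,
# by trial and error, which of three levers pays off most often.
import random

REWARD_PROBS = [0.2, 0.5, 0.8]  # hidden payout rates, unknown to the agent
values = [0.0, 0.0, 0.0]        # the agent's running reward estimates
counts = [0, 0, 0]
EPSILON = 0.1                   # fraction of the time spent exploring

for step in range(5000):
    if random.random() < EPSILON:
        arm = random.randrange(3)                     # explore at random
    else:
        arm = max(range(3), key=lambda a: values[a])  # exploit the best guess
    reward = 1.0 if random.random() < REWARD_PROBS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental average

print(values)  # estimates drift toward [0.2, 0.5, 0.8]
```

Even here, Langford's caveat applies: the problem has been shrunk to three levers and one number of feedback, exactly the kind of scope-limiting that makes many published results look more impressive than they are.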
The hype that comes from these sorts of algorithms helps researchers sell their work and secure grants, and press officers and journalists use it to draw audiences to their platforms. But the public suffers: this vicious cycle leaves everyone else unaware of what AI can really do.
There are telltale signs, Mikolov says, that can help you see through the misdirection. The biggest red flag is whether or not you as a layperson (and potential customer) are allowed to demo the technology for yourself.
"A magician will ask someone from the public to test that the setup is correct, but the person specifically selected by the magician is working with him. So if somebody shows you the system, then there's a good likelihood you are just being fooled," Mikolov said. "If you are knowledgeable about the usual tricks, it's easy to break all these so-called intelligent systems. If you are at least a little bit critical, you will see that what [supposedly AI-driven chatbots] are saying is very easy to distinguish from humans."
Mikolov suggests that you should question the intelligence of anyone trying to sell you the idea that they've beaten the Turing Test and created a chatbot that can hold a real conversation. Again, think of Sophia's prepared dialogue for a given event.
"Maybe I should not be so critical here, but I just can't help myself when you have these things like the Sophia thing and so on, where they're trying to make impressions that they are communicating with the robot at so on," Mikolov told Futurism."Unfortunately, it's quite easy for people to fall for these magician tricks and fall for the illusion, unless you're a machine learning researcher who knows these tricks and knows what's behind them."
Unfortunately, so much attention to these misleading projects can stand in the way of progress by people with truly original, revolutionary ideas. It's hard to get funding to build something brand new, something that might lead to AI that can do what people already expect it to be able to do, when venture capitalists just want to fund the next machine learning solution.
If we want those projects to flourish, if we ever want to take tangible steps towards artificial general intelligence, the field will need to be a lot more transparent about what it does and how much it matters.
"I am hopeful that there will be some super smart people who come with some new ideas and will not just copy what is being done," said Mikolov. "Nowadays it's some small, incremental improvement. But there will be smart people coming with new ideas that will bring the field forward."
So-called Artificial Intelligence Is Scary, But That Doesn't Make It Bad
If you sit down to watch Netflix, most of the shows and movies that appear on your home screen are chosen by machine. Even the images used to advertise shows can change from person to person, based on what the service knows about you. And when you start watching a show, more often than not your TV will use techniques developed through machine learning to add extra detail for a sharper image, interpolate new frames to make motion smoother, or separate dialogue from the rest of the audio track to make it easier to hear.
If you're using a smartphone you're interacting with some form of AI almost constantly, from the obvious (recorder apps that transcribe your voice into text) to the invisible (algorithms that learn your routine to optimise your battery).
Photos taken with smartphones are increasingly AI generated. That is to say, the raw data captured by the camera is interpreted by a range of machine learning processes. This is no surprise to people who have been watching that industry – lenses and sensors are not getting that much bigger, yet images are greatly improving – but the change has happened so fast you may not have noticed. Colours and details can be wholly invented by the phone when shooting in low light or at long zooms, and people's faces can be reconstructed to an estimation of what they probably look like if the shot was blurry. Some users of Samsung phones were recently surprised to find their zoomed-in images of the moon were fed through a specifically designed algorithm that adds texture and details of the moon, which are of course very predictable, to even the most over-exposed or featureless of shots.
You might even ask a smart speaker to dim your lights, hear it respond politely and feel briefly like a Star Trek character, even though you know the thing is hardly more "intelligent" than an analogue dial.
The point is that machine learning and AI have been developing for a long time, and have resulted not only in new products and technologies, but in huge efficiency gains that make our lives nicer in ways we wouldn't have been able to predict 15 years ago. Their ongoing development could keep doing that.
RMIT's Professor Matthew Warren said the current negative dialogue risked drowning out discussions of the opportunities advancing AI research could bring. Beyond consumer products, AI could revolutionise transport, logistics, sustainability efforts and more.
"We can't get cybersecurity professionals. But the application of AI in security operation centres, in terms of looking at all those security logs and predicting when attacks are occurring, it's actually going to solve a lot of the skill problems that industry is talking about," he said.
"In the medical environment, you talk about a lack of specialised services, you're going to see AI rolled out that can assist doctors in terms of identifying skin cancers of example. Processes at the moment where there's an element of human error."
While he did acknowledge the string of high-profile experts and groups who have spoken up to say they believe AI is a path to ruin, he said in many cases it was over-reported or encouraged by the companies that make the products themselves.
"A lot of it is hype, around trying to raise the profile of products or services," he said. "Global warming is our No.1 extinction factor, not AI and androids taking over the earth, which is what some people have jumped to."
To be clear, not every AI is innocuous, and all indications are that governments, including our own, are trying to get out ahead of it to avoid arriving too late, as they did at the dawn of social media.
Chatbots and generative AI images in particular promise to boost efficiency across a range of jobs and industries, but they pose regulatory questions. Do we need to label AI-generated content so people can consider the source properly? If the models are fed on a diet of original creative works, shouldn't the humans responsible for that work be compensated when the models are put to commercial use? Do the bots reinforce unfair biases present in the training data?
None of those challenges involve evil robots, yet their ongoing relevance does seem to carry a glimmer of those fears, whether that's the reports of creepy, sinister things said by an early version of Microsoft's Bing chatbot, or a viral story about a military drone that decided it would be more effective without its human handlers and killed them. The first example is a lot less concerning in the context of a brand-new system, designed for creative writing, being stress tested. And the second didn't happen at all; a US Air Force colonel was misunderstood when talking about simulations, and the story got out of control.
Even stories about more grounded AI issues are clearly preceded and engendered by our expectations of AI disaster. Take the faked photo of a bomb at the Pentagon, which was billed as an attack using AI. Or the recent decision of the Bild newspaper to shed 200 staff in a reorganisation, which was widely headlined to imply a major news organisation was replacing journalists with AI. In fact, the company is keeping all its journalists and writers; the AI connection was that the chief executive had cited the technology as a competitive pressure, which helped inform the company's decision to refocus on quality writing and investigative journalism.
It's no wonder people are scared. But if we're going to take advantage of the best of AI while avoiding the worst, it's initiative and healthy scepticism we need rather than fear. And maybe the tiniest little bit of optimism.