
Fact Check: Image Of Senator Rand Paul In A Red Bathrobe Is AI-generated

An image created using artificial intelligence showing U.S. Senator Rand Paul wearing a red bathrobe and sitting barefoot on a flight of stairs has been taken seriously online following changes to dress code enforcement on the Senate floor.

"Rand Paul showed up to work at the Capitol today barefoot and wearing a bathrobe after the Senate changed the dress code to accommodate John Fetterman," reads a Facebook post from Sept. 23, 2023, sharing the fake image.

CBS News reported on Wednesday that the Senate passed a resolution requiring business attire on the Senate floor just days after Chuck Schumer had told the Sergeant at Arms to stop enforcing the unofficial dress code and allow informal dress.

Jeff Charles shared the AI-generated image in a post on messaging platform X, formerly known as Twitter, on Sept. 22. The account's bio states, "BREAKING = Satire."

Charles said in an email to Reuters that the image was satirical and created using AI.

Siwei Lyu, Professor of Computer Science and Engineering at the University at Buffalo, said in an email that artifacts in the image indicate it is "likely to be created with AI models."

Uli Gaulke, a filmmaker and Professor for Film and Generative AI at the University of Hong Kong and Hong Kong Baptist University, also said in an email that the image was AI-generated. "Mostly AI is wrong with details like fingers," Gaulke said, highlighting irregularities in the fingers, background and skin.

Paul's office did not immediately respond to a Reuters request for comment.

Verdict: Synthetic media. The image of Sen. Rand Paul wearing a red bathrobe is AI-generated.

This article was produced by the Reuters Fact Check team. Read more about our fact-checking work.



Can AI Code? In Baby Steps Only

Tiernan Ray

The first thrilling days after OpenAI released ChatGPT to the public last winter brought evidence of the program's ability to generate computer code, a revelation to developers. At the outset, ChatGPT seemed so good at code that suddenly even people with little coding knowledge could use it to generate powerful software, powerful enough to be used as malware threatening computer networks.

Many months of experience, and formal research into the matter, have revealed that ChatGPT and other such generative AI cannot really develop programs, per se. The best they can do is offer baby steps, mostly for simple coding problems, which may or may not be helpful to human coders.

Also: How to use ChatGPT to write code

"What generative has opened everyone's eyes to is the fact that I can almost have a partner when I'm doing a task that essentially gives me suggestions that move me past creative roadblocks," said Naveen Rao, co-founder and CEO of AI startup MosaicML, which was acquired in August by Databricks. 

At the same time, said Rao, the level of assistance for coding is low. 

"They give you a scaffolding, some things that are repeatable, but they don't give you anything particularly good," he said. "If I say, go solve this really hard problem, then they can't do that, right? They don't even write particularly good code; it's like someone who's been doing it for a year or two, kind of, level code."

Indeed, some studies have found that large language models such as GPT-4 fall well below human coders in their overall level of code quality.

A recent study by Sayed Erfan Arefin and colleagues at Texas Tech University tested GPT-4 and its predecessor, GPT-3.5, on example coding problems from the online platform LeetCode -- the kinds of problems asked of job applicants at Google and other tech giants.

The programs were assessed based on two core challenges, "organizing data for efficient access (using appropriate data structures)" and "creating workflows to process data (using effective algorithms)." They were also evaluated on what's called "string manipulation," which intersects with both of the other two. 
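For a sense of what those first two categories mean in practice, here is a minimal Python sketch of a classic LeetCode-style exercise (two-sum, chosen as an illustration rather than taken from the study), where picking the right data structure, a hash map, turns a quadratic scan into a single pass:

    def two_sum(nums, target):
        """Return indices of the two numbers in nums that add up to target."""
        seen = {}  # value -> index; the "appropriate data structure" here
        for i, x in enumerate(nums):
            if target - x in seen:
                return [seen[target - x], i]
            seen[x] = i
        return []

    # Example: two_sum([2, 7, 11, 15], 9) returns [0, 1].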

Also: How to use ChatGPT to make charts and tables

When the language models were given what the authors called complete questions, where the programs were supplied with examples of solutions to the questions, GPT-4 answered only 26% of the questions correctly, versus 45% for human respondents. When some information was taken away, GPT-4's ability plummeted to 19% of questions answered correctly. GPT-3.5 was down at around 12% and 10%, respectively. 

The authors also examined the quality of the GPT code, both for success and failure. In either case, they found a consistent problem: GPT often struggled with a basic practice of coding, "defining variables in a consistent manner."
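The study does not reproduce a snippet at this point, but the kind of slip meant is roughly the following invented Python illustration, where an accumulator is introduced under one name and then updated under another:

    def count_vowels(text):
        total = 0
        for ch in text:
            if ch.lower() in "aeiou":
                # The kind of inconsistency meant: updating a name that was
                # never defined ("count += 1") while the accumulator is "total".
                total += 1
        return total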

Correctness of GPT-3.5, GPT-4, and human respondents on the train and test sets, when given either full problem information with example solutions or incomplete information.

Texas Tech University

Scale is also an issue for AI code generation. The most encouraging results so far in studies of GPT-4 are mostly on baby problems. 

One study, by David Noever of cyber-security firm PeopleTec, tested how well GPT-4 could find faulty code in samples, much as existing vulnerability-testing programs on the market, such as Snyk, do -- a practice known as "Static Application Security Testing," or SAST.

In some cases, GPT-4 found more errors than Snyk, Noever reported. But it also missed numerous errors. And it was tested on a grand total of just over 2,000 lines of code. That is minuscule compared to full production applications, which can contain hundreds of thousands to millions of lines of code across numerous linked files. It's not clear that successes on the toy problems will scale to such complexity.
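To give a flavor of what such a scan targets, here is a small invented Python example, not one drawn from Noever's test set, of the kind of flaw both a SAST tool and a suitably prompted language model might flag:

    import sqlite3

    def get_user(db_path, username):
        conn = sqlite3.connect(db_path)
        # Flaw a static analyzer should flag: untrusted input concatenated
        # directly into SQL, opening the door to injection.
        query = "SELECT * FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()
        # Safer form: a parameterized query.
        # return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()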

Also: How ChatGPT can rewrite and improve your existing code

A study last month by Zhijie Liu and colleagues at ShanghaiTech University examined the quality of code based on correctness, understandability, and security. The examination challenged ChatGPT on LeetCode tasks, like Arefin and team at Texas Tech, and also tested its code generation against the Common Weakness Enumeration, a catalog of software weaknesses maintained by the research organization MITRE.

Liu and team tested ChatGPT on tasks formulated either before or after 2021. Because ChatGPT was trained only on material from before 2021, they wanted to see how the program did on both established and newer challenges.

The results are striking. For the newer problems, called "Aft.," for "after" 2021, Liu and team found very low rates of correctness in ChatGPT's code. "ChatGPT's ability to functionally correct code generation decreases significantly as the difficulty of the problem increases," they write. Only 15.4% of C-language program code was acceptable, and none of it was acceptable for the hardest problems. And, "the code generated by ChatGPT for hard and medium problems is more likely to contain both compile and runtime errors." Human coders taking the test, on average, got 66% right.

Also: How to use ChatGPT to create an app

For older problems, labeled "Bef.," the percent rises to 31% correct, which is still low. 

The team went through numerous examples and characterized the kinds of wrong answers ChatGPT gave in its lines of code. For example, while an overall program design might be in the right direction, a given line of code would show a fundamentally wrong use of something as simple as evaluating a variable, an error it's hard to imagine a beginner programmer making.

Example of wrong code generated by ChatGPT. The program is supposed to sort boxes into categories by description. In line 12, the code decides that if a box is neither "bulky" nor "heavy," it should be sorted into the category "both," exactly the opposite of the correct category, "neither."

ShanghaiTech University
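To make the caption concrete, here is a minimal Python sketch of that kind of box-categorization task (the function and category names are illustrative, not copied from the paper), with the correct final branch and a comment noting the inversion the caption describes:

    def categorize_box(is_bulky, is_heavy):
        """Return the category for a box given two boolean criteria."""
        if is_bulky and is_heavy:
            return "both"
        if is_bulky:
            return "bulky"
        if is_heavy:
            return "heavy"
        # Correct: a box that is neither bulky nor heavy is "neither".
        # The flawed code in the figure returned "both" on this branch,
        # the exact opposite of the intended category.
        return "neither"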

Liu and team arrive at a series of fascinating general conclusions and also mitigating factors. For one, they find that ChatGPT struggles with novel problems: "ChatGPT may have limitations when generating code for unfamiliar or unseen problems in the training dataset, even if the problems are easy with logic from human perspective."

But which programming language is used matters: the technology does better with certain programming languages that are "strongly typed" or more "expressive."

Also: How does ChatGPT actually work?

"In general, the probability of ChatGPT generating functionally correct code is higher when using languages with more strongly expressive power (e.G., Python3)," they write. 

Another shortcoming is that ChatGPT's code can be convoluted, making its errors harder to fix. "The code generation process of ChatGPT may be careless," they write, "and the generated code may fail to meet some of the detailed conditions described, resulting in it being difficult to successfully generate or fix (to functional correct)."

And on the Common Weakness Enumeration test by MITRE, "the code generated by ChatGPT often exhibits relevant vulnerabilities, which is a severe issue," they write. Fortunately, they note, ChatGPT is able to correct many of those vulnerabilities in subsequent prompts when supplied with more detailed information from the MITRE data set.

All three studies suggest it is very early in the use of generative AI for programming. It is, as Rao said, helpful in simple assistant tasks, where the programmer is in charge. 

Also: The 10 best ChatGPT plugins (and how to make the most of them)

It's possible that progress will come from new approaches that break programming paradigms. For example, recent Google work trains language models to reach out to the internet for tools to solve tasks. And work by Google's DeepMind unit trains language models to go more deeply into engineering its own prompts to improve performance -- a kind of self-reflexive programming that seems promising. 

Something deeper may ultimately be required, says Rao.

"I don't think it can be solved with prompts," says Rao. "I think there's actually some fundamental problems we still have to solve -- there's still something fundamentally missing." 

Added Rao, "We can basically throw so much data at a large neural network that it's a hundred lifetimes or more of human experience, and yet, a human with much less experience can solve novel problems better, and not make certain kinds of basic errors."


Cost And Complexity Shouldn't Hold Back AI Adoption

Craig Walker is the Founder & CEO of Dialpad. He has more than 20 years of experience as an entrepreneur, venture capitalist, tech executive and attorney.


The adoption of artificial intelligence is running into barriers within businesses—but not for the reasons you might think. Despite all the headlines about AI-driven job losses and disruption, professionals in customer service and sales are mostly optimistic about its impact. A bigger obstacle appears to be perceptions of AI's high cost of implementation and a lack of clear, company-wide policies governing AI's use.

These are among the findings from my company's annual State of AI at Work Report, which surveyed over 1,000 sales and customer service professionals at various seniority levels across a wide range of industries.

• Some 70% of respondents said they aren't scared of AI stealing their job, and 79% of those who are using AI said it had a positive impact on their performance.

• Even so, 84% of respondents—including over half of those whose company is already using the technology—said their employer lacks company-wide AI policies.

• Over a third (37%) of respondents said a lack of funds was preventing more AI adoption.

To me, the survey findings underline both how far and how fast we've come on the AI journey—and how far we still have to go. They suggest many businesses are feeling stuck over how to invest in generative AI capabilities, daunted by overly pessimistic views on its complexity and expense.

Yes, AI solutions can be costly and complex, making them an intimidating prospect, especially for smaller firms. (The survey found that 52% of companies with under $100 million in revenues see AI as inaccessible.) But the truth is that most companies don't need to reinvent the wheel to reap significant benefits from AI.

To start, they should be thinking about where they can get the greatest impact by applying AI tools to enhance specific areas of their business. Then, they should look for out-of-the-box products that come with the costs and complexity baked in, and that can just plug into their existing systems. Not only is this more cost-effective, but it can also make life easier for internal IT teams and encourage higher adoption from the employees and end users—which in turn helps them maximize the benefits of using these AI tools.

Customer service, sales operations, recruitment and internal collaboration are all examples of functions where generative AI can have an outsized impact at a relatively low cost. We've seen countless customers deploy AI in these areas with intuitive, easy-to-use tools that employees can adopt quickly, no matter the size of the business.

For example, AI sentiment analysis now makes it possible to obtain customer satisfaction (CSAT) scores from every call, which provides a much more holistic understanding of customer satisfaction compared to the traditional survey method.
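As a rough illustration only, and not a description of any vendor's product, per-call CSAT-style scoring can be sketched in a few lines of Python using an off-the-shelf sentiment model (here the Hugging Face transformers pipeline, an assumed tool rather than anything named in the article):

    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")  # downloads a default English sentiment model

    def csat_from_transcript(utterances):
        """Map the average sentiment of call utterances onto a rough 1-5 CSAT-style score."""
        if not utterances:
            return None
        signed = []
        for text in utterances:
            result = sentiment(text)[0]
            score = result["score"] if result["label"] == "POSITIVE" else -result["score"]
            signed.append(score)
        avg = sum(signed) / len(signed)   # in [-1, 1]
        return round(1 + (avg + 1) * 2)   # mapped to roughly [1, 5]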

AI also empowers sales and customer service employees to manage their own development—another important, but overlooked, way in which AI can even the playing field for employees. AI-powered feedback during and after calls helps agents see exactly how to improve their performance for the next interaction before their manager has to step in to tell them. It can also sharply reduce the amount of "busy work" that distracts agents from more valuable tasks—a benefit of AI that was cited by 47% of survey respondents.

One reason AI implementation can be so expensive is that many providers bolt on parts of their offering, such as server capacity and transcription, from third-party vendors and then pass on those costs to clients. That's why it's usually better to opt for providers with full-stack integrated solutions that better enable them to control costs.

Companies shouldn't have to make exorbitant investments in technical expertise or training to benefit from AI. The key is to choose the AI tools you use wisely, taking into account factors like ease of use, the trajectory and stability of those providers and whether they are built in-house. As more businesses understand this and experience the productivity gains from AI, I believe the barriers currently holding back wider adoption will fade away.






