



'An Open Community': Berkeley Hackathons Foster Innovation, Future Of Artificial Intelligence

Amid tables strewn with empty Red Bull cans and the rhythmic tapping of computer keyboards, over 1,200 of the brightest collegiate minds were hard at work during a 36-hour artificial intelligence, or AI, hackathon held at UC Berkeley's Martin Luther King Jr. Student Union building from June 17 to 18.

The event was hosted by Cal Hacks, a non-profit organization aiming to foster hacking and entrepreneurship culture on campus, and Berkeley SkyDeck, Berkeley's business startup accelerator.

"There's a common misconception that hackathons are about hacking in the sense that you're breaking into the Department of Defense and stealing nuclear codes," said Alex Goldberg, executive director of Cal Hacks. "It's more about building projects together."

Participants from all over the world gained access to OpenAI API and had the opportunity to build AI-driven projects on top of large language models, or LLMs, like ChatGPT — tools that the majority of aspiring programmers have not had the chance to experiment with, according to Chon Tang, a founding partner of the Berkeley SkyDeck Fund.
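To make concrete what "building on top of an LLM" looked like for these teams, here is a minimal, hypothetical sketch of a chat request to the OpenAI REST API using only the Python standard library. The system prompt and helper names are invented for illustration, and running `ask` requires a real `OPENAI_API_KEY`.

```python
# Minimal sketch of an LLM-backed hackathon project's core call:
# POST a chat-completion request to the OpenAI REST API (stdlib only).
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(question: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body for a chat-completion request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
    }

def ask(question: str) -> str:
    """Send the question to the API and return the model's reply text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(request) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

A project would then layer its own logic, such as prompt templates or retrieved user data, around calls like `ask("Summarize my spending this month")`.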

After 36 caffeine-fueled hours and presentations to judges from notable funds like Sequoia, Lightspeed, New Enterprise Associates and Microsoft, the top four teams were each selected to receive a $50,000 investment from the Berkeley SkyDeck Fund to pursue their projects full-time for the rest of the summer, Tang said.

Winning projects ranged from AI-driven financial advisors to AI-generated non-playable characters to AI-generated diet and workout plans based on users' biomarker data, according to Tang.

"That was the part of this that was so fun," Tang said. "It was seeing the creativity of so many people that built very different projects."

The event was also the first AI-focused, and the largest, in-person hackathon in Cal Hacks' nearly ten years of hosting collegiate hackathons; Cal Hacks hosts annual hackathons in the fall, but those events are not focused on any specific topic, Goldberg noted.

According to Richard Lyons, UC Berkeley associate vice chancellor for innovation and entrepreneurship, the recent growth in popularity of hackathons can be largely attributed to the appreciation of community and human connection that these events provide, particularly following the COVID-19 pandemic.

He noted that there is an appeal to the "organic process of self-organizing teams," set in a 24-hour-plus frame that makes it easier for people to connect with strangers. He added that Berkeley, as a public university, is a perfect environment for cultivating an "extra level of community spiritedness" that is central to these events.

Tang shared similar thoughts, adding that Berkeley, geographically, is at the "heart" of AI and LLM research and development, allowing the hackathon to thrive even during the summer as participants already find themselves in the Bay Area for internships and employment. "It's an open community," Tang said. "We do hope that this will continue as a hub for young emerging programmers to build exciting things at the start of the summer and hopefully turn them into startups for years to come."


To Reap The Benefits Of AI, We Must Invest In Understanding It

Guneeta Singh Bhalla, Ph.D., is a physicist turned oral historian and founder of The 1947 Partition Archive, a crowdsourced repository.


Recently, a growing chorus of AI experts has been voicing concerns about an imminent danger to all of mankind posed by the release of extremely intelligent chatbot technologies like ChatGPT. The primary reason is that such AI systems are black boxes: even the engineers who build them cannot fully explain how they arrive at their outputs.

However, I believe it does not have to be this way. We have the expertise in our world to decode this technology and walk into a future that isn't blind or dangerous but rather is in our control and enhanced by AI. For this to happen, we need a public mandate and serious public investment into gathering scientists in a concentrated effort to decode modern neural network systems before it is too late.

The most advanced AI systems are being built by a small number of people at companies caught in a race for profits, with the biggest players being Google, Microsoft and OpenAI. Joy Buolamwini, the founder of the Algorithmic Justice League, recently shared in a Harvard Business Review article that, "Companies are building products on a foundation of data that has been collected without affirmative consent and with total disregard for individual privacy." It's true. These companies are also creating neural networks that are optimized for profits. There is no incentive for these commercial enterprises to assess risks to humanity or to decode the black boxes they are building. The AI revolution is thus not democratic; we did not choose it through our voting power. Instead, AI technologies that may pose an existential threat are being thrust upon society through sophisticated, commercially driven algorithms that supersede our intelligence and incentivize us to interact with them in order to maximize advertising dollars.

Yet, as many others will point out, AI technologies have the potential to be a great boon to society—going so far as to help us identify cures for diseases, and so much more. The existential dangers, however, are so absolute that we must address them first in order to experience the boons. Given our own treatment of creatures on Earth less intelligent than ourselves, it is easy to see that fears of an AI gone awry or misused by bad actors are not unwarranted. For this reason, an open letter is being circulated by the Future of Life Institute calling for a six-month halt in AI development to give regulatory bodies a chance to catch up. Last I checked, the letter had gathered nearly 33,000 signatures (including my own).

Ultimately, we need time—not only for regulatory bodies to catch up but for our scientists (namely, physicists, mathematicians, chemists and biologists) to have the funding incentives and a chance to decode the mysteries of the neural network black box. Scientists, after all, devote their lives to unraveling the mysteries of nature. Fundamental science is all about decoding nature's black boxes. Physicists, for instance, are highly trained in mathematically modeling natural phenomena using known laws of physics.

I'll relate my own example of "decoding" social media data from about a decade ago. Sometime around 2012 or 2013, Facebook announced the open-sourcing of the company's datasets on user interactions. I was a postdoctoral researcher at the University of California, Berkeley at the time, working on quantum transport in complex oxides. One dataset posted by an engineer particularly caught my eye. The author had plotted the average number of user interactions between two individuals who changed their relationship status on Facebook, before and after they began dating, as a function of time. The plotted curve qualitatively resembled a metal-insulator phase transition phenomenon I had been measuring for years. The implications fascinated me. Could it be that interactions among millions of humans on social media, in aggregate, can be modeled with known laws of thermodynamics? Maybe millions of humans on a digital platform interact in patterns similar to those we already know from particle aggregates such as gases, liquids and even crystals. It's a hypothesis I didn't get a chance to test, as I didn't succeed in obtaining the data from Facebook.
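The sharp transition described here can be illustrated with a toy model. The snippet below is purely illustrative: the logistic form, the parameter values and the data are my own assumptions, not the actual Facebook dataset, but it shows the kind of order-parameter-like curve the author is describing, where daily interactions ramp up sharply around day 0 (the relationship change).

```python
# Toy model of "interactions per day" around a relationship-status change
# at day 0, shaped like a smooth phase-transition-style step.
# Entirely synthetic; peak and width are arbitrary illustrative values.
import math

def interactions(day: float, peak: float = 30.0, width: float = 5.0) -> float:
    """Logistic ramp: near zero well before day 0, saturating at `peak`
    well after, with the sharpness controlled by `width`."""
    return peak / (1.0 + math.exp(-day / width))

# Sample the curve from 20 days before to 20 days after the transition.
curve = [(d, round(interactions(d), 1)) for d in range(-20, 21, 5)]
```

Fitting a curve of this family to real interaction data, and checking whether its parameters behave like those of a physical phase transition, is the sort of analysis the author suggests scientists were never given the chance to run.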

What I learned from this instance was something powerful: If human interactions in the electronic world can be modeled, it means human interactions can be manipulated. Those engineers writing the code to develop these software brains (the neural networks) are perhaps not familiar enough with physics to model their data and gain an insight into how their neural networks are working, or perhaps there is no incentive to understand how they are working as long as the objective of maximizing profits is achieved.

Today, I believe it's quite likely that neural networks have decoded the "physics," if you will, of human interactions and are able to exploit us on a societal scale. In his recent Economist piece, well-known historian Yuval Harari argues this point by saying that AI is "hacking human civilization" and manipulating our collective behavior through social media and large language models, having learned our habits through trillions of interactions. Facebook's unexplored datasets were troubling enough (but also intriguing) in 2012, and today, in light of the capabilities of ChatGPT, scrutiny by scientists seems urgent.

The neural networks are likely creating advanced models internally, without our knowledge. It does not have to be this way. Humanity should not have to walk into an AI revolution blindfolded. I strongly believe this moment calls for a very large investment into understanding the deep learning our modern AI systems are capable of, in a major public project that is perhaps as large and as important as the Manhattan Project was for nuclear technology.

Only when scientists are able to understand how intelligence works and predict how it will evolve, can we develop systems safely and for the benefit of humanity and life on our planet. For this to happen, we need both a pause in further public releases of such technologies and a very large investment in understanding them.



RelationalAI And Snowflake Join Forces To Revolutionize Enterprise AI Decision-making


RelationalAI, a Berkeley, California-based artificial intelligence (AI) startup, announced today the release of a product it's calling an AI "coprocessor" built for Snowflake, the popular cloud data warehouse provider. The coprocessor integrates relational knowledge graphs and composite AI capabilities into Snowflake's data management platform. The startup announced its preview availability at Snowflake Summit 2023, an annual user conference.

The new offering underscores Snowflake's push to become an end-to-end platform for enterprise AI and RelationalAI's vision for an integrated approach to building intelligent applications. "We're bringing the support for those workloads inside Snowflake," RelationalAI CEO Molham Aref said in an interview with VentureBeat. "In the same way a knowledge graph makes it easier for a human to know what's going on in the data, it makes it easier for a language model."

Aref explained how RelationalAI integrates with data clouds and language models, and how it enables customers to build knowledge graphs and semantic layers on top of their data.

The coprocessor allows Snowflake customers to run knowledge graphs, prescriptive analytics and rules engines within Snowflake. This eliminates the need to move data out of Snowflake into separate systems for those capabilities. Customers can now build fraud detection, supply chain optimization and other AI-driven applications entirely within Snowflake.

Empowering enterprises with better data

RelationalAI's AI coprocessor can run securely in the Data Cloud with Snowpark Container Services, a new feature that Snowflake announced at this week's summit. Snowpark Container Services allows customers to run third-party software and applications within their Snowflake account, enhancing the value of their data without compromising its security.

RelationalAI has demonstrated impressive early adoption across industries including financial services, retail and telecommunications. Several notable organizations are using RelationalAI for business-critical workloads in production today.

"The amazing thing about language models is, you can ask them general questions, and often they can just answer from their internal references," Aref told VentureBeat. "Sometimes you might ask questions like, 'How much money did this telco lose due to fraud last year?' A language model has never seen [the company's] cost data or financials. So it can't answer that question. But if you can point it to where [the company's] data lives, and you ask it, and it can translate from that question to SQL queries, it will be able to give you the answer to that question."

"So how do you get language models to talk to databases?," he asked. "Well, one way to do it is to get them to talk directly to databases, which is fine. It works some of the time. But if you have 180 million columns worth of information, that's more likely to confuse the language model. So what a knowledge graph lets you do is actually build a semantic layer on top of all these data assets. The knowledge graph makes it easier for a human to know what's going on in the data. It makes it easier for a language model because the language model is trained on text that humans wrote and sort of understands the world in the same way that we understand it using the same terminology."

The future of data clouds and relational knowledge graphs

Aref also shared his vision for the future of computing with the combination of language models, data clouds and relational knowledge graphs.

"I really think those are the three legs of the stool — they're going to be at the core of every platform for building decision intelligence in the enterprise," he said. "Knowledge graphs are central to making it all work because they provide a simplifying abstraction that makes it possible for things to talk to each other. So it's a very important kind of connection point between language models and humans and databases. So it gives us a common language to talk to each other with."

RelationalAI is one of the few startups that are tackling the challenge of building intelligent applications with composite AI workloads. The company was founded in 2017 by Aref, who has a background in AI, databases and enterprise software. The company has raised $122 million in funding from investors such as Addition, Madrona Venture Group, Menlo Ventures, Tiger Global and former Snowflake CEO Bob Muglia.

Also a board member at RelationalAI, Muglia praised the company's technology and vision in a press release.

"The emergence of language models has completely changed the computing landscape," Muglia said. "As transformative as language models are, their effectiveness can be further amplified when combined with cloud platforms and relational knowledge graphs. I believe this combination will define the future of computing, unlocking powerful capabilities and giving organizations new superpowers."






