What to expect from AI in 2023

As a fairly commercially successful author once wrote, “the night is dark and full of terrors, the day bright and beautiful and full of hope.” It’s a fitting image for AI, which, like all technologies, has its strengths and weaknesses.

Art-generating models such as Stable Diffusion, for example, have fueled incredible bursts of creativity, powering apps and even entirely new business models. On the other hand, their open source nature lets bad actors use them to create deepfakes at scale, all while artists protest that the systems profit from their work.

What will AI look like in 2023? Will regulation rein in the worst of what AI can bring, or are the floodgates open? Will powerful, transformative new forms of AI emerge, like ChatGPT, disrupting industries once thought safe from automation?

Expect more (problematic) art generation AI apps

Following the success of Lensa, the AI-powered selfie app from Prisma Labs that went viral, expect a wave of similar apps. And expect them to be capable of being tricked into creating NSFW images, and of disproportionately sexualizing and altering the appearance of women.

Maximilian Gahntz, a senior policy researcher at the Mozilla Foundation, said he expects the integration of generative AI into consumer technology to amplify the effects of such systems, both good and bad.

Stable Diffusion, for example, was fed billions of images from the internet until it “learned” to associate specific words and concepts with specific imagery. Text-generating models, meanwhile, have routinely been tricked into espousing offensive views or producing misleading content.
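For a concrete sense of how those learned word-to-image associations get used, here is a minimal text-to-image sketch using Hugging Face’s diffusers library; the model ID and settings are illustrative defaults from that ecosystem, not details drawn from this article.

import torch
from diffusers import StableDiffusionPipeline

# Load a publicly released Stable Diffusion checkpoint (an illustrative
# model ID; the weights download on first use). A CUDA GPU is assumed.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# The prompt taps into word-image associations learned from billions of
# scraped images.
image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")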

Mike Cook, a member of the Knives and Paintbrushes open research group, agrees with Gahntz that generative AI will continue to prove a major (and problematic) transformative force. But he thinks 2023 has to be the year that generative AI “puts its money where its mouth is.”

Prompts by TechCrunch, model by Stability AI, generated with the free tool DreamStudio.

“It’s not enough to motivate a community of specialists [to create new tech]. For technology to become a long-term part of our lives, it must either make someone a lot of money or have a meaningful impact on the daily lives of the general public,” Cook said. “So I expect to see a real push to make generative AI actually achieve one of those two things, with mixed success.”

Artists lead efforts to opt out of datasets

DeviantArt released an AI art generator built on Stable Diffusion and fine-tuned on artwork from the DeviantArt community. The generator was met with loud complaints from longtime DeviantArt users, who criticized the platform’s lack of transparency in using their uploaded art to train the system.

The creators of the most popular systems, OpenAI and Stability AI, say they have taken steps to limit the amount of harmful content their systems produce. But judging by many of the generations circulating on social media, it’s clear there’s work to be done.

“Datasets require active curation to address these problems and should be subject to significant scrutiny, including from the communities that tend to get the short end of the stick,” Gahntz said, comparing the process to the ongoing controversies over content moderation on social media.

Stability AI, which largely funds the development of Stable Diffusion, recently bowed to public pressure, signaling that it will allow artists to opt out of the dataset used to train the next-generation Stable Diffusion model. Through the website HaveIBeenTrained.com, rightsholders will be able to request opt-outs before training begins in a few weeks’ time.

OpenAI offers no such opt-out mechanism, preferring instead to partner with organizations like Shutterstock to license portions of their image galleries. But given the legal and publicity headwinds it faces alongside Stability AI, it’s likely only a matter of time before it follows suit.

The courts may eventually force its hand. In the U.S., Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by letting Copilot, GitHub’s service that intelligently suggests lines of code, regurgitate sections of licensed code without providing credit.

Perhaps anticipating the legal challenge, GitHub recently added settings to prevent public code from surfacing in Copilot’s suggestions, and plans to introduce a feature that will reference the source of code suggestions. But they are imperfect measures. In at least one instance, the filter setting caused Copilot to emit large chunks of copyrighted code complete with attribution and license text.

Expect criticism to grow over the coming year, particularly as the U.K. mulls rules that would remove the requirement that systems trained on public data be used strictly non-commercially.

Open source and decentralized efforts will continue to grow

In 2022, a handful of AI companies dominated the stage, chiefly OpenAI and Stability AI. But the pendulum may swing back toward open source in 2023, Gahntz said, as the ability to build new systems moves beyond “resource-rich and powerful AI labs.”

A community approach may lead to more scrutiny of systems as they are built and deployed, he said. “Open models and open datasets will enable much of the critical research that has pointed to many of the flaws and harms linked to generative AI and that is often far too difficult to conduct.”

Results from OpenFold, an open source AI system that predicts the shapes of proteins, compared to DeepMind’s AlphaFold2.

Examples of such community-focused efforts include large language models from EleutherAI and from BigScience, an effort backed by the AI startup Hugging Face. Stability AI itself funds a number of communities, such as the music-generation-focused Harmonai and OpenBioML, a loose collection of biotech experiments.

Training and running advanced AI models still requires funding and expertise, but as open source initiatives mature, distributed computing could challenge traditional data centers.

BigScience took a step toward enabling distributed development with the recent release of the open source Petals project. Similar to Folding@home, Petals lets people contribute their computing power to run large AI language models that would normally require a high-end GPU or server.
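For a sense of how that looks in practice, here is a rough sketch based on Petals’ early public examples; the class and checkpoint names (DistributedBloomForCausalLM, “bigscience/bloom-petals”) are assumptions from that period and may have changed since.

from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM  # assumed early-API class name

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-petals")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals")

# Each forward pass is served by volunteer GPUs across the public swarm,
# Folding@home-style, rather than by a single high-end server.
inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))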

“Modern generative models are computationally expensive to train and run. By some back-of-the-envelope estimates, ChatGPT’s daily running costs are around $3 million,” Chandra Bhagavatula, a senior research scientist at the Allen Institute for AI, said in an email. “To make this commercially viable and accessible more widely, it will be important to address this.”

Chandra points out, however, that large labs will retain their competitive advantages as long as their methods and data remain proprietary. In a recent example, OpenAI released Point-E, a model that can generate 3D objects given a text prompt. But while OpenAI open-sourced the model, it didn’t disclose the sources of Point-E’s training data or release that data.

Point-E generating a point cloud. Image credit: OpenAI.
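For the curious, here is a condensed text-to-point-cloud sketch loosely following the example notebook in OpenAI’s open-sourced point-e repository; the config names (“base40M-textvec”, “upsample”) and sampler arguments are recalled from that notebook and should be treated as assumptions rather than a guaranteed-current API.

import torch
from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small text-conditioned base model plus an upsampler that densifies
# the initial 1,024 points to 4,096.
base = model_from_config(MODEL_CONFIGS["base40M-textvec"], device).eval()
base.load_state_dict(load_checkpoint("base40M-textvec", device))
upsampler = model_from_config(MODEL_CONFIGS["upsample"], device).eval()
upsampler.load_state_dict(load_checkpoint("upsample", device))

sampler = PointCloudSampler(
    device=device,
    models=[base, upsampler],
    diffusions=[
        diffusion_from_config(DIFFUSION_CONFIGS["base40M-textvec"]),
        diffusion_from_config(DIFFUSION_CONFIGS["upsample"]),
    ],
    num_points=[1024, 4096 - 1024],
    aux_channels=["R", "G", "B"],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=("texts", ""),  # only the base model sees the prompt
)

samples = None
for x in sampler.sample_batch_progressive(
    batch_size=1, model_kwargs=dict(texts=["a red motorcycle"])
):
    samples = x  # keep the final denoising step
point_cloud = sampler.output_to_point_clouds(samples)[0]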

“I think open source efforts and decentralization efforts are absolutely valuable and will benefit more researchers, practitioners and users,” said Chandra. “But despite being open source, resource constraints prevent many researchers and practitioners from accessing the best models.”

AI companies bow to upcoming regulations

Regulations such as the EU’s AI Act may well change how companies develop and deploy AI systems going forward. So could more local initiatives, such as New York City’s AI hiring statute, which requires that AI and algorithm-based technology used in recruiting, hiring or promotion be audited for bias before being put to use.

Chandra sees these regulations as necessary, especially in light of generative AI’s increasingly apparent technical flaws, such as its tendency to spew factually incorrect information.

“This makes generative AI difficult to apply in many areas where mistakes can have very high costs, such as healthcare. In addition, the ease of generating incorrect information creates challenges around misinformation and disinformation,” she said. “[And yet] AI systems are already making decisions loaded with moral and ethical implications.”

Next year, however, will bring only the threat of regulation. Expect much more quibbling over rules and court cases before anyone is fined or charged. But companies may still jockey for position in the most advantageous categories of upcoming laws, such as the AI Act’s risk categories.

As currently written, the AI Act sorts AI systems into one of four risk categories, each with varying requirements and levels of scrutiny. Systems in the highest-risk category, “high-risk” AI (e.g., credit scoring algorithms and robotic surgery apps), must meet certain legal, ethical and technical standards before being allowed to enter the European market. The lowest-risk category, “minimal or no risk” AI (e.g., spam filters and AI-enabled video games), imposes only transparency obligations, such as making users aware that they’re interacting with an AI system.

Os Keyes, a Ph.D. candidate at the University of Washington, expressed concern that companies will aim for the lowest risk level in order to minimize their own responsibilities and visibility to regulators.

“That worry aside, [the AI Act] really is the most positive thing I’ve seen on the table,” they said. “I haven’t seen much of anything out of Congress.”

But investment is not a sure thing

Gahntz argued that even if an AI system works well enough for most people but does deep harm to some, there’s “still a lot of homework left” before a company should make it widely available. “There’s a business case for all this too. If your model generates a lot of junk, consumers aren’t going to like it,” he added. “But this is also about fairness.”

Whether companies will be persuaded by that argument going into next year is unclear, especially with investors seemingly eager to put their money into generative AI, promising or not.

Stability AI, in the midst of the Stable Diffusion controversy, raised $101 million at a valuation of over $1 billion from prominent backers including Coatue and Lightspeed Venture Partners. OpenAI is said to be valued at $20 billion as it enters advanced talks to raise more funding from Microsoft. (Microsoft previously invested $1 billion in OpenAI in 2019.)

Of course, they could be exceptions to the rule.

Image credit: Jasper.

Apart from the self-driving companies Cruise, Wayve and WeRide and the robotics firm MegaRobo, the best-funded AI companies this year were software-based, according to Crunchbase. Contentsquare, which sells a service that provides AI-driven recommendations for web content, closed a $600 million round in July. Uniphore, which sells software for “conversational analytics” (think call center metrics) and conversational assistants, landed $400 million in February. Meanwhile, Highspot, whose AI-powered platform provides sales reps and marketers with real-time, data-driven recommendations, nabbed $248 million in January.

Investors may well chase safer bets like automating the analysis of customer complaints or generating sales leads, even if these aren’t as “sexy” as generative AI. That’s not to suggest there won’t be big, attention-grabbing investments, but they’ll be reserved for players with clout.
