Black Hat 2023: AI gets big defender prize money


Black Hat is big on AI this year, and for good reason


The Black Hat keynote trotted out a litany of security problems AI tries to fix, along with a dizzying array of ones it may unwittingly cause; or, really, it simply described a huge new attack surface created by the very thing that was supposed to "fix" security.

But if DARPA has its way, its AI Cyber Challenge (AIxCC) will fix that by pouring huge amounts (millions) of dollars in prize money toward solving AI security problems, rolling out over the coming years at DEF CON. That's enough for some aspiring teams to spin up their own skunkworks of the willing, to tackle the issues that DARPA, along with its industry collaborators, thinks are important.

The top five teams at next year's DEF CON stand to haul in US$2 million each in the semifinal round (no small sum for budding hackers), followed by over $8 million in total prize money if you win the finals. That's not chump change, even if you don't live in your mom's basement.

Problems with AI

One major issue with some current AI (like language models) is that it's public. By gorging itself on as much of the internet as it can slurp up, it tries to build an increasingly accurate zeitgeist of all things useful, such as the relationships between the questions and answers we might be asking, inferring context, making assumptions, and attempting to build a prediction model.

But few companies want to trust a public model, which may use their internal sensitive data to feed the beast and make it public. There is no kind of chain of trust in the decision-making behind what large language models spew into the public sphere. Is there reliable redaction of sensitive information, or a model that can attest to its own integrity and security? No.

What about protecting legally protected things like books, images, code, music, and the like from being pseudo-assimilated into the giant ball of goo used to train LLMs? One could argue the trainers aren't really misusing the thing itself, but they certainly are using it to train their products for commercial success in the marketplace. Is that proper? Legal wonks haven't exactly figured that out.

ChatGPT – a sign of things to come?

I attended a session on ChatGPT phishing, which also promises to be a newly supercharged threat, since LLMs can also ingest images, along with related conversations and other data, to synthesize the tone and nuance of a user and then perhaps send a crafty email you'd be hard-pressed to detect as bogus. Which seems like bad news, really.

The good news, though, is that with multimodal LLM functionality coming out soon, you could send your bot to a Zoom meeting to take notes for you, determine intent based on participants' interactions, judge mood, and ingest the content of documents shown while screen-sharing, then tell you what, if anything, you should probably respond to, all while making it seem like you were there. That could actually be a good feature, if a highly tempting one.

But what will be the actual end result of all this AI LLM development? Will it be for the betterment of humanity, or will it burst like the crypto/blockchain bubble did a while back? And, if nothing else, are we prepared to face the real consequences, of which there will be many, head-on?

Related reading: Will ChatGPT start writing killer malware?

Date: 2023-08-14 05:30:00





Alina A, Toronto
Alina A. is a UofT graduate and Google Certified cybersecurity analyst currently based in Toronto, Canada. She is passionate about researching and writing on cybersecurity issues, trends, and concerns in an emerging digital world.

