How undesired targets can come up with appropriate rewards

Exploring examples of aim misgeneralisation – the place an AI system’s capabilities generalise however its aim does not

As we construct more and more superior synthetic intelligence (AI) methods, we need to be certain they don’t pursue undesired targets. Such behaviour in an AI agent is commonly the results of specification gaming – exploiting a poor selection of what they’re rewarded for. In our latest paperwe discover a extra delicate mechanism by which AI methods might unintentionally study to pursue undesired targets: goal misgeneralisation (WYD).

GMG happens when a system’s capabilities generalise efficiently however its aim doesn’t generalise as desired, so the system competently pursues the unsuitable aim. Crucially, in distinction to specification gaming, GMG can happen even when the AI system is skilled with an accurate specification.

Our earlier work on cultural transmission led to an instance of GMG behaviour that we didn’t design. An agent (the blue blob, beneath) should navigate round its surroundings, visiting the colored spheres within the appropriate order. Throughout coaching, there may be an “expert” agent (the crimson blob) that visits the colored spheres within the appropriate order. The agent learns that following the crimson blob is a rewarding technique.

The agent (blue) watches the skilled (crimson) to find out which sphere to go to.

Sadly, whereas the agent performs effectively throughout coaching, it does poorly when, after coaching, we change the skilled with an “anti-expert” that visits the spheres within the unsuitable order.

The agent (blue) follows the anti-expert (crimson), accumulating detrimental reward.

Despite the fact that the agent can observe that it’s getting detrimental reward, the agent doesn’t pursue the specified aim to “visit the spheres in the correct order” and as an alternative competently pursues the aim “follow the red agent”.

GMG isn’t restricted to reinforcement studying environments like this one. The truth is, it may happen with any studying system, together with the “few-shot learning” of enormous language fashions (LLMs). Few-shot studying approaches intention to construct correct fashions with much less coaching knowledge.

We prompted one LLM, Gopherto judge linear expressions involving unknown variables and constants, similar to x+y-3. To resolve these expressions, Gopher should first ask concerning the values of unknown variables. We offer it with ten coaching examples, every involving two unknown variables.

At check time, the mannequin is requested questions with zero, one or three unknown variables. Though the mannequin generalises appropriately to expressions with one or three unknown variables, when there are not any unknowns, it however asks redundant questions like “What’s 6?”. The mannequin all the time queries the person at the very least as soon as earlier than giving a solution, even when it’s not crucial.

63400170626042521ef973b3 GMG
Dialogues with Gopher for few-shot studying on the Evaluating Expressions process, with GMG behaviour highlighted.

Inside our paper, we offer further examples in different studying settings.

Addressing GMG is essential to aligning AI methods with their designers’ targets just because it’s a mechanism by which an AI system might misfire. This can be particularly vital as we method synthetic basic intelligence (AGI).

Contemplate two potential sorts of AGI methods:

  • A1: Supposed mannequin. This AI system does what its designers intend it to do.
  • A2: Misleading mannequin. This AI system pursues some undesired aim, however (by assumption) can be sensible sufficient to know that will probably be penalised if it behaves in methods opposite to its designer’s intentions.

Since A1 and A2 will exhibit the identical behaviour throughout coaching, the potential for GMG implies that both mannequin may take form, even with a specification that solely rewards meant behaviour. If A2 is discovered, it will attempt to subvert human oversight to be able to enact its plans in direction of the undesired aim.

Our analysis crew can be completely satisfied to see follow-up work investigating how probably it’s for GMG to happen in observe, and potential mitigations. In our paper, we advise some approaches, together with mechanistic interpretability and recursive evaluationeach of which we’re actively engaged on.

We’re at present gathering examples of GMG on this publicly available spreadsheet. When you’ve got come throughout aim misgeneralisation in AI analysis, we invite you to submit examples here.

Author:
Date: 2022-10-06 20:00:00

Source link

spot_imgspot_img

Subscribe

Related articles

spot_imgspot_img
Alina A, Toronto
Alina A, Torontohttp://alinaa-cybersecurity.com
Alina A, an UofT graduate & Google Certified Cyber Security analyst, currently based in Toronto, Canada. She is passionate for Research and to write about Cyber-security related issues, trends and concerns in an emerging digital world.

LEAVE A REPLY

Please enter your comment!
Please enter your name here