Teaching models to express their uncertainty in words

We show that a GPT-3 model can learn to express uncertainty about its own answers in natural language—without use of model logits. When given a question, the model generates both an answer and a level of confidence (e.g. “90% confidence” or “high confidence”). These levels map to probabilities that are well calibrated. The model also remains moderately calibrated under distribution shift, and is sensitive to uncertainty in its own answers, rather than imitating human examples. To our knowledge, this is the first time a model has been shown to express calibrated uncertainty about its own answers in natural language. For testing calibration, we introduce the CalibratedMath suite of tasks. We compare the calibration of uncertainty expressed in words (“verbalized probability”) to uncertainty extracted from model logits. Both kinds of uncertainty are capable of generalizing calibration under distribution shift. We also provide evidence that GPT-3’s ability to generalize calibration depends on pre-trained latent representations that correlate with epistemic uncertainty over its answers.
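To make the calibration claim concrete, here is a minimal sketch of how verbalized confidence could be scored against ground truth. The phrase-to-probability mapping and the choice of expected calibration error (ECE) as the metric are illustrative assumptions, not the paper's exact protocol; the authors report their own metrics and mappings.

```python
import re

# Assumed mapping from verbal confidence phrases to probabilities.
# Longer phrases come first so "highest" is not matched as "high".
WORD_TO_PROB = {
    "highest": 1.0, "lowest": 0.0, "high": 0.75, "low": 0.25, "medium": 0.5,
}

def parse_confidence(text: str) -> float:
    """Extract a probability from output like '90% confidence' or 'high confidence'."""
    match = re.search(r"(\d+(?:\.\d+)?)\s*%", text)
    if match:
        return float(match.group(1)) / 100.0
    for word, prob in WORD_TO_PROB.items():
        if word in text.lower():
            return prob
    raise ValueError(f"No confidence found in: {text!r}")

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by stated confidence; compare average confidence to accuracy."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into top bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(avg_conf - accuracy)
    return ece

# Toy example: three model outputs with ground-truth correctness.
outputs = ["90% confidence", "high confidence", "low confidence"]
correct = [True, True, False]
confs = [parse_confidence(o) for o in outputs]
print(f"ECE: {expected_calibration_error(confs, correct):.3f}")
```

A perfectly calibrated model drives the ECE toward zero: among answers stated with 75% confidence, about 75% should actually be correct.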

Author: Alina A
Date: 2022-05-28 03:00:00


Alina A, Toronto (http://alinaa-cybersecurity.com)
Alina A is a UofT graduate and Google Certified Cyber Security analyst based in Toronto, Canada. She is passionate about research and writes about cybersecurity issues, trends, and concerns in an emerging digital world.
