  Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback

Conitzer, V., Freedman, R., Heitzig, J., Holliday, W. H., Jacobs, B. M., Lambert, N., Mossé, M., Pacuit, E., Russell, S., Schoelkopf, H., Tewolde, E., Zwicker, W. S. (2024): Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback. - In: Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., Berkenkamp, F. (Eds.), Proceedings of the 41st International Conference on Machine Learning (Proceedings of Machine Learning Research; 235), Cambridge, MA: PMLR, 9346-9360.


Files

Conitzer_2024_2404.10271v1.pdf (Any fulltext), 3MB
 
Name: Conitzer_2024_2404.10271v1.pdf
Visibility: Private
MIME-Type: application/pdf


Creators

Creators:
Conitzer, Vincent (1), Author
Freedman, Rachel (1), Author
Heitzig, Jobst (2), Author
Holliday, Wesley H. (1), Author
Jacobs, Bob M. (1), Author
Lambert, Nathan (1), Author
Mossé, Milan (1), Author
Pacuit, Eric (1), Author
Russell, Stuart (1), Author
Schoelkopf, Hailey (1), Author
Tewolde, Emanuel (1), Author
Zwicker, William S. (1), Author
Affiliations:
(1) External Organizations
(2) Potsdam Institute for Climate Impact Research

Content

 Abstract: Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans’ expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about “collective” preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.
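The aggregation question raised in the abstract can be made concrete with a minimal sketch (illustrative only, not taken from the paper): several annotators rank candidate model outputs, and the rankings are combined under two classic social-choice rules, the Borda count and pairwise majority. The annotators, candidates, and rankings below are hypothetical.

# Illustrative sketch (hypothetical data, not from the paper): aggregating
# diverse human rankings of candidate model outputs with two classic
# social-choice rules.
from collections import defaultdict
from itertools import combinations

# Each annotator ranks three candidate outputs, best first.
rankings = [
    ["A", "B", "C"],  # annotator 1
    ["B", "C", "A"],  # annotator 2
    ["A", "C", "B"],  # annotator 3
]

def borda(rankings):
    """Borda count: a candidate earns (n - 1 - position) points per ranking."""
    scores = defaultdict(int)
    for r in rankings:
        for pos, cand in enumerate(r):
            scores[cand] += len(r) - 1 - pos
    return dict(scores)

def pairwise_majority(rankings):
    """For each ordered pair (x, y), count annotators ranking x above y."""
    wins = defaultdict(int)
    for r in rankings:
        for x, y in combinations(r, 2):  # x precedes y in ranking r
            wins[(x, y)] += 1
    return dict(wins)

print(borda(rankings))              # {'A': 4, 'B': 3, 'C': 2}
print(pairwise_majority(rankings))  # ('A', 'B'): 2 -> a majority prefers A to B

When annotators diverge, different rules can crown different winners; that sensitivity to the choice of aggregation rule is the kind of question the paper argues social choice theory is positioned to address.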

Details

Language(s): eng - English
Dates: 2024-05-02, 2024
Publication Status: Finally published
Rev. Type: Peer
Identifiers: MDB-ID: No data to archive
PIKDOMAIN: RD4 - Complexity Science
Organisational keyword: RD4 - Complexity Science
Organisational keyword: FutureLab - Game Theory & Networks of Interacting Agents
Research topic keyword: Inequality and Equity
Regional keyword: Global
Model / method: Decision Theory
Model / method: Machine Learning

Source 1

Title: Proceedings of the 41st International Conference on Machine Learning
Source Genre: Book
 Creator(s):
Salakhutdinov, Ruslan, Editor
Kolter, Zico, Editor
Heller, Katherine, Editor
Weller, Adrian, Editor
Oliver, Nuria, Editor
Scarlett, Jonathan, Editor
Berkenkamp, Felix, Editor
Publ. Info: Cambridge, MA: PMLR
Start / End Page: 9346 - 9360

Source 2

Title: Proceedings of Machine Learning Research
Source Genre: Series
Volume / Issue: 235
Identifier: ISSN: 2640-3498