
Released

Book Chapter

Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback

Authors

Conitzer, Vincent
External Organizations;

Freedman, Rachel
External Organizations;

Heitzig, Jobst
Potsdam Institute for Climate Impact Research;

Holliday, Wesley H.
External Organizations;

Jacobs, Bob M.
External Organizations;

Lambert, Nathan
External Organizations;

Mossé, Milan
External Organizations;

Pacuit, Eric
External Organizations;

Russell, Stuart
External Organizations;

Schoelkopf, Hailey
External Organizations;

Tewolde, Emanuel
External Organizations;

Zwicker, William S.
External Organizations;

Full Texts (freely accessible)
No freely accessible full texts are available in PIKpublic
Supplementary Material (freely accessible)
No freely accessible supplementary materials are available
Citation

Conitzer, V., Freedman, R., Heitzig, J., Holliday, W. H., Jacobs, B. M., Lambert, N., Mossé, M., Pacuit, E., Russell, S., Schoelkopf, H., Tewolde, E., Zwicker, W. S. (in press): Position: Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback. - In: Salakhutdinov, R., Kolter, Z., Heller, K., Weller, A., Oliver, N., Scarlett, J., Berkenkamp, F. (Eds.), Proceedings of the 41st International Conference on Machine Learning (Proceedings of Machine Learning Research ; 235), Cambridge, MA : MLR Press, 9346-9360.


Citation link: https://publications.pik-potsdam.de/pubman/item/item_29847
Abstract
Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans’ expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about “collective” preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.
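
To make the aggregation question concrete, consider several annotators whose rankings of the same candidate outputs disagree. The following sketch uses hypothetical data and rule choices (it is not taken from the paper) to illustrate two classical social-choice tools, a Borda-style scoring rule and pairwise majority comparisons, applied to diverging feedback; with the data below, Borda scoring produces a three-way tie and the pairwise majorities form a Condorcet cycle, the kind of inconsistency the abstract says social choice is positioned to address.

# Minimal illustrative sketch (hypothetical data, not from the paper):
# aggregating diverging rankings of model outputs with two simple rules.

from collections import defaultdict
from itertools import combinations

# Each annotator ranks the candidate model outputs from best to worst.
# (Hypothetical data: three annotators disagree about outputs A, B, C.)
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def borda_scores(rankings):
    """Give each output (n - 1 - position) points per ranking and sum them."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, output in enumerate(ranking):
            scores[output] += (n - 1) - position
    return dict(scores)

def pairwise_majorities(rankings):
    """Count how often each output is ranked above each other output."""
    wins = defaultdict(int)
    for ranking in rankings:
        for better, worse in combinations(ranking, 2):
            wins[(better, worse)] += 1
    return dict(wins)

if __name__ == "__main__":
    print("Borda scores:", borda_scores(rankings))        # a three-way tie: A, B, C each score 3
    print("Pairwise wins:", pairwise_majorities(rankings)) # a Condorcet cycle: A>B, B>C, C>A (each 2-1)

In practice the "candidates" could be alternative model responses or high-level constitutional principles, and the choice of aggregation rule is itself a design decision of the kind the paper argues social choice theory can inform.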