Reinforcement Learning from Experience Feedback: Application to Economic Policy

Volume/Issue: Volume 2024 Issue 114
Publication date: June 2024
ISBN: 9798400277320
Topics covered in this book


Economics / Macroeconomics, Economics / General, LLMs, GAI, RLHF, RLAIF, RLXF, experience feedback, RL model, applying RLXF, Reinforcement Learning from AI feedback, Reinforcement Learning method, Artificial intelligence, Economic classification

Summary

Learning from the past is critical for shaping the future, especially when it comes to economic policymaking. Building upon current methods for applying Reinforcement Learning (RL) to large language models (LLMs), this paper introduces Reinforcement Learning from Experience Feedback (RLXF), a procedure that tunes LLMs based on lessons from past experiences. RLXF integrates historical experiences into LLM training in two key ways: by training reward models on historical data, and by using that knowledge to fine-tune the LLMs. As a case study, we applied RLXF to tune an LLM using the IMF's MONA database to generate historically grounded policy suggestions. The results demonstrate RLXF's potential to equip generative AI with a nuanced perspective informed by previous experiences. Overall, RLXF could enable more informed applications of LLMs for economic policy, but the approach carries the risks and limitations of relying heavily on historical data, which may perpetuate biases and outdated assumptions.
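The two stages the summary describes can be illustrated with a toy sketch. This is not the paper's code: the features, data, and the use of best-of-n selection as a lightweight stand-in for RL fine-tuning are all illustrative assumptions. Stage 1 fits a simple logistic-regression reward model on historical (features, outcome) pairs; stage 2 uses that reward model to rank candidate policy suggestions.

```python
# Hypothetical sketch of the two RLXF stages, on toy data.
# All names, features, and data are illustrative assumptions.
import math

def train_reward_model(history, lr=0.5, epochs=200):
    """Stage 1: fit a logistic-regression reward model on historical
    (feature_vector, outcome) pairs, with outcome in {0, 1}."""
    dim = len(history[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in history:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted success probability
            g = p - y                        # gradient of the log-loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g

    def reward(x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))
    return reward

def best_of_n(candidates, reward):
    """Stage 2 proxy: instead of full RL fine-tuning, select the candidate
    suggestion the learned reward model scores highest."""
    return max(candidates, key=lambda c: reward(c[1]))

# Toy "historical" programs: feature 0 might encode, say, fiscal tightening,
# feature 1 an FX reform; the label marks whether the program succeeded.
history = [([1.0, 0.0], 1), ([1.0, 1.0], 1),
           ([0.0, 1.0], 0), ([0.0, 0.0], 0)]
reward = train_reward_model(history)

candidates = [("loosen fiscal stance", [0.0, 0.0]),
              ("tighten fiscal stance", [1.0, 0.0])]
print(best_of_n(candidates, reward)[0])  # prefers the historically successful option
```

In the paper's actual setting, the reward model would be trained on MONA program records rather than toy vectors, and stage 2 would fine-tune the LLM's parameters against that reward (e.g., with a policy-gradient method) rather than merely reranking outputs.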