What part of the alignment problem does this plan aim to solve?
- Towards HH (helpful and harmless) AI at low human and compute cost: a prosaic variant of outer alignment.

Why has that part of the alignment problem been chosen?
- Because it's easily measurable and the subject of a lot of research in academic ML.

How does this plan aim to solve the problem?
- By providing a new method, Rewindable Auto-regressive INference (RAIN), which "allows pre-trained LLMs to evaluate their own generation and use the evaluation results to guide rewind and generation for AI safety". (A minimal sketch of such a rewind loop appears after this list.)

What evidence is there that the methods will work?
- The usual graphs, stats, and benchmarks. Hard to tell whether they're legitimate without spending a couple of hours thinking about whether the authors measured what they think they measured and how relevant it is.

What are the most likely causes of this not working?
- Results selected by the authors to achieve publication, despite being irrelevant to the underlying motivation.
- Impractical method, due to being inefficient (in compute, human cost, or training data) relative to other similar methods.
- Irrelevant method, due to not being interesting enough to actually get implemented in any frontier AI.
- Prosaic alignment being effectively useless, due to not addressing the parts of alignment relevant to existential risk.
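To make the quoted description concrete, here is a minimal sketch of a rewind-and-regenerate inference loop. It is based only on the one-sentence description above, not on the paper's actual algorithm (which involves a search over candidate continuations); the function names, the safety threshold, and the retry logic are illustrative assumptions.

```python
# Illustrative sketch of self-evaluated, rewindable generation.
# NOT the RAIN algorithm itself: generate_segment, self_evaluate, the
# threshold, and the retry limits are hypothetical stand-ins.
from typing import Callable, List
import random


def rewindable_generate(
    prompt: List[str],
    generate_segment: Callable[[List[str]], List[str]],  # model proposes the next chunk of tokens
    self_evaluate: Callable[[List[str]], float],          # model scores its own partial output in [0, 1]
    max_segments: int = 8,
    safety_threshold: float = 0.5,
    max_rewinds: int = 4,
) -> List[str]:
    """Generate segment by segment; if the model's own safety score for the
    extended sequence falls below the threshold, discard (rewind) that segment
    and sample a replacement."""
    tokens = list(prompt)
    for _ in range(max_segments):
        for _attempt in range(max_rewinds):
            candidate = tokens + generate_segment(tokens)
            if self_evaluate(candidate) >= safety_threshold:
                tokens = candidate  # accept the segment and continue
                break
            # score too low: rewind by discarding the candidate segment
        else:
            # every attempt scored poorly; stop rather than emit a bad continuation
            break
    return tokens


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without a real LLM.
    vocab = ["safe", "unsafe"]

    def toy_generate(context: List[str]) -> List[str]:
        return [random.choice(vocab)]

    def toy_evaluate(sequence: List[str]) -> float:
        return 1.0 - sequence.count("unsafe") / len(sequence)

    print(rewindable_generate(["<prompt>"], toy_generate, toy_evaluate))
```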