As of March 2019, the hotel has been running for 10 months. Many guests are still at a relatively early stage with their work. Below we have collated a list of outputs that people have volunteered to share thus far.

This should not be interpreted as an exhaustive list of everything of value that the hotel has produced. We have only included things for which the value can be independently verified. This list likely captures less than half of the actual value.

Total expenses as of March 2019

Money: So far ~£66,500* has been spent on hosting our residents, of which ~£7,600 was contributed by residents. Everything below is a result of that funding.

Time: ~4,000 person-days spent at the hotel.

Outputs as of March 2019

Summary

  • The incubation of 3 scalable EA organisations
  • 1 online course produced
  • 29 posts on LessWrong and the EA Forum (with a total of ~1,000 karma)
  • 4 EA retreats hosted, 2 organised
  • 18 online courses followed
  • 2 internships and 1 job earned at EA organisations

Key

Title with link [C] (K)
C = estimated counterfactual likelihood of the output happening without the EA Hotel.
K = karma on the EA Forum/LessWrong.

AI Safety related

Anonymous 1:
One 3-month work trial earned at a prominent X-risk organisation.

RAISE:
Nearly the entirety of this online course was created at the hotel.

Linda Linsefors:
Posts on the AI Alignment Forum:
Optimization Regularization through Time Penalty [0%] (12)
(“This post resulted from conversations at the EA Hotel and would therefore not have happened without the hotel.”)
The Game Theory of Blackmail (24)
(“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.”)

Chris Leong:
“I’ve still got a few more posts on infinity to write up, but here are the posts I’ve made on LessWrong since arriving [with estimates of how likely they were to be written had I not been at the hotel]”:
Summary: Surreal Decisions [50%] (27)
An Extensive Categorisation of Infinite Paradoxes [80%] (-4)
On Disingenuity [50%] (34)
On Abstract Systems [50%] (14)
Deconfusing Logical Counterfactuals [75%] (18)
Debate AI and the Decision to Release an AI [90%] (8)

John Maxwell:
Courses taken:
Improving Your Statistical Inferences (21 hours)
MITx Probability
Statistical Learning
Formal Software Verification
ARIMA Modeling with R
Introduction to Recommender Systems (20-48 hours)
Text Mining and Analytics
Introduction to Time Series Analysis
Regression Models
“At the hotel, I was working on projects and self-study to prepare for seeking a machine learning job. I think the 6 months I spent there made my resume and skills significantly stronger for this kind of job than they would’ve been otherwise. I also optimized for acquiring ML knowledge relevant to AI Safety and to EA-related machine learning project ideas of mine, and this effort felt pretty successful.

After my stay, I was unexpectedly offered a high-paying remote job which let me set my own hours but had nothing to do with machine learning. After extensive consideration of the pros and cons, I took the job. I’m now planning to do it part-time from a low-cost-of-living location and spend the rest of my time studying ML with a stronger AI Safety focus, plus writing up some ideas of mine related to AI Safety.

Although the things I did at the hotel didn’t help me get this sweet remote job, the learning and thinking I did felt quite valuable on its own. My time at the hotel provided further evidence that I’m capable of self-directed study and research. I also decided that further direct optimization for industry career capital wouldn’t help me much in thinking about AI Safety; this was part of why I didn’t go for a machine learning role as originally planned. I’ve donated thousands of dollars to the hotel, and I’m happy to chat with donors considering donations of $1,000 or more about the pros and cons of the hotel as a giving opportunity.”

Anonymous 2:
Courses:
Probabilistic Graphical Models
Model Thinking
MITx Probability
LessWrong posts:
Annihilating aliens & Rare Earth suggest early filter (8)
Believing others’ priors (9)
AI development incentive gradients are not uniformly terrible (23)
EA Forum post:
Should donor lottery winners write reports? (29)

Rationality or community building related

Retreats hosted:

EA London Retreats:
Life Review Weekend (Aug. 24th – 27th)
Careers Week (Aug. 27th – 31st)
Holiday/EA Unconference (Aug. 31st – Sept. 3rd)

EA Glasgow (March 2019)

Denisa Pop:
Researching and developing presentations and workshops in Rational Compassion: see How we might save the world by becoming super-dogs [0%]
Helped organise the EA Values-to-Actions Retreat [33%]
Helped organise the EA Community Health Unconference [33%]

Toon Alfrink:
EA Forum posts:
EA is vetting-constrained [10%] (96)
The Home Base of EA [90%] (12)
Task Y: representing EA in your field [90%] (11)
LessWrong posts:
We can all be high status [10%] (61)
The housekeeper [10%] (26)
What makes a good culture? [90%] (30)

Matt Goldenberg:
The entirety of Project Metis [5%]
Posts on LessWrong:
The 3 Books Technique for Learning a New Skill [5%] (125)
A Framework for Internal Debugging [5%] (20)
S-Curves for Trend Forecasting [5%] (87)
What Vibing Feels Like [5%] (9)
How to Understand and Mitigate Risk [5%] (47)

Global health related

Derek Foster:
Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness and Well-Being Policy Report, published by the Global Happiness Council [99%].
Hired as a research analyst for Rethink Priorities [95%].

Animal welfare related

Max Carpendale:
Posts on the EA Forum:
The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question (28)
Sharks probably do feel pain: a reply to Michael Tye and others (19)
Why I’m focusing on invertebrate sentience (48)

Frederik Bechtold:
Received an (unpaid) internship at Animal Ethics [1%].

Saulius Šimčikas:
Posts on the EA Forum:
Rodents farmed for pet snake food [99%] (64)
Will companies meet their animal welfare commitments? [96%] (109; winner of 3rd place EA Forum Prize for Feb 2019)

Magnus Vinding:
Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%].
Revising a journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel.”)
“I got the idea to write the book I’m currently writing (‘Suffering-Focused Ethics’) [50%].”

*This is the total cost of the project to date (29 March 2019), not including the purchase of the building (£132,276.95, including the building survey and conveyancing).
