As of October 2019, the hotel has been running for 17 months. Many guests are still at a relatively early stage with their work. Below we have collated a list of outputs that people have volunteered to share thus far.

This should not be interpreted as an exhaustive list of everything of value that the hotel has produced. We have only included things for which the value can be independently verified. This list likely captures less than half of the actual value.

Total expenses as of October 2019

Money: So far ~£110,400* has been spent on hosting our residents, of which ~£17,900 was contributed by residents. Everything below is a result of that funding.

Time: ~7,600 person-days spent at the hotel.
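(Rough arithmetic, assuming the money and time figures cover the same period: net of resident contributions, (£110,400 − £17,900) / 7,600 person-days ≈ £12 per person-day of hosting.)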

Outputs as of October 2019

Summary

  • The incubation of 3 EA projects with potential for scaling
  • 18 online course modules followed
  • 2.5 online course modules produced
  • 46 posts on LessWrong and the EA Forum (with a total of ~1,500 karma)
  • 2 papers published
  • 3 AI Safety events, 1 rationality workshop, and 2 EA retreats hosted; 2 EA retreats organised
  • 2 internships and 2 jobs earned at EA organisations

Key

Title with link [C] (K)
C = counterfactual likelihood of the work happening without the EA Hotel.
K = karma on the EA Forum or LessWrong; where two numbers appear, they are (LessWrong karma; Alignment Forum karma).

AI Safety related

Events hosted:
(Note that these will appear again below under the main organisers’/lecturers’ names)
AI Safety Learning By Doing Workshop (August 2019)
AI Safety Technical Unconference (August 2019) (retrospective written by a participant)
AI Safety Learning By Doing Workshop (October 2019)

Anonymous 1:
One 3-month work trial earned at a prominent X-risk organisation.

RAISE:
Nearly the entirety of this online course was created at the hotel.

Davide Zagami:
RAISE lessons on Inverse Reinforcement Learning + their Supplementary Material [1%] (68)
RAISE lessons on Fundamentals of Formalization [5%] (32)
Coauthored the paper Categorizing Wireheading in Partially Embedded Agents, and presented a poster at the AI Safety Workshop at IJCAI 2019 [15%]

Linda Linsefors:
Posts on the AI Alignment Forum:
Optimization Regularization through Time Penalty [0%] (12)
(“This post resulted from conversations at the EA Hotel and would therefore not have happened without the hotel.”)
The Game Theory of Blackmail (24)
(“I don’t remember where the ideas behind this post came from, so it is hard for me to say what the counterfactual would have been. However, I did get help improving the post from other residents, so it would at least be less well written without the hotel.”)
Organised the AI Safety Learning By Doing Workshop (August and October 2019)
Organised the AI Safety Technical Unconference (August 2019) (retrospective written by a participant)

Chris Leong:
“I’ve still got a few more posts on infinity to write up, but here are the posts I’ve made on LessWrong since arriving [with estimates of how likely they were to be written had I not been at the hotel]”:
Summary: Surreal Decisions [50%] (27)
An Extensive Categorisation of Infinite Paradoxes [80%] (-4)
On Disingenuity [50%] (34)
On Abstract Systems [50%] (14)
Deconfusing Logical Counterfactuals [75%] (18)
Debate AI and the Decision to Release an AI [90%] (8)

John Maxwell:
Courses taken:
Improving Your Statistical Inferences (21 hours)
MITx Probability
Statistical Learning
Formal Software Verification
ARIMA Modeling with R
Introduction to Recommender Systems (20-48 hours)
Text Mining and Analytics
Introduction to Time Series Analysis
Regression Models
“At the hotel, I was working on projects and self-study to prep for seeking a machine learning job. I think the 6 months I spent there helped my resume & skills become significantly stronger for this kind of job than they would’ve been otherwise. I also optimized for acquiring ML knowledge relevant to AI Safety and EA-related machine learning project ideas of mine, and this effort felt pretty successful. After my stay, I was unexpectedly offered a high-paying remote job which let me set my own hours, but didn’t have anything to do with machine learning. After extensive consideration of the pros & cons, I took the job. I’m now planning to do that part-time from a low cost of living location, and spend the rest of my time studying ML with a stronger AI Safety focus, plus writing up some ideas of mine related to AI Safety. Although the things I did at the hotel didn’t help me get this sweet remote job, the learning and thinking I did felt quite valuable on its own. My time spent at the hotel provided further evidence to me that I’m capable of self-directed study & research. I also decided that further direct optimization for industry career capital won’t help me a lot in thinking about AI Safety better; this was part of why I didn’t go for a machine learning role as originally planned. I’ve donated thousands of dollars to the hotel, and I’m happy to chat with donors considering donations of $1000 or greater regarding the pros & cons of the hotel as a giving opportunity.”

Anonymous 2:
Courses:
Probabilistic Graphical Models
Model Thinking
MITx Probability
LessWrong posts:
Annihilating aliens & Rare Earth suggest early filter (8)
Believing others’ priors (9)
AI development incentive gradients are not uniformly terrible (23)
EA Forum post:
Should donor lottery winners write reports? (29)

Anonymous 3:
Distance Functions are Hard (40; 14)
What are concrete examples of potential “lock-in” in AI research? (14; 8)
Non-anthropically, what makes us think human-level intelligence is possible? (10)
The Moral Circle is not a Circle (17)
Cognitive Dissonance and Veg*nism (6)
What are we assuming about utility functions? [1%] (17; 8)
8 AIS ideas [1%] (N/A)

Luminita Bogatean:
Course: Python Programming: A Concise Introduction [20%]

Samuel Knoche:
Code for Style Transfer, Deep Dream, and Pix2Pix implementations [5%]
Code for lightweight Python deep learning library [5%]

X-Risks related

Markus Salmela:
Coauthored the paper Long-Term Trajectories of Human Civilization [99%]

David Kristoffersson:
Incorporated Convergence [95%]
Applied for 501(c)(3) non-profit status for Convergence [non-profit status approved in 2019] [95%]
Built new website for Convergence [90%]
Designed a Convergence presentation (slides, notes) and delivered it at the Future of Humanity Institute [80%]
Defined a recruitment plan for a researcher-writer role and publicized a job ad [90%]
Organising the AI Strategy and X-Risk Unconference (AIXSU) [1%]

Rationality or community building related

Events hosted:
(Note that these will appear again below under the main organisers’/lecturers’ names)
EA London Retreats: Life Review Weekend (Aug. 24th – 27th 2018); Careers Week (Aug. 27th – 31st 2018); Holiday/EA Unconference (Aug. 31st – Sept. 3rd 2018)
EA Glasgow (March 2019)
Athena Rationality Workshop (June 2019) (retrospective)

Denisa Pop:
Researching and developing presentations and workshops in Rational Compassion: see How we might save the world by becoming super-dogs [0%]
Helped organise the EA Values-to-Actions Retreat [33%]
Helped organise the EA Community Health Unconference [33%]
Becoming Interim Community Manager at the hotel and offering residents counselling/coaching sessions (productivity & mental health) [0%]

Toon Alfrink:
EA Forum posts:
EA is vetting-constrained [10%] (96)
The Home Base of EA [90%] (12)
Task Y: representing EA in your field [90%] (11)
LessWrong posts:
We can all be high status [10%] (61)
The housekeeper [10%] (26)
What makes a good culture? [90%] (30)

Matt Goldenberg:
Organiser and instructor for the Athena Rationality Workshop (June 2019)
The entirety of Project Metis [5%]
Posts on LessWrong:
The 3 Books Technique for Learning a New Skill [5%] (125)
A Framework for Internal Debugging [5%] (20)
S-Curves for Trend Forecasting [5%] (87)
What Vibing Feels Like [5%] (9)
How to Understand and Mitigate Risk [5%] (47)

Global health and development related

Anders Huitfeldt:
Scientific Article: Huitfeldt, A., Swanson, S. A., Stensrud, M. J., & Suzuki, E. (2019). Effect heterogeneity and variable selection for standardizing causal effects to a target population. European Journal of Epidemiology. https://doi.org/10.1007/s10654-019-00571-w
Post on EA Forum: Effect heterogeneity and external validity
Post on LessWrong: Effect heterogeneity and external validity in medicine

Derek Foster:
Priority Setting in Healthcare Through the Lens of Happiness – Chapter 3 of the 2019 Global Happiness and Well-Being Policy Report published by the Global Happiness Council [99%].
Hired as a research analyst for Rethink Priorities [95%].

Kris Gulati:
Distinction in MU123 and MST124 (Mathematics modules) via the Open University.
Completed ‘Justice’ (Harvard MOOC; Verified Certificate).
Completed Units 1 (Introduction) and 2 (Mathematical Typesetting) of MST125 (Pure Maths module), The Open University.
Completed Unit 1 of M140 (Statistics), The Open University.
Completed Week 1 of GV100 (Intro to Political Theory), London School of Economics (auditing the module).

Animal welfare related

Max Carpendale:
Posts on the EA Forum:
The Evolution of Sentience as a Factor in the Cambrian Explosion: Setting up the Question [50%] (28)
Sharks probably do feel pain: a reply to Michael Tye and others [50%] (19)
Why I’m focusing on invertebrate sentience [75%] (48)
[The following are from after March 2019:]
Interview with Jon Mallatt about invertebrate consciousness [50%] (70; winner of 1st place EA Forum Prize for Apr 2019)
My recommendations for RSI treatment [25%] (42)
Thoughts on the welfare of farmed insects [50%] (18)
Interview with Shelley Adamo about invertebrate consciousness [50%] (37)
My recommendations for gratitude exercises [50%] (31)
Interview with Michael Tye about invertebrate consciousness [50%] (32)
Got a research position (part-time) at Animal Ethics [25%]

Rhys Southan:
Editing and partially re-writing a book on meat, the treatment of farmed animals, and alternatives to factory farming (as a paid job). Neither the book nor its author can be named yet, as a non-disclosure agreement was signed. [70%]
Wrote an academic philosophy essay about a problem for David Benatar’s pessimism about life and death, and submitted it to an academic journal. It is currently awaiting scores from reviewers. [10%]
“I got a paid job writing an index for a book by a well-known moral philosopher. This job will help me continue to financially contribute to the EA Hotel.” [20%]

Frederik Bechtold:
Received an (unpaid) internship at Animal Ethics [1%].

Saulius Šimčikas:
Posts on the EA Forum:
Rodents farmed for pet snake food [99%] (64)
Will companies meet their animal welfare commitments? [96%] (109; winner of 3rd place EA Forum Prize for Feb 2019)

Magnus Vinding:
Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique [99%].
Revising journal paper for Between the Species. (“Got feedback and discussion about it I couldn’t have had otherwise; one reviewer happened to be a guest at the hotel.”)
“I got the idea to write the book I’m currently writing (‘Suffering-Focused Ethics’) [50%]”.

Nix Goldowsky-Dill:
EA Forum Comment Prize ($50), July 2019, for “comments on the impact of corporate cage-free campaigns” (11)

*This is the total cost of the project to date, not including the purchase of the building (£132,276.95 including building survey and conveyancing).
