Trustees:

Greg Colbourn has been into Effective Altruism (EA) since the early days of the movement, signing the Giving What We Can pledge in 2010. He founded CEEALAR in 2018. In addition to helping to run CEEALAR, he is focused on trying to get a global moratorium on further development of AGI in place, to allow safety, alignment and interpretability research to catch up and to prevent (near-term) human extinction. CEEALAR welcomes applications from people wanting to work on this.

Florent Berthet – Florent co-founded EA France, founded and directed an alternative school in Lyon, taught entrepreneurship to engineering students for three years, and has been involved in various EA initiatives. After working as a project manager at a startup, he is now considering options for his next career move (ideally project management or community building for an impactful organisation).

Sasha Cooper has been involved with the EA movement in various capacities since its inception, and currently works as a software developer for Founders Pledge.

Advisors:

Laura Green – Laura co-founded EA France and ran the organisation as executive director. She previously studied Political Science (undergrad) and Economics (Master’s) at Sciences Po Paris. She is currently involved in a variety of EA projects around community building, European governance and improving institutional decision-making.

Kat Woods is the founder of the Nonlinear Fund, a longtermist meta charity. Before that Kat co-founded multiple high impact charities, including Charity Entrepreneurship (a charity incubator and cause prioritization research organization that helps identify and launch high impact charities in global poverty and animal welfare) and Charity Science Health (a GiveWell-funded charity which increases vaccinations in India using behavioral nudges).

Sanjay Joshi – Sanjay has a background in finance, and has previously qualified as an actuary and worked for a major credit agency. He currently focuses on impact investing, and oversees SoGive, a think tank researching ways to do good. He has also founded a pandemic prevention organisation, a for-profit Edtech company, and an AI-driven mental health chatbot. He was one of the first trustees of (and helped found) Effective Altruism UK.

Staff:

Greg Colbourn (Executive Director) — See above. Greg is unsalaried in this role.

Beth Anderson (Operations Manager, part-time) — Beth graduated from Durham University with a BA in French, Education and Sociology. At university, she founded a student group to raise funds and awareness for refugees, and enjoyed mobilising students to have a positive impact. Before joining CEEALAR she spent 3 years working at On Purpose, a social enterprise which develops leaders for the social impact space, in a role focussed on programme management, L&D and impact measurement. She is interested in inclusive community building and communicating EA ideas to new audiences.

David Staley (Operations Manager, part-time) — An autodidact, endlessly curious and passionate about (too) many things, David has spent over a decade trying to piece together a functional understanding of the global systems that are shaping our future. On his search for understanding he discovered LessWrong, and some years of lurking later had an opportunity to work on a small project supporting the AI Safety community. Through that he visited CEEALAR and first experienced the easy closeness and camaraderie that he came to associate with EA. He previously spent many years in the trenches at Tesco, where he developed a strong support mindset and an appreciation for the value of little things done well – a surprisingly valuable foundation for managing operations at CEEALAR.

Current Guests:

Michele Campolo Michele is doing research in AI alignment. He focuses on AI systems capable of reasoning critically about ethics and final goals. At CEEALAR he has learned more about mathematics and computer science, AI, ethics and metaethics, moral psychology, Effective Altruism and cause prioritisation. In 2020 he worked on the topic of goal-directedness with a team formed at AI Safety Camp. Previously, he obtained a bachelor’s degree in mathematics in Udine, Italy.

Onicah O. Ntswejakgosi – Onicah holds a First Class Bachelor of Arts in Chinese Studies from the University of Botswana and a Master’s degree in International Relations from the University of Seoul. She speaks English, Setswana, Chinese and Korean. Driven by her passion for social entrepreneurship, she founded a social enterprise called Mil’Oni Africa, an organization that works with youth to advance the SDGs. She is also the founder of a non-profit organization called Wailing Women, which empowers, equips and uplifts women and girls globally with a special focus on mental health and overall wellbeing; it has impacted thousands of women worldwide. Onicah is also a multiple award-winning debater of international repute, her most outstanding achievement being Best Female Speaker in Africa. As an adjudicator she has sat on many Chief Adjudication Panels, and she has equipped 40,000+ youth across Africa and Asia with critical thinking and effective communication skills. At CEEALAR, Onicah has collaborated with two co-founders (Ramika and Daniela) to co-found Demystify, an organization committed to the mental wellbeing and empowerment of women in Effective Altruism.

Siao Si Looi – Siao studied architecture with a focus on landscape urbanism and designing for conditions of uncertainty caused by climate change. Her master’s thesis was on rethinking architecture’s response to climate change to be about intervening instead of reducing impact.

Nia Gardner – Nia studied Computer Science and Economics at university and has worked as a software engineer for three years. She is offering her software engineering skills to other grantees, from help developing quantitative trading models to code reviews. She has been organising EA Manchester for two years and is currently organising a machine learning for alignment bootcamp in Germany.

Vinay Hiremath – Vinay previously studied computer science and recently graduated with a master’s degree in computational neuroscience. He hopes to better understand the connection between cognition in biological and artificial systems, and its relevance to the future of AI alignment and governance. He is currently building side projects to test the waters, and plans to start a PhD or other research-intensive position in 2022. Apart from that, he enjoys running in the mornings, cooking and baking vegan treats, playing with tech gadgets (while trying not to break them), and traveling (when not in a pandemic).

Special acknowledgement for his contributions to CEEALAR’s infrastructure.

Jaeson Booker Jaeson is an independent researcher in AI Technical Alignment. He received his BS in Applied Computer Science at Dominican University, and then worked on the founding team of several startup companies. He has previous experience in blockchain engineering and VR development. He’s currently focused on finding scalable solutions to the Alignment Problem.

Anonymous This grantee studied Artificial Intelligence at the University of Edinburgh (after attending an EA career planning event at CEEALAR in 2018). They went on to found a successful startup in the cryptocurrency/decentralized finance space. They are now interested in starting new ventures, both for-profit and non-profit. They “read PubMed for fun”, and currently leverage this by formulating novel approaches to Global Health and Development. They also intend to capture the low-hanging fruit and productivity gains associated with helping address some of the serious mental/physical health concerns found within the EA community itself. They also build quantitative trading models, are a frequent forecaster, and often lend assistance where possible to projects they feel are neglected or under-resourced (such as trying to reduce the likelihood of a take-off in insect farming).

Anonymous This grantee has a PhD in Theoretical Physics from Durham University on the topic of the AdS/CFT correspondence and black holes. He is working on skilling-up in programming/ML with the aim of transitioning into AI safety technical research. He first became interested in Effective Altruism around 2014 and became more active in the community while at Durham, taking on the role of EA Durham Co-President and facilitating an EA Oxford Intro Fellowship.

Anonymous This grantee studied Maths and Computer Science and is now working on a project with AI Safety Labs on “Quantifying the cost of assuming Markov rewards”. They want AI to ultimately be positive for society, with everyone involved in determining humanity’s future. That said, they first heard about EA through Giving What We Can, and are just as excited about that and animal welfare as they are about AI alignment work.

Éloïse Benito “I am 23 years old and French. Between 2017 and 2022 I studied computer science and obtained my master’s degree. I discovered effective altruism in 2018 through two French YouTubers, but I entered the community in March 2020. My main interests are anti-speciesism, AI safety and moral philosophy. I want to be an AI safety researcher; I participated in the first French AI safety bootcamp, am applying to bootcamps in this field, and may apply for a PhD.”

Stan Pinsent — Stan is a researcher at CEARCH, identifying neglected, cost-effective interventions in a number of fields. He has a quantitative background, but his stay at CEEALAR has allowed him to get immersed in writing, an area in which he wants to develop.

Bryce Robertson “I’m from New Zealand, where I received bachelor’s degrees in law and philosophy. In April this year I was blown away when I learned of GPT-4 and tried it for myself. Since that day I’ve more or less dropped everything (including closing my video agency) and made learning about AI my focus. I quickly discovered that while the future could be fantastic, the possibility of existential risks is significant – exacerbated by the lack of attention and resources AI safety and governance receive compared to AI capabilities. So I’ve decided to pivot my career to one that will have the maximum possible impact on society’s successful transition to a world with ASI. Currently looking for a suitable position.”

Past Guests:

Anonymous This is their second stay at CEEALAR – the first was as an Incubatee of the Charity Entrepreneurship Incubation Program, and this time they are upskilling in AI Governance and Policy. They are currently working on AI governance issues in international relations, leveraging their background in Political Science and Philosophy. Previously, they have done research on future generations, large-scale global health interventions and biosecurity threats caused by dual-use research of concern.

Samuel Knoche (Guest Representative, Aug-Oct 2021; Feb-Aug 2022) is a writer and an autodidact. He studied Computer Science for two years before becoming disillusioned with the higher education system and dropping out. He explains his decision in “The Case For Dropping out of College” published in Quillette. He is now studying to ultimately contribute to ML and AI safety research. He is also exploring ideas around EA strategy and philosophy.

Guillaume Corlouer Guillaume is an independent researcher in AI safety working on interpretability and reducing the risks of conflicts between AIs. He has a PhD in informatics from the University of Sussex on the topic of estimating information flow between neural time series in the human cortex during visual perception.

Hamish Hamish used to work in medtech and data science, but now he draws cartoons. He’s working on communicating and distilling ideas from the EA and AI Safety ecosystem through a range of media including videos and websites. Examples of his work can be seen at aisafety.world, and animations on the AXRP youtube channel.

Daniela Tiznado – Daniela is a Mexican food industry engineer. Her passion for the food industry and dedication to effective altruism drive her to explore innovative methodologies, such as alternative proteins, and to develop prevention systems that can mitigate the impact of catastrophes. She is an active member of the EA Mexico community and has volunteered at several EAG (Effective Altruism Global) events. She also volunteers for ALLFED (Alliance to Feed Earth in Disasters), contributing her skills and knowledge to addressing food-related challenges arising from global catastrophic risks.

Luminita Bogatean (Project Manager) — She is building skills through study and immersive experience in the areas of Project Management, AI Engineering and Experience Design (as an application of Decision Science and Cognitive Science).

Aaron Maiwald is an undergraduate in cognitive science, with a focus on mathematics and philosophy. During his last stay at the hotel, he created a German Podcast about effective altruism called “Gutes Einfach Tun”. Currently, he spends most of his time studying mathematics, machine learning and issues in AI Safety and Global Priorities Research.

Theo Knopfer – Theo is a former teacher who has transitioned into research. He is applying to PhD programmes in International Relations, while conducting research on EU anti-terrorism policy in military AI and biodefence. He is also the newsletter editor of EA France and a volunteer at Effective Thesis. He is passionate about learning and reading; he is a budding rationalist, a tabletop RPG fan, and so much more.

Jenny Liu Zhang [Guest representative, Nov 2021-Jan 2022] is a designer, developer, and organizer. She leads an initiative to create online mental health and self-reflection activities for young people. Currently she is operationalizing her research in wellbeing and mutual aid to share at the UN Internet Governance Forum 2021.

Special acknowledgement for her contributions to CEEALAR’s infrastructure.

Lucas Teixeira has a background in philosophy and social sciences. Currently studying computer science and mathematics and transitioning into AI Alignment.

Akash Arora has a background in computer science and social work. While living here he studied strategies to scale up mental health resources (primarily access to medicine and CBT) in low-income countries.

Severin T. Seehrich [Guest representative, Feb-Apr 2022] Severin holds an M.A. in philosophy and is passionate about all things mind, communication, and self-development. His first contact with Effective Altruism was in 2019. Since then, he has volunteered as a writer for the German Effective Altruism homepage, facilitated regular online circling sessions and occasional workshops for the EA/rationalist community, offered life coaching for EAs, and helped organize CHERI’s 2021 summer research program on existential risk. His current community building projects include organizing a summer solstice celebration for the UK EA community and founding a longtermist hub in Berlin.

Laura C — Laura has been involved in the community since 2015 and is currently doing operations for the Swedish EA regranting platform. She took over most of Denisa’s tasks as interim community manager while she was away.

Peter Barnett Peter has an MSc in theoretical physics, and is currently transitioning into AI safety work. He is also a research analyst at Nonlinear, researching ways to improve AI safety research. His hobbies include running, climbing, folk dancing, mandolin and boardgames.

Jennifer Waldmann Jennifer previously co-founded and co-directed EA Magdeburg. She studied Applied Statistics (B. Sc.) at Otto von Guericke University Magdeburg. She is passionate about intensive meditation practice and finding ways to reduce existential and suffering risks.

Oliver Bramford is training to become a mindfulness meditation teacher, and intends to teach mindfulness to EAs and other altruistic professionals. He expects to receive his teacher certification in February 2023 from the University of California, Berkeley. Oliver is passionate about fostering wisdom at the heart of power, and safeguarding a flourishing future of mind. He intends to work primarily with the longtermist community and adjacent groups. He is taking the year of 2022 to visit and learn from diverse intentional communities and develop his mindfulness services.

Jaeson Booker — Blockchain Engineer, Insight Data Science (LinkedIn). Jaeson has been involved in many crypto projects. He helped develop the curriculum for the world’s first university-level blockchain class while at Dominican University. He has assisted with teaching students how to create smart contracts, tokens, and decentralized applications. He has worked with Insight Data Science to create a project for incentivizing data science contributions using Ethereum. He founded BSV startup MOAT, and is co-founder of the Castle Company, working to scale the number of developer communities for blockchain engineers around the world. His girlfriend joined him at the hotel.

James Faville is an independent researcher looking into ways we can prevent astronomical future suffering of sentient beings. He has formerly interned at the Center for Reducing Suffering, Animal Charity Evaluators, and Wild Animal Initiative.

Simmo Simpson ran events with EA Taiwan (2016-8) before working in Ops for Rethink Charity (2018-20). He now works part-time for the longest running successful Authentic Relating organization — www.authrev.org. With the rest of his time, Simmo is working on community building, content creation and workshop facilitation to help people build lives which combine purpose and impact with eudaimonia, joy and longevity.

Nick Stares — is a computational social scientist interested in the future of economics, urbanism, and self-directed learning. Currently, he is studying models for non-authoritarian, participatory, and self-governing social systems.

Charlie Steiner “I’m a physics PhD (experimental condensed matter at UIUC) and long-time LW-er trying to think more seriously about value learning. I’m currently writing about evading Goodhart’s Law with induction, and about what we could possibly mean by superhuman standards for morality.”

Anonymous — An undergraduate in data science, currently researching allostasis in the context of cognitive science and social agential dynamics, who plans to continue studies and research in artificial intelligence.

Anonymous — They study mathematics at Imperial College London.

Heye Groß — is an autodidact and writer, recently having become very interested in the interplay of aging biology and effective altruism. He is currently investigating how longevity science can be seen from both a short-termist and a longtermist perspective, and how progress in the field will influence other cause areas. Other interests include learning how to learn and effective science communication. He also runs the ocean conservation nonprofit DEEPWAVE, which raises awareness of overfishing and of ocean and climate science among the public and politicians in Germany and abroad.

Guillaume Corlouer — Guillaume has studied pure mathematics and is doing research in computational neuroscience on the neural basis of consciousness. Within effective altruism he is very interested in global priorities research and AI safety, and is currently working on optimal philanthropy for existential risk reduction.

Quinn Dougherty — Quinn is a software developer who aspires to be a theoretical computer scientist one day. He read the Sequences in 2016 and quit a 7-year music career to study math. He’s a longtermist looking for levers to push on AI existential safety or the epistemic competence problem.

Quratulain Zainab — Q is currently doing an undergrad degree in mathematics and economics. Previously, she organized events for the EA Dubai group and then later virtual EA Middle East events. Her interests outside of effective altruism include programming and machine learning.

Derek Foster — has a background in philosophy, education, public health and health economics. While living here, he co-authored a chapter of the 2019 Global Happiness Policy Report, which focused on ways of incorporating subjective well-being into healthcare prioritization. He now works on animal welfare, mental health and grant-making for Rethink Priorities.

Jack Harley — Jack has studied neuroscience at the University of Oxford and has biotech experience. Jack has an interest in anti-aging research, and is writing scientific articles to communicate the science of anti-aging biotechnology. His goal is to increase the amount of interest in anti-aging research in the public and private sectors.

Anonymous — He is a senior software engineer volunteering for diverse EA organisations.

Kris Gulati Kris spent his time learning Math and applying for economics graduate school. He is primarily interested in the economics of science and innovation, as well as long-run growth/persistence.

Rhys Southan [Guest Representative, Mar 2020-July 2021] — a writer and philosopher with a focus on animal ethics and population ethics. He completed a master’s degree in philosophy at the University of Oxford in 2017. He has been published in the New York Times, Aeon Magazine and Modern Farmer. While here, Rhys researched and wrote on animal ethics, and planned to start a philosophy PhD in 2020.

Jason GL is writing a book on political economy. His goal is to lay out an internally consistent vision of how and why people can and should cooperate to optimize their total freedom. Before CEEALAR, Jason graduated from Harvard Law School and worked as a litigator for consumers, employees, and tenants, and as an advisor for homeless shelters and local governments. After CEEALAR, Jason will look for opportunities to bridge the worlds of law and data science. His hobbies include board games, guitar, singing, kayaking, and mixing cocktails.

Jesse Parent is a researcher, writer, and strategist interested in understanding what influences an agent’s capacity for understanding. As part of an interdisciplinary research lab, he manages projects investigating computation, cognition, and the developmental processes influencing learning and comprehension in both natural and artificial systems. He is also a consultant with a group focused on open-sourcing AI research. Jesse enjoys discussing the significance of mentoring and the ethics surrounding technological advances. You may hear City Pop playing in his room, or see him running alongside the ocean. Jesse intends to pursue graduate research in computational and cognitive sciences.

David Kristoffersson “I’m co-founder of Convergence and an existential risk strategy researcher. I have a background as R&D Project Manager and Software Engineer at Ericsson and have worked with FHI. My plan is to define a strategic perspective, research agenda, and research program on the overall question of how to ensure a beneficial future. The intention is to point at structures in research for ensuring a good long-term future, to illuminate important gaps, and to give better language for talking and thinking about these questions. This is a complex undertaking and I only expect an early version of this research structuring to result from it.”

Aron Mill “Hey everyone, my name is Aron. I am from Germany (currently Berlin) and I recently graduated as a mechanical engineer. Since then I have been doing research for the Alliance to Feed the Earth in Disasters (ALLFED), and I hope to contribute to sustaining civilization and thus our long-term future. I love learning and enjoy music deeply. Please feel free to contact me with an interesting read, a topic for discussion or some artists.”

Ronny Fernandez — is a PhD student in philosophy at Rutgers University. He is working on developing and empirically testing disagreement resolution techniques. He is also studying foundational mathematics, ML, and RL, in hopes of contributing to the field of AI alignment in the future.

Davide Zagami [Guest Representative, July 2019-Feb 2020] — is currently studying and thinking about AI Alignment. Some of his time is allocated to acquiring better knowledge of Machine Learning. His last participation in AI Safety Camp culminated in a workshop paper about wireheading. He previously worked at RAISE, where he developed lessons for an online course about AI Safety. He majored in Computer Engineering with a concentration in game theory and game development, and wrote his thesis on the Internet of Things.

Justin Shovelain — the founder of the quantitative long-term strategy organisation Convergence. Over the last seven years he has worked with MIRI, CFAR, EA Global, Founders Fund, and Leverage, and done work in EA strategy, fundraising, networking, teaching, cognitive enhancement, and AI safety research. He has an MS degree in computer science and BS degrees in computer science, mathematics, and physics.

Michael Aird is currently contracting as a researcher/writer for Convergence, which does existential risk reduction strategy research. He previously received a First Class Honours degree in Psychology, published a peer-reviewed paper, and worked as a teacher. He’s passionate about effective altruism, longtermism, and research communication.

Max Carpendale “I’m doing research on invertebrate sentience, suffering, and related subjects. I believe the subject is impactful because invertebrates are much more numerous than vertebrates, yet they receive very little attention from animal advocates. I also think that having a better understanding of sentience (especially edge cases like invertebrates) might help with issues related to digital sentience in the future. I also write about subjects that seem important to the EA community that I feel well-placed to write about. For example, I recently wrote a guide on managing repetitive strain injuries for EAs.”

Markus Salmela — studies human health, philosophy and social sciences. He has worked on research projects relating to existential risks and long-term forecasting, and has also organised EA events. He is currently writing about longevity research from an existential risk perspective.

Linda Linsefors — an independent AI Safety student and researcher. She has previously completed a PhD in quantum cosmology, organised an AI Safety Camp, and interned at MIRI. Linda is currently learning more ML and RL, and also thinking about wireheading and the relation between learning and theory, among other things, including organizing AI Safety events.

Fredi Backtoldt “I’m studying philosophy at Goethe Universität Frankfurt, currently writing my master’s thesis on the Demandingness Objection to ethical theories. On the side, I started volunteering for Animal Ethics, where I now also do an internship. The hotel with its great atmosphere helps me to put my values into action, and that’s what I’m trying to do here!”

Tom Frederik Lieberum “I graduated with a B.Sc. in Physics from RWTH Aachen this September. My engagement in EA has been co-organizing EA Aachen since 2018 and participating in the German Effective Altruism Network (GEAN). While at the hotel, I am studying the RAISE course material to prepare for a switch to studying AI full-time, and doing remote work for GEAN. I also want to improve my life through debugging and habit building. My interests include, but are not limited to: AI, philosophy, rationality, hiking and Badminton.”

Tom Cares — an entrepreneur and political activist. He is creatively plotting to facilitate and guide implementations of liquid democracy, that would hinder the ability of powerful interests to shape public policy to exploit the weak. He is also working to create a new class of financial instruments to drive investment into personal potential.

Toon Alfrink — is the founder of RAISE, a startup which aimed to upgrade the pipeline for junior AI Safety researchers, primarily by creating an online course. He co-founded LessWrong Netherlands in 2016. He has given talks about EA and AI Safety, addressing crowds at various venues including festivals and fraternities. He worked part-time on managing the project, using his experience of living in a Buddhist temple as a reference for creating the best possible living and working environment.

Gavin Leech “I’m a data scientist at a giant insurer. I write an EA blog here. I like talking about the far future, machine learning, analytic philosophy, statistics as applied epistemology, tech solutions to social problems, and social solutions to tech problems. During my first stay at Athena I worked on the Prague AI Safety Camp and wrote my first LessWrong piece, on technological unemployment. I’ll be back!”

Mathilde Guittard “Senior Associate at Invesco (finance), I work on big real estate projects around the world. It is a bit of an earning-to-give situation that allows me to donate monthly to EA charities. I try to spread the word. I also care about social entrepreneurship and finding effective ways to improve global health (mostly through nutrition), in order to reduce poverty and human suffering and increase equality of opportunity, so the best people can make the world a better place. Still pondering whether my next move will be in impact investment or in global health.”

John Maxwell “I’m a software developer and entrepreneur, and I’ve been thinking about effective altruism for nearly ten years now. I co-founded MealSquares, a nutritionally complete food bar company, and I have a degree in computer science from UC Berkeley. At the hotel, I am focused on acquiring deeper knowledge of machine learning and thinking about AI safety (this essay of mine won $2000 in the AI alignment prize).”

Evan Sandhoefner — graduated from Harvard in 2017 with a degree in economics and computer science. He worked as a program manager at Microsoft for a short time before leaving to pursue EA work directly. For now, he’s independently studying a wide range of EA-relevant topics, with a particular interest in consciousness.

Alexandr Nil “I came to know and embrace effective altruism through the writings of David Pearce. Alas, I’m in the Hotel only for ten days – an official “vacation” from my earning-to-give software developer job in Berlin. While I’m here, I’m finishing a project proposal related to Pearce’s Abolitionist Project, working on a blog-post draft, deciding whether and how to switch to ~direct EA work (including considering the Hotel Manager role), continuing to volunteer for LEAN’s editorial team, and experiencing a better way of living an EA life in general.”

Alexandra Johnson “I’m a current graduate student and researcher with an interest in policy and operations. I’ve been involved with effective altruism for the past 3 years or so. I have a degree in engineering and I’m currently in an operations role with Convergence Analysis, focused on existential risk strategy, while also working at Lawrence Berkeley National Laboratory on health related topics. Previously, my research work has spanned health and animal welfare.”

Rafe Kennedy — works on macrostrategy & AI strategy and studies maths and statistics, with the goal of contributing towards AI Safety. Previous work as a grantee has included game-theoretic modelling of AI development and visualisations of statistical concepts. He holds a master’s in Physics & Philosophy from Oxford and has previously worked as a software engineer at a venture-backed data science startup. After his stay with us, Rafe went on to work at MIRI.

Saulius Šimčikas — a Research Analyst at Rethink Priorities, mostly working on topics related to animal welfare. Previously, he was a research intern at Animal Charity Evaluators, organized Effective Altruism events in the UK and Lithuania, and earned to give as a programmer. Living in the hotel helped him to focus on work.

Arron Branton — moved from London to Blackpool, quitting his job to focus on learning programming full time. He worked on creating a video game for the Google Play Store and Apple’s App Store, which was planned for release in April 2019. He has since moved on to join ‘The Singularity Group’, a community founded by the gaming personality ‘Athene’ to raise money for effective charities through video-game livestreaming and development.

Chris Leong — currently focusing his research on infinite ethics, but his side-interests include decision theory, anthropics and paradoxes. He helped found the EA society at the University of Sydney and managed to set up an unfortunately short-lived group at the University of Maastricht whilst on exchange. He represented Australia at the International Olympiad in Informatics and won a Gold in the Asian Pacific Maths Olympiad. He’s studied philosophy and psychology and occasionally enjoys dancing Salsa.

Rory Švarc — Rory self-studied machine learning and economics at the hotel after completing graduate studies in philosophy at Cambridge, with the hope of transitioning into an EA research role. After his stay with us, Rory went on to the Center on Long-Term Risk, and then Longview Philanthropy.

Hoagy Cunningham — graduated from Oxford in 2017 with a degree in Politics, Philosophy and Economics, and is now teaching himself all the maths, neuroscience and computer science he can get his hands on that might point the way towards a future of safe AI. He currently works for RAISE, porting Paul Christiano’s IDA sequence to their teaching platform and adding exercises.

Roshawn Terell — an AI researcher, information theorist and cognitive scientist who works to build bridges between distant fields of knowledge. He is mostly self-taught, having worked on multiple research projects, and has had the opportunity to lecture at Oxford, presenting his ideas to postdocs and postgraduates. He is presently applying his cognitive science theories to developing more sophisticated artificial intelligence.

Edward Wise — became interested in Effective Altruism at Oxford University, and aims to research the interaction between the ethics of effective altruism and left-wing political philosophy.

Matt Goldenberg [Guest Representative, Apr-Jun 2019] — a community builder and serial entrepreneur. His current research is on the systematization of creating impactful organizations.

Jaime Sevilla — independent researcher with a background in Mathematics and Computer Science. “I stayed for a week in the hotel, where I did some shallow research on open source game theory. This exercise was useful for me to explore the possibility of and ultimately decide against working independently on technical AI safety research.” Jaime later went on to intern at the Future of Humanity Institute and has received a grant from the Effective Altruism Foundation to support his work.

Felicity & Max Reddel — “We are finishing up our B.Sc.s in Artificial Intelligence and are currently working on theses on AI governance topics. Max is inspecting the balance between short- and long-term efforts, and Felicity is working on deepfakes and trying to evaluate their societal importance.”

Nuño Sempere — “I first stayed at the EA Hotel during September 2018, learning about development policy and global poverty, and working on a randomized trial for the European Summer Program on Rationality, on which I continued working for the next year. Although the project ultimately failed, I acquired a breadth of knowledge and skills, which I value. My second stay was at the end of October 2019; I programmed an implementation of proportional approval voting for the Center for Election Science (none existed before), and attended the AI Safety Learning by Doing Workshop, which I’ve so far found valuable.”

Morgan Sinclaire — Morgan’s research focuses mostly on AI safety. Having finished their MS in math, they moved here at the end of July to focus on important research full-time. They are still forming their own models of AI safety, and hope to develop a coherent research agenda. On the side, they have also published one post on the Alignment Forum and two posts on the EA Forum in the month they’ve been here.


Denisa Pop [Community Manager] — For more than 10 years she has been passionate about how humans can get in sync with each other and with the rest of the planet, and how we, as humanity, can overcome our excessive fight-or-flight response and instead have a balanced autonomic nervous system. She believes science and spirituality can work well together in finding these answers. Having worked as a scientist (PhD) and as a counseling psychologist, she has now moved into being a living part of the experiment while serving as community manager. She is also passionate about bringing people together through larger events, such as workshops and conferences, and hopes to organize more again in the future.


Ruyi Shi — a current LSE student majoring in Philosophy, Logic and Scientific Method. She is passionate about philosophy and has wide interests across the social sciences, including social anthropology, sociolinguistics, and the intersection of behavioral economics and psychology. Her main career aspiration lies in academia, but she is also exploring other potential pathways – intellectually stimulating, intrinsically motivating and socially impactful – to develop alongside and beyond academia. Ruyi has considerable previous engagement with EA ideas and affiliated organizations. She enjoys arts and crafts, creative trinkets, language learning and traveling. She is an ardent explorer with an insatiable curiosity to “know the causes of things”, an abstract thinker with a highly analytical mind who wants to make a difference in this world, and an introvert who genuinely values deep and meaningful connections.

Sam Deverett — Sam is participating in AI Safety Hub’s summer research program, leading a team focused on designing mechanisms for mitigating collusion between AI systems. He has a Bachelor’s in Data Science from UC Berkeley and a couple of years of work experience in tech as a Data Scientist and Machine Learning Engineer.

Harmony Hart — “I want to work with people and organizations reducing suffering and risks to humanity’s long-term future. I care about effective reasoning and effective altruism. I’m currently working as the Communications Manager at Family Empowerment Media. With degrees in journalism and English, I have twenty years of experience as a writer and editor. Outside work, I like to watch birds and people, cold plunge, and dance contact improv. I’ve practiced a form of interpersonal meditation called circling for five years, and deeply value honest human relating.”

Bruno Parga — “I have sometimes heard that I should write a book about my life. I have worked behind the scenes at a Presidential inauguration, had the police called on me for doing my job as a Latino person, lived through a country’s worst social unrest ever, and had to flee another country within three days when a war broke out. Now I want some goddamned peace and tranquility to research AI safety.”

Aditya SK — Aditya works with Animal Ethics on raising concern about wild animal suffering and anti-speciesism. His work primarily focuses on promoting the field of welfare biology in Indian academia. He also works with ALLFED on GCR preparedness, in the CRM and fundraising team. Aditya has a background in animal protection law from the National Academy of Legal Studies and Research, and has been actively involved with multiple animal advocacy organisations in India to raise concern about animal suffering. He has been involved with EA since 2018 and is the organiser of EA Hyderabad. His interests lie in cause prioritisation, animal welfare, s-risks and EA community building.

Chris Lonsberry — Chris is an experienced professional with a background in application support and sales support for the PI System software suite. He has been active in the EA community since 2020, and last year he decided to move to part-time work in order to explore high-impact career options in areas such as global priorities research, entrepreneurship, AI safety governance, and community building. Chris enjoys learning and working with big ideas: in the past 12 months he has studied AI safety, algorithms, climate change, coaching, cybersecurity, longtermism, machine learning, and philosophy. He is co-founding a local EA group and volunteers at AI Safety Quest and his local bicycle co-op. Passionate about staying active, Chris enjoys inline skating, cycling, and running in his free time.

Since his time at CEEALAR, Chris has become the Director of AI Safety Quest, where he regularly interacts with other CEEALAR guests and alums.

Shardul Dabir — Shardul is currently the Director of EA India, via a National Community Building Grant (CBG) from the Centre for Effective Altruism (CEA). He is a mid-career professional with an educational background in engineering, applied sciences, management, entrepreneurship, and liberal arts from India’s leading national and private universities, and ~6 years of work experience spanning banking, innovation, consulting, administration, operations, and – most importantly – field/ecosystem building. The last of these came primarily through 3.5 years at the Good Food Institute (GFI India), the only EA-aligned organization to have built a relatively solid case study of successfully creating a field/ecosystem around an EA-aligned cause area – alternative protein – by adapting and contextualizing it for India from the ground up.

He joined GFI India in 2018 as one of its first full-time core team members, helping to set up and scale the organization. He subsequently helped build GFIdeas India, a community/professional network with 1,200 members as of 2023, and talent-pool-building initiatives like ISPIC, which drew 1,800+ participants from 500+ universities over 2019–2022.

  • Participant, EAGx Virtual, 2020 & 2022
  • Participant, EA India Retreat hosted by Indian Network for Impact (INI) in August 2022
  • Volunteer, EAGx Singapore, September 2022
  • Volunteer, EAGx India, January 2023
  • Participant, EA India community builders retreat, January 2023
  • Participant, Charity Entrepreneurship retreat, India, January 2023

Please send us a short bio if you are staying (or have stayed with us) and would like to appear here.