July 2023
Foresight Institute
Probably not my actual first thought, but pretty much, was that I didn't want to die. Life was amazing. I really loved my friends and family, and it pained me every day to think that they may not exist one day, and that I may not get to be close to them. Growing up in a suburb of Hamburg, there really wasn't very much to do about it at that time. I then got really into the philosophy of existentialism and just tried to find meaning in my own life.
Through that, I got hooked on more transhumanist ideas. I thought: hey, maybe there is something we can do long term about extending the boundaries within which we create meaning. Through that, I discovered that personal longevity is actually just one part of it, because civilization itself is not secure. When you're a child, that's not really something you think about very much. You just assume history is going to go on, and all you have to do is remain a part of it. When I first discovered existential risk scenarios, I realised: holy shit, it's much worse than I thought.
After that, I got really into existential risk and, in particular, AI. I focused my master's thesis in philosophy of science at the London School of Economics (LSE) on AI ethics, using John Rawls' reflective equilibrium as an approach. There I became part of the LSE EA community. I had read about EA before, but it took me some time to understand that this community also cared about x-risks. At first I was under the misconception that EAs mostly focused on very measurable things in the present, mostly in global health, like bed nets against malaria. But then I talked to a few people and realised that's only the first layer. Once you dig deeper, they care very much about the long-term risks too.
I wrote my thesis shortly after Superintelligence by Nick Bostrom came out, and I incorporated some of that into my thesis too. But my first exposure to all this was through the rationalist community. I read the whole Sequences (by Eliezer Yudkowsky), and I read most of MIRI's publications.
From there, I discovered Foresight online while doing my research. They were the only really optimistic organisation about the future I could find that focused on a wide set of technologies. Foresight was founded in 1986 based on the book Engines of Creation (by K. Eric Drexler). That was a fantastic, phenomenal book and still is. So I cold-emailed them: Hey, I really love what you're doing, and you seem deeply optimistic about the future. Nevertheless, you have a deeply scientific background. Can I help you somehow? Very much to my surprise, they said to come over to San Francisco for a kind of internship, which was supposed to last a few months. Since then, I've never left.
The Sequences and the rationality community led me to find Foresight, as Christine Peterson, who co-founded it, is on the advisory board of MIRI. So she was really immersed in that early stage. At the same time, being part of the EA community also helped. I think Foresight's community and focus are a little more tech-native and maybe more tech-optimistic than some of the EA communities. That's probably because most of our constituents are scientists and technologists. They're researchers or entrepreneurs deeply involved in science and tech in the fields of molecular nanotechnology, neurotechnology, computing, space, and biotech. They already have a relatively tech-oriented approach to the future and maybe a more optimistic one. But nevertheless, I think one interesting thing is that they all care very much about the long-term future, which you don't often find in the tech community. I think what we can contribute really well is this differential technology development approach, a term introduced by Bostrom a long, long time ago that never really got picked up until recently. It basically means you take a tech-driven approach where you speed up safety- and security-enhancing technologies rather than just generally accelerating technologies. I think what we can bring to bear across the different technological risk paradigms is the deep technical instinct of a community that cares a ton about this and is really good at creating security-first technologies.
For example, for AI, K. Eric Drexler, who co-founded Foresight, wrote the Agoric Open Systems papers more than 30 years ago with Mark Miller, who's now a Senior Fellow at Foresight. This was a very early exploration of a decentralised framework of multiple agents, some human and some AI, interacting in a very complex human-computer society through market mechanisms. I also think that Eric's later work on comprehensive AI services, which came out of the Future of Humanity Institute when he was there, is somewhat inspired by that. Mark has also worked deeply on this, more from a cryptography and security angle. I think we bring this different type of perspective to AI alignment, which is more of a security and cryptography mindset. I think that could actually benefit some of the alignment field, since the security and cryptography people think a lot about secure architecture design with multiple different agents cooperating and keeping each other in check. That's a different approach to thinking about safety, and I think it's one that's perhaps undervalued within EA. So I'm currently trying to bridge the cryptography and security communities a bit more with the traditional alignment communities. We wrote a book on this, and we are doing a workshop on it later this year.
Another angle we're exploring is the Whole Brain Emulation workshop with Anders Sandberg. As AI timelines shorten, there's something to be said for revisiting the whole brain emulation roadmap that was written in 2007. Can we speed up whole brain emulation to potentially be a differential technology that could help with AI safety? There are a bunch of different people who think this at least has a shot again, maybe also out of sheer desperation.
Those are the two AI safety-focused strategies. Then we have the existential fellowship where, because Foresight is very positive-futures oriented, we bring together a lot of risk-focused people who nevertheless have a positive approach to the future, and they slot in much more with our other technical programs. We are trying to create a bit more of a space in which risk- and policy-oriented people, for example in biotech, can actually talk to people who work on gene editing, for mutual discourse. So people working in biotech can learn a bit more about the long-term risks, and people working on the biosecurity and bio policy side can learn a bit more about the current technical capabilities. I think that's still a bit missing.
For each of our five technology areas (bio, nano, neuro, computing, space), we have one Google group consisting of between 150 and 400 people working in that field. That's a mix of researchers, entrepreneurs, and funders, and they meet for monthly virtual seminars where they update each other on what they are working on. People stay in the loop on what is happening at the fringes of that technology. Then we do annual in-person workshops and have fellowships in each of these areas, as well as prizes.
That's a multi-layered approach, very common for early-stage ecosystem building. I also often match people individually; for example, many of our fellows working on policy with people working on individual technologies. That takes a lot of time but makes tailored connections happen.
At the virtual seminars, in addition to asking how we can help with the development of the technology presented, I usually ask one question at the end of the technical presentation: what are the potential risks that you foresee with this? I then try to onboard them a bit into thinking about this. We also have the in-person workshops, which are, I think, really where the rubber meets the road, where we invite our fellows and try to really connect them around these topics. This year we're starting a session in each of these workshops that will focus specifically on potential differential technology approaches, meaning how to develop the safety-enhancing aspects of the technology first.
We did a survey of all of our technical groups and asked them: how many longtermist ideas do you know? Do you think it would be useful for you, the science and tech community, to engage more with EA ideas? Do you think it would be useful for EA to engage more with the science and tech realities that you work on?
And we had really quite stunning results. Over 80% said they would really want this to happen more and that they would be game to participate.
If people want to take part in our seminars, events, and fellowships, each has an application form on our website.