We'll be hosting our next virtual social on September 10th. Then, on September 19th, we'll be taking a field trip as a group and going hiking together (we'll pick a location close to the city that optimizes for waterfall sightings)! Also, on September 28th, we'll host our first reading group discussion! "Reading group" you ask in utter puzzlement and bamboozled bewilderment? Well, scroll down for deets! Of course, we have more events planned, so make sure you follow us on Facebook or Meetup for all the latest updates!
|
Do you know of any career openings at companies in NYC or the greater NYC area that may be of interest to EAs? We're compiling a job board, and we'd love to enlist your help! If you have any leads at EA-aligned organizations, fill out this form and we'll broadcast the opportunity to the community!
|
Career Coaching and 1-on-1 Advising
|
Looking to change jobs but unsure where to start? Thinking of donating but not certain what the latest charity recommendations are? Wanna chat about EA to clarify your understanding of a specific topic? (We know... the AI stuff can be a bit confusing.)
We're here to help!
As the community organizers, it's our job to make sure you feel confident and comfortable navigating the Sea of EA (which is not a real place - don't get your hopes up). To that end, if you're interested in talking with us about anything related to EA and how it pertains to you and your choices, you should absolutely let us know!
We're friendly, and we promise that we'll do our best to help!
|
Aaron has been reading The Honor Code by Kwame Anthony Appiah, who writes The Ethicist column for The New York Times. Appiah proposes a different framing for moral progress, and he uses four historical examples (the end of the duel in England, the end of foot binding in China, the abolition of slavery in the US, and the ongoing struggle to end honor killings in the Middle East) to support the idea that honor can play a profound role in moral revolutions.
Arushi has been reading Human Compatible by Stuart Russell, the founder of CHAI (the Center for Human-Compatible AI). In a sort of spiritual successor to Superintelligence, Russell makes the case for advanced AI as an existential risk. He also proposes an interesting approach to solving the AI control problem and aligning AI's interests with ours - very simply put, his solution is to build in a preference for always asking humans what they want. Sometimes the simplest and most obvious solutions really are the best!
|
Yours until the malevolent AI takeover, A&A