Artificially Intelligent

Any mimicry distinguishable from the original is insufficiently advanced.

  • CFAR Retrospective

    1434 words

    I recently read CFAR Workshop in Hindsight, which had the following tl;dr: I believe it changed me in subtle ways and improved several skills (mostly “meta skills”) marginally. This differs quite a bit from the somewhat overblown expectations I had before the workshop, but I’m still quite certain the workshop...

  • My Routine

    1121 words

    Recently, I’ve been about as productive as I’ve ever been. One reason for this is my current routine, which can roughly be described as “systematize literally everything”. I currently work from home. Here’s a snapshot of what my routine looks like. Monday-Saturday: ~0800: wake up; 0800-0900: eat breakfast, respond...

  • Be Responsible

    570 words

    In high school, seniors go on a seven-day class-wide camping trip. When my turn came, I was placed into a group of 28, which we split into groups of four. Each of these groups was responsible for one night of dinner. My group of four decided that we wanted...

  • A Math Student's Guide to Options

    2709 words

    Disclaimer: This represents slightly more than everything I knew about options when I put my entire net worth into SPY puts in March. I learned most of this in about 30 minutes of reading. This introduction should be accessible to someone who knows statistics and calculus, but little finance. The...

  • Be Goal-directed

    546 words

    A question that I’ve recently grown fond of is “what’s your goal?”. Usually, this question arises when someone is describing a course of action and wondering whether it’s good or bad. Sometimes, during this process, people get trapped in a mindset where they think the course of action is either good...

  • Does SGD Produce Deceptive Alignment?

    4898 words

    Crossposted from the AI Alignment Forum. Deceptive alignment was first introduced in Risks from Learned Optimization, which contained initial versions of the arguments discussed here. Additional arguments were discovered in this episode of the AI Alignment Podcast and in conversation with Evan Hubinger. Very little of this content is...

  • Expected Money at Augur

    706 words

    A contract that pays out $1 if Trump loses is currently available for about 65 cents on Augur. If you think Trump has an 80% chance to lose, this means you can approximately 1.2x your money, in expectation, in about a month. This is a good deal. The Kelly Criterion says you...
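
    A minimal sketch of the arithmetic in this excerpt, in Python: the 65-cent price and 80% probability come from the post, while the helper function and its name are my own illustration, not code from the post.

      # Expected multiple and Kelly fraction for a binary contract that
      # costs `price` and pays out $1 if it wins.
      def kelly_fraction(price: float, p_win: float) -> float:
          b = (1.0 - price) / price    # net odds received per $1 staked
          return (b * p_win - (1.0 - p_win)) / b

      price, p_win = 0.65, 0.80
      print(f"expected multiple: {p_win / price:.2f}x")              # ~1.23x, the "approximately 1.2x"
      print(f"Kelly fraction:    {kelly_fraction(price, p_win):.2f}")  # ~0.43 of bankroll

    Under these numbers, the full Kelly stake works out to roughly 43% of one's bankroll.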

  • France Bet Postmortem

    398 words

    On the 18th of October 2020, I assigned a 50% chance that France’s daily COVID cases would not exceed 40,000 by the end of October. My friend Sidney assigned a 70% chance, so we bet on it. As of October 22nd, I have lost this bet. Abram Demski recommends that people...

  • Miscellaneous Mediocre Models

    1404 words

    The world is very complicated. It is easier to engage with the world productively if you have simple models of how various parts of it work. These are some of the models I use. Some of them are more grounded in reality than others. Acids/Bases. Epistemic status: probably literally true...

  • The Solomonoff Prior is Malign

    5098 words

    Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. This argument came to my attention via this post by Paul Christiano. I also found this clarification helpful. I found these counter-arguments stimulating and have included some discussion of them. Very little of this content is...