LessWrong TTS Posts Collection

A curated collection of LessWrong posts discussing Text-to-Speech technology, audio narration, and converting text to audio
15 posts collected | Compiled on 2026-03-03

LessWrong audio: help us choose the new voice

We make AI narrations of LessWrong posts available via our audio player and podcast feeds:

LessWrong (Curated & Popular)
Curated posts and posts with 125+ karma.
https://pod.link/1630783021
LessWrong (30+ Karma)
Curated posts and posts with 30+ karma.
https://pod.link/1698192712

We’re thinking about changing our narrator’s voice.

There are three new voices on the shortlist. They’re all similarly good in terms of comprehension, emphasis, error rate, etc. They just sound different—like people do.[^ge3cmoqx46a]

We think they all sound similarly agreeable. But, thousands of listening hours are at stake, so we thought it’d be worth giving listeners an opportunity to vote—just in case there’s a strong collective preference.

Listen and vote


Please listen here:

https://files.type3.audio/lesswrong-poll/

And vote here:

https://forms.gle/JwuaC2ttd5em1h6h8

It’ll take 1-10 minutes, depending on how much of the sample you decide to listen to.

Don’t overthink it—we’d just like to know if there’s a voice that you’d particularly love (or hate) to listen to.

We’ll collect votes until Monday December 16th. Thanks!

Other feedback?


We’re always keen for general feedback on the narration service. Please do share thoughts on the form, or in the comments on this post.

[^ge3cmoqx46a]: We periodically review the various text-to-speech services, with a particular focus on ElevenLabs, Speechify, Azure, Amazon, BeyondWords, OpenAI, Murf.ai, Play.ht, Deepgram and a few open source libraries. The current shortlist was created based on factors like performance, reliability, customisation options and price. We won’t consider other voice options this winter, but if there are other voices you particularly like, please do let us know, and we’ll make sure they’re considered in future reviews. ↩︎

Things Solenoid Narrates

I spend a lot of time narrating various bits of EA/longtermist writing.

The resulting audio exists in many different places. Surprisingly often, people who really like one thing don't know about the other things. This seems bad.[^shn4hjix8r]

A few people have requested a feed to aggregate 'all Solenoid's narrations.'

Here it is. (Give it a few days to be up on the big platforms.) I'll update it ~weekly.[^4nkler6hjwr]

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/7307867c2730c95494dca239b81f782a12e0ed6b5962a222.png)

Solenoid Narrates

And here's a list of things I've made or am working on, shared in the hope that more people will discover more things they like:

Human Narrations


  • ~920 episodes so far, including all non-paywalled ACX posts and SSC archives going back to 2017, with some classic posts from earlier. Archive. Patreon.
  • Human narrations of all the Curated posts. Patreon.
  • Narrations of most of the core resources for AISF's Alignment and Governance courses, and a fair few of the additional readings. Alignment, Governance.
  • Many pages on their website, plus their updated career guide.
  • This is now AI narrated and seems to be doing perfectly well without me, but lots of human narrations of classic EA forum posts can be found in the archive, at the beginning of the feed.
  • I'm not making these now, but I previously completed many human narrations of Metaculus' 'fortified essays'.
  • I did about half the narration for Radio Bostrom, creating audio versions of some of Bostrom's key papers.
  • Miscellaneous: lots of smaller things. Carlsmith's Power-seeking AI paper, etc.

AI Narrations


Last year I helped TYPE III AUDIO to create high-quality AI narration feeds for EA Forum and LessWrong, and many other resources.

  • Every LessWrong post above 30 karma is included on this feed. Spotify.
  • Every EA Forum post above 30 karma is included on this feed. Spotify.
  • Also: ChinAI, AI Safety Newsletter, Introduction to Utilitarianism.

Other things that are like my thing


There's a partially complete (ahem) map of the EA/Longtermist audio landscape here.

The Future

I think AI narration services are already sharply reducing the marginal value of my narration work. I expect non-celebrity[^tesgpfwp3s] human narration to be essentially redundant within 1-2 years. There's no denying that AI narration has some huge advantages, and this is probably a good thing. I dance around it here.

Once we reach that tipping point, I'll probably fall back on the ACX podcast and LW Curated podcast, and likely keep doing those for as long as the Patreon income continues to justify the time I spend.

[^shn4hjix8r]: I bear some responsibility for this, first because I generally find self-promotion cringey[^t5r35wdcuxb] and enjoy narration because it's kind of 'in the background', and second because I've previously tried to maintain pseudonymity (though this has become less relevant considering I've released so much material under my real name now).

[^4nkler6hjwr]: It doesn't have ALL episodes I've ever made in the past (just a lot of them), but going forward everything will be on that feed.

[^tesgpfwp3s]: As in, I think they'll still pay Stephen Fry to narrate stuff, or authors themselves (this is very popular.)

[^t5r35wdcuxb]: Which is not to say I don't have a little folder with screenshots of every nice thing anyone has ever said about my narration...

How do I read things on the internet

This is a linkpost - I recommend reading it at the original URL for a better reading UX

I have a somewhat elaborate process for reading things that I find on the web. I've been inspired to share it because after many a long iteration it finally feels adequate!

Reading things on the web seems like it should be easy, and yet - I've been failing at it for years! 🙀

In this article I explore my current workflow and challenges that made it into what it is today.

Workflow


![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/87f4a55b3486f9b55a07b6a3f5a7acee9bc326acbbaa012a.png)

The goal of the workflow: enable yourself to reliably read the things you want to read (and retain learnings from them) while minimizing the effort and attention spent.

Discovery


This is the part of the pipeline that has received relatively less optimization attention, mostly by virtue of me suffering from an abundance of content rather than scarcity. I include it primarily for completeness’ sake.

Some ways in which discovery is happening for me are as follows:

  • Follow-up from previous things I read
* People often link related content * The Ampie extension is helpful for discovering the broader conversation about a given piece, which includes links to related content
  • I like aggregator newsletters because they introduce an additional layer of curation over raw subscription streams like RSS
  • I'm also subscribed to a few "normal" newsletters
  • I'd love to see a Goodreads-style platform created for discovering and tracking articles
* Ampie gives me some of the same benefits, but less systematically * Curius.app is a more social version of Hypothes.is and also covers some of the same ground
Reading Inbox
When I encounter something that I think would be worth my attention - I save it to Readwise Reader
> * This also serves as a trigger to create an SRS item in Roam

The first problem of reading things on the internet is that there are too many things out there one is tempted to read.

Even if you have a good curation process there is always too much content and too little time.

My first approach to managing the reading inbox was to keep things I wanted to read in endless browser tabs and breathe a sigh of relief when my browser crashed and all the open tabs disappeared.

When I noticed that this process didn't actually achieve the goal of helping me read the things I wanted to read, I started pushing myself to add things to Pocket/Instapaper to have a clear backlog of things to read.

  • Which put me in a situation where I had hundreds of articles in Instapaper instead of as persistently open tabs (somehow that only marginally impacted the number of open tabs I had 😅)
The result wasn't amazing - I went from not reading things and having them eat into my attention to not reading them and forgetting about them.

Arguably it was an improvement, as attention is an important and scarce resource, but as the point of this workflow is to help me actually read things instead of collecting the things I wish I had read - it was a failure.

A better way to direct my limited attention was called for! And I found it in Spaced Repetition.

Spaced Repetition in Roam


When a piece is added to Readwise Reader - a Roam "block" for it is automatically created
> * It's tagged with to/read and configured to become an SRS Card

The core pillar of directing my attention programmatically is Spaced Repetition - I use it extensively for inbox processing, engaging with content over time and developing habits.

How does this work:

  • When the item is originally added to Roam - it's scheduled for a review in one of the next few days
  • During my daily reading time, I review each suggested item. If I want to read it, I do so right then. Otherwise, I either:
* reschedule it further into the future * mark it as "done" if I'm not interested in the piece anymore

It proved to be a good match for reading inbox handling. The above process has the effect of:

  • keeping the things I want to read salient
  • sorting them by excitement - things that I'm repeatedly not excited to read end up scheduled exponentially further in the future
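The exponential deferral above is described only informally; as a rough sketch (the function name and data shape are my own invention, not the author's actual Roam setup), the behaviour could look like:

```python
from datetime import date, timedelta

def reschedule(item: dict, today: date, wants_to_read: bool) -> dict:
    """Exponential deferral: each time an item is skipped, its next
    review lands twice as far in the future, so unexciting items
    drift out of the daily queue on their own."""
    if wants_to_read:
        item["status"] = "read-now"
    else:
        item["interval_days"] = item.get("interval_days", 1) * 2
        item["due"] = today + timedelta(days=item["interval_days"])
    return item

# An item skipped three days in a row ends up scheduled 8 days out.
item = {"title": "Some post", "interval_days": 1}
for day in range(3):
    item = reschedule(item, date(2024, 1, 1 + day), wants_to_read=False)
```

Items you keep deferring fall back at 2, 4, 8, 16... days, which is what keeps the exciting ones salient and the rest out of the way.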
Listen to content in audio form first
For any new piece of content I want to engage with - listen to an audio version of it first
> * This is often sufficient to get what I was hoping for from a piece ✅
* If not - it serves as a first-pass skim read before deeper engagement

This is one of the core pillars of my reading flow — I think reading things in audio form is underappreciated.

Audio form dramatically extends the range of environments and situations in which it's convenient for you to read.

  • I listen to audiobooks, podcasts and TTS versions of articles when I bike to places, do chores and sometimes even while taking a shower (though I've been avoiding the latter lately).
  • This allows me to read more - in fact it increases my reading throughput to a degree that I can first-pass read things faster than I find new things to read!
Read it ~once~ mindset

An important stepping stone to making audio form work well for me was overcoming the “only read a given thing once” mindset.

What I mean by that is that when I originally started using TTS to read things - after listening to an article - I felt like "I read this, I'm done with it and don't need to engage with it anymore".

And while that's actually true for many types of content (opinion pieces, news articles, fiction) - I found that for deeper, more technical pieces, just listening to something once often wasn't quite satisfactory. I wanted to highlight paragraphs, add notes, play with presented models.

As a consequence I was avoiding listening to content, as I had a vague sense of unease: "but what if it's a piece I want to engage with more deeply, and by listening to it, I'd lose an opportunity to derive full benefit from it".

Eventually I realized that it was silly 🙃

My new process is:

  • listen to all the content that comes my way first
* this is a sufficient level of engagement for a large chunk of what I want to read
  • for things that need deeper engagement
* put them on top of the queue of to-read things * read them again (likely in text form this time), highlight and annotate them, play with the models they present, find follow-up reading
Spaced Repetition reminds me to engage with the piece until I mark it as fully processed
> * See also use multiple levels of processing information for learning & understanding

I've been previously using a custom automation setup that allowed me to create a podcast feed of transcribed articles from things saved to Instapaper (https://github.com/Stvad/pollycast/ ).

I've since transitioned to mostly using Readwise Reader TTS.

The reason for having a custom setup was better UX for playing audio inside podcast apps and better voice quality. Reader TTS has gotten both of those to a "good enough" stage.
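The internals of a setup like pollycast aren't described here, but the core trick is small: wrap per-article audio files in an RSS 2.0 feed that any podcast app can subscribe to. A minimal stdlib sketch (the function name, titles and URLs are illustrative, not taken from the actual project):

```python
import xml.etree.ElementTree as ET

def build_feed(title: str, episodes: list[dict]) -> str:
    """Build a minimal RSS 2.0 podcast feed: one <item> per narrated
    article, each pointing at its MP3 via an <enclosure> tag (the
    element podcast apps actually use to locate the audio)."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    for ep in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep["title"]
        ET.SubElement(item, "enclosure",
                      url=ep["mp3_url"], type="audio/mpeg",
                      length=str(ep["bytes"]))
    return ET.tostring(rss, encoding="unicode")

feed = build_feed("My article narrations", [
    {"title": "Some saved article",
     "mp3_url": "https://example.com/a.mp3", "bytes": 1234},
])
```

Serve the resulting XML from any static host and the podcast app handles playback position, speed and queueing for free.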

In-depth reading


For items that survived this far in the pipeline - I'll read them at a designated reading time (usually)

I do most of my reading on an iPad and use Readwise Reader or Hypothes.is for annotation purposes.

Highlights and notes sync to Readwise, then to Roam. The next day I review them in Roam, converting notes that I want to engage with more into SRS items for ongoing review.

Support structures


Some things that I found helpful for making focused reading something I do reliably and productively:

Incorporate “focused reading time” to be a stable part of my daily routine

  • I devote 30+ focused minutes to reading every day after lunch.
Getting an iPad and using it as my primary reading device
  • Having a dedicated device with good reading UX affordances elevates the overall experience considerably.
  • I've primarily used a phone or a laptop prior to getting a tablet, but both provide a subpar experience for reading.
  • I've also had a Kindle for a while, but I found it frustrating to use for web and PDF content and so it mostly languished in my drawer.
  • Physical books have nice aesthetics, but overall unsatisfactory UX, so reading a print book is something that I'd do occasionally, but it requires additional effort/accommodation for the sake of the experience.
Beeminder was very useful as a way to introduce daily reading habit
  • But when I relied just on Beeminder - it was an "effortful" habit. What I mean by that is - I'd do it because I committed to it, but it wasn't part of a routine, which constantly made me scramble to fulfill Beeminder requirements last moment.
  • Making it into a predictable routine is what made it “effortless”, Beeminder has applied optimization pressure to help me get there though.
A switch in mindset: from treating the TTS listen of an article as necessarily the final step in the process to treating it as potentially an intermediate step of the pipeline

Things I'm still unhappy about


Taking notes alongside reading

  • I have a keyboard case for the iPad, but I read things in portrait mode and it's annoying to get it in and out of the case each time I want to take a note.
* Plausibly I should just have an external keyboard on-hand
  • Handwriting sucks
  • voice keyboards are meh
  • Audio notes are created out of context and I need to manually tie them to the original piece later
Having the SRS for what I should read live in an external app (Roam) is a bit awkward
  • The two places are disconnected after the original SRS item creation, so I need to mark any given article as read twice - once in the SRS system and once in Reader.
  • Ideally a domain-specific SRS implementation would be a part of the reading app experience.
  • Generally - incorporating learnings from this WF into one tool would be great.
* Readwise Reader is getting there, but still has a ways to go before it'd be an ideal reading app for me. * I make do by building automations and tweaks around the core experience. But the degree to which I can do that is limited and makes me wish once again for a world with more malleable software

Conclusion


Overall I'm finally happy with this workflow, which prompts me to share it 🙂.

I imagine some of its aspects are peculiar to how I interact with the information out there. But I hope that people can adopt chunks of the workflow that work well for their peculiarities. And that if you recognize some of the struggles I went through in yourself - you may find my solutions useful.

If you do give it a try or if you have your own peculiar ways of interacting with the information you find on the internet - I'd be curious to know


Misc


More things I do or have tried around reading

Display highlights & notes on the page when I visit it at a later point

When I revisit a web page that I've previously read - I want to be able to see how I have interacted with it - see highlights I've made, notes I've taken etc.

There are several tools that afford for that, but I haven't found a perfect solution so far.

  • Readwise Reader has a browser extension that allows you to save things to read later and annotate the web-page in-place
* It'd then display the highlights and notes when you visit the page later. Unfortunately it'd only be the annotations taken within the reader. * I hope that eventually they'll have a better integration with "Readwise 1.0" which is what I use to manage all my highlights from different sources.
  • Hypothes.is
* Inherently displayed as part of the original page. But you have to remember to trigger it to see the annotations. * Only highlights made in Hypothes.is are displayed. * Browser extension to augment your browsing experience with additional context * Theoretically this can support annotations from arbitrary sources; in practice it currently only pulls in highlights from an Instapaper export.

Failed experiments

I experimented with improving iPad web annotation UX

spritz speed reading
  • I was interested in reading using Spritz/RSVP (at least for the first pass)
  • It's currently redundant though, as audio serves the role of "first pass/quick skim".
  • And if the piece needs a second pass - I want to engage with it deeper.
Copying things into Roam and reading them there
  • The idea was to use Roam's linking facilities for a deeper engagement with the piece. Roam is not a great environment to read in though, and copying fidelity is subpar.
Kindle

Announcement: AI Narrations Available for All New LessWrong Posts

TYPE III AUDIO is running an experiment with the LessWrong team: for the next few weeks, all new LessWrong posts will be available as AI narrations.

You might have noticed the same feature recently on the EA Forum, where it is now an ongoing feature. Users there have provided excellent feedback and suggestions so far, and your feedback on this pilot will allow further improvements.

How to Access


On Post Pages

Click the speaker icon to listen to the AI narration:

The speaker icon is located beneath the title and author, next to the post's publication date.

Podcast Feeds

Perrin Walker (AKA Solenoid Entity) of TYPE III AUDIO will continue narrating most curated posts for now.

Send us your feedback.


Please send us your feedback! This is an experiment, and the software is improved and updated daily based on user feedback.

You could share what you find most useful, what's annoying, bugged or difficult to understand, how this compares to human narration, and what additional features you'd like to see.

  • For comments on a specific narration, use the feedback button on the audio player or visit t3a.is
  • For general feedback or suggestions, please comment on this post or email us at lesswrong@type3.audio.
  • Writers interested in having their work narrated or requesting a particular narration, please contact team@type3.audio.
Is this just text-to-speech on posts?

It's an improvement on that.

We spoke with the Nonlinear Library team about their listeners' most-requested upgrades, and we hope our AI narrations will be clearer and more engaging than unimproved TTS. Some specific improvements:

  • Audio notes to indicate headings, lists, images, etc.
  • Image alt-text is narrated.
  • Specialist terminology, acronyms and idioms are handled gracefully. Footnotes too.
  • LaTeX math notation handled gracefully in all cases and narrated in some simple cases.
  • We skip reading out long URLs, academic citations, and other things that you probably don't want to listen to.
  • Episode descriptions include a link to the original post. According to Nonlinear, this is their most common feature request!
  • More podcast feed options.
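As an illustration of the kind of preprocessing the list above implies (this is not TYPE III AUDIO's actual code; the length threshold and placeholder text are assumptions), skipping long URLs before synthesis might look like:

```python
import re

def strip_long_urls(text: str, max_len: int = 30) -> str:
    """Replace long URLs with a short spoken placeholder so the
    narration doesn't read out every path segment. Short URLs
    (often memorable domains) are left for the voice to read."""
    def repl(m: re.Match) -> str:
        url = m.group(0)
        return url if len(url) <= max_len else "[link]"
    return re.sub(r"https?://\S+", repl, text)

text = ("See https://example.com/some/very/long/path?with=query&p=1 "
        "and https://t3a.is")
spoken = strip_long_urls(text)
```

The same pass-before-synthesis approach generalizes to the other items: swapping image tags for their alt-text, expanding acronyms, and inserting spoken cues at headings.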
We'd like to thank Kat Woods and the team at Nonlinear Library for their work, and for giving us helpful advice on this project.

[Mostly solved] I get distracted while reading, but can easily comprehend audio text for 8+ hours per day. What are the best AI text-to-speech readers? Alternatively, do you have other ideas for what I could do?

For me, the physical act of scanning words takes active focus compared to the analogue in listening, which is automatic. (I don't think my comprehension or engagement is lower when listening, to be clear).

I've tried the naturalreaders.com 'pro' version, but experienced a few issues:

  • It separates the text into small chunks and pauses between each one. It aims to separate them by sentence, but sometimes splits at a comma, and occasionally after a lone word (both of which make my intuition assume a new sentence has started).
  • The voice doesn't seem to know what it's reading, fails to emphasize what should be, etc. It's nowhere near the quality of Solenoid Entity's readings of the sequences, for example.
As a result, I think my brain doesn't register this AI-read text as 'something to listen to,' so it takes some active focus to continue listening, and eventually my focus shifts to something else while the audio keeps playing in the background. This does not happen with human-read text.

Anyone who can help me with this might have a high potential impact, since I'd be listening to text for a large portion of my day, and I'm trying to do everything I can to help with alignment.

I Converted Book I of The Sequences Into A Zoomer-Readable Format

If I (a 19 year old male) texted "www.readthesequences.com" to my roommate, the probable outcome is that he would skim the site for under a minute, text back something like "seems interesting, I'll def check it out sometime", and then proceed to never read another word. I have another friend, one that I would consider a smart guy. He would consistently rank above me in our high school's math team, and he scored in the 1500's (≥3SD) on his SATs. The same dude _did not read a single book_ during the entirety of his high school career.[^1]

Attention is one's scarcest resource, and actually _reading_ something longer than a paragraph is a trivial inconvenience, especially for my generation.

What, then, _does_ manage to hold the fickle eyeballs of zoomers like me? Well, TikTok, mostly. However, there _is_ one (very popular) genre of TikTok video worth investigating. In this genre of video, a Reddit post is broken into sub-paragraph chunks of text, and these chunks are sequentially rendered onscreen while a text-to-speech program reads them to the user. The text is overlaid upon a background video, which is either gameplay from the mobile game _Subway Surfers_, or parkour footage from _Minecraft_. The background gameplay provides engaging novelty to the user's visual cortex, while the synthetic voice ensures that the user doesn't have to go through the hard work of translating symbols into sounds. Really, it's all quite hypnotizing.

The fact that these videos are often recommended by TikTok's algorithm implies that they are among the most-engaging videos that our civilization produces. Therefore, to reduce the effort-cost of reading the sequences, I gave the TikTok treatment to Book I ("Map and Territory") of _Rationality: From AI to Zombies_.

(__Update__: Circa 2023-02-09, all these links are dead. This was in response to an AWS alert notifying me that 85 gigabytes (or more) of data had been transferred out. I really shouldn't have used a public S3 bucket to serve video in the first place, as it exposed me to an unacceptable amount of risk in the form of a denial of wallet attack. I've got a second batch of videos in the works, which I intend to distribute via a more secure mechanism.)

(__Update 2__: THE VIDEOS ARE HERE, CLICK HERE)

Predictably Wrong

Fake Beliefs

Noticing Confusion

Mysterious Answers

Interlude: The Simple Truth

Do whatever you want with these videos. I may or may not convert the other 5 books of R:AZ, and I may or may not upload them to TikTok. If you want another work of text converted to video, please pitch it to me in the comments, or DM me.

[^1]: No, not even the books assigned in English class. He used SparkNotes.

How and why to turn everything into audio

If you love podcasts and audiobooks and find yourself occasionally facing that particular nerd-torture of discovering that an obscure book isn’t available on Audible, read on.

I’m kind of obsessed with listening to content (hence building the Nonlinear Library), and there are easy ways to turn pretty much all reading materials into audio, including most books and even blog posts, like LessWrong.

In this post I’ll share my system to turn everything into audio and my rationale for people who haven’t yet discovered the joys of reading with your ears.

If you’re already sold on listening to everything, skip to the section “Apps I use and recommend” for the practical nitty-gritty of how to turn everything into audio.

Read while doing other things


Have you ever reluctantly dragged yourself away from a really engaging EA post so that you could make dinner or drive to work? Have you ever procrastinated on doing the household chores because what you’re reading is so much more exciting and probably higher impact too, now that you think about it? If you have an audio version, you don’t have to choose between reading and chores: you can keep reading as you cook or travel. It’s a way to actually productively multi-task.

Generally, if you convert your books to audio, you can read at times when your mind is not occupied, but holding a physical book would be inconvenient. For example, you can read while you are:

  • Commuting to work
  • Exercising
  • Cleaning your house
  • Running errands
  • Traveling
  • Cooking
  • Brushing your teeth
  • Showering (yes, I do do this. And yes, it is awesome)
With audiobooks, you can consume content almost all the time, if you want. This is great if you’re a bibliophile whose bookshelves and internet browsers are overflowing with enticing unread material: you can get through more content and make the more mundane parts of your day more interesting.

Listen and read at the same time


A little-known life hack: you can read a book with your eyes and listen to the audio version at the same time. It’s called immersive reading or two-channel reading. I find that this requires much less energy than reading on its own. It’s more like a movie, where if your attention flags, it’ll draw your mind back in. There’s also something about engaging more of your senses. When I do this, I can stay focused for longer and reading feels more relaxing. So, if you’d like to read but are feeling tired or are struggling to read something important but kind of dry, try listening and reading at the same time.

How to convert text to audio
============================

Lots of people like the idea of listening to books and articles, but aren’t sure how to get texts into audio format. Here’s what I look for in a text-to-speech app, and some recommendations of specific apps to try.

What I look for in a text-to-voice app


In my opinion, the best text-to-voice apps display the text you’re reading as they play the audio, and allow you to skip around by clicking on the text. This means that if you don’t catch something, or your attention wanders, you can easily go back to the last part you heard. You can also skim the text and skip forward if you’re bored or if the current section isn’t relevant.

Here’s an example of what I mean:

![](https://i.imgur.com/RakAXAZ.png)

On voices, sometimes people give up on these apps very quickly because they find the reading voices annoying.

Firstly, if you haven't tried in a while, try again. The voices have gotten way better in the last couple of years. Something to do with AI improving at a completely non-terrifying pace. Nothing to worry about there...

Secondly, if you find the voices weird to start with, I suggest that you persevere for a while - most people get used to the voices after only a few hours. I even feel affectionate and nostalgic towards some of the weird robot voices in my apps - I’ve read so many books that way!

I also have an Android so I don’t know what apps are best if you have an iPhone. If you have any recommendations for people with different phones, please post them in the comments!

In fact, make recommendations for any apps that you use in the comments. These apps work well enough for my uses, but they’ve all got their own issues and I make no claim that they’re the best out there.

Apps I use and recommend


Evie for books on your phone

My favorite app for listening to books is Evie. It’s free if you’re not fussy about voices. It costs money if you want to pay for nicer voices.

Unfortunately, it’s only available on Android, and it only works on DRM-free books. Most ebooks that you buy on e-readers like Kindle are DRM-protected. There are two solutions to this:

  • Buy your book the usual way, then get a DRM-free version from LibGen.
  • Remove the DRM from Kindle books using Epubor Ultimate, turn it into a PDF, then open it in Evie.

@Voice for articles on your phone

@Voice is my favorite app for listening to articles on your phone. It’s also free. It allows you to make playlists of articles that you want to read. Also, like Evie, you can see the text as you listen and skip around.

Natural Reader for articles on your computer

Natural Reader is my go-to for listening to articles on a computer. It’s more difficult to pause than @Voice and Evie, and you can’t make playlists, but it’s convenient if you’re on a computer and want to listen as you read.

Other apps


The Nonlinear Library uses Beyondwords.io and Amazon Polly voices. These work well, but they’re designed for industrial use and don’t work as well for personal use. They’re better suited to converting a regular source of content for a large number of people and sending it to podcast platforms. If you want to do this for your personal use, Tayl is probably better, though I don’t have as much experience with it.

Lots of people like Audible, and the voices are recorded by humans rather than generated by text-to-speech algorithms. However, you can only listen to audiobooks that they have in their collection, so you can’t find an article and listen to it with Audible. Additionally, it’s expensive and doesn’t show you where you are in the text. However, if you have a strong preference for non-robotic voices, this might be the app for you.

Kindle has an immersive reading mode that allows you to listen and read at the same time if you own both the ebook and audiobook. You can often buy the audiobook at a discount if you buy it along with the ebook. This is more expensive than using Evie but might be a good option for people who prefer human readers.

Many people really like Pocket, so you might want to give it a shot. For me and many others I’ve asked, it’s glitchy to the point of unusability. The main problems for me are that if you pause, it’ll often lose your place, and that it doesn’t have the feature where it highlights the text as you read and allows you to start from where you want. Its playlist feature is also extremely rigid.

You can get EA-related audio content from the Nonlinear Library


These apps are what I use, but honestly, they’re still quite bad. For whatever reason, the TTS industry seems terrible, which is part of why I decided to build the Nonlinear Library which automatically turns top rationalist content into podcast format. The Nonlinear Library now has separate feeds for the EA Forum, LessWrong, and the Alignment Forum, as well as “top of all time” playlists featuring around 400 of the most upvoted posts from each forum for your binging pleasure.

This post was written collaboratively by Kat Woods and Amber Dawn Ace as part of Nonlinear’s experimental Writing Internship program. The ideas are Kat’s; Kat explained them to Amber, and Amber wrote them up. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise.

If you would be interested in working with Amber to write up your ideas, fill out this form.

Announcing the LessWrong Curated Podcast

You can now listen to LessWrong Curated posts in podcast form on Spotify, Apple Podcasts, Audible, ~and~ ~Libsyn~ and BuzzSprout (which has an RSS feed, so it's available everywhere).

![](https://39669.cdn.cke-cs.com/rQvD3VnunXZu34m86e5f/images/3aae6a674c4ece7adbbeb756ae536358b91a4a7cd55cccfe.png)

So far, the last 5 curated posts are available, written by Eliezer Yudkowsky, Duncan Sabien, lsusr, and Paul Christiano.

This is created and recorded by Solenoid Entity, who has spent the last five years editing the SSC podcast (succeeding Jeremiah as narrator and publisher in 2020) and also makes the more recent Metaculus Journal Podcast. I reached out to him last week[^tkmgjbtemwh] with an offer to do this work and he has quickly produced some excellent recordings, which I'm very grateful for.

This is a new experiment and project, and so these 1-2 weeks are a great time to give me and Solenoid Entity feedback about what you like and dislike about the podcast, what would make it better for you, your experience as an author having your writing narrated, etc. You can leave comments here anytime, or talk to us via the intercom chat in the bottom right of the screen, or PM me personally via any channel.

Below are the 5 current available LessWrong Curated Podcasts.

[^tkmgjbtemwh]: Hat Tip to Tamera Lanham and Mattieu Putz for the suggestion at dinner!

New: use The Nonlinear Library to listen to the top LessWrong posts of all time

Update #1: It’s a rite of passage to binge the top LessWrong posts of all time, and now you can do it on your podcast app.

We (Nonlinear) made “top of all time” playlists for LessWrong, the EA Forum, and the Alignment Forum. Each contains around 400 of the most upvoted posts.

Update #2: The original Nonlinear Library feed includes top posts from the EA Forum, LessWrong, and the Alignment Forum. Now, by popular demand, you can get forum-specific feeds. Stay tuned for more features: we’ll soon be launching channels by tag, so you can listen to specific subjects, such as longtermism, rationality, animal welfare, or global health. Enter your email here to get notified as we add more channels.

Below is the original explanation of The Nonlinear Library and its theory of change.

* * *

We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs.

In the rest of this post, we’ll explain our reasoning for the audio library, why it’s useful, why it’s potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.

Listen here: Spotify, Google Podcasts, Pocket Casts, Apple, or elsewhere

Or, just search for it in your preferred podcasting app.

Goal: increase the number of people who read EA research
========================================================

A koan: if your research is high quality, but nobody reads it, does it have an impact?

![](https://lh4.googleusercontent.com/IKxCQlaUa9K-WrhrRwT4u3u9YmVjFoQgaO2xzDiULoxYv9bRTYN7Se98gAdrQQzA5AgGRYwx-qGToxof-pgovWMWEA-aSi9MNJYYp-ZZF8c5mm6XQHGJrSQsV2QawQA4iFCxoPb-=s1600)

Generally speaking, the theory of change of research is that you investigate an area, come to better conclusions, people read those conclusions, they make better decisions, all ultimately leading to a better world. So the answer is no. Barring some edge cases (1), if nobody reads your research, you usually won’t have any impact.

Research → Better conclusion → People learn about conclusion → People make better decisions → The world is better

Nonlinear is working on the third step of this pipeline: increasing the number of people engaging with the research. By increasing the total number of EA and rationalist articles read, we’re increasing the impact of all of that content.

This is often relatively neglected because researchers typically prefer doing more research to promoting their existing output. Some EAs seem to think that if their article was promoted one time, in one location, such as the EA Forum, then surely most of the community saw it and read it. In reality, it is rare for more than a small percentage of the community to read even the top posts. It’s an expected-value tragedy when a researcher puts hundreds of hours of work into an important report that only a handful of people read, dramatically reducing its potential impact.

Here are some purely hypothetical numbers just to illustrate this way of thinking:

Imagine that you, a researcher, have spent 100 hours producing outstanding research that is relevant to 1,000 out of a total of 10,000 EAs.
> Each relevant EA who reads your research will generate $1,000 of positive impact. So, if all 1,000 relevant EAs read your research, you will generate $1 million of impact.
> You post it to the EA Forum, where posts receive 500 views on average. Let’s say, because your report is long, only 20% read the whole thing - that’s 100 readers. So you’ve created 100 × $1,000 = $100,000 of impact. Since you spent 100 hours and created $100,000 of impact, that’s $1,000 per hour - pretty good!
> But if you were to spend, say, 1 hour promoting your report - for example, by posting links on EA-related Facebook groups - to generate another 100 readers, that would produce another $100,000 of impact. That’s $100,000 per marginal hour, or ~$2,000 per hour taking into account the fixed cost of doing the original research.
> Likewise, if another 100 EAs were to listen to your report while commuting, that would generate an incremental $100,000 of impact - at virtually no cost, since it’s fully automated.
> In this illustrative example, you’ve nearly tripled your cost-effectiveness and impact with one extra hour spent sharing your findings and a public system that turns your report into audio for you.
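The arithmetic above can be sketched in a few lines. This is a toy model using the post’s purely hypothetical figures; every number is illustrative, not real data:

```python
# Toy model of the illustrative impact arithmetic. All figures are the
# post's hypothetical numbers; nothing here is real data.

HOURS_RESEARCH = 100       # hours spent producing the research
IMPACT_PER_READER = 1_000  # dollars of impact per relevant reader

def impact_per_hour(readers: int, extra_hours: float = 0) -> float:
    """Total impact divided by total hours invested."""
    total_impact = readers * IMPACT_PER_READER
    return total_impact / (HOURS_RESEARCH + extra_hours)

baseline = impact_per_hour(100)                   # forum readers only
promoted = impact_per_hour(200, extra_hours=1)    # +1 hour of promotion
with_audio = impact_per_hour(300, extra_hours=1)  # +100 free audio listeners

print(baseline)    # 1000.0 dollars of impact per hour
print(promoted)    # ~1980.2
print(with_audio)  # ~2970.3 -- nearly triple the baseline
```

The "nearly tripled" claim falls out directly: the audio listeners add $100,000 of impact at zero marginal hours, so cost-effectiveness rises from $1,000 to roughly $2,970 per hour.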

Another way the audio library is high expected value is that instead of acting as a multiplier on just one researcher or one organization, it acts as a multiplier on nearly the entire output of the EA research community. This allows for two benefits: long-tail capture and the power of large numbers and multipliers.

Long-tail capture. The value of research is extremely long tailed, with a small fraction of the research having far more impact than others. Unfortunately, it’s not easy to do highly impactful research or predict in advance which topics will lead to the most traction. If you as a researcher want to do research that dramatically changes the landscape, your odds are low. However, if you increase the impact of most of the EA community’s research output, you also “capture” the impact of the long tails when they occur. Your probability of applying a multiplier to very impactful research is actually quite high.

Power of large numbers and multipliers. If you apply a multiplier to a bigger number, you have a proportionately larger impact. This means that even a small increase in the multiplier leads to outsized improvements in output. For example, if a single researcher toiled away to increase their readership by 50%, that would likely have a smaller impact than the Nonlinear Library increasing the readership of the EA Forum by even 1%. This is because 50% times a small number is still very small, whereas 1% times a large number is actually quite large. And there’s reason to believe that the library could have much larger effects on readership, which brings us to our next section.
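To make the large-numbers point concrete, here is a toy comparison; the readership figures are made up purely for illustration:

```python
# Illustrative only: made-up readership numbers showing why a small
# multiplier on a large base beats a large multiplier on a small base.

single_researcher_readers = 500  # one researcher's hypothetical audience
whole_forum_readers = 500_000    # hypothetical total forum readership

gain_from_50_percent = single_researcher_readers * 0.50  # +250 readers
gain_from_1_percent = whole_forum_readers * 0.01         # +5,000 readers

print(gain_from_50_percent)  # 250.0
print(gain_from_1_percent)   # 5000.0 -- 20x larger
```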

Why it’s useful
==================

EA needs more audio content


EA has a vibrant online community, and there is an amazing amount of well researched, insightful, and high impact content. Unfortunately, it’s almost entirely in writing and very little is in audio format.

There are a handful of great podcasts, such as the 80,000 Hours and FLI podcasts, and some books are available on Audible. However, podcast episodes come out relatively infrequently, and new audiobooks even less often. There are a few other EA-related podcasts, including one for the EA Forum, but a substantial percentage have gone dormant - a common fate, given the considerable effort required to put out episodes.

There are a lot of listeners


The limited availability of audio is a shame because many people love to listen to content. For example, ever since the 80,000 Hours podcast came out, a common way for people to become more fully engaged in EA has been to mainline all of its episodes. Many others got involved through binging the HPMOR audiobook, as Nick Lowry puts it in this meme. We are definitely a community of podcast listeners.

![](https://lh4.googleusercontent.com/tllNGUvT0bJJH2MCIJQrWynBm7R3pHS8MVgszQBI0yiAE7yr0V-sZ9nYA-aVwRbps99sOJGM8G0FOFUItnRMk9Un-CFxC94gOmkz1Tq0yLVXdNGuHiv4sDZuoTgNZVA8IvUtSSND=s1600)

Why audio? Often, you can’t read with your eyes but you can with your ears. For example, when you’re working out, commuting, or doing chores. Sometimes it’s just for a change of pace. In addition, some people find listening to be easier than reading. Because it feels easier, they choose to spend time learning that might otherwise be spent on lower value things.

Regardless, if you like to listen to EA content, you’ll quickly run out of relevant podcasts - especially if you’re listening at 2-3x speed - and have to either use your own text-to-speech software or listen to topics that are less relevant to your interests.

Existing text-to-speech solutions are sub-optimal


We’ve experimented extensively with text-to-speech software over the years, and all of the dozens of programs we’ve tried have fairly substantial flaws. In fact, a huge inspiration for this project was our frustration with the existing solutions and thinking that there must be a better way. Here are some of the problems that often occur with these apps:

  • They are glitchy, frequently crashing, losing your spot, failing at handling formatting edge cases, etc.
  • Their playlists don’t work or exist, so you’ll pause every 2-7 minutes to pick a new article to read, making it awkward to use during commutes, workouts, or chores. Or maybe you can’t change the order, like with Pocket, which makes it unusable for many.
  • They’re platform specific, forcing you to download yet another app, instead of, say, the podcast app you already use.
  • Pause buttons on headphones don’t work, making it exasperating to use when you’re being interrupted frequently.
  • Their UI is bad, requiring you to constantly fiddle around with the settings.
  • They don’t automatically add new posts. You have to do it manually, thus often missing important updates.
  • They use old, low-quality voices, instead of the newer, way better ones. Voices have improved a lot in the last year.
  • They cost money, creating yet another barrier to the content.
  • They limit you to 2x speed (at most), and their original voices are slower than most human speech, so it’s more like 1.75x. This is irritating if you’re used to faster speeds.
In the end, this leads to only the most motivated people using the services, leaving out a huge percentage of the potential audience. (2)

How The Nonlinear Library fixes these problems


To make it as seamless as possible for EAs to use, we decided to release it as a podcast so you can use the podcast app you’re already familiar with. Additionally, podcast players tend to be reasonably well designed and offer great customizability of playlists and speeds.

We’re paying for some of the best AI voices because old voices suck. And we spent a bunch of time fixing weird formatting errors and mispronunciations and have a system to fix other recurring ones. If you spot any frequent mispronunciations or bugs, please report them in this form so we can continue improving the service.

Initially, as an MVP, we’re just posting each day’s top upvoted articles from the EA Forum, Alignment Forum, and LessWrong. (3) We are planning on increasing the size and quality of the library over time to make it a more thorough and helpful resource.

Why not have a human read the content?


The Astral Codex Ten podcast and other rationalist podcasts do this. We seriously considered it, but it’s just too time consuming, and there is a lot of written content. Given the value of EA time, both financially and counterfactually, this wasn’t a very appealing solution. We looked into hiring remote workers, but that would still have cost at least $30 an episode, compared to approximately $1 an episode via text-to-speech software.

Beyond being cheaper, text-to-speech also lets us build a far more complete library. If we did this with humans and invested a ton of time and management, we might be able to convert seven articles a week. At that rate, we’d never be able to keep up with new posts, let alone include the historical posts that are so valuable. With text-to-speech software, we can potentially keep up with all new posts and convert the old ones, creating a much more complete repository of EA content. Just imagine being able to listen to over 80% of the EA writing you’re interested in, compared to less than 1%.

Additionally, the automaticity of text-to-speech fits with Nonlinear’s general strategy of looking for interventions that have “passive impact”. Passive impact is the altruistic equivalent of passive income, where you make an upfront investment and then generate income with little to no ongoing maintenance costs. If we used human readers, we’d have a constant ongoing cost of managing them and hiring replacements. With TTS, after setting it up, we can mostly let it run on its own, freeing up our time to do other high impact activities.

Finally, and least importantly, there is something delightfully ironic about having an AI talk to you about how to align future AI.

On a side note, if for whatever reason you would not like your content in The Nonlinear Library, just fill out this form. We can remove that particular article or add you to a list to never add your content to the library, whichever you prefer.

Future Playlists (“Bookshelves”)
===================================

There are a lot of sub-projects that we are considering doing or are currently working on. Here are some examples:

  • Top of all time playlists: a playlist of the top 300 upvoted posts of all time on the EA Forum, one for LessWrong, etc. This allows people to binge all of the best content EA has put out over the years. Depending on their popularity, we will also consider setting up top playlists by year or by topic. As the library grows we’ll have the potential to have even larger lists as well.
  • Playlists by topic (or tag): a playlist for biosecurity, one for animal welfare, one for community building, etc.
  • Playlists by forum: one for the EA Forum, one for LessWrong, etc.
  • Archives. Our current model focuses on turning new content into audio. However, there is a substantial backlog of posts that would be great to convert.
  • Org specific podcasts. We'd be happy to help EA organizations set up their own podcast version of their content. Just reach out to us.
  • Other? Let us know in the comments if there are other sources or topics you’d like covered.
Who we are
==========

We're Nonlinear, a meta longtermist organization focused on reducing existential and suffering risks. More about us.

Footnotes
=========

(1)  Sometimes the researcher is the same person as the person who puts the results into action, such as Charity Entrepreneurship’s model. Sometimes it’s a longer causal chain, where the research improves the conclusions of another researcher, which improves the conclusions of another researcher, and so forth, but eventually it ends in real world actions. Finally, there is often the intrinsic happiness of doing good research felt by the researcher themselves.

(2)  For those of you who want to use TTS for a wider variety of articles than what the Nonlinear Library will cover, the ones I use are listed below. Do bear in mind they each have at least one of the cons listed above. There are probably also better ones out there as the landscape is constantly changing.

(3) The current upvote thresholds for which articles are converted are:
  • 25 for the EA Forum
  • 30 for LessWrong
  • No threshold for the Alignment Forum, due to low volume

This is based on the frequency of posts, relevance to EA, and quality at certain upvote levels.
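For illustration, footnote (3)’s thresholds could be expressed as a simple filter. The threshold values come from the post; the `Post` shape and function names are hypothetical, not Nonlinear’s actual code:

```python
# Sketch of the per-forum karma filter described in footnote (3).
# Thresholds come from the post; the data shapes are invented.

from dataclasses import dataclass

KARMA_THRESHOLDS = {
    "EA Forum": 25,
    "LessWrong": 30,
    "Alignment Forum": 0,  # no threshold, due to low volume
}

@dataclass
class Post:
    title: str
    forum: str
    karma: int

def should_convert(post: Post) -> bool:
    """Convert a post to audio if it meets its forum's karma threshold."""
    # Unknown forums get an infinite threshold, i.e. never converted.
    return post.karma >= KARMA_THRESHOLDS.get(post.forum, float("inf"))

posts = [
    Post("A", "LessWrong", 45),
    Post("B", "LessWrong", 12),
    Post("C", "Alignment Forum", 3),
]
print([p.title for p in posts if should_convert(p)])  # ['A', 'C']
```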

TTS audio of "Ngo and Yudkowsky on alignment difficulty"

My impression is that some people were put off by the length of the articles in Late 2021 MIRI Conversations. Personally, I've used my iPhone's text-to-speech functionality to listen to these and similarly long LessWrong posts as I do other things. After someone else commented on how convenient that seemed, I thought I should try posting a text-to-speech audio version of "Ngo and Yudkowsky on alignment difficulty" and see if that made the content more accessible.

If you find TTS audio versions of longer posts helpful or have other feedback, please let me know. I'm planning to generate TTS versions of the other MIRI conversations after getting feedback here. ~In the future, we may even want some sort of integrated TTS service for long LessWrong posts~. Edit: thanks to Steven Byrnes for pointing out that we already have such a service from the Nonlinear Library. Here's their version of "Ngo and Yudkowsky on alignment difficulty".

Here is a SoundCloud link for my version.

The mp3 files are available at this Google Drive folder.

I generated the audio files with Amazon Polly using the neural version of the English/US voice Joanna.
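For anyone wanting to reproduce this, here is a rough sketch using Amazon Polly via boto3. The neural `Joanna` voice matches the post; the chunking helper, chunk size, and file naming are my assumptions, since Polly’s `SynthesizeSpeech` call caps text length per request (on the order of 3,000 billed characters), so long posts must be split:

```python
# Sketch of generating TTS audio with Amazon Polly's neural "Joanna"
# voice. The chunking helper works around Polly's per-request text limit;
# the 2,500-character budget is a conservative assumption. Running
# synthesize() requires AWS credentials and the boto3 package.

def chunk_text(text: str, max_chars: int = 2500) -> list[str]:
    """Split text into chunks under max_chars, breaking on paragraph
    boundaries. (A single paragraph longer than max_chars would need
    further splitting; this sketch doesn't handle that case.)"""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) + 2 > max_chars and current:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

def synthesize(text: str, out_prefix: str = "post") -> None:
    """Write one mp3 per chunk, e.g. post-000.mp3, post-001.mp3, ..."""
    import boto3  # imported here so chunk_text stays dependency-free

    polly = boto3.client("polly")
    for i, chunk in enumerate(chunk_text(text)):
        resp = polly.synthesize_speech(
            Text=chunk,
            VoiceId="Joanna",
            Engine="neural",
            OutputFormat="mp3",
        )
        with open(f"{out_prefix}-{i:03d}.mp3", "wb") as f:
            f.write(resp["AudioStream"].read())
```

The per-chunk mp3 files can then be concatenated with any audio tool before uploading.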

Following TTS audio of technical discussions is difficult at first. I've used my iPhone's TTS for years, and it still took me a few minutes to adapt to the Amazon voice. I suggest listening for at least 10 minutes, and not getting too invested in following all the details, especially at first.

I've stripped out the timestamps on the posts, since they're difficult to follow and distracting in an audio-only format. If any of the participants would like me to add them back, make other minor changes, or remove this post entirely, I'd be happy to oblige.

Listen to top LessWrong posts with The Nonlinear Library

Crossposted from the EA Forum.

We are excited to announce the launch of The Nonlinear Library, which allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs.

In the rest of this post, we’ll explain our reasoning for the audio library, why it’s useful, why it’s potentially high impact, its limitations, and our plans. You can read it here or listen to the post in podcast form here.

Listen here: Spotify, Google Podcasts, Pocket Casts, Apple, or elsewhere

Or, just search for it in your preferred podcasting app.

Goal: increase the number of people who read EA research
========================================================

A koan: if your research is high quality, but nobody reads it, does it have an impact?

![](https://lh4.googleusercontent.com/IKxCQlaUa9K-WrhrRwT4u3u9YmVjFoQgaO2xzDiULoxYv9bRTYN7Se98gAdrQQzA5AgGRYwx-qGToxof-pgovWMWEA-aSi9MNJYYp-ZZF8c5mm6XQHGJrSQsV2QawQA4iFCxoPb-=s1600)

Generally speaking, the theory of change of research is that you investigate an area, come to better conclusions, people read those conclusions, they make better decisions, all ultimately leading to a better world. So the answer is no. Barring some edge cases (1), if nobody reads your research, you usually won’t have any impact.

Research → Better conclusion → People learn about conclusion → People make better decisions → The world is better

Nonlinear is working on the third step of this pipeline: increasing the number of people engaging with the research. By increasing the total number of EA and rationalist articles read, we’re increasing the impact of all of that content.

This is often relatively neglected because researchers typically prefer doing more research instead of promoting their existing output. Some EAs seem to think that if their article was promoted one time, in one location, such as the EA Forum, then surely most of the community saw it and read it. In reality, it is rare that more than a small percentage of the community will read even the top posts. This is an expected-value tragedy, when a researcher puts hundreds of hours of work into an important report which only a handful of people read, dramatically reducing its potential impact.

Here are some purely hypothetical numbers just to illustrate this way of thinking:

Imagine that you, a researcher, have spent 100 hours producing outstanding research that is relevant to 1,000 out of a total of 10,000 EAs.
> Each relevant EA who reads your research will generate $1,000 of positive impact. So, if all 1,000 relevant EAs read your research, you will generate $1 million of impact.
> You post it to the EA Forum, where posts receive 500 views on average. Let’s say, because your report is long, only 20% read the whole thing - that’s 100 readers. So you’ve created 100*1,000 = $100,000 of impact. Since you spent 100 hours and created $100,000 of impact, that’s $1,000 per hour \- pretty good!
> But if you were to spend, say 1 hour, promoting your report -  for example, by posting links on EA-related Facebook groups - to generate another 100 readers, that would produce another $100,000 of impact. That’s $100,000 per marginal hour or ~$2,000 per hour taking into account the fixed cost of doing the original research.
> Likewise, if another 100 EAs were to listen to your report while commuting, that would generate an incremental $100,000 of impact - at virtually no cost, since it’s fully automated.
> In this illustrative example, you’ve nearly tripled your cost-effectiveness and impact with one extra hour spent sharing your findings and having a public system that turns it into audio for you.  

Another way the audio library is high expected value is that instead of acting as a multiplier on just one researcher or one organization, it acts as a multiplier on nearly the entire output of the EA research community. This allows for two benefits: long-tail capture and the power of large numbers and multipliers.

Long-tail capture. The value of research is extremely long tailed, with a small fraction of the research having far more impact than others. Unfortunately, it’s not easy to do highly impactful research or predict in advance which topics will lead to the most traction. If you as a researcher want to do research that dramatically changes the landscape, your odds are low. However, if you increase the impact of most of the EA community’s research output, you also “capture” the impact of the long tails when they occur. Your probability of applying a multiplier to very impactful research is actually quite high.

Power of large numbers and multipliers. If you apply a multiplier to a bigger number, you have a proportionately larger impact. This means that even a small increase in the multiplier leads to outsized improvements in output. For example, if a single researcher toiled away to increase their readership by 50%, that would likely have a smaller impact than the Nonlinear Library increasing the readership of the EA Forum by even 1%. This is because 50% times a small number is still very small, whereas 1% times a large number is actually quite large. And there’s reason to believe that the library could have much larger effects on readership, which brings us to our next section.

Why it’s useful
==================

EA needs more audio content


EA has a vibrant online community, and there is an amazing amount of well researched, insightful, and high impact content. Unfortunately, it’s almost entirely in writing and very little is in audio format.

There are a handful of great podcasts, such as the 80,000 Hours and FLI podcasts, and some books are available on Audible. However, these episodes come out relatively infrequently and the books even less so. There’s a few other EA-related podcasts, including one for the EA Forum, but a substantial percentage have become dormant, as is far too common for channels because of the considerable amount of effort required to put out episodes.

There are a lot of listeners


The limited availability of audio is a shame because many people love to listen to content. For example, ever since the 80,000 Hours podcast came out, a common way for people to become more fully engaged in EA is to mainline all of their episodes. Many others got involved through binging the HPMOR audiobook, as Nick Lowry puts it in this meme. We are definitely a community of podcast listeners.

![](https://lh4.googleusercontent.com/tllNGUvT0bJJH2MCIJQrWynBm7R3pHS8MVgszQBI0yiAE7yr0V-sZ9nYA-aVwRbps99sOJGM8G0FOFUItnRMk9Un-CFxC94gOmkz1Tq0yLVXdNGuHiv4sDZuoTgNZVA8IvUtSSND=s1600)

Why audio? Often, you can’t read with your eyes but you can with your ears. For example, when you’re working out, commuting, or doing chores. Sometimes it’s just for a change of pace. In addition, some people find listening to be easier than reading. Because it feels easier, they choose to spend time learning that might otherwise be spent on lower value things.

Regardless, if you like to listen to EA content, you’ll quickly run out of relevant podcasts - especially if you’re listening at 2-3x speed - and have to either use your own text-to-speech software or listen to topics that are less relevant to your interests.

Existing text-to-speech solutions are sub-optimal


We’ve experimented extensively with text-to-speech software over the years, and all of the dozens of programs we’ve tried have fairly substantial flaws. In fact, a huge inspiration for this project was our frustration with the existing solutions and thinking that there must be a better way. Here are some of the problems that often occur with these apps:

  • They are glitchy, frequently crashing, losing your spot, failing at handling formatting edge cases, etc.
  • Their playlists don’t work or exist, so you’ll pause every 2-7 minutes to pick a new article to read, making it awkward to use during commutes, workouts, or chores. Or maybe you can’t change the order, like with Pocket, which makes it unusable for many.
  • They’re platform specific, forcing you to download yet another app, instead of, say, the podcast app you already use.
  • Pause buttons on headphones don’t work, making it exasperating to use when you’re being interrupted frequently.
  • Their UI is bad, requiring you to constantly fiddle around with the settings.
  • They don’t automatically add new posts. You have to do it manually, thus often missing important updates.
They use old, low-quality voices, instead of the newer, way better ones. Voices have improved a lot* in the last year.
  • They cost money, creating yet another barrier to the content.
  • They limit you to 2x speed (at most), and their original voices are slower than most human speech, so it’s more like 1.75x. This is irritating if you’re used to faster speeds.
In the end, this leads to only the most motivated people using the services, leaving out a huge percentage of the potential audience. (2)

How The Nonlinear Library fixes these problems


To make it as seamless as possible for EAs to use, we decided to release it as a podcast so you can use the podcast app you’re already familiar with. Additionally, podcast players tend to be reasonably well designed and offer great customizability of playlists and speeds.

We’re paying for some of the best AI voices because old voices suck. And we spent a bunch of time fixing weird formatting errors and mispronunciations and have a system to fix other recurring ones. If you spot any frequent mispronunciations or bugs, please report them in this form so we can continue improving the service.

Initially, as an MVP, we’re just posting each day’s top upvoted articles from the EA Forum, Alignment Forum, and LessWrong. (3) We are planning on increasing the size and quality of the library over time to make it a more thorough and helpful resource.

Why not have a human read the content?


The Astral Codex Ten podcast and other rationalist podcasts do this. We seriously considered this, but it’s just too time consuming, and there is a lot of written content. Given the value of EA time, both financially and counterfactually, this wasn’t a very appealing solution. We looked into hiring remote workers but that would still have ended up costing at least $30 an episode. This compared to approximately $1 an episode via text-to-speech software.

On top of the time costs leading to higher monetary costs, it also makes us able to make a far more complete library. If we did this with humans and we invested a ton of time and management, we might be able to convert seven articles a week. At that rate, we’d never be able to keep up with new posts, let alone include the historical posts that are so valuable. With text-to-speech software, we could have the possibility of keeping up with all new posts and converting the old ones, creating a much more complete repository of EA content. Just imagine being able to listen to over 80% of EA writing you’re interested in compared to less than 1%.

Additionally, the automaticity of text-to-speech fits with Nonlinear’s general strategy of looking for interventions that have “passive impact”. Passive impact is the altruistic equivalent of passive income, where you make an upfront investment and then generate income with little to no ongoing maintenance costs. If we used human readers, we’d have a constant ongoing cost of managing them and hiring replacements. With TTS, after setting it up, we can mostly let it run on its own, freeing up our time to do other high impact activities.

Finally, and least importantly, there is something delightfully ironic about having an AI talk to you about how to align future AI.

On a side note, if for whatever reason you would not like your content in The Nonlinear Library, just fill out this form. We can remove that particular article or add you to a list to never add your content to the library, whichever you prefer.

Future Playlists (“Bookshelves”)
===================================

There are a lot of sub-projects that we are considering doing or are currently working on. Here are some examples:

  • Top of all time playlists: a playlist of the top 300 upvoted posts of all time on the EA Forum, one for LessWrong, etc. This allows people to binge all of the best content EA has put out over the years. Depending on their popularity, we will also consider setting up top playlists by year or by topic. As the library grows we’ll have the potential to have even larger lists as well.
  • Playlists by topic (or tag): a playlist for biosecurity, one for animal welfare, one for community building, etc.
  • Playlists by forum: one for the EA Forum, one for LessWrong, etc.
  • Archives. Our current model focuses on turning new content into audio. However, there is a substantial backlog of posts that would be great to convert.
  • Org specific podcasts. We'd be happy to help EA organizations set up their own podcast version of their content. Just reach out to us.
  • Other? Let us know in the comments if there are other sources or topics you’d like covered.
Who we are
==========

We're Nonlinear, a meta longtermist organization focused on reducing existential and suffering risks. More about us.

Footnotes
=========

(1)  Sometimes the researcher is the same person as the person who puts the results into action, such as Charity Entrepreneurship’s model. Sometimes it’s a longer causal chain, where the research improves the conclusions of another researcher, which improves the conclusions of another researcher, and so forth, but eventually it ends in real world actions. Finally, there is often the intrinsic happiness of doing good research felt by the researcher themselves.

(2)  For those of you who want to use TTS for a wider variety of articles than what the Nonlinear Library will cover, the ones I use are listed below. Do bear in mind they each have at least one of the cons listed above. There are probably also better ones out there as the landscape is constantly changing.

(3) The current upvote thresholds for which articles are converted are:

  • 25 for the EA Forum
  • 30 for LessWrong
  • No threshold for the Alignment Forum, due to low volume

This is based on the frequency of posts, relevance to EA, and quality at certain upvote levels.
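The threshold scheme above can be sketched as a simple filter. This is a hypothetical illustration only: the function name, the post dictionary fields, and the fallback behavior for unknown forums are all assumptions, not the Nonlinear Library's actual implementation; only the threshold numbers come from the post.

```python
# Hypothetical sketch of the per-forum karma thresholds described above.
# Threshold values are from the post; everything else is an assumption.
THRESHOLDS = {
    "EA Forum": 25,
    "LessWrong": 30,
    "Alignment Forum": 0,  # no threshold, due to low volume
}

def should_narrate(post):
    """Return True if a post's karma meets its forum's threshold."""
    # Unknown forums are never converted in this sketch.
    return post["karma"] >= THRESHOLDS.get(post["forum"], float("inf"))

posts = [
    {"forum": "EA Forum", "karma": 24},
    {"forum": "LessWrong", "karma": 31},
    {"forum": "Alignment Forum", "karma": 3},
]
selected = [p for p in posts if should_narrate(p)]
```

Under these assumptions, the LessWrong and Alignment Forum posts above would be converted, while the 24-karma EA Forum post would not.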

Would you benefit from audio versions of posts?

I know two people who have a hard time reading, but totally enjoy audio versions of blogposts and were only able to consume the sequences via audiobook.

Curious how common this is, and how much low hanging fruit there is vis-a-vis either making audio versions of posts easily accessible, or finding a high quality text-to-speech thing to provide it by default.

Kickstarting the audio version of the upcoming book "The Sequences"

LessWrong is getting ready to release an actual book that covers most of the material found in the Sequences.

There have been a few posts about it in the past, here are two: the title debate, content optimization.

We've been asked if we'd like to produce the audiobook version and the answer is yes. This is a large undertaking. The finished product will probably be over 35 hours of audio.

To help mitigate our risk we've decided to fund the audiobook through Kickstarter. This basically allows us to pre-sell it, so we're not stuck with a large production cost and no revenue.

The kickstarter campaign is here: https://www.kickstarter.com/projects/1267969302/lesswrong-the-sequences-audiobook

If you haven't heard of us before: we've already produced audiobook versions of some of the sequences. You can see them, and listen to samples indicative of the audio quality, here.

LessWrong podcasts

Today we're announcing a partnership with Castify to bring you Less Wrong content in audio form. Castify gets blog content read by professional readers and delivers it to their subscribers as a podcast so that you can listen to Less Wrong on the go. The founders of Castify are big fans of Less Wrong so they're rolling out their beta with some of our content.

[Castify embedded player]

Note: The embedded player (above) isn't live as of this posting, but should be deployed soon.

To see how many people will use this, we're having the entire Mysterious Answers to Mysterious Questions core sequence read and recorded. We thought listening to it would be a great way for new readers to get caught up and for others to check out the quality of Castify's work. We will be adding more Less Wrong content based on community feedback, so let us know which content you'd like to see more of in the comments.

For instance: Which other sequences would you like to listen to? Would there be interest in an ongoing podcast channel for the promoted posts?

Learning with Audiobooks

I use audiobooks, recorded lectures, and podcasts to learn to a massive extent, and I find them extremely useful. I haven't heard them discussed much here, so I thought I would broach the subject. Here on LessWrong, in terms of scholarship, textbooks are widely favored, and math textbooks are especially encouraged. I've listened to some textbooks that worked well in audio, and even if you can't learn math very well with them, there are plenty of other useful things to learn.

One barrier to using audiobooks that a lot of people have described to me is that they can read much faster than they can get through audiobooks. It's pretty easy to find applications that speed up audio, though, so this doesn't seem like a strong objection. Another barrier is that it's harder to find audio versions of things, but again, clever use of the internet can find you most of them.
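As a toy illustration of the speed-up point, here is a minimal sketch using Python's standard-library `wave` module. The function name and file paths are placeholders, and this naive approach (raising the frame rate) also raises the pitch; real speed-up apps use time-stretching to preserve pitch.

```python
import wave

def speed_up_wav(src, dst, factor=1.5):
    """Write a copy of a WAV file that plays `factor` times faster.

    Naive approach: keep the samples, raise the frame rate.
    Note: this also raises the pitch, unlike proper time-stretching.
    """
    with wave.open(src, "rb") as r:
        params = r.getparams()
        frames = r.readframes(r.getnframes())
    with wave.open(dst, "wb") as w:
        w.setparams(params)
        w.setframerate(int(params.framerate * factor))
        w.writeframes(frames)
```

For example, `speed_up_wav("lecture.wav", "lecture_fast.wav", 1.5)` would produce a version that plays in two-thirds of the original time.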

The two main benefits that I would expect most people to receive when listening to audio rather than reading are:

(1) That you can accomplish some other manual task while listening to an audiobook.

I work at a manual job, so I have more time available to listen than the average person, but I think most people underestimate how much time they spend on activities they could listen to audiobooks during. Some examples are commuting, cleaning, making things, sports, shopping, shaving, and falling asleep. Doubling your productivity is just not something that can be overlooked.

(2) That listening to audiobooks is less effortful than reading.

This has definitely been my experience, and I would be interested in hearing what other people's experiences are, to see if I really am typical in this respect. I love reading, but it's the sort of activity I have to take breaks from, whereas with audiobooks I can happily listen for all of my waking hours. I remember being read to as a kid, and it's just like that.