Freedom is the right to tell people what they do not want to hear.

  – George Orwell


  • Way to move the goalposts.

    If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.

    Now compare that to the way protein structures were solved before: years of wet‑lab work with X‑ray crystallography or cryo‑EM - running giant instruments, burning through reagents, and consuming huge quantities of chemicals and water along the way. AlphaFold collapses that into compute: a one‑time training cost on the order of megawatt‑hours, then predictions that take hours of GPU time per structure instead of years in the lab.

    So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way - rough numbers sketched below.
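
    To put numbers on that, here’s a back‑of‑envelope sketch in Python. Every constant in it is an assumption picked purely for illustration (GPU draw, prediction time, instrument power, campaign length), not measured data - swap in real figures and the ratio moves, but the gap stays orders of magnitude wide:

    ```python
    # Back-of-envelope energy comparison: one AlphaFold prediction vs. one
    # wet-lab structure campaign. All constants are illustrative assumptions,
    # not sourced measurements.

    GPU_POWER_KW = 0.4          # assumed draw of one data-center GPU (~400 W)
    PREDICTION_HOURS = 2.0      # assumed wall-clock time to fold one protein
    alphafold_kwh = GPU_POWER_KW * PREDICTION_HOURS

    INSTRUMENT_POWER_KW = 50.0  # assumed share of beamline/cryo-EM + support
    CAMPAIGN_HOURS = 24 * 365   # assumed a year of instrument and lab time
    wetlab_kwh = INSTRUMENT_POWER_KW * CAMPAIGN_HOURS

    # AlphaFold's one-time training cost (order of megawatt-hours) amortizes
    # over the millions of structures predicted since.
    print(f"AlphaFold prediction: ~{alphafold_kwh:.1f} kWh")
    print(f"Wet-lab campaign:     ~{wetlab_kwh:,.0f} kWh")
    print(f"Ratio:                ~{wetlab_kwh / alphafold_kwh:,.0f}x")
    ```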



  • > Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.

    > What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?

    > Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.

    Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or minimize suffering. They’re designed to generate natural-sounding language. If they’re talking about AGI, then that’s not designed for any one thing - it’s designed for everything.

    Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.
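
    And to be concrete about “designed to generate natural-sounding language”: an LLM’s entire output is a probability distribution over the next token. Here’s a minimal sketch using the Hugging Face transformers library, with GPT‑2 and the prompt standing in as purely illustrative choices:

    ```python
    # What a language model actually computes: probabilities for the next
    # token, nothing else. No objective for "fulfillment" or "suffering"
    # appears anywhere in this loop.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("Why am I here?", return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for every possible next token
    probs = torch.softmax(logits, dim=-1)

    # Show the five most likely continuations and their probabilities.
    top = torch.topk(probs, k=5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(int(i))!r}: {p.item():.3f}")
    ```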



  • I haven’t claimed that it is. The point is, I can only think of two plausible scenarios in which we never reach AGI: either we destroy ourselves before we get there, or there’s something fundamentally mysterious about the biological computer that is the human brain - something that lets it process information in a way we simply can’t replicate by any other means.

    I don’t think that’s the case, since both the brain and computers are made of matter, and matter obeys the laws of physics. But it’s at least conceivable that there could be more to it.