• 0 Posts
  • 25 Comments
Joined 9 months ago
Cake day: January 1st, 2024



  • hikaru755@lemmy.world to Memes@lemmy.ml · Capitalist logix · +1 · edited · 6 days ago

    the argument that “being selfless is selfish” is not useful

    Yes, that’s my entire point.

    and provably false

    Depends on how you define “selfish”. Again, that’s exactly what I’m trying to demonstrate here. Reducing the definition of “selfish” to mean “getting something out of it” makes it meaningless, because every decision is made in the hope of getting something out of it in some way, even if it’s obscure. To make the term useful, you need to look at what exactly someone is getting out of it.


  • That would be an extremely reductive definition that doesn’t really tell us much about how caring for others is actually experienced and how it manifests in the world.

    Exactly, that’s my point.

    How would this for example explain sacrificing yourself to save another person, if the very core of caring is to create positive emotions in yourself?

    In this case it would be about reducing negative emotions, choosing the lesser of two evils. Losing a loved one and/or having to live with the knowledge that you could have saved them but chose not to can inflict massive emotional pain, potentially for the rest of your life. Dying yourself instead might seem outright attractive in comparison.

    this idea that caring is in its essence transactional

    That’s not actually how I’m seeing it, and I also don’t think it’s a super profound insight or something. It’s just a super technical way of viewing the topic of motivation, and while it’s an interesting thought experiment, it’s mostly useless.


  • hikaru755@lemmy.world to Memes@lemmy.ml · Capitalist logix · +6/−1 · edited · 6 days ago

    Well, but what does “caring” mean? It means that their well-being affects your emotions. At its very core, wanting to help people you care about comes from wanting to create positive emotions in yourself or to avoid negative ones (possibly in the future; it doesn’t have to be an immediate effect). If those emotions weren’t there, you wouldn’t actually care, and thus wouldn’t do it.

    Edit to clarify: I’m not being cynical or pessimistic here, or implying that everyone is egotistical because of this. The point I was trying to make is that defining egotism vs. altruism is a little more complex than just looking at whether there’s something in it for the acting person. We actually need to look at what that something is.


  • hikaru755@lemmy.world to Memes@lemmy.ml · Capitalist logix · +23/−2 · 6 days ago

    I mean, you’re not wrong, but your point is also kinda meaningless. Of course you only ever do things because there’s something in it for you, even if that something is just feeling good about yourself. If there were truly nothing in it for you, then why would you do it?

    But that misses the point of the “people are inherently selfish” vs. “people are inherently generous” discussion, because it’s not actually about whether people do things only for themselves at the most literal level; it’s about whether people inherently get something out of doing things for others, without external motivation. So your point works the same on both sides of the argument.


  • The algorithm is actually tailored to find out if/when you fall asleep while watching videos, and then recommends longer videos in autoplay when it believes you are, because they’ll get to play you more ads and cash out more.

    You might be misremembering or misinterpreting a little there. This behavior is not intentional; it’s just a side effect of how the algorithm currently works. Showing you longer videos doesn’t equate to showing you more ads. On the contrary: with loads of short videos you get way more opportunities to see pre-roll ads, while with a longer video you’re limited to just the mid-roll spots in that video (a rough slot count is sketched below). So YouTube doesn’t really have an incentive to make it work like that; it’s just accidental.

    Here’s the Spiffing Brit video on this, which I think you might have gotten this idea from: https://youtu.be/8iOjeb5DTZI

    Edit: to be clear, I fully agree that YouTube will do anything to shove ads down our throats, no matter how effective those ads actually are. I’m just saying that the example you brought up isn’t really an instance of that.
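
    To put toy numbers on the pre-roll vs. mid-roll point, here’s a minimal back-of-the-envelope sketch. All parameters (one pre-roll per video, one mid-roll slot every ~8 minutes) are made up for illustration; real ad load varies by viewer, video, and region.

    ```python
    # Toy back-of-the-envelope model; all parameters are made up.
    PRE_ROLLS_PER_VIDEO = 1      # assumed: one pre-roll per video start
    MID_ROLL_INTERVAL_MIN = 8    # assumed: one mid-roll slot every ~8 minutes

    def ad_slots(video_length_min: int, videos_watched: int) -> int:
        """Rough count of ad opportunities in a viewing session."""
        pre_rolls = PRE_ROLLS_PER_VIDEO * videos_watched
        mid_rolls = (video_length_min // MID_ROLL_INTERVAL_MIN) * videos_watched
        return pre_rolls + mid_rolls

    # One hour of short videos vs. one hour-long video:
    print(ad_slots(5, 12))   # twelve 5-min videos: 12 pre-rolls + 0 mid-rolls = 12
    print(ad_slots(60, 1))   # one 60-min video:     1 pre-roll  + 7 mid-rolls =  8
    ```

    Under these assumed numbers, the hour of short videos actually offers more ad slots, which is the point: longer videos aren’t automatically more lucrative.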





  • It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.

    This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing-looking gibberish is what it always does; that’s its only mode of operation. The key is that the gibberish coming out of today’s models is so convincing that it actually becomes broadly useful.

    That also means that no, not everything an LLM produces has to have been in its training dataset; they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of building actual internal models of real-world concepts, which suggests a deeper kind of understanding than the “stochastic parrot” moniker would have you believe.

    LLMs do not make decisions.

    What do you mean by “decisions”? LLMs constantly make decisions about which token comes next; that’s really all they do. And in doing so, on a higher, emergent level, they can make any kind of decision you ask them to. The only question is how good those decisions are going to be, which in turn depends entirely on the training data, how good the model is, and how good your prompt is.
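
    For concreteness, here’s a minimal sketch of what a single “which token comes next” decision looks like. The vocabulary and the logits are made up for illustration; this isn’t any particular model’s API.

    ```python
    import math
    import random

    # Toy next-token step: a real model emits one logit (raw score) per
    # vocabulary entry; here both the vocabulary and the logits are made up.
    vocab = ["yes", "no", "maybe"]
    logits = [2.0, 0.5, 1.0]

    def softmax(xs):
        """Turn raw scores into a probability distribution."""
        exps = [math.exp(x) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    probs = softmax(logits)

    # The "decision": sample the next token from the resulting distribution.
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    print({w: round(p, 2) for w, p in zip(vocab, probs)}, "->", next_token)
    ```

    Everything an LLM outputs is a chain of exactly these sampling steps, which is why “making decisions” and “producing token-by-token gibberish” are the same operation viewed at different levels.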








  • Not really. Timezones, at their core (so without DST or any other special rules), are just a constant offset that you can very easily translate back and forth between; that’s trivial as long as you remember to do it. Having lots of them doesn’t really make anything harder, as long as you can look them up somewhere. DST, leap seconds, etc. are what make shit complicated, because they bend, break, or overlap a single timeline to the point where suddenly you have points in time that happen twice, or that never happen, or where time runs faster or slower for a bit. That is incredibly hard to deal with consistently, much more so than just switching the simple offset you’re operating within.
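
    A quick illustration using Python’s standard zoneinfo module (the fall-back date is the real 2024 US DST switch; the rest is just a sketch of the point, not production time handling):

    ```python
    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo  # stdlib since Python 3.9

    # Plain offsets are trivial: converting is just applying a constant.
    noon_utc = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)
    print(noon_utc.astimezone(timezone(timedelta(hours=9))))  # fixed UTC+9, no DST

    # DST is where it gets ugly: 01:30 on 2024-11-03 happens *twice*
    # in New York, because clocks fall back from 02:00 to 01:00.
    wall_clock = datetime(2024, 11, 3, 1, 30, tzinfo=ZoneInfo("America/New_York"))
    first = wall_clock.replace(fold=0)   # the 01:30 before the switch (EDT)
    second = wall_clock.replace(fold=1)  # the 01:30 after the switch (EST)
    print(first.tzname(), second.tzname())  # EDT EST -- same wall time, 1h apart
    ```

    The fixed-offset conversion is one line and always unambiguous; the DST case needs an extra disambiguation flag (`fold`) just to say *which* 01:30 you mean.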


  • You’re not wrong, but the way you put it makes it sound a little too intentional, I think. It’s not like the camera sees infrared light and makes a deliberate choice to display it as purple. The camera sensor has red, green, and blue pixels, and it just so happens that these pixels are receptive to a wider range of the light spectrum than their human-eye equivalents, including some infrared. Infrared light apparently triggers the pixels in roughly the same way that purple light does, and the sensor can’t distinguish between infrared light and light that actually appears purple to humans, so that’s why it shows up like that. It’s just an accidental byproduct of how camera sensors work, combined with the budgetary decision not to include an infrared filter in the lens to prevent it.
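
    As a toy numeric sketch of why “strong response on red and blue, weak on green” reads as purple, with completely made-up channel sensitivities (real sensor spectral response curves vary by model):

    ```python
    # Made-up relative sensitivities of a sensor's R/G/B channels to
    # near-infrared (~850 nm) light; purely illustrative numbers.
    ir_response = {"R": 0.8, "G": 0.2, "B": 0.6}

    # Scale to 8-bit channel values the way a naive pipeline might.
    r, g, b = (int(ir_response[c] * 255) for c in "RGB")
    print(f"IR renders roughly as rgb({r}, {g}, {b})")  # high R and B, low G -> purple tint
    ```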