• 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.

    • “I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
    • “I have this personal problem.” (Tell it to keep responses short, and have a natural conversation with it. This works best spoken out loud if you’re using ChatGPT; it keeps you from overthinking responses and forces you to keep the conversation moving. It takes fifteen minutes or more, but you’ll come away with some good advice related to your situation nearly every time. I’ve used this to work through several things internally, much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
    • I’ve also found it useful to have it play a character I describe, then speak to that character in a pretend scenario to work something out. Use your imagination for how this might help you. In this case, tell it not to ask so many questions, and to only ask one when the character would truly want to. That keeps it more natural; otherwise (at least with ChatGPT, which I’m most familiar with) it will end every response with a question. Often that’s useful, as in the previous example, but here it is not.
    • etc.

    For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.

    For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.
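
    Since I mentioned being a software engineer: the first pattern is easy to wrap in code if you’d rather script it than chat. A minimal sketch — the helper name, the “recipient” wording, and the role/content message format (the convention most chat-completion APIs use) are my own illustration, not any specific vendor’s interface:

```python
# Hypothetical sketch of the "criticize it as the recipient" pattern.
# Everything here is illustrative, not a real vendor API.

def build_critic_messages(thing: str, recipient: str) -> list:
    """Ask the model to critique `thing` as its intended recipient/judge."""
    system = (
        f"You are the {recipient}. Read the submission below and criticize "
        "it exactly as you would in that role. List concrete ways it could "
        "be improved."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": thing},
    ]

# Example: critiquing a design proposal from a reviewer's point of view.
messages = build_critic_messages(
    thing="Proposal: migrate the billing service to event sourcing.",
    recipient="principal engineer judging this design proposal",
)
```

    You’d then hand `messages` to whatever chat API you use, and fold the criticisms back into your next draft.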



  • Put simply, each state gets a number of electors loosely tied to its population, and each state decides how its electors vote based on its own statewide results. Most states award all of their electors to the winner of the statewide popular vote (“winner takes all”); Maine and Nebraska award some of theirs by congressional district.

    This leads to a discrepancy between the popular vote and the electoral vote, and it’s mathematically biased against states with higher populations. So, votes in the more populous states (which tend to vote Democrat) are worth “less” in the electoral college than those in less populous states, leading to Democrats winning the popular vote yet losing the actual election… which has happened in every election they’ve lost since Bush v. Gore, if I’m not mistaken. I’ll double check that and edit if I’m wrong.

    Edit: Sorry, it did not happen in Bush v. Kerry; Bush won the popular vote in that one, by about 2.4%. However, in the other two (Bush v. Gore and Trump v. Clinton) the popular vote was won by Gore (by about 0.5%) and by Clinton (by about 2.1%), not by Bush or Trump.

    Edit 2: This is also notably NOT made worse by gerrymandering, because the number of electors you get is equal to the combined number of senators and representatives your state gets, and (with the minor exception of Maine’s and Nebraska’s district-based electors) they’re awarded based on popular-vote results, not on which party holds the seats. So gerrymandering plays essentially no role here.

    The mathematical bias comes from the fact that every state gets two senators no matter what the population is, and only your congressperson count is proportional to population, but both count toward your number of electors. So, less populous states have proportionally somewhat more “electors per capita” than states with higher populations.
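
    To make the per-capita skew concrete, here’s a quick sketch. House seat counts are from the 2020 apportionment; the populations are rough figures used only for illustration:

```python
# Rough illustration of the "electors per capita" skew.

def electors(house_seats: int) -> int:
    # Every state gets 2 senators plus its House delegation.
    return house_seats + 2

states = {
    # name: (House seats, approximate population)
    "Wyoming":    (1,     580_000),
    "California": (52, 39_000_000),
}

for name, (seats, pop) in states.items():
    per_million = electors(seats) / (pop / 1_000_000)
    print(f"{name}: {electors(seats)} electors, "
          f"{per_million:.2f} per million residents")

# Wyoming comes out to roughly 5 electors per million residents,
# California to roughly 1.4 -- the two-senator floor does the skewing.
```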



  • That’s a very interesting suggestion and I’d love to see it done, actually, regardless of what I’m about to write.

    The problem is that mods aren’t bot sweepers or disinformation sniffers. They’re just regular people… and there are relatively few of them. They probably have, on average, a better radar than most users, but when it comes to malicious actors they aren’t going to be perfect. More importantly, they have a finite amount of time and effort they can put into moderation. It’s way better to organically crowd-source these kinds of things if it’s possible, and the kind of community Lemmy has makes it possible.

    Banning these comments makes the community susceptible to all kinds of manipulation, especially in the run-up to a US election (let alone this one). The benefit of banning these comments is comparatively very minimal: effectively removing one type of ad hominem attack in arguments that have always featured ad hominem attacks, in one form or another.



  • keegomatic@lemmy.worldtoWorld News@lemmy.worldSidebar Update: Civility
    4 months ago

    I think that public call-outs of suspicious behavior are the only real, continuous way to teach new or under-informed users what bots and disinformation actors (ESPECIALLY the latter) sound like. I don’t remember the last time I personally called out someone I thought was a paid/malicious account or a bot… maybe I never have on Lemmy. But despite the incivility, I truly believe the publicity of these comments is good for building a resilient community.

    I’ve been on forums and aggregators similar to Lemmy for a long time, and I think I have a pretty good radar for suspicious account behavior. I think reading occasional accusations from within your community helps you think critically about what’s being espoused in the thread, what the motivations of different users are, and whether to believe or disbelieve the accuser.

    Yes, sometimes it’s used as a personal attack. But it’s better to have it out in the open so that the reality of online discourse (extremely frequent attempted manipulation of opinions) is clear to everyone, and the community can respond positively or negatively to it and organically support users that are likely victims.