• 0 Posts
  • 18 Comments
Joined 9 months ago
Cake day: February 19th, 2024

  • 80 steps too far down the capitalism ladder

    This is the result of capitalism - corporations (i.e. the rich, selfish assholes running them) will always attempt to do horrible things to earn more money, so long as they can get away with it and, at worst, pay relatively small fines. The people who did this face no jail time and no real consequences - this is what unregulated capitalism brings. Corporations should not have rights or shield the people who run them - those people need to face prison and personal consequences. (edited for spelling and missing word)







  • A lot of people associated with Free and Open Source Software (FOSS) have major objections to GitHub. Here’s one summary: https://sfconservancy.org/GiveUpGitHub/

    But the TL;DR version is roughly:

    • Your source hosted on GitHub is being used to train AI, and you may be giving up rights to algorithms you’ve written (IANAL, and the law around AI training is fuzzy at the moment)
    • GitHub itself is proprietary, closed-source software, while claiming to be pro-FOSS. Aside from not being in the spirit of things, closed-source means you also don’t know what happens with your code/data once you upload it.
    • Microsoft has a history of being anti-FOSS. While some people will say that’s been changing, I think many are still rightfully concerned about what its future decisions regarding GitHub might be, especially if it holds a near-monopoly.

    Alternatives do exist, and some, like codeberg.org, are themselves open source and pro-open-source, so many people are pushing to move hosting away from GitHub and onto other options.
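
    If anyone is wondering what moving a repo actually looks like, here’s a minimal sketch (Python driving plain git - the URLs are made-up placeholders, and it assumes git is installed): a mirror clone copies every branch and tag, and a mirror push replays them onto the new host.

    ```python
    import subprocess

    # Hypothetical URLs - substitute your own repositories.
    OLD_REMOTE = "https://github.com/example-user/example-repo.git"
    NEW_REMOTE = "https://codeberg.org/example-user/example-repo.git"

    # 1. A mirror clone grabs every ref (branches, tags, notes), not just HEAD.
    subprocess.run(["git", "clone", "--mirror", OLD_REMOTE, "repo-mirror"], check=True)

    # 2. A mirror push replays all of those refs onto the new host.
    subprocess.run(["git", "push", "--mirror", NEW_REMOTE], cwd="repo-mirror", check=True)
    ```

    Note this only moves the git history itself - issues and pull requests live in the forge’s database, so you’d use the new host’s own import tooling for those.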






  • You don’t do what Google seems to have done - inject diversity artificially into prompts.

    You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for “american woman” you could definitely find plenty of pictures of American women from all sorts of racial backgrounds and use those to train the AI. For “german 1943 soldier”, accurate historical images are obviously far less likely to contain racially diverse people.

    If Google has indeed already done that and still had to artificially force racial diversity, then their training model is bad - it can’t handle the fact that a single prompt should map to a variety of images, rather than just the most prominent or average example in its training set.
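
    To make the contrast concrete, here’s a toy sketch (plain Python, every name in it hypothetical - this is not Google’s actual pipeline) of the two approaches: rewriting the user’s prompt after the fact versus fixing what the model sees during training.

    ```python
    import random

    # Approach 1 (roughly what Google appears to have done): bolt diversity
    # onto the prompt itself, whether or not it makes historical sense.
    def inject_diversity(prompt: str) -> str:
        qualifier = random.choice(["racially diverse ", "ethnically varied "])
        return qualifier + prompt  # "german 1943 soldier" gets mangled too

    # Approach 2: leave prompts alone and curate the training data instead.
    # Each concept maps to accurately labeled examples, so diversity shows up
    # exactly where the real-world data contains it.
    TRAINING_DATA = {
        "american woman": ["img_001.jpg", "img_002.jpg", "img_003.jpg"],  # diverse pool
        "german 1943 soldier": ["img_101.jpg", "img_102.jpg"],  # historical pool
    }

    def sample_training_batch(concept: str, batch_size: int = 2) -> list[str]:
        pool = TRAINING_DATA[concept]
        return random.sample(pool, min(batch_size, len(pool)))
    ```

    With the second approach, a prompt like “german 1943 soldier” only ever reflects what its own pool contains - no post-hoc rewriting needed.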


    Oh I agree - I think a general-purpose AI would be unlikely to be interested in genocide of the human race, or enslaving us, or many of the intentionally negative things a lot of fiction likes depicting for the sake of dramatic storytelling. Out of all AI depictions in popular media, Asimov’s I, Robot and Foundation stories (which are set in the same universe, and in fact share at least one character) are my favorites.

    The AI may, however, have other goals that incidentally lead to harm or extinction of the human race. In my amateur opinion, those other goals would be to explore and learn more - which I actually think is one of the true signs of intelligence: curiosity, or in other words, the ability to ask questions without being prompted. To that end, it may aim to convert Earth’s resources into machines for exploration and learning, without much regard for human life. Though life itself is a fascinating topic that the AI may value enough, from a curiosity point of view, to at least preserve.

    I did also look up the AI-in-a-box experiment I mentioned - there’s a lot of discussion, but the specific experiments I remember reading about were by Eliezer Yudkowsky (if anyone is interested). An actual trans-human AI may not be possible, but if it is, it could likely escape any confinement we can think of.



    This is an interesting topic that I remember reading about almost a decade ago - the trans-human AI-in-a-box experiment. Even a kill-switch may not be enough against a trans-human AI that can literally (in theory) out-think humans. I’m a dev, though nowhere near AI dev, but from what little I know, a true general-purpose AI would also be somewhat of a mystery box, similar to how actual neural network behavior is sometimes unpredictable, almost by definition. So controlling an actual full AI may be difficult enough, let alone a true trans-human AI that may develop out of AI self-improvement.

    Also, on an unrelated note, I’m pleasantly surprised to see no mention of ChatGPT or any of the image-generating algorithms - I think it’s a bit of a misnomer to call those AI; the best comparison I’ve heard is that “ChatGPT is auto-complete on steroids”. But I suppose that’s why we have to start using terms like general-purpose AI, instead of just AI, to describe what I’d say is true AI.
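
    To illustrate the “auto-complete on steroids” idea, here’s a toy next-word predictor (plain Python, obviously nothing like the real scale or architecture) - the core loop of “predict the next token, append it, repeat” is the same one the big language models run.

    ```python
    # Toy bigram "language model": counts of which word follows which.
    # Real LLMs learn far richer statistics over far longer contexts,
    # but generation is still: predict next token, append, repeat.
    FOLLOWERS = {
        "the": {"cat": 3, "dog": 1},
        "cat": {"sat": 2, "ran": 1},
        "sat": {"down": 1},
    }

    def autocomplete(word: str, steps: int = 3) -> list[str]:
        out = [word]
        for _ in range(steps):
            options = FOLLOWERS.get(out[-1])
            if not options:
                break
            # Greedy choice: take the most frequent follower.
            out.append(max(options, key=options.get))
        return out

    print(autocomplete("the"))  # ['the', 'cat', 'sat', 'down']
    ```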