• Jestzer@lemmy.world · ↑6 · 1 month ago

    The rule for any article asking a question in its title is that the answer is always no.

  • flamingo_pinyata@sopuli.xyz · ↑4 · 1 month ago

    AI is actually great at typing the code quickly, once you know exactly what you want. But it’s already the case that if your engineers spend most of their time typing code, you’re doing something wrong, AI or no AI.
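
    To illustrate the “once you know exactly what you want” part, here’s a minimal sketch of the kind of boilerplate an assistant can type out quickly once the spec is fully pinned down (the class and its fields are made up for the example):

    ```python
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class Invoice:
        # Hypothetical fields -- the point is the mechanical boilerplate, not the domain.
        id: str
        amount_cents: int
        paid: bool = False

        def to_json(self) -> str:
            return json.dumps(asdict(self))

        @classmethod
        def from_json(cls, raw: str) -> "Invoice":
            return cls(**json.loads(raw))

    # Round trip: serialize a record and rebuild it from the JSON string.
    print(Invoice.from_json(Invoice("A-1", 4200).to_json()))
    ```

    Typing that out is the easy part; deciding that those are the right fields and behaviours is where the time actually goes.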

  • agamemnonymous@sh.itjust.works · ↑3 ↓1 · 1 month ago

    I think the obvious answer is “Yes, some, but not all”.

    It’s not going to totally replace human software developers anytime soon, but it certainly has the potential to increase productivity of senior developers and reduce demand for junior developers.

  • A_A@lemmy.world · ↑1 · 1 month ago

    … it will take many years … and designs will change considerably before we are there.

  • pathief@lemmy.world · ↑1 · 1 month ago

    Even if the AI was at the point of outputting exactly what you want correctly, decision makers would still need to be able to specify exactly what they want and need. “I want a website that pops” isn’t going to cut it.

  • OmnislashIsACloudApp@lemmy.world · ↑1 · 1 month ago

    People look at this stuff as a yes-or-no question, and that’s a major misunderstanding.

    I work in tech, and I can tell you 100% you could not just give a job to AI and call it a day.

    I cannot even imagine this type of response generation ever being capable of that without developing some sort of true intelligence, if for no other reason than that it would have to turn bad prompts, written by people who do not understand what they want or what is possible, into functional projects.

    That said, what I do believe is possible is that it makes like 5 to 10% of the job a little bit faster. Programming is like 10 to 20% writing code and 80 to 90% understanding what that code should be and why it isn’t working that way yet.

    Even the code you get from it is generally wrong but sometimes useful.

    The best-case scenario I could see right now is not that it replaces jobs but that it makes people more effective, kind of like giving a framer a nail gun instead of a box of nails and a hammer, except not that big of an efficiency gain.

    Ultimately this might mean you do the job with 8 people instead of 10, or something like that.

    If it reduced the total number of jobs because it was a tool that made people more effective, did it take the job away?

  • Telorand@reddthat.com · ↑1 · 1 month ago

    Not until it’s better at QA than I am. Good luck teaching a machine how stupid end-users can be.

  • tal@lemmy.today · ↑2 ↓2 · 1 month ago

    In the long run, sure.

    In the near term? No, not by a long shot.

    There are some tasks we can automate, and that will happen. That’s been a very long-running trend, though; it’s nothing new. People generally don’t write machine language by physically flipping switches these days; many decades of automation have happened since then.

    I also don’t think that a slightly tweaked latent diffusion model, of the present “generative AI” form, will get all that far, either. The fundamental problem, taking an incomplete specification in human language and translating it into a precise set of rules in machine language while drawing on knowledge of the real world, isn’t something I expect you can do very effectively by training on an existing corpus.

    The existing generative AIs work well on tasks where you have a large training corpus that maps from something like human language to an image. The resulting images don’t have a lot by way of hard constraints on their precision; you can illustrate that by generating a batch of ten images for a given prompt: they might all look different, but a fair number will look decent enough.

    I think that some of that is because humans typically process images and language in a way that is pretty permissive of errors; we rely heavily on context and our past knowledge about the real world to come up with the correct meaning. An image just needs to “cue” our memories and understanding of the world. We can see images that are distorted or stylized, or see pixel art, and recognize it for what it is.

    But…that’s not what a CPU does. Machine language is not very tolerant of errors.
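
    As a toy illustration (a contrived sketch with invented function names, not output from any actual model): a one-character slip in code silently changes the result, where the same size of slip in prose or in an image would barely register on a human reader.

    ```python
    # Two versions of the same helper differ by a single character ("<" vs "<=").
    # Both read fine at a glance; only one matches the intended spec,
    # "sum the values strictly below the limit".

    def sum_below(limit, values):
        return sum(v for v in values if v < limit)   # intended behaviour

    def sum_below_off_by_one(limit, values):
        return sum(v for v in values if v <= limit)  # the plausible-looking slip

    data = [1, 5, 10, 15]
    print(sum_below(10, data))             # 6
    print(sum_below_off_by_one(10, data))  # 16 -- quietly wrong, no error raised
    ```

    A human reader shrugs off that kind of slip; the CPU just executes the wrong program.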

    So I’d expect a generative AI to be decent at putting out content intended to be consumed by humans – and we have, in fact, had a number of impressive examples of that working. But I’d expect it to be less good at putting out content intended to be consumed by a CPU.

    I think that that lack of tolerance for error, plus the need to pull in information from the real world, is going to make translating human language to machine language less of a good match than translating human language to human language, or human language to a human-consumable image.