• Telorand@reddthat.com
    2 months ago

    Wow, the text generator that doesn’t actually understand what it’s “writing” is making mistakes? Who could have seen that coming?

    I once asked one to write a basic 50-line Python program (just to flesh things out), and it made so many basic errors that any first-year CS student could have caught them. Nobody should trust LLMs with anything related to security, FFS.

    • skillissuer@discuss.tchncs.de

      Nobody should trust LLMs with anything

      ftfy

      also any inputs are probably scrapped and used for training, and none of these people get GDPR

      • mox@lemmy.sdf.org

        also any inputs are probably scraped

        ftfy

        Let’s hope it’s the bad outputs that are scrapped. <3

      • curbstickle@lemmy.dbzer0.com

        Eh, I’d say mostly.

        I have one right now that looks at data and says “Hey, this is weird, here are related things that are different when this weird thing happened. Seems like that may be the cause.”

        Which is pretty well within what they are good at, especially if you are doing the training yourself.
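        A hand-rolled sketch of that kind of check - flag the point that deviates hard from the rest, then go look at what else changed there - might look like this (illustrative only, not the actual trained model):

```python
from statistics import mean, stdev

def flag_outliers(series, threshold=2.0):
    """Flag indices whose value sits more than `threshold` sample
    standard deviations away from the series mean."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) > threshold * sigma]

# One reading spikes; the sketch points at where the weird thing happened.
readings = [10, 11, 9, 10, 12, 10, 11, 95, 10, 11]
print(flag_outliers(readings))  # → [7]
```

        The real value is in the second step the comment describes - correlating the flagged timestamps with other fields - but the "hey, this is weird" part really is this mundane.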

    • SketchySeaBeast@lemmy.ca

      I wish we could say the students will figure it out, but I’ve had interns ask for help and then I’ve watched them try to solve problems by repeatedly asking ChatGPT. It’s the scariest thing - “Ok, let’s try to think about this problem for a moment before we - ok, you’re asking ChatGPT to think for a moment. FFS.”

        • djsaskdja@reddthat.com

          Has critical thinking ever been taught? It feels like it’s just something you have or you don’t.

          • Sauerkraut@discuss.tchncs.de

            Critical thinking is essentially learning to ask good questions and also caring enough to follow the threads you find.

            For example, if mental health is to blame for school shootings then what is causing the mental health crisis and are we ensuring that everyone has affordable access to mental healthcare? Okay, we have a list of factors that adversely impact mental health, what can we do to address each one? Etc.

            Critical thinking isn’t hard, it just takes time and effort.

            • Aceticon@lemmy.world

              I have the impression that most people (or maybe it’s my faith in Humanity that’s at an all-time low and it’s really just “some people”) just want pre-chewed explanations handed to them rather than spending the time and energy to figure things out themselves - basically baby pap as ideas-food rather than cooking their own ideas-food from raw ingredients.

              Certainly that would help explain the resurgence of Populist sloganeering and the continued popularity of Religion (with its ever-popular simple explanations of “Deity did it” and “it’s the will of Deity”).

              • Clinicallydepressedpoochie@lemmy.world

                In my mind, it’s really just entitlement. Something along the lines of, “well, I don’t know the answer, and why should I have to know if someone else is going to figure it out?”

                In a tired way, I understand it. Every day I just want some of my time back for myself. If I’m always the one who has to work through all the problems for my ideas, just to be ignored, then I’m going to be perpetually frustrated. So if my ideas are half-baked and the solutions I barf up aren’t to your liking, well, figure it out yourself.

                Not to say that I am this way. I don’t get frustrated when my ideas are ignored. I do get frustrated, though, when others eat up half-baked ideas knowing they are just that.

                Sorry if what I’ve written so far has gotten a bit confusing. I’ll wrap it up and say: it’s entitlement. People don’t want to think for themselves because it’s time-consuming. They think the world should order itself in a way that fulfills their needs with minimal effort on their part - except they can’t comprehend how the world would have to be ordered for that to be reality, because no one has really figured that one out. So they fall back on god, and god’s an easy out because, duh, he’s god.

          • Aceticon@lemmy.world

            Critical thinking, especially Skepticism, does not make for good Consumers or mindless followers of Political Tribes.

          • CheeseNoodle@lemmy.world

            British primary schools used to have something called ‘problem solving’. It was usually a simple maths problem described in words that required some degree of critical thinking to solve. E.g.: a frog is at the bottom of a 30m well; it climbs 7m each day, but in the night it slides 3m back down in its sleep. You can’t just calculate 30/(7-3), because that doesn’t account for the day the frog gets over the top and thus doesn’t slide back down in its sleep.

            Not the most complex problem but pretty good for kids under 10 to start getting the basics.
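            The trap is easy to show in code - a quick day-by-day simulation gives 7 days, while the naive 30/(7-3) = 7.5 misses the early exit:

```python
def days_to_escape(depth, climb, slide):
    """Simulate day by day: the frog escapes mid-day, so the last day
    has no nightly slide - which the naive depth/(climb-slide) misses."""
    height, days = 0, 0
    while True:
        days += 1
        height += climb
        if height >= depth:  # over the top during the day
            return days
        height -= slide

print(days_to_escape(30, 7, 3))  # → 7, not 30/(7-3) = 7.5
```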

          • stephen01king@lemmy.zip

            Nah, it’s something you’re lucky enough to learn coincidentally or you don’t. And if you found out too late in life, you might be too stubborn to learn it at that point.

      • sugar_in_your_tea@sh.itjust.works

        I had a chat w/ my sibling about the future of various careers, and my argument was basically that I wouldn’t recommend CS to new students. There was a huge need for SW engineers a few years ago, so everyone and their dog seems to be jumping on the bandwagon, and the quality of the applicants I’ve had has been absolutely terrible. It used to be that you could land a decent SW job without having much skill (basically a pulse and a basic understanding of scripting), but I think that time has passed.

        I absolutely think SW engineering is going to be a great career long-term, I just can’t encourage everyone to do it because the expectations for ability are going to go up as AI gets better. If you’re passionate about it, you’re going to ignore whatever I say anyway, and you’ll succeed. But if my recommendation changes your mind, then you probably aren’t passionate enough about it to succeed in a world where AI can write somewhat passable code and will keep getting (slowly) better.

        I’m not worried at all about my job or anyone on my team, I’m worried for the next batch of CS grads who chatGPT’d their way through their degree. “Cs get degrees” isn’t going to land you a job anymore, passion about the subject matter will.

        • Aceticon@lemmy.world

          Outsourcing killed a lot of the junior and even mid-level career level opportunities in CS and AI seems on track to do the same.

          The downside is that going into CS now (and having gone into CS in the last decade or so, especially in English-speaking countries) is basically the career equivalent of sprinting off the starting line straight into a brick wall.

          The upside is that for anybody who now is a senior techie things have never been this good because there are significantly fewer people at that level than there is need for such people, since in the last decade or so a lot of people haven’t had the chance to progress in their careers to that point.

          Whilst personally this benefits me, I’m totally against this shit and what it has done to the kids entering my career.

          • sugar_in_your_tea@sh.itjust.works

            Yup, and that’s why I’ll discourage people from entering my career, not because it’s a bad gig and it’s going away, but because the bar for competency is about to go up. Do it if you’re passionate and you’ll probably do well for yourself, but don’t do it if you’re just looking for a good job. If you just want a good job, go into nursing, accounting, or the trades.

            • Aceticon@lemmy.world

              I think it’s even worse than the bar for competency just going up: even for a coding wizard entering the career, it’s a lot harder to squeeze through the bottleneck of getting an entry-level position nowadays unless they have some public proof out on the Net of how good they are at coding (say, commits in open source projects, their own public projects, or even YouTube videos about it).

              This is something that will negatively impact perfectly capable young developers who have an introverted personality type (which is most of them, in my experience, even in domains such as hacking), since some of the upsides of introversion are a greater capacity for really focusing on things and for detailed analysis - both things that make for the best programmers - and self-publicising isn’t part of the required skillset for good developers (though sooner or later the best ones will have to learn some “image management” if they end up in the corporate world).

              I’m a bit torn on this. On one side, salesmanship becoming more of a criterion in one’s chances of getting a break at the start of a development career is bad news (good coding and good salesmanship tend to be inversely correlated). On the other side, a junior developer with some experience actually working with other people on real projects with real users (because they contributed to existing open source projects) has already started learning what we have to teach fresh-out-of-uni developers to make them professionals.

              • sugar_in_your_tea@sh.itjust.works

                it’s a lot harder to squeeze through the bottleneck

                Eh, I think that’s overblown. As someone involved in hiring, we go through a ton of crappy candidates before finding someone half-decent, and when we see someone who actually knows what they’re doing, we rush them through the process. The problem is that we’re not a big tech company, we’re in manufacturing, but we do interesting things w/ software. So getting on at one of the big tech companies may be challenging, but if you broaden the scope a little, there are tons of jobs waiting. We’ve had junior positions open for months because the hiring pool is so trash, but when we see a good candidate, we can get an offer to them by the end of the week.

                We don’t care too much about broader visibility (though I will look at your code if you provide a link), we expect competency on our relatively simple coding challenges, as well as a host of technical questions. We also don’t mind hiring immigrants, we’ve sponsored a number of immigrants on our team.

                introversion

                As an introvert myself, I totally get it. I got my job because a recruiter reached out to me, not because I was particularly good at following up with applications. And that’s why I tend to tell people to not get into CS. I encourage them to take CS classes if they’re offered, but not to make it a career choice, and this is for two reasons:

                • manage expectations of the future of CS - junior jobs are likely to contract a bit w/ AI
                • thin the field so it’s easier to find the good candidates - we have to go through 5-10 candidates before we find someone we like
                • Aceticon@lemmy.world

                  I see. That does change the idea I had about things a bit.

                  It’s been a while since I was last hiring.

                  I wasn’t aware that the problem nowadays in the West (or at least the US) was an excess of people who don’t really have a natural skill for it choosing software development as a career.

                  That kind of thing was one of the main problems with outsourcing to India maybe a decade ago: the profession was comparatively very well paid for the country, so it attracted far too many people without the right skills, resulting in a really low average quality of programmers there. India had really good programmers just like everywhere else, but also a ton of people working as programmers who should never have gone into it, so the experience of dealing with outsourced programming in India was usually pretty bad. (I was remotely the technical lead for a small outsourced team in India from London, and they were really bad - whilst, curiously, the good programmers from the Indian subcontinent I worked with had emigrated and were working in London and New York.)

                  • sugar_in_your_tea@sh.itjust.works

                    Yeah, there was a huge spike in demand for software engineers in the mid-2010s or so, and a massive explosion during COVID, so a lot of less qualified people were handed jobs as long as they could write very basic Python/JavaScript. But after the drop in tech demand after COVID, companies realized they overhired, so they did a ton of layoffs, mostly shucking those low-skilled employees (at least in the first round or two).

                    Maybe it’s different in my area, but we have a ton of applicants whose only dev experience is a bootcamp program, and they apply for a full-time position when many aren’t even qualified for a part-time intern position. They would utterly fail to answer most of our questions, and not make any progress on our (relatively simple) programming challenges. We have a mix of theory (i.e. OO principles, ACID, etc) and practical questions (e.g. concurrency vs parallelism in JavaScript or Python, duck typing, etc). CS grads with little practical experience would fly through the theory, but fail on the practical questions. Bootcamp “grads” would fail miserably at theory, and maybe get half of the practical questions, but then fail on the challenge. Dedicated hobbyists and passionate CS/bootcamp grads would do okay at theory (esp. if we change terminology) and fly through the practical questions and coding challenge.
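                    (For the curious: the duck-typing question is about as deep as this toy sketch - not our actual challenge, just the idea that Python code cares about behaviour, not declared type.)

```python
class Logfile:
    def read(self):
        return "log line"

class FakeSocket:
    def read(self):
        return "packet"

def first_read(source):
    # Duck typing: no isinstance() check - anything with .read() works.
    return source.read()

print(first_read(Logfile()))     # → log line
print(first_read(FakeSocket()))  # → packet
```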

                    It’s pretty easy to weed a lot of those out in our first round, but the sheer volume of terrible applicants makes hiring super time-consuming. I mostly do second round interviews now (my boss, the director, has the thankless first round job), but I’ve done my fair share of first round interviews as well. I was involved in interviews almost 10 years ago, and I remember there being a lot fewer bad applicants then (i.e. of 5 applicants, 2 were hirable; now it’s like 1 in 10, on a good week).

                    To be fair, I don’t work for a tech company, so we’re not really anyone’s first pick. My company manufactures things, and our software wing is pretty new and IMO really interesting (we do a lot of complex modeling), but it’s not a place most would think to apply to, which is probably why we attract more desperate people. Then again, my last company was similar and much less visible (it had something like 30-40 employees; my current company has hundreds locally and thousands globally), yet we got better applicants. The main difference here is time: the CS programs at local universities have a ton more enrollment (some are actually turning away people now), whereas when I went it wasn’t very popular, and I don’t recall bootcamps really being a thing.

                    As for India, they have a lot of great talent and their IT/programming programs are super competitive (i.e. something like 5% of applicants get in, and only 40-60% of grads get jobs). However, the common thread I’ve seen is that Indian developers are very reliant on requirements, and they’ll build pretty much exactly what you specify (i.e. if something seems off, they won’t raise concerns). A lot of this is cultural, and I’ve heard horror stories from my Indian coworkers about managers telling them to do ridiculous things because of what a client said instead of pushing back. For example, my coworker spent well over a week re-implementing a standard Android behavior because the client specified something slightly outside what was possible through standard APIs, but probably would’ve preferred the 5 min solution. An American dev would just ask the customer and probably end up doing the 5-minute solution. Maybe these are isolated incidents, but I’ve seen similar behavior in different projects. If you know this upfront, you can get a good result with regular checkins and small adjustments as you go, but a lot of people don’t understand that.

                    As it stands, since we can’t hire good devs here in the US, we’ve been forced to hire outside firms to fill our ranks. We now have a team in Europe and another in India because we can’t find the proper talent here, and not for lack of trying. And I don’t think our expectations are out of whack; I just think the good devs already have good jobs, and the less qualified devs are the ones getting laid off. We have hired a few good devs in the last few years (not rockstars, just solid devs), so it’s not like everyone who lost their job is unqualified - there just seems to be a lot of unqualified people.

      • pirat@lemmy.world

        Altering the prompt will certainly give a different output, though. Ok, maybe “think about this problem for a moment” is a weird prompt; I see how it actually doesn’t make much sense.

        However, including something along the lines of “think through the problem step-by-step” in the prompt really makes a difference, in my experience. The LLM will then, to a higher degree, include sections of “reasoning”, thereby arriving at an output that’s more correct or of higher quality.

        This, to me, seems like a simple precursor to the way a model like the new o1 from OpenAI (partly) works: it “thinks” about the prompt behind the scenes, presenting the user only the resulting output and a (hidden by default) generated summary of the raw “thinking”.

        Of course, it’s unnecessary - maybe even stupid - to include nonsense or smalltalk in LLM prompts (unless it has proven to actually enhance the output you want), but since (some) LLMs happen to be lazy by design, telling them what to do (like reasoning) can definitely make a great difference.
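        The whole trick is just prepending the instruction. A minimal sketch of such a prompt wrapper (the wording and function name are made up; feed the result to whatever chat API you use):

```python
def with_reasoning(task: str) -> str:
    """Wrap a task in an explicit step-by-step instruction; the wrapped
    string is what gets sent as the user prompt."""
    return (
        "Think through the problem step-by-step before answering.\n"
        "List your reasoning first, then state the final result.\n\n"
        f"Task: {task}"
    )

print(with_reasoning("Write a function that validates an IPv4 address."))
```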

        • Blaster M@lemmy.world

          And that’s why I’m the one who fixes the PC when it breaks… because even good programmers may consider the PC a magic box if they’ve never turned a screwdriver in their lives…

    • blackjam_alex@lemmy.world

      My experience with ChatGPT goes like this:

      • Write me a block of code that makes x thing
      • Certainly, here’s your code
      • Me: This is wrong.
      • You’re right, this is the correct version
      • Me: This is wrong again.
      • You’re right, this is the correct version
      • Me: Wrong again, you piece of junk.
      • I’m sorry, this is the correct version.
      • (even more useless code) … and so on.
      • saltesc@lemmy.world

        All the while it gets further and further from the requirements. So you open five more conversations, give them the same prompt, and try to pick the one that’s least wrong.

        All the while realising you did this to save time, but at this point coding from scratch would have been faster.

      • sugar_in_your_tea@sh.itjust.works

        I interviewed someone who used AI (CoPilot, I think), and while it somewhat worked, it gave the wrong implementation of a basic algorithm. We pointed out the mistake, the developer fixed it (we had to provide the basic algorithm, which was fine), and then they refactored and AI spat out the same mistake, which the developer again didn’t notice.

        AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion), but you really do need to know what you’re doing. I recommend new developers avoid AI like the plague until they can use it to cut out the mundane stuff instead of filling in their knowledge gaps. It’ll do a decent job at certain prompts (i.e. generate me a function/class that…), but you’re going to need to go through line-by-line and make sure it’s actually doing the right thing. I find writing code to be much faster than reading and correcting code so I don’t bother w/ AI, but YMMV.

        An area where it’s probably ideal is finding stuff in documentation. Some projects are huge and their search sucks, so being able to say “find the docs for a function in library X that does…” is great. I know what I want, I just may not remember the name or the module, and I certainly don’t remember the argument order.

        • 9488fcea02a9@sh.itjust.works

          AI is fine if you know what you’re doing and can correct the mistakes it makes (i.e. use it as fancy code completion)

          I’m not a developer and I haven’t touched code for over 10 years, but when I heard about my company pushing AI tools on the devs, I thought exactly what you said. It should be a tool for experienced devs who already know what they’re doing…

          Lo and behold, they did the opposite… They fired all the senior people and pushed AI on the interns and new grads… and then expected AI to suddenly make the junior devs work like the expensive senior devs they just fired…

          Wtf

        • slaacaa@lemmy.world

          AI is like having an intern you can delegate to. If you give it a simple enough task with clear direction, it can come up with something useful, but you need to check.

      • TaintPuncher@lemmy.ml

        That sums up my experience too, but I have found it good for discussing functions for SQL and PowerShell. Sometimes it’ll throw something into its garbage code and I’ll ask, “what does this do?” It’ll explain how it’s supposed to work, and I’ll then work out its correct usage and solve my problem. Weirdly, it’s almost MORE helpful than if it just gave me functional code, because I have to learn how to properly use it rather than just copy/paste what it gives me.

        • Telorand@reddthat.com

          That’s true. The mistakes actually make learning possible!

          Man, designing a CS curriculum will be easy in the future. Just ask it to do something simple, and ask your CS students to correct the code.

    • WalnutLum@lemmy.ml

      I like using it like a rubber ducky. I even have it respond almost entirely in quacks.

      Note: it’s a local model running for free. Don’t pay anyone for this slop.

    • Terrasque@infosec.pub

      What LLM did you use, and how long ago was it? Claude Sonnet usually writes pretty good Python for smaller scripts (a few hundred lines).

      • Telorand@reddthat.com

        It was ChatGPT from earlier this year. It wasn’t a huge deal for me that it made mistakes, because I had a very specific use case and just wanted to save some time; I knew I’d have to troubleshoot grafting it into my function. But even after I pointed out that it was using deprecated syntax (and how to correct it), it just spat out the code again with even more errors, still using the deprecated syntax.

        All LLMs will fail like this in some way, because they don’t actually understand what they’re generating (i.e. they have no mechanism for self-evaluating the veracity of their statements).

        • Terrasque@infosec.pub

          This is a very simple one, but someone lower down apparently had issue with a script like this:

          https://i.imgur.com/wD9XXYt.png

          I tested the code, it works. If I was gonna change anything, I’d probably move the matplotlib import to after the else, so it’s only imported when needed to display the image.

          I have a lot more complex generations in my history, but all of them have personal or business details and much more back and forth. But try it yourself - Claude has a free tier. Just be clear in the prompt about what you want. It might surprise you.
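          The lazy-import idea is just this pattern (a generic sketch, not the generated script from the screenshot):

```python
def process(data, display=False):
    result = [x * 2 for x in data]  # the cheap part always runs
    if display:
        # Lazy import: the heavy plotting dependency is only loaded
        # on the code path that actually needs it.
        import matplotlib.pyplot as plt
        plt.plot(result)
        plt.show()
    return result

print(process([1, 2, 3]))  # → [2, 4, 6], without ever importing matplotlib
```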

          • Telorand@reddthat.com

            I appreciate the effort you put into the comment and your kind tone, but I’m not really interested in increasing LLM presence in my life.

            I said what I said, and I experienced what I experienced. Providing me an example where it works is in no way a falsification of the core of my original comment: LLMs have no place generating code for secure applications apart from human review, because they don’t have a mechanism to comprehend or proof their own work.

              • Telorand@reddthat.com

                It’s already hard to not write buggy code, but I don’t think you will detect them by just reviewing LLM code, because detecting issues during code review is much harder than when you’re writing code.

                Definitely. That’s what I was trying to drive at, but you said it well.