If so, I’d like to know your answers to these questions:

  • Do you use AI code autocomplete, or do you type into a chat?
  • Do you consider the environmental damage that using AI can cause?
  • What type of AI do you use?
  • What do you usually ask AI to do?
  • 1hitsong@lemmy.ml · 1 month ago

    I don’t.

    I played around with it twice, but both times it gave me nonfunctioning code. It seemed stupid to use it when I’d still have to go back and rewrite it anyway.

  • 404@lemmy.zip · 1 month ago

    Generating quick programs like “a python script that calculates the mean value of two hex colours, outputting the result as an HTML file displaying the resulting three-colour gradient”? Yeah, AI is decent at stupid simple tasks like that, and it’s much faster than me writing the script or calculating the values myself. I tend to generate things like these when I’m working on something else, don’t want to spend time on things outside the project I’m working on, and can’t find a website that does the thing I want.

    Touching my actual code? Hell no.
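
For reference, a minimal sketch of the kind of throwaway script described in the comment above. The two input colours are hard-coded purely for illustration; this is just one plausible shape such a generated script might take:

```python
# Average two hex colours and write an HTML file showing the
# colour1 -> mean -> colour2 gradient for a quick visual check.

def hex_to_rgb(hex_colour: str) -> tuple:
    h = hex_colour.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in range(0, 6, 2))

def rgb_to_hex(rgb: tuple) -> str:
    return "#{:02x}{:02x}{:02x}".format(*rgb)

def mean_colour(a: str, b: str) -> str:
    ra, rb = hex_to_rgb(a), hex_to_rgb(b)
    return rgb_to_hex(tuple((x + y) // 2 for x, y in zip(ra, rb)))

if __name__ == "__main__":
    c1, c2 = "#336699", "#ffcc00"  # example inputs, hard-coded for illustration
    mid = mean_colour(c1, c2)
    html = (
        "<!DOCTYPE html><html><body>"
        f"<div style='height:200px;background:linear-gradient(to right, {c1}, {mid}, {c2})'></div>"
        f"<p>{c1} + {c2} -> mean {mid}</p>"
        "</body></html>"
    )
    with open("gradient.html", "w") as f:
        f.write(html)
    print("Mean colour:", mid)
```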

  • strlcpy@lemmy.sdf.org · 1 month ago

    My colleague uses it to generate rambling code that often pointlessly rewrites existing logic to solve all kinds of hallucinated problems. He doesn’t understand a bit of it, dumps it on me, and acts offended when asked to explain any of it.

  • Rayquetzalcoatl@lemmy.world · 1 month ago

    No, I don’t. I often have to fix the work of my colleague and my boss, who do use it. I often have to gently point out to my boss that just because the chatbot outputs results doesn’t mean those results are accurate or helpful.

  • BartyDeCanter@lemmy.sdf.org · 29 days ago

    I don’t use AI when I’m learning a new system, framework or language because I won’t actually learn it.

    I don’t use AI when I need to make a small change on a system I know well, because I can make it just as fast and have better insight into how it all works.

    I don’t use AI when I’m developing a new system because I want to understand how it works and writing the code helps me refine my ideas.

    I don’t use AI when I’m working on something with security or copyright concerns.

    Basically, the only time I use AI is when I’m making a quick throwaway script in a language I’m not fluent in.

  • saplyng@lemmy.world · 30 days ago

    I use AI as a rubber duck, to complement the rubber ducks on my desk when they don’t give enough feedback. So its use is mostly conceptual; I find that models that provide “thinking” output are perhaps more useful for that than whatever their actual answer is, because they ask questions about edge cases I might not have considered.

    As for code generation, I hate it. It outputs garbage, forgets things, hallucinates, and whatever thing it writes I’ll have to rewrite anyway to actually make it compile.

    As I’m fairly isolated at work, I think it makes a good pair-programming partner, so to speak, offering suggestions that I can take into consideration and research heavily if I think one is good.

  • melfie@lemy.lol · 29 days ago

    I use Copilot, mostly with Claude Sonnet 4.5. I don’t use the autocomplete because it’s useless and annoying. I mostly chat with it: I give it specific instructions for how to implement small changes, carefully review its code, and make it fix anything I don’t like. Then I have it write test scripts that use curl to call APIs and other methods to exercise the system in a staging environment and output data, so I can manually verify that all of its changes work as expected in case I overlooked something in the automated tests.

    As far as environmental impact goes, training is where most of the impact occurs; inference, RAG, querying vector databases, etc. are fairly minimal AFAIK.
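
For illustration, a rough Python sketch of the kind of staging smoke-test script described above (playing the role of the curl calls). The base URL and endpoint paths are invented; a real script would target the system actually under test:

```python
# Call a few API endpoints in a staging environment and print the responses
# so a human can eyeball them alongside the automated tests.

import json
import requests

STAGING_BASE = "https://staging.example.com/api"          # hypothetical staging host
ENDPOINTS = ["/health", "/orders?limit=5", "/users/42"]   # hypothetical endpoints

def main() -> None:
    for path in ENDPOINTS:
        url = STAGING_BASE + path
        resp = requests.get(url, timeout=10)
        print(f"== GET {url} -> {resp.status_code}")
        try:
            # Pretty-print JSON bodies for manual inspection.
            print(json.dumps(resp.json(), indent=2)[:2000])
        except ValueError:
            print(resp.text[:2000])

if __name__ == "__main__":
    main()
```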

  • CodenameDarlen@lemmy.world (OP) · 1 month ago

    My answer (OP): I use AI for short, small questions about things I already know but have forgotten, like “how to sort an array”, or about Linux commands, which I can test right away or check against the man page to make sure they work as intended.

    I do consider my privacy and the environment, so I use a local model (16b) for most of my questions, but for more complex things where I really need all the help I can get, I use DeepSeek Coder v3.1 (671b) in the cloud via ollama.

    I don’t use code autocomplete because it annoys me and doesn’t let me think about the code; I prefer to ask when I think I need it.

  • Lysergid@lemmy.ml · 1 month ago
    • I don’t use AI code autocomplete. It gave me nonsense and interrupted my thought process while I was writing code; standard non-AI autocomplete is much better. I tried using chat to generate medium-sized logic (up to 100 lines). Mostly it doesn’t work, or refining the prompt takes more time than writing the code myself, so I stopped using it for medium-sized tasks. I do use it for small tasks up to 20 lines where I need an example of how to use a specific API. What it does well is generating test cases (not the tests themselves). I once tried to have it summarize a set of made-up requirements (I can elaborate if anyone is interested); it failed miserably, which instantly gave me an idea of how far we are from AGI.
    • I don’t really consider it for my own usage, since I use it maybe once or twice a week on average. But generally, I think it’s a huge waste of resources: not only natural, but financial and human.
    • Claude Sonnet 4 at work. Mistral for personal curiosity episodes.
    • I already covered the work part. For personal use, mostly “searching” for random info I couldn’t find via DDG, or offloading social rituals such as congratulations.
  • Colloidal@programming.dev · 30 days ago

    Single function text prediction, class boilerplate, some refactoring.

    It’s decent when you inherit outrageously bad legacy code and you want better comments and variable names than “A, x, i”, etc.

    You do have to do it within an editor that highlights all changes so you can carefully review, though.

    It’s not so much a productivity boost as a bad intern you can delegate boring, easy tasks to. I’d rather review that kind of code than write it, but if you’re the other way around, it’s a punishment.

    • calcopiritus@lemmy.world · 30 days ago

      Renaming single-letter variables is maybe something I can see being easier to review than to do yourself.

      For any other kind of refactoring, though, IDE refactoring tools are instantaneous and deterministic.

      • Colloidal@programming.dev · 29 days ago

        When the code you have to deal with is ASP (not .NET) created by apes throwing shit at a wall, the kind of holistic bullshit an AI makes is an improvement.

  • lightnsfw@reddthat.com · 30 days ago

    Not a programmer, but when I need a more complicated PowerShell script for something, I ask Copilot first and then fix whatever it shits out so that it actually works how I want. It usually saves me some time…

  • tatterdemalion@programming.dev · 30 days ago

    As for actual coding, I sometimes use ChatGPT to write SDK glue boilerplate or to learn about API semantics. For this kind of stuff it can be much more productive than scanning API docs and trying to piece together how to write something simple. For example, writing a function to check whether an S3 bucket is publicly accessible would have taken me a lot longer without ChatGPT.

    In short: it has basically replaced Google and Stack Overflow in my workflow, at least as my first information source. I still have to fall back to a real search engine sometimes.

    I do not give LLMs access to my source code tree.

    Sometimes I’ll use it for ideas on how to write specific SQL queries, but I’ve found you have to be extremely careful with this use case because ChatGPT can hallucinate some pretty bad SQL.
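
For what it’s worth, here is a sketch of what the S3 public-access check mentioned above might look like with boto3. It is illustrative rather than exhaustive (account-level settings and access points can also affect exposure), and the bucket name at the bottom is made up:

```python
# Rough check of whether an S3 bucket looks publicly accessible, combining the
# public-access-block settings, the bucket policy status, and the bucket ACL.

import boto3
from botocore.exceptions import ClientError

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def bucket_looks_public(bucket: str) -> bool:
    s3 = boto3.client("s3")

    # If all four public-access-block flags are enabled, the bucket can't be public.
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        if all(cfg.values()):
            return False
    except ClientError as e:
        if e.response["Error"]["Code"] != "NoSuchPublicAccessBlockConfiguration":
            raise

    # Does the bucket policy make it public?
    try:
        if s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]["IsPublic"]:
            return True
    except ClientError as e:
        if e.response["Error"]["Code"] != "NoSuchBucketPolicy":
            raise

    # Does the ACL grant anything to the AllUsers group?
    acl = s3.get_bucket_acl(Bucket=bucket)
    return any(g.get("Grantee", {}).get("URI") == ALL_USERS for g in acl["Grants"])

if __name__ == "__main__":
    print(bucket_looks_public("my-example-bucket"))  # hypothetical bucket name
```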