• hperrin@lemmy.ca · 2 days ago (edited)

    Legitimately, no. I tried to use it to write code, and the code it wrote was dog shit. I tried to use it to write an article, and the article it wrote was dog shit. I tried to use it to generate a logo, and the logo it generated was both dog shit and a raster graphic, so I wouldn’t even have been able to use it.

    It’s good at answering some simple things, but sometimes even gets that wrong. It’s like an extremely confident but undeniably stupid friend.

    Oh, actually it did do something right. I asked it to help flesh out an idea and turn it into an outline, and it was pretty good at that. So I guess for going from idea to outline and maybe outline to first draft, it’s ok.

    • saigot@lemmy.ca · 12 hours ago

      My experience is that while it’s not much good at creating code from scratch, it’s pretty alright if you give it a script and ask it to modify it to do something else.

      For instance, I have a cron job that runs every 15 minutes and attempts to extract .rar files in a folder, emailing me if an extraction fails. The problem is that if something does go wrong, it emails me every 15 minutes until I fix it. This is especially annoying if it’s stuck copying a rar at 99%.

      I asked DeepSeek to store failed file names in a file and have the script ignore those files for an increasing amount of time after each failure. It did a pretty good job, although it changed the name of a variable halfway through (an easy fix) and added a comment claiming it fixed a typo despite changing nothing on that line. I probably would have written almost identical code myself, but it definitely saved me time and effort.
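
      The general idea, sketched in Python (the folder, the state file, and the unrar call here are illustrative, not the actual script):

          #!/usr/bin/env python3
          # Sketch of the backoff idea described above; every name and path
          # here is made up for illustration.
          import json
          import subprocess
          import sys
          import time
          from pathlib import Path

          WATCH_DIR = Path("/srv/downloads")          # hypothetical folder of .rar files
          STATE = Path("/var/tmp/rar_failures.json")  # hypothetical failure-tracking file
          BASE_DELAY = 15 * 60                        # one cron interval, in seconds

          def main():
              state = json.loads(STATE.read_text()) if STATE.exists() else {}
              now = time.time()
              for rar in sorted(WATCH_DIR.glob("*.rar")):
                  entry = state.get(rar.name, {"failures": 0, "next_try": 0})
                  if now < entry["next_try"]:
                      continue  # still backing off: skip quietly instead of re-emailing
                  result = subprocess.run(
                      ["unrar", "x", "-o+", str(rar), str(WATCH_DIR) + "/"],
                      capture_output=True, text=True)
                  if result.returncode == 0:
                      state.pop(rar.name, None)  # success: forget any past failures
                  else:
                      entry["failures"] += 1
                      # the wait doubles with each failure: 15 min, 30 min, 60 min, ...
                      entry["next_try"] = now + BASE_DELAY * 2 ** (entry["failures"] - 1)
                      state[rar.name] = entry
                      # the real script sends email here; printing stands in for that
                      print(f"failed to extract {rar.name}: {result.stderr}", file=sys.stderr)
              STATE.write_text(json.dumps(state))

          if __name__ == "__main__":
              main()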

    • jsomae@lemmy.ml · 1 day ago

      Crappy but working code has its uses. Code that might or might not work also has its uses. You should primarily use LLMs in situations where you can accept a high error rate. For instance, in situations where output is quick to validate but would take a long time to produce by hand.

    • Bob Robertson IX @discuss.tchncs.de · 1 day ago

      The output is only as good as the model being used. If you want to write code, use a model designed for code. Over the weekend I wrote an Android app to connect my phone to my Ollama instance from off my network. I’ve never done any coding beyond scripts, and the AI walked me through setting up the IDE and a git repository before we even got started on the code. Three hours after I had the idea, the app was installed and working on my phone.
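
      For anyone wondering what that involves: the core of such an app is a single POST to Ollama’s /api/generate endpoint. A Python sketch of that call (the host and model name are placeholders for your own setup; this is the gist of the request, not the app’s actual code):

          import json
          from urllib import request

          # Placeholders: swap in your own Ollama host/port and an installed model.
          OLLAMA_URL = "http://192.168.1.50:11434/api/generate"
          MODEL = "llama3"

          def ask(prompt: str) -> str:
              body = json.dumps({"model": MODEL, "prompt": prompt,
                                 "stream": False}).encode()
              req = request.Request(OLLAMA_URL, data=body,
                                    headers={"Content-Type": "application/json"})
              # Non-streaming responses come back as one JSON object with the
              # generated text in the "response" field.
              with request.urlopen(req) as resp:
                  return json.loads(resp.read())["response"]

          print(ask("Why is the sky blue?"))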

      • hperrin@lemmy.ca · 1 day ago

        I didn’t say the code didn’t work. I said it was dog shit. Dog shit code can still work, but it will have problems. What it produced looks like an intern wrote it. Nothing against interns, they’re just not gonna be able to write production quality code.

        It’s also really unsettling to ask it about my own libraries and have it answer questions about them. It was trained on my code, and I just feel disgusted about that. Like, whatever, they’re not breaking the rules of the license, but it’s still disconcerting to know that they could plagiarize a bunch of my code if someone asked the right prompt.

        (And for anyone thinking it, yes, I see the joke about how it was my bad code that it trained on. Funny enough, some of the code I know was in its training data is code I wrote when I was 19, and yeah, it is bad code.)