College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

  • WackyTabbacy42069@reddthat.com · 1 year ago

    AI doesn’t necessitate a machine being capable of stringing the complex English language into a coherent series of steps toward a goal. That a machine can do this at all is remarkable, however naive some of its outputs may be. You may be confusing AI with AGI, which is when the intelligence and reasoning level is at or slightly greater than a human’s.

    The only real requirement for AI is that a machine take actions in an intelligent manner. Web search engines, dynamic traffic lights, and chess bots all qualify as AI, despite none of them being able to tell you rubbish in proper English.

    • TimewornTraveler@lemm.ee · edited · 1 year ago

      The only real requirement for AI is that a machine take actions in an intelligent manner.

      There’s the rub: defining “intelligent”.

      If you’re arguing that traffic lights should be called AI, I’m on the same page. We believe the same things: that ChatGPT isn’t any more “intelligent” than a traffic light. But you want to call them both intelligent, and I want to call neither of them that.

      • throwsbooks@lemmy.ca · 1 year ago

        I think you’re conflating “intelligence” with “being smart”.

        Intelligence is more about taking in information and being able to make a decision based on that information. So yeah, automatic traffic lights are “intelligent” because they use a sensor to check for the presence of cars and “decide” when to switch the light.
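
        The traffic-light example can be sketched in a few lines: sensor input comes in, a decision comes out. This is a made-up toy, not a real controller; the state names and the 30-second threshold are invented for illustration.

```python
def next_light_state(current: str, car_waiting: bool, seconds_green: int) -> str:
    """Decide the next light state from sensor input.

    A toy "intelligent" decision: take in information (a car-presence
    sensor and a timer) and choose an action based on it.
    """
    if current == "green" and car_waiting and seconds_green >= 30:
        return "red"    # cross-traffic is waiting and we've had our turn
    if current == "red" and not car_waiting:
        return "green"  # nobody waiting on the cross street, switch back
    return current      # otherwise hold the current state
```

        By this article's (and the comment's) loose definition, even this handful of if-statements "acts in an intelligent manner".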

        Acting like some GPT is on the same level as a traffic light is silly though. On a base level, yes, it “reads” a text prompt (along with any messaging history) and decides what to write next. But that decision it’s making is much more complex than “stop or go”.

        I don’t know if this is an ADHD thing, but when I’m talking to people, sometimes I finish their sentences in my head as they’re talking. Sometimes I nail it, sometimes I don’t. That’s essentially what ChatGPT is: a sentence finisher that happened to read a huge amount of text content on the web, so it’s got context for a bunch of things. It doesn’t care if it’s right and it doesn’t look things up before it says something.
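
        The “sentence finisher” idea can be sketched as a toy next-word predictor: count which word tended to follow which in text it has “read”, then always guess the most common continuation. This is a deliberate oversimplification of GPT (which uses a learned neural network, not raw counts), and the tiny training text is invented.

```python
from collections import Counter, defaultdict

# The text our toy model has "read". A real model reads a huge slice of the web.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count what followed it.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def finish(word: str) -> str:
    """Predict the most frequent continuation seen in training."""
    return followers[word].most_common(1)[0][0]
```

        Like the comment says, the predictor doesn’t know or care whether its guess is true; it only knows what usually came next in what it read.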

        But to have a computer be able to do that at all?? That’s incredible, and it took over 50 years of AI research to hit that point (yes, it’s been a field in universities for a very long time, with most of that time spent with people saying it was impossible), and we only hit it because our computers got powerful enough to do it at scale.

      • sin_free_for_00_days@sopuli.xyz · 1 year ago

        I’m with you on this and think the AI label is just stupid and misleading. But times and language change, and you end up being a Don Quixote type figure.