But is it convincing enough to attend meetings for me?
Ugh, someone recently sent me LLM-generated meeting notes for a meeting that only a couple of colleagues were able to attend. They sucked, a lot. They got a number of things completely wrong, duplicated the same random note a bunch of times in bullet lists, and just didn’t seem to reflect what was actually talked about. Luckily a coworker took their own real notes, and comparing the two made it clear the LLM notes were doing more harm than good. It’s not exactly the same thing, but no, we’re not there yet.
You just have to love that these assholes are so lazy that they first use an LLM to write their work, but then are also too lazy to quickly proofread what the LLM spat out.
People caught doing this should be fired on the spot; you’re not doing your job.
I hosted a meeting with about a dozen attendees recently, and one attendee silently joined with an AI note-taking bot and immediately went AFK.
It was in the meeting for about five minutes before we clocked it and kicked it out. It automatically circulated its notes. Amusingly, 95% of them were “is that a chat bot?”, “Steve, are you actually on this meeting?”, “I’m going to kick Steve out in a minute if nobody can get him to answer”, etc. But even with that level of asinine, low-impact chat, the bot still managed to garble the notes to the point of being barely legible.
Also: what a dick move.
Asking the important questions
Or family reunions.
…Asking for a friend.
What does an AI look like in jorts?
a virtual replica of you is able to embody your values and preferences with stunning accuracy.
I’m calling BS on this one. “Values and preferences” are such a far cry from Actual Personality that it’s meaningless. Just more LLM hype.
Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy.
Okay, but can it embody my traumas?
Maybe some of the symptoms of the traumas that you exhibited during the interview.
lol because people always behave in ways consistent with how they tell an interviewer they will.
If I can make a version of me that likes its job, then that will be a deviation from the template that’s worth having. Assuming this technology actually worked, an exact digital replica of me isn’t particularly useful. It’s just going to automate the things I was going to do anyway, and if I was going to do them anyway, they aren’t really a hassle worth automating.
What I want is a version that has all of my knowledge but infinitely more patience, and preferably one that actually understands tax law. I need an AI to do the things I hate doing, but I can see the advantage of customizing it with my values to a certain extent.