only advertisers, not consumers, prefer video over text.
This is so goddamn true it’s not even funny. One thing in particular that really pisses me off is when I’m looking for a relatively simple piece of information, like how to beat a level I’m stuck on in a game, and every single result is a 15 minute video I need to scrub through in order to find the same information I could have scanned a text block for and found in under a minute.
I will search around for 10 minutes before I watch a video if I need a simple answer. I realize that’s probably a bigger waste of my time, but I don’t want to encourage that shit by adding to the algorithm.
The search engines like it if you take longer to get your result; they get to show you more ads.
I guess I’m screwed either way then, but at least I don’t have to wait for the video to get to the fucking point to find out whether or not it’s even helpful.
This is another reason I’m a fan of Kagi.
At a large technology conference I attended recently, I saw a demonstration where the URL of a video was handed to an AI bot. Some very detailed prompts asking for specific information were given to the bot, and it pulled out all the info the user requested.
So maybe we’ll have ubiquitous AI to do the scrubbing for those 5 second answers now buried in 15 minute videos.
Sounds like it searched the subtitles, found the time stamps, and returned the relevant text. Useful, but ultimately a pretty simple bot.
Much more impressive if it “watched” the video for the first time, formed its own subtitles, then pulled the data out. That would be a feat.
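To make “searched the subtitles” concrete, something like the sketch below would cover it. It assumes the caption track has already been fetched as (start time, text) segments; the toy transcript is made up and no particular platform API is implied.

```python
# Minimal sketch of a "search the subtitles, return the timestamps" bot.
# Assumes the transcript was already fetched as (start_seconds, text) segments
# from the video's caption track; the transcript below is a made-up example.

def search_transcript(segments, query):
    """Return every (start, text) segment whose text contains the query."""
    query = query.lower()
    return [(start, text) for start, text in segments if query in text.lower()]

def fmt(seconds):
    """Format seconds as m:ss for display."""
    minutes, secs = divmod(int(seconds), 60)
    return f"{minutes}:{secs:02d}"

if __name__ == "__main__":
    transcript = [
        (12.0, "welcome back everyone, don't forget to like and subscribe"),
        (95.5, "to beat the boss, hide behind the pillar until it charges"),
        (140.2, "then hit the weak point on its back three times"),
    ]
    for start, text in search_transcript(transcript, "boss"):
        print(f"[{fmt(start)}] {text}")
```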
Much more detailed than that. In the video there was a 3-piece band playing for a few seconds on screen. The user prompt asked: “Tell me where I can buy the shirt the keyboardist is wearing at timestamp 32 seconds”. The bot found the website of the vendor selling the shirt.
Okay, that’s pretty neat, but at the same time it’s basically the same as loading a still image into a current AI image matching suite and having it identify a keyboard, then a shirt near the keyboard, then reverse image search that t-shirt. It’s super cool to be able to do, but kinda standard at this point.
I guess the interactivity, being able to feed in a URL on the fly, is the value add. I still would have liked my “generate subtitles then search them” imaginary bot more though.
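For what it’s worth, that imaginary bot isn’t far-fetched. Here’s a rough sketch, assuming the open-source openai-whisper package for speech-to-text and a locally downloaded copy of the video (“walkthrough.mp4” is a hypothetical filename):

```python
# Rough sketch of the "generate subtitles, then search them" idea.
# Assumes the open-source openai-whisper package (pip install openai-whisper)
# and a locally downloaded copy of the video; the filename is hypothetical.

import whisper

def transcribe_and_search(path, query, model_name="base"):
    """Transcribe a media file locally, then return segments matching the query."""
    model = whisper.load_model(model_name)   # small local speech-to-text model
    result = model.transcribe(path)          # yields timestamped text segments
    query = query.lower()
    return [
        (seg["start"], seg["text"].strip())
        for seg in result["segments"]
        if query in seg["text"].lower()
    ]

if __name__ == "__main__":
    for start, text in transcribe_and_search("walkthrough.mp4", "boss"):
        minutes, secs = divmod(int(start), 60)
        print(f"[{minutes}:{secs:02d}] {text}")
```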
Or game completion guides that insist on using 10 minute video clips instead of just putting a mark on a map.
Makes you wonder how much Google search getting shitty enhances their YouTube profits.
I have a related thought. A lot of people are not good at reading. Those people are underrepresented on a text platform like this, but they’re out there.
Something like half of US adults cannot read at a 6th grade level. ( https://www.apmresearchlab.org/10x-adult-literacy )
The US’s education system is spotty. That doesn’t help. There’s also a long podcast about how reading is often taught badly: https://features.apmreports.org/sold-a-story/ (yes, it has a transcript)
But there are probably a lot of people who secretly sigh with relief when they find that five minute video instead of the two paragraph answer. They legit struggle to read it, and that’s uncomfortable and embarrassing.
It’s a fair point. Someone close to me would be like this.
That said, if a chatbot can explain verbally, would that be better than video in most cases?
Verbally, like read aloud? Probably worse, because at least with video you can usually fast forward and see the preview to get a gist of what you’re looking at. Like if it’s a video game walkthrough, I can fast forward until I see a part I recognize.
Not to mention, if the truth socialers can see each other, they’ll be able to see what fugly chuds they all are.
There’s no way any of them would be on camera without the bare minimum of: old baseball cap backwards, bandana, tacticool sunglasses
Beard or goatee. Filmed vertically in a pickup truck. Parked and non-moving optional.
Unfortunately, written walkthroughs are often terribly put together and the pictures are usually a fucking mess.
Just goes to show that enshittification hits everyone.
Why does this go to show that?
I’m saying even Truth Social isn’t immune to Wall Street fuckery.
Ok, that’s not enshittification though.
Enshittification is when a company initially provides a good service, often at competitive prices. But as their market saturates (or they just establish themselves as a monopoly), they start turning to shittier and shittier ways to increase profit, driven by the demand that businesses must continually grow their profits or fail.
“Enshittification” isn’t the same as “stuff gets shittier”.
Steam forums are your friend.
LOL. Steam forums are a dumpster fire of insanity and chaos.
Yes, but they also tend to have useful info buried in there.