• 0 Posts
  • 20 Comments
Joined 1 year ago
Cake day: June 28th, 2023

  • The Legend of Zelda: The Minish Cap is one of my favorite games of all time. It’s the last Zelda game Capcom developed, and they made it a swan song. The main quest is pretty short, but it’s the sort of cozy game where doing the sidequests just feels right.

    In the game, you shrink down to the size of a mouse to traverse rafters and explore tiny temples and float on lily pads. It’s the sort of thing that would be no big deal in a 3D game, but is wildly ambitious in 2D. Not only do they pull it off, but they fill the environments with lush, lived-in detail that springs to life when you shrink down and look at it up close. The art style still sticks with me after 20 years.

    Also, forget all the “hey, listen” stuff: your sidekick Ezlo just sasses you the entire time. It’s great.

  • That’s the only way to offer free services?! What about donation-based models? Maybe Mozilla could have set up something like what Brave has, except not based around a sketchy cryptocurrency.

    Please correct me if I’m mistaken, but I thought Brave only gave donatable tokens to users as a reward for watching ads… ads which Brave curated for the user based on their activity. It’s just targeted ad revenue with extra steps.

    At first blush, it seems to me that both Brave and Anonym want to be the middleman for targeted advertising. What am I missing?


  • Solid point. A laptop battery is around 60Wh, and charging that in 1 minute would pull 3.6kW from the outlet, or roughly double what a US residential outlet can deliver.
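    The arithmetic can be sanity-checked in a few lines (illustrative figures only: a 60Wh battery, a one-minute charge, and a standard US 15A/120V circuit; charger efficiency losses ignored):

```python
# Back-of-the-envelope check of the fast-charging math.
battery_wh = 60        # typical laptop battery capacity, in watt-hours
charge_minutes = 1     # hypothetical ultra-fast charge time

# P = E / t, with t converted from minutes to hours
power_w = battery_wh * 60 / charge_minutes

outlet_w = 120 * 15    # continuous limit of a US 15 A / 120 V circuit

print(f"required: {power_w:.0f} W")            # 3600 W
print(f"outlet:   {outlet_w} W")               # 1800 W
print(f"ratio:    {power_w / outlet_w:.1f}x")  # 2.0x
```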

    Supercaps stay pretty cool under high current charging/discharging, but your laptop would have to be the size of a mini fridge.

    The research paper itself was only talking about using the tech for wearable electronics, which tend to be tiny. The article probably made the cars-and-phones connection for SEO. Good tech, bad journalism.

  • I’m an engineer who works in an industrial environment, and I regularly have to repair or reprogram hazardous equipment. Here are a few takeaways I got from the descriptions of the Tesla incident:

    • Lockout/tagout was not being respected. If you don’t have a lock, yank the fuse and stick it in your pocket. But whatever you do, when working on a machine, you must maintain exclusive control so nobody activates it while you’re inside the approach boundary.
    • Why was the engineer in the approach boundary for a “software update”? I feel like I’m missing some important context there.
    • Where were the hazard indicators? A hazardous device needs sound or light indicators, so nobody forgets they left it plugged in.
    • Where was the machine guarding? If it can kill you, entering the hazardous area should shut the machine off with or without LOTO. I’m partial to interlocked gates, but cordons and light curtains are popular for a reason.
    • If the machine guarding was disabled, where were the observers? The last time I activated a machine with the light curtains overridden, I had three other engineers on standby: one at the E-Stop, one with a rescue hook, and one just to watch.

  • Windows 11 needs Secure Boot and/or TPM workarounds, and while Linux is better than it used to be, it still hates peripherals. Only 5% of Americans work in the tech industry. Fry cooks and forklift operators often lack the education needed to find these workarounds, and are too busy and tired from making ends meet to seek out that education.

    In the modern corporate environment, most companies would rather replace their machines wholesale than risk unplanned downtime due to unforeseen glitches. They apply the principles of preventative maintenance to IT.

    I like Linux (Mint is good stuff), and I believe in what it stands for. But the human desire for simplicity, reliability, and familiarity should never be construed as a lack of virtue.


  • Thevenin@beehaw.org to Memes@lemmy.ml: “bit of a hot take”

    Another way to say it is that every movement needs a carrot, a stick, and an ultimatum. The carrot is evangelizing the injustice (MLK), the stick is direct action (Malcolm X), and the ultimatum is an implicit show of force and dedication that demonstrates how many people will resort to the stick if the carrot is not accepted (the March on Washington).

    While I am nearly always in the peaceful outreach camp, I strongly suspect that my efforts will not see fruition until breathless WSJ editorials start describing environmentalists as “dangerous” and “unamerican.”


  • Exactly. It’s a niche, but it’s a legitimate niche. I needed a “portable desktop” that could run games as well as Solidworks simulations, and a gaming laptop was perfect for me.

    It’s a Samsung Series 7 Gamer, and it’s lasted me 11 years so far (yes, you read that right). If I could go back and do anything differently, I would unplug the battery to preserve it for the rare instances when I actually needed it.


  • “It’s absolutely true that the training process requires downloading and storing images.”

    This is the process I was referring to when I said it makes copies. We’re on the same page there.

    I don’t know what the solution to the problem is, and I doubt I’m the right person to propose one. I don’t think copyright law applies here, but I’m certainly not arguing that copyright should be expanded to include the statistical matrices used in LLMs and DPMs. I suppose plagiarism law might apply for copying a specific style, but that’s not the argument I’m trying to make, either.

    The argument I’m trying to make is that while it might be true that artificial minds should have the same rights as human minds, the LLMs and DPMs of today absolutely aren’t artificial minds. Allowing them to run amok as if they were is not just unfair to living artists… it could deal irreparable damage to our culture, because the LLMs and DPMs of today cannot take up the mantle of the artists they edge out or pass down their knowledge to the next generation.


  • It doesn’t change anything you said about copyright law, but current-gen AI is absolutely not “a virtual brain” that creates “art in the same rough and inexact way that we humans do it.” What you are describing is called Artificial General Intelligence, and it simply does not exist yet.

    Today’s large language models (like ChatGPT) and diffusion models (like Stable Diffusion) are statistics machines. They copy down a huge amount of example material, process it, and use it to calculate the most statistically probable next word (or pixel), with a little noise thrown in so they don’t make the same thing twice. This is why ChatGPT is so bad at math and Stable Diffusion is so bad at counting fingers – they are not making any rational decisions about what they spit out. They’re not striving to make the correct answer. They’re just producing the most statistically average output given the input.
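    The “most statistically probable next word, plus a little noise” idea can be sketched with a toy bigram counter (real LLMs use learned neural networks over vast corpora, not raw counts, but the statistical flavor is similar; the corpus and function here are made up for illustration):

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny example corpus,
# then sample the next word in proportion to those counts.
corpus = "the cat sat on the mat and the cat slept".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word, temperature=1.0):
    """Sample a continuation; temperature controls the 'noise'."""
    counts = follows[word]
    words = list(counts)
    weights = [c ** (1.0 / temperature) for c in counts.values()]
    return random.choices(words, weights=weights)[0]

random.seed(0)
print(next_word("the"))  # "cat" is most likely: 2 of 3 observed continuations
```

    The model never decides anything; it just reproduces the statistics of its input, which is why scaling the training set up (or letting it stagnate) matters so much.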

    Current-gen AI isn’t just viewing art, it’s storing a digital copy of it on a hard drive. It doesn’t create, it interpolates. In order to imitate a person’s style, it must make a copy of that person’s work; describing the style in words is insufficient. If human artists (and by extension, art teachers) lose their jobs, AI training sets stagnate, and everything they produce becomes repetitive and derivative.

    None of this matters to copyright law, but it matters to how we as a society respond. We do not want art itself to become a lost art.