• 0 Posts
  • 277 Comments
Joined 2 years ago
Cake day: July 14th, 2023

  • I think the best way to handle this would be to just encode everything and upload all files. If I wanted some amount of history, I’d use some file system with automatic snapshots, like ZFS.

    If I wanted to do what you’ve outlined, I would probably use rclone with filtering for the extension types or something along those lines.

    If I wanted to do this with Git specifically, though, this is what I would try first:

    First, add lossless extensions (*.flac, *.wav) to my repo’s .gitignore
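
    That .gitignore could be as simple as:

    # keep lossless masters out of the repo
    *.flac
    *.wav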

    Second, schedule a job on my local machine (a rough sketch follows the list) that:

    1. Watches for changes to the local file system (e.g., with inotifywait or fswatch)
    2. For any new lossless file, if there isn’t already an accompanying lossy file (i.e., one identified by being co-located, having the exact same filename sans extension, and using an accepted extension, e.g., .mp3 or .ogg - possibly also with a confirmation that the codec is up to my standards via a call to ffprobe, avprobe, mediainfo, exiftool, or something similar), it encodes the file to my preferred lossy format.
    3. Use git status --porcelain to check whether there have been any changes.
    4. If so, run git add --all && git commit --message "Automatic commit" && git push
    5. Optionally, automatically craft a better commit message by checking which files have been changed, generating text like Added album: "Satin Panthers - EP" by Hudson Mohawke or Removed album: "Brat" by Charli XCX; Added album "Brat and it's the same but there's three more songs so it's not" by Charli XCX
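
    Here’s a rough sketch of that local job in shell, assuming inotifywait (from inotify-tools) and ffmpeg are installed, that the library lives at a hypothetical ~/music, and that it only checks for an .ogg counterpart (no codec-quality probe, for brevity):

    #!/usr/bin/env bash
    # Hypothetical library location - adjust to taste.
    LIBRARY="$HOME/music"
    cd "$LIBRARY" || exit 1

    # Watch for new or updated files anywhere under the library.
    inotifywait --monitor --recursive --event close_write,moved_to --format '%w%f' "$LIBRARY" |
    while read -r file; do
        case "$file" in
            *.flac|*.wav)
                lossy="${file%.*}.ogg"
                # Encode only if a co-located lossy copy doesn't already exist.
                if [ ! -f "$lossy" ]; then
                    ffmpeg -nostdin -i "$file" -vn -codec:a libvorbis -qscale:a 6 "$lossy"
                fi
                ;;
            *) continue ;;
        esac

        # Commit and push if anything actually changed.
        if [ -n "$(git status --porcelain)" ]; then
            git add --all && git commit --message "Automatic commit" && git push
        fi
    done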

    Third, schedule a job on my remote server that runs git pull at regular intervals.
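
    On the server, that could be a single cron entry (assuming a hypothetical working copy at /srv/music):

    # crontab -e on the server: pull every 15 minutes
    */15 * * * * cd /srv/music && git pull --ff-only --quiet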

    One issue with this approach is that if you delete a file (as opposed to moving it), the space is not recovered on your local machine or your server. If space on your server is a concern, you could work around that by running something like the answer here (adjusting the depth to an appropriate amount for your use case):

    # fetch only the most recent commit, discarding older history locally
    git fetch --depth=1
    # expire reflog entries that would otherwise keep old objects reachable
    git reflog expire --expire-unreachable=now --all
    # repack and delete the now-unreachable objects
    git gc --aggressive --prune=all
    

    Another potential issue is that what I described above involves an intermediary Git host to push to and pull from, e.g., a hosted Git forge like GitHub, Codeberg, etc. This could result in copyright complaints or something along those lines, though.

    Alternatively, you could use your server as the Git server (or check out Forgejo if you want a Git forge as well), but then you can’t use the above trick to prune file history and save space from deleted files (on the server, at least - you could on your local machine, I think). If you then check out your working copy in a way that lets Git use hard links, you should at least be able to avoid needing to store two copies on your server.

    The other thing to check out, if you take this approach, is git lfs. EDIT: Actually, I take that back - you probably don’t want to use Git LFS.


  • Sure - Wikipedia says it better than I could hope to:

    As English-linguist Larry Andrews describes it, descriptive grammar is the linguistic approach which studies what a language is like, as opposed to prescriptive, which declares what a language should be like.[11]: 25  In other words, descriptive grammarians focus analysis on how all kinds of people in all sorts of environments, usually in more casual, everyday settings, communicate, whereas prescriptive grammarians focus on the grammatical rules and structures predetermined by linguistic registers and figures of power. An example that Andrews uses in his book is fewer than vs less than.[11]: 26  A descriptive grammarian would state that both statements are equally valid, as long as the meaning behind the statement can be understood. A prescriptive grammarian would analyze the rules and conventions behind both statements to determine which statement is correct or otherwise preferable. Andrews also believes that, although most linguists would be descriptive grammarians, most public school teachers tend to be prescriptive.[11]: 26



  • You can run a NAS with any Linux distro - your limiting factor is having enough drive storage. You might want to consider something that’s great at using virtual machines (e.g., Proxmox) if you don’t like Docker, but I have almost everything I want running in Docker and haven’t needed to spin up a single virtual machine.


  • I’d just like to interject for a moment. What you’re referring to as Alpine Linux Alpine Linux is in fact Pine’s fork, Alpine / Alpine Linux Pine Linux, or as I’ve taken to calling it, Pine’s Alpine plus Alpine Linux Pine Linux. Alpine Linux Pine Linux is an operating system unto itself, and Pine’s Alpine fork is another free component of a fully functioning Alpine Linux Pine Linux system.


  • Wow, there isn’t a single solution in here with the obvious answer?

    You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
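
    If you go the DuckDNS route, keeping the record pointed at your home IP can be a single cron job hitting DuckDNS’s update endpoint (the subdomain and token below are placeholders; an empty ip= means “use the caller’s IP”):

    # crontab -e: refresh the DuckDNS record every 5 minutes
    */5 * * * * curl -fsS 'https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=' >/dev/null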

    Then, you can either set up Let’s Encrypt on the device itself and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:

    1. Set up a reverse proxy - I use Traefik, but there are a few other solid options - and configure it to use Let’s Encrypt and your domain name (a rough Docker-based sketch follows this list).
    2. Your reverse proxy should have ports 443 and 80 exposed, but should redirect HTTP requests to HTTPS.
    3. Add Jellyfin as a service and route in your reverse proxy’s config.
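
    Here’s a stripped-down sketch of that setup with plain docker run commands (the domain, email, and paths are placeholders; I’d normally write this as docker-compose, but the options map one-to-one):

    # Shared network so Traefik can reach Jellyfin by container name
    docker network create proxy

    # Traefik terminates TLS on 443, redirects 80 -> 443, and gets certs from Let's Encrypt
    docker run -d --name traefik --network proxy \
      -p 80:80 -p 443:443 \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v "$PWD/letsencrypt:/letsencrypt" \
      traefik:v3.0 \
      --providers.docker=true \
      --providers.docker.exposedbydefault=false \
      --entrypoints.web.address=:80 \
      --entrypoints.websecure.address=:443 \
      --entrypoints.web.http.redirections.entrypoint.to=websecure \
      --entrypoints.web.http.redirections.entrypoint.scheme=https \
      --certificatesresolvers.le.acme.email=you@example.com \
      --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json \
      --certificatesresolvers.le.acme.httpchallenge.entrypoint=web

    # Jellyfin is only routed through the proxy, via these labels
    docker run -d --name jellyfin --network proxy \
      -v "$PWD/jellyfin-config:/config" -v "$PWD/media:/media" \
      -l traefik.enable=true \
      -l 'traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)' \
      -l traefik.http.routers.jellyfin.entrypoints=websecure \
      -l traefik.http.routers.jellyfin.tls.certresolver=le \
      -l traefik.http.services.jellyfin.loadbalancer.server.port=8096 \
      jellyfin/jellyfin

    With exposedbydefault=false, only containers that opt in with a traefik.enable=true label get routed, which keeps anything else you run off the proxy.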

    On your router, forward port 443 to the secure port your Pi exposes (which, for simplicity’s sake, should also be port 443). You likely also need to forward port 80 so Let’s Encrypt’s HTTP challenge can verify your domain.

    If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback (hairpinning), then you can use the server’s IP address and expose Jellyfin’s HTTP port (e.g., 8096) - just make sure not to forward that port on the router. You’ll have local unencrypted transfers if you do this, though.

    Make sure you have secure passwords in Jellyfin. Note that you’re exposed if a Jellyfin or Traefik vulnerability is found, so make sure to keep your software updated.

    If you use Docker, I can share some config info with you on how to set this all up - Traefik, Jellyfin, and a dynamic DNS updater - as docker-compose services.


  • Why should we know this?

    I’m not watching that video, for a number of reasons: ten seconds in, they hadn’t said anything of substance; their first claim was incorrect (Amazon does not prohibit the use of gen AI in books, nor does it require that its use be disclosed to the public, no matter how much you might wish it did); and there was nothing of substance in the description, which in instances like this generally means the video will be largely devoid of substance, too.

    What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?

    Why do we think they were generated with AI?

    When you say “generated with AI,” what do you mean?

    • Generated entirely with AI, without even editing? Then why do they have so many 5 star reviews?
    • Generated with AI and then heavily edited?
    • Written partly by hand with some pieces written by unedited GenAI?
    • Written partly by hand with some pieces written by edited GenAI?
    • AI was used for ideation?
    • AI was used during editing? E.g., Grammarly?
    • GenAI was used during editing? E.g., “ChatGPT, review this chapter and give me any feedback. If sections need to be rewritten, go ahead and take a first pass.”
    • AI might have been used, but we don’t know for sure, and the issue is that some passages just “read like AI?”

    And what’s the result? Are the books misleading in some way? That’s the most legitimate concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).


  • Look up “LLM quantization.” The idea is that each parameter is a number; by default, each uses 16 bits of precision, but if you store them at lower precision you use less space and lose some accuracy, while still keeping the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower and lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)

    If you’re using a 4-bit quantization, then you need VRAM equal to roughly half the parameter count in GB. Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.

    For example, Llama3.3 70B, which has 70.6 billion parameters, has the following sizes for some of its quantizations:

    • q4_K_M (the default): 43 GB
    • fp16: 141 GB
    • q8: 75 GB
    • q6_K: 58 GB
    • q5_K_M: 50 GB
    • q4: 40 GB
    • q3_K_M: 34 GB
    • q2_K: 26 GB
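
    As a rough sanity check on those numbers: size in GB is roughly parameters (in billions) times average bits per weight, divided by 8 (the bits-per-weight figures below are approximations - K-quants store extra scaling data, which is why Q4_K_M lands above a flat 4 bits):

    # size in GB ~= params (billions) * bits per weight / 8
    estimate_gb() { echo "scale=1; $1 * $2 / 8" | bc; }
    estimate_gb 70.6 16     # fp16   -> 141.2 GB
    estimate_gb 70.6 8.5    # q8_0   -> ~75 GB
    estimate_gb 70.6 4.8    # q4_K_M -> ~42 GB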

    This is why I run a lot of Q4_K_M 70B models on two 3090s.

    Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization down to Q6_K (though I have heard this is less true with MoE models). Below Q6, there’s a bit of a drop going to Q5 and then Q4, but the model’s still decent. Below 4-bit quantization, you can generally get better results from a smaller-parameter model at a higher quantization.

    TheBloke on Hugging Face has a lot of GGUF quantization repos, and most, if not all, of them have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
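
    If you do grab a GGUF from Hugging Face, importing it into Ollama is a couple of commands (the repo and file names below are placeholders; huggingface-cli ships with the huggingface_hub Python package):

    # download one specific GGUF file from a (hypothetical) repo
    huggingface-cli download SomeUser/SomeModel-70B-GGUF somemodel-70b.Q6_K.gguf --local-dir .

    # point a Modelfile at it and register it with Ollama
    printf 'FROM ./somemodel-70b.Q6_K.gguf\n' > Modelfile
    ollama create somemodel-70b-q6 -f Modelfile
    ollama run somemodel-70b-q6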



  • It’s a discussion of principle.

    This is a foreign concept?

    It appears to be a foreign concept for you.

    I don’t believe that it’s a fundamentally bad thing to converse in moderated spaces; you do. You say “giving somebody the power to arbitrarily censor and modify our conversation is a fundamentally bad thing” like it’s a fact, indicating you believe this, but you’ve been given the tools to avoid giving others the power to moderate your conversation and you have chosen not to use them. This means that you are saying “I have chosen to do a thing that I believe is fundamentally bad.” Why would anyone trust such a person?

    For that matter, is this even a discussion? People clearly don’t agree with you and you haven’t explained your reasoning. If a moderator’s actions are logged and visible to users, and users have the choice of engaging under the purview of a moderator or moving elsewhere, what’s the problem?

    It is deeply bad that…

    Why?

    Yes, I know, trolls, etc…

    In other words, “let me ignore valid arguments for why moderation is needed.”

    But such action turns any conversation into a bad joke.

    It doesn’t.

    And anybody who trusts a moderator is a fool.

    In places where moderators’ actions are unlogged and they’re not accountable to the community, sure - and that’s true on mainstream social media. Here, moderators are performing a service for the benefit of the community.

    Have you never heard the phrase “Trust, but verify?”

    Find a better way.

    This is the better way.



  • Yes, I know, trolls etc. But such action turns any conversation into a bad joke. And anybody who trusts a moderator is a fool.

    Not just trolls - there’s much worse content out there, some of which can get you sent to jail in most (all?) jurisdictions.

    And even ignoring that, many users like their communities to remain focused on a given topic. Moderation allows this to happen without requiring a vetting process prior to posting. Maybe you don’t want that, but most users do.

    Find a better way.

    Here’s an option: you can code a fork or client that automatically parses the modlog, finds comments and posts that have been removed, and makes them visible in your feed. You could even implement the ability to reply by hosting replies on a different instance or community.

    For you and anyone who uses your fork, it’ll be as though they were never removed.
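
    As a starting point, here’s a hedged sketch of pulling removed-post entries from an instance’s public modlog API (the instance URL is a placeholder, and I’m assuming the stock Lemmy /api/v3/modlog endpoint and its usual response shape):

    # list recently removed posts, with community, title, and the mod's stated reason
    curl -fsS 'https://lemmy.example/api/v3/modlog?type_=ModRemovePost&limit=50' |
      jq '.removed_posts[] | {community: .community.name, title: .post.name, reason: .mod_remove_post.reason}'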

    Do you have issues with the above approach?


  • As a user, you can:

    • Review instance and community rules prior to participating
    • Review the moderator logs to confirm that moderation activities have been in line with the rules
    • If you notice a discrepancy, e.g., over-moderation, you can hold the mods accountable and draw attention to it or simply choose not to engage in that instance or community
    • Host your own instance
    • Create communities in an existing instance or your own instance

    If you host your own instance and communities within that instance, then at that point, you have full control, right? Other instances can de-federate from yours.


  • I recommend a used 3090, as it has 24 GB of VRAM and can generally be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090, and while admittedly more expensive than the inexpensive 24 GB Nvidia Tesla card (the P40?), it also has much better performance and CUDA support.

    I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.


  • Good point!

    If OP is hourly, those 3 hours should be billed as work - probably under a generic HR-related category if one is available.

    If OP is salaried exempt, then this would fall under “doing any work at all” (all that’s needed to be paid for the day) and if sick time is tracked by day and not by hour, then OP doesn’t need to use one. If it’s tracked hourly then OP should make sure to only use 5 sick hours (or less, depending on how long the work-related conversations took) and depending on employer policies may not need to use any sick time at all.

    This also cut into the time OP could have been using to rest. It would be very reasonable for OP to need an extra day to recover, as a result.