

Probably “copying” Apple’s iNoun naming convention?
I’m surprisingly level-headed for being a walking knot of anxiety.
Ask me anything.
Special skills include: Knowing all the “na na na nah nah nah na” parts of the Three’s Company theme.
I also develop Tesseract UI for Lemmy/Sublinks
Avatar by @SatyrSack@feddit.org




Granted, I don’t think the instance-level URL filters were meant to be used for the domains of other instances like I was doing here. They’re more for blocking spam domains, etc.
For example, I also have those spam sites you see in c/News every so often in that block list (e.g. dvdfab [dot] cn, digital-escape-tools [dot] phi [dot] vercel [dot] app, etc.), so I never see/report them because they’re rejected immediately.
During one of the many, many spam storms here, admins wanted those filters to stop anything that matched them from federating in, instead of just changing the text to “removed” on the frontend. So it is a good feature to have; just maybe applied too widely.
Though I think if a user edited their own description to include a widely-blocked URL (no URLs are blocked by default), they’d just be soft-banning themselves from everywhere that has that domain blocked.
If a malicious community mod edited their community’s description to include a widely-blocked URL, then yeah, that could cut off new posts coming into any instance that has that domain blocked (old posts and the community itself would still be available).
All of those would require instances to have certain URLs blocked. The list of blocked URLs for an instance is publicly available from the info in getSite API call, so it wouldn’t be hard to game if someone really wanted to. Fortunately, most people are too busy gaming the “delete account” feature right now 🙄.
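For reference, here’s one way to peek at that public blocklist — a sketch only; the instance name is a placeholder, and the `blocked_urls` field is from the Lemmy 0.19+ getSite response:

```shell
# Pull an instance's getSite response and pick out the blocked URL entries.
# "lemmy.example" is a placeholder instance; grep keeps this jq-free.
curl -s "https://lemmy.example/api/v3/site" | grep -o '"url":"[^"]*"'
```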
The person who cross-posted it was probably definitely from your local instance.
You only ever interact with your local instance’s copy of any community, even remote ones. If the community is on a remote instance that is either offline or has since been de-federated, there’s nothing that prohibits you from interacting with it*. Because lemm.ee is no longer there to federate out the post/comments to any of the community’s subscribers, only people local to your instance will see it.
*Admins can remove the community and, prior to it going offline, mods can lock it. But if an instance just disappears, you can still locally interact with any of its communities on your instance; the content just won’t federate outside your instance.


Lol. I guess now I gotta decide which is more annoying: Not having content from c/Books or having to deal with unwanted spillover from .ml. I don’t have the chutzpah to ask the mods to change the community description lol
Just figured this might catch other people off guard like it did me. I never would have expected the community description to be evaluated for the URL filter (only posts/comments).


I wouldn’t recommend it to anyone in real life. There are parts that are just way too jarring.
Ugh, this. And I hate that it’s like that.
Like, I used to have my instance open to whoever to sign up. My guiding principle was to have a place that wasn’t overrun with [parts that are just way too jarring]. Holy shit was that an impossible goal to do alone so I shuttered it up and now it’s just a private instance / testbed for Tesseract.
My friends knew I was active on Reddit, and that was fine. But I wouldn’t tell them I spend any amount of time here, because what they would see going to almost any random instance will probably definitely not look good on me by association, even though I’m nowhere near that.
So if anyone shares this desire, I am open to un-mothballing my instance, rebranding, and taking on new admins and re-opening to users who also want a place like that.


I’ve seen that and my best guess has always been the initial/self upvote got “lost in the mail” during federation. AFAIK, the post creation and the initial upvote are separate activities that need to federate. Someone correct me if I’m wrong.
If your instance is resolving a post manually that it doesn’t already know about (and it’s not coming in from being subscribed), then it will not get the initial upvote, but I don’t think that’s what you’re referring to here.


Like someone else said: Block the news and politics communities if you like to browse /all. You can always unblock them later.
It was with a heavy heart, but I also blocked silence7@slrpnk.net. Nothing against them, and they post nothing but quality material in what I fully believe to be good faith, but they’re just…too much. The only reason I had to block them individually is that they post in more than just news/politics communities but never go off-brand, only posting news/politics/“everything is a bummer” things. There’s probably a few other people like that, but there shouldn’t be many.
That should just leave you with the few oddball posts where it’s just the people that don’t follow the no news/politics rules.


I haven’t been to Odysee for a good while, but is it still Rumble-lite?
I only learned of Odysee because I saw a video linked to it here and went directly to the video. When I saw it had embed code, I added support in Tesseract UI so the videos would play from the post. Then I went to the main site and saw the front page full of rightwing nutjob rants and vaccine skepticism and was like “nope”. Had I seen that beforehand, I wouldn’t have added embed support, but the work was already done so I left it in. That’s basically why I refuse to add embed support for Rumble.
Wondering if ownership/leadership/policies have changed since about 2 years ago when I wrote the embed components for it and last interacted with it.


I totally get that.
The closest active alternative I can find is !screengrabs@piefed.social but it’s for still images. Maybe if the clip fits the theme there, they’ll allow it?


Only one I can find is !movieclips@lemmy.world but it’s 3 years old and has 0 submissions. Maybe you can revive it? Surprisingly, the mod for it is still active on the platform.
Otherwise, “if you build it, they will come”.


Maybe AI should be more like a parent and simply say “I don’t know. Go read a book, find out, and let me know”.
Pretty sure my mom did know the answer but I learned more by reading a book and telling her what I learned.


We already have robotic peeping Toms, so yeah, robotic house burglars tracks.


I also run (well, ran) a local registry. It ended up being more trouble than it was worth.
Would you have to docker load them all when rebuilding a host?
Only if you want to ensure you bring the replacement stack back up with the exact same version of everything or need to bring it up while you’re offline. I’m bad about using the :latest tag so this is my way of version-controlling. I’ve had things break (cough Authelia cough) when I moved it to another server and it pulled a newer image that had breaking config changes.
For me, it’s about having everything I need on hand in order to quickly move a service or restore it from a backup. It also depends on what your needs are and the challenges you are trying to overcome. For example, when I started doing this style of deployment, I had slow, unreliable, and heavily data-capped internet. Even if my connection was up, pulling a bunch of images was time consuming and ate away at my measly satellite internet data cap. Having the ability to rebuild stuff offline was a hard requirement when I started doing things this way. That’s no longer a limitation, but I like the way this works, so I’ve stuck with it.
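A minimal sketch of the “dump everything for offline restore” idea, assuming Docker Compose v2 (which has `docker compose config --images`); the paths and naming scheme are examples, not my exact layout:

```shell
#!/bin/sh
# Dump every image referenced by the stack's compose file so the stack
# can be brought back up later without pulling anything.
ARCH=$(uname -m)
mkdir -p images
for img in $(docker compose config --images); do
    name=$(basename "${img%%:*}")   # image name without registry path or tag
    tag=${img##*:}                  # tag portion after the colon
    docker save "$img" | gzip -9 > "images/${name}-${tag}-${ARCH}.tar.gz"
done
```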
Everything a service (or stack of services) needs is all in my deploy directory which looks like this:
```
/apps/{app_name}/
    docker-compose.yml
    .env
    build/
        Dockerfile
        {build assets}
    data/
        {app_name}
        {app2_name}     # if there are multiple applications in the stack
        ...
    conf/               # if separate from the app data
        {app_name}
        {app2_name}
        ...
    images/
        {app_name}-{tag}-{arch}.tar.gz
        {app2_name}-{tag}-{arch}.tar.gz
```
When I run backups, I tar.gz the whole base {app_name} folder, which includes the deploy file, data, config, and dumps of its services’ images, and pipe that over SSH to my backup server (rsync also works for this). The only ones I do differently are the ones with in-stack databases that need a consistent snapshot.
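The tar-over-SSH part looks roughly like this — hostnames and paths are placeholders, and `myapp` stands in for any {app_name}:

```shell
#!/bin/sh
# Stream a gzipped tarball of the whole app directory to a backup host.
# "backup-host" and the /backups path are example names.
APP=myapp
tar czf - -C /apps "$APP" | ssh backup-host "cat > /backups/${APP}-$(date +%F).tar.gz"
```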
When I pull new images to update the stack, I move the old images and docker save the now current ones. The old images get deleted after the update is considered successful (so usually within 3-5 days).
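That rotation might look something like this (image names and versions are hypothetical):

```shell
#!/bin/sh
# Set old dumps aside, pull and dump the new version, and only delete the
# old dump once the update has proven itself.
mkdir -p images/old
mv images/myapp-1.2.3-x86_64.tar.gz images/old/
docker compose pull
docker save myapp:1.2.4 | gzip -9 > images/myapp-1.2.4-x86_64.tar.gz
docker compose up -d
# ...a few days later, once the update looks good:
rm images/old/myapp-1.2.3-x86_64.tar.gz
```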
A local registry would work, but you would have to re-tag all of the pre-made images to your registry (e.g. docker tag library/nginx docker.example.com/nginx) in order to push them to it. That makes updates more involved and was a frequent cause of me running 2+ year old versions of some images.
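The re-tag dance for each upstream image is three commands per update; “docker.example.com” is a placeholder registry:

```shell
# Mirror an upstream image into a private registry: pull, re-tag under the
# registry's hostname, then push.
docker pull nginx:1.25
docker tag nginx:1.25 docker.example.com/nginx:1.25
docker push docker.example.com/nginx:1.25
```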
Plus, you’d need the registry server and any infrastructure it needs such as DNS, file server, reverse proxy, etc before you could bootstrap anything else. Or if you’re deploying your stack to a different environment outside your own, then your registry server might not be available.
Bottom line is I am a big fan of using Docker to make my complex stacks easy to port around, backup, and restore. There’s many ways to do that, but this is what works best for me.


Yep. I’ve got a bunch of apps that work offline, so I back up the currently deployed version of the image in case of hardware or other failure that requires me to re-deploy it. I also have quite a few custom-built images that take a while to build, so having a backup of the built image is convenient.
I structure my Docker-based apps into dedicated folders with all of their config and data directories inside a main container directory so everything is kept together. I also make an images directory which holds backup dumps of the images for the stack.
```
docker save {image}:{tag} | gzip -9 > ./images/{image}-{tag}-{arch}.tar.gz
docker load < ./images/{image}-{tag}-{arch}.tar.gz
```
It will back up/restore with the image and tag used during the save step. The load step will accept a gzipped tar, so you don’t even need to decompress it first. My older stuff doesn’t have the architecture in the filename, but I’ve started adding that now that I have a mix of amd64 and arm64.


Technically, yeah. But with fewer people going to Wikipedia directly, there would probably be less chance of getting any new contributors. I’m not sure how the foundation gets all its money, but the more traffic they serve, the more they can prove their relevance, which might matter for funding.
I use the web version rather than the app, but I want to say the app can store the library on the SD card if you have one of sufficient size lying around and if the Redmi has the slot for one. But as someone else said, there are smaller versions you can download if you can’t fit the full one.
Not trying to push Kiwix on you, but I just can’t emphasize enough how handy it is to have offline Wikipedia always on hand.


If she had simply resigned from her position when she began experiencing health issues, it would have allowed her successor to be nominated and approved under a democratic administration.
There’s no guarantee that would have happened. See Mitch McConnell and Merrick Garland.
Yeah. Agrivoltaics is just smart land use. Some crops even grow better in the partial shade.
Atkinson Hyperlegible is my new jam. I’m dyslexic and it helps tremendously even though that’s not its primary goal. It also looks a lot better than OpenDyslexic which I used to use.
Loaded “Hyperlegible” onto my Kobo, the reader app on my phone, and set it as the default font on my desktop environment.
Also added it as an option in Tesseract UI (which I swear I’ll be releasing “soon”).