Just your normal everyday casual software dev. Nothing to see here.

People can share differing opinions without immediately being on opposite sides. Avoid looking at things as black and white: you can like both waffles and pancakes, just like you can hate both waffles and pancakes.

been trying to lower my social presence on services as of late, so I may go inactive at random as a result.

  • 0 Posts
  • 821 Comments
Joined 3 years ago
Cake day: August 15th, 2023

  • they have already done this for at least the last year or so. I noticed it when I requested my data package last year; there’s a clear section in it that infers both your gender and your age.

    They are just acting like this is a “new system,” but in reality it’s a system they have had for a while now; it just didn’t have any public-facing usage.

    For anyone else who wants to see it themselves and has a Discord takeout: it’s located at the very bottom of the events file in activity/analytics.

    The end of the file shows JSON objects that indicate what your predicted age and gender are.

    edit: I found this file isn’t in a static location. It’s still in the activity/analytics directory in one of your event files, but you need to search for “age” or “gender” to find it if it isn’t at the bottom.
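For reference, a quick sketch of how you could scan your own takeout for these objects. The directory layout follows the description above, but the JSON field names are assumptions; check what your own export actually contains:

```python
import json
from pathlib import Path

def find_predictions(takeout_root):
    """Scan every file under activity/analytics for JSON lines that
    mention an "age" or "gender" field. Field names are assumptions;
    adjust them to match your own takeout."""
    hits = []
    analytics = Path(takeout_root) / "activity" / "analytics"
    for events_file in analytics.glob("*"):
        for line in events_file.read_text(errors="ignore").splitlines():
            if '"age"' in line or '"gender"' in line:
                try:
                    hits.append(json.loads(line))
                except json.JSONDecodeError:
                    pass  # skip partial or garbled lines
    return hits
```

If nothing turns up, the field names in your export may differ, and searching the raw text as described above is the reliable fallback.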


  • mmmmm, let’s see here

    for the Pavilion 16…

    $34.99 a month, with no protection plans available, no option to buy it out over time, and a 650 credit score requirement…

    or $979.99 for a one-off purchase with the ability to add a protection plan on top of it (this same laptop is also on sale via HP for $429 if you use their financing system…)

    I don’t see how this is helpful to the general consumer. The typical user doesn’t replace their laptop every year, and at these prices you break even at around 28 months. You would need to upgrade at least every 2.3 years to get your money’s worth. I’ve had the same laptop for 8+ years now, and I don’t know anyone whose laptop failed outside of accidental damage before the 2-year mark. Most system issues appear before the one-year warranty ends, if something was /going/ to happen.

    I can see how this could be helpful for a company that has temp workers… but even then, it’s not like the company couldn’t just re-provision the laptop and give it to the next person.

    I could see this being handier if accidental damage coverage were included, but out of the current offerings it’s just not worth it for anyone.
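The break-even math is easy to check. These numbers come from the quoted offer above, not from any authoritative HP price list:

```python
# Prices as quoted in the comment above; not authoritative HP pricing.
monthly_rental = 34.99
one_off_purchase = 979.99
financed_sale_price = 429.00

break_even_months = one_off_purchase / monthly_rental          # ~28 months
break_even_years = break_even_months / 12                      # ~2.3 years
sale_break_even_months = financed_sale_price / monthly_rental  # ~12 months
```

Against the $429 financed sale price, renting stops making sense after only about a year.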




  • sadly, it’s a little more complex than just enabling it. The supported self-host deployment uses Docker, and the Docker images that are available don’t contain the interfaces for voice or video calling, as they are not up to date.

    If I understand it right, enabling it would mean you need to either pull the source yourself and run it outside of Docker, or make a custom Docker image using a version of stoat web that contains the ability to do voice calls.

    Reading the draft of the linked issue, it looks like the author isn’t including voice calling because they don’t know the proper way to integrate it into the Docker image.

    So to answer it: yes, it looks like you can use voice servers on the current self-hosted model, but you can’t use pre-existing Docker images, and it will require you to manually add the new web UI and patch where needed.






  • The scary part is how it already somewhat is.

    My friend is currently job hunting (or at least considering it) because their company added AI to their flow and it now does everything past the initial issue report.

    The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn’t work -> AI lints the final product once working -> AI submits the patch as a pull request.

    Their job has been downscaled from being the one to organize, assign, and work on code to being an over-glorified code auditor who looks at pull requests and says “yes, this is good” or “no, send this back.”


  • They are very nice. They share kernelspace, so I can understand wanting isolation, but the ability to just throw a base Debian container on, assign it a resource pool and resource allocation, and install a service directly to it, all isolated from everything else, without having to use Docker’s ephemeral-by-design system (which does have its perks, but I hate troubleshooting containers on it) or a full VM, is nice.

    And yes, by Docker file I mean either the Dockerfile or the compose file (usually compose). By straight on the container, I mean directly on the container: my CTs don’t run Docker, period, aside from the one that hosts the primary Docker stack, so I don’t have that layer to worry about on most CTs.

    As for the memory thing, I was just pointing out that Docker does the same thing CTs do if you don’t have enough RAM for what’s been provisioned. The way I read the original post, specifying 2 GB of RAM and then exhausting it would cause corruption and system crashes, which is true, but Docker runs into the same issue when the system exhausts its RAM. That’s all I meant by it. Also, cgroups sound cool; I have to say I haven’t messed with them a whole lot. I wish Proxmox had a better resource-share system where you could designate a specific group as having X amount of max resources, and then have the CTs or VMs draw from those pools.
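That wished-for pool system boils down to simple bookkeeping. Here's a toy sketch of the idea, purely illustrative (Proxmox has no such built-in feature, and every name here is made up):

```python
class ResourcePool:
    """Toy model of a pooled RAM budget that member CTs/VMs draw from.
    Purely illustrative; not a real Proxmox feature."""

    def __init__(self, name, max_ram_mb):
        self.name = name
        self.max_ram_mb = max_ram_mb
        self.members = {}  # vmid -> allocated MB

    def allocate(self, vmid, ram_mb):
        # Refuse an allocation that would push the group past its cap.
        used = sum(self.members.values())
        if used + ram_mb > self.max_ram_mb:
            raise ValueError(f"pool {self.name} exhausted")
        self.members[vmid] = ram_mb

    def free_mb(self):
        return self.max_ram_mb - sum(self.members.values())
```

The appeal is that the cap lives on the group, so individual CTs can be generously provisioned without the group as a whole being able to exhaust the host.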




  • Your statements are surprising to me, because when I initially set this system up I tested against exactly that, since I had figured similarly.

    My original layout was a full Docker environment under a single VM running only Debian 12 with Docker.

    I remember seeing a good 10 GB difference in RAM usage between offloading the machines off the Docker instance onto their own CTs and keeping them all as one unit. I guess this could be chalked up to the Docker container implementation being bad, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win because services had better performance and were easier to manage.



  • as much as I would love this, if it ever did become a thing, what you would see wouldn’t be companies taking the fine; you would see companies “off-branching” and having income reported by a parent company that is contracted to the offending company. In the case of Alphabet, they would likely just migrate the Android division into a contractee that they fully control and whose contract they never terminate. They no longer “own” Android legally; they contract Android to do their bidding. So when it ends up in court, it ends up as a “well, Android did it, not us,” much like how Amazon’s third-party delivery services worked when they tried to enforce unionization laws.




  • are you saying running Docker in a container setup (which at this point would be 2 layers deep) uses fewer resources than 10 single-layer containers?

    I can agree with the statement that a single VM running Docker with 10 containers uses less than 10 CTs each with Docker installed running their own containers (but that’s not what I do, or what I am asking for).

    I currently do use one CT that has Docker installed with all my Docker images (which I wouldn’t do if I had the ability not to, but some apps require Docker), but this removes most of the benefits you get from using Proxmox in the first place.

    One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without needing to actually enter the machine. (For example, if I’m screwing with a server, I can just snapshot the current setup and then roll back if it isn’t good.) Throwing everything into a VM with Docker bypasses that while adding overhead to the system. I would need to back up the compose file (or however you are composing it) and the container, and then make my changes. My current system is one click to make my changes and, if bad, one click to revert.

    For the resource explanation: installing Docker into a VM on Proxmox and then running every container in that does waste resources. You have the resources that Docker itself requires to function (currently 4 GB of RAM per their website, though when testing I’ve seen as low as 1 GB work fine), plus CPU, plus the roughly half a gig of storage it takes up, all inside a VM (which also uses more processing and RAM than CTs do, since it no longer shares resources). Compared to that, 10 CTs fine-tuned to their specific apps will give you better performance than one VM running everything, while keeping your ability to snapshot and removing the extra layer and the ephemeral design that Docker has (this can be a good and a bad thing, but when troubleshooting I lean towards good).

    edit: clarification and general visibility so it wasn’t bunched together.
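As a back-of-envelope illustration of that resource argument, using the figures mentioned above (the per-CT overhead is my own assumption; real numbers vary wildly by workload and host):

```python
# Illustrative only: figures from the comment above plus one assumption.
docker_vm_overhead_gb = 1.0  # Docker engine RAM, low end observed in testing
vm_guest_kernel_gb = 0.5     # assumed extra RAM for the full VM's guest OS
ct_overhead_gb = 0.05        # assumed per-CT overhead (shared host kernel)
num_services = 10

# One VM running Docker pays its overhead once, but pays it in full;
# CTs pay a small amount per service.
vm_total_overhead = docker_vm_overhead_gb + vm_guest_kernel_gb
ct_total_overhead = ct_overhead_gb * num_services
```

Under these assumptions the CT layout comes out ahead on baseline overhead, on top of keeping per-service snapshots, which is the point being argued.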


  • I don’t like how everything is Docker containerized.

    I already run Proxmox, which containerizes things by design with its CTs and VMs.

    Running a Docker image on top of that just wastes system resources (while also complicating the troubleshooting process). It doesn’t make sense to run a CT or VM for a container, just to put Docker on it and run another container via that. It also completely bypasses everything Proxmox provides for snapshotting and backup, because Proxmox’s system covers the entire container, and if all services are running on the same container, all services are going to be snapshotted together.

    My current system allows me to have per-service snapshots (and backups), all within the Proxmox web UI, all containerized, and all restricted to their own resources. Docker is just not needed at this point.

    A Docker system just adds extra overhead that isn’t needed. So yes, just give me a standard installer.