So, just to be clear, you modified the system instructions with the mentioned “Absolute Mode” prompt, and ChatGPT was still so wordy on your account?
Can you share one or two of those questions, so I can counter-check?
Just to give an impression of how the tone changes after applying the above-mentioned custom instructions:
OpenAI aims to make users feel better, catering to the user’s ego at the cost of reducing the usefulness of the service, rather than getting the message across directly. Their objective is to retain more users at the cost of reducing the service’s utility for the user. It is enshittification in a way, from my point of view.
It turns ChatGPT into an emotionless yet very on-point AI, so be aware it won’t coddle your feelings in any way, no matter what you write. I added the instructions to the original post above.
Well I’ve heard Cybertrucks are getting cheap because not many people want them.
Well, such a license could simply obligate anyone to open source any AI model that has been trained on the content. Whether the instance prohibits or allows the training of AI models would be a separate condition that’s up to the instance owner, and its users can decide whether they want to contribute under that condition or not.
Goldman Sachs would not publish it that prominently if it didn’t serve their internal goals. And their intention is certainly not to help the public or their competitors. There are independent studies on some topics that are all well made yet come to opposite conclusions. Investment firms just do what serves them. I wouldn’t trust anything they publish.
There are studies suggesting that the information investment firms publish is not based on what they believe to be true, but on what they want others, including their competitors, to believe to be true. And in many cases, to serve their investment strategy, it benefits them to publish the opposite of what they believe.
If Goldman Sachs said that, then most likely the opposite is true.
I’m surprised how everyone here believes what that capitalist company is saying, just because it fits their own narrative of AI being useless.
Why not ban them in schools? Are they needed for studying?
It’s not helpful because it’s not discussing content but attacking a person’s character. This leads to emotions running high rather than letting your reasoning win the discussion.
When there’s a post about privacy issues, expect alternatives with more privacy to be mentioned. It’s just that big corporations violate users’ privacy so often nowadays; that’s why you see it mentioned that frequently.
120 Hz, dynamically allocated, which means that when you read text or do office work you save energy at a lower frame rate, and when you need higher frame rates for scrolling, movies, or gaming, it automatically increases up to 120 Hz. 120 Hz on a 4K display is something you can’t get from other brands. I have to upgrade from my Lenovo X1 Carbon and would have to buy a completely new laptop just to get more RAM, but would have to downgrade the display, as Lenovo doesn’t offer good display options in their laptops anymore. I’m not going to sacrifice my eyesight to save Lenovo production costs. Fortunately, there is Framework now with their user-oriented approach. And in the future, I won’t have to throw away a perfectly working high-quality display and keyboard just to upgrade RAM, CPU, or ports, as all components can be swapped and independently upgraded on a Framework.
Well, I guess the Framework’s 9 hours of battery life for office work is enough for most use cases. You need to set battery capacity in relation to power consumption, and Framework laptops have great power management with the AMD processors.
That’s a great idea. Can we not apply a license to that social content that forces AI models trained on it to be open source?
I stopped buying their phones when they started trying to get control over the hardware I paid for.
I think this has an effect most people don’t think of: media will just lose its value as a trusted source of information. We’ll simply lose broadcast media as a reliable channel, since anything could be faked. Humanity is back to “word of mouth”, I guess.
I don’t get how software can be in an alpha or beta version and at the same time be called ready for production environments by its developers. It doesn’t make sense on its own terms. In a way it’s a dishonest form of communication, telling us two contradictory things at the same time.
Alpha versions are actually quite severe. Alpha means features can be added or removed, breaking the whole system; it means no upgrade path is provided for database changes; it means new bugs will be introduced by new features. Beta normally means a feature freeze, but the software is still not considered stable enough for production due to bugs and security issues. An RC, a “release candidate”, is almost ready, but you give it a bit more testing time to make sure no critical bugs are left. And only after that do you get a version that is safe for production use.
They are far from a production-ready version, yet they tell us to use their development version as one.
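The stage ordering described above (alpha before beta before release candidate before the final release) is also how Semantic Versioning sorts pre-release tags: any tagged version precedes the untagged final release. A minimal Python sketch, not tied to any particular project’s scheme, with a hypothetical helper `release_key`:

```python
# Rank of each pre-release stage; a missing tag (final release) ranks highest.
STAGE_RANK = {"alpha": 0, "beta": 1, "rc": 2, None: 3}

def release_key(version: str):
    """Turn a version like '1.2.0-beta' into a sortable key: ((1, 2, 0), 1)."""
    core, _, tag = version.partition("-")
    numbers = tuple(int(part) for part in core.split("."))
    stage = tag.split(".")[0] if tag else None
    return (numbers, STAGE_RANK[stage])

versions = ["1.2.0", "1.2.0-rc", "1.2.0-alpha", "1.2.0-beta"]
print(sorted(versions, key=release_key))
# → ['1.2.0-alpha', '1.2.0-beta', '1.2.0-rc', '1.2.0']
```

The point of the ordering is exactly the complaint above: anything carrying an alpha or beta tag sorts, by definition, below the production release.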
You are right. I’ve updated the naming. Thanks for your feedback, very much appreciated.