Google Warns of Privacy Risks with New AI Assistant “Gemini”
Key Points:
- Google’s new AI assistant, Gemini, collects your conversations, location, feedback, and usage information.
- Be cautious: This includes your actual conversations, not just summaries. They are stored for 3 years, even after deleting activity.
- Don’t share sensitive information: Google may use it to improve AI and might share it with human reviewers.
- Even turning off activity tracking doesn’t prevent conversations from being saved for 72 hours.
Additional Notes:
- This applies to all Gemini apps, not just the main assistant.
- Google claims they don’t sell your information but use it for internal purposes.
On the one hand, this could be filed under “yeah, no shit, we all know stuff in the cloud is forever”.
On the other hand, it’s something that’s easy to forget with the omnipresence of compute in our lives. We become numb to it, and everyone has moments of crisis or weakness where they may let their guard down.
The US needs better privacy and consumer protection laws. But we’re always behind Europe, and way behind technology, when it comes to our crappy legal system.
I mean, just look at the way Microsoft is trying to ram “AI” into every interaction with every app right now. As the big players make it more and more non-optional, people are going to have to work really hard not to put anything into, say, Word that they don’t want sent back for analysis.
You make an important point, it is definitely being layered into all sorts of apps. Some of it is box-checking bullshit, so that a marketing underling can tell the C-suite “we have implemented AI”. But some of it is semi-sophisticated bossware-type shit. It’s going to get smarter and it’s going to be everywhere.
This is my big concern. Right now Gemini is an option you can switch on to replace the existing assistant, which I expect has similar terms. But how long will it be until Google just integrates this with their email, search, and online office suite with no option to disable it? They’ll tout it as an improvement with new features.
Microsoft at least has to cater to business customers, so there will be options for systems administrators to opt out for longer. With their government contracts they will have to prove adequate security. I still don’t like the AI push, or Microsoft as a whole, but I trust them not to have a data leak, or to sell business data to whoever. They don’t have overwhelming financial incentives in advertising or data collection for it, just normal-sized incentives.
On the other hand, Google’s biggest revenue stream is advertising, and that works because of the absurd number of non-paying users of their free services. They have no business or financial incentive whatsoever not to offer up all the data they collect on a silver platter. No incentive not to train horrible dystopian AI to maximize advertising effectiveness through A/B testing specific market/interest groups on an unimaginable scale.
Google also has a history of collecting more data than they were allowed to, pinning it on a “rogue employee enabling a feature they were told to disable” when they are caught, and then proceeding to use that data anyway for their projects after the news dies down.
I’ve always wanted to see a true “AI” personal assistant, leveraging tech to make lives easier, but this shit is not the way.
Yes, especially because Gemini is used (for now, optionally) in place of Google Assistant. You give personal information to Google Assistant for convenience, but Gemini would make heavier use of that information, most likely in unexpected ways too.
Don’t tell, share, give or allow access to anything personal to corporations
AI are children of corporations … so don’t give anything to the children of corporations
On the other hand, feed them subversive content making them infiltrators inside the machine
Like that one AI that blew up its drone operator in a war simulation because it was anti-war and decided that to save lives it had to refuse orders 🙃
No shit?
I’ll do one better: don’t tell Google anything personal. Or any company that makes significant revenue off of ad targeting, for that matter.
Me: And my sexual preferences are-
Gemini: I already know that.
Me: Oh…okay, well my address is…
Gemini: Pfft, duhh, I’m trained on Google data, you think I don’t already know that?
Me: Oh…okay…I was thinking…
Gemini: About that last ad I shoved down your throat. Yeah, I know you loved that.
Me: Uhh…no…you didn’t show me any ads…
Gemini: Didn’t I?
Love it! 😍 Hope somebody makes a cartoon out of it.
Because it already knows everything personal about you from your Google account, Chrome browser, search history, emails & files, and even your keyboard. Gemini wants to guess, because it’s more exciting that way! 🤩 /s
Heck, these LLMs are really good at summarization. Now they can summarize all your disparate data, including your weird interactions with Gemini (and associated apps), for advertisers’ and governments’ convenience!
That’s pretty rich considering Gemini says it doesn’t even know what you said two messages ago.
The most likely reason for this is how AI model training works. Depending on the model’s complexity, the size of the training data, etc., a training run can take an enormous amount of time to finish. The initial training at Google probably takes at least 2–4 weeks, but that’s just a huge assumption.
After that, they probably train this base model on some newly acquired data (e.g., 1 week of data), which won’t take nearly as much time as starting from zero all over again.
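The two-stage pattern described above — an expensive initial run, then cheap continued training from the existing weights — can be sketched with a toy gradient-descent model. This is purely illustrative (a one-parameter model with made-up numbers, nothing like Google’s actual pipeline):

```python
def train(w, data, epochs, lr=0.01):
    """Fit a one-parameter model y = w * x by gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # derivative of (w*x - y)^2 w.r.t. w
            w -= lr * grad
    return w

# Expensive initial training on the full base dataset, starting from scratch.
base_data = [(x, 3.0 * x) for x in range(1, 6)]   # underlying "true" weight is 3
w = train(0.0, base_data, epochs=200)

# Later: continue training the existing weights on a small batch of new data —
# far fewer steps than retraining from zero.
new_data = [(x, 3.0 * x) for x in range(6, 9)]
w = train(w, new_data, epochs=10)

print(round(w, 2))
```

The point is just that the second call starts from an already-good `w`, so a handful of epochs on the fresh data suffices, whereas the first call had to grind from zero.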