Several years ago I created a Slack bot that ran something like a Jupyter notebook in a container: it would execute any Python code you sent it and respond with the results. It worked in channels you invited it to as well as in private messages, and if you edited the message containing your code, it would edit its response to match the latest input. It was a fun exercise in learning the Slack API and in building something non-trivial and marginally useful inside Slack. I knew the horrible security implications of such a bot, even with the Python environment containerized, and never considered opening it up beyond my own personal use.
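For context, the architecture I'm describing is roughly this. A minimal sketch using the slack_bolt SDK; the container-execution helper, image choice, and credentials here are hypothetical stand-ins, not my original code:

```python
# Sketch of a Slack bot that executes posted Python in a throwaway
# container and edits its reply when the user edits their message.
import subprocess

from slack_bolt import App

app = App(token="xoxb-...", signing_secret="...")  # hypothetical credentials

# Map a user message's ts -> ts of the bot's reply, so edits can be tracked.
replies: dict[str, str] = {}

def run_in_container(code: str) -> str:
    """Run untrusted Python in a disposable, network-less container."""
    try:
        result = subprocess.run(
            ["docker", "run", "--rm", "--network=none",
             "python:3.12-alpine", "python", "-c", code],
            capture_output=True, text=True, timeout=10,
        )
    except subprocess.TimeoutExpired:
        return "error: execution timed out"
    return result.stdout + result.stderr

@app.event("message")
def handle_message(event, client):
    if event.get("bot_id"):
        return  # ignore the bot's own messages
    subtype = event.get("subtype")
    if subtype == "message_changed":
        # User edited their code: update the earlier reply in place.
        edited = event["message"]
        reply_ts = replies.get(edited["ts"])
        if reply_ts:
            client.chat_update(
                channel=event["channel"],
                ts=reply_ts,
                text=run_in_container(edited["text"]),
            )
    elif subtype is None:
        # New message: execute it and remember the reply so edits can follow.
        resp = client.chat_postMessage(
            channel=event["channel"],
            text=run_in_container(event["text"]),
        )
        replies[event["ts"]] = resp["ts"]

if __name__ == "__main__":
    app.start(port=3000)
```

Even with `--rm` and `--network=none`, the attack surface is exactly the one the rest of this thread is worried about: arbitrary code execution driven by whatever text lands in the channel.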
Looks like the AI companies have decided that exact architecture is perfectly safe and secure as long as you obfuscate the input pathway by routing it through a chatbot. Brilliant.
And so Microsoft decided this wasn't a big enough vulnerability to merit a bounty. Why the fuck would you ever share it with them, then, when you could sell it to a black-hat hacking org for thousands?
I'm sure nothing will go wrong with tons of critical business documents being routed through Copilot for organizations…