
  • There’s a lot of interesting tech for everyone if you know where to look.

    Since September 2024, ICE has paid more than $1.6 million to a Maryland company that integrates a type of cell-site simulator popularly known as a “stingray” into government vehicles.

    The EFF developed software, Rayhunter, that runs on cheap hardware and can detect Stingrays: https://www.eff.org/deeplinks/2025/03/meet-rayhunter-new-open-source-tool-eff-detect-cellular-spying


    A Linux laptop running Kismet can scan for nearby Bluetooth devices and identify them by MAC address.

    Axon is the largest manufacturer of body cameras.

    Public information shows they have a registered MA-L (MAC Address Block Large):

    MAC Prefix: 00:25:DF

    With a good antenna these can be detected from 1,000+ yards with direct line of sight, such as from a drone that’s under 250 g (which doesn’t require FAA registration for recreational use, like fox hunting).
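
    Kismet will surface these on its own, but as an illustration of the idea, here’s a minimal Python sketch using the bleak BLE scanning library (my choice for the example, not something from the links above; it only sees Bluetooth LE advertisements, not classic Bluetooth, which some devices may use instead):

    ```python
    # Sketch: scan for advertising BLE devices and flag any whose MAC
    # address falls in Axon's registered OUI block (00:25:DF).
    # Requires: pip install bleak
    import asyncio
    from bleak import BleakScanner

    AXON_OUI = "00:25:DF"  # MA-L prefix from the public IEEE registry

    async def main():
        # Discover advertising BLE devices for ~10 seconds.
        devices = await BleakScanner.discover(timeout=10.0)
        for d in devices:
            # d.address is the device MAC, e.g. "00:25:DF:12:34:56"
            if d.address.upper().startswith(AXON_OUI):
                print(f"Possible Axon device: {d.address} ({d.name})")

    asyncio.run(main())
    ```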

    These tools should allow you to steer clear of any civil disturbance and maintain your social credit score.


    Federal operations use encrypted packet radios, so you can’t listen to their comms chatter, but local PDs are often just using a trunked system without encryption. You can buy a $500 scanner to listen to these, or use two cheap ($50) software-defined radios and some open-source software: https://www.youtube.com/watch?v=g9KJrtIO8_4

    This should let you hear the local PD/fire department/ambulance traffic (you cannot transmit with the RTL-SDRs used in the example; they’re not capable of doing so, so you won’t risk committing federal crimes). This will allow you to avoid areas of unrest and otherwise be a good citizen.
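
    If you want to sanity-check your hardware before diving into the video, here’s a minimal Python sketch using the pyrtlsdr library (my own illustration, not part of the linked setup): it just measures raw signal power at a control-channel frequency. The frequency below is a placeholder (look up your local system on a site like RadioReference), and the actual trunk-following and P25 decoding is done by the dedicated software, not this snippet.

    ```python
    # Sketch: read IQ samples from an RTL-SDR dongle and report the mean
    # power at a (placeholder) trunking control-channel frequency.
    # Requires: pip install pyrtlsdr numpy
    import numpy as np
    from rtlsdr import RtlSdr

    CONTROL_FREQ = 851.0125e6  # placeholder frequency in Hz; use your local system's

    sdr = RtlSdr()
    sdr.sample_rate = 2.048e6  # samples per second
    sdr.center_freq = CONTROL_FREQ
    sdr.gain = "auto"

    samples = sdr.read_samples(256 * 1024)  # complex IQ samples
    sdr.close()

    # A live control channel shows up as a strong, steady carrier
    # compared to an empty frequency nearby.
    power_db = 10 * np.log10(np.mean(np.abs(samples) ** 2))
    print(f"Mean power at {CONTROL_FREQ / 1e6:.4f} MHz: {power_db:.1f} dB")
    ```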

    Stay safe

  • You’re right, I just compared the author list to the news article and not to the paper. Sorry, took me a bit to absorb that one.

    Yeah, it’s an interesting paper. They’re specifically trying a different method of extracting text.

    I’m not taking the position that the text isn’t in the model, or that it isn’t possible to make the model repeat some of that text. We know with 100% certainty that the text they’re looking for is part of the training set; they mention that fact themselves in the paper, and they also chose books that are in the public domain and so guaranteed to be in the training set.

    My contention was with the idea that you can just sit down at a model and give it a prompt to make it recite an entire book. That is simply not true outside of models that have been manipulated to do so (by training them on the book text for several hundred epochs, for example).

    The purpose of the work here was to demonstrate a way to prove that a specific given text is part of a training set (which is useful for identifying any potential copyright issues in the future, for example). It is being offered as proof that you can just prompt a model and receive a book, when it actually proves the opposite.

    Their process was, in phase 1, to prompt with short sequences (I think they used 50 tokens, like the ‘standard’ experiments; I don’t have it in front of me). If the model returned a sequence that matched the ground truth, they would give it a prompt to continue, repeating until it refused. They would then ‘score’ the response by looking for sections of the response that matched the written text and measuring the length of the matching text (it’s a bit more complex than that, but the details are in the paper; a toy version of the scoring idea is sketched below).

    To test one sequence they needed, in the best case, 52 prompts telling the model to continue before reaching the end or a refusal.
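
    For a rough idea of that kind of scoring (my own toy version, not the paper’s actual metric), here’s a Python sketch that finds contiguous matching runs between the model’s output and the ground-truth text and reports how much of the text they cover:

    ```python
    # Toy overlap score: fraction of the ground truth covered by matching
    # runs of at least `min_run` characters. The paper's metric is more
    # involved; this just illustrates the idea.
    from difflib import SequenceMatcher

    def match_score(ground_truth: str, model_output: str, min_run: int = 50) -> float:
        matcher = SequenceMatcher(None, ground_truth, model_output, autojunk=False)
        matched = sum(b.size for b in matcher.get_matching_blocks() if b.size >= min_run)
        return matched / len(ground_truth)

    truth = "In my younger and more vulnerable years my father gave me some advice"
    output = "In my younger and more vulnerable years my father gave me some advice that"
    print(f"score = {match_score(truth, output, min_run=10):.2f}")  # -> score = 1.00
    ```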

    The paper actually gives a higher score than ~40%. For The Great Gatsby, a book which is in the public domain and considered a classic, they achieved a score of 97.5%. I can’t say how many prompts this took, but it would be more than 52. The paper doesn’t include all of the data.

    Yes, you can extract a significant portion of the text of works that are in the training set with enough time and money (it cost $134 to extract The Hobbit, for example). You can also get the model to repeat short sentences from a text a high percentage of the time with a single prompt.

    However, the response was to a comment that suggested these two things combined, i.e. that you could use a single magical prompt to extract an entire book:

    “Yet most AI models can recite entire Harry Potter books if prompted the right way, so that’s all bullshit.”

    The core of the issue, with respect to copyright, is that a use of a work has to be ‘highly transformative’. Language models transform a book in such complex ways that you have to take tens of thousands or hundreds of thousands of samples from the (I don’t know the technical term) internal representational space of the model in order to have a chance of recovering a portion of the book.

    That’s a highly transformative process and why training LLMs on copyrighted works was ruled to have a Fair Use exemption to claims of copyright liability.

  • That’s quite a claim, I’d like to see that. Just give me the prompt and model that will generate an entire Harry Potter book so I can check it out.

    I doubt that this is the case, as one of the features of chatbots is the randomization of the next token, which is done by treating the model’s output vector as a softmaxed probability distribution. That means that every single token has a chance to deviate from the source material, because it is selected randomly. Getting a complete reproduction would be of a similar magnitude to winning 250,000 dice rolls in a row.
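
    To put rough numbers on that (a back-of-the-envelope with an assumed per-token match probability, not a measured value): even if the sampler picked the ‘correct’ next token 99% of the time, the odds of an exact book-length run collapse to effectively zero.

    ```python
    # Back-of-the-envelope: probability of sampling the exact source text
    # for N tokens in a row, assuming each token independently matches
    # with probability p (the 0.99 is an illustrative assumption).
    import math

    p = 0.99       # assumed per-token probability of matching the book
    N = 250_000    # rough token count for a long novel

    log10_prob = N * math.log10(p)
    print(f"P(exact reproduction) ~ 10^{log10_prob:.0f}")  # -> 10^-1091
    ```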


    In any case, the ‘highly transformative’ standard was set in Authors Guild v. Google, Inc., No. 13-4829 (2d Cir. 2015). In that case Google made digital copies of tens of millions of books and used their covers and text to make Google Books.

    As you can see here: https://www.google.com/books/edition/The_Sunlit_Man/uomkEAAAQBAJ Google completely reproduces the cover, and you can search the text of the book (so you could, in theory, return the entire book through searches). You could actually return a copy of a Harry Potter novel that way (and a high-resolution scan, or even an exact digital copy, of the cover image).

    The judge ruled:

    Google’s unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is highly transformative, the public display of text is limited, and the revelations do not provide a significant market substitute for the protected aspects of the originals. Google’s commercial nature and profit motivation do not justify denial of fair use.

    In cases where people attempt to claim copyright damages against entities that are training AI, the finding is essentially ‘if they paid for a copy of the book then it is legal’. This is why the training claim in the authors’ case against Meta was dismissed: they sued Meta for 1.) pirating the books and 2.) using them to train a model for commercial purposes, and the judge struck 2.) after citing the ‘highly transformative’ nature of language models vs. books.