I don’t understand why browsers support this “functionality”.
The soldier has a blank shoulder patch even in the original photo. Odd.
The railing in the photo has blue and yellow stripes, which is unlikely in Russia, but I don’t see anything about the soldier himself that makes him obviously Ukrainian. (Maybe experts can distinguish by camo patterns?) The comments in Russian on that Reddit thread are ridiculing the use of this photo on a Russian poster but provide no further information.
I can’t find the source of the photo, although I did find artistic interpretations of it from both the Ukrainian and Russian sides, with the corresponding patches on the soldier’s shoulder.
But note that the image on the billboard has the patch on the soldier’s shoulder replaced with a gray rectangle. (It’s easier to see in the full-size image.) Someone didn’t like the soldier’s nationality…
I understand that that’s the intent. The problem is the methodology, which is as I said just multiplication by five. Calling it a gold standard implies that there’s actually some sophisticated analysis going on, and there isn’t.
The “gold standard in the field” is apparently to multiply the Hamas numbers by five. I’m not kidding. That’s where the 186,000 number comes from. This is low-effort bullshit.
Edit: Also this article is just wrong about what the 335,500 number is claimed to be. It is what you get if you extrapolate the 186,000 number to the end of the year, not to September.
always chooses the cheapest option
quality isn’t great
Capitalism is to blame!
I don’t understand why the father would just confess like that, but I suppose I shouldn’t expect good judgement from him.
FourPacketsOfPeanuts has already given a good answer specifically about Israel’s situation, but I want to say something about international law in general. Law may be written based on moral principles, but law is still not the same thing as morality. In our daily lives, we follow our moral principles because that’s what we believe is right, and we follow the law because otherwise cops will put us in jail.
The situation for a sovereign country is different - there are no cops and there is no jail. If other countries wanted to take hostile action, they would even if there was no violation of international law, and if they did not want to take hostile action, they wouldn’t even if there was a violation. Morality still exists (although morality at the scale of countries is necessarily not the same as morality at the scale of individuals) but the law might as well not exist because it is not enforced. It’s just pretty language that may be quoted when a country does what it was going to do anyway.
I’m not trying to imply that I think that Israel is violating international law. I’m saying that discussing whether it is or not is a purely intellectual exercise with no practical relevance. If I support Israel but you convince me that it is technically breaking some law, I’m still not going to change my mind. If you oppose Israel but I convince you that it is technically obeying every law to the letter, you’re still probably not going to change your mind.
So far “more data” has been the solution to most problems, but I don’t think we’re close to the limit of how much useful information can be learned from the data even if we’re close to the limit of how much data is available. Look at the AIs that can’t draw hands. There are already many pictures of hands from every angle in their training data. Maybe just having ten times as many pictures of hands would solve the problem, but I’m confident that if that was not possible then doing more with the existing pictures would also work.* Algorithm design just needs some time to catch up.
*I know that the data that is running out is text data. This is just an analogy.
Not really questionable - hospitals explicitly lose their protection if they are used for military activity.
What occasions are you referring to? I know people claim that Israeli use of white phosphorus munitions is illegal, but the law is actually quite specific about what an incendiary weapon is. Incendiary effects caused by weapons that were not designed with the specific purpose of causing incendiary effects are not prohibited. (As far as I can tell, even the deliberate use of such weapons in order to cause incendiary effects is allowed.) This is extremely permissive, because no reasonable country would actually agree not to use a weapon that it considered effective. Something like the firebombing of Dresden is banned, but little else.
Incendiary weapons do not include:
(i) Munitions which may have incidental incendiary effects, such as illuminants, tracers, smoke or signalling systems;
(ii) Munitions designed to combine penetration, blast or fragmentation effects with an additional incendiary effect, such as armour-piercing projectiles, fragmentation shells, explosive bombs and similar combined-effects munitions in which the incendiary effect is not specifically designed to cause burn injury to persons, but to be used against military objectives, such as armoured vehicles, aircraft and installations or facilities.
The issue I have with referring to the current situation as a bubble is that this isn’t just hype. The technology really is amazing, and far better than what people had been expecting. I do think that most current attempts to commercialize it are premature, but there’s such a big first-mover advantage that it makes sense to keep losing money on attempts that are too early in order to succeed as soon as it is possible to do so.
Multiple studies are showing that training on data contaminated with LLM output makes LLMs worse, but there’s no inherent reason why LLMs must be trained on this data. As you say, people are aware of it and they’re going to be avoiding it. At the very least, they will compare the newly trained LLM to their best existing one and if the new one is worse, they won’t switch over. The era of being able to download the entire internet (so to speak) is over but this means that AI will be getting better more slowly, not that it will be getting worse.
I don’t disagree, but before the recent breakthroughs I would have said that AI is like fusion power in the sense that it has been 50 years away for 50 years. If the current approach doesn’t get us there, who knows how long it will take to discover one that does?
It would be odd if AI somehow got worse. I mean, wouldn’t they just revert to a backup?
Anyway, I think (1) is extremely unlikely but I would add (3) the existing algorithms are fundamentally insufficient for AGI no matter how much they’re scaled up. A breakthrough is necessary which may not happen for a long time.
I think (3) is true but I also thought that the existing algorithms were fundamentally insufficient for getting to where we are now, and I was wrong. It turns out that they did just need to be scaled up…
According to this article:
The NEL was an offshoot of an ongoing digital lending project called the Open Library, in which the Internet Archive scans physical copies of library books and lets people check out the digital copies as though they’re regular reading material instead of ebooks. The Open Library lent the books to one person at a time—but the NEL removed this ratio rule, instead letting large numbers of people borrow each scanned book at once.
It sounds like what you’re describing is what they were doing before they did the thing for which they got sued.
As for AI, I think that in general using a copyrighted work to train an AI is a transformative use and therefore that it is permitted by law. Specific instances in which an AI outputs copyrighted text without any transformative modifications may still be copyright infringement. They may also be fair use, in the way that copying a short excerpt from a longer document is fair use. I’m not a lawyer.
Anyway, if the courts rule against the AI companies, the enforcement of such a ruling would be disastrous for the ability of American companies to compete with international rivals who would still freely use the training data that American companies would no longer have access to. A law would be (or at least should be) passed to prevent that, although the tech companies might end up paying some nominal fee.
The important thing here isn’t that the AI is worse than humans. It’s that the AI is worth comparing to humans. Humans stay the same while software can quickly improve by orders of magnitude.
This is what international law has to say about incendiary weapons:
- It is prohibited in all circumstances to make the civilian population as such, individual civilians or civilian objects the object of attack by incendiary weapons.
- It is prohibited in all circumstances to make any military objective located within a concentration of civilians the object of attack by air-delivered incendiary weapons.
- It is further prohibited to make any military objective located within a concentration of civilians the object of attack by means of incendiary weapons other than air-delivered incendiary weapons, except when such military objective is clearly separated from the concentration of civilians and all feasible precautions are taken with a view to limiting the incendiary effects to the military objective and to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects.
- It is prohibited to make forests or other kinds of plant cover the object of attack by incendiary weapons except when such natural elements are used to cover, conceal or camouflage combatants or other military objectives, or are themselves military objectives.
This treeline is clearly not located within a concentration of civilians and it is concealing (or plausibly believed to be concealing) enemy combatants and therefore the use of incendiary weapons is unambiguously legal.
The fact that it won’t have any record of calls I missed while the phone was off or didn’t have reception, although actually that’s probably the fault of the service provider. They can send me texts I missed. Why can’t they send me a list of missed calls?