All of the apps leak intimate data such as sexual orientation or precise locations.
I thought that was the point?
"There are presumably companies in Europe saying that the AI Act doesn't offer enough legal certainty to keep going." The worry is that this could mean only large US corporations actually stand a chance in the AI race.
There never was a chance to take part in this "race". There are only 2 questions:
This is so that famous people and their heirs can get more free money.
The only thing this does for ordinary people is make them poorer.
This is a brutally dystopian law. Forget the AI angle and turn on your brain.
Any information will get a label saying who owns it and what can be done with it. Tampering with these labels becomes a crime. This is the infrastructure for the complete control of the flow of all information.
Oh! Look how happy those little girls are! Probably because they just learned that Stay-At-Home-Mom isn't the only acceptable career for a woman.
The FTC is worried that the big tech firms will further entrench their monopolies. They are doing a lot of good stuff lately; an underappreciated boon of the Biden Presidency. Lina Khan looks to be really set on fixing decades of mistakes.
I guess they just want to know if these deals lock out potential competitors.
The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.
Although Nvidia is now arguably the main beneficiary of the growing interest in AI, the company's CEO, Jensen Huang, does not believe that additional trillions of dollars need to be invested in the industry.
*Because of
You heard it, guys. There's no need to create competition to Nvidia's chips. It's perfectly fine if all the profits go to Nvidia, says Nvidia's CEO.
That was an annoying read. It doesn't say what this actually is.
It's not a new LLM. Chat with RTX is specifically software for doing inference (i.e., running LLMs) at home, using the hardware acceleration of RTX cards. There are several projects that do this, though they might not be quite as optimized for NVIDIA's hardware.
Go directly to NVIDIA to avoid the clickbait.
Chat with RTX uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration to bring generative AI capabilities to local, GeForce-powered Windows PCs. Users can quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers.
Source: https://blogs.nvidia.com/blog/chat-with-rtx-available-now/
Download page: https://www.nvidia.com/en-us/ai-on-rtx/chat-with-rtx-generative-ai/
Currently, "AI" in practice means artificial neural networks (ANNs). That's only one specific approach. What an ANN boils down to is one huge system of equations.
The file stores the parameters of these equations, organized as what math calls matrices. A parameter is simply a number by which something is multiplied. Colloquially, such a file of parameters is called an AI model.
2 GB is probably an AI model with about 1 billion parameters at 16-bit precision (2 bytes per parameter). Precision is how many digits you have; the more digits, the more precisely you can represent a value.
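The size arithmetic works out like this; a minimal sketch (the function name is my own, not from any library):

```python
# Back-of-the-envelope: file size = parameter count x bytes per parameter.
def model_file_size_gb(num_params: int, bits_per_param: int) -> float:
    """Approximate size of a raw parameter file in gigabytes (1 GB = 10**9 bytes)."""
    bytes_total = num_params * bits_per_param / 8  # 8 bits per byte
    return bytes_total / 1e9

# 1 billion parameters at 16-bit (2-byte) precision -> about 2 GB
print(model_file_size_gb(1_000_000_000, 16))  # 2.0
```

The same arithmetic also explains why "quantized" models (8-bit or 4-bit parameters) take half or a quarter of the space.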
When people talk about training an AI, they mean finding the right parameters, so that the equations compute the right thing. The bigger the model, the smarter it can be.
Does that answer the question? It's probably missing a lot.
Explanation of how this works.
These "AI models" (meaning the free and open Stable Diffusion in particular) consist of different parts. The important parts here are the VAE and the actual "image maker" (U-Net).
A VAE (Variational AutoEncoder) is a kind of AI that can be used to compress data. In image generators, a VAE is used to compress the images. The actual image AI only works on the smaller, compressed image (the latent representation), which means it needs a less powerful computer (and uses less energy). That is what makes it possible to run Stable Diffusion at home.
This attack targets the VAE. The image is altered so that the latent representation is that of a very different image, but still roughly the same to humans. Say, you take images of a cat and of a dog. You put both of them through the VAE to get the latent representation. Now you alter the image of the cat until its latent representation is similar to that of the dog. You alter it only in small ways and use methods to check that it still looks similar for humans. So, what the actual image maker AI "sees" is very different from the image the human sees.
Obviously, this only works if you have access to the VAE used by the image generator. So, it only works against open source AI; basically only Stable Diffusion at this point. Companies that use a closed source VAE cannot be attacked in this way.
I guess it makes sense if your ideology is that information must be owned and everything should make money for someone. I guess some people see a cyberpunk dystopia as a desirable future. I wonder if it bothers them that all the tools they used are free (e.g., the method for checking that images still look similar to humans).
It doesn't seem to be a very effective attack, but it may have some long-term PR effect. Training an AI costs a fair amount of money. People who give that away for free probably still have some ulterior motive, such as being liked. If instead you get the full hate of a few anarcho-capitalists who threaten digital vandalism, you may be deterred. Well, my two cents.
Huh? The data protection authorities hand out fines all the time. Anyone can sue, too. The Google Fonts ruling happened because some private individual sued. After that came a wave of cease-and-desist letters.