I'm having trouble understanding why disinformation produced by an AI is more of a problem than that produced by a person. Sure, theoretically it can be made to scale a lot more--though I would point out AI is not, at the moment, light on resources either. But it's unclear to me to what extent that makes a difference.
I don't believe the content itself will be any more of an issue than human-generated misinformation. The main issue I see is that a single person can now achieve this on a large scale without ever leaving their mom's basement and at a much lower cost. It's the concentration of power that I find concerning.
This is an unfortunate future. Unless something is done fast, the majority of content on the internet will simply be generated content with bots interacting with other bots.
Unless we only allow users who verify their identity to participate on certain websites, I can't see how else you could solve this problem.
Even then, some bad actors with a verified identity could be generating content using AI and posting it as their own.
I'm not even sure how anyone will be able to trust or believe any photo, video, or written idea online in the next 5 to 10 years.
I think the idea with a verified identity is that each person only gets one. If that is the case and you find misinformation from them, it's easy to block the one account. It's not so easy to block if there are thousands of accounts made by the same person.
I don't know how you would be able to enforce a one ID per person limit though. Government identification requires trust in the government and/or in the entity verifying your identity, and it requires your government to provide useful identification in the first place. Phone numbers don't work because a single person can acquire multiple numbers, many people have none, and numbers get transferred to different people.
I think the basic solution is education: we need to teach our children critical thinking. Generative AI is just one more source of misinformation, like pseudoscience disguised as real science (fake papers, manipulated data, ...). It's not good that teenagers believe something is true only because it's on the internet (blogs, YouTube, etc.)
Definitely, critical thinking abilities are something we're sorely lacking as a society. I don't think this is purely an education problem though. Thinking takes a lot of time and energy, both of which are scarce when you're spending it all on just trying to survive.
However, critical thinking would only help for things like scientific claims. If someone tells you "Bob from two states over ate a burger for lunch on June 19th", no amount of critical thinking can help you figure out whether what you read is true or not. It's an asinine example, but I think you can imagine a more serious lie that's equally impervious to critical thinking.