I think you may be mixing a couple of things together, but I'll take a crack at this.
When you get an AI-generated response from a search engine, this is usually a modified RAG (retrieval augmented generation) approach. How this works is that the content from web pages is pre-processed ahead of time into embeddings (numerical representations of the text). When you perform a search, your search text is turned into an embedding and compared (by numerical similarity) against the page embeddings to find the content most related to your query. That means the LLM only parses and processes a very small subset of the returned websites to generate its response.
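To make that concrete, here's a minimal sketch of just the retrieval step, assuming the sentence-transformers library is available; the model name, page snippets, and query are made-up examples, not what any real search engine actually uses:

```python
# Toy sketch of the retrieval step in RAG.
# Assumes the sentence-transformers library; snippets and model name are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Pre-processing: page content is chunked and embedded ahead of time.
page_chunks = [
    "The Eiffel Tower is 330 metres tall and located in Paris.",
    "Python lists are dynamic arrays under the hood.",
    "RAG systems retrieve relevant documents before calling the language model.",
]
chunk_vectors = model.encode(page_chunks, normalize_embeddings=True)

# Query time: embed the search text and rank chunks by cosine similarity.
query = "how does retrieval augmented generation work"
query_vector = model.encode([query], normalize_embeddings=True)[0]
scores = chunk_vectors @ query_vector          # cosine similarity (vectors are normalized)
top = np.argsort(scores)[::-1][:2]             # keep only the best couple of chunks

# Only these few chunks (not the whole web pages) go into the LLM prompt.
context = "\n".join(page_chunks[i] for i in top)
print(context)
```

The point is that the expensive LLM never sees the full pages, just the handful of chunks that scored highest against your query.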
Another element you might be asking about is how agentic AI systems handle larger tasks (things like OpenClaw). That is a bit more complicated and dependent on the system's design, but it basically boils down to two things. First, the "reasoning models" break a big task into smaller sub-tasks, so the LLM only has to worry about one piece of the larger task at a time. Second, a lot of these systems periodically merge all past context into a compressed state that the LLM can handle (basically summaries of summaries), or add it to a database for future/faster reference.
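Here's a rough sketch of that second idea ("summaries of summaries"). This isn't any particular product's implementation; call_llm is a hypothetical placeholder for whatever model API the system actually uses, and the character budget stands in for a real token limit:

```python
# Toy sketch of context compression: when accumulated history gets too long,
# collapse it into a summary so the model's context window never overflows.

MAX_CHARS = 4000  # stand-in for a real token budget


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a summarization request)."""
    return prompt[:500]  # fake "summary": just truncate for the sketch


class AgentMemory:
    def __init__(self) -> None:
        self.summary = ""             # compressed history so far
        self.recent: list[str] = []   # raw recent steps

    def add(self, step_result: str) -> None:
        self.recent.append(step_result)
        if len(self.summary) + sum(len(s) for s in self.recent) > MAX_CHARS:
            # Merge the old summary plus recent steps into a new, smaller summary.
            self.summary = call_llm(
                "Summarize for future reference:\n"
                + self.summary + "\n" + "\n".join(self.recent)
            )
            self.recent.clear()

    def context(self) -> str:
        # What actually gets sent to the model: compressed past + raw recent steps.
        return self.summary + "\n" + "\n".join(self.recent)
```

Real systems are fancier (vector databases, tool logs, etc.), but the core trick is the same: keep feeding the model something that fits, even as the total history grows.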
At the end of the day, your understanding of the limits of LLMs is correct. All the progress we've really seen with LLMs over the past couple of years has been the creation of systems that work around their limitations. The base technology isn't getting much better, but the support around it is.