Technology

A tech news sub for communists

The article is a great critique of how what the author calls the "Efficiency Lobby" has been pursuing a narrow idea of task-oriented intelligence focused on productivity. That focus, driven by corporate interests, necessarily leads to individualistic consumption of AI services, hindering genuine creativity, open-ended exploration, and collaboration.

A recent paper introduces MemOS, which has the potential to create a truly collaborative and community-driven foundation for AI. It proposes a new approach to memory management for LLMs, treating memory as a governable system resource.

It uses the concept of MemCubes, which encapsulate both semantic content and critical metadata like provenance and versioning. MemCubes are designed to be composed, migrated, and fused over time, unifying three distinct memory types: plaintext, activation, and parameter memories.
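
To make that concrete, here's a minimal sketch of what a MemCube might look like as a data structure, based only on the paper's description. The field names and the toy fuse operation are my own assumptions, not the actual MemOS schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

# The three memory types the paper describes a MemCube unifying.
MemoryKind = Literal["plaintext", "activation", "parameter"]

@dataclass
class MemCube:
    kind: MemoryKind            # which memory type this cube holds
    payload: bytes              # semantic content: text, KV-cache tensors, or a weight patch
    provenance: str             # who or what produced this memory
    version: int = 1
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    parents: list[str] = field(default_factory=list)  # cubes this one was composed from

def fuse(a: MemCube, b: MemCube) -> MemCube:
    """Toy fusion of two plaintext cubes into a new, versioned cube."""
    assert a.kind == b.kind == "plaintext", "only plaintext fusion is sketched here"
    return MemCube(
        kind="plaintext",
        payload=a.payload + b"\n" + b.payload,
        provenance=f"fused({a.provenance}, {b.provenance})",
        version=max(a.version, b.version) + 1,
        parents=[a.provenance, b.provenance],  # real IDs would come from content hashes
    )
```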

This architecture directly addresses the limitations of stateless LLMs, enabling long-context reasoning, continual personalization, and knowledge consistency. The paper proposes a mem-training paradigm in which knowledge evolves continuously through explicit, controllable memory units, blurring the line between training and deployment and paving the way to extend data parallelism to a distributed intelligence ecosystem.
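
Under that paradigm, a "mem-training" step could be as simple as committing a new, versioned cube rather than running a gradient update. Reusing the MemCube sketch above (again, the store and key names are illustrative, not from the paper):

```python
def mem_train_step(store: dict[str, MemCube], key: str,
                   new_fact: str, source: str) -> MemCube:
    """Record new knowledge as a fresh cube version instead of a weight update."""
    old = store.get(key)
    cube = MemCube(
        kind="plaintext",
        payload=new_fact.encode(),
        provenance=source,
        version=old.version + 1 if old else 1,  # explicit, auditable version history
        parents=[key] if old else [],           # link back to what it supersedes
    )
    # Overwrite the live version; a real system could retain history for rollback.
    store[key] = cube
    return cube
```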

It would be possible to build a decentralized network where there's a common pool of MemCubes acting as shareable and composable containers of memory, akin to a BitTorrent for knowledge. Users could contribute their own memory artifacts such as structured notes, refined prompts, learned patterns, or even "parameter patches" encoding specialized skills that are encapsulated within MemCubes.
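
A content-addressed pool, in the spirit of BitTorrent, would be one way to make cubes shareable and verifiable: a cube's ID is the hash of its payload, so any peer can check what it fetched. A rough sketch building on the MemCube structure above (the pool API is hypothetical, and a real network layer would replace the in-memory dict with peer discovery and transfer):

```python
import hashlib

def cube_id(cube: MemCube) -> str:
    """Content address: the cube's identity is the hash of its payload."""
    return hashlib.sha256(cube.payload).hexdigest()

class LocalPool:
    """A single peer's view of the shared MemCube pool."""

    def __init__(self) -> None:
        self._cubes: dict[str, MemCube] = {}

    def publish(self, cube: MemCube) -> str:
        cid = cube_id(cube)
        self._cubes[cid] = cube
        return cid  # share this ID so other peers can request the cube

    def fetch(self, cid: str) -> MemCube | None:
        cube = self._cubes.get(cid)
        # Verify integrity before trusting a cube that came over the network.
        if cube is not None and cube_id(cube) != cid:
            return None
        return cube
```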

Using a common infrastructure would allow anyone to share, remix, and reuse these building blocks in all kinds of ways. Such an architecture would directly address Morozov's critique of privatized "stonefields" of knowledge, instead creating a truly public digital commons.

This distributed platform could effectively amortize computation across the network, similar to projects like SETI@home. Instead of constantly recomputing information, users could build out a local cache of MemCubes relevant to their context from the shared pool. If a particular piece of knowledge or a specific reasoning pattern has already been encoded and optimized within a MemCube by another user, it can simply be reused, dramatically reducing redundant computation and accelerating inference.
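
The lookup logic could be as simple as cache-first, pool-second, compute-last. A sketch using the hypothetical pool above (the index mapping queries to known cube IDs is an assumption about how discovery might work):

```python
def expensive_reasoning(query: str) -> bytes:
    """Placeholder for a costly LLM inference or reasoning run."""
    return f"derived answer for {query}".encode()

def get_or_compute(query_key: str, cache: dict[str, MemCube],
                   pool: LocalPool, index: dict[str, str]) -> MemCube:
    # 1. Local cache hit: free.
    if query_key in cache:
        return cache[query_key]
    # 2. Another peer already encoded this knowledge: fetch instead of recompute.
    cid = index.get(query_key)
    if cid and (cube := pool.fetch(cid)):
        cache[query_key] = cube
        return cube
    # 3. Fall back to expensive local computation, then share the result.
    cube = MemCube(kind="plaintext", payload=expensive_reasoning(query_key),
                   provenance="local-compute")
    index[query_key] = pool.publish(cube)
    cache[query_key] = cube
    return cube
```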

The inherent reusability and composability of MemCubes make possible a collaborative environment in which every user both contributes to and benefits from the shared pool. Efforts like Petals, which already facilitate distributed inference of large models, could be extended to leverage MemOS for sharing dynamic and composable memory.

This has the potential to transform AI from a tool for isolated consumption to a medium for collective creation. Users would be free to mess about with readily available knowledge blocks, discovering emergent purposes and stumbling on novel solutions.

The New York Crimes panicking over China's EV dominance and calling for a "Manhattan Program" for EVs. Good luck with that. The US's neoliberal brain worms are dug in too deep. They couldn't do it for the MIC and they certainly won't do it for the auto industry.

Instead of just generating the next response, the system simulates entire conversation trees to find paths that achieve long-term goals.

How it works (a rough sketch follows the list):

  • Generates multiple response candidates at each conversation state
  • Simulates how conversations might unfold down each branch (using the LLM to predict user responses)
  • Scores each trajectory on metrics like empathy, goal achievement, coherence
  • Uses MCTS with UCB1 to efficiently explore the most promising paths
  • Selects the response that leads to the best expected outcome
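
Here is a minimal sketch of that loop, with placeholder functions standing in for the LLM calls (`propose_replies`, `simulate_user`, and `score_trajectory` are assumptions, not the project's actual API):

```python
import math
import random

class Node:
    def __init__(self, history: list[str], parent: "Node | None" = None):
        self.history = history          # conversation so far
        self.parent = parent
        self.children: list["Node"] = []
        self.visits = 0
        self.total_value = 0.0

    def ucb1(self, c: float = 1.4) -> float:
        if self.visits == 0:
            return float("inf")         # always try unvisited branches first
        exploit = self.total_value / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def propose_replies(history, k=3):      # stand-in for LLM response candidates
    return [f"reply{i} to: {history[-1]}" for i in range(k)]

def simulate_user(history):             # stand-in for LLM-predicted user turn
    return f"user reaction to: {history[-1]}"

def score_trajectory(history) -> float: # stand-in for empathy/goal/coherence scoring
    return random.random()

def search(root_history: list[str], iterations=50, depth=3) -> str:
    root = Node(root_history)
    root.children = [Node(root_history + [r], root)
                     for r in propose_replies(root_history)]
    for _ in range(iterations):
        node = max(root.children, key=Node.ucb1)       # selection via UCB1
        history = list(node.history)
        for _ in range(depth):                         # rollout: simulate the dialogue
            history.append(simulate_user(history))
            history.append(random.choice(propose_replies(history)))
        value = score_trajectory(history)
        while node is not None:                        # backpropagate the score
            node.visits += 1
            node.total_value += value
            node = node.parent
    best = max(root.children, key=lambda n: n.visits)  # most-visited child wins
    return best.history[-1]
```

The sketch returns the most-visited candidate rather than the single highest-scoring rollout, which is the usual MCTS choice since visit counts are a more robust signal than one lucky simulation.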

Limitations:

  • Scoring is done by the same LLM that generates responses
  • Branch pruning is naive: simple thresholding rather than something smarter like progressive widening
  • Memory usage grows with tree size; there is currently no node recycling