CPython Type System Internals: Video Series (codeconfessions.substack.com)

A primer on GPU architecture and computing

[-] abhi9u@lemmy.world 2 points 10 months ago

Yes, that makes much more sense.

[-] abhi9u@lemmy.world 4 points 10 months ago

Interesting. I'm just thinking aloud to understand this.

In this case, the models are looking at a short sequence of bytes in their context and are able to predict the next byte(s) with good accuracy, which allows efficient encoding. Most of our memories are associative, i.e. we associate them with some concept/name/idea. So, do you mean our brain uses the concept to predict a token, which gets decoded in the form of a memory?
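
Roughly, the link between prediction accuracy and compression looks like this (a toy sketch, not the paper's actual setup; the frequency-based predictor below is just a made-up stand-in for an LLM):

```python
# Toy illustration (not the paper's models): an ideal entropy coder spends
# about -log2(p) bits on a symbol the predictor assigned probability p,
# so the better the next-byte prediction, the fewer bits the encoding needs.
import math
from collections import Counter

def ideal_code_length(data: bytes, predict) -> float:
    """Total bits an ideal coder would need, given a predictor that maps a
    context (prefix of bytes) to a dict {byte_value: probability}."""
    total_bits = 0.0
    for i, byte in enumerate(data):
        probs = predict(data[:i])         # model looks at the preceding bytes
        p = probs.get(byte, 1e-9)         # probability it gave to the actual next byte
        total_bits += -math.log2(p)       # ideal cost of encoding that byte
    return total_bits

def toy_predictor(context: bytes) -> dict:
    """Stand-in for an LLM: byte frequencies seen so far, with +1 smoothing
    so unseen byte values still get a small probability."""
    counts = Counter(context)
    total = len(context) + 256
    return {b: (counts.get(b, 0) + 1) / total for b in range(256)}

data = b"abababababababab"
print(f"{ideal_code_length(data, toy_predictor):.1f} bits vs {len(data) * 8} bits raw")
```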

[-] abhi9u@lemmy.world 2 points 10 months ago

Yes. They also mention that using such large models for compression is not practical because the models' size dwarfs any amount of data you might want to compress. But this result gives a good picture of how general such large models are, and how well they can predict the next tokens for image/audio data at high accuracy.

[-] abhi9u@lemmy.world 3 points 10 months ago

Do you mean the number of tokens in the LLM's tokenizer, or the dictionary size of the compression algorithm?

The vocab size of the pretrained models is not mentioned anywhere in the paper. However, they did conduct an experiment where they measured compression performance while using tokenizers of different vocabulary sizes.

If you meant the dictionary size of the compression algorithm, then there was no dictionary: they used arithmetic coding to do the compression, which doesn't use one.
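
To illustrate, arithmetic coding only needs a probability for each next symbol. A stripped-down sketch (toy fixed probabilities, no renormalization, so not how a production coder is written) looks like this:

```python
# Stripped-down sketch of arithmetic coding: all it needs is a probability
# for each next symbol, so there is no dictionary of previously seen
# substrings anywhere (unlike LZ-style compressors).
def arithmetic_encode(message, probs):
    # Assign each symbol a sub-interval of [0, 1) proportional to its probability.
    cum, start = {}, 0.0
    for sym, p in probs.items():
        cum[sym] = (start, start + p)
        start += p

    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        sym_low, sym_high = cum[sym]
        high = low + span * sym_high      # narrow the current interval to the
        low = low + span * sym_low        # sub-range assigned to this symbol
    return (low + high) / 2               # any number inside the final interval encodes the message

probs = {"a": 0.7, "b": 0.2, "c": 0.1}    # better predictions -> wider intervals -> shorter codes
print(arithmetic_encode("aaab", probs))
```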

[-] abhi9u@lemmy.world 2 points 11 months ago

I don't know. I have found that the folks in the Technology community appreciate many of my computer science posts, but a dedicated, active computer science community would be awesome.

[-] abhi9u@lemmy.world 1 points 11 months ago

Thank you! That's helpful. I spent quite some time trying to understand the difference between UTF-8 and Python's internal representation, and I arrived at the same understanding you describe. However, most external documentation simply says that strings in Python are UTF-8, which made me conclude that perhaps I was missing something and that it might be safer to write it as UTF-8.

I will look into the code more, as you suggested.
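
For anyone else digging into this, here is a quick way to see that CPython (3.3+, with PEP 393's flexible string representation) does not store str objects as UTF-8 internally; the sample strings below are arbitrary examples:

```python
# CPython (PEP 393, 3.3+) stores a str with 1, 2, or 4 bytes per code point,
# chosen by the widest character in it; UTF-8 is only produced on encode().
import sys

samples = {
    "ASCII":   "a" * 1000,   # 1 byte per code point internally, 1 byte each in UTF-8
    "Latin-1": "é" * 1000,   # 1 byte per code point internally, 2 bytes each in UTF-8
    "BMP":     "€" * 1000,   # 2 bytes per code point internally, 3 bytes each in UTF-8
    "Astral":  "😀" * 1000,  # 4 bytes per code point internally, 4 bytes each in UTF-8
}

for name, s in samples.items():
    print(f"{name:8} internal ~{sys.getsizeof(s)} bytes, UTF-8 {len(s.encode('utf-8'))} bytes")
```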

[-] abhi9u@lemmy.world 3 points 11 months ago

Hi @qwop, I am the author. Thank you for reading and for the kind words. I would like to better understand the error I made so that I don't repeat it in the future, and to see if I can fix it. Could you please clarify?

[-] abhi9u@lemmy.world 1 points 11 months ago

I have the same problem. The number of things I want to read and write about is scaling faster than I can tackle them :)
