this post was submitted on 14 Mar 2026

Learn Programming


I saw one example

int x = 10;
int y = 5;

bool isGreater = x > y;

printf("%d", isGreater);

But I could write this

int x = 10;
int y = 5;

printf("%d", x > y);

I am a complete beginner, and I can't see a reason why I would or wouldn't want a separate boolean variable, but I want to understand their raison d'être.

Edit: typo city

[–] Rossphorus@lemmy.world 2 points 6 hours ago* (last edited 6 hours ago)

Here's a little secret: data types don't actually exist. When your code is compiled down to machine code, your computer just operates on collections of bits. Types are a tool we use to make code more understandable to humans. int and unsigned int tell us how a numerical value is to be interpreted. bool tells us that only two values will be used, and that the value somehow relates to true/false. char tells us that the value is probably some sort of text. But again, everything is just stored as bits under the hood.

You can go even further and define new types, even though they might still just be numbers under the hood. You could, for instance, define a new type, e.g. Age, which is still just an integer, but it tells you, the reader, more about what the value is and what it might be used for. You might define a new type just for error codes, so at a glance you can see that a function doesn't return an int, it returns an OsError, even if the errors are actually still just integer values.

C does, however, play somewhat loosely by these rules, and to try and 'make your life easier' it will often 'coerce' types into each other to make everything work. For instance, integer types can be coerced to booleans: 0 is false, everything else is true (even negative values). Later on you'll find that arrays can 'decay' to pointers, and other fun surprises. Personally, I think this implicit coercion between types is one of C's biggest mistakes. If types exist to help humans reason about code, what does it mean if the compiler needs to silently change types behind the scenes to make things work? It means the humans were sloppy and the compiler can only guess at what they actually wanted (and that's the best-case scenario! What happens when the compiler coerces types in places humans didn't expect? Bugs!).

There exists a spectrum of behaviours in this regard: some languages are what we call 'weakly typed' (like JavaScript, or to a lesser extent C), and others are 'strongly typed'. A weakly typed language will implicitly convert types to make things work: in JavaScript, if you try to add a number to text, "Hello!" + 52, the language will convert types until something makes sense. In this case it says 'oh, they probably want to append the text "52"' and produces "Hello!52". Sometimes this is handy; sometimes it introduces bugs. A strongly typed language will instead simply refuse to compile until you make the types line up.