XML is best suited for storing documents, JSON for transmitting application data over networks.
SVG is an example of an excellent use of XML, but that doesn't mean we should use XML for transmitting data from a backend to a frontend.
I agree with everything this article said. A lot of software would work better if devs took the time to learn and appreciate XML. Many times I've found myself reinventing shit XML gives you for free.
...But at the same time, if I'm working on a developer-facing product of any kind, I know that choosing XML over JSON is going to turn a lot of people away.
When you receive an XML document, you can verify its structure before you ever parse its content. This is not a luxury. This is basic engineering hygiene.
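Concretely, that verification step might look something like this in Python with the third-party lxml package (the file names are made up):

from lxml import etree

schema = etree.XMLSchema(etree.parse("person.xsd"))   # hypothetical schema file
doc = etree.parse("person.xml")                       # hypothetical document
if not schema.validate(doc):
    raise ValueError(str(schema.error_log))           # reject before any business logic runs
name = doc.findtext("name")                           # only structurally valid documents get here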
This is actually why my colleagues and I helped kill off XML.
XML APIs require extensive expertise to upgrade asynchronously (and this expertise is vanishingly rare). More typically all XML endpoints must be upgraded during the same unscheduled downtime.
JSON allows unexpected fields to be added and ignored until each participant can be upgraded, separately and asynchronously. It makes a massive difference in the resilience of the overall system.
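A rough sketch of that tolerant-reader pattern in Python (the field names are invented):

import json

payload = json.loads('{"id": 7, "name": "alice", "added_in_v2": true}')
user_id = payload["id"]
name = payload.get("name", "")   # optional field, with a default
# "added_in_v2" is simply never read, so an upgraded sender can't
# break this not-yet-upgraded receiver.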
I really really liked XML when I first adopted it, because before that I was flinging binary data across the web, which was utterly awful.
But XML for the web is exactly where it belongs - buried and forgotten.
Also, it is worth noting that JSON can be validated to satisfy that engineering impulse. The serialize/deserialize step will catch basic flaws, and then the validator simply has to be designed to know which JSON fields it should actually care about. This gets much more resilient results than XML's brittle all-in-one schema specification system, which immediately becomes stale and isn't actually correct for every endpoint anyway.
The shared single schema typically described every requirement of every endpoint, not any single endpoint's actual needs. This resulted in needless brittleness, and is one reason we had such a strong push for "microservices". Microservices could each justify their own schema, and so be a bit less brittle.
That said, I would love a good standard declarative configuration JSON validator, as long as it supported custom configs at each endpoint.
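For what it's worth, JSON Schema comes close to that today. A minimal per-endpoint sketch using the third-party jsonschema package (the schema itself is made up):

import jsonschema

user_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "name": {"type": "string"},
    },
    "required": ["id"],
    "additionalProperties": True,   # unknown fields stay legal, so async upgrades keep working
}
jsonschema.validate({"id": 7, "name": "alice"}, user_schema)   # raises ValidationError on mismatch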
I'm not sure I follow the all-in-one schema issue? Won't each endpoint have its own schema for its response? And if you're updating things asynchronously then doesn't versioning each endpoint effectively solve all the problems? That way you have all the resilience of the xml validation along with the flexibility of supplying older objects until each participant is updated.
Won't each endpoint have its own schema for its response?
They should, but often didn't. Today's IT folks consider microservices the reasonable default. But the logic back when XML was popular tended to be "XML APIs are very expensive to maintain. Let us save time and only maintain one."
And if you're updating things asynchronously then doesn't versioning each endpoint effectively solve all the problems?
XML schema validation meant that if anything changed on any endpoint covered by the schema, all messages would start failing. This was completely preventable, but only by an expert in the XML specification - and there were very few such experts. It was much more common to shut everything down, upgrade everything, and hope it all came back online.
But yes, splitting the endpoint into separate schema files solved many of the issues. It just did so too late to make much difference in the hatred for it.
And really, the remaining issues with the XML stack - dependency hell due to its sprawling, useless feature set, poor documentation, and huge security holes due to that same sprawling feature set - were still enough to put the last nail in its coffin.
There exists a peculiar amnesia in software engineering regarding XML
That’s for sure. But not in the way the author means.
There exists a pattern in software development where people who weren’t around when the debate was actually happening write another theory-based article rehashing old debates like they’re saying something new. Every ten years or so!
The amnesia is coming from inside the article.
[XML] was abandoned because JavaScript won. The browser won.
This comes across as remarkably naive to me. JavaScript and the browser didn’t “win” in this case.
JSON is just vastly simpler to read and reason about for every purpose other than configuration files that are being parsed by someone else. Yaml is even more human-readable and easier to parse for most configuration uses… which is why people writing the configuration parser would rather use it than XML.
Libraries to parse XML were/are extremely complex, by definition. Schemas work great as long as you’re not constantly changing them! Which, unfortunately, happens a lot in projects that are earlier in development.
Switching to JSON for data reduced frustration during development by a massive amount. Since most development isn’t building on defined schemas, the supposed massive benefits of XML were nonexistent in practice.
Even for configuration, the amount of “boilerplate” in XML is atrocious and there are (slightly) better things to use. Everyone used XML for configuration for Java twenty years ago, which was one of the popular backend languages (this author foolishly complains about Java too). I still dread the massive XML configuration files of past Java. Yaml is confusing in other ways, but XML is awful to work on and parse with any regularity.
I used XML extensively back when everyone writing asynchronous web requests was debating between using the two (in “AJAX”, the X stands for XML).
Once people started using JSON for data, they never went back to XML.
Syntax highlighting only works in your editor, and even then it doesn't help that much if you have a lot of data (like configuration files for large applications). Browsers could even display JSON with syntax highlighting natively, for obvious reasons: JSON is vastly simpler and easier to parse.
God, fucking Camel and Hibernate XML were the worst. And I was working with that not even 15 years ago!
Making XML schemas work was often a hassle. You have a schema ID, and sometimes you can open or load the schema through that URL. Other times, it serves only as an identifier, and your tooling/IDE must support ID-to-local-XSD-file mappings that you configure yourself.
Every time it didn't immediately work, you'd think: man, why don't they publish the schema under that public URL?
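One workaround, sketched in Python with the third-party lxml package (paths hypothetical): ignore the document's schemaLocation hint entirely and validate against a local copy you control.

from lxml import etree

schema = etree.XMLSchema(etree.parse("schemas/order-v2.xsd"))   # local copy of the schema
doc = etree.parse("incoming_order.xml")
print(schema.validate(doc))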
This seriously sounds like a nightmare.
It’s giving me Eclipse IDE flashbacks where it seemed so complicated to configure I just hoped it didn’t break. There were a lot of those, actually.
I love XML, when it is properly utilized. Which, in most cases, it is not, unfortunately.
JSON > CSV though, I fucking hate CSV. I do not get the appeal. "It's easy to handle" -- NO, it is not. It's the "fuck whoever needs to handle this" of file "formats".
JSON is a reasonable middle ground, I'll give you that
CSV >>> JSON when dealing with large tabular data:
1. JSON can't easily be read or written one row at a time
2. JSON repeats every field name on every row, which bloats the file
1 can be solved with JSONL, but 2 is unavoidable.
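For reference, the JSONL half of that in Python (file and field names made up):

import json

with open("rows.jsonl") as f:        # one JSON object per line
    for line in f:
        row = json.loads(line)       # one record in memory at a time
        print(row["id"])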
{
"columns": ["id", "name", "age"],
"rows": [
[1, "bob", 44], [2, "alice", 7], ...
]
}
There ya go, problem solved without the unparseable ambiguity of CSV
Please stop using CSV.
Great, now read it row by row without keeping it all in memory.
Wdym? That's a parser implementation detail. Even if the parser you're using needs to load the whole file into memory, it's trivial to write your own parser that reads those entries one row at a time. You could even add random access if you get creative.
That's one of the benefits of JSON: it is dead simple to parse.
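For example, a sketch with the third-party ijson package, streaming the "rows" array from the layout above (file name made up):

import ijson

with open("table.json", "rb") as f:
    for row in ijson.items(f, "rows.item"):   # yields one row at a time
        print(row)                            # e.g. [1, "bob", 44]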
No:
Just use Zarr or something similar for array data. A table with more than 200 rows isn't "human readable" anyway.
Yes... but compression.
And with CSV you just gotta pray that your parser parses the same way as their writer... and that their writer was correctly implemented... and that they set the settings correctly.
Compression adds another layer of complexity for parsing.
JSON can also have configuration mismatch problems. Main one that comes to mind is case (in)sensitivity for keys.
Nah, you're nitpicking there; large CSVs are gonna be compressed anyway.
In practice I've never met a JSON I can't parse; every second CSV is unparseable.
The biggest problem is that CSV is not a strictly standardized format the way JSON is. For very simple cases it could be used as a database-like format, but it depends on the parser, and that's not ideal.
Exactly. I've seen so much data destroyed silently deep in some bioinformatics pipeline due to this that I've just become an anti CSV advocate.
Use literally anything else that doesn't need out of band “I'm using this dialect” information that has to match to prevent data loss.
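A small Python demonstration of the dialect problem: the same bytes, two delimiter settings, two different tables.

import csv, io

data = 'a;"b;c";d\n'
print(next(csv.reader(io.StringIO(data))))                  # ['a;"b;c";d']  (one column)
print(next(csv.reader(io.StringIO(data), delimiter=";")))   # ['a', 'b;c', 'd']  (three columns)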
IMHO one of the fundamental problems with XML for data serialization is illustrated in the article:
(person (name "Alice") (age 30))

[is serialized as]

<person>
  <name>Alice</name>
  <age>30</age>
</person>

Or with attributes:

<person name="Alice" age="30" />
The same data can be portrayed in two different ways. Whenever you serialize or deserialize data, you need to decide whether to read/write values from/to child nodes or attributes.
That's because XML is a markup language. It's great for typing up documents, e.g. to describe a user interface. It was not designed for taking programmatic data and serializing that out.
JSON also has arrays. In XML, the usual practice for approximating arrays is to put the index in an attribute. It's incredibly gross.
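A quick Python sketch of why the element-versus-attribute ambiguity hurts deserializers: both documents below are legal XML for the "same" record, so you have to probe both places, whereas JSON has exactly one spelling.

import xml.etree.ElementTree as ET

def get_name(xml_text):
    person = ET.fromstring(xml_text)
    return person.get("name") or person.findtext("name")   # attribute first, then child element

print(get_name('<person name="Alice" age="30" />'))                   # Alice
print(get_name('<person><name>Alice</name><age>30</age></person>'))   # Alice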
JSON is easier to parse, smaller, and lighter on resources, and that matters on the web. If you take into account all the features XML has, plus entities, it gets big, slow, and complicated. Most data does not need to be a self-descriptive document when transferred over the web. Fundamentally these are two different kinds of languages: XML is a general markup language for writing documents, while JSON is a generalized data structure with support for various data types supported by programming languages.
while JSON is a generalized data structure with support for various data types supported by programming languages
Honestly, I find it surprising that you say “support for various data types supported by programming languages”. Data types are particularly weak in JSON when you go beyond JavaScript. Only number for numbers, no integer types, no date, no time, etc.
Regarding use, I also see JSON used outside of network transfer, at least to some degree, for example in configuration files.
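The date/time gap is easy to demonstrate in Python:

import json
from datetime import datetime

try:
    json.dumps({"when": datetime(2024, 1, 1)})
except TypeError as err:
    print(err)   # Object of type datetime is not JSON serializable

# The usual workaround: ISO 8601 strings, agreed on out of band.
print(json.dumps({"when": datetime(2024, 1, 1).isoformat()}))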
I'm sure XML has its uses
I'm also sure that for 99% of the applications out there, XML is overkill and overcomplicated, making things slower and more error-prone.
Use JSON, and you'll be fine. If you really really need XML then you probably already know why
Honestly, anyone pining for all the features of XML probably didn't live through the time when XML was used for everything. It was actually a fucking nightmare to account for the existence of all those features because the fact they existed meant someone could use them and feed them into your system. They were also the source of a lot of security flaws.
This article looks like it was written by someone who wasn't there, calling the people who are telling them the truth liars because the features they found on W3Schools look cool.
The fact that JSON serializes easily to basic data structures simplifies code so much. Most use cases don't need fully semantic data storage, and you end up writing the same amount of documentation about the data structures anyway. I'll give XML one thing, though: schemas are nice and easy there, but they have a high barrier to entry in JSON.
It's true, though, that JSON is just better for most applications.
XMPP shows pretty well that XML can do things that cannot be done easily without it. XMPP wouldn't work nearly as well with JSON. Namespaces are a super power.
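A tiny Python illustration of that superpower: two vocabularies coexist in one stanza without colliding (the receipt element is loosely modeled on XEP-0184).

import xml.etree.ElementTree as ET

stanza = (
    '<message xmlns="jabber:client">'
    '<body>hi</body>'
    '<received xmlns="urn:xmpp:receipts" id="abc"/>'
    '</message>'
)
root = ET.fromstring(stanza)
print(root.findtext("{jabber:client}body"))                # hi
print(root.find("{urn:xmpp:receipts}received").get("id"))  # abc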