Agreed with the points about defining intelligence, but on a pragmatic note, I'll list some concrete examples of fields in AI that are not LLMs (I'll leave it up to your judgement whether they're "more intelligent" or not):
- Machine learning. Most of the concrete examples other people gave here were deep learning models; they're used a lot, but certainly don't represent all of AI. ML is essentially fitting a function by tuning that function's parameters using data. It has many sub-fields, like uncertainty quantification, time-series forecasting, meta-learning, representation learning, surrogate modelling and emulation, etc.
- Optimisation, covering both gradient-based and black-box methods. These methods are about finding parameter values that maximise or minimise a function (see the optimisation sketch after this list). Machine learning is also an optimisation problem, and is usually solved with gradient-based methods.
- Reinforcement learning, which often uses a deep neural network to estimate state values, but is itself a framework for assigning values to states and learning the policy that maximises reward (see the Q-learning sketch after this list). When you hear about agents, they will often be using RL.
- Formal methods for solving NP-hard problems; popular examples include TSP and SAT (see the SAT sketch after this list). The aim is to solve these problems efficiently and with theoretical guarantees on the result. All of the hardware you use will have had its validity checked with this type of method at some point.
- Causal inference and discovery: identifying causal relationships from observational data when randomised controlled trials are not feasible, using theoretical results to establish when we can and cannot interpret a statistical association as a causal relationship (see the confounding sketch after this list).
- Bayesian inference and learning theory methods, not quite ML but highly related. These use Bayesian statistics, often with MCMC, to infer the posterior when the marginal likelihood is intractable (see the Metropolis-Hastings sketch after this list). It's mostly statistics, with AI helping out so we can actually compute things.
- Robotics, not a field I know much about, but it's about physical agents interacting with the real world, which comes with many additional challenges.
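To make some of these less abstract, here are a few minimal Python sketches. They're my own toy examples, invented for this comment rather than taken from any library or paper, so treat them as illustrations, not reference implementations. First, optimisation: the same one-dimensional function minimised with a gradient-based method and with a black-box random search.

```python
# Toy example: minimising f(x) = (x - 3)^2 two ways.
import random

def f(x):
    return (x - 3.0) ** 2

# Gradient-based: follow the analytic gradient f'(x) = 2(x - 3).
x = 0.0
for _ in range(100):
    grad = 2.0 * (x - 3.0)
    x -= 0.1 * grad                    # step size 0.1
print("gradient descent:", x)          # ~3.0

# Black-box: random search, only uses function evaluations, no gradients.
best_x, best_val = 0.0, f(0.0)
for _ in range(1000):
    cand = best_x + random.gauss(0.0, 0.5)
    if f(cand) < best_val:
        best_x, best_val = cand, f(cand)
print("random search:", best_x)        # also ~3.0
```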
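Next, reinforcement learning in its most stripped-down tabular form: Q-learning on a made-up 5-state chain where the agent earns reward by walking to the right end. The environment is purely hypothetical, just enough to show the value-update idea.

```python
# Toy tabular Q-learning on a 5-state chain: action 0 = left, 1 = right,
# reward 1 whenever the agent ends up in the rightmost state.
import random

N_STATES, ACTIONS = 5, [0, 1]
Q = [[0.0, 0.0] for _ in range(N_STATES)]      # Q[state][action]
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

for episode in range(500):
    s = 0
    for _ in range(20):
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # Q-learning update: bootstrap on the best value of the next state.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])   # values increase towards the rewarding end
```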
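For the formal-methods bullet, a bare-bones backtracking SAT search (essentially DPLL without the unit-propagation and pure-literal rules). Real solvers add clause learning and clever heuristics; this only shows the core branch-and-simplify loop.

```python
# Clauses are lists of ints in DIMACS style: 1 means x1 is true, -1 means x1 is false.
def solve(clauses, assignment=None):
    assignment = dict(assignment or {})
    simplified = []
    for clause in clauses:
        if any(assignment.get(abs(lit)) == (lit > 0) for lit in clause):
            continue                          # clause already satisfied
        rest = [lit for lit in clause if abs(lit) not in assignment]
        if not rest:
            return None                       # clause falsified -> backtrack
        simplified.append(rest)
    if not simplified:
        return assignment                     # all clauses satisfied
    var = abs(simplified[0][0])               # branch on some unassigned variable
    for value in (True, False):
        result = solve(simplified, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(solve([[1, 2], [-1, 3], [-2, -3]]))     # e.g. {1: True, 3: True, 2: False}
```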
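For causal inference, a small simulation of the classic confounding trap: a hidden variable Z drives both X and Y, so they are strongly associated in observational data even though setting X by intervention has no effect on Y. The structure and numbers are invented purely for illustration.

```python
# Toy confounding demo: Z -> X and Z -> Y, but no arrow from X to Y.
import random

def observe(n=100000):
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)                 # confounder
        x = z + random.gauss(0, 0.1)           # X driven by Z
        y = z + random.gauss(0, 0.1)           # Y driven by Z, not by X
        xs.append(x); ys.append(y)
    return xs, ys

def intervene(x_value, n=100000):
    ys = []
    for _ in range(n):
        z = random.gauss(0, 1)
        x = x_value                            # do(X = x_value): X is set by force, its link to Z is cut
        y = z + random.gauss(0, 0.1)           # Y still ignores X entirely
        ys.append(y)
    return ys

xs, ys = observe()
high = [y for x, y in zip(xs, ys) if x > 1.0]
print("E[Y | X > 1]    =", sum(high) / len(high))            # clearly > 0: association
print("E[Y | do(X=1)]  =", sum(intervene(1.0)) / 100000)     # ~0: no causal effect
```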
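And for the Bayesian bullet, a tiny Metropolis-Hastings sampler: we only ever evaluate the posterior up to its normalising constant, which is exactly the situation where the marginal likelihood is intractable. The prior, likelihood and data are all made up for the example.

```python
# Toy random-walk Metropolis-Hastings for the mean of normally distributed data.
import math, random

data = [2.1, 1.8, 2.4, 2.0, 2.3]

def log_unnormalised_posterior(mu):
    log_prior = -0.5 * mu ** 2                            # N(0, 1) prior on mu
    log_lik = sum(-0.5 * (x - mu) ** 2 for x in data)     # N(mu, 1) likelihood
    return log_prior + log_lik                            # constants dropped: no marginal likelihood needed

samples, mu = [], 0.0
for _ in range(10000):
    proposal = mu + random.gauss(0.0, 0.5)                # symmetric random-walk proposal
    log_accept = log_unnormalised_posterior(proposal) - log_unnormalised_posterior(mu)
    if random.random() < math.exp(min(0.0, log_accept)):  # accept with prob min(1, ratio)
        mu = proposal
    samples.append(mu)

burned_in = samples[2000:]
print(sum(burned_in) / len(burned_in))                    # ~sum(data)/(n+1) under this prior
```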
This list is by no means exhaustive, and the fields often overlap, borrowing each other's solutions to advance their own state of the art, but I hope it helps people who keep hearing that "AI is much more than LLMs" without knowing what else is out there. A common theme is that we use computational methods to answer questions, particularly those we couldn't easily answer ourselves.
To me, what sets AI apart from the rest of computer science is that we don't do "P" problems: if there is a method available to compute the solution directly or analytically, I usually wouldn't call it AI. As a basic example, I don't consider computing the coefficients of y = ax + b analytically to be AI, but I do consider the general approximation of linear models with ML to be AI (see the sketch below).
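To make that contrast concrete, here is the same straight-line fit done both ways on made-up data: the closed-form least-squares solution (which I wouldn't call AI) and an iterative gradient-descent fit of the identical model (the ML flavour of the same computation).

```python
# Toy data, invented for the example.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]
n = len(xs)

# Analytic: closed-form least-squares solution for y = ax + b.
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print("analytic:", a, b)

# ML-style: minimise the same squared error iteratively with gradient descent.
a_hat, b_hat = 0.0, 0.0
for _ in range(5000):
    grad_a = sum(2 * (a_hat * x + b_hat - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum(2 * (a_hat * x + b_hat - y) for x, y in zip(xs, ys)) / n
    a_hat -= 0.01 * grad_a
    b_hat -= 0.01 * grad_b
print("gradient descent:", a_hat, b_hat)   # converges to the analytic answer
```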
I don't think this is appeasing a bully; it actually gives him very little. Appeasement would have involved actually giving him something. The increase to 3.5% is back to around Cold War levels, which seems appropriate for the current geopolitical situation. The final 1.5% is essentially an accounting trick that lets whatever expenses you like count towards the 5%, like road maintenance or technological R&D; it would be hard not to reach this target. Plus, this money can now increasingly be spent on Europe's own companies instead of sending 1-2% of yearly GDP straight to the US economy, especially once economies of scale start picking up.
This is just what Europe was planning to do on its own, framed in a way that strokes Trump's ego and lets him claim it as his victory. Especially after a few years, this will not be a positive change for the US. I'll happily sacrifice Rutte's pride if it means Europe gets exactly what it wanted.