There are two high-profile publications attempting to predict how artificial intelligence will develop in the near future:
AI 2027 and the Era of Experience paper
Both are worth reading, or at least worth asking your favorite LLM to summarize and then explain the important points.
AI 2027 is a scenario written by several researchers about how AI development might unfold in the coming years and how it all ends, which they then asked a popular blogger to rewrite in a semi-fictional storytelling format for easier consumption. To put it briefly, it all inevitably ends badly. Following abundant criticism, they split the ending into two versions: a gloomier one and one offering slightly unjustified hope. The year 2027 is just a convention; the scenario maps out developments roughly through 2035.
Much of what they write may indeed unfold that way, but my opinion is that the world is more complex, and all these complexities and nuances matter, which is why things will ultimately turn out differently. Again, we don't know; I'm not attempting to predict, while they did, so it's easier for me to criticize. Here's what I agree with: there are reasons for concern, but on the other hand, progress is irreversible, we won't stop it, and we shouldn't try to slow it down but rather prepare for positive scenarios and work on increasing their probability. And if superintelligence can destroy us all, well, it will, and we'll be powerless. Write diaries and blogs; they'll be used as source material for your virtual copies, your future descendants, so to speak. If we're lucky.
The Era of Experience is about something different. It argues that there are two very different approaches to AI development, reinforcement learning (RL) and large language models (LLMs), and lays out the difference between them. The authors describe a new era in artificial intelligence, characterized by agents with superhuman abilities that learn primarily from their own experience rather than from analysis of human data.
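To make that distinction concrete, here is a toy sketch of the two training signals. It is my own illustration, not code from the paper, and everything in it (the tiny corpus, the stand-in environment, the bandit-style update) is an assumption chosen for brevity.

```python
# Toy illustration (not from either publication): two sources of learning signal.
import random

# --- LLM-style: learn from a fixed corpus of human-produced data -------------
corpus = ["the cat sat", "the dog ran", "the cat ran"]

def train_on_human_data(corpus):
    """Count bigram continuations: the supervision is whatever humans already wrote."""
    model = {}
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            model.setdefault(prev, []).append(nxt)
    return model

# --- RL-style: learn from the agent's own experience -------------------------
def environment(action):
    """A stand-in world: reward comes from interaction, not from a dataset."""
    return 1.0 if action == "ran" else 0.0

def train_on_experience(actions, episodes=1000):
    """Keep a running value estimate per action based on rewards observed online."""
    values = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        action = random.choice(actions)   # explore
        reward = environment(action)      # experience generated by acting
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

if __name__ == "__main__":
    print(train_on_human_data(corpus))           # imitates the human distribution
    print(train_on_experience(["sat", "ran"]))   # discovers what the world rewards
```

The point of the toy is only this: the first function can never be better than its corpus, while the second improves from whatever reward the world returns, and that gap is what the paper builds its argument on.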
I think they deliberately exaggerate the difference between these approaches: both directions borrow tools and techniques from each other liberally, and although there is certainly a difference, one of the main insidious undertones is to show that their approach, RL, is better and more correct, and thereby to increase their own importance and secure more funding. It's understandable why. But it's still interesting, of course.
My general conclusion from all this is that the forecasters greatly simplify the distinctly non-human complexity of how all this will develop, and an understanding that is simplified at the qualitative level will produce a future different from the one predicted. That is, everything will be different. And not just "hard to say how," but literally impossible to say, because the difference will be so great that we currently lack the means in language, culture, and understanding of the world to comprehend it, let alone describe it.
Both publications are less about the technical development of AI and more about socio-philosophical anxieties. As always, to properly handle great complexity, we must have order at home, and unfortunately there's little order now and it's diminishing. You can look at this however you want: as a mental virus the planet uses to protect itself from parasites, or as human nature, or as anything else. But my own shtick about all this is that we need to try harder to understand the world more deeply and broadly, that this will help, and then we just watch how history unfolds. My allegory is that the universe isn't expanding; it remains as it was, but the level of detail we perceive increases, and we experience that as the expansion of space over time. And AI will still go through LSM (Large Scientific Models, a concept where models would be trained on scientific data rather than just text), with both RL and LLM as components.