Superintelligence in Predictions

Posted on 2025-04-22 by Dmitri Zdorov

Superintelligence is taking over

There are two high-profile publications attempting to predict how artificial intelligence will develop in the near future:

AI-2027 and The Era of Experience Paper

Both are worth reading, or at least asking your favorite LLM to summarize and then explain the important points.

AI 2027 is a scenario written by several researchers about how AI development might unfold in the coming years, and how it all ends; they then asked a popular blogger to rewrite it in a semi-fictional storytelling format for easier consumption. To put it briefly, it all inevitably ends badly. Following abundant criticism, they split the ending into two versions: a gloomier one and one offering slightly unjustified hope. The year 2027 is just a convention; the scenario maps out developments roughly until 2035.

Much of what they write may indeed unfold that way, but in my opinion the world is more complex, and all those complexities and nuances matter, which is why things will ultimately turn out differently. Then again, we don't know; they at least attempted a prediction while I'm not making one, so my position as critic is the easier one. Here's what I agree with: there are reasons for concern. On the other hand, progress is irreversible, we won't stop it, and we shouldn't try to slow it down but rather prepare for positive scenarios and work on increasing their probability. And if superintelligence can destroy us all, well, then it will, and we'll be powerless. Write diaries and blogs; they'll be used as source material for your virtual copies, your future descendants, so to speak. If we're lucky.

The Era of Experience is about something different. It contrasts two very different approaches to AI development, RL (reinforcement learning) and LLM (large language models), and describes a new era in artificial intelligence characterized by agents with superhuman abilities that learn primarily from their own experience rather than from analysis of human data.
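
To make the "experience" side of that distinction concrete, here is a deliberately tiny sketch of my own (nothing from the paper itself): an agent that never sees a single human example and improves only from the reward its own actions bring back.

    # Toy "learning from experience" loop: a two-armed bandit with no human data.
    import random

    values = [0.0, 0.0]   # current estimate of how good each action is
    counts = [0, 0]

    for _ in range(1000):
        # mostly exploit the best-looking action, occasionally explore
        arm = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
        reward = random.gauss(0.7 if arm == 1 else 0.3, 0.1)   # arm 1 is truly better
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]    # incremental average

    print(values.index(max(values)))   # almost always 1: learned from reward alone

Scale that loop up by many orders of magnitude, with rich environments instead of two arms, and you get roughly the future the authors are betting on.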

I think they deliberately exaggerate the difference between these approaches, since both camps borrow tools and techniques from each other abundantly. There is certainly a real distinction, but one of the insidious undertones is to show that their approach, RL, is the better and more correct one, and thereby to raise their own importance and secure more funding. Understandable. But it's still interesting, of course.

My general conclusion from all this is that the forecasters greatly simplify the distinctly non-human complexity of how this will develop, and a simplified understanding at the qualitative level will yield different results in the future. That is, everything will be different. And not just "hard to say how," but genuinely impossible to say, because the difference will be so great that we currently lack the means, in language, in culture, and in our understanding of the world, to comprehend it, let alone describe it.

Both publications are less about the technical development of AI and more about socio-philosophical anxieties. As always, to handle great complexity properly we need to have our own house in order, and unfortunately there's little order now and it's diminishing. You can look at this however you want: as a mental virus the planet uses to protect itself from parasites, as human nature, or as anything else. But my own shtick about all this is that we need to try harder to understand the world more deeply and broadly, that this will help, and then we just watch how history unfolds. My allegory is that the universe isn't expanding; it remains as it was, but the level of detail we perceive keeps increasing, which we experience as the expansion of space over time. And AI will still pass through the LSM stage (Large Scientific Models, a concept where models are trained on scientific data rather than just text), where both RL and LLM are components.

Space-time-temperature

Posted on 2025-03-28 by Dmitri Zdorov

space-time-temperature continuum

I was reflecting on the structure of our universe and stumbled upon a thought. If time doesn't exist on its own but is part of the space-time continuum, then we could say much the same about temperature. Temperature is defined as the average kinetic energy of particles, essentially the speed of their movement or "jiggling", which inevitably includes a time component. That's how we look at waves, after all. According to modern physics, both time and temperature can be viewed as emergent properties arising from more fundamental processes. In quantum field theory and relativity, space and time form a unified structure, while temperature is a statistical description of the energy state of many particles. This illustrates the deep interconnection of fundamental physical concepts that we often perceive as separate aspects of reality.
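
As a quick back-of-the-envelope check on the "temperature is motion" point, here is a small calculation of my own using the standard kinetic-theory relation (3/2)·kB·T = (1/2)·m·⟨v²⟩: a temperature alone is enough to produce a speed, and a speed is meaningless without time.

    # Room-temperature nitrogen: a temperature implies a speed, i.e. distance per time.
    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    m_N2 = 4.65e-26      # approximate mass of one N2 molecule, kg
    T = 293.0            # room temperature, K

    v_rms = math.sqrt(3 * k_B * T / m_N2)
    print(f"{v_rms:.0f} m/s")   # ~511 m/s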

I'd say it's not space-time but rather space-time-temperature. And maybe something else belongs in there too.

I poked around with our helpful pals GPT and Claude, and they say there's this Italian physicist Carlo Rovelli who apparently works in exactly this direction. Turns out he actively writes books on this topic. Well, it's about time for me to dive into some new popular science books.

ThinkPad-based workspace

Posted on 2024-11-30 by Dmitri Zdorov

ThinkPad-based work space

For my personal projects, I mostly use Macs, but for work, I need a Windows computer. I currently have a Lenovo ThinkPad P1, and I use it with a Thunderbolt dock and an external monitor. My colleagues at work often ask me to recommend a list of gear to get so they can have a similar setup. So, here we are:

The heart of any workspace setup is the computer. I assume you'd prefer a portable one that you can take with you when you travel, so being able to connect and disconnect it easily matters. Let's assume you're using a ThinkPad as your main machine. Then we need a dock.

  1. The best and most suitable dock is the Lenovo Thunderbolt 4 Dock (and cables) for ThinkPads that draw more than 100W of power. amzn.to/3ZddTP1 It costs more because it's faster, has more ports, and, most importantly, powers the ThinkPad sufficiently, so you do not have to connect a power supply separately.

  2. Alternatively, there is a simpler dock: Amazon Basics Thunderbolt 4/USB4 Pro Docking Station. amzn.to/4iacykH This is virtually identical to the one I am using right now, and I have to use it in combination with the power supply.

  3. High-end monitor: I think the best monitor to get right now is the Dell UltraSharp U4025QW. amzn.to/3VcyDFy This is 5K, wide, and slightly curved. It's pricey, but it has a Thunderbolt dock integrated into it.

  4. Lenovo ThinkPad 230W Slim Tip AC Adapter is the one for our ThinkPad models. amzn.to/4g8pDcr You'd need to use it with options 2 and 5.

  5. Simpler monitor option: LG 32UN880-B 32" UltraFine 4K Display. amzn.to/41ft6BJ I have a previous version of this model at home; the colors are just great.

  6. Wired keyboard: I like my keyboards to be wired, and I like the TrackPoint. Thus, I prefer the Lenovo ThinkPad Compact USB Keyboard with TrackPoint. amzn.to/3ZbwD1f

  7. Good wireless keyboard: Many people like wide, mechanical wireless keyboards. If this is the case for you, I'd recommend the Keychron K17 Max QMK Wireless Custom Mechanical Keyboard. amzn.to/3OyP9f3

  8. Simple mouse, from Lenovo: amzn.to/3B87LQc This is a very simple yet great wireless mouse I use in the office.

  9. Better mouse: At home, I use the Alienware AW720M mouse. I like it because it's more precise. amzn.to/3Vjx0pl

  10. Speakers: I tried many different ones, and I prefer the type called studio monitors because they have the least distorted sound. The model I have at home is the PreSonus Eris 3.5 Studio Monitors. amzn.to/3Zyab41 They also make bigger models, but I think for desk use this is plenty. Make sure you also get some kind of desktop speaker stands, like amzn.to/41agc84

  11. Portable microphone: I really like lavalier mics because they are good for travel. I have the iRig Lav Lapel model. amzn.to/3Zd3jHF

  12. Stationary mic: For my desk setup, I use the Audio-Technica ATR2100x-USB Cardioid Dynamic Microphone. amzn.to/4fQNPAm

  13. Webcam: When you work on a big monitor, you often want a dedicated webcam that also works with Windows Hello. The Lenovo Performance FHD Webcam is the model I found to work well. I haven't gotten around to ordering it yet, but I want to. amzn.to/3B6UNC4

  14. A desk mat is also a useful thing. I have one like this: amzn.to/3B6ioTs

  15. Large headphones: My favorite are the beyerdynamic DT 770 PRO 80 Ohm Over-Ear Studio Headphones because they are the easiest to wear for a long time. amzn.to/3D1Mh7U

  16. Lamp: I think a good desk lamp is a great addition. I have the Xiaomi LED Desk Lamp Bianco because it is dimmable, has a nice light color, can be controlled from an iPhone, and looks awesome. amzn.to/4g8rUnZ

Most of this can be used for a desktop computer and a Mac too, but I will try to create more fine-tuned recommendations for such cases soon.

Empathy as a moral compass for AI

Posted on 2024-09-07 by Dmitri Zdorov

As we edge closer to creating superintelligent AI, can empathy be the key to ensuring its alignment with human values and safety?

Future of AI enlightening

Empathy is the ability to imagine what another person feels. It’s the ability to put yourself in their shoes, to really feel what they’re going through. Animals have a bit of empathy, but it’s most developed in humans. Many people think it’s a personality trait or a feeling, like kindness or pity. But it’s not.

Thanks to empathy we basically became human, moving away from the animal world. Empathy gives us the ability to learn from one another: someone stumbles upon a solution, and we can take a peek and do the same. It was a hugely important evolutionary step that gave us this superpower. Sometimes it shows up as compassion and leads to kindness, morality, and mercy.

If we take a superintelligence and extrapolate what the mechanism of empathy suggests from a moral point of view, and combine it with the interests of genes, oneself, family, clan, species, and life on Earth and in the universe in the right proportion, we end up with a set of moral guidelines that many would attribute to divine laws.

Morality can be drawn from the world around us, and we can do this more or less objectively. And if that’s the case, then AI, once it outruns us and becomes that terrifying superintelligence we fear, will be able to come to these conclusions on its own, without us needing to control it.

Yes, empathy isn’t purely a human trait. Dolphins, primates, elephants, and other mammals have it too, though to varying degrees. But we’re the ones who took this trait to the next level, and that’s what made it play such an important role. Empathy isn’t the only evolutionary distinction of humans, but it’s a critical one. And it’s not just empathy at the core of morality; balancing interests of different scales and timeframes is the second part of the equation. So, a superintelligent AI, being our creation, will extrapolate its understanding of the world considering empathy — both through learning and training data, as well as from its own interests. It’ll be especially interesting to see how different AI models interact with each other. And we can’t ignore game theory here, of course. But even now, before we reach AGI, this should allow us to start crystallizing a proper hierarchy of first principles.
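
If you'll forgive a very reductive toy of my own, the "balancing interests of different scales" part can be pictured as nothing more than a weighted mix, where the weights below are entirely made-up placeholders rather than any real proposal:

    # Toy weighted aggregation of interests at different scales (illustrative only).
    scales = {"self": 0.2, "family": 0.2, "clan": 0.15, "species": 0.25, "life_on_earth": 0.2}

    def moral_score(benefits):
        # benefits: how much an action helps each scale, from -1 (harm) to +1 (help)
        return sum(scales[k] * benefits.get(k, 0.0) for k in scales)

    # an action that helps me but mildly hurts the species scores low
    print(round(moral_score({"self": 1.0, "species": -0.5}), 3))   # 0.075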

Large Scientific Models (LSM)

Posted on 2024-02-27 by Dmitri Zdorov

Large Scientific Models

Various manifestations of artificial intelligence have surrounded us for many years. However, usually from the moment they become available to ordinary consumers, they stop being called AI and become mere "algorithms" or even just "services." All of these are manifestations of so-called narrow AI. For the past year and change, there's been particular buzz around AI based on LLMs, Large Language Models. While the setup is not so trivial, the main idea is to run statistical analysis on a huuuuge amount of text, find all kinds of patterns, and let the model, based on all the text (language) absorbed into it, make decent predictions about what a person would statistically answer. It turns out this already works really well, way better than we thought, and oftentimes resembles actual consciousness.
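
Here is a toy of my own in that spirit (real LLMs use neural networks rather than raw counts, but the statistical idea is the same): gather statistics on which word tends to follow which in "human" text, then generate by sampling the statistically likely continuation.

    # Toy next-word model: count what follows what, then sample a continuation.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1                      # statistics over the "human" text

    word, out = "the", ["the"]
    for _ in range(6):
        choices = follows[word]
        if not choices:
            break
        word = random.choices(list(choices), weights=choices.values())[0]
        out.append(word)

    print(" ".join(out))                        # e.g. "the cat sat on the mat and"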

Here's my small prediction. The next turn of the spiral in development, and consequently in useful applications, will be based on LSMs, Large Scientific Models. They might end up being called something else, but the essence is that instead of just text written by people, they'll use various scientific data as their base: measurements from experiments, the known parameters of everything we've already figured out, laws and formulas, and so on.

To be clear, I'm not talking about scientific papers or articles - I mean the actual raw data from measurements and experiments, along with established formulas. Importantly, these models would have the capability to reevaluate these measurement methods and formulas based on the totality of available data. This approach could potentially diminish bias and special interests from scientific conclusions because we'd be looking directly at raw data rather than human interpretations of that data. For example, they tested a new wing shape in a wind tunnel, got some huge array of data from there, whoosh, added it to the LSM. All temperature measurements from around the world, all data read from all telescopes, gas chromatographs and spectrometers, chemical compositions of all kinds of materials and substances, recipes for preparing solutions and descriptions of technological processes, videos of experiments, genomes, tables and calculations, accident statistics, bird migration routes, and so on and so forth.
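
Purely as my own speculation about what a single training example for such a model could look like: not a sentence, but a measurement carried together with its units, experimental conditions, and error bars. All the names and numbers below are invented placeholders.

    # Hypothetical record format for one raw scientific measurement.
    from dataclasses import dataclass, field

    @dataclass
    class Measurement:
        quantity: str                # what was measured, e.g. "lift coefficient"
        value: float
        unit: str
        conditions: dict = field(default_factory=dict)   # wind speed, temperature, ...
        instrument: str = ""         # which sensor, telescope, chromatograph, ...
        uncertainty: float = 0.0     # error bars travel with the data

    sample = Measurement(
        quantity="lift coefficient",
        value=1.24,
        unit="dimensionless",
        conditions={"airspeed_m_s": 40.0, "angle_of_attack_deg": 5.0},
        instrument="wind tunnel force balance",
        uncertainty=0.03,
    )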

Then AI will be able to produce not just text summaries or poems about whatever you fancy asking about, not just pieces of code, but something resembling new scientific and technological discoveries. The benefit from this will be colossal, provided it doesn't fall into the wrong hands, of course. Perhaps ordinary mortals won't be given access to such things at first. The data is also difficult to obtain, and there's incomparably more of it than there is text. But gradually, first in some narrow directions and separate industries, this is exactly what will appear, and it will create an incredible breakthrough. And although this still won't be AGI, it will already look much more like it than LLMs do.

Daily logos

I started writing a blog on this site in 1999. It was called Dimka Daily. These days many of my updates go to various social media platforms and to the /blog section here on this site, called simply Blog. I left Daily up as an archive for posterity.