Assorted Links #3

January flew by, but I still found time to consume some old classics and some really interesting new pieces. Much of what I’ve been reading over the last month has made me want to write longer essays; hopefully I’ll find the time and inspiration to do that later this month.

  1. This is the Dream Time by Robin Hanson

One of my personal favorites from Robin Hanson. He describes the Aboriginal Dreamtime, a time more real than reality itself, in which the values, symbols, and laws of Aboriginal society were set. From there he goes on to argue that this particular era of human life will likely be the Dream Time of future generations.

Why? Because this era is richer than subsistence level, so rich that we have the luxury to do stupid and delusional things, and in Hanson’s view it is likely that coming generations will live closer to subsistence level again (kind of like the hunter-gatherers of the past).

I think this worldview hinges quite a bit on humans not being able to use AI to bring in a brighter Future. But at the same time, I’m sympathetic to the idea that this era may be more important than any other era of human life: if we manage to avoid extinction in the coming 500 years, then humanity may stand before a Future better than anything we can imagine.1

This also means that we, the people living in the Dream Time, can make our mark on the Future.

  2. Why is Everyone so Boring?, more from Hanson

As is typical for Hansonian theories, this recent blog post describes a theory of why the world has become boring. In short, it is simply not worth it for most people to be lively, passionate, and opinionated in public, since they are then seen as trying to “steal” status from others.

This leads to the world becoming more boring over time, as the tallest poppies get cut down. The only way to be lively in public is to become “elite” enough that you are no longer easy to cut down for social credit, and few reach that level.

The key part of the theory is that it concerns differences within individuals: the same individuals who are often boring in public are, most of the time, lively, opinionated, and passionate in private. The theory thus tries to explain this public-private discrepancy. It is not a theory of what happens in the transition from child or teenager to adult, where much of the world’s non-boringness often seems to disappear.

Paradoxically, I believe I sometimes do the opposite: I’m often more passionate and lively on here (on the blog) than I am in private conversations, especially when it comes to AI alignment, for reasons I’m not entirely sure about. (Perhaps that is a topic for another post.)

  3. Building a generative pre-trained transformer (GPT) with Andrej Karpathy

If you are interested in the code and the technical details of the currently most popular machine learning architectures, then this video series by Andrej Karpathy, former director of AI at Tesla (among many other things), is great.

Specifically, I think that understanding the attention mechanism is important for understanding how these models work and why they are so popular compared to alternative models.
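
To make that concrete, here is a minimal NumPy sketch of the scaled dot-product attention that Karpathy builds up step by step in the videos. The names and the toy data are mine, not from the series:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V: each token's output is a weighted
    sum of all value vectors, with weights from query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq, seq) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V

# Toy self-attention: 4 tokens, 8-dimensional embeddings, Q = K = V = x.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```

In a real GPT, Q, K, and V come from learned linear projections of the token embeddings, and a causal mask keeps each token from attending to positions after it.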

  4. DreamerV3 by DeepMind

While most of the hype in the past years has been around generative AI models, it can be helpful to remember that other AI fields are still active as well. Reinforcement learning was the main scare in the early days of AI alignment, mainly due to the ease with which one can mathematically define optimal reinforcement learning agents such as AIXI.2
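
To show what “mathematically define” means here: schematically, and following Hutter’s notation, the AIXI agent picks the action

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

where $U$ is a universal Turing machine, $\ell(q)$ is the length of program $q$, and $m$ is the horizon: expected future reward, averaged over all computable environments weighted by their simplicity. The definition is fully precise, and also wildly uncomputable, which is part of why it was such a convenient object for early alignment arguments.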

This recent model from DeepMind researchers is a reinforcement learning agent built from three neural nets: a world model, a critic, and an actor. Using fixed hyperparameters, the model learns across domains. Among other things, it is the first model to collect diamonds in Minecraft without using human data or curricula. This is a really cool development, though also a scary one: it means RL algorithms may become strong and general without needing human data, which somewhat reduces the likelihood of alignment success.
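
To give a feel for how the three nets fit together, here is a toy, runnable sketch of the Dreamer-style “imagination” loop. This is my own schematic with placeholder networks and a made-up reward, not DeepMind’s code; the real DreamerV3 uses a learned recurrent state-space world model and trains the actor and critic on exactly these kinds of imagined rollouts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the three networks. In DreamerV3 these are large
# learned models; here they are placeholders so the loop runs end to end.
class WorldModel:
    def encode(self, obs):                # observation -> latent state
        return obs
    def imagine(self, z, a):              # latent dynamics + reward head
        z_next = 0.9 * z + 0.1 * a
        reward = -np.sum(z_next ** 2)     # made-up reward: stay near origin
        return z_next, reward

class Actor:
    def act(self, z):                     # noisy toy policy
        return -z + rng.normal(scale=0.1, size=z.shape)

class Critic:
    def value(self, z):                   # toy value estimate
        return -np.sum(z ** 2)

world_model, actor, critic = WorldModel(), Actor(), Critic()

# "Dreaming": roll the policy forward inside the learned model for a fixed
# horizon, with no environment interaction, and score the imagined
# trajectory. Such imagined returns drive the actor and critic updates.
z = world_model.encode(rng.normal(size=4))
imagined_return = 0.0
for t in range(15):                       # imagination horizon
    a = actor.act(z)
    z, reward = world_model.imagine(z, a)
    imagined_return += 0.99 ** t * reward # discounted return
imagined_return += 0.99 ** 15 * critic.value(z)  # bootstrap with the critic

print("imagined return:", imagined_return)
```

The appeal of this setup is data efficiency: once the world model is good enough, the agent can generate unlimited training experience for the actor and critic by dreaming, which is part of why no human data or curricula were needed.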

I planned to say something about the transformer vs. reinforcement learning race to AGI here, but I feel it is too hard to predict at this moment, and I would need more space and time to write something valuable. I should say, though, that I’m quite hopeful that deep learning and transformers won’t lead to AGI even with maximum scaling.3 But at the same time, transformers have reached incredible capabilities in a short time using very little else than scale; with the right architectural improvements, I think AGI is unfortunately not far away.

This has also inspired me to want to write a post about world models, optimization, and agency. We’ll see if that ever materializes.

  5. It’s hard to do contrarian science by SMTM and Natalia Coelho

Slime Mold Time Mold’s recent blog series ‘A Chemical Hunger’ has been widely shared and discussed due to its contrarian claim that environmental contaminants are what’s driving the obesity epidemic. I think you should read it if you haven’t.

At the same time, you should not believe their theories and interpretations of the available evidence wholesale. Natalia Coelho has written two posts, ‘It’s probably not Lithium’ and ‘On not getting contaminated by the wrong obesity ideas’, about how SMTM seem to be over-updating on evidence, misinterpreting studies, and straight-up ignoring evidence in their blog posts. Perhaps the most concerning part is that SMTM seem unwilling to engage with criticism. Natalia’s work really is great; I admire her determination and patience in reading through tons of studies and even trying to replicate old results.

I think the entire affair serves as a reminder of how hard it is to do contrarian science. Consensus often appears for a reason, and most often that reason is that the available evidence supports the consensus opinion. We should not underrate scientific consensus. And if we are going to try to do contrarian science, which I believe some people should, then we should do it in a way that maximizes the likelihood of finding something real rather than the likelihood of getting large grants and social status.

Grand, good-looking, and plausible-sounding theories are all too alluring to many, including myself, and the risk is that a theory becomes popular because it sounds Very Important and Intelligent rather than because it is True. For it was said by the old masters: ‘That which can be destroyed by the truth should be.’

  1. See also Holden Karnofsky’s ‘Most Important Century’ series.
  2. Probably leading to alignment being comparably late to the deep learning party. Not that I win any Bayes points; deep learning was already a thing when I got interested in AI alignment.
  3. My credence in ‘maximum scaling ≠ AGI’ is low!!
