Assorted Links #1

This month, I’m trying a new type of post with links to content that I’ve enjoyed over the past month (and some links that I will likely enjoy in the future). I will provide each link with some short commentary, in the hopes of sharing my interests more easily than by writing long posts. Let me know whether you find this valuable.

  1. Anders Sandberg on Grand Futures

Anders talks about the future and what the possibilities are if we make the right choices (and have a bit of luck). This type of thinking is what makes Effective Altruism appealing to me. Believing that humanity can solve any challenge, even the ones that today seem insurmountable, is a core part of my values and beliefs.

I hope that humanity one day defeats death, escapes its boundedness to Earth, and that our descendants will experience bliss that we cannot even imagine. Taking these as serious possibilities improves the odds that we will one day reach them.

See also: Timeline of the far future on Wikipedia

  2. Optimality is the Tiger by Veedrac

Agency seems to be a core part of why humans have conquered earth—we do stuff, we have goals, and we make and execute plans to reach those goals. Yet, we know quite little about how agency works or how to create it. This blog post argues that optimality is what creates agency.

The way I think of any intelligent system with any kind of objective (for GPT-3 the objective is just to predict the next word; for an RL agent it might be to win a round or find a gold coin) is that the intelligence, or optimality, of the system opens up a solution space. The more optimality in the system, the wider the solution space—the more possible solutions the system can access. When the system becomes optimal enough, agents become a viable solution, since they are more effective at solving problems.

Similarly, Veedrac argues that the danger of agentic AI may arise whenever the AI is optimal enough. Even a model like GPT-3 could develop agentic behavior, simply because agency is the optimal way to answer some types of prompts it might get.

  3. Writing with Elicit

Despite my worries about AI x-risk, very useful tools built on current AI systems are being developed by the day. One of the products I’ve been using the most is Elicit, a search engine for research papers.

It has been very helpful when writing my thesis and in discussions about various topics. You just ask the question you want answered, and Elicit finds relevant papers and summarizes them, making it easy to find answers. On top of that, the tool helps you build on the original question by suggesting possible follow-up questions.

  4. DeepMind on Goal Misgeneralization

Much of the early work in AI alignment was about reward functions and objective functions and how hard they are to specify in a way that makes them impossible to game or “hack”. In this work, DeepMind’s safety team shows that even correctly specified reward functions can lead to goal misgeneralization and suboptimal behavior.

I think this type of work is important, though I’m not yet sure whether this is a dangerous problem in the limit of AI training. The problem seems to be that the training process is not advanced enough to let the AI system learn which strategies are best, and thus it generalizes in the wrong direction. The question then becomes whether we will create dangerously capable AIs before we create sufficiently advanced training processes. This, I think, might not be the case.

  5. Anton Stjepan Cebalo on the Social Recession

For quite some time, loneliness has been talked about as a large problem. I’ve seen various graphs and stats about it, but this post really digs into the depths of the data that exists on friendships, dating, trust, and more. It is US-centric, but it seems likely that the trends are similar in Europe.

I think this type of article is crucial for modeling the world and different phenomena. Reasoning from the evidence to the conclusion, instead of the other way around, is easier with a summary like this at hand. Often when loneliness and social trends are discussed, there seems to be a set conclusion that is then used to explain the evidence, rather than the reverse.

  6. Universal Induction presentation by Marcus Hutter

Both humans and artificial intelligences have to learn from experience, but there is no satisfying general theory of this process. Marcus Hutter presents Universal Induction as a general theory—a theory that improves upon other inductive approaches using Bayes’ theorem, Kolmogorov complexity, Turing machines, Occam’s razor, and Solomonoff’s universal prior.

The talk is very good in its simplicity, and it helped me understand Hutter’s AIXI paper at a higher level. Universal induction is a key part of AIXI, as it drives the learning.
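For reference, the core of the theory can be written down compactly. As I understand it (notation loosely following Hutter’s, with U a universal Turing machine and ℓ(p) the length of program p), Solomonoff’s universal prior weights every program that could have generated the observed data, favoring shorter programs per Occam’s razor, and prediction is plain Bayesian conditioning:

```latex
% Solomonoff's universal prior: sum over all programs p whose output
% on a universal Turing machine U starts with the observed string x
M(x) = \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Prediction is Bayesian conditioning: the probability that the next
% symbol is b, given the sequence x_{1:t} observed so far
M(b \mid x_{1:t}) = \frac{M(x_{1:t} b)}{M(x_{1:t})}
```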

  7. Squigglepy

Quite recently, Squiggle, a programming language for probabilistic estimation developed by the Quantified Uncertainty Research Institute, was released. With it, one can easily build simple models and visualizations for use in forecasting.

Now, Rethink Priorities has released a Python implementation of Squiggle. This makes it easier for me to use, and I plan on playing around with it after finishing my thesis.
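As a taste of what this enables, here is a minimal sketch of a Fermi-style estimate. I haven’t used the library yet, so the exact calls are assumptions on my part (I’m assuming sq.norm and sq.lognorm take a 90% confidence interval, that distributions support arithmetic, and that sq.sample draws Monte Carlo samples):

```python
import numpy as np
import squigglepy as sq

# How many hours will my thesis take? A Fermi-style estimate where
# every input is a distribution instead of a point value.
# NOTE: assumed API - sq.norm(a, b) / sq.lognorm(a, b) give
# distributions with a 90% CI of [a, b], distributions support
# arithmetic, and sq.sample(dist, n=...) draws Monte Carlo samples.
pages = sq.norm(40, 80)            # 90% sure: 40 to 80 pages
hours_per_page = sq.lognorm(2, 6)  # right-skewed: 2 to 6 hours/page

samples = sq.sample(pages * hours_per_page, n=10_000)
print(f"median ~ {np.median(samples):.0f} hours, "
      f"90% CI ~ [{np.percentile(samples, 5):.0f}, "
      f"{np.percentile(samples, 95):.0f}] hours")
```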

  8. Git Re-Basin

Stochastic gradient descent performs remarkably well on high-dimensional non-convex optimization problems. Why is this? This paper argues that it is due to some as-yet-uncharacterized invariances in the training dynamics. These invariances make nearly independent training runs exhibit nearly identical characteristics.

They find that, using different permutation strategies, they can swap hidden units in the layers of two models while the functionality of the networks remains the same. A toy demonstration of this permutation invariance is sketched below.
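To make the invariance concrete, here is a minimal sketch (my own toy example, not code from the paper): permuting the hidden units of a small MLP, and permuting the next layer’s weights to match, leaves the network’s output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer MLP: f(x) = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# A random permutation matrix P for the 8 hidden units.
P = np.eye(8)[rng.permutation(8)]

# Permute the first layer's outputs and undo it at the second
# layer's inputs; the composed function is unchanged because
# relu acts elementwise and P^T P = I.
W1p, b1p = P @ W1, P @ b1
W2p = W2 @ P.T

x = rng.normal(size=4)
print(np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1p, b1p, W2p, b2)))  # True
```

As I understand the paper, their algorithms then search for the permutation that best aligns the units of two independently trained networks, so that one can interpolate between them without hitting a loss barrier.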

As I’ve written about path-dependence in machine learning before, this paper is a good example of evidence in favor of a low path-dependence world. Perhaps this paper, and other papers like it, hold the key to understanding inductive bias in machine learning?

  9. The effects of salient ranks on educational outcomes

This paper investigates the effect of salient achievement ranks on subsequent performance in education. Due to random assignment to classes in China, students who achieve identical results on baseline tests are ranked differently relative to their classmates. The authors exploit this and find that achievement rank, which is salient to parents, teachers, and students alike, affects students’ performance: a higher rank leads to better performance.

Further survey evidence suggests that this effect is due to students’ higher self-perception and learning confidence, as well as to parents’ increased understanding of the ranks, which leads to stricter parental oversight and expectations.

These are interesting results, pretty much in line with my expectations of the effects of ranking. They also show differential effects across ranks—the lowest ranks see a negative effect. It would also be interesting to see the effects of ranking at the class or school level. Perhaps one could use class or school rankings to let students study at their own level without these ranking effects harming their performance.

It is also worth noting that these are the effects of salient ranks. I think it is quite likely that there are “social” ranking effects as well, where students implicitly or explicitly know their ranking in class despite not having a salient benchmark of performance. (From my own experience, students quite quickly figure out who’s a good student and who’s not, even without grades or test scores to benchmark against.)

That’s it for this month! I hope you found the links useful. If you have any thoughts on the post or the links, please let me know!
