Historically, Nash equilibrium has been the predominant solution concept in game theory. We review a recent stream of results on multi-agent learning in standard classes of games (both competitive, e.g., zero-sum games, and cooperative, e.g., potential games) that showcase how the behavior of standard learning dynamics can deviate in unexpected ways from the predictions of equilibrium play. A wide range of behaviors is possible, and in fact common: cycles, bifurcations, chaos, and even the simultaneous local stability of Nash equilibrium and chaos. We end by discussing open questions and challenges.
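As a concrete illustration of learning dynamics failing to converge to Nash equilibrium, the sketch below runs Multiplicative Weights Update (MWU) with a constant step size on Matching Pennies, a zero-sum game whose unique Nash equilibrium is the uniform mixed strategy (1/2, 1/2). Starting near the equilibrium, the joint strategy profile spirals away from it rather than converging. This is a minimal, self-contained simulation for illustration only; the game choice, step size, and function names are assumptions, not taken from the text above.

```python
import math

# Matching Pennies: row player's payoff matrix A[i][j];
# the column player's payoff is -A[i][j] (zero-sum).
A = [[1, -1], [-1, 1]]

def mwu_trajectory(wx, wy, eta=0.1, steps=2000):
    """Run Multiplicative Weights Update for both players.

    wx, wy: initial (unnormalized) weights over each player's two actions.
    Returns the sequence of mixed strategies (prob. of action 0 for each player).
    """
    traj = []
    for _ in range(steps):
        x = [w / sum(wx) for w in wx]  # row player's mixed strategy
        y = [w / sum(wy) for w in wy]  # column player's mixed strategy
        traj.append((x[0], y[0]))
        # Expected payoff of each pure action against the opponent's mix.
        px = [sum(A[i][j] * y[j] for j in range(2)) for i in range(2)]
        py = [-sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]
        # Multiplicative update with constant learning rate eta.
        wx = [wx[i] * math.exp(eta * px[i]) for i in range(2)]
        wy = [wy[j] * math.exp(eta * py[j]) for j in range(2)]
    return traj

# Start slightly off the unique Nash equilibrium (1/2, 1/2).
traj = mwu_trajectory(wx=[1.05, 1.0], wy=[1.0, 1.0])

# L1 distance from Nash at the start vs. the end of the run.
d_start = abs(traj[0][0] - 0.5) + abs(traj[0][1] - 0.5)
d_end = abs(traj[-1][0] - 0.5) + abs(traj[-1][1] - 0.5)
print(d_start, d_end)
```

With a constant step size the discrete-time dynamics rotate around the equilibrium and drift outward, so `d_end` exceeds `d_start`: the time-average of play may approach Nash, but the day-to-day strategies cycle away from it.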