The increasing popularity of machine learning (ML) is both a blessing and a curse. On the one hand, ML has enabled some truly astounding feats, such as consistently beating the world's top players at Go. On the other hand, many claimed achievements do not stand up to careful independent testing. In this talk I will give an overview of two approaches to machine learning: neural networks and boosting. I will briefly explain how these methods work and how they are applied. I will then focus on an ongoing project with David Sandwell and colleagues that demonstrates both the power and the pitfalls of using AdaBoost to edit bathymetry data.