
Helping teams build reliable AI systems and cloud infrastructure

Areas of Focus

Books


High Performance Spark: Best Practices for Scaling and Optimizing Apache Spark

Apache Spark is amazing when everything clicks. But if you haven't seen the performance improvements you expected or still don't feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau, Rachel Warren, and Anya Bida walk you through the secrets of the Spark code base, and demonstrate performance optimizations that will help your data pipelines run faster, scale to larger datasets, and avoid costly antipatterns.


Ideal for data engineers, software engineers, data scientists, and system administrators, the second edition of High Performance Spark presents new use cases, code examples, and best practices for Spark 3.x and beyond. This book gives you a fresh perspective on this continually evolving framework and shows you how to work around bumps on your Spark and PySpark journey.

With this book, you'll learn how to:

  • Accelerate your ML workflows with integrations including PyTorch
  • Handle key skew and take advantage of Spark's new dynamic partitioning
  • Make your code reliable with scalable testing and validation techniques
  • Deploy Spark on Kubernetes and similar environments
  • Take advantage of GPU acceleration with RAPIDS and resource profiles
  • Get your Spark jobs to run faster and handle even larger datasets
  • Gain faster insights by reducing pipeline running times
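The key-skew bullet above refers to a classic distributed-aggregation problem: one hot key overloads a single partition. A common workaround the book's skew material relates to is "salting" keys before aggregating. Here is a toy plain-Python sketch of the two-phase salted aggregation idea (not Spark code; `salt_key` and the phase structure are illustrative, under the assumption of a count-style aggregation):

```python
import random
from collections import Counter

def salt_key(key, num_salts=4):
    """Append a random salt so one hot key spreads across num_salts buckets."""
    return f"{key}_{random.randrange(num_salts)}"

# A skewed workload: one key dominates.
records = ["hot"] * 1000 + ["cold"] * 10

# Phase 1: aggregate on the salted key (in Spark, this work would
# parallelize across partitions instead of piling onto one).
partial = Counter(salt_key(k) for k in records)

# Phase 2: strip the salt and combine the partial counts per real key.
final = Counter()
for salted, count in partial.items():
    final[salted.rsplit("_", 1)[0]] += count
```

The same shape applies to sums, averages (via sum and count), and other associative aggregations; Spark 3.x's adaptive query execution can also split skewed partitions automatically.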
View on Amazon

Scaling Machine Learning with Spark: Distributed ML with MLlib, TensorFlow, and PyTorch

Learn how to build end-to-end scalable machine learning solutions with Apache Spark. With this practical guide, author Adi Polak introduces data and ML practitioners to creative solutions that supersede today's traditional methods. You'll learn a more holistic approach that takes you beyond specific requirements and organizational goals, allowing data and ML practitioners to collaborate and understand each other better.


Scaling Machine Learning with Spark examines several technologies for building end-to-end distributed ML workflows based on the Apache Spark ecosystem with Spark MLlib, MLflow, TensorFlow, and PyTorch. If you're a data scientist who works with machine learning, this book shows you when and why to use each technology.

You will:

  • Explore machine learning, including distributed computing concepts and terminology
  • Manage the ML lifecycle with MLflow
  • Ingest data and perform basic preprocessing with Spark
  • Explore feature engineering, and use Spark to extract features
  • Train a model with MLlib and build a pipeline to reproduce it
  • Build a data system to combine the power of Spark with deep learning
  • Get a step-by-step example of working with distributed TensorFlow
  • Scale machine learning with PyTorch and understand its internal architecture
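The "train a model and build a pipeline to reproduce it" bullet describes the fit/transform pattern that Spark MLlib's `Pipeline` API is built around: stages learn parameters during `fit`, and the fitted pipeline replays the exact same transformations at inference time. A toy plain-Python sketch of that pattern (not MLlib code; the `Scaler` and `Pipeline` classes here are illustrative stand-ins):

```python
class Scaler:
    """A stage: fit learns min/max, transform rescales into [0, 1]."""
    def fit(self, xs):
        self.lo, self.hi = min(xs), max(xs)
        return self

    def transform(self, xs):
        span = (self.hi - self.lo) or 1.0
        return [(x - self.lo) / span for x in xs]

class Pipeline:
    """Chains stages so fitted parameters travel with the pipeline."""
    def __init__(self, stages):
        self.stages = stages

    def fit(self, xs):
        # Fit each stage on the output of the previous one.
        for stage in self.stages:
            xs = stage.fit(xs).transform(xs)
        return self

    def transform(self, xs):
        for stage in self.stages:
            xs = stage.transform(xs)
        return xs

pipe = Pipeline([Scaler()]).fit([10.0, 20.0, 30.0])
result = pipe.transform([15.0])  # rescaled with the *fitted* min/max
```

Keeping learned parameters inside the pipeline object is what makes training reproducible: new data at serving time is scaled with the training-time min/max, not its own.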
View on Amazon

Let's Connect

I'm always interested in conversations about building production systems, technical leadership, and the future of data infrastructure.

Subscribe for Updates · Get in Touch
Latest Writing · InfoQ

Context Engineering with Adi Polak

Exploring the evolution from prompt engineering to context engineering for building stateful AI systems. Covers managing AI memory, event-driven architectures with Kafka and Flink, and creating reusable skills for multi-agent workflows.

Read on InfoQ