Product Description
Author: 林大贵 (Lin Dagui); List price: 99; Publisher: Tsinghua University Press; Publication date: January 1, 2018; Pages: 519; Binding: Paperback; ISBN: 9787302490739
●Chapter 1 Python Spark Machine Learning and Hadoop Big Data 1
●1.1 Introduction to Machine Learning 2
●1.2 Introduction to Spark 5
●1.3 Spark Data Processing: RDD, DataFrame, and Spark SQL 7
●1.4 Developing Spark Machine Learning and Big Data Applications with Python 8
●1.5 Python Spark Machine Learning 9
●1.6 Introduction to the Spark ML Pipeline Machine Learning Workflow 10
●1.7 Introduction to Spark 2.0 12
●1.8 Defining Big Data 13
●1.9 Overview of Hadoop 14
●1.10 The Hadoop HDFS Distributed File System 14
●1.11 Introduction to Hadoop MapReduce 17
●1.12 Conclusion 18
●Chapter 2 Installing the VirtualBox Virtual Machine Software 19
●2.1 Downloading and Installing VirtualBox 20
●2.2 Configuring the VirtualBox Storage Folder 23
●2.3 Creating a Virtual Machine in VirtualBox 25
●2.4 Conclusion 29
●Chapter 3 Installing the Ubuntu Linux Operating System 30
●3.1 Installing the Ubuntu Linux Operating System 31
●Partial table of contents
About the Book
Starting from an accessible explanation of the principles of big data and machine learning, this book covers the fundamental concepts of both fields, such as classification, analysis, training, modeling, prediction, machine learning for recommendation engines, binary classification, multiclass classification, regression analysis, and data visualization. It incorporates recent big data technologies and expands the machine learning coverage. To lower the barrier to learning big data technology, the book provides extensive hands-on exercises and detailed walkthroughs of example programs, showing how to use VirtualBox on a single Windows machine to install multiple Linux virtual machines, build a Hadoop cluster, and then set up a Spark development environment. The hands-on platform described in the book is not limited to a single physical computer: companies and schools with the necessary resources can follow the same setup process to deploy the platform across multiple physical machines, coming closer to a realistic production environment for big data and machine learning. The book is well suited to beginners learning the fundamentals of big data, and especially to readers studying big data theory and technology who need a hands-on practice text.
About the author: 林大贵 (Lin Dagui) has worked in the IT industry for many years and has extensive practical experience in system design, web development, digital marketing, business intelligence, big data, and machine learning.
Python, Spark, and Hadoop: Unleashing the Power of Big Data and Machine Learning for Real-World Applications

In today's data-driven world, the ability to process, analyze, and extract meaningful insights from vast datasets is no longer a niche skill but a fundamental requirement for success across numerous industries. As the volume and complexity of data continue to explode, traditional computing methods falter, demanding the adoption of robust, scalable, and efficient big data technologies. This is where the synergistic power of Python, Apache Spark, and Apache Hadoop truly shines. This comprehensive guide delves deep into the practical application of these foundational technologies, empowering you to build and deploy sophisticated machine learning models and big data solutions that tackle real-world challenges.

The journey begins with a solid understanding of the big data ecosystem and the pivotal roles played by Hadoop and Spark. We will demystify the core concepts of distributed computing, explaining how Hadoop's MapReduce framework laid the groundwork for processing massive datasets across clusters of commodity hardware. You'll gain a clear grasp of the Hadoop Distributed File System (HDFS) and its vital function in storing and managing petabytes of data reliably and efficiently. Furthermore, we'll explore the evolution from MapReduce to Spark, highlighting Spark's dramatic performance improvements through its in-memory processing capabilities and its versatile API that supports various workloads, including batch processing, real-time streaming, SQL queries, and graph processing.

Python, with its elegant syntax, extensive libraries, and thriving community, has become the de facto programming language for data science and machine learning. This guide will equip you with the essential Python skills needed to interact seamlessly with Spark and Hadoop. We will cover fundamental Python concepts, data manipulation with libraries like Pandas, and the crucial data structures and algorithms that underpin effective data analysis. You'll learn how to leverage Python's rich ecosystem of machine learning libraries, such as Scikit-learn, TensorFlow, and PyTorch, and understand how to integrate these powerful tools with your big data pipelines.

The heart of this guide lies in its practical, hands-on approach to building and deploying real-world applications. We will guide you through the process of setting up and configuring a Hadoop and Spark environment, whether it's a local development setup or a cluster deployment on cloud platforms like AWS, Azure, or Google Cloud. You'll gain proficiency in writing Spark applications using PySpark, the Python API for Spark, enabling you to harness Spark's distributed processing power for data transformation, feature engineering, and model training on large datasets.

Machine learning is a cornerstone of extracting value from big data. This guide provides a comprehensive exploration of various machine learning algorithms, from classic techniques like linear regression, logistic regression, decision trees, and support vector machines to more advanced methods like ensemble learning (random forests, gradient boosting) and deep learning architectures (convolutional neural networks, recurrent neural networks). Crucially, we will focus on how to apply these algorithms within a distributed computing framework.
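To make the PySpark data-processing workflow described above concrete, here is a minimal sketch (not taken from the book) of loading data into a distributed DataFrame and running a simple aggregation. It assumes a local Spark installation with the pyspark package available; the file path and column names are hypothetical placeholders.

```python
# Minimal PySpark sketch (assumptions: local Spark install, hypothetical
# CSV file and column names): load data into a DataFrame and aggregate it.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("IntroSketch").getOrCreate()

# Read a (hypothetical) ratings file into a distributed DataFrame.
df = spark.read.csv("data/ratings.csv", header=True, inferSchema=True)

# Average rating per item, highest first; evaluated in parallel on the cluster.
result = (df.groupBy("item_id")
            .agg(F.avg("rating").alias("avg_rating"))
            .orderBy(F.desc("avg_rating")))

result.show(10)  # trigger execution and print the top 10 rows
spark.stop()
```

DataFrame operations like these are evaluated lazily and executed across the cluster's executors, which is the in-memory, distributed execution model the blurb contrasts with disk-based MapReduce.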
You'll learn how to train models on distributed datasets using Spark MLlib, Spark's native machine learning library, and how to optimize model performance for large-scale scenarios. This includes understanding concepts like distributed training, hyperparameter tuning in a distributed environment, and model deployment strategies for big data applications.

Beyond individual machine learning algorithms, the guide emphasizes the entire machine learning lifecycle within the context of big data. This encompasses data preprocessing techniques tailored for large datasets, such as handling missing values, feature scaling, encoding categorical variables, and dimensionality reduction. You'll learn how to perform effective feature engineering to create informative features that drive model accuracy, understanding how to do this efficiently on distributed data. Model evaluation and selection will be covered in detail, focusing on metrics relevant to big data problems and strategies for robust model validation. Furthermore, we will address the critical aspects of model deployment, including how to integrate trained models into real-time data processing pipelines and how to monitor their performance in production.

The capabilities of Spark extend far beyond batch processing. This guide will introduce you to Spark Streaming, enabling you to build real-time data processing applications. You'll learn how to ingest data from various sources, such as Kafka or Kinesis, perform transformations and aggregations on streaming data, and even train and deploy machine learning models that can make predictions on incoming data streams. This opens up possibilities for building applications like real-time fraud detection systems, dynamic recommendation engines, and live anomaly detection.

Graph processing is another powerful facet of Spark that will be explored. We will delve into GraphX, Spark's API for graph computation. You'll learn how to represent graph data, perform fundamental graph algorithms like PageRank and connected components, and apply these techniques to problems such as social network analysis, recommendation systems, and fraud detection in network structures.

Real-world applications are the ultimate test of these technologies. Throughout the guide, you will encounter numerous case studies and practical examples that demonstrate how Python, Spark, and Hadoop are used to solve pressing business problems. These examples will span diverse domains, including:
●E-commerce and Retail: Building personalized recommendation systems, predicting customer churn, optimizing pricing strategies, and analyzing customer behavior.
●Finance: Detecting fraudulent transactions in real time, assessing credit risk, algorithmic trading, and analyzing market trends.
●Healthcare: Analyzing medical records for disease prediction, identifying patterns in patient data, and developing personalized treatment plans.
●IoT and Sensor Data: Processing and analyzing data from connected devices for predictive maintenance, anomaly detection, and performance monitoring.
●Natural Language Processing (NLP): Sentiment analysis, topic modeling, text summarization, and building intelligent chatbots on large text corpora.

We will also touch upon important considerations for working with big data, such as data governance, security, and performance optimization. You'll learn techniques for tuning Spark jobs, optimizing data storage, and ensuring the scalability and reliability of your big data solutions.
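As a concrete illustration of the Spark MLlib training workflow described above (and of the Spark ML Pipeline topic listed in the table of contents), here is a minimal, hypothetical sketch of a binary-classification pipeline. The Parquet path and the column names ("age", "income", "label") are placeholders, not examples from the book.

```python
# Sketch of a Spark ML Pipeline for binary classification.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("PipelineSketch").getOrCreate()
data = spark.read.parquet("data/training.parquet")  # hypothetical dataset

train, test = data.randomSplit([0.8, 0.2], seed=42)

# Assemble raw columns into a feature vector, scale it, then fit a classifier.
assembler = VectorAssembler(inputCols=["age", "income"], outputCol="raw_features")
scaler = StandardScaler(inputCol="raw_features", outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")

pipeline = Pipeline(stages=[assembler, scaler, lr])
model = pipeline.fit(train)  # training runs distributed across the cluster

# Evaluate on the held-out split with area under the ROC curve.
predictions = model.transform(test)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(predictions)
print(f"Test AUC: {auc:.3f}")
spark.stop()
```

Because every stage is part of one Pipeline object, the same sequence of transformations is applied consistently at training time and at prediction time, which is the main appeal of the Pipeline abstraction.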
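The streaming scenario described above can likewise be sketched with Spark's Structured Streaming API. This toy example reads text from a local socket rather than Kafka or Kinesis (a Kafka source would use format("kafka") and needs the spark-sql-kafka connector on the classpath) and keeps a running word count; the host and port are placeholders.

```python
# Structured Streaming sketch: running word counts over a socket text stream.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("StreamingSketch").getOrCreate()

# Unbounded DataFrame of lines arriving on localhost:9999 (placeholder source).
lines = (spark.readStream
              .format("socket")
              .option("host", "localhost")
              .option("port", 9999)
              .load())

# Split each line into words and maintain a running count per word.
counts = (lines.select(F.explode(F.split(lines.value, " ")).alias("word"))
               .groupBy("word")
               .count())

# Print the full updated result table to the console after each micro-batch.
query = (counts.writeStream
               .outputMode("complete")
               .format("console")
               .start())
query.awaitTermination()
```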
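Finally, the performance-tuning advice above can be made concrete with a small, hypothetical sketch: repartitioning two DataFrames on a shared join key so the join can reuse that partitioning rather than shuffling again, and caching a result that is used more than once. Paths, column names, and the partition count are illustrative only.

```python
# Sketch of partitioning and caching choices that affect shuffle cost.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TuningSketch").getOrCreate()

# Hypothetical input datasets.
events = spark.read.parquet("data/events.parquet")
users = spark.read.parquet("data/users.parquet")

# Hash-partition both sides on the join key so matching rows are co-located,
# which can let the join reuse this partitioning instead of shuffling again.
events_by_user = events.repartition(200, "user_id")
users_by_id = users.repartition(200, "user_id")

joined = events_by_user.join(users_by_id, on="user_id")
joined.cache()  # keep the joined result in memory because it is reused below

print(joined.count())                          # materializes and caches the data
print(joined.where("country = 'US'").count())  # served from the cached result
spark.stop()
```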
Understanding the nuances of distributed systems, including data partitioning, shuffling, and fault tolerance, will be integral to building robust and efficient applications.

This guide is designed for individuals who are passionate about leveraging the power of data to drive innovation. Whether you are a data scientist, a machine learning engineer, a software developer looking to expand your skillset, or a business analyst eager to harness the potential of big data, this book will provide you with the knowledge and practical experience needed to excel in this rapidly evolving field. By mastering the synergy of Python, Spark, and Hadoop, you will be well-equipped to tackle complex data challenges, build intelligent applications, and unlock unprecedented insights from the vast ocean of data that surrounds us.