Summarizer
class pyspark.ml.stat.Summarizer

Tools for vectorized statistics on MLlib Vectors. The methods in this package provide various statistics for Vectors contained inside DataFrames. This class lets users pick the statistics they would like to extract for a given column.
>>> from pyspark.ml.stat import Summarizer
>>> from pyspark.sql import Row
>>> from pyspark.ml.linalg import Vectors
>>> summarizer = Summarizer.metrics("mean", "count")
>>> df = sc.parallelize([Row(weight=1.0, features=Vectors.dense(1.0, 1.0, 1.0)),
...                      Row(weight=0.0, features=Vectors.dense(1.0, 2.0, 3.0))]).toDF()
>>> df.select(summarizer.summary(df.features, df.weight)).show(truncate=False)
+-----------------------------------+
|aggregate_metrics(features, weight)|
+-----------------------------------+
|[[1.0,1.0,1.0], 1]                 |
+-----------------------------------+
>>> df.select(summarizer.summary(df.features)).show(truncate=False)
+--------------------------------+
|aggregate_metrics(features, 1.0)|
+--------------------------------+
|[[1.0,1.5,2.0], 2]              |
+--------------------------------+
>>> df.select(Summarizer.mean(df.features, df.weight)).show(truncate=False)
+--------------+
|mean(features)|
+--------------+
|[1.0,1.0,1.0] |
+--------------+
>>> df.select(Summarizer.mean(df.features)).show(truncate=False)
+--------------+
|mean(features)|
+--------------+
|[1.0,1.5,2.0] |
+--------------+
New in version 2.4.0.
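The doctest above assumes an existing SparkContext bound to sc. For a standalone run, a minimal sketch using a SparkSession works as well; the application name and sample data below are illustrative assumptions, not part of the doctest:

# Minimal standalone sketch; assumes a local Spark installation is available.
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.stat import Summarizer

spark = SparkSession.builder.appName("summarizer-sketch").getOrCreate()  # hypothetical app name

# Sample rows with a weight column and a dense feature vector.
df = spark.createDataFrame(
    [(1.0, Vectors.dense(1.0, 1.0, 1.0)),
     (0.0, Vectors.dense(1.0, 2.0, 3.0))],
    ["weight", "features"],
)

# Unweighted mean of the feature column.
df.select(Summarizer.mean(df.features)).show(truncate=False)

# Weighted mean, using the weight column.
df.select(Summarizer.mean(df.features, df.weight)).show(truncate=False)

spark.stop()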
Methods Documentation
static metrics(*metrics)

Given a list of metrics, provides a builder that in turn computes metrics from a column.
See the documentation of Summarizer for an example.
The following metrics are accepted (case sensitive):

- mean: a vector that contains the coefficient-wise mean.
- sum: a vector that contains the coefficient-wise sum.
- variance: a vector that contains the coefficient-wise variance.
- std: a vector that contains the coefficient-wise standard deviation.
- count: the count of all vectors seen.
- numNonzeros: a vector with the number of non-zeros for each coefficient.
- max: the maximum for each coefficient.
- min: the minimum for each coefficient.
- normL2: the Euclidean norm for each coefficient.
- normL1: the L1 norm of each coefficient (sum of the absolute values).
Parameters
    metrics – the names of the metrics to compute (see the list above).

Returns
    an object of pyspark.ml.stat.SummaryBuilder
Note: Currently, the performance of this interface is about 2x~3x slower than using the RDD interface.
New in version 2.4.0.
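As a rough sketch of how a multi-metric builder might be used, assuming the df (weight, features) from the class example above and assuming the fields of the result struct are named after the requested metrics, the summary column can be aliased and its fields selected individually:

# Sketch only: reuses the `df` (weight, features) defined in the example above.
from pyspark.ml.stat import Summarizer

# Build one summarizer for several statistics at once.
multi = Summarizer.metrics("mean", "variance", "count")

# The summary is returned as a single struct column; alias it for easy field access.
summary_df = df.select(multi.summary(df.features, df.weight).alias("summary"))

# Assumption: the struct fields carry the metric names, so they can be selected directly.
summary_df.select("summary.mean", "summary.variance", "summary.count").show(truncate=False)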
static