java - How do I calculate Spark Statistics that are not of type Double?


The Spark documentation includes tools for calculating min, max, and mean statistics on values of type Double. How do I handle a Spark/Java/Cassandra scenario when trying to work with column values of type float?

Edited to show the resolution:

import org.apache.spark.sql.DataFrame;
import static org.apache.spark.sql.functions.*;

DataFrame df = sqlContext.read()
        .format("org.apache.spark.sql.cassandra")
        .option("table", "sometable")
        .option("keyspace", "somekeyspace")
        .load();

df.groupBy(col("keycolumn"))
        .agg(min("valuecolumn"), max("valuecolumn"), avg("valuecolumn"))
        .show();

Cast it: (double) variable_here will give you the variable's value as a double.
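If you want the cast inside the DataFrame pipeline rather than on individual values, here is a minimal sketch of casting the column explicitly before aggregating, assuming the same hypothetical table, keyspace, and column names and an existing sqlContext as in the snippet above:

import org.apache.spark.sql.DataFrame;
import static org.apache.spark.sql.functions.*;

DataFrame df = sqlContext.read()
        .format("org.apache.spark.sql.cassandra")
        .option("table", "sometable")       // hypothetical table name
        .option("keyspace", "somekeyspace") // hypothetical keyspace
        .load();

// Cast the float column to double before aggregating, so the
// min/max/avg statistics come back as Doubles.
df.groupBy(col("keycolumn"))
        .agg(min(col("valuecolumn").cast("double")),
             max(col("valuecolumn").cast("double")),
             avg(col("valuecolumn").cast("double")))
        .show();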

