Spark provides three cluster deployment modes: standalone, Mesos, and YARN. Ignite should be able to start alongside Spark in all three modes.
Ignite should provide the following RDDs:
This RDD can be properly partitioned and colocated with Ignite nodes. A cache name, or optionally a full cache configuration, should be passed when constructing the RDD so that the user can create caches on the fly. The user may also specify a predicate that is passed to the Ignite scan query.
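A minimal sketch of how such a cache-backed RDD might be constructed. The `IgniteContext` and `fromCache` names reflect the eventual ignite-spark module; the predicate overload shown in the comment is a hypothetical API implied by this design, not a confirmed signature.

```scala
import org.apache.ignite.spark.IgniteContext
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.spark.SparkContext

val sc = new SparkContext("spark://master:7077", "ignite-example")

// IgniteContext starts Ignite alongside the Spark workers.
val ic = new IgniteContext(sc, () => new IgniteConfiguration())

// RDD backed by an Ignite cache; RDD partitions map to cache
// partitions, so computation is colocated with the data.
val cacheRdd = ic.fromCache[Int, String]("partitioned-cache")

// Hypothetical overload accepting a predicate that would be pushed
// down into the Ignite scan query (assumption, not final API):
// val filtered = ic.fromCache[Int, String]("partitioned-cache",
//   (k: Int, v: String) => v.startsWith("A"))
```

Passing a cache configuration instead of a name would let the RDD create the cache on first use rather than requiring it to exist beforehand.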
This RDD is not partitioned and should be parallelized by Spark if necessary. A cache name and a SQL clause should be passed when constructing the RDD.
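A sketch of constructing the SQL-backed RDD under the design above. The `sql` method with positional arguments matches the eventual ignite-spark API; the `Person` type and query are illustrative assumptions.

```scala
// Run an Ignite SQL query over a cache; the result is handed to Spark,
// which can repartition it as needed since the RDD itself is not
// partition-aware.
val personRdd = ic.fromCache[Int, Person]("person-cache")

// Positional parameters are bound to the '?' placeholders.
val adults = personRdd.sql(
  "select name, age from Person where age > ?", 21)
```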
A utility object that takes any Spark RDD and stores it in Ignite using a data streamer.
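A sketch of the write path, assuming the `savePairs` method that the ignite-spark module eventually exposes; under the hood it feeds the RDD's partitions into an Ignite data streamer for bulk loading.

```scala
// Build an arbitrary Spark RDD of key-value pairs...
val pairs = sc.parallelize(1 to 1000).map(i => (i, "value-" + i))

// ...and stream it into an Ignite cache. Each Spark partition is
// written through a data streamer, so the load is distributed and
// batched rather than done with per-entry puts.
ic.fromCache[Int, String]("target-cache").savePairs(pairs)
```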