Created: 22 Sep 2017
At the moment Apache Ignite's SQL engine does not support transactions. SELECT statements are executed on top of committed data without creating a snapshot, and DML statements are executed as a series of batched updates using the IgniteCache.invokeAll method. As a result it is neither possible to get a consistent view of the data, nor to update it with ACID semantics. We need to implement transaction support in SQL on top of the new MVCC protocol.
TX SQL will be implemented on top of the existing snapshot-based MVCC infrastructure. Writes obtain locks on keys; reads do not, so writes never block reads. Reads can be converted to blocking mode using a SELECT ... FOR UPDATE statement.
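The snapshot-read rule above can be sketched in a few lines of plain Java. This is an illustrative model, not Ignite internals: the RowVersion and SnapshotVisibility names, and the single monotonically increasing counter, are assumptions made for the example. A read carrying a snapshot counter sees the newest committed version written at or before that counter, without taking any lock.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of snapshot-based MVCC visibility (not Ignite code).
public class SnapshotVisibility {
    // Each committed row version carries the counter assigned to the writing transaction.
    static class RowVersion {
        final long mvccCounter;
        final String value;

        RowVersion(long mvccCounter, String value) {
            this.mvccCounter = mvccCounter;
            this.value = value;
        }
    }

    /**
     * A read running under {@code snapshotCounter} sees the newest committed
     * version whose counter does not exceed the snapshot. No lock is taken,
     * so concurrent writers are never blocked by this read.
     */
    static String read(List<RowVersion> versions, long snapshotCounter) {
        RowVersion visible = null;
        for (RowVersion v : versions) {
            if (v.mvccCounter <= snapshotCounter
                && (visible == null || v.mvccCounter > visible.mvccCounter))
                visible = v;
        }
        return visible == null ? null : visible.value;
    }

    public static void main(String[] args) {
        List<RowVersion> versions = new ArrayList<>();
        versions.add(new RowVersion(10, "v1"));
        versions.add(new RowVersion(20, "v2"));
        // A snapshot taken between the two commits sees only the first version.
        System.out.println(read(versions, 15)); // v1
        System.out.println(read(versions, 25)); // v2
    }
}
```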
In the first iteration only READ_COMMITTED mode will be supported.
All DML requests are split into two groups: with and without a reduce step. If no reduce step is needed, locks are obtained on map nodes immediately. If a reduce step is needed (e.g. for non-collocated aggregation), we cannot lock rows on the mapper immediately, because the target row set is not known in advance. In this case the filter condition must be re-evaluated as well, by executing the distributed query again (TBD).
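The reduce-step case can be illustrated with a minimal two-phase update sketch in plain Java (the table is a simple map standing in for a distributed cache; the locking itself is elided). Phase one collects candidate keys from a snapshot read without locks; phase two revisits each candidate and re-evaluates the filter against the current row, since the row may have changed between the phases.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Illustrative sketch (not Ignite code) of a two-phase DML update when rows
// cannot be locked up front.
public class TwoPhaseDml {
    static int update(Map<String, Integer> table, Predicate<Integer> filter, int newValue) {
        // Phase 1: snapshot read, no locks -- just remember the candidate keys.
        List<String> candidates = new ArrayList<>();
        for (Map.Entry<String, Integer> e : table.entrySet())
            if (filter.test(e.getValue()))
                candidates.add(e.getKey());

        // Phase 2: lock each candidate (elided here) and re-check the filter
        // against the current row before applying the update.
        int updated = 0;
        for (String key : candidates) {
            Integer current = table.get(key);
            if (current != null && filter.test(current)) {
                table.put(key, newValue);
                updated++;
            }
        }
        return updated;
    }

    public static void main(String[] args) {
        Map<String, Integer> table = new HashMap<>();
        table.put("a", 1);
        table.put("b", 5);
        // Only the row matching the filter at both phases is updated.
        System.out.println(update(table, v -> v > 3, 0));
    }
}
```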
A typical DML operation may modify any number of rows, which means we cannot store all modified rows on a near node. The current TX protocol must be extended so that updates are stored on primary/backup nodes only and are not transferred to the near node.
No changes to the existing API are needed. SQL statements will be enlisted into the ongoing transaction if one is available. PESSIMISTIC/READ_COMMITTED is the only valid TX mode in the first iteration; an attempt to enlist an SQL statement into any other TX mode will produce an exception.
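The enlistment check might look roughly as follows. This is a hedged sketch: the enum and method names (Concurrency, Isolation, checkEnlist) are illustrative and not the actual Ignite API, but the rule matches the text above — only PESSIMISTIC/READ_COMMITTED is accepted, everything else fails fast.

```java
// Illustrative sketch of the TX-mode check described above (not Ignite API).
public class TxModeCheck {
    enum Concurrency { OPTIMISTIC, PESSIMISTIC }
    enum Isolation { READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE }

    /** Throws if an SQL statement would be enlisted into an unsupported TX mode. */
    static void checkEnlist(Concurrency c, Isolation i) {
        if (c != Concurrency.PESSIMISTIC || i != Isolation.READ_COMMITTED)
            throw new IllegalStateException(
                "SQL statements support only PESSIMISTIC/READ_COMMITTED transactions, got: "
                    + c + "/" + i);
    }

    public static void main(String[] args) {
        // The only supported mode passes silently.
        checkEnlist(Concurrency.PESSIMISTIC, Isolation.READ_COMMITTED);
        try {
            checkEnlist(Concurrency.OPTIMISTIC, Isolation.SERIALIZABLE);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```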
COMMIT commands will be implemented on the server side. They will be mapped to the appropriate methods of the server-side transaction API.
Only MVCC-enabled transactional caches can be enlisted into a transaction. An exception is thrown if an SQL statement uses either an ATOMIC cache or a TRANSACTIONAL cache without MVCC support.
When executing DML statements, locks are typically acquired in unpredictable order, which may cause deadlocks. RDBMS vendors typically implement deadlock detectors that roll back the last statement when a deadlock is found. Deadlock detection is expensive in a distributed environment, as it requires coordination between nodes over the network. In the first iteration we can define a per-statement lock timeout: if the locks cannot be obtained within the given timeout, the statement is rolled back and an appropriate exception is thrown.
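The timeout-based alternative to deadlock detection can be sketched with a plain ReentrantLock standing in for a distributed row lock (the method and class names here are illustrative, not Ignite internals). If the lock cannot be acquired within the timeout, the statement fails immediately instead of waiting indefinitely for a deadlock that no detector would break.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of a per-statement lock timeout (not Ignite code).
public class LockTimeout {
    /**
     * Tries to run a statement under {@code rowLock}. Returns false if the
     * lock cannot be acquired within {@code timeoutMs}, in which case the
     * statement would be rolled back and an exception thrown to the caller.
     */
    static boolean tryStatement(ReentrantLock rowLock, long timeoutMs) throws InterruptedException {
        if (!rowLock.tryLock(timeoutMs, TimeUnit.MILLISECONDS))
            return false; // timed out: roll back instead of risking a deadlock
        try {
            // ... apply the update under the lock ...
            return true;
        } finally {
            rowLock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        ReentrantLock lock = new ReentrantLock();
        System.out.println(tryStatement(lock, 100)); // true: the lock was free
    }
}
```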