...
Motivation
Currently, Interactive Queries (IQ) throw InvalidStateStoreException for all types of error, which means a user cannot tell the error types apart and handle them differently.
Because of that, we should throw a different exception for each type of error.
Proposed Changes
We add four new exceptions:

```java
public class StateStoreMigratedException extends InvalidStateStoreException
public class StateStoreRetriableException extends InvalidStateStoreException
public class StateStoreFailException extends InvalidStateStoreException
public class StateStoreClosedException extends InvalidStateStoreException
```
Three categories of exceptions are thrown to the user:
- StateStoreRetriableException: the application instance is rebalancing; the user just needs to retry and wait until the rebalance has finished.
- StateStoreMigratedException: the store has been migrated and is no longer hosted by this application instance; the user needs to rediscover the store.
- StateStoreFailException: a fatal error occurred while accessing the state store; the user can neither retry nor rediscover.
One exception is internal only and never surfaces to the user directly: StateStoreClosedException.
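To illustrate how a caller could react to the three user-facing categories, here is a minimal self-contained sketch. The exception classes are re-declared locally as stand-ins for the ones this KIP proposes (they are not in a released Kafka version), and `reactionTo` is a hypothetical helper, not part of the proposal:

```java
// Local stand-ins for the proposed exception hierarchy (KIP classes, not shipped API).
class InvalidStateStoreException extends RuntimeException {
    InvalidStateStoreException(final String message) { super(message); }
}
class StateStoreRetriableException extends InvalidStateStoreException {
    StateStoreRetriableException(final String message) { super(message); }
}
class StateStoreMigratedException extends InvalidStateStoreException {
    StateStoreMigratedException(final String message) { super(message); }
}
class StateStoreFailException extends InvalidStateStoreException {
    StateStoreFailException(final String message) { super(message); }
}

class QueryErrorDispatch {
    // Map a query failure to the action its category calls for.
    static String reactionTo(final InvalidStateStoreException e) {
        if (e instanceof StateStoreRetriableException) {
            return "retry";      // rebalancing: wait and retry the same store handle
        } else if (e instanceof StateStoreMigratedException) {
            return "rediscover"; // store moved: fetch a new handle via KafkaStreams#store()
        } else {
            return "fail";       // StateStoreFailException: fatal, do not retry
        }
    }

    public static void main(final String[] args) {
        System.out.println(reactionTo(new StateStoreRetriableException("rebalancing"))); // retry
        System.out.println(reactionTo(new StateStoreMigratedException("moved")));        // rediscover
        System.out.println(reactionTo(new StateStoreFailException("boom")));             // fail
    }
}
```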
The following are the public methods that users will call:
...
- stores()
...
- get(k)
- range(from, to)
- all()
- approximateNumEntries()
...
- fetch(k)
- fetch(from, to)
...
- fetch(k, rf, tt)
- fetch(from, to, rf, tt)
- all()
- fetchAll()
...
- next()
- hasNext()
- peekNextKey()
...
To distinguish the different types of error, we need to handle InvalidStateStoreException better when these public methods are invoked. The main change is to introduce new exceptions that extend InvalidStateStoreException. InvalidStateStoreException itself is no longer thrown at all; only the new subclasses are.
```java
public class KafkaStreams {
    public <T> T store(final String storeName, final QueryableStoreType<T> queryableStoreType);
}

public class CompositeReadOnlyKeyValueStore<K, V> implements ReadOnlyKeyValueStore<K, V> {
    public V get(final K key);
    public KeyValueIterator<K, V> range(final K from, final K to);
    public KeyValueIterator<K, V> all();
    public long approximateNumEntries();
}

public class CompositeReadOnlySessionStore<K, V> implements ReadOnlySessionStore<K, V> {
    public KeyValueIterator<Windowed<K>, V> fetch(final K key);
    public KeyValueIterator<Windowed<K>, V> fetch(final K from, final K to);
}

public class CompositeReadOnlyWindowStore<K, V> implements ReadOnlyWindowStore<K, V> {
    public WindowStoreIterator<V> fetch(final K key, final long timeFrom, final long timeTo);
    public KeyValueIterator<Windowed<K>, V> fetch(final K from, final K to, final long timeFrom, final long timeTo);
    public KeyValueIterator<Windowed<K>, V> all();
    public KeyValueIterator<Windowed<K>, V> fetchAll(final long timeFrom, final long timeTo);
}

class DelegatingPeekingKeyValueIterator<K, V> implements KeyValueIterator<K, V>, PeekingKeyValueIterator<K, V> {
    public synchronized boolean hasNext();
    public synchronized KeyValue<K, V> next();
    public KeyValue<K, V> peekNext();
}

class MeteredWindowStoreIterator<V> implements WindowStoreIterator<V> {
    public boolean hasNext();
    public KeyValue<Long, V> next();
    public Long peekNextKey();
}
```
When the user calls one of the above methods and an InvalidStateStoreException is thrown, we check the KafkaStreams state and apply the following rules:
- If the state is RUNNING or REBALANCING:
  - StateStoreClosedException: wrapped in a StateStoreRetriableException
  - StateStoreMigratedException: not wrapped; thrown directly
- If the state is PENDING_SHUTDOWN, ERROR, or NOT_RUNNING:
  - any InvalidStateStoreException (including its subclasses) is wrapped in a StateStoreFailException
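The rule above can be sketched as a small, self-contained helper. This is not the actual implementation: the exception classes are local stand-ins for the proposed ones, the enum mirrors the relevant KafkaStreams.State values, and `wrapFor` is a hypothetical name:

```java
// Local stand-ins for the proposed exception hierarchy.
class InvalidStateStoreException extends RuntimeException {
    InvalidStateStoreException(final String message) { super(message); }
    InvalidStateStoreException(final Throwable cause) { super(cause); }
}
class StateStoreClosedException extends InvalidStateStoreException {
    StateStoreClosedException(final String message) { super(message); }
}
class StateStoreMigratedException extends InvalidStateStoreException {
    StateStoreMigratedException(final String message) { super(message); }
}
class StateStoreRetriableException extends InvalidStateStoreException {
    StateStoreRetriableException(final Throwable cause) { super(cause); }
}
class StateStoreFailException extends InvalidStateStoreException {
    StateStoreFailException(final Throwable cause) { super(cause); }
}

class ExceptionWrapper {
    // Mirrors the KafkaStreams.State values relevant to the wrapping rule.
    enum KafkaStreamsState { RUNNING, REBALANCING, PENDING_SHUTDOWN, ERROR, NOT_RUNNING }

    static InvalidStateStoreException wrapFor(final KafkaStreamsState state,
                                              final InvalidStateStoreException cause) {
        switch (state) {
            case RUNNING:
            case REBALANCING:
                // migrated: thrown directly; closed (and anything else): retriable
                return cause instanceof StateStoreMigratedException
                        ? cause
                        : new StateStoreRetriableException(cause);
            default:
                // PENDING_SHUTDOWN / ERROR / NOT_RUNNING: always fatal
                return new StateStoreFailException(cause);
        }
    }

    public static void main(final String[] args) {
        final InvalidStateStoreException closed = new StateStoreClosedException("closed");
        System.out.println(wrapFor(KafkaStreamsState.REBALANCING, closed).getClass().getSimpleName()); // StateStoreRetriableException
        System.out.println(wrapFor(KafkaStreamsState.ERROR, closed).getClass().getSimpleName());       // StateStoreFailException
    }
}
```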
Call Trace
...
Call trace 6: ReadOnlySessionStore#fetch(key)
```java
# Two category exceptions
public class RetryableStateStoreException extends InvalidStateStoreException
public class FatalStateStoreException extends InvalidStateStoreException

# Retryable exceptions
public class StreamThreadNotStartedException extends RetryableStateStoreException
public class StreamThreadRebalancingException extends RetryableStateStoreException
public class StateStoreMigratedException extends RetryableStateStoreException

# Fatal exceptions
public class KafkaStreamsNotRunningException extends FatalStateStoreException
public class StateStoreNotAvailableException extends FatalStateStoreException
public class UnknownStateStoreException extends FatalStateStoreException
```
The state store exceptions above can be classified into two category exceptions: RetryableStateStoreException and FatalStateStoreException. Users can catch one of these two categories if they only need to know whether retrying makes sense.
- Retryable exceptions
  - StreamThreadNotStartedException: thrown when the stream thread state is CREATED; the user can retry until the state is RUNNING.
  - StreamThreadRebalancingException: thrown when the stream thread is not running and the KafkaStreams state is REBALANCING; the user just retries and waits until the rebalance has finished (RUNNING).
  - StateStoreMigratedException: thrown when the state store has already been closed and the KafkaStreams state is REBALANCING.
- Fatal exceptions
  - KafkaStreamsNotRunningException: thrown when the stream thread is not running and the KafkaStreams state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR. The user cannot retry after this exception is thrown.
  - StateStoreNotAvailableException: thrown when the state store is closed and the KafkaStreams state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR. The user cannot retry after this exception is thrown.
  - UnknownStateStoreException: thrown when an unknown state store name is passed in.
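Assuming the two-category hierarchy, a caller that only cares about retryability could be written as the following sketch. The locally declared exception classes stand in for the proposed ones, and `queryWithRetry` is illustrative, not part of the proposal:

```java
import java.util.function.Supplier;

// Local stand-ins for the proposed two-category hierarchy.
class InvalidStateStoreException extends RuntimeException {
    InvalidStateStoreException(final String message) { super(message); }
}
class RetryableStateStoreException extends InvalidStateStoreException {
    RetryableStateStoreException(final String message) { super(message); }
}
class FatalStateStoreException extends InvalidStateStoreException {
    FatalStateStoreException(final String message) { super(message); }
}

class RetryingQuery {
    // Retry a bounded number of times on RetryableStateStoreException;
    // FatalStateStoreException (or anything else) propagates immediately.
    static <T> T queryWithRetry(final Supplier<T> query, final int maxAttempts) {
        RetryableStateStoreException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return query.get();
            } catch (final RetryableStateStoreException e) {
                last = e; // e.g. rebalancing: back off and try again
            }
        }
        throw last; // exhausted all attempts
    }

    public static void main(final String[] args) {
        final int[] calls = {0};
        // Fails once with a retryable error, then succeeds on the second attempt.
        final String v = queryWithRetry(() -> {
            if (calls[0]++ == 0) {
                throw new RetryableStateStoreException("rebalancing");
            }
            return "value";
        }, 3);
        System.out.println(v); // value
    }
}
```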
The following are the public methods that users will call to get a state store instance:
- KafkaStreams
- store(storeName, queryableStoreType)
Throws: StreamThreadNotStartedException, StreamThreadRebalancingException, KafkaStreamsNotRunningException, UnknownStateStoreException
The following are the public methods that users will call to get store values:
- interface ReadOnlyKeyValueStore(class CompositeReadOnlyKeyValueStore)
- get(key)
- range(from, to)
- all()
- approximateNumEntries()
- interface ReadOnlySessionStore(class CompositeReadOnlySessionStore)
- fetch(key)
- fetch(from, to)
- interface ReadOnlyWindowStore(class CompositeReadOnlyWindowStore)
- fetch(key, time)
- fetch(key, from, to)
- fetch(from, to, fromTime, toTime)
- all()
- fetchAll(from, to)
- @Deprecated fetch(key, timeFrom, timeTo)
- @Deprecated fetch(from, to, timeFrom, timeTo)
- @Deprecated fetchAll(timeFrom, timeTo)
- interface KeyValueIterator(class DelegatingPeekingKeyValueIterator)
- next()
- hasNext()
- peekNextKey()
All of the above methods may throw the following exceptions: StreamThreadRebalancingException, StateStoreMigratedException, KafkaStreamsNotRunningException, StateStoreNotAvailableException
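When StateStoreMigratedException is raised, retrying against the old store handle cannot succeed; the handle has to be rediscovered first. A self-contained sketch of that pattern follows. The exception class is a local stand-in for the proposed one, `queryWithRediscovery` and the generation counter are purely illustrative, and `storeSupplier` stands in for a call to `KafkaStreams#store(...)`:

```java
import java.util.function.Function;
import java.util.function.Supplier;

// Local stand-ins for the proposed exception classes.
class InvalidStateStoreException extends RuntimeException {
    InvalidStateStoreException(final String message) { super(message); }
}
class StateStoreMigratedException extends InvalidStateStoreException {
    StateStoreMigratedException(final String message) { super(message); }
}

class RediscoveringQuery {
    // Re-fetch the store handle before every attempt, so that a query failing
    // with StateStoreMigratedException is retried against a fresh handle.
    static <S, T> T queryWithRediscovery(final Supplier<S> storeSupplier,
                                         final Function<S, T> query,
                                         final int maxAttempts) {
        StateStoreMigratedException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            final S store = storeSupplier.get(); // rediscover on every attempt
            try {
                return query.apply(store);
            } catch (final StateStoreMigratedException e) {
                last = e;
            }
        }
        throw last; // exhausted all attempts
    }

    public static void main(final String[] args) {
        final int[] generation = {0};
        // The first handle is "migrated"; the second attempt succeeds.
        final String result = queryWithRediscovery(
            () -> "store-gen-" + generation[0]++,
            store -> {
                if (store.equals("store-gen-0")) {
                    throw new StateStoreMigratedException("migrated");
                }
                return "found in " + store;
            },
            3);
        System.out.println(result); // found in store-gen-1
    }
}
```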
- CompositeReadOnlySessionStore#fetch(key) (v)
- WrappingStoreProvider#stores() (v)
- StreamThreadStateStoreProvider#stores() (v)
- MeteredSessionStore#fetch(key)
- MeteredSessionStore#findSessions()
- CachingSessionStore#findSessions()
- AbstractStateStore#validateStoreOpen() (v)
- ChangeLoggingSessionBytesStore#findSessions()
- RocksDBSessionStore.findSessions(k)
- RocksDBSessionStore.findSessions(from, to)
- RocksDBSegmentedBytesStore#fetch()
- SessionKeySchema#segmentsToSearch()
- Segments#segments() (v)
- Segments#getSegment()
- ConcurrentHashMap#get()
- Segments#isSegment()
- return segment
- return segments
- Segments#getSegment()
- return
- Segments#segments() (v)
- return new SegmentIterator()
- SessionKeySchema#segmentsToSearch()
- return new WrappedSessionStoreIterator()
- RocksDBSegmentedBytesStore#fetch()
- return
- RocksDBSessionStore.findSessions(from, to)
- return
- RocksDBSessionStore.findSessions(k)
- return new MergedSortedCacheSessionStoreIterator()
- return new MeteredWindowedKeyValueIterator()
- CachingSessionStore#findSessions()
- return
- MeteredSessionStore#findSessions()
- MeteredWindowedKeyValueIterator#hasNext()
- MergedSortedCacheSessionStoreIterator#hasNext()
- AbstractMergedSortedCacheStoreIterator#hasNext()
- FilteredCacheIterator#hasNext()
- WrappedSessionStoreIterator#hasNext()
- SegmentIterator#hasNext()
- Segment.range(from, to)
- RocksDBStore.range(from, to)
- RocksDBStore.validateStoreOpen() (v)
- return new RocksDBRangeIterator()
- return
- Segment.range(from, to)
- return
- SegmentIterator#hasNext()
- return
- return iterator(MeteredWindowedKeyValueIterator)
- MergedSortedCacheSessionStoreIterator#hasNext()
- WrappingStoreProvider#stores() (v)
- MeteredWindowedKeyValueIterator#next()
- MergedSortedCacheSessionStoreIterator#next()
- AbstractMergedSortedCacheStoreIterator#next()
- AbstractMergedSortedCacheStoreIterator#hasNext()
- FilteredCacheIterator#hasNext()
- WrappedSessionStoreIterator#hasNext()
- SegmentIterator#hasNext()
- Segment.range(from, to)
- RocksDBStore.range(from, to)
- RocksDBStore.validateStoreOpen() (v)
- return new RocksDBRangeIterator()
- return
- Segment.range(from, to)
- return
- SegmentIterator#hasNext()
- MergedSortedCacheSessionStoreIterator#nextStoreValue()
- AbstractMergedSortedCacheStoreIterator#nextStoreValue()
- WrappedSessionStoreIterator#next()
- SegmentIterator#next()
- RocksDBRangeIterator#next()
- RocksDbIterator#next()
- RocksDbIterator#getKeyValue()
- RocksIterator#next()
- return entry
- return
- RocksDBRangeIterator#next()
- return
- SegmentIterator#next()
- return
- WrappedSessionStoreIterator#next()
- return
- return
- MergedSortedCacheSessionStoreIterator#next()
...
...
```java
public interface QueryableStoreType<T> {
// TODO: pass stream instance parameter
T create(final KafkaStreams streams, final StateStoreProvider storeProvider, final String storeName);
}
```
```java
public <T> T getStore(final String storeName, final QueryableStoreType<T> queryableStoreType) {
final List<T> globalStore = globalStoreProvider.stores(storeName, queryableStoreType);
if (!globalStore.isEmpty()) {
return queryableStoreType.create(new WrappingStoreProvider(Collections.<StateStoreProvider>singletonList(globalStoreProvider)), storeName);
}
final List<T> allStores = new ArrayList<>();
for (StateStoreProvider storeProvider : storeProviders) {
allStores.addAll(storeProvider.stores(storeName, queryableStoreType));
}
if (allStores.isEmpty()) {
// TODO: Replace with StateStoreMigratedException
throw new InvalidStateStoreException("The state store, " + storeName + ", may have migrated to another instance.");
}
return queryableStoreType.create(
new WrappingStoreProvider(storeProviders),
storeName);
}
```
```java
public <T> List<T> stores(final String storeName, final QueryableStoreType<T> queryableStoreType) {
final StateStore store = globalStateStores.get(storeName);
if (store == null || !queryableStoreType.accepts(store)) {
return Collections.emptyList();
}
if (!store.isOpen()) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("the state store, " + storeName + ", is not open.");
}
return (List<T>) Collections.singletonList(store);
}
```
```java
public <T> List<T> stores(final String storeName, final QueryableStoreType<T> queryableStoreType) {
if (streamThread.state() == StreamThread.State.DEAD) {
return Collections.emptyList();
}
if (!streamThread.isRunningAndNotRebalancing()) {
// TODO: Replace with StateStoreMigratedException
throw new InvalidStateStoreException("Cannot get state store " + storeName + " because the stream thread is " +
streamThread.state() + ", not RUNNING");
}
final List<T> stores = new ArrayList<>();
for (Task streamTask : streamThread.tasks().values()) {
final StateStore store = streamTask.getStore(storeName);
if (store != null && queryableStoreType.accepts(store)) {
if (!store.isOpen()) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("Cannot get state store " + storeName + " for task " + streamTask +
" because the store is not open. The state store may have migrated to another instances.");
}
stores.add((T) store);
}
}
return stores;
}
```
```java
public <T> List<T> stores(final String storeName, QueryableStoreType<T> type) {
final List<T> allStores = new ArrayList<>();
for (StateStoreProvider provider : storeProviders) {
final List<T> stores =
provider.stores(storeName, type);
allStores.addAll(stores);
}
if (allStores.isEmpty()) {
// TODO: Replace with StateStoreMigratedException
throw new InvalidStateStoreException("The state store, " + storeName + ", may have migrated to another instance.");
}
return allStores;
}
```
```java
void validateStoreOpen() {
if (!innerState.isOpen()) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("Store " + innerState.name() + " is currently closed.");
}
}
```
```java
private void validateStoreOpen() {
if (!open) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("Store " + this.name + " is currently closed");
}
}
```
```java
public synchronized boolean hasNext() {
if (!open) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException(String.format("RocksDB store %s has closed", storeName));
}
return iter.isValid();
}
```
```java
public synchronized boolean hasNext() {
if (!open) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException(String.format("Store %s has closed", storeName));
}
if (next != null) {
return true;
}
if (!underlying.hasNext()) {
return false;
}
next = underlying.next();
return true;
}
```
```java
List<Segment> segments(final long timeFrom, final long timeTo) {
final long segFrom = Math.max(minSegmentId, segmentId(Math.max(0L, timeFrom)));
final long segTo = Math.min(maxSegmentId, segmentId(Math.min(maxSegmentId * segmentInterval, Math.max(0, timeTo))));
final List<Segment> segments = new ArrayList<>();
for (long segmentId = segFrom; segmentId <= segTo; segmentId++) {
Segment segment = getSegment(segmentId);
if (segment != null && segment.isOpen()) {
try {
segments.add(segment);
} catch (InvalidStateStoreException ise) { // TODO: Replace with StateStoreClosedException
// segment may have been closed by streams thread;
}
}
}
return segments;
}
List<Segment> allSegments() {
final List<Segment> segments = new ArrayList<>();
for (Segment segment : this.segments.values()) {
if (segment.isOpen()) {
try {
segments.add(segment);
} catch (InvalidStateStoreException ise) { // TODO: Replace with StateStoreClosedException
// segment may have been closed by streams thread;
}
}
}
Collections.sort(segments);
return segments;
}
```
```java
public boolean hasNext() {
boolean hasNext = false;
while ((currentIterator == null || !(hasNext = hasNextCondition.hasNext(currentIterator)) || !currentSegment.isOpen())
&& segments.hasNext()) {
close();
currentSegment = segments.next();
try {
if (from == null || to == null) {
currentIterator = currentSegment.all();
} else {
currentIterator = currentSegment.range(from, to);
}
} catch (InvalidStateStoreException e) { // TODO: Replace with StateStoreClosedException
// segment may have been closed so we ignore it.
}
}
return currentIterator != null && hasNext;
}
```
...
Compatibility, Deprecation, and Migration Plan
...
Rejected Alternatives
None.