(Work in Progress)
Status
Current state: Under Discussion
...
Motivation
Currently, IQ throws InvalidStateStoreException for all types of error, which means a user cannot tell the different error cases apart or handle them differently. Because of that, we should throw a different exception for each type of error.
Proposed Changes
To distinguish the different types of error, we need to handle InvalidStateStoreException better in the public methods listed below. The main change is to introduce new exceptions that extend InvalidStateStoreException. InvalidStateStoreException itself is not thrown anymore; only the new subclasses are.
Code Block | ||
---|---|---|
| ||
# Two category exceptions
public class RetryableStateStoreException extends InvalidStateStoreException
public class FatalStateStoreException extends InvalidStateStoreException

# Retryable exceptions
public class StreamThreadNotStartedException extends RetryableStateStoreException
public class StreamThreadRebalancingException extends RetryableStateStoreException
public class StateStoreMigratedException extends RetryableStateStoreException

# Fatal exceptions
public class KafkaStreamsNotRunningException extends FatalStateStoreException
public class StateStoreNotAvailableException extends FatalStateStoreException
public class UnknownStateStoreException extends FatalStateStoreException |
The various state store exceptions can be classified into two category exceptions: RetryableStateStoreException and FatalStateStoreException. Users who only need to distinguish whether a retry may succeed can catch these two category exceptions.
- Retryable exceptions
- StreamThreadNotStartedException: will be thrown when the stream thread state is CREATED; the user can retry until the state reaches RUNNING.
- StreamThreadRebalancingException: will be thrown when the stream thread is not running and the stream state is REBALANCING; the user can retry and wait until the rebalance finishes (RUNNING).
- StateStoreMigratedException: will be thrown when the state store is already closed and the stream state is REBALANCING.
- Fatal exceptions
- KafkaStreamsNotRunningException: will be thrown when the stream thread is not running and the stream state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR. The user cannot retry when this exception is thrown.
- StateStoreNotAvailableException: will be thrown when the state store is closed and the stream state is PENDING_SHUTDOWN / NOT_RUNNING / ERROR. The user cannot retry when this exception is thrown.
- UnknownStateStoreException: will be thrown when the given state store name does not exist.
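With this hierarchy, a caller that only needs to know whether a retry may succeed can catch (or test for) the two category exceptions instead of every concrete subclass. A minimal sketch, using hypothetical stand-in classes for the proposed hierarchy rather than the real Streams API:

```java
// Hypothetical stand-ins for the proposed exception hierarchy; the real classes
// would live in org.apache.kafka.streams.errors.
class InvalidStateStoreException extends RuntimeException {
    InvalidStateStoreException(final String message) { super(message); }
}
class RetryableStateStoreException extends InvalidStateStoreException {
    RetryableStateStoreException(final String message) { super(message); }
}
class FatalStateStoreException extends InvalidStateStoreException {
    FatalStateStoreException(final String message) { super(message); }
}
class StateStoreMigratedException extends RetryableStateStoreException {
    StateStoreMigratedException(final String message) { super(message); }
}
class KafkaStreamsNotRunningException extends FatalStateStoreException {
    KafkaStreamsNotRunningException(final String message) { super(message); }
}

public class ExceptionCategories {
    // The caller only needs an instanceof check on the category class, not on
    // each concrete subclass.
    public static boolean isRetryable(final InvalidStateStoreException e) {
        return e instanceof RetryableStateStoreException;
    }
}
```

This is the main benefit of the two-category design: new concrete subclasses can be added later without breaking callers that branch only on retryability.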
The following is the public method that users will call to get a state store instance:
- KafkaStreams
- store(storeName, queryableStoreType)
Info |
---|
Throws: StreamThreadNotStartedException, StreamThreadRebalancingException, KafkaStreamsNotRunningException, UnknownStateStoreException |
The following are the public methods that users will call to get store values:
- interface ReadOnlyKeyValueStore(class CompositeReadOnlyKeyValueStore)
- get(key)
- range(from, to)
- all()
- approximateNumEntries()
- interface ReadOnlySessionStore(class CompositeReadOnlySessionStore)
- fetch(key)
- fetch(from, to)
- interface ReadOnlyWindowStore(class CompositeReadOnlyWindowStore)
- fetch(key, time)
- fetch(key, from, to)
- fetch(from, to, fromTime, toTime)
- all()
- fetchAll()
- @Deprecated fetch(key, timeFrom, timeTo)
- @Deprecated fetch(from, to, timeFrom, timeTo)
- @Deprecated fetchAll(timeFrom, timeTo)
- interface KeyValueIterator(class DelegatingPeekingKeyValueIterator)
- next()
- hasNext()
- peekNextKey()
- interface WindowStoreIterator(class MeteredWindowStoreIterator)
- next()
- hasNext()
- peekNextKey()
Info |
---|
All of the above methods may throw the following exceptions: StreamThreadRebalancingException, StateStoreMigratedException, KafkaStreamsNotRunningException, StateStoreNotAvailableException |
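The intended caller-side pattern for these query methods is to retry on the retryable subclasses (rediscovering the store handle where needed) and to give up on the fatal ones. A sketch under the assumption that the query is wrapped in a Supplier; the exception classes are hypothetical stand-ins for the proposed ones:

```java
import java.util.function.Supplier;

// Hypothetical stand-ins for the proposed exception classes.
class RetryableStateStoreException extends RuntimeException {
    RetryableStateStoreException(final String message) { super(message); }
}
class FatalStateStoreException extends RuntimeException {
    FatalStateStoreException(final String message) { super(message); }
}

public class QueryRetry {
    // Retries a store query while it fails with a retryable exception; a real
    // caller would also rediscover the store handle and back off between attempts.
    public static <T> T query(final Supplier<T> storeQuery, final int maxAttempts) {
        RetryableStateStoreException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return storeQuery.get();
            } catch (final RetryableStateStoreException e) {
                last = e; // e.g. rebalance in progress: rediscover the store, try again
            }
            // FatalStateStoreException is deliberately not caught: it propagates up.
        }
        throw last; // exhausted all attempts
    }

    // Demo: a query that fails twice with a retryable error, then succeeds.
    public static String demo() {
        final int[] failuresLeft = {2};
        return query(() -> {
            if (failuresLeft[0]-- > 0) {
                throw new RetryableStateStoreException("rebalance in progress");
            }
            return "value";
        }, 5);
    }
}
```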
When the user calls one of the above methods and an InvalidStateStoreException is thrown internally, we should check the KafkaStreams state according to the following rules:
- If the state is RUNNING or REBALANCING:
- StateStoreClosedException: should be wrapped into RetryableStateStoreException
- StateStoreMigratedException: should not be wrapped, but thrown directly
- If the state is PENDING_SHUTDOWN, NOT_RUNNING, or ERROR:
- wrap the InvalidStateStoreException into FatalStateStoreException
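The rule above amounts to a mapping from the KafkaStreams state plus the internal exception to the exception exposed to the user. A sketch with simplified stand-in classes and a simplified state enum (not the real internals):

```java
// Simplified stand-ins for the internal exceptions and the KafkaStreams state enum.
class InvalidStateStoreException extends RuntimeException {
    InvalidStateStoreException(final String message) { super(message); }
}
class StateStoreClosedException extends InvalidStateStoreException {
    StateStoreClosedException(final String message) { super(message); }
}
class RetryableStateStoreException extends InvalidStateStoreException {
    RetryableStateStoreException(final String message) { super(message); }
}
class StateStoreMigratedException extends RetryableStateStoreException {
    StateStoreMigratedException(final String message) { super(message); }
}
class FatalStateStoreException extends InvalidStateStoreException {
    FatalStateStoreException(final String message) { super(message); }
}

public class ExceptionWrapper {
    enum State { RUNNING, REBALANCING, PENDING_SHUTDOWN, NOT_RUNNING, ERROR }

    // Maps (KafkaStreams state, internal exception) to the user-facing exception.
    public static InvalidStateStoreException wrap(final State state,
                                                  final InvalidStateStoreException internal) {
        if (state == State.RUNNING || state == State.REBALANCING) {
            if (internal instanceof StateStoreMigratedException) {
                return internal; // thrown directly, not wrapped
            }
            return new RetryableStateStoreException(internal.getMessage());
        }
        // PENDING_SHUTDOWN / NOT_RUNNING / ERROR: always fatal.
        return new FatalStateStoreException(internal.getMessage());
    }
}
```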
Call Trace
...
Call trace 6: ReadOnlySessionStore#fetch(key)
- CompositeReadOnlySessionStore#fetch(key) (v)
- WrappingStoreProvider#stores() (v)
- StreamThreadStateStoreProvider#stores() (v)
- MeteredSessionStore#fetch(key)
- MeteredSessionStore#findSessions()
- CachingSessionStore#findSessions()
- AbstractStateStore#validateStoreOpen() (v)
- ChangeLoggingSessionBytesStore#findSessions()
- RocksDBSessionStore.findSessions(k)
- RocksDBSessionStore.findSessions(from, to)
- RocksDBSegmentedBytesStore#fetch()
- SessionKeySchema#segmentsToSearch()
- Segments#segments() (v)
- Segments#getSegment()
- ConcurrentHashMap#get()
- Segments#isSegment()
- return segment
- return segments
- Segments#getSegment()
- return
- Segments#segments() (v)
- return new SegmentIterator()
- SessionKeySchema#segmentsToSearch()
- return new WrappedSessionStoreIterator()
- RocksDBSegmentedBytesStore#fetch()
- return
- RocksDBSessionStore.findSessions(from, to)
- return
- RocksDBSessionStore.findSessions(k)
- return new MergedSortedCacheSessionStoreIterator()
- return new MeteredWindowedKeyValueIterator()
- CachingSessionStore#findSessions()
- return
- MeteredSessionStore#findSessions()
- MeteredWindowedKeyValueIterator#hasNext()
- MergedSortedCacheSessionStoreIterator#hasNext()
- AbstractMergedSortedCacheStoreIterator#hasNext()
- FilteredCacheIterator#hasNext()
- WrappedSessionStoreIterator#hasNext()
- SegmentIterator#hasNext()
- Segment.range(from, to)
- RocksDBStore.range(from, to)
- RocksDBStore.validateStoreOpen() (v)
- return new RocksDBRangeIterator()
- return
- Segment.range(from, to)
- return
- SegmentIterator#hasNext()
- return
- return iterator(MeteredWindowedKeyValueIterator)
- MergedSortedCacheSessionStoreIterator#hasNext()
- WrappingStoreProvider#stores() (v)
- MeteredWindowedKeyValueIterator#next()
- MergedSortedCacheSessionStoreIterator#next()
- AbstractMergedSortedCacheStoreIterator#next()
- AbstractMergedSortedCacheStoreIterator#hasNext()
- FilteredCacheIterator#hasNext()
- WrappedSessionStoreIterator#hasNext()
- SegmentIterator#hasNext()
- Segment.range(from, to)
- RocksDBStore.range(from, to)
- RocksDBStore.validateStoreOpen() (v)
- return new RocksDBRangeIterator()
- return
- Segment.range(from, to)
- return
- SegmentIterator#hasNext()
- MergedSortedCacheSessionStoreIterator#nextStoreValue()
- AbstractMergedSortedCacheStoreIterator#nextStoreValue()
- WrappedSessionStoreIterator#next()
- SegmentIterator#next()
- RocksDBRangeIterator#next()
- RocksDbIterator#next()
- RocksDbIterator#getKeyValue()
- RocksIterator#next()
- return entry
- return
- RocksDBRangeIterator#next()
- return
- SegmentIterator#next()
- return
- WrappedSessionStoreIterator#next()
- return
- return
- MergedSortedCacheSessionStoreIterator#next()
...
...
Code Block | ||
---|---|---|
| ||
public interface QueryableStoreType<T> {
// TODO: pass stream instance parameter
T create(final KafkaStreams streams, final StateStoreProvider storeProvider, final String storeName);
} |
Code Block | ||||
---|---|---|---|---|
| ||||
public <T> T getStore(final String storeName, final QueryableStoreType<T> queryableStoreType) {
final List<T> globalStore = globalStoreProvider.stores(storeName, queryableStoreType);
if (!globalStore.isEmpty()) {
return queryableStoreType.create(new WrappingStoreProvider(Collections.<StateStoreProvider>singletonList(globalStoreProvider)), storeName);
}
final List<T> allStores = new ArrayList<>();
for (StateStoreProvider storeProvider : storeProviders) {
allStores.addAll(storeProvider.stores(storeName, queryableStoreType));
}
if (allStores.isEmpty()) {
// TODO: Replace with StateStoreMigratedException
throw new InvalidStateStoreException("the state store, " + storeName + ", may have migrated to another instance.");
}
return queryableStoreType.create(
new WrappingStoreProvider(storeProviders),
storeName);
} |
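The getStore implementation above consults the global store provider first and treats an empty result from all per-thread providers as a sign that the store has migrated. That control flow, reduced to hypothetical minimal stubs (real providers return typed stores, not strings):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal, hypothetical stand-ins mirroring only the lookup order of getStore:
// global stores win; an empty result from every provider means "possibly migrated".
interface StoreProvider {
    List<String> stores(String storeName);
}

public class StoreLookup {
    public static String getStore(final StoreProvider globalProvider,
                                  final List<StoreProvider> localProviders,
                                  final String storeName) {
        final List<String> globalStores = globalProvider.stores(storeName);
        if (!globalStores.isEmpty()) {
            return globalStores.get(0); // global stores are preferred
        }
        final List<String> allStores = new ArrayList<>();
        for (final StoreProvider provider : localProviders) {
            allStores.addAll(provider.stores(storeName));
        }
        if (allStores.isEmpty()) {
            // here the KIP would throw StateStoreMigratedException
            throw new IllegalStateException("the state store, " + storeName
                + ", may have migrated to another instance.");
        }
        return allStores.get(0);
    }
}
```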
Code Block | ||||
---|---|---|---|---|
| ||||
public <T> List<T> stores(final String storeName, final QueryableStoreType<T> queryableStoreType) {
final StateStore store = globalStateStores.get(storeName);
if (store == null || !queryableStoreType.accepts(store)) {
return Collections.emptyList();
}
if (!store.isOpen()) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("the state store, " + storeName + ", is not open.");
}
return (List<T>) Collections.singletonList(store);
} |
Code Block | ||||
---|---|---|---|---|
| ||||
public <T> List<T> stores(final String storeName, final QueryableStoreType<T> queryableStoreType) {
if (streamThread.state() == StreamThread.State.DEAD) {
return Collections.emptyList();
}
if (!streamThread.isRunningAndNotRebalancing()) {
// TODO: Replace with StateStoreMigratedException
throw new InvalidStateStoreException("Cannot get state store " + storeName + " because the stream thread is " +
streamThread.state() + ", not RUNNING");
}
final List<T> stores = new ArrayList<>();
for (Task streamTask : streamThread.tasks().values()) {
final StateStore store = streamTask.getStore(storeName);
if (store != null && queryableStoreType.accepts(store)) {
if (!store.isOpen()) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("Cannot get state store " + storeName + " for task " + streamTask +
" because the store is not open. The state store may have migrated to another instances.");
}
stores.add((T) store);
}
}
return stores;
} |
Code Block | ||||
---|---|---|---|---|
| ||||
public <T> List<T> stores(final String storeName, QueryableStoreType<T> type) {
final List<T> allStores = new ArrayList<>();
for (StateStoreProvider provider : storeProviders) {
final List<T> stores =
provider.stores(storeName, type);
allStores.addAll(stores);
}
if (allStores.isEmpty()) {
// TODO: Replace with StateStoreMigratedException
throw new InvalidStateStoreException("the state store, " + storeName + ", may have migrated to another instance.");
}
return allStores;
} |
Code Block | ||||
---|---|---|---|---|
| ||||
void validateStoreOpen() {
if (!innerState.isOpen()) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("Store " + innerState.name() + " is currently closed.");
}
} |
Code Block | ||||
---|---|---|---|---|
| ||||
private void validateStoreOpen() {
if (!open) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException("Store " + this.name + " is currently closed");
}
} |
Code Block | ||||
---|---|---|---|---|
| ||||
public synchronized boolean hasNext() {
if (!open) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException(String.format("RocksDB store %s has closed", storeName));
}
return iter.isValid();
} |
Code Block | ||||
---|---|---|---|---|
| ||||
public synchronized boolean hasNext() {
if (!open) {
// TODO: Replace with StateStoreClosedException
throw new InvalidStateStoreException(String.format("Store %s has closed", storeName));
}
if (next != null) {
return true;
}
if (!underlying.hasNext()) {
return false;
}
next = underlying.next();
return true;
} |
Code Block | ||||
---|---|---|---|---|
| ||||
List<Segment> segments(final long timeFrom, final long timeTo) {
final long segFrom = Math.max(minSegmentId, segmentId(Math.max(0L, timeFrom)));
final long segTo = Math.min(maxSegmentId, segmentId(Math.min(maxSegmentId * segmentInterval, Math.max(0, timeTo))));
final List<Segment> segments = new ArrayList<>();
for (long segmentId = segFrom; segmentId <= segTo; segmentId++) {
Segment segment = getSegment(segmentId);
if (segment != null && segment.isOpen()) {
try {
segments.add(segment);
} catch (InvalidStateStoreException ise) { // TODO: Replace with StateStoreClosedException
// segment may have been closed by streams thread;
}
}
}
return segments;
}
List<Segment> allSegments() {
final List<Segment> segments = new ArrayList<>();
for (Segment segment : this.segments.values()) {
if (segment.isOpen()) {
try {
segments.add(segment);
} catch (InvalidStateStoreException ise) { // TODO: Replace with StateStoreClosedException
// segment may have been closed by streams thread;
}
}
}
Collections.sort(segments);
return segments;
} |
Code Block | ||||
---|---|---|---|---|
| ||||
public boolean hasNext() {
boolean hasNext = false;
while ((currentIterator == null || !(hasNext = hasNextCondition.hasNext(currentIterator)) || !currentSegment.isOpen())
&& segments.hasNext()) {
close();
currentSegment = segments.next();
try {
if (from == null || to == null) {
currentIterator = currentSegment.all();
} else {
currentIterator = currentSegment.range(from, to);
}
} catch (InvalidStateStoreException e) { // TODO: Replace with StateStoreClosedException
// segment may have been closed so we ignore it.
}
}
return currentIterator != null && hasNext;
}
|
...
Compatibility, Deprecation, and Migration Plan
...
Rejected Alternatives
None.