flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/SelectivityEstimator.scala (9 lines):
- line 222: // TODO supports more operators
- line 260: // TODO supports more operators
- line 413: // TODO not take include min/max into consideration now
- line 554: // TODO: It is difficult to support binary comparisons for non-numeric type
- line 624: // TODO not take includeMin into consideration now
- line 636: // TODO not take includeMax into consideration now
- line 648: // TODO not take includeMin into consideration now
- line 659: // TODO not take includeMax into consideration now
- line 690: // TODO: It is difficult to support binary comparisons for non-numeric type
flink-runtime/src/main/java/org/apache/flink/runtime/state/metainfo/StateMetaInfoSnapshot.java (6 lines):
- line 86: // TODO this will go away once all serializers have the restoreSerializer() factory method properly implemented.
- line 100: * TODO this variant, which requires providing the serializers,
- line 101: * TODO should actually be removed, leaving only {@link #StateMetaInfoSnapshot(String, BackendStateType, Map, Map)}.
- line 102: * TODO This is still used by snapshot extracting methods (i.e. computeSnapshot() method of specific state meta
- line 103: * TODO info subclasses), and will be removed once all serializers have the restoreSerializer() factory method implemented.
- line 159: * TODO this method should be removed once the serializer map is removed.
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/snapshot/RocksFullSnapshotStrategy.java (6 lines):
- line 288: // TODO: this code assumes that writing a serializer is threadsafe, we should support to
- line 325: //TODO this could be aware of keyGroupPrefixBytes and write only one byte if possible
- line 350: //TODO this could be aware of keyGroupPrefixBytes and write only one byte if possible
- line 359: //TODO this could be aware of keyGroupPrefixBytes and write only one byte if possible
- line 365: //TODO this could be aware of keyGroupPrefixBytes and write only one byte if possible
- line 381: //TODO this could be aware of keyGroupPrefixBytes and write only one byte if possible
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/functions/SqlDateTimeUtils.java (6 lines):
- line 371: // TODO use offset, better performance
- line 646: // TODO support it
- line 650: // TODO support it
- line 653: // TODO support it
- line 853: * TODO:
- line 1488: // TODO: remove if CALCITE-3199 fixed
flink-runtime/src/main/java/org/apache/flink/runtime/state/OperatorStateRestoreOperation.java (6 lines):
- line 98: // TODO when eager state registration is in place, we can try to get a convert deserializer
- line 99: // TODO from the newly registered typeSerializer instead of simply failing here
- line 114: // TODO with eager state registration in place, check here for typeSerializer migration strategies
- line 132: // TODO when eager state registration is in place, we can try to get a convert deserializer
- line 133: // TODO from the newly registered typeSerializer instead of simply failing here
- line 148: // TODO with eager state registration in place, check here for typeSerializer migration strategies
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImpl.java (5 lines):
- line 89: * TODO : Make pending requests location preference aware
- line 90: * TODO : Make pass location preferences to ResourceManager when sending a slot request
- line 719: // TODO - periodic (every minute or so) catch slots that were lost (check all slots, if they have any task active)
- line 721: // TODO - release slots that were not used to the resource manager
- line 757: // TODO: add some unit tests when the previous two are ready, the allocation may failed at any phase
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/SubQueryDecorrelator.java (4 lines):
- line 319: // TODO supports correlation variable with OR
- line 454: // TODO Currently, correlation in projection is not supported.
- line 970: // TODO does not allow correlation condition in its inputs now
- line 1334: // TODO: create immutable copies of all maps
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/StateAssignmentOperation.java (4 lines):
- line 293: //TODO: rewrite this method to only use OperatorID
- line 314: // TODO rewrite based on operator id
- line 352: //TODO: rewrite this method to only use OperatorID
- line 608: // TODO rewrite based on operator id
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdColumnInterval.scala (4 lines):
- line 277: // TODO supports ScalarSqlFunctions.IF
- line 278: // TODO supports CAST
- line 668: // TODO add more built-in agg functions
- line 751: //TODO if column at index position is EuqiJoinKey in a Inner Join, its interval is
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/ResourceManager.java (4 lines):
- line 331: // TODO: Maybe it's also ok to skip this check in case that we cannot check the leader id
- line 764: // TODO :: suggest old taskExecutor to stop itself
- line 875: // TODO :: suggest failed task executor to stop itself
- line 917: // TODO: Improve performance by having an index on the instanceId
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregateBase.scala (4 lines):
- line 264: // TODO: EventTimeTumblingGroupWindow should sort the stream on event time
- line 284: // TODO: EventTimeTumblingGroupWindow should sort the stream on event time
- line 318: // TODO: EventTimeTumblingGroupWindow should sort the stream on event time
- line 338: // TODO: EventTimeTumblingGroupWindow should sort the stream on event time
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/common/CommonLookupJoinRule.scala (4 lines):
- line 50: // TODO: shouldn't match temporal UDTF join
- line 65: // TODO: support to translate rowtime temporal join to TemporalTableJoin in the future
- line 93: // TODO: find TableSource in FlinkLogicalIntermediateTableScan
- line 98: // TODO Support `IS NOT DISTINCT FROM` in the future: FLINK-13509
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SourceStreamTask.java (3 lines):
- line 90: // TODO - we need to see how to derive those. We should probably not encode this in the
- line 91: // TODO - source's trigger message, but do a handshake in this task between the trigger
- line 92: // TODO - message from the master, and the source's trigger notification
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/metrics/MetricFetcherImpl.java (3 lines):
- line 151: // TODO: Once the old code has been ditched, remove the explicit TaskManager query service discovery
- line 152: // TODO: and return it as part of requestMetricQueryServiceAddresses. Moreover, change the MetricStore such that
- line 153: // TODO: we don't have to explicitly retain the valid TaskManagers, e.g. letting it be a cache with expiry time
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecGroupTableAggregate.scala (3 lines):
- line 124: // TODO: heap state backend do not copy key currently, we have to copy input field
- line 125: // TODO: copy is not need when state backend is rocksdb, improve this in future
- line 126: // TODO: but other operators do not copy this input field.....
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecGroupAggregate.scala (3 lines):
- line 132: // TODO: heap state backend do not copy key currently, we have to copy input field
- line 133: // TODO: copy is not need when state backend is rocksdb, improve this in future
- line 134: // TODO: but other operators do not copy this input field.....
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java (3 lines):
- line 483: // TODO: This method needs a leader session ID
- line 495: // TODO: This method needs a leader session ID
- line 808: //TODO: Remove once the ZooKeeperLeaderRetrieval returns the stored address upon start
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/utils/HiveTableSqlFunction.java (3 lines):
- line 73: * @deprecated TODO hack code, its logical should be integrated to TableSqlFunction
- line 122: * TODO FlinkTableFunction need implement getElementType.
- line 223: * @deprecated TODO hack code, should modify calcite getRowType to pass operand types
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/GenerateUtils.scala (3 lines):
- line 179: // TODO: should we also consider other types?
- line 455: throw new UnsupportedOperationException() // TODO support symbol?
- line 743: throw new UnsupportedOperationException() // TODO support MULTISET and MAP?
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/StreamTaskStateInitializerImpl.java (3 lines):
- line 332: // TODO currently this does not support local state recovery, so we expect there is only one handle.
- line 367: // TODO currently this does not support local state recovery, so we expect there is only one handle.
- line 454: private final String stateName; //TODO since we only support a single named state in raw, this could be dropped
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/operators/windowing/WindowOperator.java (3 lines):
- line 250: // TODO this sanity check should be here, but is prevented by an incorrect test (pending validation)
- line 251: // TODO see WindowOperatorTest.testCleanupTimerWithEmptyFoldingStateForSessionWindows()
- line 252: // TODO activate the sanity check once resolved
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java (3 lines):
- line 218: * TODO it might be replaced by the global IO executor on TaskManager level future.
- line 551: // TODO: investigate why Throwable instead of Exception is used here.
- line 714: // TODO: investigate why Throwable instead of Exception is used here.
flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskExecutor.java (3 lines):
- line 788: // TODO: Do we still need this catch branch?
- line 792: // TODO: Maybe it's better to return an Acknowledge here to notify the JM about the success/failure with an Exception
- line 902: // TODO: Filter invalid requests from the resource manager by using the instance/registration Id
flink-scala/src/main/scala/org/apache/flink/api/scala/ClosureCleaner.scala (3 lines):
- line 264: // TODO: clean all inner closures first. This requires us to find the inner objects.
- line 265: // TODO: cache outerClasses / innerClasses / accessedFields
- line 582: // TODO: Recursively find inner closures that we indirectly reference, e.g.
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/CatalogTableSchemaResolver.java (2 lines):
- line 47: // TODO remove this once FLINK-18180 is finished
- line 64: // TODO: [FLINK-14473] we only support top-level rowtime attribute right now
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkSemiAntiJoinJoinTransposeRule.java (2 lines):
- line 91: // TODO support other join type
- line 106: // TODO currently we does not handle this
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/AggFunctionFactory.scala (2 lines):
- line 87: // TODO supports ApproximateCountDistinctAggFunction and CountDistinctAggFunction
- line 120: // TODO supports SqlCardinalityCountAggFunction
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/stream/MiniBatchIntervalInferRule.scala (2 lines):
- line 71: // TODO introduce mini-batch window aggregate later
- line 131: // TODO: if it is ProcTime mode, we also append a minibatch node for now.
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/util/RexProgramExtractor.scala (2 lines):
- line 244: // TODO we cast to planner expression as a temporary solution to keep the old interfaces
- line 279: // TODO we assume only planner expression as a temporary solution to keep the old interfaces
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/FlinkRelMdUtil.scala (2 lines):
- line 522: // TODO reuse FlinkRelMetadataQuery here
- line 546: //TODO It's hard to make sure that the normalized key's length is accurate in optimized stage.
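Several SelectivityEstimator entries above (lines 413 and 624-659) concern folding the column's min/max bounds into comparison predicates. As a rough illustration of the interval-based estimate those TODOs refer to, here is a sketch under a uniform-distribution assumption with made-up names; it is not the planner's actual code:

```java
// Sketch: selectivity of "col > v" from numeric column statistics.
final class SelectivitySketch {
    static double greaterThan(double min, double max, double v) {
        if (v < min) {
            return 1.0; // every value satisfies the predicate
        }
        if (v >= max) {
            return 0.0; // no value satisfies the predicate
        }
        // Fraction of the [min, max] interval that lies above v,
        // assuming values are uniformly distributed.
        return (max - v) / (max - min);
    }
}
```

The includeMin/includeMax TODOs note that `>=` versus `>` should nudge this estimate at the interval boundaries, which the sketch ignores.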
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamMatch.scala (2 lines):
- line 159: //TODO remove this once it is supported in CEP library
- line 166: //TODO remove this once it is supported in CEP library
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/BuiltInMethods.scala (2 lines):
- line 461: // TODO: remove if CALCITE-3199 fixed
- line 469: // TODO: remove ADD_MONTHS and SUBTRACT_MONTHS if CALCITE-3881 fixed
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/codegen/MatchCodeGenerator.scala (2 lines):
- line 806: isDistinct = false, // TODO properly set once supported in Calcite
- line 808: new util.ArrayList[Integer](), // TODO properly set once supported in Calcite
flink-table/flink-table-common/src/main/java/org/apache/flink/table/descriptors/Rowtime.java (2 lines):
- line 36: // TODO: Put these fields into RowtimeValidator once it is also ported into table-common.
- line 37: // TODO: Because these fields have polluted this API class.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/LookupJoinCodeGenerator.scala (2 lines):
- line 91: // TODO: filter all records when there is any nulls on the join key, because
- line 140: // TODO: filter all records when there is any nulls on the join key, because
flink-java/src/main/java/org/apache/flink/api/java/operators/JoinOperator.java (2 lines):
- line 843: // // TODO: Runtime support required. Each left tuple may be returned only once.
- line 859: // // TODO: Runtime support required. Each right tuple may be returned only once.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdColumnUniqueness.scala (2 lines):
- line 84: // TODO get uniqueKeys from TableSchema of TableSource
- line 513: // TODO get uniqueKeys from TableSchema of TableSource
flink-formats/flink-orc/src/main/java/org/apache/flink/orc/OrcBatchReader.java (2 lines):
- line 445: // TODO: possible improvement: reuse existing Row objects
- line 852: // TODO: possible improvement: reuse existing Row objects
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/AggCallSelectivityEstimator.scala (2 lines):
- line 83: * TODO supports more
- line 368: // TODO: It is difficult to support binary comparisons for non-numeric type
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/BinaryHashTable.java (2 lines):
- line 220: // TODO: combine key projection and build side conversion to code gen.
- line 635: // TODO do null filter in advance?
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaShuffleFetcher.java (2 lines):
- line 112: // TODO: Do we need to check the end of stream if reaching the end watermark
- line 113: // TODO: Currently, if one of the partition sends an end-of-stream signal the fetcher stops running.
flink-runtime/src/main/java/org/apache/flink/runtime/state/AbstractKeyedStateBackend.java (2 lines):
- line 300: * TODO: NOTE: This method does a lot of work caching / retrieving states just to update the namespace.
- line 364: // TODO remove this once heap-based timers are working with RocksDB incremental snapshots!
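The two AbstractKeyedStateBackend notes directly above refer to getPartitionedState re-resolving the state object on every call even when only the namespace changes. Below is a caller-side sketch of the cheaper pattern, assuming an InternalValueState handle resolved once via the backend; PerWindowSum is a made-up name:

```java
import org.apache.flink.runtime.state.internal.InternalValueState;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Resolve the keyed state once, then switch only the namespace per record
// instead of going through getPartitionedState(...) for every element.
final class PerWindowSum<K> {
    private final InternalValueState<K, TimeWindow, Long> sumState; // resolved once

    PerWindowSum(InternalValueState<K, TimeWindow, Long> sumState) {
        this.sumState = sumState;
    }

    void add(TimeWindow window, long value) throws Exception {
        sumState.setCurrentNamespace(window); // cheap namespace swap, no state lookup
        Long current = sumState.value();
        sumState.update(current == null ? value : current + value);
    }
}
```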
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBPriorityQueueSetFactory.java (2 lines):
- line 58: static final int DEFAULT_CACHES_SIZE = 128; //TODO make this configurable
- line 160: // TODO we implement the simple way of supporting the current functionality, mimicking keyed state
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/batch/BatchExecHashAggregateBase.scala (2 lines):
- line 96: // TODO use BytesHashMap.BUCKET_SIZE instead of 16
- line 98: // TODO use BytesHashMap.RECORD_EXTRA_LENGTH instead of 8
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/ClusterConfigurationInfoHeaders.java (2 lines):
- line 33: // TODO this REST path is inappropriately set due to legacy design reasons, and ideally should be '/config';
- line 34: // TODO changing it would require corresponding path changes in flink-runtime-web
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/KinesisDataFetcher.java (2 lines):
- line 534: // TODO: have this thread emit the records for tracking backpressure
- line 1011: // TODO: refresh watermark while idle
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/LongHybridHashTable.java (2 lines):
- line 47: * TODO add min max long filter and bloomFilter to spilled partition.
- line 215: // TODO MemoryManager needs to support flexible larger segment, so that the index area of the
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/FlinkSubQueryRemoveRule.scala (2 lines):
- line 64: // TODO supports ExistenceJoin
- line 160: // TODO:
flink-runtime/src/main/java/org/apache/flink/runtime/state/DefaultOperatorStateBackend.java (2 lines):
- line 90: * TODO this map should be moved to a base class once we have proper hierarchy for the operator state backends.
- line 262: // TODO with eager registration in place, these checks should be moved to restore()
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java (2 lines):
- line 630: * TODO: If asynchronous registration is needed in the future, use callbacks to access {@link Execution#producedPartitions}.
- line 1463: // TODO For some tests this could be a problem when querying too early if all resources were released
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecGroupWindowAggregateBase.scala (2 lines):
- line 280: // TODO: EventTimeTumblingGroupWindow should sort the stream on event time
- line 301: // TODO: EventTimeTumblingGroupWindow should sort the stream on event time
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/batch/BatchExecExchange.scala (2 lines):
- line 63: // TODO reuse PartitionTransformation
- line 181: // TODO Eliminate duplicate keys
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBMapState.java (2 lines):
- line 283: //TODO make KvStateSerializer key-group aware to save this round trip and key-group computation
- line 672: // TODO: optimization here could be to work with slices and not byte arrays
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/ScalarOperatorGens.scala (2 lines):
- line 995: // TODO: GenericType with Date/Time/Timestamp -> String would call toString implicitly
- line 2301: // TODO: Create a wrapper layer that handles type conversion between numeric.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/JoinDependentConditionDerivationRule.scala (2 lines):
- line 62: // TODO supports more join type
- line 104: // TODO Consider whether it is worth doing a filter if we have histogram.
flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedBackendSerializationProxy.java (2 lines):
- line 61: //TODO allow for more (user defined) compression formats + backwards compatibility story.
- line 65: // TODO the keySerializer field should be removed, once all serializers have the restoreSerializer() method implemented
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/functions/SqlFunctionUtils.java (2 lines):
- line 748: // TODO: refactor this to use jackson ?
- line 1122: /** TODO: remove addMonths and subtractMonths if CALCITE-3881 fixed.
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/PartitionLoader.java (2 lines):
- line 46: * TODO: src and dest may be on different FS.
- line 98: // TODO: We need move to trash when auto-purge is false.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/RelExplainUtil.scala (2 lines):
- line 157: // TODO Refactor local&global aggregate name
- line 339: // TODO output local/global agg call names like Partial_XXX, Final_XXX
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/RocksDBFullRestoreOperation.java (2 lines):
- line 215: //TODO this could be aware of keyGroupPrefixBytes and write only one byte if possible
- line 227: //TODO this could be aware of keyGroupPrefixBytes and write only one byte if possible
flink-python/src/main/java/org/apache/beam/runners/fnexecution/state/GrpcStateService.java (2 lines):
- line 106: // TODO: Abort in-flight state requests. Flag this processBundleInstructionId as a fail.
- line 115: * TODO: Handle when the client indicates completion or an error on the inbound stream and
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecMatch.scala (2 lines):
- line 178: //TODO remove this once it is supported in CEP library
- line 185: //TODO remove this once it is supported in CEP library
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/common/CommonLookupJoin.scala (2 lines):
- line 121: // TODO: refactor this into TableSourceTable, once legacy TableSource is removed
- line 156: // TODO: support nested lookup keys in the future,
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java (2 lines):
- line 506: // TODO refactor this after Table#execute support all kinds of changes
- line 592: // TODO replace sqlUpdate with executeSql
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/join/temporal/TemporalRowTimeJoinOperator.java (2 lines):
- line 100: * TODO: this could be OrderedMultiMap[Jlong, Row] indexed by row's timestamp, to avoid
- line 108: * TODO: having `rightState` as an OrderedMapState would allow us to avoid sorting cost
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/LongHashJoinGenerator.scala (2 lines):
- line 61: // TODO decimal and multiKeys support.
- line 62: // TODO All HashJoinType support.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdUniqueKeys.scala (2 lines):
- line 73: // TODO get uniqueKeys from TableSchema of TableSource
- line 438: // TODO get uniqueKeys from TableSchema of TableSource
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/runtime/join/TemporalRowtimeJoin.scala (2 lines):
- line 96: * TODO: this could be OrderedMultiMap[Jlong, Row] indexed by row's timestamp, to avoid
- line 104: * TODO: having `rightState` as an OrderedMapState would allow us to avoid sorting cost
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/HiveGenericUDAF.java (2 lines):
- line 124: // TODO: investigate whether this has impact on Flink streaming job with windows
- line 140: * TODO: re-evaluate how this will fit into Flink's new type inference and udf systemß
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/expressions/converter/ExpressionConverter.java (2 lines):
- line 141: // TODO planner supports only milliseconds precision
- line 149: // TODO type factory strips the precision, for literals we can be more lenient already
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinator.java (2 lines):
- line 119: // TODO currently we use commit vertices to receive "abort checkpoint" messages.
- line 1272: // Recover the checkpoints, TODO this could be done only when there is a new leader, not on each recovery
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/StringCallGen.scala (1 line):
- line 43: * TODO Need to rewrite most of the methods here, calculated directly on the StringData
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/NettyMessage.java (1 line):
- line 573: // TODO Directly serialize to Netty's buffer
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/EqualiserCodeGenerator.scala (1 line):
- line 59: // TODO merge ScalarOperatorGens.generateEquals.
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/codegen/calls/FunctionGenerator.scala (1 line):
- line 681: // TODO: fixme if CALCITE-3199 fixed, use BuiltInMethod.UNIX_DATE_CEIL
flink-runtime/src/main/java/org/apache/flink/runtime/state/SharedStateRegistry.java (1 line):
- line 202: // TODO This is a temporary fix for a problem during ZooKeeperCompletedCheckpointStore#shutdown:
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/over/RowTimeRangeBoundedPrecedingFunction.java (1 line):
- line 175: // TODO Use timer with namespace to distinguish timers
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/functions/utils/UserDefinedFunctionUtils.scala (1 line):
- line 86: // TODO it is not a good way to check singleton. Maybe improve it further.
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectCoordinationResponse.java (1 line):
- line 74: // TODO the following two methods might be not so efficient
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliOptionsParser.java (1 line):
- line 233: // TODO enable this once gateway mode is in place
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/AbstractRocksDBState.java (1 line):
- line 129: //TODO make KvStateSerializer key-group aware to save this round trip and key-group computation
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PipelinedSubpartition.java (1 line):
- line 130: // TODO: use readView.notifyPriorityEvent for local channels
flink-runtime/src/main/java/org/apache/flink/runtime/state/StatePartitionStreamProvider.java (1 line):
- line 31: * TODO use bounded stream that fail fast if the limit is exceeded on corrupted reads.
flink-formats/flink-csv/src/main/java/org/apache/flink/formats/csv/CsvRowDataDeserializationSchema.java (1 line):
- line 388: // TODO: FLINK-17525 support millisecond and nanosecond
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTableUtil.java (1 line):
- line 381: // TODO: [FLINK-12398] Support partitioned view in catalog API
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/DerbyDialect.java (1 line):
- line 94: // TODO: We can't convert BINARY data type to
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/restore/AbstractRocksDBRestoreOperation.java (1 line):
- line 173: // TODO with eager state registration in place, check here for serializer migration strategies
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/flip1/RestartPipelinedRegionFailoverStrategy.java (1 line):
- line 108: // TODO: show the task name in the log
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/metrics/MetricsAggregationParameter.java (1 line):
- line 28: * TODO: add javadoc.
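The TemporalRowTimeJoinOperator and TemporalRowtimeJoin entries earlier in this list describe the same idea: keep the right side indexed by event timestamp so that the version valid at a probe time is a floor lookup instead of a sort. A minimal illustration with java.util.TreeMap follows; RightSideIndex and its Row type parameter are made up, and the real operator would keep this data in keyed state:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of an ordered multimap keyed by row timestamp.
final class RightSideIndex<Row> {
    private final TreeMap<Long, List<Row>> rowsByTimestamp = new TreeMap<>();

    void put(long timestamp, Row row) {
        rowsByTimestamp.computeIfAbsent(timestamp, t -> new ArrayList<>()).add(row);
    }

    /** Rows of the latest version with timestamp <= probeTime, or null if none. */
    List<Row> versionAt(long probeTime) {
        Map.Entry<Long, List<Row>> entry = rowsByTimestamp.floorEntry(probeTime);
        return entry == null ? null : entry.getValue();
    }

    /** Drop versions superseded by a newer version at or below the watermark. */
    void pruneBelow(long watermark) {
        Long keep = rowsByTimestamp.floorKey(watermark);
        if (keep != null) {
            rowsByTimestamp.headMap(keep, false).clear();
        }
    }
}
```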
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecSortLimit.scala (1 line):
- line 176: // TODO Use UnaryUpdateTopNFunction after SortedMapState is merged
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBMemoryConfiguration.java (1 line):
- line 168: // TODO change the formula once FLINK-15532 resolved.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/stream/StreamExecIntervalJoinRule.scala (1 line):
- line 53: // TODO support SEMI/ANTI join
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/ColumnIntervalUtil.scala (1 line):
- line 173: // TODO add more case
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBOperationUtils.java (1 line):
- line 107: // TODO We would remove this method once we bump RocksDB version larger than 6.2.2.
flink-core/src/main/java/org/apache/flink/api/common/io/BinaryInputFormat.java (1 line):
- line 221: // TODO: seek not supported by compressed streams. Will throw exception
flink-connectors/flink-connector-elasticsearch-base/src/main/java/org/apache/flink/streaming/connectors/elasticsearch/table/IndexGeneratorFactory.java (1 line):
- line 109: // TODO we can possibly optimize it to use the nullability of the field
flink-runtime/src/main/java/org/apache/flink/runtime/io/disk/iomanager/SynchronousBufferFileReader.java (1 line):
- line 31: * TODO Refactor I/O manager setup and refactor this into it
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/over/frame/OffsetOverFrame.java (1 line):
- line 106: // TODO refactor it.
flink-java/src/main/java/org/apache/flink/api/java/operators/translation/PlanProjectOperator.java (1 line):
- line 68: // TODO We should use code generation for this.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/RexNodeExtractor.scala (1 line):
- line 339: // TODO support SqlTrimFunction.Flag
flink-core/src/main/java/org/apache/flink/core/memory/DataInputDeserializer.java (1 line):
- line 74: // TODO: FLINK-8585 handle readonly and other non array based buffers more efficiently without data copy
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunction.java (1 line):
- line 250: // TODO this implementation is not very effective,
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/utils/OperationConverterUtils.java (1 line):
- line 164: // TODO: handle watermark and constraints
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/sql/FlinkSqlOperatorTable.java (1 line):
- line 678: // TODO: the return type of TO_TIMESTAMP should be TIMESTAMP(9)
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/join/lookup/AsyncLookupJoinRunner.java (1 line):
- line 176: * TODO: code generate a whole JoinedRowResultFuture in the future
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java (1 line):
- line 333: // TODO should add a validation, while StreamTableSource is in flink-table-api-java-bridge module now
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/BaseHybridHashTable.java (1 line):
- line 162: //TODO: read compression config from configuration
flink-table/flink-table-planner/src/main/java/org/apache/calcite/sql2rel/AuxiliaryConverter.java (1 line):
- line 62: // case SESSION_END: // TODO: ?
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/expressions/resolver/rules/ResolveCallByArgumentsRule.java (1 line):
- line 152: // TODO support the new type system with ROW and STRUCTURED_TYPE
flink-python/pyflink/table/udf.py (1 line):
- line 311: # TODO: support to configure the python execution environment
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/KeyGroupPartitionedPriorityQueue.java (1 line):
- line 163: // TODO consider bulk loading the partitions and "heapify" keyGroupHeap once after all elements are inserted.
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/schema/CompositeRelDataType.scala (1 line):
- line 83: // TODO the composite type should provide the information if subtypes are nullable
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/FlinkKafkaInternalProducer.java (1 line):
- line 69: // TODO: remove the workaround after Kafka dependency is bumped to 2.3.0+
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/FunctionGenerator.scala (1 line):
- line 453: // TODO: fixme if CALCITE-3199 fixed
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/FunctionCatalog.java (1 line):
- line 653: Thread.currentThread().getContextClassLoader(), // TODO use classloader of catalog manager in the future
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/FunctionCodeGenerator.scala (1 line):
- line 122: // TODO more functions
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/SetOpRewriteUtil.scala (1 line):
- line 96: // TODO use relBuilder.functionScan() once we remove TableSqlFunction
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/InternalTimeServiceManager.java (1 line):
- line 164: //TODO all of this can be removed once heap-based timers are integrated with RocksDB incremental snapshots
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecHashJoinRule.scala (1 line):
- line 71: // TODO use shuffle hash join if isBroadcast is true and isBroadcastHashJoinEnabled is false ?
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecWindowAggregateRule.scala (1 line):
- line 158: // TODO aggregate include projection now, so do not provide new trait will be safe
flink-core/src/main/java/org/apache/flink/core/execution/DefaultExecutorServiceLoader.java (1 line):
- line 44: // TODO: This code is almost identical to the ClusterClientServiceLoader and its default implementation.
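The KeyGroupPartitionedPriorityQueue entry above suggests bulk loading followed by a single heapify. The asymptotic point can be shown with java.util.PriorityQueue, whose collection constructor already performs the O(n) bottom-up build; HeapLoadSketch is a made-up name:

```java
import java.util.List;
import java.util.PriorityQueue;

final class HeapLoadSketch {
    // O(n): one bottom-up heapify over all elements at once.
    static <T extends Comparable<T>> PriorityQueue<T> bulkLoad(List<T> elements) {
        return new PriorityQueue<>(elements);
    }

    // O(n log n): each offer sifts the new element up individually.
    static <T extends Comparable<T>> PriorityQueue<T> incrementalLoad(List<T> elements) {
        PriorityQueue<T> heap = new PriorityQueue<>();
        for (T element : elements) {
            heap.offer(element);
        }
        return heap;
    }
}
```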
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/rules/logical/PushFilterIntoTableSourceScanRule.scala (1 line):
- line 106: // TODO we cast to planner expressions as a temporary solution to keep the old interfaces
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/StreamMultipleInputProcessor.java (1 line):
- line 214: // TODO: isAvailable() can be a costly operation (checking volatile). If one of
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/UpdatingPlanChecker.scala (1 line):
- line 35: // TODO UpsertStreamTableSink setKeyFields interface should be Array[Array[String]]
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectResultFetcher.java (1 line):
- line 230: // TODO a more proper retry strategy?
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/agg/ImperativeAggCodeGen.scala (1 line):
- line 174: // TODO handle accumulate has primitive parameters
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackendBuilder.java (1 line):
- line 449: // TODO eventually we might want to separate savepoint and snapshot strategy, i.e. having 2 strategies.
flink-runtime/src/main/java/org/apache/flink/runtime/state/NonClosingCheckpointOutputStream.java (1 line):
- line 67: // TODO if we want to support async writes, this call could trigger a callback to the snapshot context that a handle is available.
flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateBackend.java (1 line):
- line 98: * TODO: NOTE: This method does a lot of work caching / retrieving states just to update the namespace.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdDistribution.scala (1 line):
- line 66: //FIXME transmit one possible distribution.
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java (1 line):
- line 847: // TODO: server use user main method to generate job graph
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/context/ExecutionContext.java (1 line):
- line 30: // TODO add create state method.
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/utils/HiveAggSqlFunction.java (1 line):
- line 46: * @deprecated TODO hack code, its logical should be integrated to AggSqlFunction
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/MySQLDialect.java (1 line):
- line 115: // TODO: We can't convert BINARY data type to
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapPriorityQueue.java (1 line):
- line 174: // TODO implement shrinking as well?
flink-core/src/main/java/org/apache/flink/core/fs/local/LocalRecoverableFsDataOutputStream.java (1 line):
- line 189: // TODO how to handle this?
flink-yarn/src/main/java/org/apache/flink/yarn/YarnResourceManager.java (1 line):
- line 204: //TODO: change akka address to tcp host and port, the getAddress() interface should return a standard tcp address
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/JoinUtil.scala (1 line):
- line 107: // TODO create NonEquiJoinInfo directly
flink-core/src/main/java/org/apache/flink/core/plugin/DirectoryBasedPluginFinder.java (1 line):
- line 83: //TODO: This class could be extended to parse exclude-pattern from a optional text files in the plugin directories.
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/aggfunctions/MaxAggFunction.java (1 line):
- line 75: // TODO FLINK-12295, ignore exception now
flink-jepsen/src/jepsen/flink/db.clj (1 line):
- line 69: ;; TODO: write log4j.properties properly
flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableFsDataOutputStream.java (1 line):
- line 305: // TODO how to handle this?
flink-core/src/main/java/org/apache/flink/core/memory/MemorySegmentFactory.java (1 line):
- line 146: // TODO: this error handling can be removed in future,
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamConfig.java (1 line):
- line 485: //TODO: could this logic be moved to the user of #setTransitiveChainedTaskConfigs() ?
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/read/HiveTableInputFormat.java (1 line):
- line 311: //TODO: we should consider how to calculate the splits according to minNumSplits in the future.
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/write/HiveWriterFactory.java (1 line):
- line 180: // TODO: support partition properties, for now assume they're same as table properties
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/DataStreamQueryOperation.java (1 line):
- line 53: // TODO remove this while TableSchema supports fieldNullables
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer.java (1 line):
- line 1266: // TODO: somehow merge metrics from all active producers?
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/FlinkRexUtil.scala (1 line):
- line 175: // TODO use more efficient solution to get number of RexCall in CNF node
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecRank.scala (1 line):
- line 184: // TODO Use UnaryUpdateTopNFunction after SortedMapState is merged
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/typeutils/LegacyLocalDateTimeTypeInfo.java (1 line):
- line 35: * TODO: https://issues.apache.org/jira/browse/FLINK-14927
flink-runtime/src/main/java/org/apache/flink/runtime/blob/TransientBlobCache.java (1 line):
- line 45: * TODO: make this truly transient by returning file streams to a local copy with the remote
flink-core/src/main/java/org/apache/flink/api/common/typeutils/ParameterlessTypeSerializerConfig.java (1 line):
- line 44: * TODO we might change this to a proper serialization format class in the future
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorFactoryUtil.java (1 line):
- line 69: // TODO: what to do with ProcessingTimeServiceAware?
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java (1 line):
- line 1158: // TODO remove the catch block if we align the schematics to not fail global within the restarter.
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/typeutils/RowDataSerializer.java (1 line):
- line 175: * TODO modify it to code gen.
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/catalog/CatalogSchemaTable.java (1 line):
- line 139: // TODO: Fix FLINK-14844.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/typeutils/LegacyDataViewUtils.scala (1 line):
- line 136: // TODO supports SortedMapView
flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java (1 line):
- line 305: // TODO The job-submission web interface passes empty args (and thus empty
flink-runtime/src/main/java/org/apache/flink/runtime/webmonitor/WebMonitorEndpoint.java (1 line):
- line 643: // TODO: Remove once the Yarn proxy can forward all REST verbs
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecLimitRule.scala (1 line):
- line 84: rexBuilder.makeLiteral(limit, intType, true), // TODO use Long type for limit ?
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecIntervalJoin.scala (1 line):
- line 78: // TODO remove FlinkJoinType
flink-table/flink-table-common/src/main/java/org/apache/flink/table/types/logical/utils/LogicalTypeCasts.java (1 line):
- line 489: sourceType.getClass() != targetType.getClass() || // TODO drop this line once we remove legacy types
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/over/NonBufferOverWindowOperator.java (1 line):
- line 101: // TODO Reform AggsHandleFunction.getValue instead of use JoinedRowData. Multilayer JoinedRowData is slow.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdSize.scala (1 line):
- line 428: // TODO after time/date => int, timestamp => long, this estimate value should update
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBKeyedStateBackend.java (1 line):
- line 694: //TODO maybe filterOrTransform only for k/v states
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/types/PlannerTypeUtils.java (1 line):
- line 152: // TODO: add other precision types here in the future
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/rank/AbstractTopNFunction.java (1 line):
- line 99: // TODO support RANK and DENSE_RANK
flink-formats/flink-orc-nohive/src/main/java/org/apache/flink/orc/nohive/shim/OrcNoHiveShim.java (1 line):
- line 73: // TODO configure filters
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/operations/RichTableSourceQueryOperation.java (1 line):
- line 36: * TODO this class should be deleted after unique key in TableSchema is ready
flink-java/src/main/java/org/apache/flink/api/java/io/TypeSerializerInputFormat.java (1 line):
- line 46: // TODO: fix this shit
flink-formats/flink-hadoop-bulk/src/main/java/org/apache/flink/formats/hadoop/bulk/committer/HadoopRenameFileCommitter.java (1 line):
- line 94: // TODO: in the future we may also need to check if the target file exists.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/batch/BatchExecOverAggregate.scala (1 line):
- line 373: //TODO just replace comparator to equaliser
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/codegen/FunctionCodeGenerator.scala (1 line):
- line 176: // TODO more functions
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/sort/SortCodeGenerator.scala (1 line):
- line 445: // TODO: support normalize key for non-compact timestamp
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/common/CommonCalc.scala (1 line):
- line 60: // TODO use inputRowCnt to compute cpu cost
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/catalog/DataTypeFactoryImpl.java (1 line):
- line 154: // TODO validate implementation class of structured types when converting from LogicalType to DataType
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/CorrelateUtil.scala (1 line):
- line 46: // TODO: add case for pattern that we need a RexProgram merge.
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/shuffle/FlinkKafkaShuffle.java (1 line):
- line 339: // TODO: consider situations where numberOfPartitions != consumerParallelism
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/ExecNodePlanDumper.scala (1 line):
- line 162: // TODO refactor this part of code
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdModifiedMonotonicity.scala (1 line):
- line 499: // TODO supports temporal table function join
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/physical/batch/BatchExecSortAggRule.scala (1 line):
- line 83: // TODO aggregate include projection now, so do not provide new trait will be safe
flink-formats/flink-parquet/src/main/java/org/apache/flink/formats/parquet/vector/ParquetColumnarRowSplitReader.java (1 line):
- line 182: // TODO clip for array,map,row types.
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/aggfunctions/LeadLagAggFunction.java (1 line):
- line 93: // TODO hack, use the current input reset the buffer value.
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/plan/rules/logical/FlinkFilterJoinRule.java (1 line):
- line 156: // TODO - add logic to derive additional filters. E.g., from
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/PostgresDialect.java (1 line):
- line 115: // TODO: We can't convert BINARY data type to
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/codegen/calls/BuiltInMethods.scala (1 line):
- line 199: // TODO: remove if CALCITE-3199 fixed
flink-runtime/src/main/java/org/apache/flink/runtime/deployment/TaskDeploymentDescriptorFactory.java (1 line):
- line 110: // TODO Refactor after removing the consumers from the intermediate result partitions
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/agg/batch/HashWindowCodeGenerator.scala (1 line):
- line 482: // TODO refine this. Is it possible to reuse grouping key projection?
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/graph/StreamingJobGraphGenerator.java (1 line):
- line 455: // TODO: inherit InputDependencyConstraint from the head operator
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapSnapshotStrategy.java (1 line):
- line 135: // TODO: this code assumes that writing a serializer is threadsafe, we should support to
flink-runtime/src/main/java/org/apache/flink/runtime/operators/sort/AbstractMergeIterator.java (1 line):
- line 145: // TODO: Decide which side to spill and which to block!
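The HeapSnapshotStrategy note just above (and the matching RocksFullSnapshotStrategy entry at the top of this list) flags that a serializer may be written from an asynchronous snapshot thread while the task thread still uses it. A minimal sketch of the usual mitigation, giving the snapshot thread its own instance via TypeSerializer#duplicate():

```java
import org.apache.flink.api.common.typeutils.TypeSerializer;

final class SnapshotSerializerSketch {
    // Stateful serializers are not guaranteed to be thread-safe, so an async
    // snapshot thread should work on its own copy. duplicate() returns "this"
    // for stateless serializers, so the call is cheap in the common case.
    static <T> TypeSerializer<T> forSnapshotThread(TypeSerializer<T> taskThreadSerializer) {
        return taskThreadSerializer.duplicate();
    }
}
```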
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecExchange.scala (1 line):
- line 91: // TODO Eliminate duplicate keys
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/serialization/KinesisDeserializationSchema.java (1 line):
- line 73: // TODO FLINK-4194 ADD SUPPORT FOR boolean isEndOfStream(T nextElement);
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/hashtable/LongHashPartition.java (1 line):
- line 304: // TODO test Conflict resolution:
flink-runtime/src/main/java/org/apache/flink/runtime/minicluster/MiniCluster.java (1 line):
- line 988: // TODO: That we still have to call something like this is a crime against humanity
flink-table/flink-table-common/src/main/java/org/apache/flink/table/dataview/NullSerializer.java (1 line):
- line 36: // TODO move this class to org.apache.flink.table.runtime.typeutils
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/rules/logical/FlinkLogicalRankRule.scala (1 line):
- line 70: // TODO use the field name specified by user
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/batch/BatchExecSortMergeJoin.scala (1 line):
- line 188: * TODO two input operator chain will return different value.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/common/CommonPhysicalJoin.scala (1 line):
- line 70: // TODO remove FlinkJoinType
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java (1 line):
- line 271: // TODO should keep `explain xx` ?
flink-python/src/main/java/org/apache/flink/table/runtime/typeutils/serializers/python/RowDataSerializer.java (1 line):
- line 74: // TODO: support RowData natively in Python, then we can eliminate the redundant serialize/deserialize
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/HiveFunction.java (1 line):
- line 26: * TODO: Note: this is only a temporary interface for workaround when Flink type system and udf system
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkProjectJoinTransposeRule.java (1 line):
- line 84: return; // TODO: support SEMI/ANTI join later
flink-runtime/src/main/java/org/apache/flink/runtime/state/KeyedStateCheckpointOutputStream.java (1 line):
- line 56: // TODO if we want to support async writes, this call could trigger a callback to the snapshot context that a handle is available.
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSink.java (1 line):
- line 303: // TODO: may append something more meaningful than a timestamp, like query ID
flink-runtime/src/main/java/org/apache/flink/runtime/state/StateSnapshotRestore.java (1 line):
- line 27: * TODO find better name?
flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java (1 line):
- line 188: // TODO this metric is kept for compatibility purposes; should remove in the future
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/over/frame/OverWindowFrame.java (1 line):
- line 70: * TODO Maybe copy is repeated.
flink-connectors/flink-connector-hbase/src/main/java/org/apache/flink/connector/hbase/util/HBaseSerde.java (1 line):
- line 456: // TODO: support higher precision
flink-formats/flink-avro/src/main/java/org/apache/flink/formats/avro/AvroFileSystemFormatFactory.java (1 line):
- line 130: * TODO {@link AvroInputFormat} support type conversion.
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/over/BufferDataOverWindowOperator.java (1 line):
- line 122: // TODO Reform AggsHandleFunction.getValue instead of use JoinedRowData. Multilayer JoinedRowData is slow.
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdUniqueGroups.scala (1 line):
- line 307: // TODO drop some nonGroupingCols base on FlinkRelMdColumnUniqueness#areColumnsUnique(window)
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/metadata/FlinkRelMdSelectivity.scala (1 line):
- line 272: // TODO only effects BatchPhysicalRel instead of all RelNode now
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/ExpandUtil.scala (1 line):
- line 225: // TODO only need output duplicate fields for the row against 'regular' aggregates
flink-connectors/flink-connector-kafka-0.11/src/main/java/org/apache/flink/streaming/connectors/kafka/FlinkKafkaProducer011.java (1 line):
- line 1011: // TODO: somehow merge metrics from all active producers?
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/plan/nodes/dataset/DataSetWindowAggregate.scala (1 line):
- line 228: // TODO: count tumbling all window on event-time should sort all the data set
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/HeapKeyedStateBackend.java (1 line):
- line 159: // TODO we implement the simple way of supporting the current functionality, mimicking keyed state
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/batch/BatchExecSortLimit.scala (1 line):
- line 141: // TODO If input is ordered, there is no need to use the heap.
flink-table/flink-table-planner-blink/src/main/java/org/apache/calcite/sql2rel/AuxiliaryConverter.java (1 line):
- line 63: // case SESSION_END: // TODO: ?
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/RankProcessStrategy.scala (1 line):
- line 113: //FIXME choose a set of primary key
flink-connectors/flink-connector-kafka-0.10/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java (1 line):
- line 199: // TODO this metric is kept for compatibility purposes; should remove in the future
flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/TableFormatFactoryBase.java (1 line):
- line 49: // TODO drop constants once SchemaValidator has been ported to flink-table-common
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/ResultStore.java (1 line):
- line 134: // TODO cache this
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/utils/HiveScalarSqlFunction.java (1 line):
- line 45: * @deprecated TODO hack code, its logical should be integrated to ScalarSqlFunction
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/util/JobManagerWatermarkTracker.java (1 line):
- line 110: // TODO: wrap accumulator
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/filesystem/FileSystemLookupFunction.java (1 line):
- line 118: // TODO: get ExecutionConfig from context?
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkAggregateExpandDistinctAggregatesRule.java (1 line):
- line 499: // TODO supports more aggCalls (currently only supports COUNT)
flink-runtime/src/main/java/org/apache/flink/runtime/blob/TransientBlobService.java (1 line):
- line 38: * TODO: change API to not rely on local files but return {@link InputStream} objects
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamExecCalc.scala (1 line):
- line 63: // TODO deal time indicators.
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/functions/aggfunctions/MinAggFunction.java (1 line):
- line 75: // TODO FLINK-12295, ignore exception now
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/PartitionRequestQueue.java (1 line):
- line 84: // TODO This could potentially have a bad performance impact as in the
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/codegen/calls/CurrentTimePointCallGen.scala (1 line):
- line 53: // TODO CURRENT_TIMESTAMP should return TIMESTAMP WITH TIME ZONE
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/StreamOperatorStateHandler.java (1 line):
- line 266: TODO: NOTE: This method does a lot of work caching / retrieving states just to update the namespace.
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/sort/LimitOperator.java (1 line):
- line 30: * TODO support stopEarly.
flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeAnalyzer.scala (1 line):
- line 392: // TODO: use this once 2.10 is no longer supported
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/descriptors/HiveCatalogDescriptor.java (1 line):
- line 41: // TODO : set default database
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/typeutils/LegacyTimestampTypeInfo.java (1 line):
- line 35: * TODO: https://issues.apache.org/jira/browse/FLINK-14927
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/reuse/DeadlockBreakupProcessor.scala (1 line):
- line 167: // TODO create a cloned BatchExecExchange for PIPELINE output
flink-core/src/main/java/org/apache/flink/api/common/io/DelimitedInputFormat.java (1 line):
- line 369: // TODO: Add sampling for unsplittable files. Right now, only compressed text files are affected by this limitation.
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/FlinkAggregateRemoveRule.java (1 line):
- line 90: // TODO supports more AggregateCalls
flink-runtime/src/main/java/org/apache/flink/runtime/state/metainfo/StateMetaInfoSnapshotReadersWriters.java (1 line):
- line 54: * TODO this can go away after we eventually drop backwards compatibility with all versions < 5.
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java (1 line):
- line 921: // TODO: handle GenericCatalogPartition
flink-runtime/src/main/java/org/apache/flink/runtime/rest/messages/job/JobDetailsInfo.java (1 line):
- line 64: // TODO: For what do we need this???
flink-connectors/flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/AbstractDialect.java (1 line):
- line 39: // TODO: We can't convert VARBINARY(n) data type to
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala (1 line):
- line 314: // TODO: remove if CALCITE-3199 fixed
flink-python/pyflink/table/types.py (1 line):
- line 1497: # TODO: type cast (such as int -> long)
flink-table/flink-table-common/src/main/java/org/apache/flink/table/descriptors/LiteralValueValidator.java (1 line):
- line 37: * TODO The following types need to be supported next.
flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/JsonRowDeserializationSchema.java (1 line):
- line 132: // TODO make this class immutable once we drop this method
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/read/HiveMapredSplitReader.java (1 line):
- line 68: // TODO: push projection into underlying input format that supports it
flink-python/src/main/java/org/apache/flink/streaming/api/operators/python/AbstractPythonFunctionOperatorBase.java (1 line):
- line 279: // TODO enforce the memory limit of the Python worker
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/StreamTwoInputProcessor.java (1 line):
- line 303: // TODO: isAvailable() can be a costly operation (checking volatile). If one of
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/over/ProcTimeRangeBoundedPrecedingFunction.java (1 line):
- line 148: // TODO Use timer with namespace to distinguish timers
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/batch/BatchExecHashJoin.scala (1 line):
- line 123: // TODO use BinaryHashBucketArea.RECORD_BYTES instead of 8
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/typeutils/LegacyInstantTypeInfo.java (1 line):
- line 35: * TODO: https://issues.apache.org/jira/browse/FLINK-14927
flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBMemoryControllerUtils.java (1 line):
- line 90: // TODO use strict capacity limit until FLINK-15532 resolved
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/StreamTaskNetworkInput.java (1 line):
- line 203: // TODO: with checkpointedInputGate.isFinished() we might not need to support any events on this level.
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/functions/hive/util/HiveFunctionUtil.java (1 line):
- line 55: // TODO: remove this and use the original code when it's moved to accessible, dependable module
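The ProcTimeRangeBoundedPrecedingFunction entry above (and the RowTimeRangeBoundedPrecedingFunction entry earlier in this list) asks for timers that can be told apart by a namespace. Flink operators can get this from an InternalTimerService with a typed namespace; the sketch below is an assumed shape for illustration, not the over-aggregation operator's actual code:

```java
import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
import org.apache.flink.streaming.api.operators.InternalTimer;
import org.apache.flink.streaming.api.operators.InternalTimerService;
import org.apache.flink.streaming.api.operators.Triggerable;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;

// Timers registered with different namespaces stay distinct even when they
// fire at the same timestamp, which is what "timer with namespace" buys.
abstract class NamespacedTimerSketch<K> extends AbstractStreamOperator<Object>
        implements Triggerable<K, TimeWindow> {

    private transient InternalTimerService<TimeWindow> timers;

    @Override
    public void open() throws Exception {
        super.open();
        timers = getInternalTimerService("sketch-timers", new TimeWindow.Serializer(), this);
    }

    void onElement(TimeWindow window) {
        timers.registerEventTimeTimer(window, window.maxTimestamp());
    }

    @Override
    public void onEventTime(InternalTimer<K, TimeWindow> timer) {
        TimeWindow which = timer.getNamespace(); // identifies the firing timer
    }

    @Override
    public void onProcessingTime(InternalTimer<K, TimeWindow> timer) {}
}
```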