A place to collect ideas for the next version of UiMA Java core.
A nice way to see what's new on a page is to click "view change" on the line right underneath the title above. From the compare page you can progressively (with one click) compare previous versions too.
Here's a place to assemble a "spec" of what might actually be in version 3: UimaV3Spec
The UIMA project's mission statement refers in part to the OASIS "spec", which was mostly a spec about a wire format (i.e., serialization format) for UIMA, based on the XMI (XML Metadata Interchange) standard. XMI has not caught on very well. This topic is to flesh out, from a data-interchange point of view, what the important things are.
Topic is here.
There are many big-data frameworks now. UIMA has a particular slant on things to encourage component development and reuse (I'm thinking of externalization of the Type System, merging of type systems). UIMA also has its scaleout approach, and the RUTA workbench facility. This topic is where we can think about UIMA components in other frameworks (e.g. Apache Spark), or vice-versa.
Interoperability could be facilitated by more standards around REST service packaging.
Complete JSON deserialization with an eye toward being "permissive" to receive data models from other frameworks?
Support annotators that have no type system, or that have just a piece of a type system. This has two sub-ideas:
UIMAv3DynamicTypes fleshes out this discussion.
A portable Java compiler from Eclipse (ecj) and decompiling capabilities (e.g. Procyon) are appropriately licensed and could be part of the startup.
One representation only of a FS; the static fields of the class hold the typeImpl info.
Features represented directly as fields.
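A minimal sketch of what these two points could look like (all names are invented, not the actual v3 design): one Java object per Feature Structure, features as plain fields, and the type metadata held once per class rather than once per instance.

```java
// Sketch of the single-representation idea (names invented): one Java object
// per Feature Structure, its features stored directly as fields, and the
// type metadata ("typeImpl info") held once in a static field of the class.
public class Token {

    // shared type information: one copy for the whole class (illustrative)
    static final String TYPE_IMPL = "uima.example.Token";

    // features represented directly as fields
    int begin;
    int end;

    Token(int begin, int end) {
        this.begin = begin;
        this.end = end;
    }

    public static void main(String[] args) {
        Token t = new Token(0, 5);
        System.out.println(Token.TYPE_IMPL + " " + t.begin + " " + t.end);
    }
}
```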
There are use cases where JCas cover classes are not being used for some classes, yet the users define a class named identically to a JCas cover class. This is permitted in UIMA v2.
For example, you could have a class x.y.z.ConceptType which was defined as a Java enum. You could also have a UIMA type, x.y.z.ConceptType, and work with it without using JCas APIs.
One possible approach is to map the uima type name to a special java class name for these use cases so there's no collision; of course, the user would need to use the non-JCas APIs for this type.
This has one serious issue, not yet solved, illustrated by the use case:
Setting up the merged type system and generating the Java class definitions means that the previously compiled classes might need to be replaced, but existing code may already be linked against them.
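The name-mapping idea could be sketched like this (the suffix and class names are invented, and a real implementation would discover collisions by scanning the classpath rather than taking a fixed set):

```java
import java.util.Set;

// Sketch of the name-mapping idea (all names hypothetical): when a user's
// non-JCas Java class already occupies a UIMA type's name, map the UIMA
// type to an alternate generated-class name so the two never collide.
public class TypeNameMapper {

    // suffix chosen here is illustrative, not an actual UIMA convention
    static final String SUFFIX = "_UimaV3Generated";

    // names of user classes known to collide (a real implementation would
    // discover these by scanning the classpath)
    private final Set<String> userClassNames;

    TypeNameMapper(Set<String> userClassNames) {
        this.userClassNames = userClassNames;
    }

    /** Returns the Java class name to generate for a UIMA type name. */
    String javaClassNameFor(String uimaTypeName) {
        return userClassNames.contains(uimaTypeName)
                ? uimaTypeName + SUFFIX   // avoid the collision
                : uimaTypeName;           // normal case: same name
    }

    public static void main(String[] args) {
        TypeNameMapper m = new TypeNameMapper(Set.of("x.y.z.ConceptType"));
        System.out.println(m.javaClassNameFor("x.y.z.ConceptType"));
        System.out.println(m.javaClassNameFor("x.y.z.Token"));
    }
}
```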
One of the values for UIMA is the facilitating interoperability among components. One difficulty in this is that different components may have somewhat different data models, with different names for similar things.
This topic looks at making this better.
At startup-time, the Java classes for types could be generated from the "merge" of type information in all components (this merge is done in current UIMA, and is intended to let annotators "extend" each other's type model with additional features). Any component could run JCasGen on the types they were using, in order to get classes they could compile against. These would be ignored in favor of the generated-at-startup-time version.
This could be done either with ecj (the Eclipse compiler) at startup or via ahead-of-time code generation.
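As a feasibility sketch, the JDK's own compiler API can compile generated source at startup (a shipped ecj could stand in where only a JRE is available); everything below is illustrative, not the actual generation scheme.

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of startup-time class generation: write Java source for a
// type derived from the merged type system, compile it, and load it.
// Class and feature names here are illustrative only.
public class StartupClassGen {

    public static Class<?> generateAndLoad(String className, String source) throws Exception {
        Path dir = Files.createTempDirectory("uima-gen");
        Path srcFile = dir.resolve(className + ".java");
        Files.writeString(srcFile, source);

        // requires a JDK; bundling ecj could remove that requirement
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int rc = compiler.run(null, null, null, "-d", dir.toString(), srcFile.toString());
        if (rc != 0) throw new IllegalStateException("compilation failed");

        URLClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() });
        return loader.loadClass(className);
    }

    public static void main(String[] args) throws Exception {
        // in a real setup this source text would come from the merged type system
        String src = "public class Token { public int begin; public int end; }";
        Class<?> tokenClass = generateAndLoad("Token", src);
        System.out.println(tokenClass.getName());
    }
}
```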
Currently users may customize their JCas cover classes. PEAR classpath isolation allows the use case where different customizations are present in one pipeline. The current implementation supports this, and switches the set of JCas cover classes as Pear boundaries are crossed. The idea of a Feature Structure being an instance of its cover class breaks down when multiple definitions of this exist. Some ideas for fixing this.
There are two approaches - more dynamic and less dynamic.
This would require parallel implementations of many of the internal data structures (e.g., indexes), which come at a cost, so this should be configurable, or better yet, automatically managed.
We could even consider implementing parallel capable versions of some internal UIMA Types (Lists, arrays, and Maps if we add that).
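A rough sketch of what a parallel-capable index might build on, using a lock-free sorted set from java.util.concurrent (all names invented; real entries would be Feature Structures, not bare spans):

```java
import java.util.Comparator;
import java.util.concurrent.ConcurrentSkipListSet;

// Sketch of a parallel-capable sorted annotation index: a lock-free
// ConcurrentSkipListSet ordered by (begin, end) stands in for UIMA's
// sorted index, allowing concurrent writers without external locking.
public class ConcurrentAnnotationIndex {

    static class Span {
        final int begin, end;
        Span(int begin, int end) { this.begin = begin; this.end = end; }
    }

    private final ConcurrentSkipListSet<Span> index = new ConcurrentSkipListSet<>(
            Comparator.<Span>comparingInt(s -> s.begin).thenComparingInt(s -> s.end));

    void add(int begin, int end) { index.add(new Span(begin, end)); }

    int size() { return index.size(); }

    int firstBegin() { return index.first().begin; }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentAnnotationIndex idx = new ConcurrentAnnotationIndex();
        // two writers adding concurrently, with no external locking
        Thread a = new Thread(() -> { for (int i = 0; i < 100; i++) idx.add(i, i + 1); });
        Thread b = new Thread(() -> { for (int i = 100; i < 200; i++) idx.add(i, i + 1); });
        a.start(); b.start(); a.join(); b.join();
        System.out.println(idx.size() + " " + idx.firstBegin());
    }
}
```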
Other big-data frameworks typically have approaches to type systems that use user-defined Java types, and allow any kind of Java object in the fields. There are newer kinds of serialization/deserialization that work for all kinds of Java objects but are more efficient than Java reflection-based approaches (e.g., Kryo, used by Spark).
Users have wanted these kinds of objects; some implementations I've seen have tried to implement Sets using a combination of HashSet and UIMA FSLists, duplicating the data and keeping things in sync, which was very inefficient. More on this topic here.
Support parallel running of pipeline components.
Careful trade-off: parallel versions can end up slower due to synchronization and cache-line interference. The key is to separate the things being updated.
Consider special index support for this
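A minimal sketch of the parallel-components idea, with plain functions standing in for annotators and a thread pool doing the scheduling (all names invented; in UIMA the components would work on a CAS, and only components with disjoint updates could safely run this way):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.UnaryOperator;

// Sketch of running independent pipeline components in parallel: components
// that do not update shared state can run concurrently on the same input.
public class ParallelPipeline {

    static List<String> runParallel(String doc, List<UnaryOperator<String>> components)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(components.size());
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (UnaryOperator<String> c : components) {
                futures.add(pool.submit(() -> c.apply(doc)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> f : futures) {
                results.add(f.get());  // preserves component order
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> out = runParallel("some text",
                List.of(s -> "tokens:" + s.split(" ").length,
                        s -> "chars:" + s.length()));
        System.out.println(out);
    }
}
```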
Iterating over FSs, an alternative: have a generator of FSs, and process them with the Java stream APIs.
(Unlikely) Making the element of the "stream" be a new CAS - replacement for CAS Multipliers. Seems like the wrong granularity... Maybe best to let Java evolve this for a few more releases.
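The generator-plus-streams idea might look roughly like this (Annotation here is a stand-in class, not the UIMA one):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch of stream-style iteration over Feature Structures: a generator
// produces FSs which are filtered and transformed with the Java stream
// APIs instead of an explicit FSIterator loop.
public class FsStreams {

    static class Annotation {
        final String type; final int begin; final int end;
        Annotation(String type, int begin, int end) {
            this.type = type; this.begin = begin; this.end = end;
        }
    }

    /** Collects the begin offsets of all "Token" annotations in the stream. */
    static List<Integer> tokenBegins(Stream<Annotation> fsStream) {
        return fsStream
                .filter(a -> a.type.equals("Token"))
                .map(a -> a.begin)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(tokenBegins(Stream.of(
                new Annotation("Token", 0, 4),
                new Annotation("Sentence", 0, 20),
                new Annotation("Token", 5, 9))));
    }
}
```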
Some new capabilities may benefit from specifying boundary actions. Some possible actions:
Ability to specify "capture" of intermediate CAS results at specific points in the pipeline, integrated with JMX (Some of this has already been done as part of UIMA-AS, but should be put into the core)
Custom UIMA JMX console
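A small sketch of what the JMX integration involves, using only the standard javax.management APIs; the MBean, its attribute, and the object name are invented examples, not an existing UIMA interface:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch of exposing pipeline state through JMX so that jconsole (or a
// custom UIMA console) can observe it. Names are illustrative only.
public class PipelineJmx {

    public interface PipelineStatsMBean {
        long getDocsProcessed();
    }

    public static class PipelineStats implements PipelineStatsMBean {
        private volatile long docsProcessed;
        public long getDocsProcessed() { return docsProcessed; }
        public void increment() { docsProcessed++; }
    }

    /** Registers the stats bean, updates it, and reads the attribute back via JMX. */
    public static Object registerAndRead(String objectName) throws Exception {
        PipelineStats stats = new PipelineStats();
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName(objectName);
        mbs.registerMBean(stats, name);

        stats.increment();  // a pipeline boundary action would do this per CAS
        return mbs.getAttribute(name, "DocsProcessed");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(registerAndRead("org.example.uima:type=PipelineStats"));
    }
}
```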
Version 2.7.0 added JSON serialization, but is missing deserialization - add that. Also not completed are whatever enhancements are needed to permit flexible interoperability with UIMA services that implement partially compatible type systems, and Delta CAS support (for sending back to a client just the changes made to a CAS that was sent from that client).
Adding support for "dynamic" typing - see paper: http://aclweb.org/anthology/W14-5209. An interesting thought is to add this without giving up the compile-time speed and checking advantages of strong static typing. The result would be some kind of hybrid, with more performance available for fully specified static definitions.
Different components should be easily combinable even if they have different type systems, if a mapping can be found and specified. For more complex mappings, custom adapters could be supported?
A user wanting to combine X with Y should be able to look up the adapter on the web and download it, or at least 90% of the work pre-done. It should be easy for users to share this information on the Web.
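One low-tech form such a shareable adapter could take is a declarative name-mapping table; a sketch with invented type names:

```java
import java.util.Map;

// Sketch of a shareable type-system mapping: a declarative table mapping
// one component's type names onto another's. In practice the table could
// be published on the web and downloaded; the names here are invented.
public class TypeSystemAdapter {

    // mapping from component X's names to component Y's names
    private final Map<String, String> typeMap;

    TypeSystemAdapter(Map<String, String> typeMap) { this.typeMap = typeMap; }

    String adapt(String typeName) {
        // unmapped names pass through unchanged
        return typeMap.getOrDefault(typeName, typeName);
    }

    public static void main(String[] args) {
        TypeSystemAdapter adapter = new TypeSystemAdapter(Map.of(
                "org.x.Token", "org.y.lexical.Token",
                "org.x.Sentence", "org.y.Sentence"));
        System.out.println(adapter.adapt("org.x.Token"));
        System.out.println(adapter.adapt("org.x.Paragraph")); // no mapping
    }
}
```

More complex mappings (feature renames, value conversions) would need custom adapter code rather than a simple table, which is where the supported-adapter idea above comes in.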
Google, Bing, and Yahoo have standardized on microformats for semantic HTML web markup, and have a big schema defined (see http://schema.org/); some kind of integration that lets users easily make use of this information would be nice. Ideally this could be used without any download/copying, by referencing the (gradually evolving/changing) web site that specifies these things. For instance, see the entry for "place": http://schema.org/Place
There's a plus and a minus for this - plus: we get better-tested, better function (perhaps), and better performance for some typical capabilities (e.g., parsing XML to/from Java objects). Minus: it makes the code depend on these other packages. Also, if it's working fine now, there's little motivation to invest in changing it.
Some areas to consider:
XML parsing and writing for descriptors - use JAXB or Jackson (already used for JSON support)
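For contrast, a sketch of the kind of hand-rolled DOM navigation that data binding via JAXB or Jackson would replace (the descriptor fragment is simplified and invented, not the real descriptor schema):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: element-by-element DOM parsing of a (simplified, invented)
// descriptor fragment. Data binding would replace this kind of code
// with annotated POJOs mapped to/from the XML automatically.
public class DescriptorParse {

    static String elementText(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<analysisEngineDescription>"
                   + "<name>MyAnnotator</name>"
                   + "</analysisEngineDescription>";
        System.out.println(elementText(xml, "name"));
    }
}
```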