Initial Contribution

The contributed materials are available in Subversion here: https://svn.apache.org/repos/asf/incubator/kato/branches/import/. The code was developed mostly under Eclipse, so it is organised as Eclipse projects, some of which are plugins or features. Don't worry if you are not familiar with these; for the most part all you need to know is that the code is held under the "src" directories in the projects.

I refer to "DTFJ" throughout this document so that the descriptions match the actual contribution. Any code after this initial contribution will be refactored to use the org.apache.kato.* and javax.diagnostics.* package hierarchies.

Here is a summary of each of the projects in the repository:

  • org.apache.kato.anttasks - Ant tasks to support the TCK and to build an Eclipse feature website.
  • org.apache.kato.api - The source for the DTFJ API; it consists only of interfaces.
  • org.apache.kato.builder.kato - Ant build file for building the TCK and DTFJ.
  • org.apache.kato.builder.tck - Ant build files for running the TCK and reporting the results.
  • org.apache.kato.common - Java code for reading core files from various platforms.
  • org.apache.kato.tck.harness - Code for actually executing the TCK tests.
  • org.apache.kato.tck.tests - The tests for DTFJ, which ensure correct behaviour across different DTFJ implementations.
  • org.apache.kato.tools.jdi - A JDI-DTFJ bridge that allows Java debuggers to attach to dumps using DTFJ.
  • org.apache.kato.tools.jdi.core - A partial Eclipse plugin.
  • org.apache.kato.tools.jdi.feature - An Eclipse feature to describe the JDI connector.
  • org.apache.kato.tools.katoview - An example tool that uses the DTFJ API to query the contents of a dump.

Project Descriptions

More detailed descriptions of the projects are given below, ordered by how interesting they are likely to be.

org.apache.kato.api

IBM has contributed the interfaces for its DTFJ API, which has shipped with the IBM JVMs for about three years now. It is being put forward as a basis for discussing the Apache Kato API. As there are only interfaces, there is nothing that can be run against this project on its own. The Apache Kato API should spring from this: the packages are to be renamed, and an implementation that works against the Sun HotSpot JVM will be contributed to this project. The Javadoc should be available on Hudson, once that is set up.

There is a hierarchy of packages:

  • com.ibm.dtfj.image
  • com.ibm.dtfj.runtime
  • com.ibm.dtfj.java

A tool should call an implementation of com.ibm.dtfj.image.ImageFactory to get a com.ibm.dtfj.image.Image.
From Image, there is a chain of iterators that can be navigated (see the sketch after this list):

  • com.ibm.dtfj.image.Image - represents the core file. Holds information such as the operating system and CPU architecture.
  • com.ibm.dtfj.image.ImageAddressSpace - represents an address space. Can be used to access the virtual address space.
  • com.ibm.dtfj.image.ImageProcess - represents a process in the core file (normally there is just one). Gives access to native thread information.
  • com.ibm.dtfj.runtime.ManagedRuntime - represents a managed runtime.
  • com.ibm.dtfj.java.JavaRuntime - a specialisation of ManagedRuntime that contains information about a Java virtual machine. Gives access to Java threads, heap objects, classes, etc.

This is just a short summary - please see the interfaces themselves for more information.
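To make the chain concrete, here is a minimal sketch of how a tool might walk from an ImageFactory down to a JavaRuntime and list its threads. The ImageFactory implementation class is vendor-specific and is simply passed in as an argument here; the method names used (getImage, getAddressSpaces, getProcesses, getRuntimes, getThreads, getName) come from the DTFJ interfaces, and handling of corrupt data is omitted to keep the sketch short.

```java
import java.io.File;
import java.util.Iterator;

import com.ibm.dtfj.image.Image;
import com.ibm.dtfj.image.ImageAddressSpace;
import com.ibm.dtfj.image.ImageFactory;
import com.ibm.dtfj.image.ImageProcess;
import com.ibm.dtfj.java.JavaRuntime;
import com.ibm.dtfj.java.JavaThread;
import com.ibm.dtfj.runtime.ManagedRuntime;

public class DumpWalker {

    // args[0]: ImageFactory implementation class name, args[1]: path to the core file
    public static void main(String[] args) throws Exception {
        ImageFactory factory = (ImageFactory) Class.forName(args[0]).newInstance();
        Image image = factory.getImage(new File(args[1]));

        // Image -> ImageAddressSpace -> ImageProcess -> JavaRuntime
        for (Iterator spaces = image.getAddressSpaces(); spaces.hasNext();) {
            ImageAddressSpace space = (ImageAddressSpace) spaces.next();
            for (Iterator procs = space.getProcesses(); procs.hasNext();) {
                ImageProcess proc = (ImageProcess) procs.next();
                for (Iterator runtimes = proc.getRuntimes(); runtimes.hasNext();) {
                    ManagedRuntime runtime = (ManagedRuntime) runtimes.next();
                    if (runtime instanceof JavaRuntime) {
                        JavaRuntime java = (JavaRuntime) runtime;
                        // From here a tool can reach threads, heaps, classes, monitors, ...
                        for (Iterator threads = java.getThreads(); threads.hasNext();) {
                            JavaThread thread = (JavaThread) threads.next();
                            System.out.println("Java thread: " + thread.getName());
                        }
                    }
                }
            }
        }
    }
}
```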

If you'd like to discuss the API, please do so on the kato-spec mailing list. More general discussions, for example about actually implementing it, should be addressed to the kato-dev mailing list.

org.apache.kato.common

This project contains implementations of core file readers for Linux and AIX.

org.apache.kato.tck.tests

These are a set of tests to check the behaviour of DTFJ. The tests are JUnit testcases, but the org.apache.kato.tck.harness project is needed in order to run them.

DTFJ's job is to report on the state of a JVM process after it has run, usually by reading core files.
The TCK is structured so that the harness first sets up the TCK testcases by calling their "configure*" methods, and then the JVM is forced to generate a dump.
The harness then executes the TCK testcases, using JUnit to run their test methods. This way a test can, for example, set a particular object's field to a certain value during configuration, then read the dump with DTFJ and check that DTFJ reports the field's value correctly.
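To illustrate this two-phase flow, here is a hedged sketch of what such a testcase could look like. The class name, the method names and the way the harness hands the test a JavaRuntime for the dump are assumptions made for illustration; only the configure-then-verify structure comes from the description above, and the DTFJ calls are one plausible way to locate the recorded value.

```java
import java.util.Iterator;

import com.ibm.dtfj.java.JavaClass;
import com.ibm.dtfj.java.JavaClassLoader;
import com.ibm.dtfj.java.JavaField;
import com.ibm.dtfj.java.JavaRuntime;

// Illustrative only: the real TCK base class and harness wiring live in
// org.apache.kato.tck.harness and may differ from what is assumed here.
public class ExampleStaticFieldTest {

    // A static field whose value should be captured in the dump.
    static int marker;

    // "configure*" phase: runs in the target JVM before the dump is generated.
    public void configureMarker() {
        marker = 42;
    }

    // Test phase: runs later against the dump, via DTFJ.
    public void testMarker(JavaRuntime runtime) throws Exception {
        for (Iterator loaders = runtime.getJavaClassLoaders(); loaders.hasNext();) {
            JavaClassLoader loader = (JavaClassLoader) loaders.next();
            for (Iterator classes = loader.getDefinedClasses(); classes.hasNext();) {
                JavaClass clazz = (JavaClass) classes.next();
                // The class name format can differ between implementations, so match loosely.
                if (!clazz.getName().endsWith("ExampleStaticFieldTest")) {
                    continue;
                }
                for (Iterator fields = clazz.getDeclaredFields(); fields.hasNext();) {
                    JavaField field = (JavaField) fields.next();
                    if ("marker".equals(field.getName())) {
                        // Static field read: no receiver object is supplied.
                        int recorded = field.getInt(null);
                        if (recorded != 42) {
                            throw new AssertionError("dump recorded " + recorded);
                        }
                    }
                }
            }
        }
    }
}
```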

org.apache.kato.tools.katoview

This is a tool that has been written to use the DTFJ API to query a dump. It can list things such as mapped virtual address ranges, all of the objects in the heap, all classes, monitors, stack traces, and so on. It really is just an explorer.
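To give a flavour of the kind of query it runs, the sketch below shows how a katoview-like listing of heap objects could be produced with DTFJ. This is not katoview's actual code; it assumes a JavaRuntime has already been obtained as in the earlier example and omits corrupt-data handling.

```java
import java.util.Iterator;

import com.ibm.dtfj.java.JavaClass;
import com.ibm.dtfj.java.JavaHeap;
import com.ibm.dtfj.java.JavaObject;
import com.ibm.dtfj.java.JavaRuntime;

public class HeapLister {

    // Prints one line per object in every heap of the given runtime:
    // object address, class name and object size.
    public static void listObjects(JavaRuntime runtime) throws Exception {
        for (Iterator heaps = runtime.getHeaps(); heaps.hasNext();) {
            JavaHeap heap = (JavaHeap) heaps.next();
            for (Iterator objects = heap.getObjects(); objects.hasNext();) {
                JavaObject object = (JavaObject) objects.next();
                JavaClass clazz = object.getJavaClass();
                System.out.println("0x" + Long.toHexString(object.getID().getAddress())
                        + " " + clazz.getName()
                        + " (" + object.getSize() + " bytes)");
            }
        }
    }
}
```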

org.apache.kato.tck.harness

This project is responsible for executing the tests as described above. It relies on some configuration to tell it which JVM to run and which classes to use to generate dumps and to read them. These will need to be implemented for HotSpot.

org.apache.kato.tools.jdi

This is prototype code for bridging between DTFJ and JDI. The intention is for Java debuggers to attach to this in the same manner they might attach to a running process. If this were fully working, it would be possible to attach to any dump from a JVM with a Kato implementation. This exists as a standalone program; it is possible to integrate it more closely with Eclipse, although we have not been able to contribute all of that code.

org.apache.kato.tools.jdi.core, org.apache.kato.tools.jdi.feature

These projects are Eclipse-specific. They are incomplete, but once complete they would allow the JDI connector tool to be integrated with Eclipse.

org.apache.kato.anttasks, org.apache.kato.builder.kato, org.apache.kato.builder.tck

These projects exist just to build and run DTFJ and its TCK.
