This page is used to capture FAQ questions/answers that can be moved to the website.
Welcome to the Apache Tuscany Java SCA FAQ page. Please help to keep the information on this page current.
Both IBM and Sun JDK 1.5 are known to work and are used regularly by our development community.
JDK1.4 will not work as the Tuscany SCA code base relies on some of the features of JDK1.5 such as generics and annotations.
JDK1.6 can be problematic depending on what version you are running with. The problems are usually due to bundled versions of either StAX or JAXB being in conflict with the versions we are using in Tuscany.
If you are getting errors that look like the following stack trace
Then it's related to JDK 6 shipping its own JAXB implementation. Up to JDK 6 Update 3, the JDK ships with JAX-WS 2.0 (which includes JAXB 2.0), but Tuscany requires JAXB 2.1. There are some possible solutions to this problem:
- Upgrade your JDK to 1.6.0_04 or above, which will include JAX-WS (and JAXB) 2.1
- Copy the version 2.1 jaxb-api.jar or jaxws-api.jar (you can probably find them in your local maven repo) to <JAVA_HOME>/lib/endorsed to override the API jars that ship with the JDK
- Use -Djava.endorsed.dirs=<a folder containing our JAXB jars> to override the JAXB from JDK 6.
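As a sketch of the endorsed-directory approach (the directory, jar names, classpath, and main class are placeholders; copy the jars from wherever your build has them):

```shell
# put the JAXB 2.1 API jars somewhere the JVM can treat as "endorsed"
mkdir /opt/tuscany-endorsed
cp jaxb-api-2.1.jar jaxws-api-2.1.jar /opt/tuscany-endorsed/

# the endorsed jars now take precedence over the versions bundled in JDK 6
java -Djava.endorsed.dirs=/opt/tuscany-endorsed -cp <your classpath> <your main class>
```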
If you see an error like the following
It's the same problem but related to the version of the StAX API in use. TODO - locate the specific jar that causes the problem
To build the Apache Tuscany source code that you have checked out of Subversion you need to install Maven. The build is known to work relatively reliably with Maven 2.0.5. People have had it working with later versions, but if you are encountering unpredictability in the build then give 2.0.5 a go.
If you are taking code out of the trunk of the Tuscany Subversion repository then you may have been unlucky and picked up a revision of the code where the build is broken. As trunk is where the development takes place this happens now and again although the development community tries to avoid build breaks if at all possible and tries to fix them quickly when they do happen.
There are many and various other things that can cause your build to break. It's worth checking on the mailing list that the trunk is building. Assuming that it is, we will usually ask you to do the following as a basic level set.
Stop any IDE you may have running
Check out the latest trunk revision
svn checkout https://svn.apache.org/repos/asf/incubator/tuscany/java/
or (if you already have a version of the code)
Clean all the maven projects
Remove all the sca artifacts from the local maven repository by removing (or renaming) all of the directories under
If it still doesn't work then get back on the mail list
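Assuming the Tuscany artifacts sit under the org/apache/tuscany group directory in your local repository, the clean-out step might look like:

```shell
cd ~/.m2/repository
# rename rather than delete, so the old artifacts can be restored if needed
mv org/apache/tuscany org/apache/tuscany.bak
```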
I always get a "Java heap space" error during the build while the itests are running these days. Runs fine if I build from within the itests folder but building from the top sca folder always fails.
Try setting MAVEN_OPTS="-Xmx1024m -Xms512m". You can also increase the memory options in the sca pom, in the surefire plugin configuration section.
If you have unpacked the source distribution or have checked out all of the code under the tuscany/java/sca directory in subversion then you should end up with a source directory containing something like:
You can build the source using Maven with the command
You can also build Eclipse projects for all the modules in the project using the command
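For reference, assuming you run them from the top-level sca directory, the two commands are the plain Maven build (a default goal is assumed to be configured in the top-level pom; otherwise use mvn install) and the Eclipse plugin goal:

```shell
# build all modules, running the tests
mvn

# generate .classpath and .project files for each module
mvn eclipse:eclipse
```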
This builds ".classpath" and ".project" files for each module. The easiest thing to do is import all of these generated projects. In Eclipse choose "File/Import/Existing Projects Into Workspace". From the wizard select your source directory and Eclipse should now find all of the Eclipse projects that have been generated.
If you imported all modules into Eclipse you should find that project dependencies are satisfied by reference to other Tuscany SCA projects in your workspace. This is convenient for debugging as all Tuscany SCA source is now available in your workspace.
If you're using Eclipse WTP and want to get WTP Web Projects generated for our Webapp samples you can simply pass a -Dwtpversion=1.5 option to the usual mvn eclipse:eclipse command, like this:
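Combining the option with the usual goal, the command looks like:

```shell
mvn -Dwtpversion=1.5 eclipse:eclipse
```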
The magic -Dwtpversion=1.5 option will add the WTP Web project nature to all the Eclipse projects with <packaging>war</packaging> in their Maven pom.xml. You'll then be able to add these projects to a WTP Tomcat or Geronimo Server configuration, to publish and run them straight from your Eclipse workspace.
To get past this exception, edit jre/lib/security/java.security in the IBM JDK installation and set up the security providers as follows.
The samples in the binary distribution won't build with mvn till we actually release and the artifacts get published to the live maven repository. To test things you can bypass this by setting up a mirror pointing to the release candidate maven repository. You do that by adding the following to your maven settings.xml file:
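A sketch of such a mirror entry, with placeholder id and URL (substitute the release candidate repository URL announced on the mailing list):

```xml
<settings>
  <mirrors>
    <mirror>
      <id>tuscany-rc</id>
      <name>Tuscany release candidate repository</name>
      <!-- placeholder: use the actual staging repository URL -->
      <url>http://example.org/tuscany-rc-repo</url>
      <!-- mirror everything so the RC repository is consulted first -->
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```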
The settings.xml is in a .m2 folder in your home directory, e.g. mine is at "C:\Documents and Settings\Administrator\.m2". If you don't have one then we have an example at: settings.xml
Tuscany uses the JDK logger for writing out info, warnings, etc. How much gets written out is controlled by a logging.properties file. We don't ship a file with Tuscany as we rely on the default INFO logging level that Java assumes. If you want to change the defaults then you can create (or edit) a logging.properties file in your jre/lib directory. For example, if you're using the IBM JDK you should end up with the file
You might want to go and change the logging level by setting it to FINE to get more information generally
or for getting more information printed on the console
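For example, the relevant entries in logging.properties would be along these lines (standard java.util.logging property names):

```properties
# raise the default level from INFO to FINE for more detail
.level=FINE

# let the console handler pass FINE messages through as well
java.util.logging.ConsoleHandler.level=FINE
```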
See here for some overview information about JDK logging.
There are many good articles about how to turn on Java remote debugging, for example, here's one
The long and short of it is that you need to tell the JVM to listen on a port for debug connections. For example, here the Calculator sample is being debugged.
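A sketch of such a command line (the classpath and class name are placeholders, not the sample's actual ones):

```shell
# the JVM listens for a debugger on port 8000 and suspends until one attaches
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 \
     -cp <sample classpath> calculator.CalculatorClient
```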
The important bits are the arguments starting -X. Note the address is set to port 8000. In Eclipse you can then simply open the debug dialog and create a new "Remote Java Application" profile specifying the port of 8000 to match the above command line.
Assuming that you have the Tuscany SCA source available to Eclipse you can then debug through the calculator sample and the Tuscany SCA code.
If you want to remote debug some tests running in the maven build then you can either do.
or use the following surefire option
This opens the debugger on port 5005 with suspend=y. You then run mvn as you normally would and connect Eclipse to the running test as described above.
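The surefire option referred to is maven.surefire.debug, which makes the forked test JVM listen on port 5005 with suspend=y:

```shell
mvn -Dmaven.surefire.debug test
```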
If you want to debug through a webapp running in tomcat you can set up tomcat as follows:
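One way to do this is Tomcat's built-in JPDA support, which listens on port 8000 by default:

```shell
cd $CATALINA_HOME/bin
./catalina.sh jpda start     # catalina.bat jpda start on Windows
```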
Again, from Eclipse choose to remote debug.
The short answer is that we don't know. However we have seen this occasionally on Windows and in some cases it can be traced back to some kind of interaction between Maven and other applications running on the machine. So the first thing to try is to stop all other applications that you are running and retry the Maven build and see if that helps.
When the StandardContext catches errors like this there will be additional
messages about the problem written to the logs in the Tomcat log directory,
please check to see what the errors are.
You can use the following mvn command to ignore test failures.
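The command is a normal build with the -fn ("fail never") flag; the install goal shown here is an assumption about how you usually build:

```shell
mvn -fn install
```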
This depends on what you want to do! With -fn the tests still run; with -Dmaven.test.skip=true they don't. Not running the tests makes the build much faster, but without running the tests you don't know what problems there might be with the jars that get built. A key thing is that with -fn a test failure in one module does not stop the build of other modules, but the jar for the module with the test failure does not get built, so if you want to ignore a test failure but still rebuild a module then you need to use -Dmaven.test.skip=true. On the other hand, for things like the itest modules you just want to see the test results, so it doesn't make much sense to use -Dmaven.test.skip=true, but -fn can be useful to see how good or bad the state of the code is.
If you have a component implementation that looks something like...
Then in your operation you will find that serviceReference1 and serviceReference2 are null, because SCA will only inject references into fields marked protected or public. This is true for the other injecting annotations, for example, @Callback, @ConversationId and @Context.
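As a sketch of the rule (the @Reference annotation is declared locally here as a stand-in for org.osoa.sca.annotations.Reference, and the component class and field names are hypothetical):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Modifier;

// Stand-in for org.osoa.sca.annotations.Reference, declared locally so the
// sketch is self-contained; the real annotation comes from the SCA API jar.
@Retention(RetentionPolicy.RUNTIME)
@interface Reference {}

public class InjectionRules {

    // Hypothetical component implementation with one badly and one
    // correctly declared reference field.
    static class CalculatorServiceImpl {
        @Reference
        private Object addService;        // private: Tuscany will NOT inject this
        @Reference
        protected Object subtractService; // protected: injected as expected
    }

    // Mirrors the rule the FAQ describes: only public or protected
    // fields are candidates for injection.
    static boolean isInjectable(Class<?> type, String fieldName) {
        try {
            int m = type.getDeclaredField(fieldName).getModifiers();
            return Modifier.isPublic(m) || Modifier.isProtected(m);
        } catch (NoSuchFieldException e) {
            throw new IllegalArgumentException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("addService injectable: "
                + isInjectable(CalculatorServiceImpl.class, "addService"));
        System.out.println("subtractService injectable: "
                + isInjectable(CalculatorServiceImpl.class, "subtractService"));
    }
}
```

Changing the fields from private to protected (or public) is usually all that is needed.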
Unfortunately, at the moment we only have reference binding support for SLSBs (calling an SLSB from SCA) in Tuscany. Any contribution to support SLSB service binding is welcome.
The logical type represents the data type the user thinks is flowing across a wire. This could be a Java type, an XML type, a CORBA type, whatever, depending on the /logical/ service contract defined in the assembly.
The physical type is the actual representation of that type that is flowed by the runtime. In the Java runtime this will always be a Java type (i.e. some subclass of Object). In some cases it will be the same as the logical type - e.g. when a Java component calls another Java component over a local wire using a Java interface then both logical and physical types will be the same. In many cases though they will be different - for example, if the service contract was WSDL then the logical type would be the XML type used by the WSDL.
Within the runtime the same logical type may have different physical forms. For example, the same XML document could be represented physically as a DOM, a StAX stream, an SDO, a JAXB object, or an AXIOM stream. The framework supports conversion between these different physical forms.
- What is the role of a data mediator interceptor? Can you cite an example of how mediation works say for a component A with reference R that references a service S in component B.?
The interceptor gets added by the connector. A's outbound wire and B's inbound wire describe the datatypes their implementations can support. When the wire ends are connected the connector adds the interceptor if mediation is needed.
One job of a transport binding is to convert an in-memory physical representation to a suitable set of bits on the network (aka serialization and deserialization). Rather than reinvent the different transports we reuse existing implementations such as Axis2 or RMI. As such we need to convert the physical representation on our internal wire with that used by the transport. So, for example, Axis2 only understands AXIOM so in a reference we need to convert the user's physical representation to AXIOM and in a service we need to convert the AXIOM generated by the transport into the form the user's implementation requires. The steps could be described as follows:
- A calls reference R with physical Java object X(java)
- X is placed on R's outbound wire
- data mediation converts X(java) to AXIOM object X(axiom)
- X(axiom) is placed on inbound wire for the Axis2 binding
- Axis2 binding serializes X(axiom) onto the network as XML
- Axis2 binding on the target deserializes the XML from the network to X(axiom)
- X(axiom) is placed on the outbound wire from the Axis2 binding
- data mediation converts X(axiom) to X(java) as needed by the target component
- X(java) is placed on B's inbound wire
- the target instance for B is invoked passing in X(java)
An important thing to note here is that from the fabric's perspective we are dealing with two physical wires: the wire on the client connecting the source component A to the outbound Axis2 transport and the wire on the server connecting the inbound Axis2 transport to the target component B.
From a global perspective there is one logical wire from A to B but because A and B are located on two different runtimes that logical wire gets split into two physical wires A->net and net->B.
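The mediation step itself can be pictured as an interceptor that transforms the message's physical form before handing it to the next element of the wire. This is an illustrative sketch, not Tuscany's actual interceptor API; all names are invented:

```java
import java.util.function.Function;

public class MediationSketch {

    // A minimal interceptor: takes a message, returns the response.
    interface Interceptor {
        Object invoke(Object message);
    }

    // Converts the physical form of a message before passing it on,
    // e.g. a Java object into an AXIOM-like XML form.
    static class DataBindingInterceptor implements Interceptor {
        private final Function<Object, Object> transform;
        private final Interceptor next;

        DataBindingInterceptor(Function<Object, Object> transform, Interceptor next) {
            this.transform = transform;
            this.next = next;
        }

        public Object invoke(Object message) {
            // mediate, then continue down the wire
            return next.invoke(transform.apply(message));
        }
    }

    public static void main(String[] args) {
        // Target "component B" just echoes what it receives.
        Interceptor target = message -> message;

        // Pretend conversion Java -> XML text, standing in for Java -> AXIOM.
        Interceptor wire = new DataBindingInterceptor(
                javaObject -> "<arg>" + javaObject + "</arg>", target);

        System.out.println(wire.invoke(42));  // the target sees "<arg>42</arg>"
    }
}
```

The connector's job is then simply to decide, from the data types declared at each end of the wire, whether such an interceptor needs to be inserted at all.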
The SCA Assembly Model Specification V1.00 describes this file. Any composites that are named in the sca-contribution.xml file will automatically be included in the deployable list maintained by the contribution.
If you have a contribution in a directory, say:
Then all composites, along with any other resources, under this directory will be located by the contribution service. For example, assume we have
Where sca-contribution.xml is
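A minimal sca-contribution.xml along those lines (the prefix-to-namespace mapping is an assumption about this example's target namespace):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<contribution xmlns="http://www.osoa.org/xmlns/sca/1.0"
              xmlns:mycomposite="http://mycomposite">
    <deployable composite="mycomposite:MyComposite"/>
</contribution>
```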
So in this case the contribution service will locate the file mycomposite.composite and, assuming it contains a composite called mycomposite:MyComposite, will present it as being deployable based on the information in sca-contribution.xml.
This is a Tuscany specific shorthand for defining deployable components, i.e. you won't find it in the SCA specifications.
If you have a contribution in a directory, say:
Then all components in composites under this directory will be located by the contribution service and any components in composites under the directory /META-INF/sca-deployables/ will automatically be included in the deployable list maintained by the contribution, for example
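For example, a layout along these lines (the file names are hypothetical):

```
MyContribution/
  META-INF/
    sca-deployables/
      default.composite    <- components here are automatically deployable
  mycomposite.composite    <- located, but not automatically on the deployable list
```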
In Java SCA Release 1.0, policy intents and policy sets can be defined for an SCA Domain in the definitions.xml file and specified for various SCA artifacts in assembly composites (composite, component, services, references, bindings, implementation). With respect to processing, computing applicable policies, and applying them, here is what Release 1.0 supports:
- applicable policy sets are computed only for 'binding' elements, i.e. only interaction policies are supported. The next release will also have support for 'implementation' elements
- definition of policy attachments to represent Axis2 config params. As sample representations of this, there are itests in the binding-ws-axis2 module and there is a sample (helloworld-ws-service-secure) that defines policies to enable ws-security in Axis2. The intents that are supported in the itests and sample are authentication, integrity and confidentiality
- the itests and sample do not exercise the confidentiality intent due to legal issues that Tuscany has with respect to distributing bouncycastle encryption provider jars.
- WS-Policy attachments, annotated intents and policy sets are not yet supported
- in identifying policy sets applicable to SCA elements, the XPath expression in the 'appliesTo' attribute of policy sets is not yet processed as XPath since we are sorting out some specs and implementation details with this - http://www.mail-archive.com/tuscany-dev%40ws.apache.org/msg21699.html
- First you must ensure that you are not encumbered by the legal and licensing requirements of bouncycastle. There are algorithms in the bouncycastle distributions (such as IDEA) that have patent obligations and you are responsible for sorting this out for yourself.
- Next you must be familiar with setting up Axis2 for confidentiality. Here are some useful links for this http://wso2.org/library/234, http://wso2.org/library/174, http://wso2.org/library/255.
- The itest in the ws-binding-axis2 module and the helloworld-ws-service-secure have definitions.xml file that define the intents and policyset for confidentiality. Change the values that have been provided in the Axis2ConfigParam policy attachment, to suit what you have defined for your application - such as the keystore, the userid and password to the keystore, the password callback handler class etc.
- Now, specify the confidentiality intent on any of the binding.ws elements in your application composite and ensure that the bcprov-jdk15-132.jar is in the classpath.
- Some JREs might require a bit of tweaking with the security.policy settings that deal with encryption providers. We have tried and tested this successfully on Sun and IBM JREs.
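The intent is attached with the standard requires attribute; for example (the service name and URI are hypothetical):

```xml
<service name="HelloWorldService">
    <binding.ws requires="confidentiality"
                uri="http://localhost:8080/HelloWorldService"/>
</service>
```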
You can find the 6 SCA technical committees here:
Meeting minutes, documents, issues and so on are linked from the main page.
Mailing list archives can be found here:
An SCA composite can be used as an implementation (implementation.composite) for a component. This is so-called recursive composition. It allows pre-assembled composites to be reused.
"promote" can be used to make services or references declared on a component inside the composite visible for wiring at the composite level. The composite services and references can then be further configured when the composite is used as a component implementation at an outer level.
Hope the following samples help.
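As a hedged sketch of the idea (all composite, component and class names here are invented for illustration):

```xml
<!-- outer composite: uses an inner composite as a component implementation -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           xmlns:sample="http://sample"
           targetNamespace="http://sample" name="OuterComposite">
    <component name="MyComponent">
        <implementation.composite name="sample:InnerComposite"/>
    </component>
</composite>

<!-- inner composite: promotes a service of an internal component so it is
     visible, and configurable, at the outer level -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           targetNamespace="http://sample" name="InnerComposite">
    <service name="Hello" promote="HelloComponent"/>
    <component name="HelloComponent">
        <implementation.java class="sample.HelloImpl"/>
    </component>
</composite>
```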
If you are trying to run some of the Tuscany sample and demo web applications on WebSphere, you may see the following exception:
The solution to this problem is to set the application properties to use the application class loader before the parent container class loader. The Tuscany class dependencies packaged in your web app will then be successfully loaded and resolved.
A step-by-step explanation and walkthrough is given by Jean-Sebastien at http://jsdelfino.blogspot.com/2007/10/how-to-use-apache-tuscany-with.html.