Version 1

Scaling and Performance Plan

Written By: David E. Jones

Web Load Handling Tiers

Web (HTTP) traffic can be load balanced and accelerated at many different points between the user and the database.  The main touch points are listed below; some of them are optional.

  1. HTML Client
    1. Pages and images are often cached on the client; for dynamic pages it may be best to disable this caching so that the user always gets the freshest version from the server
    2. Simple fail-over can be done at this level manually by giving the user multiple URLs for the application
  2. Edge Router (optional)
    1. An edge router allows for wide-area distributed server farms where a user is automatically forwarded to the nearest (fastest) farm
  3. IP Accelerator (optional)
    1. Sits in front of an HTTP or other server and handles client IP connections, maintaining a single persistent connection to the server
    2. Some IP accelerators can be clustered for load balancing on this level and for high availability
    3. Some can load balance over multiple HTTP servers
    4. Some vendors that do this include: www.alacritech.com, www.netscaler.com, www.packeteer.com, www.redlinenetworks.com 
  4. HTTP Server - Apache (optional)
    1. Can be skipped by setting up Tomcat to listen on the desired port
    2. Has plugins to support connection to Tomcat: AJP12 (ad-hoc connection), AJP13 (maintained connection), JNI (fast direct calls), LB (load balancing) (see the mod_jk and workers howtos for more information)
    3. Plugin can support load balancing among multiple Tomcat servers, many Tomcat instances to one Apache instance and one site
    4. Plugin can also support multiple virtual hosted sites targeting different Tomcat servers, many Tomcat instances to one Apache instance and many sites
  5. Servlet Engine - Tomcat
    1. Many Tomcat instances to one JBoss instance is easy because EJBs are made to be distributed and can be called from anywhere
    2. Can hit database directly with JDBC, or go through JBoss EJB container
    3. Data from the database can be cached at this level for highest speed but least shared read access
  6. EJB Container - JBoss (optional)
    1. Can be skipped using JDBC from a servlet
    2. JBoss and Tomcat can run one-to-one on the same machine
    3. Many JBoss instances to one Tomcat instance is difficult for now, until JBoss finishes its clustering support
    4. An EJB could be custom created to support fail-over to another database with Bean Managed Persistence. With Container Managed Persistence, multiple entity bean pools could be created and pointed at different databases; writes would go to all databases and reads could come from any, perhaps with a round-robin or load-querying scheme (whatever the scheme, it would have to be coded manually)
    5. Data from the database can be, and in most EJB containers by default is, cached at this level; this is faster than hitting the database, but slower and more shared (or shareable) than servlet-level caching
  7. RDBMS
    1. Some databases support clustering, data synchronization and failover, but these are not common features and are very proprietary
    2. Different database tables can be stored on different machines; for example, the catalog on one database server, user information on another, and logging information on a third
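The manual multi-database scheme described for the EJB container tier above (writes go to all databases, reads are balanced round-robin across them) could be sketched roughly as follows. This is an illustrative sketch only: the in-memory Database class stands in for a real JDBC data source, and none of the class or method names here come from OFBiz, JBoss, or any real API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the manual scheme: every write goes to all databases so the
// replicas stay consistent, and each read is served by one database chosen
// round-robin. A real implementation would wrap JDBC DataSources and would
// also need to handle partial write failures.
public class ReplicatedStore {

    /** Stand-in for one database; hypothetical, not a real JDBC class. */
    static class Database {
        final String name;
        final Map<String, String> rows = new ConcurrentHashMap<>();
        Database(String name) { this.name = name; }
    }

    private final List<Database> databases = new ArrayList<>();
    private final AtomicInteger nextRead = new AtomicInteger(0);

    public void addDatabase(Database db) { databases.add(db); }

    /** Writes go to every database so any replica can serve a read. */
    public void put(String key, String value) {
        for (Database db : databases) {
            db.rows.put(key, value);
        }
    }

    /** Reads are spread round-robin over the replicas. */
    public String get(String key) {
        int index = Math.floorMod(nextRead.getAndIncrement(), databases.size());
        return databases.get(index).rows.get(key);
    }

    public static void main(String[] args) {
        ReplicatedStore store = new ReplicatedStore();
        store.addDatabase(new Database("catalog-a"));
        store.addDatabase(new Database("catalog-b"));

        store.put("product.1001", "Widget");

        // Both replicas received the write, so consecutive round-robin
        // reads (each hitting a different replica) return the same value.
        System.out.println(store.get("product.1001"));
        System.out.println(store.get("product.1001"));
    }
}
```

As the original text notes, whatever scheme is chosen would have to be coded manually; the hard parts a real version must address are fail-over when one database is down and keeping replicas consistent when a write succeeds on some databases but not others.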

Performance Trade-Offs

Scalability vs. Performance

Components should be designed with scalability, or the support of many users, in mind. Where options are available to implement a procedure or data structure to support either high performance or scalability, scalability should be chosen even if performance is negatively affected.  However, if no real scalability issues get in the way, performance should be optimized as much as is needed for the application.

Maintainability vs. Performance

Sometimes design choices for procedures and data structures emphasize maintainability or performance, but do not support both well and present a trade-off. In these cases maintainability, or customizability, should be the most important factor. In cases where a certain design causes an unreasonable decrease in performance, or where timing constraints are not being met, optimization may be done after the fact to tune the application, minimizing as much as possible the decrease in maintainability and customizability.
