
Scaling and Performance Plan

Written By: David E. Jones

Web Load Handling Tiers

Web (HTTP) traffic can be load balanced and accelerated at several points between the user and the database.  The following is a list of the main touch points, some of which are optional.

  1. HTML Client
    1. Pages and images are often cached on the client; for dynamic pages it is often best to disable this caching so that the user always gets the freshest version from the server
    2. Simple fail-over can be done on this level manually by giving the user multiple URLs for the application (not a common approach, and only applicable for internal or back-office applications)
  2. Edge Router (optional)
    1. An edge router allows for wide-area distributed server farms, where a user is automatically forwarded to the nearest (fastest) farm
  3. IP Accelerator (optional)
    1. Sits in front of an HTTP or other server and handles IP connections for the server, maintaining a single connection to the server
    2. Some IP accelerators can be clustered for load balancing on this level and for high availability
    3. Some can load balance over multiple HTTP servers
    4. Some vendors that do this include: www.alacritech.com, www.netscaler.com, www.packeteer.com, www.redlinenetworks.com
  4. HTTP Server - Apache (optional)
    1. Can be skipped by setting up Tomcat to listen on the desired port
    2. Has plugins to support connection to Tomcat: AJP12 (ad-hoc connection), AJP13 (maintained connection), JNI (fast direct calls), LB (load balancing) (see the mod_jk and workers howtos for more information)
    3. Plugin can support load balancing among multiple Tomcat servers, many Tomcat instances to one Apache instance and one site
    4. Plugin can also support multiple virtual hosted sites targeting different Tomcat servers, many Tomcat instances to one Apache instance and many sites
  5. Servlet Engine - Tomcat
    1. Many Tomcat instances to one database instance (or pool) is easy because database clients are made to be distributed and can be called from anywhere
    2. Data from the database can be cached at this level for highest speed but least shared read access (the best to cache is data that is read a lot, infrequently changed, and that is not sensitive to being stale or out of date)
  6. RDBMS
    1. Some databases support clustering, data synchronization, and failover; these are most commonly used in larger deployments, as database servers tend to be stable
    2. Different database tables can be stored on different machines; for example, the catalog on one database server, user information on another, and logging information on a third
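
The client-caching point in tier 1 can be controlled from the server side with standard HTTP response headers. A sketch of the headers that disable client caching for a dynamic page:

```
Cache-Control: no-cache, no-store, must-revalidate
Pragma: no-cache
Expires: 0
```

`Cache-Control` covers HTTP/1.1 clients and intermediaries; `Pragma` and `Expires` are included for older HTTP/1.0 clients and proxies.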
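
The mod_jk load-balancing setup described in tier 4 is configured in the plugin's workers.properties file. A sketch with two Tomcat instances behind one LB worker, assuming AJP13 connections (hostnames, ports, and worker names here are illustrative):

```properties
# One load-balancer worker fronting two Tomcat instances over AJP13
worker.list=loadbalancer

worker.tomcat1.type=ajp13
worker.tomcat1.host=app1.example.com
worker.tomcat1.port=8009
worker.tomcat1.lbfactor=1

worker.tomcat2.type=ajp13
worker.tomcat2.host=app2.example.com
worker.tomcat2.port=8009
worker.tomcat2.lbfactor=1

# The lb worker distributes requests across the balanced workers
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1,tomcat2
```

Requests are then routed to the balancer from the Apache configuration with a directive such as `JkMount /ofbiz/* loadbalancer`.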
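
The servlet-tier caching described in tier 5 can be as simple as a concurrent map with time-to-live expiration, so that read-mostly data is served from memory and stale entries age out. A minimal sketch (the class and method names are illustrative, not from any particular framework):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal read-mostly cache with time-to-live expiration. Best suited
// to data that is read a lot, changed infrequently, and tolerant of
// being slightly stale, as described above.
public class SimpleTtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt; // epoch millis after which the entry is stale
        Entry(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final ConcurrentMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public SimpleTtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns null when the key is absent or its entry has expired.
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (System.currentTimeMillis() > e.expiresAt) {
            map.remove(key, e); // evict the stale entry
            return null;
        }
        return e.value;
    }
}
```

Note that each Tomcat instance holds its own copy of such a cache, which is why this level gives the highest speed but the least shared read access.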
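
The table-splitting approach in tier 6 is supported in OFBiz by mapping entity groups to different datasources in the Entity Engine configuration. A sketch of the relevant fragment of entityengine.xml, assuming the entities have been split into separate groups (the group and datasource names here are illustrative):

```xml
<!-- Route each entity group to its own database server -->
<delegator name="default" entity-model-reader="main" entity-group-reader="main">
    <group-map group-name="org.ofbiz.catalog" datasource-name="catalogdb"/>
    <group-map group-name="org.ofbiz.party" datasource-name="userdb"/>
    <group-map group-name="org.ofbiz.logging" datasource-name="logdb"/>
</delegator>
```

Each datasource named here is defined elsewhere in the same file with its own JDBC connection details, so the catalog, user, and logging tables can live on separate machines while application code remains unchanged.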

Performance Trade-Offs

Scalability vs. Performance

Components are designed with scalability, or the support of many users, in mind. Where options are available to implement a procedure or data structure to favor either high performance or scalability, scalability is chosen even if performance is negatively affected. However, if no real scalability issues get in the way, performance is optimized as much as is needed for the application.

Maintainability vs. Performance

Sometimes design choices for procedures and data structures can emphasize either maintainability or performance, but do not support both well, and so present a trade-off. In these cases maintainability, or customizability, is the most important factor. Where a certain design causes an unreasonable decrease in performance, or where timing constraints are not being met, optimization may be done after the fact to tune the application, minimizing as much as possible the loss of maintainability and customizability.
