While Traffic Server's default configuration values will get you up and running, they're largely designed for regression testing rather than real-world applications.
This page documents what I've discovered through a fair amount of experimentation and real-world experience.
The following lists the steps involved in taking a generic configuration and modifying it for my own needs. Yours may vary, and I'll do my best
to indicate which settings should be sized based on your install.
Please keep in mind the following only applies to creating a forward-only web proxy caching setup.
NOTE: Please use the following with Apache Traffic Server v5.0.0 and higher.
Server Virtual Machine
- Server Host: Vultr (www.vultr.com)
- CPU: 3.6 GHz Intel CPU (single core)
- Memory: 1GB
- Disk: 20GB SSD
- OS: CentOS Linux 7.0
- Cache Size: 1GB
- Browser: Google Chrome v43
These settings have been tested against the following:
- IPv4 websites
- IPv6 websites
- Particularly demanding web pages (e.g. Bing Image Search)
- SSL-only web sites (e.g. Facebook)
- Internet radio (HTTP streaming, as well as iTunes Radio & Pandora)
The following settings are all located in /usr/local/etc/trafficserver/records.config.
Since Traffic Server v5.0.0 has reorganized this file, I'll go through the relevant sections here. When adding configurations, simply add the settings below the existing ones.
As I'm using Traffic Server on a personal basis, I decided to explicitly configure it not to consume as many CPU cores as it otherwise might.
If your situation is different, simply change proxy.config.exec_thread.limit to set how many CPU cores you'd like to use.
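On my single-core VM that works out to something like the following (the values are illustrative; size proxy.config.exec_thread.limit to your own core count):

```
# Disable automatic thread scaling and set an explicit thread count.
CONFIG proxy.config.exec_thread.autoconfig INT 0
CONFIG proxy.config.exec_thread.limit INT 1
```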
HTTP Connection Timeouts
I'm using Traffic Server on a speedy datacenter-grade connection. As such, I've configured it to be pretty impatient in terms of timeouts.
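As an illustration, my timeouts look roughly like this (these exact values are my own choices for a fast connection, not defaults to copy blindly):

```
# Keep-alive and transaction inactivity timeouts, in seconds.
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 60
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 60
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 15
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 15
# Give up quickly on unresponsive origin servers.
CONFIG proxy.config.http.connect_attempts_timeout INT 10
```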
The following settings control various network-related settings within ATS.
The first setting controls how often Traffic Server will internally poll to process network events. Even though I'm now on a machine that can comfortably absorb the 2-3% CPU load this polling costs, I decided to reduce its frequency. I haven't noticed any significant performance difference as a result.
The second through fourth settings relate closely to the OS tuning documented on the next wiki page.
The second setting removes the TCP_NODELAY option from origin server connections. Once Linux has been told to optimize for latency, this option appears to be unnecessary.
The third and fourth settings specify the socket buffer sizes for origin server connections. I've found that setting these to roughly my "average object size", as reported by the traffic_top utility, appears to be optimal.
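A sketch of the settings described above (the poll timeout and the 32KB buffers are my values; set the buffers to roughly your own average object size from traffic_top):

```
# Poll for network events less often (value is in milliseconds).
CONFIG proxy.config.net.poll_timeout INT 50
# Clear socket option flags on origin connections (bit 1 is TCP_NODELAY).
CONFIG proxy.config.net.sock_option_flag_out INT 0
# Origin-side socket buffers, sized near the average cached object size.
CONFIG proxy.config.net.sock_send_buffer_size_out INT 32768
CONFIG proxy.config.net.sock_recv_buffer_size_out INT 32768
```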
The following configurations tell Traffic Server to cache more aggressively than it otherwise would, and enable a few speed-ups.
I also found that, from a correctness point of view, my cache behaves better when it does not cache "dynamic" URLs.
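As one example of this (exactly which knobs you loosen is a matter of taste), I relax the header requirements for cacheability while turning off caching of dynamic-looking URLs:

```
# Don't require specific headers before an object is considered cacheable.
CONFIG proxy.config.http.cache.required_headers INT 0
# Don't cache URLs that look dynamic (query strings, cgi paths, etc.).
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
```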
Heuristic Cache Expiration
The default config for Traffic Server specifies that any object without an explicit expiration is considered fresh for at most one day (86,400 seconds).
I'd prefer that such objects stick around for between one and four weeks. This setting is contentious, in that what it should be is debatable.
The goal here is to use Traffic Server's built-in heuristics to enforce a freshness window of between one and four weeks.
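That window translates into the following (604,800 seconds is one week, 2,419,200 is four; adjust to taste):

```
# Heuristic freshness window for objects without an explicit expiration.
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 604800
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 2419200
```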
The default config for Traffic Server allows up to 30,000 simultaneous connections.
For my purposes, that's pretty excessive.
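For a single-user proxy, something far smaller will do; the ceiling below is my own guess at a comfortable value:

```
# Cap simultaneous connections well below the 30,000 default.
CONFIG proxy.config.net.connections_throttle INT 1000
```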
RAM And Disk Cache Configuration
The default config for Traffic Server specifies a few things here that can be tuned.
First, I decided to explicitly set my RAM cache settings. If your situation is different, simply change proxy.config.cache.ram_cache.size to set how much RAM you'd like to use.
Second, I observed my cache running via the traffic_top utility and set the average object size accordingly.
NOTE: One should always set this to half the observed average object size; the extra headroom ensures Traffic Server never runs out of slots in which to store objects.
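Putting those together (the numbers are mine: a 64MB RAM cache on a 1GB machine, and half of a roughly 32KB average object size observed in traffic_top):

```
# RAM cache size in bytes (64MB here, leaving headroom on a 1GB machine).
CONFIG proxy.config.cache.ram_cache.size INT 67108864
# Half the average object size reported by traffic_top (see NOTE above).
CONFIG proxy.config.cache.min_average_object_size INT 16384
```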
The defaults for Traffic Server specify a squid-compatible logfile in a binary format. I prefer the file to be human-readable, so I'm overriding this.
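In records.config terms, that override is a single line:

```
# Write the squid-format access log as ASCII instead of binary.
CONFIG proxy.config.log.squid_log_is_ascii INT 1
```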
The defaults for Traffic Server configure the disk-based DNS cache (hostdb) to be rather large. First, I found I got a decent speed improvement by sizing this down.
Second, I specifically prefer IPv6 over IPv4, so I tell the cache to prefer IPv6 when possible.
Third, I also allow the cache to serve stale DNS records for up to 60 seconds while they're being updated. This also contributes to cache speed.
If your situation is different, simply get to know the following settings. They take a bit of practice to get used to, but they're all tunable.
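Here's the shape of my DNS cache configuration (the sizes and the 60-second stale window are my values; the defaults are considerably larger):

```
# Shrink the DNS/hostdb cache from its large defaults.
CONFIG proxy.config.hostdb.size INT 8000
CONFIG proxy.config.hostdb.storage_size INT 1048576
# Prefer IPv6 addresses, falling back to IPv4.
CONFIG proxy.config.hostdb.ip_resolve STRING ipv6;ipv4
# Serve stale DNS entries for up to 60 seconds while they refresh.
CONFIG proxy.config.hostdb.serve_stale_for INT 60
```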
Restart Traffic Server
Once you've updated the relevant records.config settings, simply refresh your disk cache if necessary and then restart Traffic Server.
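On my install that amounts to something like the following (the paths assume a from-source install under /usr/local; note that traffic_server -Cclear wipes the entire cache, so only run it when you actually want to start fresh):

```
# Stop Traffic Server, optionally wipe the disk cache, then start it again.
/usr/local/bin/trafficserver stop
/usr/local/bin/traffic_server -Cclear   # optional: clears the entire cache
/usr/local/bin/trafficserver start
```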
After that's been done, enjoy your newly-tuned proxy server.
Previous Page: WebProxyCacheSetup
Next Page: WebProxyCacheOS