General Troubleshooting

Ambari Server: Check /var/log/ambari-server/ambari-server.[log|out] for errors.
Ambari Agent: Check /var/log/ambari-agent/ambari-agent.[log|out] for errors.
Note that any output in /var/log/ambari-agent/ambari-agent.out indicates a significant problem.
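
A quick way to scan these logs for recent problems (the grep pattern is only a starting point; adjust it as needed):
    # On the Ambari Server host: show recent errors in the server log
    grep -i error /var/log/ambari-server/ambari-server.log | tail -20
    # On each Ambari Agent host: any content in the .out file points to a problem
    cat /var/log/ambari-agent/ambari-agent.out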

Services fail to start up

  • HDFS: Check log files under /var/log/hadoop/hdfs
  • MapReduce: Check log files under /var/log/hadoop/mapred
  • HBase: Check log files under /var/log/hbase
  • Hive: Check log files under /var/log/hive
  • Oozie: Check log files under /var/log/oozie
  • ZooKeeper: Check log files under /var/log/zookeeper
  • WebHCat: Check log files under /var/log/webhcat
  • Nagios: Check log files under /var/log/nagios
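
For example, to inspect the most recent log for a service that fails to start (HDFS shown here; substitute the directory for the failing service):
    # Show the newest HDFS log files and scan the latest one for errors
    ls -lt /var/log/hadoop/hdfs | head
    grep -iE 'error|exception|fatal' $(ls -t /var/log/hadoop/hdfs/*.log | head -1)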

Nagios alerts don't show up in Ambari Web

  • Try running "service httpd restart" on the Nagios Server host.
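
After restarting, it may help to confirm that httpd and the Nagios service are both running (service names as on RHEL/CentOS; the exact log file name may vary):
    # On the Nagios Server host
    service httpd status
    service nagios status
    tail -20 /var/log/nagios/nagios.log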

Install Wizard fails during Install phase

  • Click on the Retry button. In most cases, this resolves install failures caused by intermittent software repository availability during package installation (e.g., "No more mirrors to retry").

Install Wizard failed with warning during Start/Test phase

  • Proceed by clicking "Next". Once you reach the Dashboard, go to each affected service, reconfigure it, and start it to resolve any remaining startup issues.

Installing a new cluster on top of an existing cluster

When installing a Hadoop cluster via Ambari on hosts that already have Hadoop bits installed (including an existing cluster deployed via Ambari), perform the following:

  • Stop all services on all nodes (including Ganglia and Nagios).
  • It is also good practice to delete the RPMs from all nodes.
  • Search for the RPMs:
    rpm -qa | grep ganglia
    rpm -qa | grep oozie
    rpm -qa | grep sqoop
    rpm -qa | grep pig
    rpm -qa | grep nagios
    rpm -qa | grep hadoop
  • Remove them (a combined sketch follows this list):
    rpm -e <package name>
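
The search and removal can be combined into a single pass. This is a sketch only; review the matched packages before removing anything, and note that rpm -e may complain about dependency ordering, in which case remove dependent packages first:
    # Run on every node; review the package list before deleting
    for pkg in $(rpm -qa | grep -E 'ganglia|oozie|sqoop|pig|nagios|hadoop'); do
        rpm -e "$pkg"
    done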

HTTP error 400 – Bad Request – during REST call (POST, PUT, DELETE) to Ambari server

Ambari Server now expects an additional HTTP header called "X-Requested-By" for all non-GET calls. The value can be set to anything. For example:

curl -i -H 'X-Requested-By: mycompany' -X POST -d '{"Clusters": {"version": "HDP-2.0.6"}}' --user admin:admin http://hadoop1.mycompany.com:8080/api/v1/clusters/cluster1
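
The same header is required for PUT and DELETE calls. For example, deleting a cluster (the cluster name and host are illustrative):

curl -i -H 'X-Requested-By: mycompany' -X DELETE --user admin:admin http://hadoop1.mycompany.com:8080/api/v1/clusters/cluster1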

 

Spaces in manager DN/Base DN causing login issues in Ambari

Note that the current Ambari release (1.2.5) has issues when the baseDN or managerDN contains spaces. Please take a look at: https://issues.apache.org/jira/browse/ambari-3006. The fix will be available in the 1.4.1 release of Ambari. If you are running into this issue and cannot remove the spaces from the managerDN/baseDN, keep reading.

I have created a jar with the patch applied on top of branch-1.2.5. The jar is posted at http://people.apache.org/~mahadev/ambari/ambari-server-1.2.5.17.jar.

Download the jar above and follow these steps to fix the issue:

  • Stop Ambari Server:
    ambari-server stop
  • Replace the Ambari Server jar with the downloaded jar. Back up the existing ambari-server jar before overwriting it in case you run into issues.
    cp <downloaded_ambari_server_jar> /usr/lib/ambari-server/ambari-server-1.2.5.17.jar
  • Restart Ambari Server:
    ambari-server start

This should fix the issue with spaces in LDAP DN values.
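
For reference, the full replacement sequence looks roughly like this (the backup file name is only an example; the jar name under /usr/lib/ambari-server may differ slightly on your installation):
    # Stop Ambari Server, back up the existing jar, drop in the patched one, restart
    ambari-server stop
    cp /usr/lib/ambari-server/ambari-server-1.2.5.17.jar /usr/lib/ambari-server/ambari-server-1.2.5.17.jar.bak
    cp <downloaded_ambari_server_jar> /usr/lib/ambari-server/ambari-server-1.2.5.17.jar
    ambari-server start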
