...A place for sharing IT monitoring knowledge


Friday, 13 September 2013

Nagios Core 4: Overview


With only a few days to go until the 2013 Nagios World Conference, where -let's hope- Nagios Core 4 will be presented in detail, we can already get the broad outlines this new release will follow, both by reviewing Andreas Ericsson's presentation at last year's Nagios Conference and by diving into the rather skeletal Nagios 4 beta documentation (now available in its beta 5 release).

In a nutshell

Nagios 4 main features can be summarized as:
  • New check execution philosophy for radical performance improvement.
  • Query handler support, allowing on-demand data to be fetched from the core process itself.
There are other minor features, like -yeaaaah, finally- the possibility of inhibiting service notifications while the parent host is down (that post is one of the most visited on my blog, so the solution is really welcome) or the removal of embedded-Perl support (though at this point I have some doubts; read on and you'll understand), but basically the big enhancement is the performance boost and the great new feature is the query handler support.

Check execution philosophy: The Worker

Up to Nagios Core 3, every time a check had to be run the Nagios Core process forked an identical child process that ran the script (plugin, in Nagios jargon) bound to the check and, once done, returned the result to the parent process and died. This approach was anything but efficient, due to the resources needed to fork a big process (as the Nagios core is) multiple times.

The idea with workers is radically different: the Nagios core process is not going to run any check itself; instead, it will delegate checks to much smaller (and thus much more efficient) processes called workers, whose only mission is running checks and returning results. If you already know ConSol Labs' mod-gearman, the concept is the same.

Without going into detail about how the core will register workers and distribute the load among them (that's not the matter of an overview article), the possibilities workers bring are exciting: enhanced distributed systems, freedom to create complex load-balancing scenarios... a read through the mod-gearman documentation gives a real idea of the potential of the worker approach.

Query handlers

Up to Nagios Core 3, the usual ways of getting data from the core process were:
  • By parsing the status and log files or, which is the same, by using the CGIs included in the core package
  • By exporting data via a broker module like ndomod
In both cases the info didn't come in an on-demand way, i.e. there was no way of getting fresh data exactly when we wanted it; we had to wait for the process to publish it.

Query handlers allow getting data from (i.e. interacting with) the core process in an on-demand way. As the name reveals, they are pieces of code designed for handling certain queries that can be submitted to the core by using the nagios libapi (though I suspect that a Perl module interface will be available soon).
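
To give a feeling of how this could be consumed, here is a minimal command-line sketch based on the beta documentation defaults: the query handler listening on the Unix socket var/rw/nagios.qh under the Nagios home, and the built-in "core" handler answering a squeuestats (scheduling queue statistics) query. Both the socket path and the query syntax are assumptions taken from the beta docs and may change before the final release:

# One-shot query to the core handler asking for scheduling queue statistics
# (socket path and query syntax assumed from the beta docs; adjust to your install)
printf '#core squeuestats\0' | socat - UNIX-CONNECT:/usr/local/nagios/var/rw/nagios.qh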

For those (like me) who love brokers, this new approach doesn't eliminate that actor; in fact, query handlers complement brokers. One example might be a Nagios core web interface (like Centreon, Icinga or Nagios XI) that supports multiple concurrent users querying data such as the status of a given service. It obviously seems silly to address every one of those queries to the core process when it could export that info to a database (via, for instance, the ndomod+ndo2db pair) only when it changes, freeing itself from attending multiple queries from multiple users.

Summarizing

Judging from the sources I mention at the head of this article, these seem to be the main features of the new Nagios Core release. I have to admit that I don't belong to the core development scene, so until Nagios Enterprises officially publishes the new core features (up to now this is the only official info) we won't have a full view of how much happier we'll be with this new release.


Saturday, 9 March 2013

Nagios performance. Concepts


This article, the first of a series where Nagios Core performance is put under the microscope, analyzes what performance means from the point of view of the engine and how it can be monitored. The second article, "Nagios performance. Best practices", defines a list of steps to put these concepts into practice.

When does Nagios perform well

To define that concept it's necessary to step back and consider what Nagios Core is: in essence it's just a scheduler, i.e. a process that runs tasks in a predefined, cyclic way. So in broad terms we can consider that Nagios Core performs well when these tasks are run at the time they were scheduled.

If this scheduling becomes delayed, what is known as latency appears: the difference, in seconds, between the time a task should have been executed and the time it in fact was. For instance, if a service check is scheduled to be executed at 9:00:00.000 AM but it's executed at 9:00:00.500 AM, we get a latency of 0.5 seconds on the check.

So, summarizing, we can consider (and the design of the core supports it) that Nagios Core performs well when the latency of active host and service checks is at its lowest, which raises another question: how low is low enough? Well, it's up to the system admin to decide what latency level is acceptable on his system. As a personal rule of thumb, the maximum latency on a system should never reach the interval_length configuration option value (60 seconds by default), in order to be sure that every scheduled check is run within its scheduled time window.

Why not CPU

There are some reasons for discarding CPU as the main Nagios Core performance indicator. The first is generic: a system performs well if it does well what it's designed to do, regardless of its CPU load. The second is more specific: Nagios lacks load-balancing capabilities at core level, so even using the initial scheduling options (inter-check delay, service interleaving), the server tends to show regular load peaks. The third is more practical: a lack of CPU resources implies an increase in latency, so controlling latency alone will give you the best indicator of monitoring system health.

Latency vs. execution time

You must not confuse check latency with check execution time. Execution time is the amount of time it takes Nagios to execute a check. It can hardly be considered a core performance indicator, since execution time depends closely on factors such as plugin efficiency, load on the device being checked and load on the network infrastructure that interconnects your server with the checked device.

Monitoring latency

So it seems that checking latency is more important than checking CPU itself; the question, then, is how to get it in a programmatic way. nagiostats is a command-line binary that parses the Nagios status.dat file and gives some interesting statistics about how the core is running, latency among them. Luckily there's no need to program a plugin to parse the nagiostats output: there are a bunch of plugins doing it on the Nagios Exchange and Monitoring Exchange sites, most of them generating average latency performance data.
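
If you just want the raw figures by hand, nagiostats itself can print them in MRTG-friendly form; a quick sketch assuming a default source install (binary and config file paths, as well as the exact variable names, are worth double-checking against nagiostats --help on your box):

# Print min, average and max active service check latency (in ms), one value per line
/usr/local/nagios/bin/nagiostats -c /usr/local/nagios/etc/nagios.cfg --mrtg --data=MINACTSVCLAT,AVGACTSVCLAT,MAXACTSVCLAT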

Among them I recommend the fantastic check_nagiostats, since it generates, among many others, min, average and max latency performance data metrics and, moreover, it's programmed in Perl, so you can get the efficiency benefits of running it with the core's embedded Perl interpreter. check_nagiostats is supported by Icinga too (renamed to check_icingastats for copyright reasons), sharing usage and configuration documentation in its project wiki.





Thursday, 6 December 2012

Monitoring Windows services with WMI


One of the most important tasks when monitoring Windows servers is controlling the status of critical services. In this post I'll explain how to externally check if they are running using WMI.

The key point in favor of using WMI instead of other until-now common methods (NSClient++) is clear: WMI is Windows-native and doesn't require installing third-party software on the server side, which relieves you of:

  • periodic agent software upgrades
  • potential security holes
  • (and what that translates to) long and boring discussions with server administrators
This post must not be considered an in-depth WMI howto but rather an "if you need to do it, follow this path" guide. You can find tons of info on the Web deeply covering each aspect of configuring a Windows system to allow remote WMI access.

WMI

Windows Management Instrumentation is the infrastructure for management data and operations on Windows-based operating systems, which translates to: everything you need to know about a Windows XP, Vista, Windows 7, Windows 8, Server 2003 or Server 2008 machine can be retrieved via WMI.

wmic is a Windows command-line program that lets you interact with WMI on either a local or a remote Windows-based system. For instance, it allows us to check which services are running by calling wmic from the command line this way:

>wmic SERVICE where (state="running") GET caption, name, state

WMI on the linux side

Luckily for us there exists wmi-client, a Linux program that allows us to get the same kind of info but, in this case, by running SQL-like queries from a remote Linux host:

>wmic -U myDomain/jdoe%jdoe_password //192.168.0.64 "select caption, name, state from Win32_Service where state='running'"

The previous command will retrieve the name, caption and state of all running services on a Windows host with address 192.168.0.64, using the credentials of the user jdoe (password jdoe_password), who belongs to a Windows domain called myDomain.

wmi-client 1.3.14 packages for Debian (Squeeze) and Ubuntu (Maverick) are available on Mike Palmer's website (sorry Mike, I have no online resources available for storing them myself). For those not using Debian-like systems, the wmi-client 1.3.13 source package is available in the Zenoss repository.

Granting remote access to WMI

Once wmi-client is installed on your Linux system, the only things needed to start playing with it are:
  • Configuring the remote Windows system to accept external WMI queries.
  • Defining a user with enough privileges to run remote WMI queries.
Regarding the first task, if you're dealing with Windows 2003/2008 servers you don't need to do anything on the server side, since WMI access is enabled out of the box.

Regarding the user, you can google and find plenty of literature (much of it wrong) about how to configure one with just the privileges needed for running WMI queries, but if you want to get it done fast, just create a local user with admin privileges. And if you want to get it done even faster, create that user in the domain and prefix the user name with the domain name in the form domain/user (as in the previous example).

Monitoring services

Once you have remote access to the server, it's time to let your monitoring system use the info retrieved via WMI. For that purpose you need to run a query that returns the status of a service given its name, for instance "select state from Win32_Service where name='target_service_name'".
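
For instance, reusing the credentials and address from the earlier example, and the MSExchangeAB service name used later in this post (all of them placeholders for your own environment), the raw query would look like this:

>wmic -U myDomain/jdoe%jdoe_password //192.168.0.64 "select name, state from Win32_Service where name='MSExchangeAB'"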

In order to feed your monitoring system with the retrieved info you can proceed in different ways:
  • Calling the wmic command and parsing its output.
  • If you like Perl, running the previous query through Net::WMIClient, a programmatic interface to the wmic binary.
Users of Nagios-core-compatible solutions (which include Nagios, Centreon, Icinga and OP5) can rely on check_wmi_plus, a well documented mega-plugin that allows getting, among many other things, info about the status of services running on a remote WMI-enabled system:

define command{
command_name check_wmi_service
command_line $USER1$/check_wmi_plus.pl -H $HOSTADDRESS$ -u $ARG1$ -p $ARG2$ -m checkservice -a '$ARG3$' --inidir=/usr/local/nagios/libexec -c _NumBad=0
}

In the previous example a command named check_wmi_service is defined for monitoring whether a given service is running. It is based on these variables:
  • $HOSTADDRESS$: The address of the WMI-compatible Windows server
  • $ARG1$: The user name (if you're using a local user) or domain/username (if you're using a domain user)
  • $ARG2$: The user password
  • $ARG3$: A regular expression matching the service name
That command will return OK if one or more services whose names match the $ARG3$ regex are running; otherwise it will return CRITICAL, since the threshold for services in a "bad" state is set to 0 (-c _NumBad=0).

So, for instance, you can define a service using the previous command to check whether the MS Exchange address book service (called MSExchangeAB) is running, this way:

define service{
host_name ExchangeServer
service_description Address book service
check_command check_wmi_service!jdoe!jdoe_password!^MSExchangeAB$
...
}


Wednesday, 14 November 2012

Monitoring HP bladesystem servers

HP BladeSystem servers are different beasts compared with their brothers from the DL, ML or even BL series: among other things, their management is not based on ILO but on the Onboard Administrator (OA).

ILO supports the great RIBCL protocol, which is by far the best option for monitoring HP servers: it is based on XML and thus easily parseable, and it is native (no need to install SNMP daemons on our servers). Sadly there's no option similar to RIBCL in Onboard Administrator. It supports a telnet/ssh command interpreter, but parsing output from a facility addressed to human administrators instead of machines is more than tricky: you can bet that the output format of the command you parse will change in the next firmware revision.

It's true that the blades contained in a BladeSystem enclosure -since they are considered servers- support ILO, but the output you get when you submit a RIBCL command is not 100% real: for instance, a single virtual fan is shown to represent all the fans available in the enclosure, and something similar happens with power supplies. What blade servers publish via RIBCL is an abstraction of the enclosure's reality.

SNMP is the answer

So the only option for fine-grained monitoring of the BladeSystem is SNMP. HP c3000 and c7000 series BladeSystems support the CPQRACK-MIB MIB (1.3.6.1.4.1.232.22), which stores interesting information for monitoring system health (see the snmpwalk sketch after this list):
  • The enclosure itself, by polling the table cpqRackCommonEnclosureTable (CPQRACK-MIB.2.3.1.1)
  • Enclosure manager (the Onboard Administrators themselves) information is located in the table cpqRackCommonEnclosureManagerTable (CPQRACK-MIB.2.3.1.6)
  • Temperature data can be found in the table cpqRackCommonEnclosureTempTable (CPQRACK-MIB.2.3.1.2)
  • Fan info is located in the table cpqRackCommonEnclosureFanTable (CPQRACK-MIB.2.3.1.3)
  • Fuses are represented in the table cpqRackCommonEnclosureFuseTable (CPQRACK-MIB.2.3.1.4)
  • FRU (Field Replaceable Unit) information is stored in the table cpqRackCommonEnclosureFruTable (CPQRACK-MIB.2.3.1.5)
  • Power systems (global and power-supply specific) can be monitored by polling the tables cpqRackPowerEnclosureTable (CPQRACK-MIB.2.3.3.1) and cpqRackPowerSupplyTable (CPQRACK-MIB.2.5.1.1)
  • Blade information is stored in the table cpqRackServerBladeTable (CPQRACK-MIB.2.4.1.1)
  • Finally, network IO subsystems can be polled via the table cpqRackNetConnectorTable (CPQRACK-MIB.2.6.1.1)
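
As a quick sanity check before wiring anything into your monitoring system, you can walk one of these tables directly from the poller. A sketch where the community string and the OA address are placeholders for your own values:

# Walk the enclosure table (CPQRACK-MIB.2.3.1.1 = 1.3.6.1.4.1.232.22.2.3.1.1)
snmpwalk -v 2c -c public 192.168.0.100 1.3.6.1.4.1.232.22.2.3.1.1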

MIB in detail

All of them store item working statuses and levels, which is what a monitoring system needs to build a picture of the status and performance of a blade system:

  • cpqRackCommonEnclosureCondition (cpqRackCommonEnclosureTable.1.16) stores the status of the whole enclosure: OK (2), degraded (3), failed (4) or other (1).
  • cpqRackCommonEnclosureManagerCondition (cpqRackCommonEnclosureManagerTable.1.12) stores the status of each manager: OK (2), degraded (3), failed (4) or other (1). cpqRackCommonEnclosureManagerRedundant (cpqRackCommonEnclosureManagerTable.1.11) stores the manager redundancy status: redundant (3), notRedundant (2) or other(1).
  • cpqRackCommonEnclosureTempCondition (cpqRackCommonEnclosureTempTable.1.8) states the temperature condition of a single sensor: OK (2), degraded (3), failed (4) or other (1). You can get the real temperature value (in celsius) from cpqRackCommonEnclosureTempCurrent (cpqRackCommonEnclosureTempTable.1.6) and its factory threshold from cpqRackCommonEnclosureTempThreshold (cpqRackCommonEnclosureTempTable.1.7)
  • cpqRackCommonEnclosureFanCondition (cpqRackCommonEnclosureFanTable.1.11) returns a single fan's status: OK (2), degraded (3), failed (4) or other (1). cpqRackCommonEnclosureFanRedundant (cpqRackCommonEnclosureFanTable.1.9) returns whether a fan is in a redundant configuration: redundant (3), notRedundant (2) or other (1).
  • cpqRackCommonEnclosureFuseCondition (cpqRackCommonEnclosureFuseTable.1.7) stores the condition of a single fuse: OK (2), failed (4) or other (1).
  • cpqRackPowerEnclosureCondition (cpqRackPowerEnclosureTable.1.9) stores the overall power system status: OK (2), degraded (3) or other (1).
  • cpqRackPowerSupplyCondition (cpqRackPowerSupplyTable.1.17) returns the working condition of a single power supply: OK (2), degraded (3), failed (4) or other (1). If you like LOTS of details, cpqRackPowerSupplyStatus (cpqRackPowerSupplyTable.1.14) stores the real status of the element:
    • noError (1)
    • generalFailure (2)
    • bistFailure (3)
    • fanFailure (4)
    • tempFailure (5)
    • interlockOpen (6)
    • epromFailed (7)
    • vrefFailed (8)
    • dacFailed (9)
    • ramTestFailed (10)
    • voltageChannelFailed (11)
    • orringdiodeFailed (12)
    • brownOut (13)
    • giveupOnStartup (14)
    • nvramInvalid (15)
    • calibrationTableInvalid (16)
  • cpqRackServerBladeStatus (cpqRackServerBladeTable.1.21) returns the status of a single blade: OK (2), degraded (3), failed (4) or other (1). cpqRackServerBladePowered (cpqRackServerBladeTable.1.25) returns the power status of a single blade: on (2), off (3), powerStaggedOff (4), rebooting (5) or other (1).
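
For example, a minimal health check could poll just the overall enclosure condition described above and alert on anything other than 2 (OK). Again a sketch, with a placeholder community string and OA address:

# cpqRackCommonEnclosureCondition = CPQRACK-MIB.2.3.1.1.1.16 = 1.3.6.1.4.1.232.22.2.3.1.1.1.16
# Returned values: 1=other, 2=ok, 3=degraded, 4=failed
snmpwalk -v 2c -c public 192.168.0.100 1.3.6.1.4.1.232.22.2.3.1.1.1.16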

Using traps

Maybe you are an experienced monitoring technician and you rule out polling data continuously because you prefer to manage the BladeSystem status based on SNMP traps (the truth is that plotting fan speeds and temperatures is cool, but impractical).

If you select this approach, focus on managing at least these traps. All of them are derived from cpqHoGenericTrap (.1.3.6.1.4.1.232.0) defined in CPQHOST-MIB (and inherited by CPQRACK-MIB):
  • Managers: 
    • cpqRackEnclosureManagerDegraded (cpqHoGenericTrap.22037)
    • cpqRackEnclosureManagerOk (cpqHoGenericTrap.22038)
  • Temperatures: 
    • cpqRackEnclosureTempFailed (cpqHoGenericTrap.22005)
    • cpqRackEnclosureTempDegraded (cpqHoGenericTrap.22006)
    • cpqRackEnclosureTempOk (cpqHoGenericTrap.22007)
  • Fans: 
    • cpqRackEnclosureFanFailed (cpqHoGenericTrap.22008)
    • cpqRackEnclosureFanDegraded (cpqHoGenericTrap.22009)
    • cpqRackEnclosureFanOk (cpqHoGenericTrap.22010)
  • Power supplies:
    • cpqRackPowerSupplyFailed (cpqHoGenericTrap.22013)
    • cpqRackPowerSupplyDegraded (cpqHoGenericTrap.22014)
    • cpqRackPowerSupplyOk (cpqHoGenericTrap.22015)
  • Power system:
    • cpqRackPowerSubsystemNotRedundant (cpqHoGenericTrap.22018)
    • cpqRackPowerSubsystemLineVoltageProblem (cpqHoGenericTrap.22019)
    • cpqRackPowerSubsystemOverloadCondition (cpqHoGenericTrap.22020)
  • Blades:
    • cpqRackServerBladeStatusRepaired (cpqHoGenericTrap.22052)
    • cpqRackServerBladeStatusDegraded (cpqHoGenericTrap.22053)
    • cpqRackServerBladeStatusCritical (cpqHoGenericTrap.22054)
  • Network IO subsystem:
    • cpqRackNetConnectorFailed (cpqHoGenericTrap.22046)
    • cpqRackNetConnectorDegraded (cpqHoGenericTrap.22047)
    • cpqRackNetConnectorOk (cpqHoGenericTrap.22048)


Getting the MIB itself

You can browse CPQRACK-MIB in different places, but be warned that it is not always shown in its latest version: for instance, mibdepot doesn't cover the more-than-important cpqRackServerBladeStatus field of the cpqRackServerBladeTable table that defines the status of a blade. If you need the CPQRACK-MIB file itself, you can download it from Plixer.


Monitoring Bladesystem servers in Nagios

If you are a practical guy or you feel too lazy to program, I recommend using Trond H. Amundsen's check_hp_bladechassis Nagios plugin. It polls the tables above and is able to generate performance data.
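
As an illustration, a command definition in the style of the other define blocks in this blog could look like the following; note that the -H/-C option names are an assumption on my side, so check the plugin's --help output for the exact flags of your version:

define command{
command_name check_hp_bladechassis
# -H: Onboard Administrator address, -C: SNMP read community (flag names assumed, verify with --help)
command_line $USER1$/check_hp_bladechassis -H $HOSTADDRESS$ -C $ARG1$
}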


Last but not least...

If you found this article useful, please leave your comments and support the site by clicking on some (or even all!) of the interesting advertisements of our sponsors. Thanks in advance!



Saturday, 28 May 2011

Nagios: Service checks based on host status


Notice

This article applies to Nagios Core 2.x and 3.x. Luckily, Nagios Core 4 natively manages the inhibition of service notifications when the service's parent (for instance its host) is not UP. Read about this and other Nagios Core 4 features in Nagios Core 4: Overview.


You would expect that when a host switches to a DOWN or UNREACHABLE state, Nagios inhibits checking its services: why check them if Nagios itself has determined that the host is not UP?

For better or worse this is not the case: Nagios keeps running regular checks on the services of a non-UP host. The resulting state of each service check depends on how the check handles the unavailability of its data source.

Whatever the advantages of that behaviour, there are some disadvantages:

  • Too much information produces confusion, and a set of service alarms caused by a host failure can hide real problems in services on other hosts.
  • Resource consumption related to the execution of checks that are predestined to fail.
  • A notification storm related to the failure of the host and its services.

Therefore it seems desirable, if not for all then at least for many service types, to follow some steps to avoid the above problems:

  1. Setting service states that reflect the reality of the situation, such as UNKNOWN.
  2. Inhibiting notifications related to those service state changes.
  3. Disabling active checks of services while their host is not UP.

These steps should prevent, to a greater or lesser extent, the problems related to misleading information, resource consumption and notification storms.


Howto
So now the question is: how to do it? There are different approaches, each one with its pros and cons. Far from analyzing them all, the best solution seems to be using Nagios external commands to perform the previous tasks every time the host status changes.

The required external commands are:
  • PROCESS_SERVICE_CHECK_RESULT
  • DISABLE_HOST_SVC_CHECKS / ENABLE_HOST_SVC_CHECKS
  • DISABLE_HOST_SVC_NOTIFICATIONS / ENABLE_HOST_SVC_NOTIFICATIONS
All these commands must be used in a script designed to manage host status changes. This script might take these command-line arguments:
  • Host name, available through the $HOSTNAME$ host macro.
  • Host status, available (in numeric format) through the $HOSTSTATUSID$ host macro.

This could be the script algorithm, expressed in pseudocode (a runnable shell sketch follows after it):

if HOSTSTATUSID=0 then
  # Host has changed to an UP status
   
  # Force status for all host services
  for each host Service
    # Submit an external command to set, as service status,
    # previous current value ($LASTSERVICESTATUSID$ macro)
    ExternalCommand(PROCESS_SERVICE_CHECK_RESULT,Service,
                    $LASTSERVICESTATUSID:HostName:Service$)
  endfor

  # Enable notifications for all host services
  ExternalCommand(ENABLE_HOST_SVC_NOTIFICATIONS, HostName)

  # Enable active checks for all host services
  ExternalCommand(ENABLE_HOST_SVC_CHECKS, HostName)
else
  # Host has changed to a non-UP status
   
  # Disable active checks for all host services
  ExternalCommand(DISABLE_HOST_SVC_CHECKS, HostName)
   
  # Disable notifications for all host services
  ExternalCommand(DISABLE_HOST_SVC_NOTIFICATIONS, HostName)
  # Set UNKNOWN (3) status for all host services
  for each host Service
    ExternalCommand(PROCESS_SERVICE_CHECK_RESULT,Service,3)
  endfor
endif
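
Translated into a real script, both branches boil down to writing external commands to the Nagios command file. A minimal shell sketch, assuming a default source install (external command file at /usr/local/nagios/var/rw/nagios.cmd) and a hypothetical list_host_services helper that prints the service descriptions of a host; the restoration of each service's previous state in the UP branch is omitted for brevity:

#!/bin/sh
# Sketch of setSvcStatusByHostStatus.sh, invoked as: -h <hostname> -s <hoststatusid>
CMDFILE=/usr/local/nagios/var/rw/nagios.cmd   # default external command file
HOST=$2      # value following -h
STATUS=$4    # value following -s
NOW=$(date +%s)

if [ "$STATUS" -ne 0 ]; then
  # Host is not UP: stop checking and notifying its services...
  printf "[%s] DISABLE_HOST_SVC_CHECKS;%s\n" "$NOW" "$HOST" >> "$CMDFILE"
  printf "[%s] DISABLE_HOST_SVC_NOTIFICATIONS;%s\n" "$NOW" "$HOST" >> "$CMDFILE"
  # ...and force every one of its services to UNKNOWN (3)
  for SVC in $(list_host_services "$HOST"); do   # hypothetical helper
    printf "[%s] PROCESS_SERVICE_CHECK_RESULT;%s;%s;3;Host is not UP\n" "$NOW" "$HOST" "$SVC" >> "$CMDFILE"
  done
else
  # Host is back UP: re-enable active checks and notifications for its services
  printf "[%s] ENABLE_HOST_SVC_CHECKS;%s\n" "$NOW" "$HOST" >> "$CMDFILE"
  printf "[%s] ENABLE_HOST_SVC_NOTIFICATIONS;%s\n" "$NOW" "$HOST" >> "$CMDFILE"
fi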


Configuration
Once the script is written, you must define a command object to make it usable from Nagios:

define command {
command_name setSvcStatusByHostStatus
command_line $USER1$/setSvcStatusByHostStatus.sh -h $HOSTNAME$ -s $HOSTSTATUSID$
}

In the previous example the host name is passed to the script (here assumed to be installed in the plugins directory as setSvcStatusByHostStatus.sh) using the -h argument, and the -s argument is used to pass the host status id.
Finally, it is necessary to set the previous command as a host event handler. If the defined solution is suitable for managing all host status changes, the command must be set as the global host event handler in the Nagios configuration (usually stored in the nagios.cfg file):

global_host_event_handler = setSvcStatusByHostStatus

If it's not to be used on all hosts, it must be set as the event handler for every suitable host:

define host {
...
event_handler setSvcStatusByHostStatus
...
}

Centreon
The previous solution is fully supported by Centreon:
  • The command definition is no different from any other usual command. The only thing to consider is defining it as "check" type so that it is available in the event handler configuration lists.
  • You can set the value of global_host_event_handler through the field "Global host event handler" located on the "Checking options" tab of the Configuration>Nagios>Nagios.cfg menu.
  • You can set the event_handler directive for each host using the field "Event handler" located on the "Data management" tab of Configuration>Hosts>(host name).



Saturday, 21 May 2011

Monitoring multi-address or multi-identifier devices


When managing monitoring systems it's common to find situations in which one device has more than one identifier, several network addresses, or a combination of specific IP addresses and identifiers. Some cases may be:
  • Servers with separate management and production network interfaces. These include, for example, HP Proliant servers on which the ILO has a dedicated network interface and therefore a different network address from the production one.
  • Virtual hosts with an IP address and an identifier at the virtualization-system level. A common example is hosts virtualized on VMware ESX, where the identifier at the virtualization level is completely disconnected from the IP address assigned to the device.
When the monitoring system is based on Nagios, where there is only one property that identifies the host address (the address property of the host object), the above situation becomes a problem.
The usual workaround is keeping the second value stored in the alias property and changing the check command definitions, replacing the $HOSTADDRESS$ macro with the $HOSTALIAS$ macro. However, this approach creates more problems than it solves:
  • The alias, when correctly used, is very useful in reports, identifying the host and providing valuable information about it.
  • Some third-party tools, usually topology tools, use this field as a display name.

User Macros
    In addition to the standard macros, Nagios supports the so-called custom variable macros: identifier-value pairs defined in host, service or contact objects. Macros of this type are distinguished from standard ones by being necessarily prefixed with a "_" symbol.

    define host {     

        host_name ProliantServer
        address 192.168.1.1 
        _ILOADDRESS 192.168.2.1
        ...
    }
     
    In the above example a macro called _ILOADDRESS is defined, its value (192.168.2.1) being the IP address of the ILO management interface on a server called ProliantServer. For all practical purposes this macro can be considered a standard Nagios macro: it can be invoked during the execution of both host and service checks and can therefore be used in a command definition:
     

    define command {
        command_name CheckILOFans
        command_line $USER1$/check_snmp -H $_HOSTILOADDRESS$ ...
        ...
    }
     
    define command {
        command_name CheckHTTPPort 
        command_line $USER1$/check_tcp -H $HOSTADDRESS$ ...
        ...
    }
     

    The above example first defines a command called CheckILOFans, intended to check the status of the fans on a server through its ILO management interface. It then defines a command called CheckHTTPPort, intended to establish the availability of the HTTP port on the production interface.


    In the first case the host address used is not $HOSTADDRESS$. Instead we use the address stored in our recently created macro, whose name must be prefixed with _HOST because it has been defined as part of a host object, so the macro must be referenced as $_HOSTILOADDRESS$. In the same way, if we define a custom macro in a service object definition it should be referenced by prefixing its id with _SERVICE, and if we define it in a contact object definition it should be prefixed with _CONTACT.


    By following this approach, we can now use both commands to define checks on the same host, even though they are based on information available through different network interfaces:




    define service {
        host_name ProliantServer
        service_description FanStatus 
        check_command CheckILOFans
        ...
    }

    define service {
        host_name ProliantServer
        service_description HTTPStatus 
        check_command CheckHTTPPort
        ...
    }

     
    Macros in Centreon
     

    For those who prefer configuring Nagios using the Merethis tool, Centreon has supported the management of custom variable macros since version 1.x: you can create, modify and delete them on the "Macros" tab of the host and service object configuration. Unfortunately, even in the recently released version 2.2, the management of macros in contact objects is still not supported.



     