CentOS 7's support for Nvidia video graphics cards comes in the form of the open source nouveau driver. In case the nouveau driver is not a sufficient solution, users can install the official Nvidia driver as a proprietary alternative. This step-by-step tutorial will guide you through the entire process of Nvidia driver installation.
To install the Nvidia driver on other Linux distributions, follow our Nvidia Linux Driver guide.
Criteria | Requirements |
---|---|
Operating System | CentOS 7.5 or higher |
Software | Existing desktop installation such as GNOME, KDE, etc. |
Other | Privileged access to your Linux system as root or via the sudo command. |
Conventions | # – requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command; $ – requires given Linux commands to be executed as a regular non-privileged user |
The installation of Nvidia drivers consists of multiple steps. First, we identify the model number of the available Nvidia VGA card, prepare the system by installing all package prerequisites, and download the official Nvidia driver. The next step will be to disable the default nouveau driver and install the proprietary Nvidia driver. Let's get started:
STEP 1: Open up a terminal and identify your Nvidia graphics card model by executing:
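The exact command is not shown in this copy; one common way to do this, assuming the pciutils package that ships with CentOS 7, is:

```bash
# List VGA/3D controllers and show which kernel driver is currently bound to them
lspci -k | grep -A 3 -E "(VGA|3D)"
```

The "Kernel driver in use" line should report nouveau at this stage.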
The above command provides information about the Nvidia card's model number. Also note that the open source nouveau driver is currently in use.
STEP 2: Download the Nvidia driver package from nvidia.com using search criteria based on your Nvidia card model and Linux operating system.
Alternatively, if you know what you are doing you can download the driver directly from the Nvidia Linux driver list. Once ready you should end up with a file similar to the one shown below:
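The original file listing is missing here; the official driver is a self-extracting .run installer whose name follows roughly the pattern below (the version is a placeholder):

```bash
$ ls
NVIDIA-Linux-x86_64-<version>.run
```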
STEP 3: Install all prerequisites for a successful Nvidia driver compilation and installation.
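The exact prerequisite list is not reproduced in this copy; a reasonable sketch for CentOS 7 is a compiler toolchain plus kernel headers, with epel-release added so the optional dkms package is available:

```bash
# Compiler toolchain and kernel sources needed to build the Nvidia kernel module
yum groupinstall "Development Tools"
yum install kernel-devel kernel-headers epel-release
# Optional: automatic module rebuilds on kernel updates
yum install dkms
```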
The dkms package is optional. However, this package ensures continuous Nvidia kernel module compilation and installation in the event of a new kernel update.
STEP 4: Disable the nouveau driver by changing the /etc/default/grub configuration file. Add nouveau.modeset=0 to the line starting with GRUB_CMDLINE_LINUX. Below you can find an example grub configuration file reflecting the previously suggested change:
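The example file itself is missing from this copy; a sketch based on a stock CentOS 7 /etc/default/grub is shown below (your existing options may differ — the only addition is nouveau.modeset=0):

```bash
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet nouveau.modeset=0"
GRUB_DISABLE_RECOVERY="true"
```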
The above GRUB_CMDLINE_LINUX line ensures that the nouveau driver is disabled the next time you boot your CentOS 7 Linux system. Once ready, execute the following command to apply the new GRUB configuration change:
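On a BIOS-based system this is typically:

```bash
# Regenerate the GRUB configuration so the new kernel parameter takes effect
# (on UEFI systems the output path is usually /boot/efi/EFI/centos/grub.cfg instead)
grub2-mkconfig -o /boot/grub2/grub.cfg
```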
STEP 5: Reboot your CentOS 7 Linux system. Once the boot is finished, confirm that the open source nouveau driver is no longer in use:
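One way to check, assuming the module was disabled as described above:

```bash
# No output means the nouveau kernel module is not loaded
lsmod | grep nouveau
```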
STEP 6: The Nvidia drivers must be installed while the Xorg server is stopped. Switch to text mode by:
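On a systemd-based system such as CentOS 7, this can be done by isolating the multi-user target:

```bash
# Stop the graphical session and drop to a text console (log in as root afterwards)
systemctl isolate multi-user.target
```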
STEP 7: Install the Nvidia driver by executing the following command:
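The command itself is missing here; running the installer downloaded in STEP 2 looks roughly like this (the filename is a placeholder):

```bash
# Execute the self-extracting Nvidia installer as root
bash NVIDIA-Linux-x86_64-*.run
```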
When prompted, answer YES to the installation of NVIDIA's 32-bit compatibility libraries and to the automatic update of your X configuration file.
All done. The Nvidia driver should now be installed on your CentOS 7 Linux system. Reboot your system, log in, and run nvidia-settings to further configure your Nvidia graphics card settings.
This module provides a RESTful API for interacting with Metron.
Package the application with Maven:
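The Maven command is not shown in this copy; a minimal sketch, assuming you build from the metron-rest module directory of the Metron source tree:

```bash
cd metron-interface/metron-rest
mvn clean package
```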
Untar the archive in the $METRON_HOME directory. The directory structure will look like:
Copy the $METRON_HOME/bin/metron-rest script to /etc/init.d/metron-rest
Deploy the RPM at /metron/metron-deployment/packaging/docker/rpm-docker/target/RPMS/noarch/metron-rest-$METRON_VERSION-*.noarch.rpm
Install the RPM with:
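A sketch of the install command (the exact filename depends on $METRON_VERSION):

```bash
rpm -ivh metron-rest-*.noarch.rpm
```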
The REST application depends on several configuration parameters. The first table below lists required parameters (which have no defaults), the second lists optional parameters with defaults, and the third lists optional parameters with no default.
Environment Variable | Description |
---|---|
ZOOKEEPER | Zookeeper quorum (ex. node1:2181,node2:2181) |
BROKERLIST | Kafka Broker list (ex. node1:6667,node2:6667) |
HDFS_URL | HDFS url or fs.defaultFS Hadoop setting (ex. hdfs://node1:8020) |
Environment Variable | Description | Required | Default |
---|---|---|---|
METRON_LOG_DIR | Directory where the log file is written | Optional | /var/log/metron/ |
METRON_PID_FILE | File where the pid is written | Optional | /var/run/metron/ |
METRON_REST_PORT | REST application port | Optional | 8082 |
METRON_JDBC_CLIENT_PATH | Path to JDBC client jar | Optional | H2 is bundled |
METRON_TEMP_GROK_PATH | Temporary directory used to test grok statements | Optional | ./patterns/temp |
METRON_DEFAULT_GROK_PATH | Default HDFS directory used to store grok statements | Optional | /apps/metron/patterns |
SECURITY_ENABLED | Enables Kerberos support | Optional | false |
METRON_USER_ROLE | Name of the role at the authentication provider that provides user access to Metron. | Optional | USER |
METRON_ADMIN_ROLE | Name of the role at the authentication provider that provides administrative access to Metron. | Optional | ADMIN |
Environment Variable | Description | Required |
---|---|---|
METRON_JDBC_DRIVER | JDBC driver class | Optional |
METRON_JDBC_URL | JDBC url | Optional |
METRON_JDBC_USERNAME | JDBC username | Optional |
METRON_JDBC_PLATFORM | JDBC platform (one of h2, mysql, postgres, oracle) | Optional |
METRON_JVMFLAGS | JVM flags added to the start command | Optional |
METRON_SPRING_PROFILES_ACTIVE | Active Spring profiles (see below) | Optional |
METRON_SPRING_OPTIONS | Additional Spring input parameters | Optional |
METRON_PRINCIPAL_NAME | Kerberos principal for the metron user | Optional |
METRON_SERVICE_KEYTAB | Path to the Kerberos keytab for the metron user | Optional |
These are set in the /etc/default/metron file.
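An illustrative /etc/default/metron fragment combining the parameters above (hostnames and ports are placeholders):

```bash
# /etc/default/metron
ZOOKEEPER="node1:2181,node2:2181"
BROKERLIST="node1:6667,node2:6667"
HDFS_URL="hdfs://node1:8020"
METRON_REST_PORT="8082"
```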
The REST application can be accessed with the Swagger UI at http://host:port/swagger-ui.html#/. The default port is 8082.
Logging for the REST application can be configured in Ambari. Log levels can be changed at the root, package and class level:
Navigate to Services > Metron > Configs > REST and locate the Metron Spring options setting.
Logging configuration is exposed through Spring properties as explained here.
The root logging level defaults to ERROR but can be changed to INFO by adding --logging.level.root=INFO to the Metron Spring options setting.
The Metron REST logging level can be changed to INFO by adding --logging.level.org.apache.metron.rest=INFO.
HTTP request and response logging can be enabled by adding --logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG --logging.level.org.apache.metron.rest.web.filter.ResponseLoggingFilter=DEBUG.
The REST application comes with a few Spring Profiles to aid in testing and development.
Profile | Description |
---|---|
test | Adds test users [user, user1, user2, admin] to the database with password “password” and sets variables to in-memory services; only used for integration testing |
dev | Adds test users [user, user1, user2, admin] to the database with password “password” |
vagrant | sets configuration variables to match the Metron vagrant environment |
docker | sets configuration variables to match the Metron docker environment |
Setting active profiles is done with the METRON_SPRING_PROFILES_ACTIVE variable. For example, set this variable in /etc/default/metron to configure the REST application for the Vagrant environment and add a test user:
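Using the profile names from the table above, that would look like this (combining vagrant with dev adds the test users):

```bash
METRON_SPRING_PROFILES_ACTIVE="vagrant,dev"
```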
Metron REST can be configured for a cluster with Kerberos enabled. A client JAAS file is required for Kafka and Zookeeper and a Kerberos keytab for the metron user principal is required for all other services. Configure these settings in the /etc/default/metron file:
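A sketch of those settings, assuming a headless metron keytab and a client JAAS file under $METRON_HOME (adjust the principal, realm, keytab path and JAAS location to your cluster):

```bash
SECURITY_ENABLED=true
METRON_PRINCIPAL_NAME="metron@EXAMPLE.COM"
METRON_SERVICE_KEYTAB="/etc/security/keytabs/metron.headless.keytab"
# Point the Kafka/Zookeeper clients at the client JAAS file
METRON_JVMFLAGS="-Djava.security.auth.login.config=$METRON_HOME/client_jaas.conf"
```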
Metron REST can be configured to use LDAP for authentication and roles. Use the following steps to enable LDAP.
In Ambari, go to Metron > Config > Security > Roles
Set “User Role Name” to the name of the role at the authentication provider that provides user level access to Metron.
Set “Admin Role Name” to the name of the role at the authentication provider that provides administrative access to Metron.
In Ambari, go to Metron > Config > Security > LDAP
Turn on LDAP using the toggle.
Set “LDAP URL” to your LDAP instance. For example, ldap://<host>:<port>.
Set “Bind User” to the name of the bind user. For example, cn=admin,dc=apache,dc=org.
Set the “Bind User Password”.
Other fields may be required depending on your LDAP configuration.
Save the changes and restart the required services.
By default, the configuration matches Knox’s Demo LDAP for convenience. This should only be used for development purposes. Manual instructions for setting up the demo LDAP and finalizing the configuration (e.g. setting up the user LDIF file) can be found in the Development README.
Configuration is also available to provide a path to a truststore containing SSL certificates, along with its password. Users should import certificates into the appropriate truststores as needed. An example of doing this is:
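The original example is missing here; a sketch using keytool (the alias, certificate file and truststore paths are placeholders):

```bash
# Import the LDAP server's CA certificate into a truststore used by the REST application
keytool -import -trustcacerts -alias ldap-ca \
  -file /path/to/ldap-ca.crt -keystore /path/to/truststore.jks
```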
Roles used by Metron are ROLE_ADMIN and ROLE_USER. Metron constructs these roles from the configured role attribute of an LDAP group that contains the user.
Metron can be configured to map the roles defined in your authorization provider to the authorities used internally for access control. This can be configured under Security > Roles in Ambari.
For example, our ldif file could create this group:
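A hypothetical LDIF entry for such a group (the DN components are placeholders):

```
dn: cn=admin,ou=groups,dc=apache,dc=org
objectClass: groupOfNames
cn: admin
member: uid=admin,ou=people,dc=apache,dc=org
```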
If we are using “cn” as our role attribute, Metron will give the “admin” user the role “ROLE_ADMIN”.
Similarly, we could give a user “sam” ROLE_USER with the following group:
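Again a hypothetical LDIF sketch, with the same placeholder DN components:

```
dn: cn=user,ou=groups,dc=apache,dc=org
objectClass: groupOfNames
cn: user
member: uid=sam,ou=people,dc=apache,dc=org
```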
The REST application persists data in a relational database and requires a dedicated database user and database (see https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-sql.html for more detail).
Spring uses Hibernate as the default ORM framework, but a different framework is needed because Hibernate is not compatible with the Apache 2 license. For this reason Metron uses EclipseLink. See the Spring Data JPA - EclipseLink project for an example of how to configure EclipseLink in Spring.
The metron-rest module uses Spring Security for authentication and stores user credentials in the relational database configured above. The required tables are created automatically the first time the application is started, so start the application once before adding users. For example, when using the MySQL setup described below, users can be added by connecting to MySQL and running:
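A sketch, assuming the metronrest database created in the MySQL steps below and the standard Spring Security users/authorities tables (replace the credentials):

```sql
use metronrest;
insert into users (username, password, enabled) values ('your_username', 'your_password', 1);
insert into authorities (username, authority) values ('your_username', 'ROLE_USER');
```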
The REST application comes with embedded database support for development purposes.
For example, edit these variables in /etc/default/metron before starting the application to configure H2:
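An illustrative H2 configuration (the database file path is a placeholder; the driver class and URL follow standard H2 conventions):

```bash
METRON_JDBC_DRIVER="org.h2.Driver"
METRON_JDBC_URL="jdbc:h2:file:~/metrondb"
METRON_JDBC_USERNAME="root"
METRON_JDBC_PLATFORM="h2"
```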
The REST application should be configured with a production-grade database outside of development.
Installing with Ambari is recommended for production deployments. Ambari handles setup, configuration, and management of the REST component. This includes managing the PID file, directing logging, etc.
The following configures the application for MySQL:
Install MySQL if not already available (this example uses version 5.7, installation instructions can be found here)
Create a metron user and a REST database, and grant the user permissions on that database:
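A sketch of the SQL (user name, host, database name and password are placeholders):

```sql
CREATE USER 'metron'@'node1' IDENTIFIED BY 'Myp@ssw0rd';
CREATE DATABASE IF NOT EXISTS metronrest;
GRANT ALL PRIVILEGES ON metronrest.* TO 'metron'@'node1';
```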
Create the security tables as described in the Spring Security Guide.
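For reference, the default JDBC schema used by Spring Security looks roughly like this; consult the guide for the exact DDL matching your Spring Security version:

```sql
use metronrest;
create table if not exists users (
  username varchar(50) not null primary key,
  password varchar(100) not null,
  enabled boolean not null
);
create table if not exists authorities (
  username varchar(50) not null,
  authority varchar(50) not null,
  constraint fk_authorities_users foreign key (username) references users (username)
);
create unique index ix_auth_username on authorities (username, authority);
```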
Install the MySQL JDBC client onto the REST application host and configure the METRON_JDBC_CLIENT_PATH variable:
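One possibility on CentOS (the package name and jar location may vary with your distribution and connector version):

```bash
yum install mysql-connector-java
# then, in /etc/default/metron:
METRON_JDBC_CLIENT_PATH=/usr/share/java/mysql-connector-java.jar
```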
Edit these variables in /etc/default/metron to configure the REST application for MySQL:
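An illustrative MySQL configuration (host, port and database name are placeholders and should match the database created above):

```bash
METRON_JDBC_DRIVER="com.mysql.jdbc.Driver"
METRON_JDBC_URL="jdbc:mysql://mysql_host:3306/metronrest"
METRON_JDBC_USERNAME="metron"
METRON_JDBC_PLATFORM="mysql"
```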
Switch to the metron user
Start the REST API. Adjust the password as necessary.
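A sketch covering the last two steps, assuming the init script installed earlier; whether and how the JDBC password is passed on the command line depends on your version of the metron-rest script:

```bash
su - metron
# supply the JDBC password if your metron-rest script expects it as an argument
service metron-rest start
```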
The REST application exposes endpoints for querying Pcap data. For more information about filtering options see Query Filter Utility.
There is an endpoint available that will return Pcap data in PDML format. Wireshark must be installed for this feature to work. Installing wireshark in CentOS can be done with yum -y install wireshark.
The REST application uses a Java Process object to call out to the pcap_to_pdml.sh script. This script is installed at $METRON_HOME/bin/pcap_to_pdml.sh by default. Out of the box it is a simple wrapper around the tshark command to transform raw pcap data to PDML. However, it can be extended to do additional processing as long as the expected input/output is maintained. REST will supply the script with raw pcap data through standard input and expects PDML data serialized as XML in return.
Pcap query jobs can be configured for submission to a YARN queue. This setting is exposed as the Spring property pcap.yarn.queue and can be set in the PCAP tab under Metron service -> Configs in Ambari. If configured, the REST application will set the mapreduce.job.queuename Hadoop property to that value. It is highly recommended that a dedicated YARN queue be created and configured for Pcap queries to prevent a job from consuming too many cluster resources. More information about setting up YARN queues can be found here.
Pcap query results are stored in HDFS. The location of query results when run through the REST app is determined by a couple factors. The root of Pcap query results defaults to /apps/metron/pcap/output but can be changed with the Spring property pcap.final.output.path. Assuming the default Pcap query output directory, the path to a result page will follow this pattern:
Over time Pcap query results will accumulate in HDFS. Currently these results are not cleaned up automatically so cluster administrators should be aware of this and monitor them. It is highly recommended that a process be put in place to periodically delete files and directories under the Pcap query results root.
Users should also be mindful of date ranges used in queries so they don’t produce result sets that are too large. Currently there are no limits enforced on date ranges.
Queries can also be configured on a global level for setting the number of results per page via the Spring property pcap.page.size. This property can be set in the PCAP tab under Metron service -> Configs in Ambari. By default, this value is set to 10 pcaps per page, but you may choose to set it higher based on the sizes of frequently-run query results. This setting works in conjunction with the property for setting finalizer threadpool size when optimizing query performance.
Pcap query jobs have a finalization routine that writes their results out to HDFS in pages. Depending on the size of your pcaps, the number of results typically returned, page sizing (described above), and the CPU cores available to your REST application, performance can be improved by adjusting the number of files that can be written to HDFS in parallel. To this end, the threadpool used for this finalization step can be configured to use a specified number of threads via the Spring property pcap.finalizer.threadpool.size. A default value of “1” is used if not specified. Generally speaking, you should see a performance gain when this value is set to anything higher than 1, and a sizeable increase can be achieved, especially for larger numbers of smaller files, by increasing the number of threads. Note that this property is parsed as a String to allow for more complex parallelism values: in addition to plain integer values, you can specify a multiple of the number of cores. If the value ends with “C”, the “C” is stripped and the remainder is treated as an integral multiple of the number of cores; otherwise the value is treated as a plain number.
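Illustrative values for the Pcap-related Spring properties discussed above, as they might be entered in Ambari (the queue name is a placeholder; the path and page size shown are the documented defaults, and "2C" demonstrates the multiple-of-cores syntax):

```
pcap.yarn.queue=pcap
pcap.final.output.path=/apps/metron/pcap/output
pcap.page.size=10
pcap.finalizer.threadpool.size=2C
```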
Request and Response objects are JSON formatted. The JSON schemas are available in the Swagger UI.
Endpoint |
---|
POST /api/v1/alerts/ui/escalate |
GET /api/v1/alerts/ui/settings |
GET /api/v1/alerts/ui/settings/all |
DELETE /api/v1/alerts/ui/settings |
POST /api/v1/alerts/ui/settings |
GET /api/v1/global/config |
DELETE /api/v1/global/config |
POST /api/v1/global/config |
GET /api/v1/grok/get/statement |
GET /api/v1/grok/list |
POST /api/v1/grok/validate |
POST /api/v1/hdfs |
GET /api/v1/hdfs |
DELETE /api/v1/hdfs |
GET /api/v1/hdfs/list |
GET /api/v1/kafka/topic |
POST /api/v1/kafka/topic |
GET /api/v1/kafka/topic/{name} |
DELETE /api/v1/kafka/topic/{name} |
GET /api/v1/kafka/topic/{name}/sample |
POST /api/v1/kafka/topic/{name}/produce |
GET /api/v1/metaalert/searchByAlert |
GET /api/v1/metaalert/create |
GET /api/v1/metaalert/add/alert |
GET /api/v1/metaalert/remove/alert |
GET /api/v1/metaalert/update/status/{guid}/{status} |
POST /api/v1/pcap/fixed |
POST /api/v1/pcap/query |
GET /api/v1/pcap |
GET /api/v1/pcap/{jobId} |
GET /api/v1/pcap/{jobId}/pdml |
GET /api/v1/pcap/{jobId}/raw |
DELETE /api/v1/pcap/kill/{jobId} |
GET /api/v1/pcap/{jobId}/config |
GET /api/v1/search/search |
POST /api/v1/search/search |
POST /api/v1/search/group |
GET /api/v1/search/findOne |
GET /api/v1/search/column/metadata |
GET /api/v1/sensor/enrichment/config |
GET /api/v1/sensor/enrichment/config/list/available/enrichments |
GET /api/v1/sensor/enrichment/config/list/available/threat/triage/aggregators |
DELETE /api/v1/sensor/enrichment/config/{name} |
POST /api/v1/sensor/enrichment/config/{name} |
GET /api/v1/sensor/enrichment/config/{name} |
GET /api/v1/sensor/indexing/config |
DELETE /api/v1/sensor/indexing/config/{name} |
POST /api/v1/sensor/indexing/config/{name} |
GET /api/v1/sensor/indexing/config/{name} |
POST /api/v1/sensor/parser/config |
GET /api/v1/sensor/parser/config |
GET /api/v1/sensor/parser/config/list/available |
POST /api/v1/sensor/parser/config/parseMessage |
GET /api/v1/sensor/parser/config/reload/available |
DELETE /api/v1/sensor/parser/config/{name} |
GET /api/v1/sensor/parser/config/{name} |
POST /api/v1/sensor/parser/group |
GET /api/v1/sensor/parser/group/{name} |
GET /api/v1/sensor/parser/group |
DELETE /api/v1/sensor/parser/group/{name} |
POST /api/v1/stellar/apply/transformations |
GET /api/v1/stellar/list |
GET /api/v1/stellar/list/functions |
GET /api/v1/stellar/list/simple/functions |
POST /api/v1/stellar/validate/rules |
GET /api/v1/storm |
GET /api/v1/storm/client/status |
GET /api/v1/storm/enrichment |
GET /api/v1/storm/enrichment/activate |
GET /api/v1/storm/enrichment/deactivate |
GET /api/v1/storm/enrichment/start |
GET /api/v1/storm/enrichment/stop |
GET /api/v1/storm/indexing/batch |
GET /api/v1/storm/indexing/batch/activate |
GET /api/v1/storm/indexing/batch/deactivate |
GET /api/v1/storm/indexing/batch/start |
GET /api/v1/storm/indexing/batch/stop |
GET /api/v1/storm/indexing/randomaccess |
GET /api/v1/storm/indexing/randomaccess/activate |
GET /api/v1/storm/indexing/randomaccess/deactivate |
GET /api/v1/storm/indexing/randomaccess/start |
GET /api/v1/storm/indexing/randomaccess/stop |
GET /api/v1/storm/parser/activate/{name} |
GET /api/v1/storm/parser/deactivate/{name} |
GET /api/v1/storm/parser/start/{name} |
GET /api/v1/storm/parser/stop/{name} |
GET /api/v1/storm/{name} |
GET /api/v1/storm/supervisors |
PATCH /api/v1/update/patch |
POST /api/v1/update/add/comment |
POST /api/v1/update/remove/comment |
GET /api/v1/user |
Profiles are included for both the metron-docker and Full Dev environments.
Start the metron-docker environment. Build the metron-rest module and start it with the Spring Boot Maven plugin:
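A sketch, assuming the module lives under metron-interface/metron-rest; the profile flag name depends on your Spring Boot Maven plugin version (-Drun.profiles for 1.x, -Dspring-boot.run.profiles for 2.x):

```bash
cd metron-interface/metron-rest
mvn clean package
mvn spring-boot:run -Drun.profiles=docker,dev
```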
The metron-rest application will be available at http://localhost:8080/swagger-ui.html#/.
Start the development environment. Build the metron-rest module and start it with the Spring Boot Maven plugin:
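The same sketch as for the docker environment, switching to the vagrant profile (plus dev for the test users):

```bash
cd metron-interface/metron-rest
mvn clean package
mvn spring-boot:run -Drun.profiles=vagrant,dev
```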
The metron-rest application will be available at http://localhost:8080/swagger-ui.html#/.
To run the application locally on the Full Dev host (node1), follow the Installation instructions above. Then set the METRON_SPRING_PROFILES_ACTIVE variable in /etc/default/metron:
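For example (adding the dev profile is optional and only needed if you want the test users):

```bash
METRON_SPRING_PROFILES_ACTIVE="vagrant,dev"
```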
and start the application:
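Assuming the init script from the Installation section:

```bash
service metron-rest start
```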
In a cluster with Kerberos enabled, update the security settings in /etc/default/metron. Security is disabled by default in the vagrant Spring profile, so that setting must be overridden with the METRON_SPRING_OPTIONS variable:
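A sketch; the exact Spring property that backs the SECURITY_ENABLED toggle is an assumption here, so verify it against your REST application's configuration (the principal and keytab paths are placeholders):

```bash
METRON_SPRING_OPTIONS="--kerberos.enabled=true"
METRON_PRINCIPAL_NAME="metron@EXAMPLE.COM"
METRON_SERVICE_KEYTAB="/etc/security/keytabs/metron.headless.keytab"
```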
The metron-rest application will be available at http://node1:8082/swagger-ui.html#/.
This project depends on the Java Transaction API. See https://java.net/projects/jta-spec/ for more details.