mirror of
https://github.com/wazuh/wazuh-indexer-plugins.git
synced 2025-12-10 14:32:28 -06:00
Migrate code and documentation from wazuh-indexer (#265)
* Migrate code and documentation from wazuh-indexer
* Migrate operational--integrations_maintenance_request.md
* Add ECS folder and workflow
* Add ECS workflow badge
* Adapt ECS workflow generator
* Trigger workflow
* Update ECS templates for modified modules: agent, alerts, command, states-fim, states-inventory-hardware, states-inventory-hotfixes, states-inventory-networks, states-inventory-packages, states-inventory-ports, states-inventory-processes, states-inventory-system, states-vulnerabilities
* Remove unused code
* Update ECS templates for modified modules: agent, alerts, command, states-fim, states-inventory-hardware, states-inventory-hotfixes, states-inventory-networks, states-inventory-packages, states-inventory-ports, states-inventory-processes, states-inventory-system, states-vulnerabilities
* Clean-up

Co-authored-by: Wazuh Indexer Bot <github_devel_xdrsiem_indexer@wazuh.com>
This commit is contained in:
parent
ba4033b891
commit
f04d6fcd90
30
.github/ISSUE_TEMPLATE/operational--integrations_maintenance_request.md
vendored
Normal file
@@ -0,0 +1,30 @@
---
name: Integrations maintenance request
about: Used by the Indexer team to maintain third-party software integrations and track the results.
title: Integrations maintenance request
labels: level/task, request/operational, type/maintenance
assignees: ""
---

## Description

The Wazuh Indexer team is responsible for the maintenance of the third-party integrations hosted in the `wazuh/wazuh-indexer-plugins` repository. We must ensure these integrations work under new releases of the third-party software (Splunk, Elastic, Logstash, …) and our own.

For that, we need to:

- [ ] Create a pull request that upgrades the components to the latest version.
- [ ] Update our testing environments to verify the integrations work under the new versions.
- [ ] Test the integrations, checking that:
  - The Docker Compose project starts without errors.
  - The data arrives at the destination.
  - All the dashboards can be imported successfully.
  - All the dashboards are populated with data.
- [ ] Finally, update the compatibility matrix in `integrations/README.md` with the new versions.

> [!NOTE]
> * For Logstash, we use the logstash-oss image.
> * For Wazuh Indexer and Wazuh Dashboard, we use the opensearch and opensearch-dashboards images. These must match the OpenSearch version that we support (e.g., for Wazuh 4.9.0 it is OpenSearch 2.13.0).

## Issues

- _List here the detected issues_
50
.github/workflows/generate-ecs-mappings.yml
vendored
Normal file
@@ -0,0 +1,50 @@
name: ECS Generator

on:
  push:
    paths:
      - "ecs/**/*.json"
      - "ecs/**/*.yml"

jobs:
  run-ecs-generator:
    if: github.repository == 'wazuh/wazuh-indexer-plugins'
    runs-on: ubuntu-24.04
    env:
      output_folder: /tmp/ecs-templates

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - name: Extract branch name
        shell: bash
        run: echo "branch=${GITHUB_HEAD_REF:-${GITHUB_REF#refs/heads/}}" >> $GITHUB_OUTPUT
        id: branch-name

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Set up Docker Compose
        run: sudo apt-get install docker-compose

      - name: Generate PR to wazuh-indexer-plugins
        id: generate-pr
        env:
          GITHUB_TOKEN: ${{ secrets.INDEXER_BOT_TOKEN }}
          COMMITER_EMAIL: ${{ secrets.INDEXER_BOT_EMAIL }}
          COMMITTER_USERNAME: "Wazuh Indexer Bot"
          SSH_PRIVATE_KEY: ${{ secrets.INDEXER_BOT_PRIVATE_SSH_KEY }}
          SSH_PUBLIC_KEY: ${{ secrets.INDEXER_BOT_PUBLIC_SSH_KEY }}
        run: |
          bash ecs/scripts/generate-pr-to-plugins.sh \
            -b ${{ steps.branch-name.outputs.branch }} \
            -o ${{ env.output_folder }}

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: ecs-templates
          path: ${{ env.output_folder }}
@@ -5,6 +5,8 @@
[](https://groups.google.com/forum/#!forum/wazuh)
[](https://wazuh.com/community/join-us-on-slack)
[](https://documentation.wazuh.com)
[](https://github.com/wazuh/wazuh-indexer-plugins/actions/workflows/generate-ecs-mappings.yml)

- [Welcome!](#welcome)
- [Project Resources](#project-resources)
3
ecs/.gitignore
vendored
Normal file
@@ -0,0 +1,3 @@
**/mappings
*.log
generatedData.json
128
ecs/README.md
Normal file
@@ -0,0 +1,128 @@
## ECS mappings generator

This script generates the ECS mappings for the Wazuh indices.

### Requirements

- [Docker Compose](https://docs.docker.com/compose/install/)

### Folder structure

There is a folder for each module. Inside each folder, there is a `fields` folder with the files required to generate the mappings. These are the inputs for the ECS generator.
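As an illustration, the layout of a single module (based on the `agent` module in this changeset) looks like:

```
ecs/
├── <module>/
│   ├── event-generator/
│   │   └── event_generator.py
│   └── fields/
│       ├── custom/
│       ├── mapping-settings.json
│       ├── subset.yml
│       ├── template-settings-legacy.json
│       └── template-settings.json
└── generator/
    └── mapping-generator.sh
```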
### Usage

1. Execute the mapping-generator tool:

   ```bash
   bash ecs/generator/mapping-generator.sh run <MODULE_NAME>
   ```

2. (Optional) Run the tool's cleanup:

   > The tool stops the container automatically, but it is recommended to run the `down` command if the tool is not going to be used anymore.

   ```bash
   bash ecs/generator/mapping-generator.sh down
   ```
### Output

A new `mappings` folder will be created inside the module folder, containing all the generated files.
The files are versioned using the ECS version, so different versions of the same module can be generated.
For our use case, the most important files are under `mappings/<ECS_VERSION>/generated/elasticsearch/legacy/`:

- `template.json`: Elasticsearch-compatible index template for the module.
- `opensearch-template.json`: OpenSearch-compatible index template for the module.

The original output is `template.json`, which is not compatible with OpenSearch by default.
To make this template compatible with OpenSearch, the following changes are made:

- The `order` property is renamed to `priority`.
- The `mappings` and `settings` properties are nested under the `template` property.

The script takes care of these changes automatically, generating the `opensearch-template.json` file as a result.
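The two changes are mechanical, so they can be illustrated with a short Python sketch (a minimal illustration only; the function name is hypothetical, and the real conversion is performed by the generator script):

```python
import json


def to_opensearch_template(legacy: dict) -> dict:
    """Convert a legacy Elasticsearch index template into an
    OpenSearch composable template: rename `order` to `priority`
    and nest `mappings`/`settings` under `template`."""
    result = {"index_patterns": legacy["index_patterns"]}
    if "order" in legacy:
        result["priority"] = legacy["order"]  # `order` -> `priority`
    template = {}
    for key in ("settings", "mappings"):  # nest under `template`
        if key in legacy:
            template[key] = legacy[key]
    result["template"] = template
    return result


legacy = {
    "index_patterns": ["wazuh-agents*"],
    "order": 1,
    "settings": {"index": {"number_of_shards": "1"}},
    "mappings": {"dynamic": "strict"},
}
print(json.dumps(to_opensearch_template(legacy), indent=2))
```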
### Upload

You can upload the index template using either cURL or the UI (Dev Tools).

```bash
curl -u admin:admin -k -X PUT "https://indexer:9200/_index_template/wazuh-states-vulnerabilities" -H "Content-Type: application/json" -d @opensearch-template.json
```

Notes:

- PUT and POST are interchangeable.
- The name of the index template does not matter; any name can be used.
- Adjust the credentials and URL accordingly.
### Adding new mappings

The easiest way to create mappings for a new module is to take a previous one as a base:
copy a folder, rename it to the new module name, and edit the `fields` files to match the new module's fields.

The name of the folder will be the name of the module passed to the script. All three files are required:

- `fields/subset.yml`: contains the subset of ECS fields to be used for the module.
- `fields/template-settings-legacy.json`: contains the legacy template settings for the module.
- `fields/template-settings.json`: contains the composable template settings for the module.
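A minimal `subset.yml` sketch, modeled on the `agent` module in this changeset (`mymodule` is a placeholder name), looks like:

```yaml
---
name: mymodule          # must match the module folder name
fields:
  base:
    fields:
      tags: []          # pick individual base fields
  agent:
    fields:
      id: {}            # select ECS fields one by one...
      name: {}
  host:
    fields: "*"         # ...or pull in a whole field set
```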
### Event generator

Each module contains a Python script that generates events for it. The script prompts for the required parameters, so it can be launched without arguments:

```bash
./event_generator.py
```

The script will generate a JSON file with the events, and will also ask whether to upload them to the indexer. If the upload option is selected, the script will ask for the indexer URL and port, credentials, and index name.
The script writes to a log file; check it for debugging or additional information.
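The generated file is newline-delimited JSON, one event object per line, so it can be read back line by line. A small sketch (the `load_events` helper is illustrative, not part of the repository):

```python
import json


def load_events(path: str) -> list:
    """Read a newline-delimited JSON file, one event object per line,
    in the same layout event_generator.py writes."""
    events = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                events.append(json.loads(line))
    return events


# Write two sample lines the same way the generator does, then read them back.
with open("generatedData.json", "w") as out:
    for event in ({"agent": {"id": "agent1"}}, {"agent": {"id": "agent2"}}):
        json.dump(event, out)
        out.write("\n")

events = load_events("generatedData.json")
print(len(events))  # 2
```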
---

### Automatic PR creation tool

The `generate-pr-to-plugins.sh` script, found in the `ecs/scripts` folder, detects modified ECS modules, generates new templates, commits the changes to a target repository, and creates or updates a pull request.

#### Requirements

- Docker Compose
- GitHub CLI (`gh`)

#### Usage

To use the script, run the following command:

```sh
./generate-pr-to-plugins.sh -t <GITHUB_TOKEN>
```

**Options**

- `-b <BRANCH_NAME>`: (Optional) Branch name to create or update the pull request from. Defaults to the current branch.
- `-t <GITHUB_TOKEN>`: (Optional) GitHub token to authenticate with the GitHub API. If not provided, the script uses the `GITHUB_TOKEN` environment variable.
#### Script Workflow

1. **Validate Dependencies**
   - Checks that the required commands (`docker`, `docker-compose`, and `gh`) are installed.

2. **Detect Modified Modules**
   - Fetches and extracts modified ECS modules by comparing the current branch with the base branch.
   - Identifies relevant ECS modules that have been modified.

3. **Run ECS Generator**
   - Runs the ECS generator script for each relevant module to generate new ECS templates.

4. **Clone Target Repository**
   - Clones the target repository (`wazuh/wazuh-indexer-plugins`) if it does not already exist.
   - Configures Git and the GitHub CLI with the provided GitHub token.

5. **Commit and Push Changes**
   - Copies the generated ECS templates to the appropriate directory in the target repository.
   - Commits and pushes the changes to the specified branch.

6. **Create or Update Pull Request**
   - Creates a new pull request, or updates an existing one, with the modified ECS templates.
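Step 2 amounts to listing the files changed between the two branches and taking the directory directly under `ecs/` as the module name. A sketch of that filtering (the helper name is illustrative; the real script does this in Bash over `git diff --name-only` output):

```python
def modified_modules(changed_paths):
    """Given paths changed between two branches, return the ECS module
    names, i.e. the directory directly under `ecs/`."""
    modules = set()
    for path in changed_paths:
        parts = path.split("/")
        # Only ecs/<module>/... paths count; skip tool folders.
        if len(parts) >= 3 and parts[0] == "ecs" and parts[1] not in ("scripts", "generator"):
            modules.add(parts[1])
    return sorted(modules)


changed = [
    "ecs/agent/fields/subset.yml",
    "ecs/alerts/fields/custom/agent.yml",
    "ecs/scripts/generate-pr-to-plugins.sh",
    "README.md",
]
print(modified_modules(changed))  # ['agent', 'alerts']
```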
#### References

- [ECS repository](https://github.com/elastic/ecs)
- [ECS usage](https://github.com/elastic/ecs/blob/main/USAGE.md)
- [ECS field reference](https://www.elastic.co/guide/en/ecs/current/ecs-field-reference.html)
182
ecs/agent/event-generator/event_generator.py
Normal file
@@ -0,0 +1,182 @@
#!/usr/bin/env python3

import datetime
import json
import logging
import random

import requests
import urllib3

# Constants and configuration
LOG_FILE = 'generate_data.log'
GENERATED_DATA_FILE = 'generatedData.json'
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
# Default values
INDEX_NAME = "wazuh-agents"
USERNAME = "admin"
PASSWORD = "admin"
IP = "127.0.0.1"
PORT = "9200"

# Configure logging
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)

# Suppress insecure-request warnings (self-signed indexer certificates)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def generate_random_date():
    """Return a random timestamp within the last 10 days."""
    start_date = datetime.datetime.now()
    end_date = start_date - datetime.timedelta(days=10)
    random_date = start_date + (end_date - start_date) * random.random()
    return random_date.strftime(DATE_FORMAT)


def generate_random_agent():
    """Build a random agent document following the wazuh-agents mappings."""
    agent = {
        'id': f'agent{random.randint(0, 99)}',
        'name': f'Agent{random.randint(0, 99)}',
        'type': random.choice(['filebeat', 'windows', 'linux', 'macos']),
        'version': f'v{random.randint(0, 9)}-stable',
        'status': random.choice(['active', 'inactive']),
        'last_login': generate_random_date(),
        'groups': [f'group{random.randint(0, 99)}', f'group{random.randint(0, 99)}'],
        'key': f'key{random.randint(0, 999)}',
        'host': generate_random_host()
    }
    return agent


def generate_random_host():
    """Build a random host object for the agent document."""
    family = random.choice(
        ['debian', 'ubuntu', 'macos', 'ios', 'android', 'RHEL'])
    version = f'{random.randint(0, 99)}.{random.randint(0, 99)}'
    host = {
        'architecture': random.choice(['x86_64', 'arm64']),
        'boot': {
            'id': f'boot{random.randint(0, 9999)}'
        },
        'cpu': {
            'usage': random.uniform(0, 100)
        },
        'disk': {
            'read': {
                'bytes': random.randint(0, 1000000)
            },
            'write': {
                'bytes': random.randint(0, 1000000)
            }
        },
        'domain': f'domain{random.randint(0, 999)}',
        'geo': {
            'city_name': random.choice(['San Francisco', 'New York', 'Berlin', 'Tokyo']),
            'continent_code': random.choice(['NA', 'EU', 'AS']),
            'continent_name': random.choice(['North America', 'Europe', 'Asia']),
            'country_iso_code': random.choice(['US', 'DE', 'JP']),
            'country_name': random.choice(['United States', 'Germany', 'Japan']),
            'location': {
                'lat': round(random.uniform(-90.0, 90.0), 6),
                'lon': round(random.uniform(-180.0, 180.0), 6)
            },
            'name': f'geo{random.randint(0, 999)}',
            'postal_code': f'{random.randint(10000, 99999)}',
            'region_iso_code': f'region{random.randint(0, 999)}',
            'region_name': f'Region {random.randint(0, 999)}',
            'timezone': random.choice(['PST', 'EST', 'CET', 'JST'])
        },
        'hostname': f'host{random.randint(0, 9999)}',
        'id': f'hostid{random.randint(0, 9999)}',
        'ip': f'{random.randint(1, 255)}.{random.randint(1, 255)}.{random.randint(1, 255)}.{random.randint(1, 255)}',
        'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
        'name': f'hostname{random.randint(0, 9999)}',
        'network': {
            'egress': {
                'bytes': random.randint(0, 1000000),
                'packets': random.randint(0, 1000000)
            },
            'ingress': {
                'bytes': random.randint(0, 1000000),
                'packets': random.randint(0, 1000000)
            }
        },
        'os': {
            'family': family,
            'full': f'{family} {version}',
            'kernel': f'kernel{random.randint(0, 999)}',
            'name': family,
            'platform': random.choice(['linux', 'windows', 'macos']),
            'type': family,
            'version': version
        },
        'pid_ns_ino': f'{random.randint(1000000, 9999999)}',
        'risk': {
            'calculated_level': random.choice(['low', 'medium', 'high']),
            'calculated_score': random.uniform(0, 100),
            'calculated_score_norm': random.uniform(0, 1),
            'static_level': random.choice(['low', 'medium', 'high']),
            'static_score': random.uniform(0, 100),
            'static_score_norm': random.uniform(0, 1)
        },
        'uptime': random.randint(0, 1000000)
    }
    return host


def generate_random_data(number):
    """Generate `number` event documents, each wrapping a random agent."""
    data = []
    for _ in range(number):
        event_data = {
            'agent': generate_random_agent()
        }
        data.append(event_data)
    return data


def inject_events(ip, port, index, username, password, data):
    """POST each event to the indexer; stop and log on the first failure."""
    url = f'https://{ip}:{port}/{index}/_doc'
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}

    try:
        for event_data in data:
            response = session.post(url, json=event_data, headers=headers)
            if response.status_code != 201:
                logging.error(f'Error: {response.status_code}')
                logging.error(response.text)
                break
        else:
            logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def main():
    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
        return

    logging.info(f"Generating {number} events...")
    data = generate_random_data(number)

    # Append one JSON object per line (newline-delimited JSON).
    with open(GENERATED_DATA_FILE, 'a') as outfile:
        for event_data in data:
            json.dump(event_data, outfile)
            outfile.write('\n')

    logging.info('Data generation completed.')

    inject = input("Do you want to inject the generated data into your indexer? (y/n) ").strip().lower()
    if inject == 'y':
        ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
        port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT
        index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
        username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
        password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD
        inject_events(ip, port, index, username, password, data)


if __name__ == "__main__":
    main()
32
ecs/agent/fields/custom/agent.yml
Normal file
@@ -0,0 +1,32 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
    - name: key
      type: keyword
      level: custom
      description: >
        The registration key of the agent.
    - name: last_login
      type: date
      level: custom
      description: >
        The last time the agent logged in.
    - name: status
      type: keyword
      level: custom
      description: >
        The agent's interpreted connection status, depending on `agent.last_login`.
      allowed_values:
        - name: active
          description: Active agent status
        - name: disconnected
          description: Disconnected agent status
6
ecs/agent/fields/custom/host.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: host
  reusable:
    top_level: false
    expected:
      - agent
6
ecs/agent/fields/custom/os.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: os
  reusable:
    top_level: false
    expected:
      - agent.host
6
ecs/agent/fields/custom/risk.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: risk
  reusable:
    top_level: false
    expected:
      - agent.host
4
ecs/agent/fields/mapping-settings.json
Normal file
@@ -0,0 +1,4 @@
{
  "dynamic": "strict",
  "date_detection": false
}
18
ecs/agent/fields/subset.yml
Normal file
@@ -0,0 +1,18 @@
---
name: agent
fields:
  base:
    fields:
      tags: []
  agent:
    fields:
      id: {}
      name: {}
      type: {}
      version: {}
      groups: {}
      key: {}
      last_login: {}
      status: {}
      host:
        fields: "*"
22
ecs/agent/fields/template-settings-legacy.json
Normal file
@@ -0,0 +1,22 @@
{
  "index_patterns": [
    "wazuh-agents*"
  ],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "query.default_field": [
        "agent.id",
        "agent.name",
        "agent.type",
        "agent.version",
        "agent.name",
        "host.os.full",
        "host.ip"
      ]
    }
  }
}
24
ecs/agent/fields/template-settings.json
Normal file
@@ -0,0 +1,24 @@
{
  "index_patterns": [
    "wazuh-agents*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.name",
          "agent.type",
          "agent.version",
          "agent.name",
          "host.os.full",
          "host.ip"
        ]
      }
    }
  }
}
12
ecs/alerts/fields/custom/agent.yml
Normal file
@@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
6
ecs/alerts/fields/custom/host.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: host
  reusable:
    top_level: true
    expected:
      - { at: agent, as: host }
6
ecs/alerts/fields/custom/os.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: os
  reusable:
    top_level: false
    expected:
      - agent.host
6
ecs/alerts/fields/custom/risk.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: risk
  reusable:
    top_level: false
    expected:
      - agent.host
4
ecs/alerts/fields/mapping-settings.json
Normal file
@@ -0,0 +1,4 @@
{
  "dynamic": true,
  "date_detection": false
}
603
ecs/alerts/fields/subset.yml
Normal file
@@ -0,0 +1,603 @@
---
name: main
fields:
  base:
    fields: "*"
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  as:
    fields: "*"
  client:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  cloud:
    fields: "*"
  code_signature:
    fields: "*"
  container:
    fields: "*"
  data_stream:
    fields: "*"
  destination:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  device:
    fields: "*"
  dll:
    fields: "*"
  dns:
    fields: "*"
  ecs:
    fields: "*"
  elf:
    fields: "*"
  email:
    fields: "*"
  error:
    fields: "*"
  event:
    fields: "*"
  faas:
    fields: "*"
  file:
    fields: "*"
  geo:
    fields: "*"
  group:
    fields: "*"
  hash:
    fields: "*"
  host:
    fields: "*"
  http:
    fields: "*"
  interface:
    fields: "*"
  log:
    fields: "*"
  macho:
    fields: "*"
  network:
    fields: "*"
  observer:
    fields: "*"
  orchestrator:
    fields: "*"
  organization:
    fields: "*"
  os:
    fields: "*"
  package:
    fields: "*"
  pe:
    fields: "*"
  process:
    fields:
      args: {}
      args_count: {}
      code_signature:
        fields: "*"
      command_line: {}
      elf:
        fields: "*"
      end: {}
      entity_id: {}
      entry_leader:
        fields:
          args: {}
          args_count: {}
          command_line: {}
          entity_id: {}
          entry_meta:
            fields:
              type: {}
              source:
                fields:
                  ip: {}
          executable: {}
          interactive: {}
          name: {}
          parent:
            fields:
              entity_id: {}
              pid: {}
              vpid: {}
              start: {}
              session_leader:
                fields:
                  entity_id: {}
                  pid: {}
                  vpid: {}
                  start: {}
          pid: {}
          vpid: {}
          same_as_process: {}
          start: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          working_directory: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
          attested_user:
            fields:
              id: {}
              name: {}
          attested_groups:
            fields:
              name: {}
      entry_meta:
        fields:
          type:
            docs_only: True
      env_vars: {}
      executable: {}
      exit_code: {}
      group_leader:
        fields:
          args: {}
          args_count: {}
          command_line: {}
          entity_id: {}
          executable: {}
          interactive: {}
          name: {}
          pid: {}
          vpid: {}
          same_as_process: {}
          start: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          working_directory: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
      hash:
        fields: "*"
      interactive: {}
      io:
        fields: "*"
      macho:
        fields: "*"
      name: {}
      parent:
        fields:
          args: {}
          args_count: {}
          code_signature:
            fields: "*"
          command_line: {}
          elf:
            fields: "*"
          end: {}
          entity_id: {}
          executable: {}
          exit_code: {}
          group_leader:
            fields:
              entity_id: {}
              pid: {}
              vpid: {}
              start: {}
          hash:
            fields: "*"
          interactive: {}
          macho:
            fields: "*"
          name: {}
          pe:
            fields: "*"
          pgid: {}
          pid: {}
          vpid: {}
          start: {}
          thread:
            fields:
              id: {}
              name: {}
              capabilities:
                fields:
                  effective: {}
                  permitted: {}
          title: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          uptime: {}
          working_directory: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
      pe:
        fields: "*"
      pgid: {}
      pid: {}
      vpid: {}
      previous:
        fields:
          args: {}
          args_count: {}
          executable: {}
      real_group:
        fields:
          id: {}
          name: {}
      real_user:
        fields:
          id: {}
          name: {}
      same_as_process:
        docs_only: True
      saved_group:
        fields:
          id: {}
          name: {}
      saved_user:
        fields:
          id: {}
          name: {}
      start: {}
      supplemental_groups:
        fields:
          id: {}
          name: {}
      session_leader:
        fields:
          args: {}
          args_count: {}
          command_line: {}
          entity_id: {}
          executable: {}
          interactive: {}
          name: {}
          pid: {}
          vpid: {}
          same_as_process: {}
          start: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          working_directory: {}
          parent:
            fields:
              entity_id: {}
              pid: {}
              vpid: {}
              start: {}
              session_leader:
                fields:
                  entity_id: {}
                  pid: {}
                  vpid: {}
                  start: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
      thread:
        fields:
          id: {}
          name: {}
          capabilities:
            fields:
              effective: {}
              permitted: {}
      title: {}
      tty:
        fields: "*"
      uptime: {}
      user:
        fields:
          id: {}
          name: {}
      working_directory: {}
  registry:
    fields: "*"
  related:
    fields: "*"
  risk:
    fields: "*"
  rule:
    fields: "*"
  server:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  service:
    fields: "*"
  source:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  threat:
    fields: "*"
  tls:
    fields: "*"
  tracing:
    fields: "*"
  url:
    fields: "*"
  user_agent:
    fields: "*"
  user:
    fields:
      changes:
        fields:
          domain: {}
          email: {}
          group:
            fields: "*"
          full_name: {}
          hash: {}
          id: {}
          name: {}
          roles: {}
      domain: {}
      effective:
        fields:
          domain: {}
          email: {}
          group:
            fields: "*"
          full_name: {}
          hash: {}
          id: {}
          name: {}
          roles: {}
      email: {}
      group:
        fields: "*"
      full_name: {}
      hash: {}
      id: {}
      name: {}
      risk:
        fields: "*"
      roles: {}
      target:
        fields:
          domain: {}
          email: {}
          group:
            fields: "*"
          full_name: {}
          hash: {}
          id: {}
          name: {}
          roles: {}
  vlan:
    fields: "*"
  vulnerability:
    fields: "*"
  x509:
    fields: "*"
18 ecs/alerts/fields/template-settings-legacy.json Normal file
@@ -0,0 +1,18 @@
{
  "index_patterns": [
    "wazuh-alerts-5.x-*"
  ],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "mapping": {
        "total_fields": {
          "limit": 2500
        }
      }
    }
  }
}
18 ecs/alerts/fields/template-settings.json Normal file
@@ -0,0 +1,18 @@
{
  "index_patterns": [
    "wazuh-alerts-5.x-*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "mapping": {
          "total_fields": {
            "limit": 2500
          }
        },
        "refresh_interval": "5s"
      }
    }
  }
}
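The composable template above is registered through OpenSearch's `_index_template` API. A minimal sketch of that call, not part of the repository (the template name `wazuh-alerts` and the credentials are illustrative; the actual HTTP call is left commented out):

```python
# Sketch: register the composable index template above with OpenSearch.
# Assumptions: template name "wazuh-alerts" and admin/admin credentials
# are placeholders, not values taken from the repository.

TEMPLATE = {
    "index_patterns": ["wazuh-alerts-5.x-*"],
    "priority": 1,
    "template": {
        "settings": {
            "index": {
                "mapping": {"total_fields": {"limit": 2500}},
                "refresh_interval": "5s",
            }
        }
    },
}


def template_url(protocol: str, host: str, port: str, name: str) -> str:
    """Build the PUT URL for a composable index template."""
    return f"{protocol}://{host}:{port}/_index_template/{name}"


# e.g.:
# requests.put(template_url("https", "127.0.0.1", "9200", "wazuh-alerts"),
#              json=TEMPLATE, auth=("admin", "admin"), verify=False)
```

The legacy variant above uses the older `_template` endpoint and `order` instead of `priority`; the composable form is the one current OpenSearch versions prefer.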
165 ecs/command/event-generator/event_generator.py Normal file
@@ -0,0 +1,165 @@
#!/bin/python3

import argparse
import datetime
import json
import logging
import random
import requests
import urllib3
import uuid

LOG_FILE = 'generate_data.log'
GENERATED_DATA_FILE = 'generatedData.json'
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
# Default values
INDEX_NAME = "wazuh-commands"
USERNAME = "admin"
PASSWORD = "admin"
IP = "127.0.0.1"
PORT = "9200"

# Configure logging
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)

# Suppress warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def generate_random_date(initial_date=None, days_range=30):
    if initial_date is None:
        initial_date = datetime.datetime.now(datetime.timezone.utc)
    random_days = random.randint(0, days_range)
    new_timestamp = initial_date + datetime.timedelta(days=random_days)
    return new_timestamp.strftime('%Y-%m-%dT%H:%M:%SZ')


def generate_random_command(include_all_fields=False):
    command = {
        "source": random.choice(["Users/Services", "Engine", "Content manager"]),
        "user": f"user{random.randint(1, 100)}",
        "target": {
            "id": f"target{random.randint(1, 10)}",
            "type": random.choice(["agent", "group", "server"])
        },
        "action": {
            "name": random.choice(["restart", "update", "change_group", "apply_policy"]),
            "args": {"arg1": f"/path/to/executable/arg{random.randint(1, 10)}"},
            "version": f"v{random.randint(1, 5)}"
        },
        "timeout": random.randint(10, 100)
    }
    if include_all_fields:
        document = {
            "@timestamp": generate_random_date(),
            "delivery_timestamp": generate_random_date(),
            "agent": {"groups": [f"group{random.randint(1, 5)}"]},
            "command": {
                **command,
                "status": random.choice(["pending", "sent", "success", "failure"]),
                "result": {
                    "code": random.randint(0, 255),
                    "message": f"Result message {random.randint(1, 1000)}",
                    "data": f"Result data {random.randint(1, 100)}"
                },
                "request_id": str(uuid.uuid4()),
                "order_id": str(uuid.uuid4())
            }
        }
        return document

    return command


def generate_random_data(number, include_all_fields=False):
    data = []
    for _ in range(number):
        data.append(generate_random_command(include_all_fields))
    if not include_all_fields:
        return {"commands": data}
    return data


def inject_events(protocol, ip, port, index, username, password, data, use_index=False):
    try:
        if not use_index:
            # Use the command-manager API
            url = f'{protocol}://{ip}:{port}/_plugins/_command_manager/commands'
            send_post_request(username, password, url, data)
            return
        for event_data in data:
            # Generate UUIDs for the document id
            doc_id = str(uuid.uuid4())
            url = f'{protocol}://{ip}:{port}/{index}/_doc/{doc_id}'
            send_post_request(username, password, url, event_data)
        logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def send_post_request(username, password, url, event_data):
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}
    # Send request
    response = session.post(url, data=json.dumps(event_data), headers=headers)
    if response.status_code not in [201, 200]:
        logging.error(f'Error: {response.status_code}')
        logging.error(response.text)
    return response


def main():
    parser = argparse.ArgumentParser(
        description="Generate and optionally inject events into an OpenSearch index or Command Manager."
    )
    parser.add_argument(
        "--index",
        action="store_true",
        help="Generate additional fields for indexing and inject into a specific index."
    )
    parser.add_argument(
        "--protocol",
        choices=['http', 'https'],
        default='https',
        help="Specify the protocol to use: http or https."
    )
    args = parser.parse_args()

    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
        return

    logging.info(f"Generating {number} events...")
    data = generate_random_data(number, include_all_fields=args.index)

    with open(GENERATED_DATA_FILE, 'a') as outfile:
        json.dump(data, outfile)
        outfile.write('\n')

    logging.info('Data generation completed.')

    inject = input(
        "Do you want to inject the generated data into your indexer/command manager? (y/n) "
    ).strip().lower()
    if inject == 'y':
        ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
        port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT

        if args.index:
            index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
        else:
            index = None

        username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
        password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD

        inject_events(args.protocol, ip, port, index, username, password,
                      data, use_index=bool(args.index))


if __name__ == "__main__":
    main()
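The generator emits two payload shapes: without `--index`, a `{"commands": [...]}` wrapper for the Command Manager API; with `--index`, full documents ready for direct indexing. A quick sanity sketch of both shapes (example values are illustrative, not taken from the repository):

```python
# Sketch of the two payload shapes event_generator.py produces.
# All field values below are hypothetical examples.

command = {
    "source": "Users/Services",
    "user": "user1",
    "target": {"id": "target1", "type": "agent"},
    "action": {
        "name": "restart",
        "args": {"arg1": "/path/to/executable/arg1"},
        "version": "v1",
    },
    "timeout": 30,
}

# Shape sent to /_plugins/_command_manager/commands (no --index flag):
api_payload = {"commands": [command]}

# Shape indexed directly into wazuh-commands (--index flag):
index_document = {
    "@timestamp": "2025-01-01T00:00:00Z",
    "delivery_timestamp": "2025-01-01T00:00:30Z",
    "agent": {"groups": ["group1"]},
    "command": {**command, "status": "pending"},
}
```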
12 ecs/command/fields/custom/agent.yml Normal file
@@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
9 ecs/command/fields/custom/base.yml Normal file
@@ -0,0 +1,9 @@
- name: base
  title: Wazuh base fields
  root: true
  fields:
    - name: delivery_timestamp
      type: date
      level: custom
      description: >
        The latest date-time for the command to be delivered. Calculated as the current timestamp plus the timeout.
79 ecs/command/fields/custom/command.yml Normal file
@@ -0,0 +1,79 @@
---
- name: command
  title: Wazuh commands
  short: Wazuh Inc. custom fields.
  description: >
    This index stores information about Wazuh's commands. These commands can be sent to agents or Wazuh servers.
  type: group
  group: 2
  fields:
    - name: source
      type: keyword
      level: custom
      description: >
        Origin of the request.
    - name: user
      type: keyword
      level: custom
      description: >
        The user that originated the request.
    - name: target.id
      type: keyword
      level: custom
      description: >
        Unique identifier of the destination to send the command to.
    - name: target.type
      type: keyword
      level: custom
      description: >
        The destination type. One of [`group`, `agent`, `server`].
    - name: action.name
      type: keyword
      level: custom
      description: >
        The requested action type. Examples: `restart`, `update`, `change_group`, `apply_policy`, ...
    - name: action.args
      type: object
      level: custom
      description: >
        Command arguments object.
    - name: action.version
      type: keyword
      level: custom
      description: >
        Version of the command's schema.
    - name: timeout
      type: short
      level: custom
      description: >
        Seconds in which the command has to be sent to its target.
    - name: status
      type: keyword
      level: custom
      description: >
        Status within the Command Manager's context. One of ['pending', 'sent', 'success', 'failure'].
    - name: result.code
      type: short
      level: custom
      description: >
        Status code returned by the target.
    - name: result.message
      type: keyword
      level: custom
      description: >
        Result message returned by the target.
    - name: result.data
      type: keyword
      level: custom
      description: >
        Result data returned by the target.
    - name: request_id
      type: keyword
      level: custom
      description: >
        UUID generated by the Command Manager.
    - name: order_id
      type: keyword
      level: custom
      description: >
        UUID generated by the Command Manager.
4 ecs/command/fields/mapping-settings.json Normal file
@@ -0,0 +1,4 @@
{
  "dynamic": "true",
  "date_detection": false
}
13 ecs/command/fields/subset.yml Normal file
@@ -0,0 +1,13 @@
---
name: command
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
      "delivery_timestamp": {}
  agent:
    fields:
      groups: {}
  command:
    fields: "*"
17 ecs/command/fields/template-settings-legacy.json Normal file
@@ -0,0 +1,17 @@
{
  "index_patterns": ["wazuh-commands*"],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "query.default_field": [
        "command.source",
        "command.target.type",
        "command.status",
        "command.action.name"
      ]
    }
  }
}
21 ecs/command/fields/template-settings.json Normal file
@@ -0,0 +1,21 @@
{
  "index_patterns": [
    "wazuh-commands*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "command.source",
          "command.target.type",
          "command.status",
          "command.action.name"
        ]
      }
    }
  }
}
22 ecs/docs/README.md Normal file
@@ -0,0 +1,22 @@
# Wazuh Common Schema

The Wazuh Common Schema is a derivation of the [Elastic Common Schema](https://www.elastic.co/guide/en/ecs/current/ecs-field-reference.html) (ECS), providing a common data schema for the different central components of Wazuh.

- [agent](./agent.md)
- [alerts](alerts.md)
- [command](commands.md)
- [states-fim](states-fim.md)
- [states-inventory-hardware](states-inventory-hardware.md)
- [states-inventory-hotfixes](states-inventory-hotfixes.md)
- [states-inventory-networks](states-inventory-networks.md)
- [states-inventory-packages](states-inventory-packages.md)
- [states-inventory-ports](states-inventory-ports.md)
- [states-inventory-processes](states-inventory-processes.md)
- [states-inventory-system](states-inventory-system.md)
- [states-vulnerabilities](states-vulnerabilities.md)

---

### Useful resources

For more information and additional resources, please refer to the following links:

- [ECS schemas repository](https://github.com/elastic/ecs/tree/main/schemas)
108 ecs/docs/agents.md Normal file
@@ -0,0 +1,108 @@
## `agents` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh/issues/23396#issuecomment-2176402993

Based on ECS [Agent Fields](https://www.elastic.co/guide/en/ecs/current/ecs-agent.html).

|     | Field                | Type    | Description                                                             | Example                            |
| --- | -------------------- | ------- | ----------------------------------------------------------------------- | ---------------------------------- |
|     | `agent.id`           | keyword | Unique identifier of this agent.                                        | `8a4f500d`                         |
|     | `agent.name`         | keyword | Custom name of the agent.                                               | `foo`                              |
| \*  | `agent.groups`       | keyword | List of groups the agent belongs to.                                    | `["group1", "group2"]`             |
| \*  | `agent.key`          | keyword | The registration key of the agent.                                      | `BfDbq0PpcLl9iWatJjY1shGvuQ4KXyOR` |
|     | `agent.type`         | keyword | Type of agent.                                                          | `endpoint`                         |
|     | `agent.version`      | keyword | Version of the agent.                                                   | `6.0.0-rc2`                        |
| \*  | `agent.is_connected` | boolean | Agent's interpreted connection status, derived from `agent.last_login`. |                                    |
| \*  | `agent.last_login`   | date    | The last time the agent logged in.                                      | `11/11/2024 00:00:00`              |
|     | `host.ip`            | ip      | Host IP addresses. Note: this field should contain an array of values.  | `["192.168.56.11", "10.54.27.1"]`  |
|     | `host.os.full`       | keyword | Operating system name, including the version or code name.              | `Mac OS Mojave`                    |

\* Custom field.

### ECS mapping

```yml
---
name: agent
fields:
  base:
    fields:
      tags: []
  agent:
    fields:
      id: {}
      name: {}
      type: {}
      version: {}
      groups: {}
      key: {}
      last_login: {}
      is_connected: {}
  host:
    fields:
      ip: {}
      os:
        fields:
          full: {}
```

```yml
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        The groups the agent belongs to.
    - name: key
      type: keyword
      level: custom
      description: >
        The agent's registration key.
    - name: last_login
      type: date
      level: custom
      description: >
        The agent's last login.
    - name: is_connected
      type: boolean
      level: custom
      description: >
        Agent's interpreted connection status, derived from `agent.last_login`.
```

### Index settings

```json
{
  "index_patterns": ["wazuh-agents*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "agent.name",
          "agent.type",
          "agent.version",
          "host.os.full",
          "host.ip"
        ]
      }
    }
  }
}
```
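To make the field table above concrete, here is a hypothetical example document for the `wazuh-agents*` pattern (a sketch; all values are the illustrative examples from the table, and the ISO-8601 form of `last_login` is an assumption):

```python
# Hypothetical wazuh-agents document matching the field table above.
# Values are the table's own examples; last_login rendered as ISO-8601.
agent_doc = {
    "agent": {
        "id": "8a4f500d",
        "name": "foo",
        "groups": ["group1", "group2"],     # custom field
        "key": "BfDbq0PpcLl9iWatJjY1shGvuQ4KXyOR",  # custom field
        "type": "endpoint",
        "version": "6.0.0-rc2",
        "is_connected": True,               # custom, derived from last_login
        "last_login": "2024-11-11T00:00:00Z",
    },
    "host": {
        "ip": ["192.168.56.11", "10.54.27.1"],  # array of values
        "os": {"full": "Mac OS Mojave"},
    },
}
```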
644 ecs/docs/alerts.md Normal file
@@ -0,0 +1,644 @@
## `wazuh-alerts-5.x` time series index

Stateless index.

### Fields summary

For this stage, we use all of the ECS fields with the default ECS mapping; no custom fields are added.

- [ECS main mappings](https://github.com/elastic/ecs/blob/v8.11.0/schemas/subsets/main.yml)

The generated template must match [this one](https://github.com/elastic/ecs/blob/v8.11.0/generated/elasticsearch/legacy/template.json).

### ECS mapping

```yml
---
name: main
fields:
  base:
    fields: "*"
  agent:
    fields: "*"
  as:
    fields: "*"
  client:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  cloud:
    fields: "*"
  code_signature:
    fields: "*"
  container:
    fields: "*"
  data_stream:
    fields: "*"
  destination:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  device:
    fields: "*"
  dll:
    fields: "*"
  dns:
    fields: "*"
  ecs:
    fields: "*"
  elf:
    fields: "*"
  email:
    fields: "*"
  error:
    fields: "*"
  event:
    fields: "*"
  faas:
    fields: "*"
  file:
    fields: "*"
  geo:
    fields: "*"
  group:
    fields: "*"
  hash:
    fields: "*"
  host:
    fields: "*"
  http:
    fields: "*"
  interface:
    fields: "*"
  log:
    fields: "*"
  macho:
    fields: "*"
  network:
    fields: "*"
  observer:
    fields: "*"
  orchestrator:
    fields: "*"
  organization:
    fields: "*"
  os:
    fields: "*"
  package:
    fields: "*"
  pe:
    fields: "*"
  process:
    fields:
      args: {}
      args_count: {}
      code_signature:
        fields: "*"
      command_line: {}
      elf:
        fields: "*"
      end: {}
      entity_id: {}
      entry_leader:
        fields:
          args: {}
          args_count: {}
          command_line: {}
          entity_id: {}
          entry_meta:
            fields:
              type: {}
          source:
            fields:
              ip: {}
          executable: {}
          interactive: {}
          name: {}
          parent:
            fields:
              entity_id: {}
              pid: {}
              vpid: {}
              start: {}
              session_leader:
                fields:
                  entity_id: {}
                  pid: {}
                  vpid: {}
                  start: {}
          pid: {}
          vpid: {}
          same_as_process: {}
          start: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          working_directory: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
          attested_user:
            fields:
              id: {}
              name: {}
          attested_groups:
            fields:
              name: {}
      entry_meta:
        fields:
          type:
            docs_only: True
      env_vars: {}
      executable: {}
      exit_code: {}
      group_leader:
        fields:
          args: {}
          args_count: {}
          command_line: {}
          entity_id: {}
          executable: {}
          interactive: {}
          name: {}
          pid: {}
          vpid: {}
          same_as_process: {}
          start: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          working_directory: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
      hash:
        fields: "*"
      interactive: {}
      io:
        fields: "*"
      macho:
        fields: "*"
      name: {}
      parent:
        fields:
          args: {}
          args_count: {}
          code_signature:
            fields: "*"
          command_line: {}
          elf:
            fields: "*"
          end: {}
          entity_id: {}
          executable: {}
          exit_code: {}
          group_leader:
            fields:
              entity_id: {}
              pid: {}
              vpid: {}
              start: {}
          hash:
            fields: "*"
          interactive: {}
          macho:
            fields: "*"
          name: {}
          pe:
            fields: "*"
          pgid: {}
          pid: {}
          vpid: {}
          start: {}
          thread:
            fields:
              id: {}
              name: {}
              capabilities:
                fields:
                  effective: {}
                  permitted: {}
          title: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          uptime: {}
          working_directory: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
      pe:
        fields: "*"
      pgid: {}
      pid: {}
      vpid: {}
      previous:
        fields:
          args: {}
          args_count: {}
          executable: {}
      real_group:
        fields:
          id: {}
          name: {}
      real_user:
        fields:
          id: {}
          name: {}
      same_as_process:
        docs_only: True
      saved_group:
        fields:
          id: {}
          name: {}
      saved_user:
        fields:
          id: {}
          name: {}
      start: {}
      supplemental_groups:
        fields:
          id: {}
          name: {}
      session_leader:
        fields:
          args: {}
          args_count: {}
          command_line: {}
          entity_id: {}
          executable: {}
          interactive: {}
          name: {}
          pid: {}
          vpid: {}
          same_as_process: {}
          start: {}
          tty:
            fields:
              char_device:
                fields:
                  major: {}
                  minor: {}
          working_directory: {}
          parent:
            fields:
              entity_id: {}
              pid: {}
              vpid: {}
              start: {}
              session_leader:
                fields:
                  entity_id: {}
                  pid: {}
                  vpid: {}
                  start: {}
          user:
            fields:
              id: {}
              name: {}
          real_user:
            fields:
              id: {}
              name: {}
          saved_user:
            fields:
              id: {}
              name: {}
          group:
            fields:
              id: {}
              name: {}
          real_group:
            fields:
              id: {}
              name: {}
          saved_group:
            fields:
              id: {}
              name: {}
          supplemental_groups:
            fields:
              id: {}
              name: {}
      thread:
        fields:
          id: {}
          name: {}
          capabilities:
            fields:
              effective: {}
              permitted: {}
      title: {}
      tty:
        fields: "*"
      uptime: {}
      user:
        fields:
          id: {}
          name: {}
      working_directory: {}
  registry:
    fields: "*"
  related:
    fields: "*"
  risk:
    fields: "*"
  rule:
    fields: "*"
  server:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  service:
    fields: "*"
  source:
    fields:
      address: {}
      as:
        fields: "*"
      bytes: {}
      domain: {}
      geo:
        fields: "*"
      ip: {}
      mac: {}
      nat:
        fields:
          ip: {}
          port: {}
      packets: {}
      port: {}
      subdomain: {}
      registered_domain: {}
      top_level_domain: {}
      user:
        fields:
          domain: {}
          email: {}
          full_name: {}
          group:
            fields: "*"
          hash: {}
          id: {}
          name: {}
          roles: {}
  threat:
    fields: "*"
  tls:
    fields: "*"
  tracing:
    fields: "*"
  url:
    fields: "*"
  user_agent:
    fields: "*"
  user:
    fields:
      changes:
        fields:
          domain: {}
          email: {}
          group:
            fields: "*"
          full_name: {}
          hash: {}
          id: {}
          name: {}
          roles: {}
      domain: {}
      effective:
        fields:
          domain: {}
          email: {}
          group:
            fields: "*"
          full_name: {}
          hash: {}
          id: {}
          name: {}
          roles: {}
      email: {}
      group:
        fields: "*"
      full_name: {}
      hash: {}
      id: {}
      name: {}
      risk:
        fields: "*"
      roles: {}
      target:
        fields:
          domain: {}
          email: {}
          group:
            fields: "*"
          full_name: {}
          hash: {}
          id: {}
          name: {}
          roles: {}
  vlan:
    fields: "*"
  vulnerability:
    fields: "*"
  x509:
    fields: "*"
```

### Template settings

```json
{
  "index_patterns": [
    "wazuh-alerts-5.x-*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "mapping": {
          "total_fields": {
            "limit": 2500
          }
        },
        "refresh_interval": "5s"
      }
    }
  }
}
```

### Mapping settings

```json
{
  "dynamic": true,
  "date_detection": false
}
```
168
ecs/docs/commands.md
Normal file
168
ecs/docs/commands.md
Normal file
@ -0,0 +1,168 @@
|
||||
## `commands` index data model
|
||||
|
||||
> [!NOTE]
|
||||
> rev 0.1 - September 18th, 2024: Add initial model.
|
||||
> rev 0.2 - September 30th, 2024: Change type of `request_id`, `order_id` and `id` to keyword.
|
||||
> rev 0.3 - October 3rd, 2024: Change descriptions for `command.type`, `command.action.type`, `command.request_id`, `command.order_id`.
|
||||
> rev 0.4 - October 9th, 2024: Apply changes described in https://github.com/wazuh/wazuh-indexer-plugins/issues/96#issue-2576028654.
|
||||
> rev 0.5 - December 3rd, 2024: Added `@timestamp` and `delivery_timestamp` date fields.
|
||||
> rev 0.6 - January 24th, 2025: Rename index to `wazuh-commands`. The index is now visible to users.
|
||||
|
||||
### Fields summary
|
||||
|
||||
This index stores information about the commands executed by the agents. The index appears in 5.0.0 for the first time.
|
||||
|
||||
| | Field | Type | Description |
|
||||
| --- | ------------------------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| \* | `agent.groups` | keyword | List of groups the agent belong to. |
|
||||
| \* | `command.source` | keyword | Origin of the request. One of [`Users/Services` (via Management API), `Engine` (via Management API), `Content manager` (directly)]. |
|
||||
| \* | `command.user` | keyword | The user that originated the request. This user may represent a Management API or Indexer API user depending on the source. |
|
||||
| \* | `command.target.id` | keyword | Unique identifier of the destination to send the command to. |
|
||||
| \* | `command.target.type` | keyword | The destination type. One of [`group`, `agent`, `server`], |
|
||||
| \* | `command.action.name` | keyword | The requested action type. Examples: `restart`, `update`, `change_group`, `apply_policy`, ... |
|
||||
| \* | `command.action.args` | object | Command arguments. The Object type allows for ad-hoc format of the value. |
|
||||
| \* | `command.action.version` | keyword | Version of the command's schema. |
|
||||
| \* | `command.timeout` | short | Time window in which the command has to be sent to its target. |
|
||||
| \* | `command.status` | keyword | Status within the Command Manager's context. One of [`pending`, `sent`, `success`, `failure`]. |
|
||||
| \* | `command.result.code` | short | Status code returned by the target. |
|
||||
| \* | `command.result.message` | keyword | Result message returned by the target. |
|
||||
| \* | `command.result.data` | keyword | Result data returned by the target. |
|
||||
| \* | `command.request_id` | keyword | UUID generated by the Command Manager. |
|
||||
| \* | `command.order_id` | keyword | UUID generated by the Command Manager. |
|
||||
|
||||
\* Custom field.
|
||||
|
||||
### ECS mapping
|
||||
|
||||
```yml
|
||||
---
|
||||
name: command
|
||||
fields:
|
||||
base:
|
||||
fields:
|
||||
tags: []
|
||||
"@timestamp": {}
|
||||
"delivery_timestamp": {}
|
||||
agent:
|
||||
fields:
|
||||
groups: {}
|
||||
command:
|
||||
fields: "*"
|
||||
```

```yml
---
- name: command
  title: Wazuh commands
  short: Wazuh Inc. custom fields.
  description: >
    This index stores information about Wazuh's commands. These commands can be sent to agents or Wazuh servers.
  type: group
  group: 2
  fields:
    - name: source
      type: keyword
      level: custom
      description: >
        Origin of the request.
    - name: user
      type: keyword
      level: custom
      description: >
        The user that originated the request.
    - name: target.id
      type: keyword
      level: custom
      description: >
        Unique identifier of the destination to send the command to.
    - name: target.type
      type: keyword
      level: custom
      description: >
        The destination type. One of [`group`, `agent`, `server`].
    - name: action.name
      type: keyword
      level: custom
      description: >
        The requested action type. Examples: `restart`, `update`, `change_group`, `apply_policy`, ...
    - name: action.args
      type: keyword
      level: custom
      description: >
        Array of command arguments, starting with the absolute path to the executable.
    - name: action.version
      type: keyword
      level: custom
      description: >
        Version of the command's schema.
    - name: timeout
      type: short
      level: custom
      description: >
        Time window in which the command has to be sent to its target.
    - name: status
      type: keyword
      level: custom
      description: >
        Status within the Command Manager's context. One of [`pending`, `sent`, `success`, `failure`].
    - name: result.code
      type: short
      level: custom
      description: >
        Status code returned by the target.
    - name: result.message
      type: keyword
      level: custom
      description: >
        Result message returned by the target.
    - name: result.data
      type: keyword
      level: custom
      description: >
        Result data returned by the target.
    - name: request_id
      type: keyword
      level: custom
      description: >
        UUID generated by the Command Manager.
    - name: order_id
      type: keyword
      level: custom
      description: >
        UUID generated by the Command Manager.
```

```yml
- name: base
  title: Wazuh base fields
  root: true
  fields:
    - name: delivery_timestamp
      type: date
      level: custom
      description: >
        The latest date-time for the command to be delivered. Calculated as the current timestamp plus the timeout.
```
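As a minimal sketch of this calculation (assuming the `command.timeout` value is interpreted in seconds, which this document does not state explicitly), a producer could derive `delivery_timestamp` like this:

```python
from datetime import datetime, timedelta, timezone

def delivery_timestamp(created_at: datetime, timeout_seconds: int) -> str:
    """Latest delivery date-time: creation time plus the timeout window."""
    deadline = created_at + timedelta(seconds=timeout_seconds)
    # Format as ISO 8601 with millisecond precision, as in this document's examples.
    return deadline.strftime("%Y-%m-%dT%H:%M:%S.") + f"{deadline.microsecond // 1000:03d}Z"

created = datetime(2016, 5, 23, 8, 5, 34, 853000, tzinfo=timezone.utc)
print(delivery_timestamp(created, 30))  # 2016-05-23T08:06:04.853Z
```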

### Index settings

```json
{
  "index_patterns": ["wazuh-commands*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "command.source",
          "command.target.type",
          "command.status",
          "command.action.name"
        ]
      }
    }
  }
}
```
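Templates like the one above are installed through the indexer's REST API; a hedged sketch of building that request (the `_index_template` endpoint is the standard OpenSearch composable-template API, but the template name used here is illustrative):

```python
import json

def build_template_request(name: str, template: dict) -> tuple[str, str, str]:
    """Build the (method, path, body) triple for installing an index
    template through the OpenSearch REST API (PUT _index_template/<name>)."""
    return "PUT", f"/_index_template/{name}", json.dumps(template)

template = {
    "index_patterns": ["wazuh-commands*"],
    "priority": 1,
    "template": {"settings": {"index": {"number_of_shards": "1"}}},
}
method, path, body = build_template_request("wazuh-commands", template)
print(method, path)  # PUT /_index_template/wazuh-commands
```

The triple can then be handed to any HTTP client pointed at the cluster.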

81 ecs/docs/inventory-hardware.md Normal file
@@ -0,0 +1,81 @@

## `wazuh-states-inventory-hardware` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189837612

Based on ECS:

- [Host Fields](https://www.elastic.co/guide/en/ecs/current/ecs-host.html).
- [Observer Fields](https://www.elastic.co/guide/en/ecs/current/ecs-observer.html).

|     | Field name                    | Data type | Description                          | Example                    |
| --- | ----------------------------- | --------- | ------------------------------------ | -------------------------- |
|     | `agent.*`                     | object    | All the agent fields.                |                            |
|     | `@timestamp`                  | date      | Date/time when the event originated. | `2016-05-23T08:05:34.853Z` |
|     | `observer.serial_number`      | keyword   | Observer serial number.              |                            |
| \*  | `host.cpu.name`               | keyword   | Name of the CPU.                     |                            |
| \*  | `host.cpu.cores`              | long      | Number of CPU cores.                 |                            |
| \*  | `host.cpu.speed`              | long      | Speed of the CPU in MHz.             |                            |
| \*  | `host.memory.total`           | long      | Total RAM in the system.             |                            |
| \*  | `host.memory.free`            | long      | Free RAM in the system.              |                            |
| \*  | `host.memory.used.percentage` | long      | RAM usage as a percentage.           |                            |

\* Custom fields.

### ECS mapping

```yml
---
name: wazuh-states-inventory-hardware
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  observer:
    fields:
      serial_number: {}
  host:
    fields:
      memory:
        fields:
          total: {}
          free: {}
          used:
            fields:
              percentage: {}
      cpu:
        fields:
          name: {}
          cores: {}
          speed: {}
```

### Index settings

```json
{
  "index_patterns": ["wazuh-states-inventory-hardware*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": ["observer.board_serial"]
      }
    }
  }
}
```
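To make the shape of this mapping concrete, here is a sketch of a single hardware state document; all values are illustrative, not taken from a real scan:

```python
import json

# Illustrative document shaped after the mapping above; every value is an example.
hardware_state = {
    "@timestamp": "2016-05-23T08:05:34.853Z",
    "agent": {"id": "001", "name": "agent-01", "groups": ["default"]},
    "observer": {"serial_number": "0123456789"},
    "host": {
        "cpu": {"name": "Intel(R) Core(TM) i7", "cores": 8, "speed": 2400},
        "memory": {"total": 16384, "free": 4096, "used": {"percentage": 75}},
    },
}
print(json.dumps(hardware_state, indent=2))
```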

66 ecs/docs/inventory-hotfixes.md Normal file
@@ -0,0 +1,66 @@

## `wazuh-states-inventory-hotfixes` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189837612

Based on ECS:

- [Package Fields](https://www.elastic.co/guide/en/ecs/current/ecs-package.html).

|     | Field name            | Data type | Description            | Example                    |
| --- | --------------------- | --------- | ---------------------- | -------------------------- |
|     | `agent.*`             | object    | All the agent fields.  |                            |
|     | `@timestamp`          | date      | Timestamp of the scan. | `2016-05-23T08:05:34.853Z` |
| \*  | `package.hotfix.name` | keyword   | Name of the hotfix.    |                            |

\* Custom fields.

### ECS mapping

```yml
---
name: wazuh-states-inventory-hotfixes
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  package:
    fields:
      hotfix:
        fields:
          name: {}
```

### Index settings

```json
{
  "index_patterns": [
    "wazuh-states-inventory-hotfixes*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "package.hotfix.name"
        ]
      }
    }
  }
}
```

116 ecs/docs/inventory-networks.md Normal file
@@ -0,0 +1,116 @@

## `wazuh-states-inventory-networks` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189837612

Based on ECS:

- [Observer Fields](https://www.elastic.co/guide/en/ecs/current/ecs-observer.html).
- [Interface Fields](https://www.elastic.co/guide/en/ecs/current/ecs-interface.html).
- [Network Fields](https://www.elastic.co/guide/en/ecs/current/ecs-network.html).

|     | Field name                         | Data type | Description                                                                     | Example                                |
| --- | ---------------------------------- | --------- | ------------------------------------------------------------------------------- | -------------------------------------- |
|     | `agent.*`                          | object    | All the agent fields.                                                           |                                        |
|     | `@timestamp`                       | date      | Date/time when the event originated.                                            | `2016-05-23T08:05:34.853Z`             |
|     | `device.id`                        | keyword   | The unique identifier of a device.                                              | `00000000-54b3-e7c7-0000-000046bffd97` |
|     | `host.ip`                          | ip        | Host IP addresses. Note: this field should contain an array of values.          | `["192.168.56.11", "10.54.27.1"]`      |
|     | `host.mac`                         | keyword   | Host MAC addresses.                                                             |                                        |
|     | `host.network.egress.bytes`        | long      | The number of bytes sent on all network interfaces.                             |                                        |
|     | `host.network.egress.packets`      | long      | The number of packets sent on all network interfaces.                           |                                        |
|     | `host.network.ingress.bytes`       | long      | The number of bytes received on all network interfaces.                         |                                        |
|     | `host.network.ingress.packets`     | long      | The number of packets received on all network interfaces.                       |                                        |
|     | `network.protocol`                 | keyword   | Application protocol name.                                                      | `http`                                 |
|     | `network.type`                     | keyword   | In the OSI model, this would be the network layer: ipv4, ipv6, ipsec, pim, etc. | `ipv4`                                 |
|     | `observer.ingress.interface.alias` | keyword   | Interface alias.                                                                | `outside`                              |
|     | `observer.ingress.interface.name`  | keyword   | Interface name.                                                                 | `eth0`                                 |
| \*  | `host.network.egress.drops`        | long      | Number of dropped transmitted packets.                                          |                                        |
| \*  | `host.network.egress.errors`       | long      | Number of transmission errors.                                                  |                                        |
| \*  | `host.network.ingress.drops`       | long      | Number of dropped received packets.                                             |                                        |
| \*  | `host.network.ingress.errors`      | long      | Number of reception errors.                                                     |                                        |
| \*  | `interface.mtu`                    | long      | Maximum transmission unit size.                                                 |                                        |
| \*  | `interface.state`                  | keyword   | State of the network interface.                                                 |                                        |
| \*  | `interface.type`                   | keyword   | Interface type (e.g. "wireless" or "ethernet").                                 |                                        |
| \*  | `network.broadcast`                | ip        | Broadcast address.                                                              |                                        |
| \*  | `network.dhcp`                     | keyword   | DHCP status (enabled, disabled, unknown, BOOTP).                                |                                        |
| \*  | `network.gateway`                  | ip        | Gateway address.                                                                |                                        |
| \*  | `network.metric`                   | long      | Metric of the network protocol.                                                 |                                        |
| \*  | `network.netmask`                  | ip        | Network mask.                                                                   |                                        |

\* Custom fields.

### ECS mapping

```yml
---
name: wazuh-states-inventory-networks
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  host:
    fields: "*"
  interface:
    fields:
      mtu: {}
      state: {}
      type: {}
  network:
    fields:
      broadcast: {}
      dhcp: {}
      gateway: {}
      metric: {}
      netmask: {}
      protocol: {}
      type: {}
  observer:
    fields:
      ingress:
        fields:
          interface:
            fields:
              alias: {}
              name: {}
```

### Index settings

```json
{
  "index_patterns": [
    "wazuh-states-inventory-networks*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "device.id",
          "event.id",
          "host.ip",
          "observer.ingress.interface.name",
          "observer.ingress.interface.alias",
          "process.name"
        ]
      }
    }
  }
}
```

95 ecs/docs/inventory-packages.md Normal file
@@ -0,0 +1,95 @@

## `wazuh-states-inventory-packages` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189837612

Based on ECS:

- [Package Fields](https://www.elastic.co/guide/en/ecs/current/ecs-package.html).

|     | Field name             | Data type | Description                          | Example |
| --- | ---------------------- | --------- | ------------------------------------ | ------- |
|     | `agent.*`              | object    | All the agent fields.                |         |
|     | `@timestamp`           | date      | Timestamp of the scan.               |         |
|     | `package.architecture` | keyword   | Package architecture.                |         |
|     | `package.description`  | keyword   | Description of the package.          |         |
|     | `package.installed`    | date      | Time when the package was installed. |         |
|     | `package.name`         | keyword   | Package name.                        |         |
|     | `package.path`         | keyword   | Path where the package is installed. |         |
|     | `package.size`         | long      | Package size in bytes.               |         |
|     | `package.type`         | keyword   | Package type.                        |         |
|     | `package.version`      | keyword   | Package version.                     |         |

\* Custom field.

<details><summary>Fields not included in ECS</summary>
<p>

|     | Field name | ECS field name    | Data type | Description                                                                     |
| --- | ---------- | ----------------- | --------- | ------------------------------------------------------------------------------- |
| ?   | priority   |                   |           | Priority of the program.                                                        |
| ?   | section    |                   |           | Section of the program category the package belongs to in DEB package managers. |
| X   | vendor     | package.reference | keyword   | Home page or reference URL of the software in this package, if available.       |
| ?   | multiarch  |                   |           | Multi-architecture compatibility.                                               |
| X   | source     |                   |           | Source of the program (package manager).                                        |

</p>
</details>

### ECS mapping

```yml
---
name: wazuh-states-inventory-packages
fields:
  base:
    fields:
      "@timestamp": {}
      tags: []
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  package:
    fields:
      architecture: ""
      description: ""
      installed: {}
      name: ""
      path: ""
      size: {}
      type: ""
      version: ""
```

### Index settings

```json
{
  "index_patterns": ["wazuh-states-inventory-packages*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "package.architecture",
          "package.name",
          "package.version",
          "package.type"
        ]
      }
    }
  }
}
```

112 ecs/docs/inventory-ports.md Normal file
@@ -0,0 +1,112 @@

## `wazuh-states-inventory-ports` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189837612

Based on ECS:

- [Interface Fields](https://www.elastic.co/guide/en/ecs/current/ecs-interface.html).
- [Network Fields](https://www.elastic.co/guide/en/ecs/current/ecs-network.html).
- [Host Fields](https://www.elastic.co/guide/en/ecs/current/ecs-host.html).

|     | Field name                   | Data type | Description                                    | Example                                |
| --- | ---------------------------- | --------- | ---------------------------------------------- | -------------------------------------- |
|     | `agent.*`                    | object    | All the agent fields.                          |                                        |
|     | `@timestamp`                 | date      | Timestamp of the scan.                         | `2016-05-23T08:05:34.853Z`             |
|     | `destination.ip`             | ip        | IP address of the destination.                 | `["192.168.0.100"]`                    |
|     | `destination.port`           | long      | Port of the destination.                       |                                        |
|     | `device.id`                  | keyword   | The unique identifier of a device.             | `00000000-54b3-e7c7-0000-000046bffd97` |
|     | `file.inode`                 | keyword   | Inode representing the file in the filesystem. | `256383`                               |
|     | `network.protocol`           | keyword   | Application protocol name.                     | `http`                                 |
|     | `process.name`               | keyword   | Process name.                                  | `ssh`                                  |
|     | `process.pid`                | long      | Process ID.                                    | `4242`                                 |
|     | `source.ip`                  | ip        | IP address of the source.                      | `["192.168.0.100"]`                    |
|     | `source.port`                | long      | Port of the source.                            |                                        |
| \*  | `host.network.egress.queue`  | long      | Transmit queue length.                         |                                        |
| \*  | `host.network.ingress.queue` | long      | Receive queue length.                          |                                        |
| \*  | `interface.state`            | keyword   | State of the network interface.                |                                        |

\* Custom fields.

### ECS mapping

```yml
---
name: wazuh-states-inventory-ports
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  destination:
    fields:
      ip: {}
      port: {}
  device:
    fields:
      id: {}
  file:
    fields:
      inode: {}
  host:
    fields:
      network:
        fields:
          egress:
            fields:
              queue: {}
          ingress:
            fields:
              queue: {}
  network:
    fields:
      protocol: {}
  process:
    fields:
      name: {}
      pid: {}
  source:
    fields:
      ip: {}
      port: {}
  interface:
    fields:
      state: {}
```

### Index settings

```json
{
  "index_patterns": [
    "wazuh-states-inventory-ports*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "process.name",
          "source.ip",
          "destination.ip"
        ]
      }
    }
  }
}
```

138 ecs/docs/inventory-processes.md Normal file
@@ -0,0 +1,138 @@

## `wazuh-states-inventory-processes` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189837612

Based on ECS:

- [Process Fields](https://www.elastic.co/guide/en/ecs/current/ecs-process.html).

|     | Field name                      | Data type | Description                                                                                          | Examples                                           | Comments                                                   |
| --- | ------------------------------- | --------- | ---------------------------------------------------------------------------------------------------- | -------------------------------------------------- | ---------------------------------------------------------- |
|     | `agent.*`                       | object    | All the agent fields.                                                                                |                                                    |                                                            |
|     | `@timestamp`                    | date      | Date/time when the event originated.                                                                 | `2016-05-23T08:05:34.853Z`                         |                                                            |
|     | `process.args`                  | keyword   | Array of process arguments.                                                                          | `["/usr/bin/ssh", "-l", "user", "10.0.0.16"]`      |                                                            |
|     | `process.command_line`          | wildcard  | Full command line that started the process.                                                          | `/usr/bin/ssh -l user 10.0.0.16`                   |                                                            |
|     | `process.name`                  | keyword   | Process name.                                                                                        | `ssh`                                              |                                                            |
|     | `process.parent.pid`            | long      | Parent process ID.                                                                                   | `4242`                                             |                                                            |
|     | `process.pid`                   | long      | Process ID.                                                                                          | `4242`                                             |                                                            |
|     | `process.real_group.id`         | keyword   | Unique identifier for the group on the system/platform.                                              |                                                    |                                                            |
|     | `process.real_user.id`          | keyword   | Unique identifier of the user.                                                                       | `S-1-5-21-202424912787-2692429404-2351956786-1000` |                                                            |
|     | `process.saved_group.id`        | keyword   | Unique identifier for the group on the system/platform.                                              |                                                    |                                                            |
|     | `process.saved_user.id`         | keyword   | Unique identifier of the user.                                                                       | `S-1-5-21-202424912787-2692429404-2351956786-1000` |                                                            |
|     | `process.start`                 | date      | The time the process started.                                                                        | `2016-05-23T08:05:34.853Z`                         |                                                            |
|     | `process.user.id`               | keyword   | Unique identifier of the user.                                                                       | `S-1-5-21-202424912787-2692429404-2351956786-1000` |                                                            |
| !   | `process.thread.id`             | long      | Thread ID.                                                                                           |                                                    | `thread.group` is **not part of ECS;** but `thread.id` is. |
|     | `process.tty.char_device.major` | object    | Information about the controlling TTY device. If set, the process belongs to an interactive session. |                                                    | Needs clarification.                                       |
| \*  | `process.group.id`              | keyword   | Unique identifier for the effective group on the system/platform.                                    |                                                    |                                                            |

\* Custom field.

!: Fields awaiting analysis.

<details><summary>Fields not included in ECS</summary>
<p>

|     | Field name | ECS field name            | Data type          | Description                      | Example | Comments                                                   |
| --- | ---------- | ------------------------- | ------------------ | -------------------------------- | ------- | ---------------------------------------------------------- |
| x   | state      | `process.state`           | **No ECS mapping** | State of the process             |         | **Not part of ECS;** maybe as a custom field.              |
| x   | utime      | `process.cpu.user`        | **No ECS mapping** | User mode CPU time               |         | **Not part of ECS;** maybe as a custom field.              |
| x   | stime      | `process.cpu.system`      | **No ECS mapping** | Kernel mode CPU time             |         | **Not part of ECS;** maybe as a custom field.              |
| x?  | fgroup     | `process.group.file.id`   | **No ECS mapping** | Unknown                          |         |                                                            |
| x   | priority   | `process.priority`        | **No ECS mapping** | Process priority                 |         | **Not part of ECS;** maybe as a custom field.              |
| x   | nice       | `process.nice`            | **No ECS mapping** | Nice value                       |         | **Not part of ECS;** maybe as a custom field.              |
| x   | size       | `process.size`            | **No ECS mapping** | Process size                     |         | **Not part of ECS;** maybe as a custom field.              |
| x   | vm_size    | `process.vm.size`         | **No ECS mapping** | Virtual memory size              |         | **Not part of ECS;** maybe as a custom field.              |
| x   | resident   | `process.memory.resident` | **No ECS mapping** | Resident set size                |         | **Not part of ECS;** maybe as a custom field.              |
| x   | share      | `process.memory.share`    | **No ECS mapping** | Shared memory size               |         | **Not part of ECS;** maybe as a custom field.              |
| !   | pgrp       | `process.group.id`        | keyword            | Process group                    |         | Isn't it duplicated?                                       |
| x   | session    | `process.session`         | **No ECS mapping** | Session ID                       |         | **Not part of ECS;** needs clarification.                  |
| x   | nlwp       | `process.nlwp`            | **No ECS mapping** | Number of light-weight processes |         | **Not part of ECS;** needs clarification.                  |
| !   | tgid       | `process.thread.id`       | **No ECS mapping** | Thread group ID                  |         | `thread.group` is **not part of ECS;** but `thread.id` is. |
| x   | processor  | `host.cpu.processor`      | **No ECS mapping** | Processor number                 |         | No ECS field refers to the core number of the CPU.         |

</p>
</details>

### ECS mapping

```yml
---
name: wazuh-states-inventory-processes
fields:
  base:
    fields:
      "@timestamp": {}
      tags: []
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  process:
    fields:
      pid: {}
      name: ""
      parent:
        fields:
          pid: {}
      command_line: ""
      args: ""
      user:
        fields:
          id: ""
      real_user:
        fields:
          id: ""
      saved_user:
        fields:
          id: ""
      group:
        fields:
          id: ""
      real_group:
        fields:
          id: ""
      saved_group:
        fields:
          id: ""
      start: {}
      thread:
        fields:
          id: ""
      tty:
        fields:
          char_device:
            fields:
              major: ""
```

### Index settings

```json
{
  "index_patterns": ["wazuh-states-inventory-processes*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "process.name",
          "process.pid",
          "process.command_line"
        ]
      }
    }
  }
}
```
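To illustrate the core `process.*` fields of this index, a minimal sketch that describes the current process (this is only an example of the document shape, not how the Wazuh agent collects the data):

```python
import os
import sys

# Illustrative: map the current process onto a few `process.*` fields.
process_state = {
    "process": {
        "pid": os.getpid(),
        "parent": {"pid": os.getppid()},
        "args": list(sys.argv),
        "name": os.path.basename(sys.argv[0]) or "python",
    }
}
print(process_state)
```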

98 ecs/docs/inventory-system.md Normal file
@@ -0,0 +1,98 @@

## `wazuh-states-inventory-system` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189837612

Based on ECS:

- [Host Fields](https://www.elastic.co/guide/en/ecs/current/ecs-host.html).
- [Operating System Fields](https://www.elastic.co/guide/en/ecs/current/ecs-os.html).

|     | Field name          | Data type | Description                                                  | Example                    |
| --- | ------------------- | --------- | ------------------------------------------------------------ | -------------------------- |
|     | `agent.*`           | object    | All the agent fields.                                        |                            |
|     | `@timestamp`        | date      | Date/time when the event originated.                         | `2016-05-23T08:05:34.853Z` |
|     | `host.architecture` | keyword   | Operating system architecture.                               | `x86_64`                   |
|     | `host.hostname`     | keyword   | Hostname of the host.                                        |                            |
|     | `host.os.full`      | keyword   | Operating system name, including the version or code name.   | `Mac OS Mojave`            |
|     | `host.os.kernel`    | keyword   | Operating system kernel version as a raw string.             | `4.4.0-112-generic`        |
|     | `host.os.name`      | keyword   | Operating system name, without the version.                  | `Mac OS X`                 |
|     | `host.os.platform`  | keyword   | Operating system platform (such as centos, ubuntu, windows). | `darwin`                   |
|     | `host.os.type`      | keyword   | One of [linux, macos, unix, windows, ios, android].          | `macos`                    |
|     | `host.os.version`   | keyword   | Operating system version as a raw string.                    | `10.14.1`                  |

\* Custom field.

<details><summary>Details</summary>
<p>

Removed fields:

- os_display_version
- os_major (can be extracted from os_version)
- os_minor (can be extracted from os_version)
- os_patch (can be extracted from os_version)
- os_release
- reference
- release
- scan_id
- sysname
- version
- checksum

Available fields:

- `os.family`
- `host.name`

</p>
</details>

### ECS mapping

```yml
---
name: wazuh-states-inventory-system
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  host:
    fields: "*"
```

### Index settings

```json
{
  "index_patterns": ["wazuh-states-inventory-system*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "host.name",
          "host.os.type",
          "host.os.version"
        ]
      }
    }
  }
}
```
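A few of these `host.*` values can be gathered with Python's standard `platform` module; the mapping below from `platform` calls to ECS fields is a rough, assumed approximation (the Wazuh agent collects them differently):

```python
import platform

def collect_host_fields() -> dict:
    """Sketch a partial `wazuh-states-inventory-system` document
    for the machine running this script."""
    return {
        "host": {
            "architecture": platform.machine(),     # e.g. "x86_64"
            "hostname": platform.node(),
            "os": {
                "kernel": platform.release(),       # raw kernel version string
                "name": platform.system(),          # e.g. "Linux"
                "type": platform.system().lower(),  # rough approximation
            },
        }
    }

print(collect_host_fields())
```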

106 ecs/docs/states-fim.md Normal file
@@ -0,0 +1,106 @@

## `wazuh-states-fim` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/issues/282#issuecomment-2189377542

Based on ECS:

- [File Fields](https://www.elastic.co/guide/en/ecs/current/ecs-file.html).
- [Registry Fields](https://www.elastic.co/guide/en/ecs/current/ecs-registry.html).

|     | Field              | Type    | Description                                                                                           | Example                                                                                               |
| --- | ------------------ | ------- | ----------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
|     | `agent.*`          | object  | All the agent fields.                                                                                 |                                                                                                       |
|     | `file.attributes`  | keyword | Array of file attributes.                                                                             | `["readonly", "system"]`                                                                              |
|     | `file.gid`         | keyword | Primary group ID (GID) of the file.                                                                   | `1001`                                                                                                |
|     | `file.group`       | keyword | Primary group name of the file.                                                                       | `alice`                                                                                               |
|     | `file.inode`       | keyword | Inode representing the file in the filesystem.                                                        | `256383`                                                                                              |
|     | `file.name`        | keyword | Name of the file, including the extension, without the directory.                                     | `example.png`                                                                                         |
|     | `file.mode`        | keyword | File permissions in octal mode.                                                                       | `0640`                                                                                                |
|     | `file.mtime`       | date    | Last time the file's metadata changed.                                                                |                                                                                                       |
|     | `file.owner`       | keyword | File owner's username.                                                                                |                                                                                                       |
|     | `file.path`        | keyword | Full path to the file, including the file name. It should include the drive letter, when appropriate. | `/home/alice/example.png`                                                                             |
|     | `file.size`        | long    | File size in bytes.                                                                                   | `16384`                                                                                               |
|     | `file.target_path` | keyword | Target path for symlinks.                                                                             |                                                                                                       |
|     | `file.type`        | keyword | File type (file, dir, or symlink).                                                                    | `file`                                                                                                |
|     | `file.uid`         | keyword | User ID (UID) of the file owner.                                                                      | `1001`                                                                                                |
|     | `file.hash.md5`    | keyword | MD5 hash of the file.                                                                                 |                                                                                                       |
|     | `file.hash.sha1`   | keyword | SHA1 hash of the file.                                                                                |                                                                                                       |
|     | `file.hash.sha256` | keyword | SHA256 hash of the file.                                                                              |                                                                                                       |
|     | `registry.key`     | keyword | Hive-relative path of keys.                                                                           | `SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe`               |
|     | `registry.value`   | keyword | Name of the value written.                                                                            | `HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\winword.exe\Debugger` |

\* Custom field.

### ECS mapping

```yml
---
name: wazuh-states-fim
fields:
  base:
    fields:
      tags: []
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  file:
    fields:
      attributes: {}
      name: {}
      path: {}
      gid: {}
      group: {}
      inode: {}
      hash:
        fields:
          md5: {}
          sha1: {}
          sha256: {}
      mtime: {}
      mode: {}
      size: {}
      target_path: {}
      type: {}
      uid: {}
      owner: {}
  registry:
    fields:
      key: {}
      value: {}
```

### Index settings

```json
{
  "index_patterns": ["wazuh-states-fim*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "file.name",
          "file.path",
          "file.target_path",
          "file.group",
          "file.uid",
          "file.gid"
        ]
      }
    }
  }
}
```
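The three `file.hash.*` fields can be computed from a file's contents in a single pass; a minimal sketch (the function name is illustrative, not part of Wazuh):

```python
import hashlib

def fim_hashes(path: str) -> dict:
    """Compute the md5, sha1, and sha256 digests of a file in one read pass."""
    digests = {name: hashlib.new(name) for name in ("md5", "sha1", "sha256")}
    with open(path, "rb") as f:
        # Read in 64 KiB chunks so large files do not need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            for d in digests.values():
                d.update(chunk)
    return {name: d.hexdigest() for name, d in digests.items()}
```

The returned dict maps directly onto `file.hash.md5`, `file.hash.sha1`, and `file.hash.sha256` in a FIM state document.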

169 ecs/docs/states-vulnerability.md Normal file
@@ -0,0 +1,169 @@

## `wazuh-states-vulnerabilities` index data model

### Fields summary

The fields are based on https://github.com/wazuh/wazuh-indexer/blob/4.9.0/ecs/vulnerability-detector

Based on ECS:

- [Agent Fields](https://www.elastic.co/guide/en/ecs/current/ecs-agent.html).
- [Package Fields](https://www.elastic.co/guide/en/ecs/current/ecs-package.html).
- [Host Fields](https://www.elastic.co/guide/en/ecs/current/ecs-host.html).
- [Vulnerability Fields](https://www.elastic.co/guide/en/ecs/current/ecs-vulnerability.html).

|     | Field                               | Type    | Description |
| --- | ----------------------------------- | ------- | ----------- |
|     | `agent.*`                           | object  | All the `agent` fields. |
|     | `host.*`                            | object  | All the `host` fields. |
|     | `package.architecture`              | keyword | Package architecture. |
|     | `package.build_version`             | keyword | Additional information about the build version of the installed package. |
|     | `package.checksum`                  | keyword | Checksum of the installed package for verification. |
|     | `package.description`               | keyword | Description of the package. |
|     | `package.install_scope`             | keyword | Indicates how the package was installed, e.g. user-local, global. |
|     | `package.installed`                 | date    | Time when the package was installed. |
|     | `package.license`                   | keyword | License under which the package was released. |
|     | `package.name`                      | keyword | Package name. |
|     | `package.path`                      | keyword | Path where the package is installed. |
|     | `package.reference`                 | keyword | Home page or reference URL of the software in this package, if available. |
|     | `package.size`                      | long    | Package size in bytes. |
|     | `package.type`                      | keyword | Type of package. |
|     | `package.version`                   | keyword | Package version. |
|     | `vulnerability.category`            | keyword | The type of system or architecture that the vulnerability affects. |
|     | `vulnerability.classification`      | keyword | The classification of the vulnerability scoring system. |
|     | `vulnerability.description`         | keyword | The description of the vulnerability that provides additional context. |
| \*  | `vulnerability.detected_at`         | date    | Vulnerability's detection date. |
|     | `vulnerability.enumeration`         | keyword | The type of identifier used for this vulnerability. |
|     | `vulnerability.id`                  | keyword | The identification (ID) is the number portion of a vulnerability entry. |
| \*  | `vulnerability.published_at`        | date    | Vulnerability's publication date. |
|     | `vulnerability.reference`           | keyword | A resource that provides additional information, context, and mitigations for the identified vulnerability. |
|     | `vulnerability.report_id`           | keyword | The report or scan identification number. |
| \*  | `vulnerability.scanner.source`      | keyword | The origin of the decision of the scanner (AKA feed used to detect the vulnerability). |
|     | `vulnerability.scanner.vendor`      | keyword | The name of the vulnerability scanner vendor. |
|     | `vulnerability.score.base`          | float   | Scores can range from 0.0 to 10.0, with 10.0 being the most severe. |
|     | `vulnerability.score.environmental` | float   | Scores can range from 0.0 to 10.0, with 10.0 being the most severe. |
|     | `vulnerability.score.temporal`      | float   | Scores can range from 0.0 to 10.0, with 10.0 being the most severe. |
|     | `vulnerability.score.version`       | keyword | The National Vulnerability Database (NVD) provides qualitative severity rankings of "Low", "Medium", and "High" for CVSS v2.0 base score ranges in addition to the severity ratings for CVSS v3.0 as they are defined in the CVSS v3.0 specification. |
|     | `vulnerability.severity`            | keyword | The severity of the vulnerability can help with metrics and internal prioritization regarding remediation. |
| \*  | `vulnerability.under_evaluation`    | boolean | Indicates if the vulnerability is awaiting analysis by the NVD. |
| \*  | `wazuh.cluster.name`                | keyword | Name of the Wazuh cluster. |
| \*  | `wazuh.cluster.node`                | keyword | Name of the Wazuh cluster node. |
| \*  | `wazuh.schema.version`              | keyword | Version of the Wazuh schema. |

\* Custom field.
### ECS mapping

```yml
---
name: wazuh-states-vulnerabilities
fields:
  base:
    fields:
      tags: []
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  package:
    fields: "*"
  host:
    fields: "*"
  vulnerability:
    fields: "*"
  wazuh:
    fields: "*"
```
```yml
---
- name: vulnerability
  title: Vulnerability
  group: 2
  short: Fields to describe the vulnerability relevant to an event.
  description: >
    The vulnerability fields describe information about a vulnerability that is
    relevant to an event.
  type: group
  fields:
    - name: detected_at
      type: date
      level: custom
      description: >
        Vulnerability's detection date.
    - name: published_at
      type: date
      level: custom
      description: >
        Vulnerability's publication date.
    - name: under_evaluation
      type: boolean
      level: custom
      description: >
        Indicates if the vulnerability is awaiting analysis by the NVD.
    - name: scanner.source
      type: keyword
      level: custom
      description: >
        The origin of the decision of the scanner (AKA feed used to detect the vulnerability).
```
```yml
---
- name: wazuh
  title: Wazuh
  description: >
    Wazuh Inc. custom fields
  fields:
    - name: cluster.name
      type: keyword
      level: custom
      description: >
        Wazuh cluster name.
    - name: cluster.node
      type: keyword
      level: custom
      description: >
        Wazuh cluster node name.
    - name: schema.version
      type: keyword
      level: custom
      description: >
        Wazuh schema version.
```
### Index settings

```json
{
  "index_patterns": ["wazuh-states-vulnerabilities*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "host.os.full",
          "host.os.version",
          "package.name",
          "package.version",
          "vulnerability.id",
          "vulnerability.description",
          "vulnerability.severity",
          "wazuh.cluster.name"
        ]
      }
    }
  }
}
```
33  ecs/generator/images/Dockerfile  Normal file
@@ -0,0 +1,33 @@
FROM python:3.10

# Define the version as a build argument
ARG ECS_VERSION=v8.11.0

# Update the package list and upgrade all packages
RUN apt-get update && \
    apt-get upgrade -y && \
    # Install dependencies
    apt-get install -y git jq && \
    # Cleanup
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
    # Clone elastic ECS repository and install required Python libraries
    git clone https://github.com/elastic/ecs.git -b ${ECS_VERSION} --depth 1 && \
    pip install -r ecs/scripts/requirements.txt && \
    # Create the directory for the ecs definitions (this will be used as a volume)
    mkdir -p /source/ecs

# Ensure the generator.sh script is in the correct location
ADD ecs/generator/images/generator.sh /ecs/generator.sh

# Define the directory as a volume to allow for external mounting
VOLUME /source/ecs

# Ensure the generator.sh script is executable
RUN chmod +x /ecs/generator.sh

# Set the working directory to the ECS repository
WORKDIR /ecs

# Define the entry point for the container to execute the generator.sh script
ENTRYPOINT ["/bin/bash", "/ecs/generator.sh"]
103  ecs/generator/images/generator.sh  Executable file
@@ -0,0 +1,103 @@
#!/bin/bash

set -euo pipefail

# SPDX-License-Identifier: Apache-2.0
# The OpenSearch Contributors require contributions made to
# this file be licensed under the Apache-2.0 license or a
# compatible open source license.

# Default values
ECS_VERSION="${ECS_VERSION:-v8.11.0}"
ECS_SOURCE="${ECS_SOURCE:-/source}"

# Function to display usage information
show_usage() {
    echo "Usage: $0"
    echo "Environment Variables:"
    echo "  * ECS_MODULE: Module to generate mappings for"
    echo "  * ECS_VERSION: (Optional) ECS version to generate mappings for (default: v8.11.0)"
    echo "  * ECS_SOURCE: (Optional) Path to the wazuh-indexer repository (default: /source)"
    echo "Example: docker run -e ECS_MODULE=alerts -e ECS_VERSION=v8.11.0 ecs-generator"
}

# Ensure ECS_MODULE is provided
if [ -z "${ECS_MODULE:-}" ]; then
    show_usage
    exit 1
fi

# Function to remove multi-fields from the generated index template
remove_multi_fields() {
    local in_file="$1"
    local out_file="$2"

    jq 'del(
        .mappings.properties.agent.properties.host.properties.os.properties.full.fields,
        .mappings.properties.agent.properties.host.properties.os.properties.name.fields,
        .mappings.properties.host.properties.os.properties.full.fields,
        .mappings.properties.host.properties.os.properties.name.fields,
        .mappings.properties.process.properties.command_line.fields,
        .mappings.properties.process.properties.name.fields,
        .mappings.properties.vulnerability.properties.description.fields
    )' "$in_file" > "$out_file"
}

# Function to generate mappings
generate_mappings() {
    local ecs_module="$1"
    local indexer_path="$2"
    local ecs_version="$3"

    local in_files_dir="$indexer_path/ecs/$ecs_module/fields"
    local out_dir="$indexer_path/ecs/$ecs_module/mappings/$ecs_version"

    # Ensure the output directory exists
    mkdir -p "$out_dir"

    # Generate mappings
    python scripts/generator.py --strict --ref "$ecs_version" \
        --include "$in_files_dir/custom/" \
        --subset "$in_files_dir/subset.yml" \
        --template-settings "$in_files_dir/template-settings.json" \
        --template-settings-legacy "$in_files_dir/template-settings-legacy.json" \
        --mapping-settings "$in_files_dir/mapping-settings.json" \
        --out "$out_dir"

    # Replace unsupported types
    echo "Replacing unsupported types in generated mappings"
    find "$out_dir" -type f -exec sed -i 's/constant_keyword/keyword/g' {} \;
    find "$out_dir" -type f -exec sed -i 's/wildcard/keyword/g' {} \;
    find "$out_dir" -type f -exec sed -i 's/match_only_text/keyword/g' {} \;
    find "$out_dir" -type f -exec sed -i 's/flattened/flat_object/g' {} \;
    find "$out_dir" -type f -exec sed -i 's/scaled_float/float/g' {} \;
    find "$out_dir" -type f -exec sed -i '/scaling_factor/d' {} \;

    local in_file="$out_dir/generated/elasticsearch/legacy/template.json"
    local out_file="$out_dir/generated/elasticsearch/legacy/template-tmp.json"

    # Delete the "tags" field from the index template
    echo "Deleting the \"tags\" field from the index template"
    jq 'del(.mappings.properties.tags)' "$in_file" > "$out_file"
    mv "$out_file" "$in_file"

    # Remove multi-fields from the generated index template
    echo "Removing multi-fields from the index template"
    remove_multi_fields "$in_file" "$out_file"
    mv "$out_file" "$in_file"

    # Transform legacy index template for OpenSearch compatibility
    jq '{
        "index_patterns": .index_patterns,
        "priority": .order,
        "template": {
            "settings": .settings,
            "mappings": .mappings
        }
    }' "$in_file" > "$out_dir/generated/elasticsearch/legacy/opensearch-template.json"

    echo "Mappings saved to $out_dir"
}

# Generate mappings
generate_mappings "$ECS_MODULE" "$ECS_SOURCE" "$ECS_VERSION"
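The `remove_multi_fields()` function above uses a jq `del()` filter to strip multi-field definitions (the nested `fields` sub-key) from selected mapping properties. A rough Python equivalent, useful for seeing what the filter actually does to the template structure (the `paths` argument and the abridged `template` dict are illustrative, not the script's full list):

```python
import json

def remove_multi_fields(mapping: dict, paths: list) -> dict:
    """Drop the terminal key of each path from the mapping, if present.

    Mirrors jq's del(): missing intermediate keys are simply skipped.
    """
    for path in paths:
        node = mapping
        for key in path[:-1]:
            node = node.get(key, {})
        node.pop(path[-1], None)
    return mapping

# Abridged template with one multi-field ("name" indexed as text + keyword).
template = {
    "mappings": {"properties": {"process": {"properties": {
        "name": {"type": "text", "fields": {"keyword": {"type": "keyword"}}}
    }}}}
}

remove_multi_fields(
    template,
    [["mappings", "properties", "process", "properties", "name", "fields"]],
)
print(json.dumps(template))
```

After the call, `process.name` keeps its `text` type but no longer carries the `.keyword` multi-field, which is the same effect the script achieves on the generated `template.json`.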
79  ecs/generator/mapping-generator.sh  Normal file
@@ -0,0 +1,79 @@
#!/bin/bash

# Run the ECS generator tool container.
# Requirements:
#   - Docker
#   - Docker Compose

set -e

# The container is built only if needed; the tool can be executed several times
# for different modules in the same build since the script runs as entrypoint.

# ====
# Checks that the script is run from the intended location
# ====
function navigate_to_project_root() {
    local repo_root_marker
    local script_path
    repo_root_marker=".github"
    script_path=$(dirname "$(realpath "$0")")

    while [[ "$script_path" != "/" ]] && [[ ! -d "$script_path/$repo_root_marker" ]]; do
        script_path=$(dirname "$script_path")
    done

    if [[ "$script_path" == "/" ]]; then
        echo "Error: Unable to find the repository root."
        exit 1
    fi

    cd "$script_path"
}

# ====
# Displays usage information
# ====
function usage() {
    echo "Usage: $0 {run|down|stop} <ECS_MODULE> [REPO_PATH]"
    exit 1
}

function main() {
    local compose_filename="ecs/generator/mapping-generator.yml"
    local compose_command
    local module
    local repo_path

    navigate_to_project_root

    compose_command="docker compose -f $compose_filename"

    case $1 in
        run)
            if [[ "$#" -lt 2 || "$#" -gt 3 ]]; then
                usage
            fi
            module="$2"
            repo_path="${3:-$(pwd)}"

            # Start the container with the required env variables
            ECS_MODULE="$module" REPO_PATH="$repo_path" $compose_command up
            # The containers are stopped after each execution
            $compose_command stop
            ;;
        down)
            $compose_command down
            ;;
        stop)
            $compose_command stop
            ;;
        *)
            usage
            ;;
    esac
}

main "$@"
11  ecs/generator/mapping-generator.yml  Normal file
@@ -0,0 +1,11 @@
services:
  ecs-mapping-generator:
    image: wazuh-ecs-generator
    container_name: wazuh-ecs-generator
    build:
      context: ./../..
      dockerfile: ${REPO_PATH:-.}/ecs/generator/images/Dockerfile
    volumes:
      - ${REPO_PATH:-.}/ecs:/source/ecs
    environment:
      - ECS_MODULE=${ECS_MODULE}
258  ecs/scripts/generate-pr-to-plugins.sh  Normal file
@@ -0,0 +1,258 @@
#!/usr/bin/env bash

# Constants
ECS_VERSION=${ECS_VERSION:-v8.11.0}
MAPPINGS_SUBPATH="mappings/${ECS_VERSION}/generated/elasticsearch/legacy/template.json"
TEMPLATES_PATH="plugins/setup/src/main/resources/"
CURRENT_PATH=$(pwd)
OUTPUT_PATH=${OUTPUT_PATH:-"$CURRENT_PATH"/../output}
BASE_BRANCH=${BASE_BRANCH:-main}

# Committer's identity
COMMITER_EMAIL=${COMMITER_EMAIL:-$(git config user.email)}
COMMITTER_USERNAME=${COMMITTER_USERNAME:-$(git config user.name)} # Human readable username

# Global variables
declare -a relevant_modules
declare -A module_to_file

# Check if a command exists on the system.
# Parameters:
#   $1: Command to check.
command_exists() {
    command -v "$1" &> /dev/null
}

# Validate that all required dependencies are installed.
validate_dependencies() {
    local required_commands=("docker" "docker-compose" "gh")
    for cmd in "${required_commands[@]}"; do
        if ! command_exists "$cmd"; then
            echo "Error: $cmd is not installed. Please install it and try again."
            exit 1
        fi
    done
}

# Check if the script is being executed in a GHA Workflow
check_running_on_gha() {
    if [[ -n "${GITHUB_RUN_ID}" ]]; then
        return 0
    else
        return 1
    fi
}

# Detect modified ECS modules by comparing the current branch with the base branch.
detect_modified_modules() {
    echo
    echo "---> Fetching and extracting modified ECS modules..."
    git fetch origin +refs/heads/main:refs/remotes/origin/main
    local modified_files
    local updated_modules=()
    modified_files=$(git diff --name-only origin/"$BASE_BRANCH")

    for file in $modified_files; do
        if [[ $file == ecs/* ]]; then
            ecs_module=$(echo "$file" | cut -d'/' -f2)
            if [[ ! " ${updated_modules[*]} " =~ ${ecs_module} ]]; then
                updated_modules+=("$ecs_module")
            fi
        fi
    done
    echo "Updated ECS modules: ${updated_modules[*]}"

    # Mapping section
    module_to_file=(
        [agent]="index-template-agent.json"
        [alerts]="index-template-alerts.json"
        [command]="index-template-commands.json"
        [states-fim]="index-template-fim.json"
        [states-inventory-hardware]="index-template-hardware.json"
        [states-inventory-hotfixes]="index-template-hotfixes.json"
        [states-inventory-networks]="index-template-networks.json"
        [states-inventory-packages]="index-template-packages.json"
        [states-inventory-ports]="index-template-ports.json"
        [states-inventory-processes]="index-template-processes.json"
        [states-inventory-scheduled-commands]="index-template-scheduled-commands.json"
        [states-inventory-system]="index-template-system.json"
        [states-vulnerabilities]="index-template-vulnerabilities.json"
    )

    relevant_modules=()
    for ecs_module in "${updated_modules[@]}"; do
        if [[ -n "${module_to_file[$ecs_module]}" ]]; then
            relevant_modules+=("$ecs_module")
        fi
    done
    echo "Relevant ECS modules: ${relevant_modules[*]}"
}

# Run the ECS generator script for relevant modules.
run_ecs_generator() {
    echo
    echo "---> Running ECS Generator script..."
    if [[ ${#relevant_modules[@]} -gt 0 ]]; then
        for ecs_module in "${relevant_modules[@]}"; do
            bash ecs/generator/mapping-generator.sh run "$ecs_module"
            echo "Processed ECS module: $ecs_module"
            bash ecs/generator/mapping-generator.sh down
        done
    else
        echo "No relevant modifications detected in ecs/ directory."
        exit 0
    fi
}

# Configure Git with the committer's identity and commit signing.
configure_git() {
    # Set up the committer's identity.
    git config --global user.email "${COMMITER_EMAIL}"
    git config --global user.name "${COMMITTER_USERNAME}"

    # Store the SSH key pair so Git can read it.
    mkdir -p ~/.ssh/
    echo "${SSH_PRIVATE_KEY}" > ~/.ssh/id_ed25519_bot
    echo "${SSH_PUBLIC_KEY}" > ~/.ssh/id_ed25519_bot.pub
    chmod 600 ~/.ssh/id_ed25519_bot
    chmod 644 ~/.ssh/id_ed25519_bot.pub

    # Set up commit signing
    ssh-add ~/.ssh/id_ed25519_bot
    git config --global gpg.format ssh
    git config --global commit.gpgsign true
    git config --global user.signingkey ~/.ssh/id_ed25519_bot.pub
}

# Commit and push changes to the target repository.
commit_and_push_changes() {
    # Only for the GH Workflow
    if check_running_on_gha; then
        echo "Configuring Git for ${COMMITTER_USERNAME}"
        configure_git
    fi

    echo
    echo "---> Committing and pushing changes to ${PLUGINS_REPO} repository..."

    echo "Copying ECS templates to the plugins repository..."
    for ecs_module in "${relevant_modules[@]}"; do
        target_file=${module_to_file[$ecs_module]}
        if [[ -z "$target_file" ]]; then
            continue
        fi
        # Save the template on the output path
        mkdir -p "$OUTPUT_PATH"
        cp "$CURRENT_PATH/ecs/$ecs_module/$MAPPINGS_SUBPATH" "$OUTPUT_PATH/$target_file"
        # Copy the template to the plugins repository
        echo " - Copy template for module '$ecs_module' to '$target_file'"
        cp "$CURRENT_PATH/ecs/$ecs_module/$MAPPINGS_SUBPATH" "$TEMPLATES_PATH/$target_file"
    done

    git status --short

    if ! git diff-index --quiet HEAD --; then
        echo "Changes detected. Committing and pushing to the repository..."
        git add .
        git commit -m "Update ECS templates for modified modules: ${relevant_modules[*]}"
        git push
    else
        echo "Nothing to commit, working tree clean."
        exit 0
    fi
}

# Create or update a Pull Request with the modified ECS templates.
create_or_update_pr() {
    echo
    echo "---> Creating or updating Pull Request..."

    local existing_pr
    local modules_body
    local title
    local body

    existing_pr=$(gh pr list --head "$BRANCH_NAME" --json number --jq '.[].number')
    # Format modules
    modules_body=$(printf -- '- %s\n' "${relevant_modules[@]}")

    # Create title and body with formatted modules list
    title="[ECS Generator] Update index templates"
    body=$(cat <<EOF
This PR updates the ECS templates for the following modules:
${modules_body}
EOF
)
    # Store the PAT in a file that can be accessed by the GitHub CLI.
    echo "${GITHUB_TOKEN}" > token.txt

    # Authorize GitHub CLI for the current repository and
    # create a pull request containing the updates.
    gh auth login --with-token < token.txt

    if [ -z "$existing_pr" ]; then
        output=$(gh pr create --title "$title" --body "$body" --base master --head "$BRANCH_NAME")
        pr_url=$(echo "$output" | grep -oP 'https://github.com/\S+')
        export PR_URL="$pr_url"
        echo "New pull request created: $PR_URL"
    fi
}

# Display usage information.
usage() {
    echo "Usage: $0 -b <BRANCH_NAME> -t <GITHUB_TOKEN>"
    echo "  -t [GITHUB_TOKEN] (Required) GitHub token to authenticate with the GitHub API."
    echo "  -b [BRANCH_NAME]  (Optional) Branch name to create or update the PR. Default: current branch."
    echo "  If not provided, the script will use the GITHUB_TOKEN environment variable."
    exit 1
}

# Main function
main() {
    while getopts ":b:t:o:" opt; do
        case ${opt} in
            b )
                BRANCH_NAME=$OPTARG
                ;;
            t )
                GITHUB_TOKEN=$OPTARG
                ;;
            o )
                if [[ "$OPTARG" == "./"* || ! "$OPTARG" =~ ^/ ]]; then
                    OPTARG="$(pwd)/${OPTARG#./}"
                fi
                OUTPUT_PATH=$OPTARG
                ;;
            \? )
                usage
                ;;
            : )
                echo "Invalid option: $OPTARG requires an argument" 1>&2
                usage
                ;;
        esac
    done

    if [ -z "$BRANCH_NAME" ]; then
        # Check if we are in a Git repository
        if git rev-parse --is-inside-work-tree > /dev/null 2>&1; then
            BRANCH_NAME=$(git rev-parse --abbrev-ref HEAD)
        else
            echo "Error: You are not in a Git repository." >&2
            exit 1
        fi
    fi

    if [ -z "$BRANCH_NAME" ] || [ -z "$GITHUB_TOKEN" ]; then
        usage
    fi

    validate_dependencies
    detect_modified_modules
    run_ecs_generator        # Exit if no changes on relevant modules.
    clone_target_repo
    commit_and_push_changes  # Exit if no changes detected.
    create_or_update_pr
}

main "$@"
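The core of `detect_modified_modules()` is a filter over `git diff --name-only` output: take the second path component of every file under `ecs/`, deduplicate, and keep only modules that have an entry in the module-to-template map. A minimal Python sketch of that logic, assuming an abridged `MODULE_TO_FILE` map for illustration:

```python
MODULE_TO_FILE = {  # abridged from the script's module_to_file array
    "states-fim": "index-template-fim.json",
    "states-vulnerabilities": "index-template-vulnerabilities.json",
}

def relevant_modules(modified_files):
    """Return the unique, map-backed ECS modules touched by a change set."""
    updated = []
    for path in modified_files:
        if path.startswith("ecs/"):
            module = path.split("/")[1]  # second path component, like cut -d'/' -f2
            if module not in updated:
                updated.append(module)
    return [m for m in updated if m in MODULE_TO_FILE]

print(relevant_modules([
    "ecs/states-fim/fields/subset.yml",
    "ecs/docs/states-fim.md",
    "README.md",
]))
```

Note that, as in the shell version, a path like `ecs/docs/states-fim.md` yields the candidate module `docs`, which is then discarded because it has no template mapping.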
211  ecs/states-fim/event-generator/event_generator.py  Normal file
@@ -0,0 +1,211 @@
#!/bin/python3
|
||||
|
||||
import datetime
|
||||
import json
|
||||
import logging
|
||||
import random
|
||||
import requests
|
||||
import urllib3
|
||||
|
||||
# Constants and Configuration
|
||||
LOG_FILE = 'generate_data.log'
|
||||
GENERATED_DATA_FILE = 'generatedData.json'
|
||||
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
|
||||
# Default values
|
||||
INDEX_NAME = "wazuh-states-fim"
|
||||
USERNAME = "admin"
|
||||
PASSWORD = "admin"
|
||||
IP = "127.0.0.1"
|
||||
PORT = "9200"
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)
|
||||
|
||||
# Suppress warnings
|
||||
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
|
||||
|
||||
|
||||
def generate_random_date():
|
||||
start_date = datetime.datetime.now()
|
||||
end_date = start_date - datetime.timedelta(days=10)
|
||||
random_date = start_date + (end_date - start_date) * random.random()
|
||||
return random_date.strftime(DATE_FORMAT)
|
||||
|
||||
|
||||
def generate_random_agent():
|
||||
agent = {
|
||||
'id': f'agent{random.randint(0, 99)}',
|
||||
'name': f'Agent{random.randint(0, 99)}',
|
||||
'type': random.choice(['filebeat', 'windows', 'linux', 'macos']),
|
||||
'version': f'v{random.randint(0, 9)}-stable',
|
||||
'groups': [f'group{random.randint(0, 99)}', f'group{random.randint(0, 99)}'],
|
||||
'host': generate_random_host()
|
||||
}
|
||||
return agent
|
||||
|
||||
|
||||
def generate_random_host():
|
||||
host = {
|
||||
'architecture': random.choice(['x86_64', 'arm64']),
|
||||
'boot': {
|
||||
'id': f'bootid{random.randint(0, 9999)}'
|
||||
},
|
||||
'cpu': {
|
||||
'usage': random.uniform(0, 100)
|
||||
},
|
||||
'disk': {
|
||||
'read': {
|
||||
'bytes': random.randint(1000, 1000000)
|
||||
},
|
||||
'write': {
|
||||
'bytes': random.randint(1000, 1000000)
|
||||
}
|
||||
},
|
||||
'domain': f'domain{random.randint(0, 1000)}',
|
||||
'geo': {
|
||||
'city_name': 'CityName',
|
||||
'continent_code': 'NA',
|
||||
'continent_name': 'North America',
|
||||
'country_iso_code': 'US',
|
||||
'country_name': 'United States',
|
||||
'location': {
|
||||
'lat': round(random.uniform(-90, 90), 6),
|
||||
'lon': round(random.uniform(-180, 180), 6)
|
||||
},
|
||||
'name': f'hostname{random.randint(0, 999)}',
|
||||
'postal_code': f'{random.randint(10000, 99999)}',
|
||||
'region_iso_code': 'US-CA',
|
||||
'region_name': 'California',
|
||||
'timezone': 'America/Los_Angeles'
|
||||
},
|
||||
'hostname': f'host{random.randint(0, 1000)}',
|
||||
'id': f'id{random.randint(0, 1000)}',
|
||||
'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
|
||||
'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
|
||||
'name': f'host{random.randint(0, 1000)}',
|
||||
'network': {
|
||||
'egress': {
|
||||
'bytes': random.randint(1000, 1000000),
|
||||
'packets': random.randint(100, 10000)
|
||||
},
|
||||
'ingress': {
|
||||
'bytes': random.randint(1000, 1000000),
|
||||
'packets': random.randint(100, 10000)
|
||||
}
|
||||
},
|
||||
'os': {
|
||||
'family': random.choice(['debian', 'ubuntu', 'macos', 'ios', 'android', 'RHEL']),
|
||||
'full': f'{random.choice(["debian", "ubuntu", "macos", "ios", "android", "RHEL"])} {random.randint(0, 99)}.{random.randint(0, 99)}',
|
||||
'kernel': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}',
|
||||
'name': random.choice(['Linux', 'Windows', 'macOS']),
|
||||
'platform': random.choice(['platform1', 'platform2']),
|
||||
'type': random.choice(['os_type1', 'os_type2']),
|
||||
'version': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}'
|
||||
},
|
||||
'pid_ns_ino': f'pid_ns{random.randint(0, 9999)}',
|
||||
'risk': {
|
||||
'calculated_level': random.choice(['low', 'medium', 'high']),
|
||||
'calculated_score': random.uniform(0, 10),
|
||||
'calculated_score_norm': random.uniform(0, 1),
|
||||
'static_level': random.choice(['low', 'medium', 'high']),
|
||||
'static_score': random.uniform(0, 10),
|
||||
'static_score_norm': random.uniform(0, 1)
|
||||
},
|
||||
'type': random.choice(['type1', 'type2']),
|
||||
'uptime': random.randint(1000, 1000000)
|
||||
}
|
||||
return host
|
||||
|
||||
|
||||
def generate_random_file():
|
||||
file = {
|
||||
'attributes': random.choice(['attribute1', 'attribute2']),
|
||||
'gid': f'gid{random.randint(0, 1000)}',
|
||||
'group': f'group{random.randint(0, 1000)}',
|
||||
'hash': {
|
||||
'md5': f'{random.randint(0, 9999)}',
|
||||
'sha1': f'{random.randint(0, 9999)}',
|
||||
'sha256': f'{random.randint(0, 9999)}'
|
||||
},
|
||||
'inode': f'inode{random.randint(0, 1000)}',
|
||||
'mode': f'mode{random.randint(0, 1000)}',
|
||||
'mtime': generate_random_date(),
|
||||
'name': f'name{random.randint(0, 1000)}',
|
||||
'owner': f'owner{random.randint(0, 1000)}',
|
||||
'path': f'/path/to/file',
|
||||
'size': random.randint(1000, 1000000),
|
||||
'target_path': f'/path/to/target{random.randint(0, 1000)}',
|
||||
'type': random.choice(['file_type1', 'file_type2']),
|
||||
'uid': f'uid{random.randint(0, 1000)}'
|
||||
}
|
||||
return file
|
||||
|
||||
|
||||
def generate_random_registry():
|
||||
registry = {
|
||||
'key': f'registry_key{random.randint(0, 1000)}',
|
||||
        'value': f'registry_value{random.randint(0, 1000)}'
    }
    return registry


def generate_random_data(number):
    data = []
    for _ in range(number):
        event_data = {
            'agent': generate_random_agent(),
            'file': generate_random_file(),
            'registry': generate_random_registry()
        }
        data.append(event_data)
    return data


def inject_events(ip, port, index, username, password, data):
    url = f'https://{ip}:{port}/{index}/_doc'
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}

    try:
        for event_data in data:
            response = session.post(url, json=event_data, headers=headers)
            if response.status_code != 201:
                logging.error(f'Error: {response.status_code}')
                logging.error(response.text)
                break
        logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def main():
    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
        return

    logging.info(f"Generating {number} events...")
    data = generate_random_data(number)

    with open(GENERATED_DATA_FILE, 'a') as outfile:
        for event_data in data:
            json.dump(event_data, outfile)
            outfile.write('\n')

    logging.info('Data generation completed.')

    inject = input("Do you want to inject the generated data into your indexer? (y/n) ").strip().lower()
    if inject == 'y':
        ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
        port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT
        index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
        username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
        password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD
        inject_events(ip, port, index, username, password, data)


if __name__ == "__main__":
    main()
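The generator above posts one document per request. For larger batches, the standard OpenSearch `_bulk` API accepts the same documents framed as NDJSON. The helper below is a hypothetical sketch, not part of this change; only the `_bulk` NDJSON framing (action line, then source line, trailing newline) is standard API behavior.

```python
import json


def to_bulk_payload(index, events):
    """Frame a list of event dicts as an OpenSearch _bulk NDJSON body.

    Each document becomes two lines: an action line naming the target
    index, then the document source itself. The body must end with a
    final newline, as the _bulk API requires.
    """
    lines = []
    for event in events:
        lines.append(json.dumps({'index': {'_index': index}}))
        lines.append(json.dumps(event))
    return '\n'.join(lines) + '\n'


# Example: two events produce four NDJSON lines plus a trailing newline.
payload = to_bulk_payload('wazuh-states-fim', [{'a': 1}, {'b': 2}])
```

The resulting string could then be POSTed once to `https://{ip}:{port}/_bulk` with the `application/json` content type, instead of looping over `_doc` requests.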
12
ecs/states-fim/fields/custom/agent.yml
Normal file
@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
6
ecs/states-fim/fields/custom/host.yml
Normal file
@ -0,0 +1,6 @@
---
- name: host
  reusable:
    top_level: true
    expected:
      - { at: agent, as: host }
6
ecs/states-fim/fields/custom/os.yml
Normal file
@ -0,0 +1,6 @@
---
- name: os
  reusable:
    top_level: false
    expected:
      - agent.host
6
ecs/states-fim/fields/custom/risk.yml
Normal file
@ -0,0 +1,6 @@
---
- name: risk
  reusable:
    top_level: false
    expected:
      - agent.host
4
ecs/states-fim/fields/mapping-settings.json
Normal file
@ -0,0 +1,4 @@
{
  "dynamic": "strict",
  "date_detection": false
}
39
ecs/states-fim/fields/subset.yml
Normal file
@ -0,0 +1,39 @@
---
name: wazuh-states-fim
fields:
  base:
    fields:
      tags: []
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  file:
    fields:
      attributes: {}
      name: {}
      path: {}
      gid: {}
      group: {}
      inode: {}
      hash:
        fields:
          md5: {}
          sha1: {}
          sha256: {}
      mtime: {}
      mode: {}
      size: {}
      target_path: {}
      type: {}
      uid: {}
      owner: {}
  registry:
    fields:
      key: {}
      value: {}
21
ecs/states-fim/fields/template-settings-legacy.json
Normal file
@ -0,0 +1,21 @@
{
  "index_patterns": ["wazuh-states-fim*"],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "query.default_field": [
        "agent.id",
        "agent.groups",
        "file.name",
        "file.path",
        "file.target_path",
        "file.group",
        "file.uid",
        "file.gid"
      ]
    }
  }
}
23
ecs/states-fim/fields/template-settings.json
Normal file
@ -0,0 +1,23 @@
{
  "index_patterns": ["wazuh-states-fim*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "file.name",
          "file.path",
          "file.target_path",
          "file.group",
          "file.uid",
          "file.gid"
        ]
      }
    }
  }
}
219
ecs/states-inventory-hardware/event-generator/event_generator.py
Normal file
@ -0,0 +1,219 @@
#!/bin/python3

import datetime
import json
import logging
import random
import requests
import urllib3

# Constants and Configuration
LOG_FILE = 'generate_data.log'
GENERATED_DATA_FILE = 'generatedData.json'
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
# Default values
INDEX_NAME = "wazuh-states-inventory-hardware"
USERNAME = "admin"
PASSWORD = "admin"
IP = "127.0.0.1"
PORT = "9200"

# Configure logging
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)

# Suppress warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def generate_random_date():
    start_date = datetime.datetime.now()
    end_date = start_date - datetime.timedelta(days=10)
    random_date = start_date + (end_date - start_date) * random.random()
    return random_date.strftime(DATE_FORMAT)


def generate_random_agent():
    agent = {
        'id': f'agent{random.randint(0, 99)}',
        'name': f'Agent{random.randint(0, 99)}',
        'type': random.choice(['filebeat', 'windows', 'linux', 'macos']),
        'version': f'v{random.randint(0, 9)}-stable',
        'groups': [f'group{random.randint(0, 99)}', f'group{random.randint(0, 99)}'],
        'host': generate_random_host(False)
    }
    return agent


def generate_random_host(is_root_level=False):
    if is_root_level:
        host = {
            'cpu': {
                'cores': random.randint(1, 16),
                'name': f'CPU{random.randint(1, 999)}',
                'speed': random.randint(1000, 5000),
                'usage': random.uniform(0, 100)
            },
            'memory': {
                'free': random.randint(1000, 100000),
                'total': random.randint(1000, 100000),
                'used': {
                    'percentage': random.uniform(0, 100)
                }
            }
        }
    else:
        host = {
            'architecture': random.choice(['x86_64', 'arm64']),
            'boot': {
                'id': f'bootid{random.randint(0, 9999)}'
            },
            'cpu': {
                'cores': random.randint(1, 16),
                'name': f'CPU{random.randint(1, 999)}',
                'speed': random.randint(1000, 5000),
                'usage': random.uniform(0, 100)
            },
            'disk': {
                'read': {
                    'bytes': random.randint(1000, 1000000)
                },
                'write': {
                    'bytes': random.randint(1000, 1000000)
                }
            },
            'domain': f'domain{random.randint(0, 1000)}',
            'geo': generate_random_geo(),
            'hostname': f'host{random.randint(0, 1000)}',
            'id': f'id{random.randint(0, 1000)}',
            'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
            'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
            'memory': {
                'free': random.randint(1000, 100000),
                'total': random.randint(1000, 100000),
                'used': {
                    'percentage': random.uniform(0, 100)
                }
            },
            'name': f'host{random.randint(0, 1000)}',
            'network': {
                'egress': {
                    'bytes': random.randint(1000, 1000000),
                    'packets': random.randint(100, 10000)
                },
                'ingress': {
                    'bytes': random.randint(1000, 1000000),
                    'packets': random.randint(100, 10000)
                }
            },
            'os': {
                'family': random.choice(['debian', 'ubuntu', 'macos', 'ios', 'android', 'RHEL']),
                'full': f'{random.choice(["debian", "ubuntu", "macos", "ios", "android", "RHEL"])} {random.randint(0, 99)}.{random.randint(0, 99)}',
                'kernel': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}',
                'name': random.choice(['Linux', 'Windows', 'macOS']),
                'platform': random.choice(['platform1', 'platform2']),
                'type': random.choice(['os_type1', 'os_type2']),
                'version': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}'
            },
            'pid_ns_ino': f'pid_ns{random.randint(0, 9999)}',
            'risk': {
                'calculated_level': random.choice(['low', 'medium', 'high']),
                'calculated_score': random.uniform(0, 10),
                'calculated_score_norm': random.uniform(0, 1),
                'static_level': random.choice(['low', 'medium', 'high']),
                'static_score': random.uniform(0, 10),
                'static_score_norm': random.uniform(0, 1)
            },
            'type': random.choice(['type1', 'type2']),
            'uptime': random.randint(1000, 1000000)
        }
    return host


def generate_random_geo():
    geo = {
        'city_name': 'CityName',
        'continent_code': 'NA',
        'continent_name': 'North America',
        'country_iso_code': 'US',
        'country_name': 'United States',
        'location': {
            'lat': round(random.uniform(-90, 90), 6),
            'lon': round(random.uniform(-180, 180), 6)
        },
        'name': f'location{random.randint(0, 999)}',
        'postal_code': f'{random.randint(10000, 99999)}',
        'region_iso_code': 'US-CA',
        'region_name': 'California',
        'timezone': 'America/Los_Angeles'
    }
    return geo


def generate_random_observer():
    observer = {
        'serial_number': f'serial{random.randint(0, 9999)}'
    }
    return observer


def generate_random_data(number):
    data = []
    for _ in range(number):
        event_data = {
            '@timestamp': generate_random_date(),
            'agent': generate_random_agent(),
            'host': generate_random_host(True),
            'observer': generate_random_observer()
        }
        data.append(event_data)
    return data


def inject_events(ip, port, index, username, password, data):
    url = f'https://{ip}:{port}/{index}/_doc'
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}

    try:
        for event_data in data:
            response = session.post(url, json=event_data, headers=headers)
            if response.status_code != 201:
                logging.error(f'Error: {response.status_code}')
                logging.error(response.text)
                break
        logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def main():
    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
        return

    logging.info(f"Generating {number} events...")
    data = generate_random_data(number)

    with open(GENERATED_DATA_FILE, 'a') as outfile:
        for event_data in data:
            json.dump(event_data, outfile)
            outfile.write('\n')

    logging.info('Data generation completed.')

    inject = input("Do you want to inject the generated data into your indexer? (y/n) ").strip().lower()
    if inject == 'y':
        ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
        port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT
        index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
        username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
        password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD
        inject_events(ip, port, index, username, password, data)


if __name__ == "__main__":
    main()
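As a quick sanity check on generated hardware events, the top-level keys built by `generate_random_data` can be asserted before injection. This is a minimal standalone sketch; the required-field list is an assumption taken from the generator's own output shape, not an official schema validation.

```python
def has_required_fields(event, required=('@timestamp', 'agent', 'host', 'observer')):
    """Return True if the event carries every top-level key the
    wazuh-states-inventory-hardware generator emits (assumed list)."""
    return all(key in event for key in required)


# A hand-written sample mirroring the generator's output shape.
sample = {
    '@timestamp': '2024-01-01T00:00:00.000000Z',
    'agent': {'id': 'agent1'},
    'host': {'cpu': {'cores': 4}},
    'observer': {'serial_number': 'serial1'},
}
```

A check like this could run over the list returned by `generate_random_data(number)` before calling `inject_events`, so malformed documents never reach the indexer.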
12
ecs/states-inventory-hardware/fields/custom/agent.yml
Normal file
@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
52
ecs/states-inventory-hardware/fields/custom/host.yml
Normal file
@ -0,0 +1,52 @@
---
- name: host
  reusable:
    top_level: true
    expected:
      - { at: agent, as: host }
  fields:
    - name: memory
      description: >
        Memory related data
      type: object
      level: custom
    - name: memory.total
      description: >
        Total memory in MB
      type: long
      level: custom
    - name: memory.free
      description: >
        Free memory in MB
      type: long
      level: custom
    - name: memory.used
      description: >
        Used memory related data
      type: object
      level: custom
    - name: memory.used.percentage
      description: >
        Used memory percentage
      type: long
      level: custom
    - name: cpu
      description: >
        CPU related data
      type: object
      level: custom
    - name: cpu.name
      description: >
        CPU Model name
      type: keyword
      level: custom
    - name: cpu.cores
      description: >
        Number of CPU cores
      type: long
      level: custom
    - name: cpu.speed
      description: >
        CPU clock speed
      type: long
      level: custom
6
ecs/states-inventory-hardware/fields/custom/os.yml
Normal file
@ -0,0 +1,6 @@
---
- name: os
  reusable:
    top_level: false
    expected:
      - agent.host
6
ecs/states-inventory-hardware/fields/custom/risk.yml
Normal file
@ -0,0 +1,6 @@
---
- name: risk
  reusable:
    top_level: false
    expected:
      - agent.host
@ -0,0 +1,4 @@
{
  "dynamic": "strict",
  "date_detection": false
}
25
ecs/states-inventory-hardware/fields/subset.yml
Normal file
@ -0,0 +1,25 @@
---
name: wazuh-states-inventory-hardware
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  observer:
    fields:
      serial_number: {}
  host:
    fields:
      memory:
        fields: "*"
      cpu:
        fields: "*"
@ -0,0 +1,14 @@
{
  "index_patterns": ["wazuh-states-inventory-hardware*"],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "query.default_field": [
        "observer.board_serial"
      ]
    }
  }
}
18
ecs/states-inventory-hardware/fields/template-settings.json
Normal file
@ -0,0 +1,18 @@
{
  "index_patterns": [
    "wazuh-states-inventory-hardware*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "observer.board_serial"
        ]
      }
    }
  }
}
193
ecs/states-inventory-hotfixes/event-generator/event_generator.py
Normal file
@ -0,0 +1,193 @@
#!/bin/python3

import datetime
import json
import logging
import random
import requests
import urllib3

# Constants and Configuration
LOG_FILE = 'generate_data.log'
GENERATED_DATA_FILE = 'generatedData.json'
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
# Default values
INDEX_NAME = "wazuh-states-inventory-hotfixes"
USERNAME = "admin"
PASSWORD = "admin"
IP = "127.0.0.1"
PORT = "9200"

# Configure logging
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)

# Suppress warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def generate_random_date():
    start_date = datetime.datetime.now()
    end_date = start_date - datetime.timedelta(days=10)
    random_date = start_date + (end_date - start_date) * random.random()
    return random_date.strftime(DATE_FORMAT)


def generate_random_agent():
    agent = {
        'id': f'agent{random.randint(0, 99)}',
        'name': f'Agent{random.randint(0, 99)}',
        'type': random.choice(['filebeat', 'windows', 'linux', 'macos']),
        'version': f'v{random.randint(0, 9)}-stable',
        'groups': [f'group{random.randint(0, 99)}', f'group{random.randint(0, 99)}'],
        'host': generate_random_host()
    }
    return agent


def generate_random_host():
    host = {
        'architecture': random.choice(['x86_64', 'arm64']),
        'boot': {
            'id': f'bootid{random.randint(0, 9999)}'
        },
        'cpu': {
            'usage': random.uniform(0, 100)
        },
        'disk': {
            'read': {
                'bytes': random.randint(1000, 1000000)
            },
            'write': {
                'bytes': random.randint(1000, 1000000)
            }
        },
        'domain': f'domain{random.randint(0, 1000)}',
        'geo': generate_random_geo(),
        'hostname': f'host{random.randint(0, 1000)}',
        'id': f'id{random.randint(0, 1000)}',
        'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
        'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
        'name': f'host{random.randint(0, 1000)}',
        'network': {
            'egress': {
                'bytes': random.randint(1000, 1000000),
                'packets': random.randint(100, 10000)
            },
            'ingress': {
                'bytes': random.randint(1000, 1000000),
                'packets': random.randint(100, 10000)
            }
        },
        'os': {
            'family': random.choice(['debian', 'ubuntu', 'macos', 'ios', 'android', 'RHEL']),
            'full': f'{random.choice(["debian", "ubuntu", "macos", "ios", "android", "RHEL"])} {random.randint(0, 99)}.{random.randint(0, 99)}',
            'kernel': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}',
            'name': random.choice(['Linux', 'Windows', 'macOS']),
            'platform': random.choice(['platform1', 'platform2']),
            'type': random.choice(['os_type1', 'os_type2']),
            'version': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}'
        },
        'pid_ns_ino': f'pid_ns{random.randint(0, 9999)}',
        'risk': {
            'calculated_level': random.choice(['low', 'medium', 'high']),
            'calculated_score': random.uniform(0, 10),
            'calculated_score_norm': random.uniform(0, 1),
            'static_level': random.choice(['low', 'medium', 'high']),
            'static_score': random.uniform(0, 10),
            'static_score_norm': random.uniform(0, 1)
        },
        'type': random.choice(['type1', 'type2']),
        'uptime': random.randint(1000, 1000000)
    }
    return host


def generate_random_geo():
    geo = {
        'city_name': 'CityName',
        'continent_code': 'NA',
        'continent_name': 'North America',
        'country_iso_code': 'US',
        'country_name': 'United States',
        'location': {
            'lat': round(random.uniform(-90, 90), 6),
            'lon': round(random.uniform(-180, 180), 6)
        },
        'name': f'location{random.randint(0, 999)}',
        'postal_code': f'{random.randint(10000, 99999)}',
        'region_iso_code': 'US-CA',
        'region_name': 'California',
        'timezone': 'America/Los_Angeles'
    }
    return geo


def generate_random_package():
    package = {
        'hotfix': {
            'name': f'hotfix{random.randint(0, 9999)}'
        }
    }
    return package


def generate_random_data(number):
    data = []
    for _ in range(number):
        event_data = {
            '@timestamp': generate_random_date(),
            'agent': generate_random_agent(),
            'package': generate_random_package()
        }
        data.append(event_data)
    return data


def inject_events(ip, port, index, username, password, data):
    url = f'https://{ip}:{port}/{index}/_doc'
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}

    try:
        for event_data in data:
            response = session.post(url, json=event_data, headers=headers)
            if response.status_code != 201:
                logging.error(f'Error: {response.status_code}')
                logging.error(response.text)
                break
        logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def main():
    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
        return

    logging.info(f"Generating {number} events...")
    data = generate_random_data(number)

    with open(GENERATED_DATA_FILE, 'a') as outfile:
        for event_data in data:
            json.dump(event_data, outfile)
            outfile.write('\n')

    logging.info('Data generation completed.')

    inject = input("Do you want to inject the generated data into your indexer? (y/n) ").strip().lower()
    if inject == 'y':
        ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
        port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT
        index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
        username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
        password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD
        inject_events(ip, port, index, username, password, data)


if __name__ == "__main__":
    main()
12
ecs/states-inventory-hotfixes/fields/custom/agent.yml
Normal file
@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
6
ecs/states-inventory-hotfixes/fields/custom/host.yml
Normal file
@ -0,0 +1,6 @@
---
- name: host
  reusable:
    top_level: true
    expected:
      - { at: agent, as: host }
6
ecs/states-inventory-hotfixes/fields/custom/os.yml
Normal file
@ -0,0 +1,6 @@
---
- name: os
  reusable:
    top_level: false
    expected:
      - agent.host
19
ecs/states-inventory-hotfixes/fields/custom/package.yml
Normal file
@ -0,0 +1,19 @@
---
- name: package
  title: Package
  type: group
  group: 2
  description: >
    Package related data.
  fields:
    - name: hotfix
      type: object
      level: custom
      group: 2
      description: >
        Hotfix related data.
    - name: hotfix.name
      type: keyword
      level: custom
      description: >
        Name of the Hotfix.
6
ecs/states-inventory-hotfixes/fields/custom/risk.yml
Normal file
@ -0,0 +1,6 @@
---
- name: risk
  reusable:
    top_level: false
    expected:
      - agent.host
@ -0,0 +1,4 @@
{
  "dynamic": "strict",
  "date_detection": false
}
21
ecs/states-inventory-hotfixes/fields/subset.yml
Normal file
@ -0,0 +1,21 @@
---
name: wazuh-states-inventory-hotfixes
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
      host:
        fields: "*"
  package:
    fields:
      hotfix:
        fields:
          name: {}
@ -0,0 +1,14 @@
{
  "index_patterns": ["wazuh-states-inventory-hotfixes*"],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "query.default_field": [
        "package.hotfix.name"
      ]
    }
  }
}
18
ecs/states-inventory-hotfixes/fields/template-settings.json
Normal file
@ -0,0 +1,18 @@
{
  "index_patterns": [
    "wazuh-states-inventory-hotfixes*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "package.hotfix.name"
        ]
      }
    }
  }
}
247
ecs/states-inventory-networks/event-generator/event_generator.py
Normal file
@ -0,0 +1,247 @@
#!/bin/python3

import datetime
import json
import logging
import random
import requests
import urllib3

# Constants and Configuration
LOG_FILE = 'generate_data.log'
GENERATED_DATA_FILE = 'generatedData.json'
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
# Default values
INDEX_NAME = "wazuh-states-inventory-networks"
USERNAME = "admin"
PASSWORD = "admin"
IP = "127.0.0.1"
PORT = "9200"

# Configure logging
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)

# Suppress warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def generate_random_date():
    start_date = datetime.datetime.now()
    end_date = start_date - datetime.timedelta(days=10)
    random_date = start_date + (end_date - start_date) * random.random()
    return random_date.strftime(DATE_FORMAT)


def generate_random_agent():
    agent = {
        'id': f'agent{random.randint(0, 99)}',
        'name': f'Agent{random.randint(0, 99)}',
        'type': random.choice(['filebeat', 'windows', 'linux', 'macos']),
        'version': f'v{random.randint(0, 9)}-stable',
        'groups': [f'group{random.randint(0, 99)}', f'group{random.randint(0, 99)}'],
        'host': generate_random_host(False)
    }
    return agent


def generate_random_host(is_root_level=False):
    if is_root_level:
        host = {
            'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
            'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
            'network': {
                'egress': {
                    'bytes': random.randint(1000, 1000000),
                    'drops': random.randint(0, 100),
                    'errors': random.randint(0, 100),
                    'packets': random.randint(100, 10000)
                },
                'ingress': {
                    'bytes': random.randint(1000, 1000000),
                    'drops': random.randint(0, 100),
                    'errors': random.randint(0, 100),
                    'packets': random.randint(100, 10000)
                }
            }
        }
    else:
        host = {
            'architecture': random.choice(['x86_64', 'arm64']),
            'boot': {
                'id': f'bootid{random.randint(0, 9999)}'
            },
            'cpu': {
                'usage': random.uniform(0, 100)
            },
            'disk': {
                'read': {
                    'bytes': random.randint(1000, 1000000)
                },
                'write': {
                    'bytes': random.randint(1000, 1000000)
                }
            },
            'domain': f'domain{random.randint(0, 1000)}',
            'geo': generate_random_geo(),
            'hostname': f'host{random.randint(0, 1000)}',
            'id': f'id{random.randint(0, 1000)}',
            'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
            'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
            'name': f'host{random.randint(0, 1000)}',
            'network': {
                'egress': {
                    'bytes': random.randint(1000, 1000000),
                    'drops': random.randint(0, 100),
                    'errors': random.randint(0, 100),
                    'packets': random.randint(100, 10000)
                },
                'ingress': {
                    'bytes': random.randint(1000, 1000000),
                    'drops': random.randint(0, 100),
                    'errors': random.randint(0, 100),
                    'packets': random.randint(100, 10000)
                }
            },
            'os': {
                'family': random.choice(['debian', 'ubuntu', 'macos', 'ios', 'android', 'RHEL']),
                'full': f'{random.choice(["debian", "ubuntu", "macos", "ios", "android", "RHEL"])} {random.randint(0, 99)}.{random.randint(0, 99)}',
                'kernel': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}',
                'name': random.choice(['Linux', 'Windows', 'macOS']),
                'platform': random.choice(['platform1', 'platform2']),
                'type': random.choice(['os_type1', 'os_type2']),
                'version': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}'
            },
            'pid_ns_ino': f'pid_ns{random.randint(0, 9999)}',
            'risk': {
                'calculated_level': random.choice(['low', 'medium', 'high']),
                'calculated_score': random.uniform(0, 10),
                'calculated_score_norm': random.uniform(0, 1),
                'static_level': random.choice(['low', 'medium', 'high']),
                'static_score': random.uniform(0, 10),
                'static_score_norm': random.uniform(0, 1)
            },
            'type': random.choice(['type1', 'type2']),
            'uptime': random.randint(1000, 1000000)
        }
    return host


def generate_random_geo():
    geo = {
        'city_name': 'CityName',
        'continent_code': 'NA',
        'continent_name': 'North America',
        'country_iso_code': 'US',
        'country_name': 'United States',
        'location': {
            'lat': round(random.uniform(-90, 90), 6),
            'lon': round(random.uniform(-180, 180), 6)
        },
        'name': f'location{random.randint(0, 999)}',
        'postal_code': f'{random.randint(10000, 99999)}',
        'region_iso_code': 'US-CA',
        'region_name': 'California',
        'timezone': 'America/Los_Angeles'
    }
    return geo


def generate_random_network():
    network = {
        'broadcast': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
        'dhcp': f'dhcp{random.randint(0, 9999)}',
        'gateway': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
        'metric': random.randint(1, 100),
        'netmask': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
        'protocol': random.choice(['TCP', 'UDP', 'ICMP']),
        'type': random.choice(['wired', 'wireless'])
    }
    return network


def generate_random_interface(is_root_level=False):
    if is_root_level:
        interface = {
            'mtu': f'{random.randint(1000000, 99999999)}',
            'state': random.choice(['Active', 'Inactive', 'Unknown']),
            'type': random.choice(['wireless', 'ethernet'])
        }
    else:
        interface = {
            'alias': f'alias{random.randint(0, 9999)}',
            'name': f'name{random.randint(0, 9999)}',
        }

    return interface


def generate_random_observer():
    observer = {
        'ingress': {
            'interface': generate_random_interface(False)
        }
    }
    return observer


def generate_random_data(number):
    data = []
    for _ in range(number):
        event_data = {
            '@timestamp': generate_random_date(),
            'agent': generate_random_agent(),
            'host': generate_random_host(True),
            'network': generate_random_network(),
            'observer': generate_random_observer(),
            'interface': generate_random_interface(True)
        }
        data.append(event_data)
    return data


def inject_events(ip, port, index, username, password, data):
    url = f'https://{ip}:{port}/{index}/_doc'
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}

    try:
        for event_data in data:
            response = session.post(url, json=event_data, headers=headers)
            if response.status_code != 201:
                logging.error(f'Error: {response.status_code}')
                logging.error(response.text)
                break
        logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def main():
    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
|
||||
return
|
||||
|
||||
logging.info(f"Generating {number} events...")
|
||||
data = generate_random_data(number)
|
||||
|
||||
with open(GENERATED_DATA_FILE, 'a') as outfile:
|
||||
for event_data in data:
|
||||
json.dump(event_data, outfile)
|
||||
outfile.write('\n')
|
||||
|
||||
logging.info('Data generation completed.')
|
||||
|
||||
inject = input("Do you want to inject the generated data into your indexer? (y/n) ").strip().lower()
|
||||
if inject == 'y':
|
||||
ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
|
||||
port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT
|
||||
index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
|
||||
username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
|
||||
password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD
|
||||
inject_events(ip, port, index, username, password, data)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
12
ecs/states-inventory-networks/fields/custom/agent.yml
Normal file
@@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
27
ecs/states-inventory-networks/fields/custom/host.yml
Normal file
@@ -0,0 +1,27 @@
---
- name: host
  reusable:
    top_level: true
    expected:
      - { at: agent, as: host }
  fields:
    - name: network.egress.drops
      type: long
      level: custom
      description: >
        Number of dropped transmitted packets.
    - name: network.egress.errors
      type: long
      level: custom
      description: >
        Number of transmission errors.
    - name: network.ingress.drops
      type: long
      level: custom
      description: >
        Number of dropped received packets.
    - name: network.ingress.errors
      type: long
      level: custom
      description: >
        Number of reception errors.
27
ecs/states-inventory-networks/fields/custom/interface.yml
Normal file
@@ -0,0 +1,27 @@
---
- name: interface
  reusable:
    top_level: true
    expected:
      - { at: observer.egress.interface, as: observer.ingress.interface }
  title: Interface
  type: group
  group: 2
  description: >
    Network interface related data.
  fields:
    - name: mtu
      type: long
      level: custom
      description: >
        Maximum transmission unit size.
    - name: state
      type: keyword
      level: custom
      description: >
        State of the network interface.
    - name: type
      type: keyword
      level: custom
      description: >
        Interface type.
33
ecs/states-inventory-networks/fields/custom/network.yml
Normal file
@@ -0,0 +1,33 @@
---
- name: network
  title: Network
  type: group
  group: 2
  description: >
    Network related data.
  fields:
    - name: broadcast
      type: ip
      level: custom
      description: >
        Broadcast address
    - name: dhcp
      type: keyword
      level: custom
      description: >
        DHCP status (enabled, disabled, unknown, BOOTP)
    - name: gateway
      type: ip
      level: custom
      description: >
        Gateway address
    - name: metric
      type: long
      level: custom
      description: >
        Metric of the network protocol
    - name: netmask
      type: ip
      level: custom
      description: >
        Network mask
6
ecs/states-inventory-networks/fields/custom/os.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: os
  reusable:
    top_level: false
    expected:
      - agent.host
6
ecs/states-inventory-networks/fields/custom/risk.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: risk
  reusable:
    top_level: false
    expected:
      - agent.host
@@ -0,0 +1,4 @@
{
  "dynamic": "strict",
  "date_detection": false
}
40
ecs/states-inventory-networks/fields/subset.yml
Normal file
@@ -0,0 +1,40 @@
---
name: wazuh-states-inventory-networks
fields:
  base:
    fields:
      tags: []
      "@timestamp": {}
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
  host:
    fields: "*"
  host:
    fields: "*"
  interface:
    fields:
      mtu: {}
      state: {}
      type: {}
  network:
    fields:
      broadcast: {}
      dhcp: {}
      gateway: {}
      metric: {}
      netmask: {}
      protocol: {}
      type: {}
  observer:
    fields:
      ingress:
        fields:
          interface:
            fields:
              alias: {}
              name: {}
@@ -0,0 +1,21 @@
{
  "index_patterns": ["wazuh-states-inventory-networks*"],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "query.default_field": [
        "agent.id",
        "agent.groups",
        "device.id",
        "event.id",
        "host.ip",
        "observer.ingress.interface.name",
        "observer.ingress.interface.alias",
        "process.name"
      ]
    }
  }
}
25
ecs/states-inventory-networks/fields/template-settings.json
Normal file
@@ -0,0 +1,25 @@
{
  "index_patterns": [
    "wazuh-states-inventory-networks*"
  ],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "device.id",
          "event.id",
          "host.ip",
          "observer.ingress.interface.name",
          "observer.ingress.interface.alias",
          "process.name"
        ]
      }
    }
  }
}
198
ecs/states-inventory-packages/event-generator/event_generator.py
Normal file
@@ -0,0 +1,198 @@
#!/bin/python3

import datetime
import json
import logging
import random
import requests
import urllib3

# Constants and Configuration
LOG_FILE = 'generate_data.log'
GENERATED_DATA_FILE = 'generatedData.json'
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
# Default values
INDEX_NAME = "wazuh-states-inventory-packages"
USERNAME = "admin"
PASSWORD = "admin"
IP = "127.0.0.1"
PORT = "9200"

# Configure logging
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)

# Suppress warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def generate_random_date():
    start_date = datetime.datetime.now()
    end_date = start_date - datetime.timedelta(days=10)
    random_date = start_date + (end_date - start_date) * random.random()
    return random_date.strftime(DATE_FORMAT)


def generate_random_agent():
    agent = {
        'id': f'agent{random.randint(0, 99)}',
        'name': f'Agent{random.randint(0, 99)}',
        'type': random.choice(['filebeat', 'windows', 'linux', 'macos']),
        'version': f'v{random.randint(0, 9)}-stable',
        'groups': [f'group{random.randint(0, 99)}', f'group{random.randint(0, 99)}'],
        'host': generate_random_host()
    }
    return agent


def generate_random_host():
    host = {
        'architecture': random.choice(['x86_64', 'arm64']),
        'boot': {
            'id': f'bootid{random.randint(0, 9999)}'
        },
        'cpu': {
            'usage': random.uniform(0, 100)
        },
        'disk': {
            'read': {
                'bytes': random.randint(1000, 1000000)
            },
            'write': {
                'bytes': random.randint(1000, 1000000)
            }
        },
        'domain': f'domain{random.randint(0, 1000)}',
        'geo': generate_random_geo(),
        'hostname': f'host{random.randint(0, 1000)}',
        'id': f'id{random.randint(0, 1000)}',
        'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
        'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
        'name': f'host{random.randint(0, 1000)}',
        'network': {
            'egress': {
                'bytes': random.randint(1000, 1000000),
                'packets': random.randint(100, 10000)
            },
            'ingress': {
                'bytes': random.randint(1000, 1000000),
                'packets': random.randint(100, 10000)
            }
        },
        'os': {
            'family': random.choice(['debian', 'ubuntu', 'macos', 'ios', 'android', 'RHEL']),
            'full': f'{random.choice(["debian", "ubuntu", "macos", "ios", "android", "RHEL"])} {random.randint(0, 99)}.{random.randint(0, 99)}',
            'kernel': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}',
            'name': random.choice(['Linux', 'Windows', 'macOS']),
            'platform': random.choice(['platform1', 'platform2']),
            'type': random.choice(['os_type1', 'os_type2']),
            'version': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}'
        },
        'pid_ns_ino': f'pid_ns{random.randint(0, 9999)}',
        'risk': {
            'calculated_level': random.choice(['low', 'medium', 'high']),
            'calculated_score': random.uniform(0, 10),
            'calculated_score_norm': random.uniform(0, 1),
            'static_level': random.choice(['low', 'medium', 'high']),
            'static_score': random.uniform(0, 10),
            'static_score_norm': random.uniform(0, 1)
        },
        'type': random.choice(['type1', 'type2']),
        'uptime': random.randint(1000, 1000000)
    }
    return host


def generate_random_geo():
    geo = {
        'city_name': 'CityName',
        'continent_code': 'NA',
        'continent_name': 'North America',
        'country_iso_code': 'US',
        'country_name': 'United States',
        'location': {
            'lat': round(random.uniform(-90, 90), 6),
            'lon': round(random.uniform(-180, 180), 6)
        },
        'name': f'location{random.randint(0, 999)}',
        'postal_code': f'{random.randint(10000, 99999)}',
        'region_iso_code': 'US-CA',
        'region_name': 'California',
        'timezone': 'America/Los_Angeles'
    }
    return geo


def generate_random_package():
    package = {
        'architecture': random.choice(['x86_64', 'arm64']),
        'description': f'description{random.randint(0, 9999)}',
        'installed': generate_random_date(),
        'name': f'package{random.randint(0, 9999)}',
        'path': f'/path/to/package{random.randint(0, 9999)}',
        'size': random.randint(1000, 100000),
        'type': random.choice(['deb', 'rpm']),
        'version': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}'
    }
    return package


def generate_random_data(number):
    data = []
    for _ in range(number):
        event_data = {
            '@timestamp': generate_random_date(),
            'agent': generate_random_agent(),
            'package': generate_random_package()
        }
        data.append(event_data)
    return data


def inject_events(ip, port, index, username, password, data):
    url = f'https://{ip}:{port}/{index}/_doc'
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}

    try:
        for event_data in data:
            response = session.post(url, json=event_data, headers=headers)
            if response.status_code != 201:
                logging.error(f'Error: {response.status_code}')
                logging.error(response.text)
                break
        logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def main():
    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
        return

    logging.info(f"Generating {number} events...")
    data = generate_random_data(number)

    with open(GENERATED_DATA_FILE, 'a') as outfile:
        for event_data in data:
            json.dump(event_data, outfile)
            outfile.write('\n')

    logging.info('Data generation completed.')

    inject = input("Do you want to inject the generated data into your indexer? (y/n) ").strip().lower()
    if inject == 'y':
        ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
        port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT
        index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
        username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
        password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD
        inject_events(ip, port, index, username, password, data)


if __name__ == "__main__":
    main()
12
ecs/states-inventory-packages/fields/custom/agent.yml
Normal file
@@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
6
ecs/states-inventory-packages/fields/custom/host.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: host
  reusable:
    top_level: false
    expected:
      - { at: agent, as: host }
6
ecs/states-inventory-packages/fields/custom/os.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: os
  reusable:
    top_level: false
    expected:
      - agent.host
6
ecs/states-inventory-packages/fields/custom/risk.yml
Normal file
@@ -0,0 +1,6 @@
---
- name: risk
  reusable:
    top_level: false
    expected:
      - agent.host
@@ -0,0 +1,4 @@
{
  "dynamic": "strict",
  "date_detection": false
}
26
ecs/states-inventory-packages/fields/subset.yml
Normal file
@@ -0,0 +1,26 @@
---
name: wazuh-states-inventory-packages
fields:
  base:
    fields:
      "@timestamp": {}
      tags: []
  agent:
    fields:
      groups: {}
      id: {}
      name: {}
      type: {}
      version: {}
  host:
    fields: "*"
  package:
    fields:
      architecture: ""
      description: ""
      installed: {}
      name: ""
      path: ""
      size: {}
      type: ""
      version: ""
@@ -0,0 +1,19 @@
{
  "index_patterns": ["wazuh-states-inventory-packages*"],
  "order": 1,
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "0",
      "refresh_interval": "5s",
      "query.default_field": [
        "agent.id",
        "agent.groups",
        "package.architecture",
        "package.name",
        "package.version",
        "package.type"
      ]
    }
  }
}
21
ecs/states-inventory-packages/fields/template-settings.json
Normal file
@@ -0,0 +1,21 @@
{
  "index_patterns": ["wazuh-states-inventory-packages*"],
  "priority": 1,
  "template": {
    "settings": {
      "index": {
        "number_of_shards": "1",
        "number_of_replicas": "0",
        "refresh_interval": "5s",
        "query.default_field": [
          "agent.id",
          "agent.groups",
          "package.architecture",
          "package.name",
          "package.version",
          "package.type"
        ]
      }
    }
  }
}
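The legacy (`order`-based) and composable (`priority`-based) templates above share the same settings block. As a minimal sketch of how the composable form could be generated programmatically, here is a small helper; the function name `build_index_template` and its defaults are assumptions for illustration, not part of the repository:

```python
def build_index_template(pattern, default_fields, priority=1):
    """Build a composable index template body mirroring the settings
    used in the template-settings.json files above."""
    return {
        'index_patterns': [pattern],
        'priority': priority,
        'template': {
            'settings': {
                'index': {
                    'number_of_shards': '1',
                    'number_of_replicas': '0',
                    'refresh_interval': '5s',
                    # Fields searched when a query names no field explicitly.
                    'query.default_field': list(default_fields),
                }
            }
        },
    }


body = build_index_template(
    'wazuh-states-inventory-packages*',
    ['agent.id', 'agent.groups', 'package.name', 'package.version'],
)
```

The resulting `body` could then be PUT to the cluster's index-template endpoint with the same credentials the event generators use.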
247
ecs/states-inventory-ports/event-generator/event_generator.py
Normal file
@@ -0,0 +1,247 @@
#!/bin/python3

import datetime
import json
import logging
import random
import requests
import urllib3

# Constants and Configuration
LOG_FILE = 'generate_data.log'
GENERATED_DATA_FILE = 'generatedData.json'
DATE_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
# Default values
INDEX_NAME = "wazuh-states-inventory-ports"
USERNAME = "admin"
PASSWORD = "admin"
IP = "127.0.0.1"
PORT = "9200"

# Configure logging
logging.basicConfig(filename=LOG_FILE, level=logging.INFO)

# Suppress warnings
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)


def generate_random_date():
    start_date = datetime.datetime.now()
    end_date = start_date - datetime.timedelta(days=10)
    random_date = start_date + (end_date - start_date) * random.random()
    return random_date.strftime(DATE_FORMAT)


def generate_random_agent():
    agent = {
        'id': f'agent{random.randint(0, 99)}',
        'name': f'Agent{random.randint(0, 99)}',
        'type': random.choice(['filebeat', 'windows', 'linux', 'macos']),
        'version': f'v{random.randint(0, 9)}-stable',
        'groups': [f'group{random.randint(0, 99)}', f'group{random.randint(0, 99)}'],
        'host': generate_random_host(False)
    }
    return agent


def generate_random_host(is_root_level=False):
    if is_root_level:
        host = {
            'network': {
                'egress': {
                    'queue': random.randint(0, 1000)
                },
                'ingress': {
                    'queue': random.randint(0, 1000)
                }
            }
        }
    else:
        host = {
            'architecture': random.choice(['x86_64', 'arm64']),
            'boot': {
                'id': f'bootid{random.randint(0, 9999)}'
            },
            'cpu': {
                'usage': random.uniform(0, 100)
            },
            'disk': {
                'read': {
                    'bytes': random.randint(1000, 1000000)
                },
                'write': {
                    'bytes': random.randint(1000, 1000000)
                }
            },
            'domain': f'domain{random.randint(0, 1000)}',
            'geo': generate_random_geo(),
            'hostname': f'host{random.randint(0, 1000)}',
            'id': f'id{random.randint(0, 1000)}',
            'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
            'mac': f'{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}:{random.randint(0, 255):02x}',
            'name': f'host{random.randint(0, 1000)}',
            'network': {
                'egress': {
                    'bytes': random.randint(1000, 1000000),
                    'packets': random.randint(100, 10000),
                    'queue': random.randint(0, 1000)
                },
                'ingress': {
                    'bytes': random.randint(1000, 1000000),
                    'packets': random.randint(100, 10000),
                    'queue': random.randint(0, 1000)
                }
            },
            'os': {
                'family': random.choice(['debian', 'ubuntu', 'macos', 'ios', 'android', 'RHEL']),
                'full': f'{random.choice(["debian", "ubuntu", "macos", "ios", "android", "RHEL"])} {random.randint(0, 99)}.{random.randint(0, 99)}',
                'kernel': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}',
                'name': random.choice(['Linux', 'Windows', 'macOS']),
                'platform': random.choice(['platform1', 'platform2']),
                'type': random.choice(['os_type1', 'os_type2']),
                'version': f'{random.randint(0, 9)}.{random.randint(0, 9)}.{random.randint(0, 9)}'
            },
            'pid_ns_ino': f'pid_ns{random.randint(0, 9999)}',
            'risk': {
                'calculated_level': random.choice(['low', 'medium', 'high']),
                'calculated_score': random.uniform(0, 10),
                'calculated_score_norm': random.uniform(0, 1),
                'static_level': random.choice(['low', 'medium', 'high']),
                'static_score': random.uniform(0, 10),
                'static_score_norm': random.uniform(0, 1)
            },
            'type': random.choice(['type1', 'type2']),
            'uptime': random.randint(1000, 1000000)
        }
    return host


def generate_random_geo():
    geo = {
        'city_name': 'CityName',
        'continent_code': 'NA',
        'continent_name': 'North America',
        'country_iso_code': 'US',
        'country_name': 'United States',
        'location': {
            'lat': round(random.uniform(-90, 90), 6),
            'lon': round(random.uniform(-180, 180), 6)
        },
        'name': f'location{random.randint(0, 999)}',
        'postal_code': f'{random.randint(10000, 99999)}',
        'region_iso_code': 'US-CA',
        'region_name': 'California',
        'timezone': 'America/Los_Angeles'
    }
    return geo


def generate_random_destination():
    destination = {
        'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
        'port': random.randint(0, 65535)
    }
    return destination


def generate_random_device():
    device = {
        'id': f'device{random.randint(0, 9999)}'
    }
    return device


def generate_random_file():
    file = {
        'inode': f'inode{random.randint(0, 9999)}'
    }
    return file


def generate_random_process():
    process = {
        'name': f'process{random.randint(0, 9999)}',
        'pid': random.randint(0, 99999)
    }
    return process


def generate_random_source():
    source = {
        'ip': f'{random.randint(1, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}.{random.randint(0, 255)}',
        'port': random.randint(0, 65535)
    }
    return source


def generate_random_data(number):
    data = []
    for _ in range(number):
        event_data = {
            '@timestamp': generate_random_date(),
            'agent': generate_random_agent(),
            'destination': generate_random_destination(),
            'device': generate_random_device(),
            'file': generate_random_file(),
            'host': generate_random_host(True),
            'network': {
                'protocol': random.choice(['TCP', 'UDP', 'ICMP'])
            },
            'process': generate_random_process(),
            'source': generate_random_source(),
            'interface': {
                'state': random.choice(['Active', 'Inactive', 'Unknown'])
            }
        }
        data.append(event_data)
    return data


def inject_events(ip, port, index, username, password, data):
    url = f'https://{ip}:{port}/{index}/_doc'
    session = requests.Session()
    session.auth = (username, password)
    session.verify = False
    headers = {'Content-Type': 'application/json'}

    try:
        for event_data in data:
            response = session.post(url, json=event_data, headers=headers)
            if response.status_code != 201:
                logging.error(f'Error: {response.status_code}')
                logging.error(response.text)
                break
        logging.info('Data injection completed successfully.')
    except Exception as e:
        logging.error(f'Error: {str(e)}')


def main():
    try:
        number = int(input("How many events do you want to generate? "))
    except ValueError:
        logging.error("Invalid input. Please enter a valid number.")
        return

    logging.info(f"Generating {number} events...")
    data = generate_random_data(number)

    with open(GENERATED_DATA_FILE, 'a') as outfile:
        for event_data in data:
            json.dump(event_data, outfile)
            outfile.write('\n')

    logging.info('Data generation completed.')

    inject = input("Do you want to inject the generated data into your indexer? (y/n) ").strip().lower()
    if inject == 'y':
        ip = input(f"Enter the IP of your Indexer (default: '{IP}'): ") or IP
        port = input(f"Enter the port of your Indexer (default: '{PORT}'): ") or PORT
        index = input(f"Enter the index name (default: '{INDEX_NAME}'): ") or INDEX_NAME
        username = input(f"Username (default: '{USERNAME}'): ") or USERNAME
        password = input(f"Password (default: '{PASSWORD}'): ") or PASSWORD
        inject_events(ip, port, index, username, password, data)


if __name__ == "__main__":
    main()
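The `inject_events` function in the scripts above issues one POST per document. For large batches, the `_bulk` NDJSON format is the usual alternative; the sketch below only builds that payload (the helper name `build_bulk_payload` is hypothetical, and sending it is left as a comment since it requires a live cluster):

```python
import json


def build_bulk_payload(index, events):
    """Serialize events into the NDJSON body expected by the _bulk API:
    one action line followed by one document line per event, terminated
    by a trailing newline."""
    lines = []
    for event in events:
        # Action line: index each document into the target index.
        lines.append(json.dumps({'index': {'_index': index}}))
        # Document line: the event itself.
        lines.append(json.dumps(event))
    return '\n'.join(lines) + '\n'


payload = build_bulk_payload(
    'wazuh-states-inventory-ports',
    [{'@timestamp': '2024-01-01T00:00:00.000Z'}],
)
# The payload would then be sent with the same authenticated session, e.g.:
# session.post(f'https://{ip}:{port}/_bulk', data=payload,
#              headers={'Content-Type': 'application/x-ndjson'})
```

Batching amortizes connection and request overhead, at the cost of coarser per-document error reporting (the bulk response must be inspected item by item).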
12
ecs/states-inventory-ports/fields/custom/agent.yml
Normal file
@@ -0,0 +1,12 @@
---
- name: agent
  title: Wazuh Agents
  short: Wazuh Inc. custom fields.
  type: group
  group: 2
  fields:
    - name: groups
      type: keyword
      level: custom
      description: >
        List of groups the agent belongs to.
17
ecs/states-inventory-ports/fields/custom/host.yml
Normal file
@@ -0,0 +1,17 @@
---
- name: host
  reusable:
    top_level: true
    expected:
      - { at: agent, as: host }
  fields:
    - name: network.ingress.queue
      type: long
      level: custom
      description: >
        Receive queue length.
    - name: network.egress.queue
      type: long
      level: custom
      description: >
        Transmit queue length.
17
ecs/states-inventory-ports/fields/custom/interface.yml
Normal file
@@ -0,0 +1,17 @@
---
- name: interface
  reusable:
    top_level: true
    expected:
      - { at: observer.egress.interface, as: observer.ingress.interface }
  title: Interface
  type: group
  group: 2
  description: >
    Network interface related data.
  fields:
    - name: state
      type: keyword
      level: custom
      description: >
        State of the network interface.
Some files were not shown because too many files have changed in this diff.