- See the output of `sudo aureport` and the underlying events with `sudo ausearch --raw`, or filter them with `sudo ausearch --success no`. Optionally point to the rules in /etc/audit/audit.rules (see the sketch after this list).
- Show the dashboard [Filebeat Auditd] Audit Events ECS and show additional Filebeat modules:
  - [Filebeat System] New users and groups ECS
  - [Filebeat System] Sudo commands ECS
- Show the Auditbeat configuration and the raw data in the Discover tab (also point out the `host` and `meta.cloud` data).
- Show the [Auditbeat Auditd] Overview ECS dashboard.
- `ssh elastic-user@xeraa.wtf` with a bad password and show the failed login on the [Filebeat System] SSH login attempts dashboard.
- SSH with the same user and make it work this time.
- For a more fine-grained filter, run `cat /etc/passwd` and find the event with `tags is developers-passwd-read`.
- Run `service nginx restart` and pick the elastic-admin user to run the command. Show the execution on the [Auditbeat Auditd] Executions ECS dashboard by filtering down to the elastic-user user.
- Detect when an admin may be abusing power by looking in a user's home directory: `ssh elastic-admin@xeraa.wtf`, check the directory /home/elastic-user, and read the file /home/elastic-user/secret.txt (will require sudo). Search for the tag `power-abuse` to see the violation.
- Show /etc/auditbeat/auditbeat.yml, which requires sudo privileges to read, and find that call with `tags is elevated-privs`. These tags come from keys in the Auditd rules (see the rule sketch after this list).
- Open a socket with `netcat -l 1025` and start a chat with `telnet <hostname> 1025`. Find it in the [Auditbeat System] Socket Dashboard ECS in the destination ports list and filter down on it. Optionally show the alternative with Auditd by filtering in Discover on `open-socket`.
- Show a seccomp violation by running `firejail --noprofile --seccomp.drop=bind -c nc -v -l 1025`. This will show up as `"event.action": "violated-seccomp-policy"` in the Auditbeat events. Alternatively, you can find the event with `dmesg` on the shell.
- Show the other [Auditbeat System] dashboards and be sure to point out that these are not based on Auditd any more. For example, the one listing all installed packages and their versions could come in handy if there is a vulnerable binary out and you want to see where you still need to patch.
- Change the content of the website in /var/www/html/index.html (see the example after this list). See the change in the [Auditbeat File Integrity] Overview ECS dashboard. Depending on the editor, the actions might be slightly different; nano will generate an `updated` event whereas vi does a `moved` and a `deleted`.
- In the SIEM tab, search for `1025` (the port). Drop the process `netcat` into the Timeline view and see all the related details for it. Add a comment to the event from when we opened the port.
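The first step above boils down to a few Auditd CLI calls; a minimal sketch (the `--key` filter assumes the rules use the keys mentioned in the steps, e.g. `developers-passwd-read`):

```bash
# Summarize what Auditd has recorded so far.
sudo aureport --summary

# Show the raw underlying events, or only the failed ones.
sudo ausearch --raw | tail -n 50
sudo ausearch --success no

# Filter by a rule's key, e.g. after running `cat /etc/passwd`.
sudo ausearch --key developers-passwd-read

# The rules themselves: currently loaded vs. on disk.
sudo auditctl -l
sudo cat /etc/audit/audit.rules
```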
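The tags referenced throughout the steps (`developers-passwd-read`, `power-abuse`, `elevated-privs`, `open-socket`) are Auditd rule keys set up by the provisioning; the actual rules live in /etc/audit/audit.rules. The following is only an illustrative sketch of what such keyed rules can look like, not the repository's exact rule set:

```bash
# Illustrative only: watch reads of /etc/passwd and tag them.
sudo auditctl -w /etc/passwd -p r -k developers-passwd-read

# Watch a user's home directory for admin snooping.
sudo auditctl -w /home/elastic-user/ -p rwa -k power-abuse

# Watch access to the Auditbeat configuration.
sudo auditctl -w /etc/auditbeat/auditbeat.yml -p rwa -k elevated-privs

# Record socket binds, which is what the netcat step triggers.
sudo auditctl -a always,exit -F arch=b64 -S bind -k open-socket
```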
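For the file integrity step, any edit of the page works; a minimal sketch, assuming the document root used above (an in-place append like this should show up as an `updated` event, while editors that write a temporary file and rename it produce the `moved`/`deleted` variants mentioned in the step):

```bash
# Append a visible change to the demo website.
echo '<p>Changed during the demo</p>' | sudo tee -a /var/www/html/index.html
```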
Make sure you have run this before the demo.
- Have your AWS account set up, your access key created, and added as environment variables in `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. Protip: Use https://github.com/sorah/envchain to keep your environment variables safe (see the sketch after this list).
- Create the Elastic Cloud instance with the same version as specified in variables.yml's `elastic_version`, enable Kibana as well as the GeoIP & user agent plugins, and set the environment variables with the values for `ELASTICSEARCH_HOST`, `ELASTICSEARCH_USER`, `ELASTICSEARCH_PASSWORD`, as well as `KIBANA_HOST` and `KIBANA_ID`.
- Change the settings to a domain you have registered under Route53 in inventory, variables.tf, and variables.yml. Set the Hosted Zone for that domain and export the Zone ID under the environment variable `TF_VAR_zone_id`. If you haven't created the Hosted Zone yet, set it up in the AWS Console first and then set the environment variable.
- If you haven't installed the AWS plugin for Terraform yet, get it with `terraform init` first. Then create the keypair, DNS settings, and instances with `terraform apply`.
- Apply the configuration to the instance with `ansible-playbook configure.yml`.
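A sketch of the whole provisioning flow wrapped in envchain (from the protip above) so none of the credentials end up in your shell history. The namespace names `aws` and `elastic` are arbitrary, the variable names are the ones listed above, and it assumes the playbook only needs the Elastic Cloud variables:

```bash
# Store the AWS credentials and the Route53 zone ID once, in the OS keychain.
envchain --set aws AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY TF_VAR_zone_id

# Store the Elastic Cloud endpoints and credentials.
envchain --set elastic ELASTICSEARCH_HOST ELASTICSEARCH_USER ELASTICSEARCH_PASSWORD KIBANA_HOST KIBANA_ID

# Fetch the AWS provider, then create the keypair, DNS settings, and instances.
envchain aws terraform init
envchain aws terraform apply

# Apply the configuration to the instance.
envchain elastic ansible-playbook configure.yml
```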
When you are done, remove the instances, DNS settings, and key with `terraform destroy`.
To build an AWS AMI for Strigo, use Packer. Using the Ansible Local Provisioner you only need to have Packer installed locally (no Ansible). Build the AMI with `packer build packer.json` and set up the training class on Strigo with the generated AMI and the user `ubuntu`.
By setting `cloud: true` you won't add a local Elasticsearch and Kibana instance. But you must then add the `elasticsearch_user` and `elasticsearch_password` account to that cloud account for the setup to work, add the `cloud.id` to all the Beats, and restart them.
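A minimal sketch of that change for a single Beat; `cloud.id` and `cloud.auth` are the standard Beats settings for Elastic Cloud, while the placeholder values and appending to the config file are assumptions:

```bash
# Point Auditbeat at the Elastic Cloud cluster instead of the local stack.
sudo tee -a /etc/auditbeat/auditbeat.yml > /dev/null <<'EOF'
cloud.id: "<cloud-id from the Elastic Cloud console>"
cloud.auth: "<elasticsearch_user>:<elasticsearch_password>"
EOF

# Repeat for the other Beats, then restart them.
sudo systemctl restart auditbeat
```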
If things are failing for some reason: Run `packer build -debug packer-ansible.yml`, which will keep the instance running and save the SSH key in the current directory. Connect to it with `ssh -i ec2_amazon-ebs.pem ubuntu@ec2-X-X-X-X.eu-central-1.compute.amazonaws.com`; open ports as needed in the AWS Console, since the instance will only open TCP/22 by default.
None.