IT Services - Adding some security
Hi everyone,
As is customary, let me share a little something good with you... in this case Johnny Cash & June Carter – Jackson. June Carter's power alongside Johnny Cash, I love it.
Well, this new entry is not directly related to "networking stuff"; it is closer to another of my day-to-day responsibilities, sysadmin... I just felt a great disturbance in the Force. It is a follow-up to the last post that I wrote: A decent home network is not expensive.
The TALE starts… Some days/weeks ago I wrote the aforementioned little post, where I said that it is not necessary to invest a lot of money to deploy a reasonably decent home network with multiple services. Fortunately, or not, a friend who works for (and co-owns) a "micro" company read the post and called me. They were interested in the solution for multiple reasons: the need to reorganize their IT services on a small budget, if possible without expensive subscriptions, a strong interest in open source solutions, ... After some explanations, here goes the list of the "services" that were deployed and that will be introduced here:
1-Firewall by default
2-Local Certificate Authority
3-OpenLDAP
4-Freeradius
5-Nagios Core: NagVis, Influx+Grafana+Histou
6-Self-managed GitLab
7-Squid: Squid-in-the-middle SSL Bump
8-Pi-Hole
9-Wallabag
10-Syncthing
11-WireGuard
12-Vulnerability analysis
Well, do I need all of these services? Obviously not, but a good practice is to "learn by doing". If you have time, invest part of it to learn something.
Before you begin
There are various approaches you can follow to deploy all of these services. Perhaps the most logical path is using containers (I encourage you to follow this path if you want to learn something about docker and containers); if you have read some other post of mine, you probably know it. But in this case I did not use containers, the services were deployed directly. Why?... mainly to reduce the overhead on the raspberry pi and to avoid any security worries about downloaded containers. There are other reasons too -> I think that, sometimes, a manual installation helps me to understand what I am really doing and how I can customize the service, and of course... it's more likely my friend will call me to ask something, restore a service, etc... and then I will get some "free" beers. In this case "free" could be very expensive… :D...
Disclaimer
Obviously, this post does not show the complete installation process of each service. It collects the most interesting points that I considered and, of course, the links to the resources used to deploy the services.
Another piece of advice: we do not share any content directly on the internet. Of course we always use encrypted sessions (https), the services were deployed in a relatively secure form and, as you will read in this post, we executed a vulnerability analysis of each device/service to mitigate the detected vulnerabilities. But we have decided to use a VPN to consume the services; for example, mobile devices use an always-on VPN. Of course there are other solutions to share the content without a VPN, but that is not our case.
The principal actress in this movie is a Raspberry Pi 4 with 4 GB of RAM, revision 1.2.
The OS data:
Distributor ID: Raspbian
Description: Raspbian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
The kernel data:
Linux raspberrypi-station-1.home.local 5.10.92-v7l+ #1514 SMP Mon Jan 17 17:38:03 GMT 2022 armv7l GNU/Linux
1- Firewall by default
Please, if you can, and sometimes it is not possible, configure a firewall on each server.
Some time ago I replaced my dear iptables with nftables; I recommend you do the same, it is never too late to do things better.
As a simple summary, please forgive me, nftables is a stateful firewall framework that enforces the overall security. Well, what is a stateful firewall? → It is a firewall that tracks and monitors the state of network sessions/connections. These network connections can be permitted or denied based on L2-L4 information (and their direction). For example, if you permit incoming ssh traffic (tcp destination port 22), the return traffic of that tcp session is permitted without configuring any other explicit rule. The network connection is tracked by the firewall. Here goes an example of a tracked https session:
? ~ sudo conntrack -L -o id,extended -p 6 --dport 443 --state ESTABLISHED
ipv4 2 tcp 6 431997 ESTABLISHED src=192.168.0.49 dst=192.168.0.11 sport=39896 dport=443 src=192.168.0.11 dst=192.168.0.49 sport=443 dport=39896 [ASSURED] mark=0 use=1 id=496724957
conntrack v1.4.6 (conntrack-tools): 1 flow entries have been shown.
Previously I said L2-L4, yes, L2. With nftables you can perform microsegmentation; in other words, you can apply firewall rules at layer 2, dropping or permitting sessions inside the same broadcast domain. Here is a basic L2 firewall example that I deployed in a SRv6 L3VPN scenario: PoC: FRR containers supporting SRv6 L3VPN (IPv4 and IPv6)... and a little bit of Microsegmentation.
Of course there are a lot of challenges if you deploy these solutions, but if the scale is small a simple script can do the job.
Here goes an nftables configuration example; it is the firewall setup of a raspberry pi zero 2w that acts as a print server. The file is self-explanatory: in the first section the variables are defined (interfaces/zones, ipv4/6 addresses, ports, protocols, etc...); these variables will be used by the rules as the typical firewall objects. Then the script flushes the config and, after that, every table, chain and the rules per input interface/zone and per table/chain are created.
? ~ cat firewall/nftables/nft_ruleset.nf
#!/usr/sbin/nft -f
# DEFINE INTERFACES - ZONES
define IF_LoopBack = { lo }
define IF_Lan = { wlan0 }
# DEFINE IP ADDRESSES
define IP_LocalHostLan = { 192.168.2.6 }
define IP_LocalHostIPs = { 127.0.0.1, 127.0.1.1, 192.168.2.6 }
define IP_MgmIPs = { 192.168.0.11, 192.168.0.49 }
define IP_RPI4 = { 192.168.0.11 }
define IP_AllLans = { 192.168.0.0/24, 192.168.2.0/29 }
# DEFINE PROTOCOLS
define TPROTO_TrasnportProtocols = { 6, 17 }
# DEFINE PORTS AND CODES
define ICMPTypes = { echo-reply, destination-unreachable, source-quench, echo-request, time-exceeded }
define PORT_RPIZMgmPortsTCP = { 80, 443, 2211, 631 }
define PORT_RPIZPrintTCP = { 631 }
define PORT_RPIZPrintUDP = { 161 }
#####################################################################
# CLEAR ALL RULES
flush ruleset
# ADD t_filter TABLE
add table ip t_filter
# ADD c_INPUT CHAIN AND RULES
add chain ip t_filter c_INPUT { type filter hook input priority 0; policy drop; }
add rule ip t_filter c_INPUT ip protocol $TPROTO_TrasnportProtocols ct state established,related counter accept
# SOURCE ZONE IF_LoopBack
add rule ip t_filter c_INPUT iifname $IF_LoopBack ip saddr $IP_LocalHostIPs ip daddr $IP_LocalHostIPs counter accept
# SOURCE ZONE IF_Lan
add rule ip t_filter c_INPUT iifname $IF_Lan ip saddr $IP_MgmIPs ip daddr $IP_LocalHostLan icmp type $ICMPTypes counter accept
add rule ip t_filter c_INPUT iifname $IF_Lan ip saddr $IP_MgmIPs ip daddr $IP_LocalHostLan tcp dport $PORT_RPIZMgmPortsTCP counter accept
add rule ip t_filter c_INPUT iifname $IF_Lan ip saddr $IP_AllLans ip daddr $IP_LocalHostLan tcp dport $PORT_RPIZPrintTCP counter accept
add rule ip t_filter c_INPUT iifname $IF_Lan ip saddr $IP_RPI4 ip daddr $IP_LocalHostLan udp dport $PORT_RPIZPrintUDP counter accept
# ADD CHAIN c_OUTPUT AND RULES
add chain ip t_filter c_OUTPUT { type filter hook output priority 0; policy accept; }
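Once the ruleset file is written, a few commands are enough to validate, apply and persist it. A hedged sketch for Debian/Raspbian, assuming the file path shown above:

```shell
# Dry run: parse the ruleset without applying it
sudo nft -c -f firewall/nftables/nft_ruleset.nf
# Apply it
sudo nft -f firewall/nftables/nft_ruleset.nf
# Verify what is actually loaded
sudo nft list ruleset
# Persist across reboots: Debian ships an nftables.service that loads
# /etc/nftables.conf at boot
sudo cp firewall/nftables/nft_ruleset.nf /etc/nftables.conf
sudo systemctl enable nftables.service
```

The -c dry run catches syntax errors before touching the live ruleset, which is cheap insurance when you are editing the firewall over ssh.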
Flowtables allow you to accelerate packet forwarding in software (and in hardware if your NIC supports it) by using a conntrack-based network stack bypass. If offload is possible, use offload, even if only in software. This is a snippet of another raspberry pi's firewall (this raspberry pi acts as a router) that provides routing services with flowtables in place:
..
# ADD FLOWTABLE
add flowtable ip t_filter f_FLOWIP { hook ingress priority 0; devices = { eth0, wg0 }; }
# ADD CHAIN c_FORWARD AND RULES
add chain ip t_filter c_FORWARD { type filter hook forward priority 0; policy drop; }
# FLOWTABLE FAST PATH - SW OFFLOAD
add rule ip t_filter c_FORWARD ip protocol { tcp } flow offload @f_FLOWIP
add rule ip t_filter c_FORWARD ip protocol $TPROTO_TrasnportProtocols ct state established,related counter accept
# SOURCE ZONE IF_VPN
add rule ip t_filter c_FORWARD iifname $IF_VPN ip saddr $IP_TFMOTUVPNIPs tcp dport $PORT_RPIMgmPortsTCP counter accept
add rule ip t_filter c_FORWARD iifname $IF_VPN ip saddr $IP_VPNNetNotTrusted tcp dport $PORT_WELLKNOWN_TCP counter accept
add rule ip t_filter c_FORWARD iifname $IF_VPN ip saddr $IP_VPNNetNotTrusted udp dport $PORT_WHATSAPP_UDP counter accept
....
2- Local CA
If you are a xadmin (x stands for system or network or security or… but it is not sex admin, XD; it might be the only job where remote work is worse), one of the first things I recommend is to deploy a local/internal Certificate Authority if you do not have one. If you do not know what a CA is, do not worry, take a breath and... cry. It's a joke. There is a lot of information out there explaining what a PKI, a CA, a VA, a certificate, etc., are.
As a simple summary: if CA X is trusted by your system, all the certificates signed by CA X are trusted by your system. To the purists → it is a simple summary.
In our case there will be multiple services that need this trust relationship. Of course you can trust anything, or add exceptions, or... but please, security by default, do not be miserable.
It is very easy to setup a simple CA with simple functionality.
Windows users can deploy XCA – hohnstaedt.de.
Linux users have a lot of alternatives; in these examples we will use openssl directly from the command line.
This is a summary of the basic steps to create a CA and some certificates...
2.1- Create Local CA
2.1.1- Create Root CA Key:
? Local_CA openssl genrsa -aes256 -out CA_key.pem 4096
Generating RSA private key, 4096 bit long modulus (2 primes)
........................................++++
.........................................................++++
e is 65537 (0x010001)
Enter pass phrase for CA_key.pem:
Verifying - Enter pass phrase for CA_key.pem:
? Local_CA
Please, DO NOT FORGET the passphrase that you used here; it will be needed each time you read the key file, for example when you generate/sign certificates.
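If you ever want to rotate that passphrase, openssl can re-encrypt the key in place; a small sketch using the file name above:

```shell
# Prompts for the old passphrase, then twice for the new one
openssl rsa -aes256 -in CA_key.pem -out CA_key.pem.new
mv CA_key.pem.new CA_key.pem
chmod 600 CA_key.pem
```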
2.1.2- Create and self sign the Root Certificate (3650 days → 10 years):
? Local_CA openssl req -x509 -subj "/C=ES/CN=debian-tp1g2.lab.local" -addext "subjectAltName = DNS:debian-tp1g2.lab.local" -new -nodes -key CA_key.pem -sha256 -days 3650 -out CA_certificate.pem
Enter pass phrase for CA_key.pem:
? Local_CA
Here go the file details:
? Local_CA ls -l
total 12
-rw-r--r-- 1 userx userx 1923 Jan 29 12:30 CA_certificate.pem
-rw------- 1 userx userx 3326 Jan 29 12:18 CA_key.pem
CA certificate basic details:
? Local_CA openssl x509 -text -noout -in CA_certificate.pem |grep 'Subject\|DNS\|CA'
Subject: C = ES, CN = debian-tp1g2.lab.local
Subject Public Key Info:
X509v3 Subject Key Identifier:
CA:TRUE
X509v3 Subject Alternative Name:
DNS:debian-tp1g2.lab.local
At this point you have a CA that can sign certificates...
2.2- Create and sign certificates (includes signing request)
Take into account that in this case the whole process is done on the same device. But in "conventional" operations, csr files can be generated on other hosts (the systems that need the certificates…).
2.2.1- Create the certificate key
? Local_CA openssl genrsa -out Site1_key.pem 2048
Generating RSA private key, 2048 bit long modulus (2 primes)
.+++++
................................+++++
e is 65537 (0x010001)
? Local_CA
2.2.2- Create the signing request(csr)
? Local_CA openssl req -new -sha256 -key Site1_key.pem -subj "/C=ES/CN=debian-tp1g2-site1.lab.local" -out Site1_signingrequest.csr
? Local_CA
2.2.3- Generate and sign the certificate
? Local_CA openssl x509 -req -extfile <(printf "subjectAltName=DNS:debian-tp1g2-site1.lab.local") -in Site1_signingrequest.csr -CA CA_certificate.pem -CAkey CA_key.pem -CAcreateserial -out Site1_certificate.pem -days 3650 -sha256
Signature ok
subject=C = ES, CN = debian-tp1g2-site1.lab.local
Getting CA Private Key
Enter pass phrase for CA_key.pem:
? Local_CA openssl x509 -text -noout -in Site1_certificate.pem |grep 'Subject\|DNS'
Subject: C = ES, CN = debian-tp1g2-site1.lab.local
Subject Public Key Info:
X509v3 Subject Alternative Name:
DNS:debian-tp1g2-site1.lab.local
Now you have a certificate ready to use on your server. You can reuse this certificate for different services on the same server but, once you start signing, it is very difficult to stop... it is like eating chocolate. In my case I use various certificates; some services share the same certificate while others use a different one. Take into account that the common name (CN) and Subject Alternative Name of each certificate must resolve to the correct ipv4/ipv6 address via your DNS or local hosts file.
If the services on your server communicate with other services or clients outside the system, for example when you access from your laptop/tablet/phone/another server…, the certificate used by that service will not be trusted by default. This is normal behaviour because the CA that signed the certificate (the CA you created previously) is not trusted by the external system. In order to trust all the certificates your CA signs, the CA must be a trusted certificate authority of your external systems. In other words, the public certificate (not the key file, please!) must be imported into your external systems as a trusted CA. Once that is done, your basic pki is working!
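On a Debian/Raspbian client, the two steps (verify that a certificate chains up to our CA, then trust the CA system-wide) can be sketched like this, using the files generated above:

```shell
# Should print: Site1_certificate.pem: OK
openssl verify -CAfile CA_certificate.pem Site1_certificate.pem
# Install the public CA certificate as trusted (the .crt extension is
# required by update-ca-certificates)
sudo cp CA_certificate.pem /usr/local/share/ca-certificates/home_lab_CA.crt
sudo update-ca-certificates
```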
Here go some examples from my raspberry pi, the same device with various certificates:
3- OpenLDAP
OpenLDAP is another essential part of the “ecosystem”.
The suite includes slapd (the standalone LDAP daemon), libraries implementing the LDAP protocol, and utilities, tools and sample clients.
In this scenario the deployed services use the OpenLDAP server (slapd) as the final authentication repository. For example, gitlab uses ldap to authenticate the users, apache (vhosts) for services like nagios implements ldap authentication, wireless clients are authenticated using RADIUS (EAP-TTLS) with LDAP as the final repo, … Now think about security best practices like user password changes, etc… Of course there are other services, in our case Wallabag and syncthing, that have their own user management. A dirty solution is to authenticate the users via LDAP when they access the login page. It is not the best integration, but life is too short.
Here goes the OpenLDAP administrator guide: OpenLDAP Software 2.6 Administrator's Guide. It is relatively easy to get the basic functionality working. Once it is installed and correctly configured with some users, you can query the info, and of course in a secure form, using certificates issued by the local CA:
? ~ ldapsearch -x -H ldaps://raspberrypi-station-1.home.local:636 -b "dc=home,dc=local"
…
# userx, Users, home.local
dn: uid=userx,ou=Users,dc=home,dc=local
# usery, Users, home.local
dn: uid=usery,ou=Users,dc=home,dc=local
# userz, Users, home.local
dn: uid=userz,ou=Users,dc=home,dc=local
# testuser, Users, home.local
dn: uid=usert,ou=Users,dc=home,dc=local
…
? ~ ldapsearch -H ldap:// -x -b "dc=home,dc=local" -LLL -Z dn
…
dn: uid=userx,ou=Users,dc=home,dc=local
dn: uid=usery,ou=Users,dc=home,dc=local
dn: uid=userz,ou=Users,dc=home,dc=local
dn: uid=usert,ou=Users,dc=home,dc=local
…
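A quick way to confirm that a user can actually bind (the same operation the other services perform) is ldapwhoami; for example, with userx from the listing above:

```shell
# Bind as userx over LDAPS and print the authenticated DN
ldapwhoami -x -H ldaps://raspberrypi-station-1.home.local:636 \
  -D "uid=userx,ou=Users,dc=home,dc=local" -W
# On success it prints: dn:uid=userx,ou=Users,dc=home,dc=local
```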
4- Freeradius
Officially, FreeRADIUS is used as the backend for wired and wireless 802.1X solutions, authenticating users and tracking accounting data. In our environment freeradius will be used with openLDAP to authenticate wireless clients and some web services like nagios access, wallabag, syncthing, ... and Mikrotik administration. This is the scheme:
Of course WPA2 enterprise is in place, thus the wireless access point must support it. My AP device is a Mikrotik router configured with a security profile using wpa2-eap, pointing to the raspberry pi for the radius service.
The official freeradius website, https://freeradius.org/documentation/, contains a detailed setup guide to get this service working. Just let me draw your attention to 2 things: the certificate used by freeradius to build the EAP tunnels, and the LDAP module (which must be installed). Obviously the certificate has been signed by our CA…
? ssl_cert openssl x509 -text -noout -in freeradius_server.crt |grep 'CN\|DNS'
Issuer: C = ES, ST = BIZKAIA, L = SOPELA, O = HOME, OU = HOME_IT, CN = raspberrypi-station-1.home.local, emailAddress = [email protected]
Subject: C = ES, ST = BIZ, L = SOPELA, O = HOME, OU = IT, CN = raspberrypi-station-1.home.local, emailAddress = [email protected]
DNS:raspberrypi-station-1.home.local
? ssl_cert
Here go some logging information from freeradius:
Wireless access:
Sat Jan 29 09:00:49 2022 : Auth: (25) Login OK: [userx] (from client rtr01_vlan100 port 0 via TLS tunnel)
Sat Jan 29 09:00:49 2022 : Auth: (25) Login OK: [userx] (from client rtr01_vlan100 port 0 cli XX-XX-XX-XX-XX-XX)
…
Sat Jan 29 12:50:54 2022 : Auth: (116) Login OK: [usery] (from client rtr01_vlan100 port 0 via TLS tunnel)
Sat Jan 29 12:50:54 2022 : Auth: (116) Login OK: [usery] (from client rtr01_vlan100 port 0 cli XX-XX-XX-XX-XX-XX)
Sat Jan 29 12:51:30 2022 : Auth: (125) Login OK: [userz] (from client rtr01_vlan100 port 0 via TLS tunnel)
Sat Jan 29 12:51:30 2022 : Auth: (125) Login OK: [userz] (from client rtr01_vlan100 port 0 cli XX-XX-XX-XX-XX-XX)
Mikrotik admin access:
Sat Jan 29 19:33:17 2022 : Auth: (308) Login OK: [useradmin] (from client rtr01_vlan100 port 0 cli 192.168.0.49)
The logging information can also be checked from the openldap perspective. Here goes an example:
snip...
# modify 1643481197 dc=home,dc=local cn=admin,dc=home,dc=local IP=127.0.0.1:41486 conn=3088
dn: uid=useradmin,ou=Users,dc=home,dc=local
changetype: modify
replace: description
description: Authenticated at 2022-01-29 19:33:17
snip…
Of course you can integrate this with the monitoring system to detect when someone tries to log in with incorrect credentials. In this case we analyze the freeradius log file looking for failed logins:
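The core of that check is a small log parser. A hedged sketch in python: the "Login incorrect" marker matches the Auth lines shown above, while the log path mentioned in the comment is an assumption you should adjust to your setup.

```python
#!/usr/bin/env python3
"""Minimal Nagios-style check for failed FreeRADIUS logins (sketch)."""
import re

# FreeRADIUS Auth lines carry the username between square brackets
FAIL_RE = re.compile(r"Login incorrect.*?\[([^\]]+)\]")

def failed_logins(lines):
    """Extract the usernames of failed authentication attempts."""
    users = []
    for line in lines:
        m = FAIL_RE.search(line)
        if m:
            users.append(m.group(1))
    return users

def nagios_status(users, warn=1, crit=5):
    """Map a list of failing users to a (message, exit_code) pair."""
    if len(users) >= crit:
        return ("CRITICAL - %d failed logins: %s" % (len(users), sorted(set(users))), 2)
    if len(users) >= warn:
        return ("WARNING - %d failed logins: %s" % (len(users), sorted(set(users))), 1)
    return ("OK - no failed logins", 0)

# Typical use inside the plugin (path is an assumption):
#   with open("/var/log/freeradius/radius.log") as f:
#       msg, code = nagios_status(failed_logins(f))
#   print(msg); raise SystemExit(code)
```

Nagios interprets the exit code (0/1/2) as OK/WARNING/CRITICAL and shows the printed message next to the service.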
5- Nagios Core: NagVis, Influx+Nagflux+Grafana+Histou
We have done some "essential" security tasks and some "essential" functionality tasks... now it is time to deploy a monitoring system. To me, a monitoring system is another "essential" security and functional service.
Here go the installation guides for nagios core, nagvis, influx, nagflux, grafana and histou:
Again, learning by doing is the best path.
Once the setup is finished you will be able to monitor anything. Well, almost anything... you can create your own scripts (plugins in nagios terminology). These scripts can be written in bash, perl, python, ruby, etc., and can be added to the nagios system. If the scripts are correctly developed, nagios can execute them and interpret their output in order to inform you about whatever condition you want. I see a monitoring system as a scheduler that executes, interprets and gathers almost anything…
There are a lot of scripts/plugins ready to use, but one of the best things is to write new ones for your needs.
For example, I have created a python script that checks if there is a rogue AP. Basically, the script checks the ESSIDs; if there is an AP announcing one of our own ESSIDs but its mac address is not one of our own APs, or if the signal strength is outside a certain range, it is considered a rogue AP.
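The detection logic just described can be sketched as follows. The ESSID/BSSID values and the signal window are hypothetical examples; in the real plugin the scan list would come from parsing something like `iw dev wlan0 scan`.

```python
"""Sketch of the rogue-AP detection logic (hypothetical values)."""

OWN_ESSIDS = {"home-wifi"}                               # our announced networks
OWN_BSSIDS = {"aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"}  # our APs' mac addresses
SIGNAL_RANGE = (-75, -20)                                # plausible dBm window

def rogue_aps(scan):
    """Return the (essid, bssid, signal_dbm) entries considered rogue."""
    rogues = []
    for essid, bssid, signal in scan:
        if essid not in OWN_ESSIDS:
            continue  # someone else's network, it is not impersonating us
        if bssid.lower() not in OWN_BSSIDS:
            rogues.append((essid, bssid, signal))  # unknown AP using our ESSID
        elif not SIGNAL_RANGE[0] <= signal <= SIGNAL_RANGE[1]:
            rogues.append((essid, bssid, signal))  # our mac, implausible signal
    return rogues
```

A Nagios plugin built on this would print the offending entries and exit 2 (CRITICAL) when `rogue_aps()` is non-empty, 0 otherwise.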
This is the output if a rogue AP is detected:
This is a list of checks monitored on a raspberry pi, from load and memory usage to cpu and gpu temperature, interface bandwidth usage, certificate details of each service, IO activity per disk, backup jobs, etc...
Nagvis is a visualization addon for the nagios system and for Icinga, which is a fork of Nagios.
NagVis can be used to visualize Nagios data, e.g. to display IT processes like a mail system or a network infrastructure. Using data supplied by a backend, it updates the objects placed on maps at certain intervals to reflect their current state. These maps allow you to arrange the objects to display them in different layouts:
In general NagVis is a presentation tool for the information which is gathered by Nagios and transferred using backends.
Of the supported backends, in this setup, with raspberry pi(s) (arm), I use ndoutils or a quite old version of mklivestatus (1.5.0p24).
Once nagvis is correctly configured you will be able to represent any host or service (script/plugin) on your maps…
Services/plugins - graphing the data
You can use grafana with nagios performance data to graph your data; it is very powerful. Follow this guide to get it working on your system: Nagios Core - Performance Graphs Using InfluxDB + Nagflux + Grafana + Histou
You will see beautiful graphs like this:
6- Self-managed GitLab
GitLab is an open source code repository and collaborative software development platform for devXops projects. GitLab offers a location for online code storage and capabilities for issue tracking and CI/CD. The repository enables hosting different development chains and versions, and allows users to inspect previous code and roll back to it in the event of unforeseen problems. In other words, the core user functionality of GitLab is a visual Git repository management system that allows users to browse, audit, merge, and perform other everyday tasks that would otherwise require the command line interface.
Self-Managed Feature Comparison gives you a view of the features you can use per platform type. In this case, and at this moment, the free tier is enough.
This is a complete installation guide of self-managed GitLab: Install instructions.
You can integrate GitLab with LDAP; again, our central user repository comes to help us:
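For an omnibus installation the integration lives in /etc/gitlab/gitlab.rb. A hedged sketch using the host and DNs from the LDAP section; the label and bind password are placeholders:

```ruby
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = {
  'main' => {
    'label' => 'Home LDAP',
    'host' => 'raspberrypi-station-1.home.local',
    'port' => 636,
    'uid' => 'uid',
    'encryption' => 'simple_tls',
    'verify_certificates' => true,  # our CA must be in the system trust store
    'bind_dn' => 'cn=admin,dc=home,dc=local',
    'password' => 'CHANGE_ME',
    'base' => 'ou=Users,dc=home,dc=local'
  }
}
# afterwards: sudo gitlab-ctl reconfigure
```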
If you have your projects hosted in another repository, GitLab gives you multiple options to import them. I used the Gitea helper for various projects without any issues, but there are a lot of possibilities:
7- Squid: Squid-in-the-middle SSL Bump
In a home or corporate environment client devices may be configured to use a proxy and HTTPS messages are sent over a proxy using CONNECT messages. To intercept this HTTPS traffic Squid needs to be provided both public and private keys to a self-signed CA certificate. It uses these to generate server certificates for the HTTPS domains clients visit. The client devices also need to be configured to trust the CA certificate when validating the Squid generated certificates. While decrypted, the traffic can be analyzed, blocked, or adapted using regular Squid features such as ICAP and eCAP.
I wrote a post about this setup and how you can use it to block any url using regular block lists even if the client uses https:
Remember, there are legal implications if you decrypt and re-encrypt user traffic.
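For orientation, the bumping setup described above boils down to a small squid.conf fragment. A hedged sketch using Squid 4+ directive names (older 3.5 releases used cert=/key= instead of tls-cert=/tls-key=); paths and the CA files are assumptions based on our local CA, and squid needs the CA key without a passphrase (or a dedicated sub-CA):

```
# squid.conf fragment - TLS interception (SSL Bump) sketch
http_port 3128 ssl-bump \
    tls-cert=/etc/squid/CA_certificate.pem \
    tls-key=/etc/squid/CA_key.pem \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB

# Helper that generates the per-host certificates
sslcrtd_program /usr/lib/squid/security_file_certgen -s /var/lib/squid/ssl_db -M 4MB

# Peek at the TLS client hello, then bump (decrypt) everything
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```

The same CA certificate must, of course, be trusted by every client that goes through the proxy.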
8- Pi-Hole
DNS sinkholing is a mechanism aimed at protecting users by intercepting DNS requests attempting to connect to known malicious or unwanted domains and returning a false, or rather controlled, IP address. The controlled IP address points to a sinkhole server defined by the DNS sinkhole administrator.
This technique can be used to prevent hosts from connecting to or communicating with known malicious destinations such as a botnet C&C server. The Sinkhole server can be used to collect event logs, but in such cases the Sinkhole administrator must ensure that all logging is done within their legal boundaries and that there is no breach of privacy.
Pi-hole installation is very easy; follow this guide: basic-install. There is a big community of pi-hole users, so you will find a lot of information out there.
Another thing to take into account is the use of unbound. I use it, but there are some implications; here goes a summary of this "feature": DNS – unbound: Pi-hole includes a caching and forwarding DNS server, now known as FTLDNS. After applying the blocking lists, it forwards requests made by the clients to the configured upstream DNS server(s)... this leads to some privacy concerns, as it ultimately raises the question: whom can you trust? Furthermore, from the point of view of an attacker, the DNS servers of larger providers are very worthwhile targets, as they only need to poison one DNS server, but millions of users might be affected. When you operate your own (tiny) recursive DNS server, the likelihood of being affected by such an attack is greatly reduced. Basically, the recursive server sends a query to the DNS root servers asking about the top level domain if the answer is not cached and the domain is not blocked. The whole "tree" is walked…
Benefit: Privacy, etc...
Drawback: Traversing the path may be slow, etc...
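For reference, the recursive resolver part is one small unbound file. This sketch follows the pi-hole unbound guide; check it against the current version of that guide before using it:

```
# /etc/unbound/unbound.conf.d/pi-hole.conf - hedged sketch
server:
    verbosity: 0
    interface: 127.0.0.1
    port: 5335
    do-ip4: yes
    do-udp: yes
    do-tcp: yes
    do-ip6: no
    harden-glue: yes
    harden-dnssec-stripped: yes
    edns-buffer-size: 1232
    prefetch: yes
    num-threads: 1
```

Pi-hole's custom upstream DNS is then pointed at 127.0.0.1#5335 in the admin interface.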
9- Wallabag
After this short definition you are probably still wondering what on earth Wallabag is. Imagine you are in one of those situations where you are "downloading hardware", yes, you are in the restroom. Please, let's be honest: you, like me, use the phone in that "environment". I do not know why, but the inspiration in that situation increases, and increases... ha,ha,ha.. XD... and for whatever reason you find a lot of great content... but how do you save it to read later?... There are various ways to do that: send yourself an email with the link, add a bookmark, use a messaging application to send the link, ... I used all of these solutions but, to me, they are not functional. You must know that I eat a lot of "fiber"... XD. Of course it is a joke but, like the best jokes, it is true. Wallabag allows you to save any link (and its content) from anywhere. Of course a host is needed to store the information; you can use your own or an external hosting solution. As you can imagine, we will use the raspberry pi as the Wallabag host. The basic installation procedure can be found here, and these are the requirements and the vhost configuration.
In order to send information from your mobile phone or web browser to the Wallabag server you must install the official application or extension. The application is configured with the Wallabag server url and credentials (client id, client secret → token). Of course the Wallabag server (url) must be accessible from your client application (mobile phone or web browser…). As I said at the beginning, we do not expose anything directly to the wild, wild internet (except the VPN service). We use a VPN; in the case of mobile devices, that VPN is always-on. I recommend the use of tags in order to organize your links; you will understand me after some weeks of use. The Wallabag server uses https with a certificate signed by our local CA.
Here go some screenshots of the Wallabag application for mobile and of the web extension, which is available in the most popular web browsers:
From web browser:
10- Syncthing
We use syncthing to synchronize our mobile devices with a central server. That synchronization, a backup to me, includes photos, videos, messaging application data, etc... Of course the central server is a raspberry pi and, again, we use an always-on VPN to get access to this central server.
The syncthing building process is documented here, and the configuration documentation here. Please remember, you can use containers to deploy practically any of the services shown in this post; I already explained why we do not do it.
Here go some images of the server's management web ui and of the mobile application ui.
Server ui:
Mobile ui:
11- WireGuard
There is a lot of information about WireGuard out there. As a simple summary, you can easily build a secure and high performance VPN exposing only one UDP port. There is no typical concept of "state", user access roles, etc..., which is good and/or bad; in the case of this analysis we are comfortable with the solution. One of the best things (along with performance) is that you can move from wireless to 4G/5G to wired... with minimal impact. Of course, this solution is not valid for all situations; really, nothing is the best choice in all situations... well, except my wife or Charlize Theron.
This is the official installation guide of WireGuard, really easy: https://www.wireguard.com/install/. Take a look at the conceptual-overview too.
You can use the WireGuard service on practically any platform. We mainly still use the raspberry pi as the WireGuard "server" to provide access to all the clients but, I think that, since RouterOS version 7, a Mikrotik device can also provide the WireGuard service.
As always, it is a good idea to monitor any service. Since you have everything deployed here, you can create scripts to check whatever you want, in this case the WireGuard clients. This is the output of the WireGuard monitoring status in nagios core, where the connected clients and their data are displayed:
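Nagios aside, the core of such a check is just parsing the tab-separated output of `wg show <interface> dump`. A hedged python sketch; since WireGuard has no explicit session state, the 180-second handshake window below is my own heuristic for "connected":

```python
"""Sketch of a WireGuard 'connected peers' check built on `wg show ... dump`."""
import subprocess
import time

HANDSHAKE_MAX_AGE = 180  # seconds; a soft threshold, not a protocol fact

def connected_peers(dump, now=None):
    """Return [(public_key, endpoint, rx_bytes, tx_bytes)] for recently seen peers."""
    now = time.time() if now is None else now
    peers = []
    for line in dump.strip().splitlines()[1:]:  # first line describes the interface
        fields = line.split("\t")
        pubkey, _psk, endpoint, _ips, handshake, rx, tx = fields[:7]
        # handshake is a unix timestamp, 0 means "never"
        if int(handshake) and now - int(handshake) <= HANDSHAKE_MAX_AGE:
            peers.append((pubkey, endpoint, int(rx), int(tx)))
    return peers

def check_wg(interface="wg0"):
    """Run wg(8) and return the currently connected peers (needs root)."""
    dump = subprocess.run(["wg", "show", interface, "dump"],
                          capture_output=True, text=True, check=True).stdout
    return connected_peers(dump)
```

A plugin would then print the peer count and per-peer traffic as Nagios performance data.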
Connected peers data:
Connected peers over the time:
Bandwidth usage of WireGuard service:
12- Vulnerability analysis with GSM
Before “moving” the services to production, it is essential to check their security status. Please, at least, perform a vulnerability analysis... I'm sure you will find some surprises. There are a lot of tools you can use. In this case I tried the community-focused and free Greenbone Source Edition (GSE). The Greenbone Source Edition is adopted by third parties, for example Linux distributions like Kali, Alpine, etc.
Here goes the analysis summary of the servers and the router:
After apache hardening, most of the vulnerabilities are corrected. Consider the use of the mod_security module and the OWASP ModSecurity Core Rule Set (CRS). Perhaps you will have to add some exceptions in order to keep all this stuff working, but it is well worth doing. As I said, nothing is exposed directly to the internet, but this is not always possible; thus it is important to have some idea of the possible vulnerabilities and how to mitigate them. "Learning by doing", it's not mine but I totally agree...
The typical detected vulnerabilities are the use of anything that is not TLSv1.3, weak cipher suites, self-signed certificates, some http header configurations, etc... Remember to execute the analysis after the deployment of each new service, and at least twice a year. If you invest time in learning the tool (GSM, Nessus, nmap, etc..) it is easy to perform the analysis. The "roi" of these actions is huge!
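A hedged apache fragment addressing those typical findings; it assumes mod_ssl and mod_headers are enabled, and `SSLProtocol -all +TLSv1.3` requires apache 2.4.36+ built against openssl 1.1.1 or newer:

```apache
# /etc/apache2/conf-available/hardening.conf - sketch
SSLProtocol -all +TLSv1.3
SSLHonorCipherOrder off
Header always set Strict-Transport-Security "max-age=63072000"
Header always set X-Content-Type-Options "nosniff"
Header always set X-Frame-Options "DENY"
ServerTokens Prod
ServerSignature Off
```

After enabling it (a2enconf hardening; systemctl reload apache2), re-run the scan to confirm the findings are gone.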
Router vulnerability trend. As you can see the detected vulnerabilities are fixed and therefore the risk is reduced:
Server vulnerability trend:
Conclusion
In this post we have deployed all the services directly on our own server. Of course, using containers would simplify the setup and the portability, but the weight of control, security and learning is a little more important here. We have tried to do all this stuff with security in mind; for sure it could be done better, but I think it is a good starting point.
I want to thank you if you have read the post or part of it; it is a long post and my english is miserable. I'll be happy if, in some form, your next setups have a little bit more security "from scratch".