Workplace/computer & network security - how we do it and why
Klavs Klavsen
Head of Obmondo.com - we help ensure the security and reliability of your Linux and K8s setups, on-prem or in any cloud, enabling cost savings through a unique customer cost-sharing model, so you can focus on your business.
Many of us have tried working in places where every access to the internet is filtered - and this often means a lot of wasted time, as pages which actually HAD the information you needed were inaccessible - or required workarounds to access. It definitely HAS a cost to you as a person just trying to work :)
I understand WHAT the people who implement this type of security are TRYING to do.. but in my experience people always find workarounds, and for the ones attacking your employees' laptops, I do NOT believe it's anything but a small annoyance to get around - if a problem at all. Both DNS and HTTP tunnels can be made to work easily in all the banks and closed-off places I've been in.. The attackers aren't stupid either.
So in my mind, this is way too easy to subvert, and costs far more in actual employee productivity than you gain from it.
I've compiled a list of the 6 things I believe you should focus on, and why and how.
Before you read the list - remember security is a continuous journey - and we're not even where we want to be yet.. We still have things we can improve a LOT.. But we focus on keeping the balance between at least being able to detect, and keeping the users/developers effective in their work environment. You need to weigh your own priorities by KNOWING what each choice costs (to do, and in risk when you don't!) - so I'm merely suggesting you make a plan, revise it when you learn more.. and then slowly work towards that goal as part of everything else you do.
Also note that I am not writing about how to actually go about securing your applications against privilege escalation, ensuring you have proper privilege separation in your design of employee/customer roles, or any of all those other things which are also necessary on one's security map :)
We prefer using Open Source software for everything we do.
For me, the Open Source way (open collaboration) has always been an effective way to get much more done than I could ever have done by myself. The same goes for security, in my mind.
This allows us to:
- Work together with the community building this software - working towards the same goals, and always being able to extend and improve a tool we use (instead of being stuck without the features we need). Whenever we improve the software we use, we always talk with the community about building what we want in a way that will be accepted by them - to benefit ourselves as well (so we can keep receiving the improvements others make to the software, without having to merge our changes with theirs and risk unintended side effects, and so our improvement ideas and code get wider testing and review).
- Look at the code - when I don't understand why something is doing what I believe is wrong (to identify what the code actually does, and whether there's a config option that can fix it).. and also to have a good starting point for a constructive dialogue with upstream.. I've always found Open Source projects to be very positive in this regard, and much better than the paid support we get from "anything else".. sadly - as those always seem more focused on "fighting our way through the first layers of support - to reach someone who actually knows something and will try to help us" - and since we can't look at the code in that software, we're blocked much earlier in our own attempts to rectify an issue.
- Fix the code.. at least for my own needs, and report the issue upstream for a better fix. I've had to fix bugs in a lot of the software I use. In apt-get, for example (it returned 0 'success' when asked to install a specific version of a package - even though that version did not exist), which was a huge problem for my preferred system configuration tool, Puppet :)
- Learn from what the community is doing.. If you can't find any existing solutions for what you want - you're probably going about it the wrong way, and should look at how the community solves these same needs.. Talk to your community :)
In my experience, I have always been able to deliver better solutions using the strengths of Open Source than I would have been able to otherwise.
And being able to review the ACTUAL CODE CHANGES to software I use in specific, high-trust areas.. would definitely be something I'd do - and using Open Source allows me this option.
To manage our security, what I focus on is:
1. Have an open "outgoing" network - but detect "unusual" traffic..
and preferably also have something that blocks the origins of such traffic, or at least sounds an alert. Such tools exist and work well - if they're transparent (so you can see the actual traffic that occurred) and you have people who can use them properly. I prefer to split such solutions into subcomponents to ease my dependence on any "one tool" - and allow for more flexibility.
So instead of "fireeye" or whatever "integrated crap" you can buy, I'd prefer tools that can log the traffic seen, and then build your own dashboards from this with a tool like Kibana. I'm a fan of packetbeat for doing this - plus whatever tools you can find that can analyze the collected data (and Kibana, using some of the existing community dashboards for packetbeat, to visualize and help you find what you need to know).
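A rough sketch of what that could look like - the device, ports and Elasticsearch host below are assumptions you'd adapt to your own network:

```
# minimal packetbeat.yml sketch - adjust device, ports and output to your setup
packetbeat.interfaces.device: any

packetbeat.flows:              # generic network flows (who talks to whom, how much)
  timeout: 30s
  period: 10s

packetbeat.protocols:
- type: dns                    # DNS is a favourite exfiltration/tunnel channel
  ports: [53]
- type: http
  ports: [80, 8080]
- type: tls
  ports: [443]

output.elasticsearch:
  hosts: ["elasticsearch.internal:9200"]
```

From there the community packetbeat dashboards in Kibana give you a starting point, and you can add your own visualizations for questions like "which destinations have we never talked to before".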
Back in the day, I had a central firewall with a simple piece of software that simply listened for anything sending traffic to the gateway address on any of our networks - and if it received any traffic, it blocked that source IP.
This always meant that users with a virus came to IT support - because they lost their internet (and company network drive) access when it happened.. Simple and very effective :)
Today, I'd make sure to log every file-write operation on my fileservers - so ANY user writing to more than 5 files within 5 minutes would be blocked immediately. That's actually very doable (also in a Windows environment) - and should stop and detect the most common successful attacks on companies today (cryptolocker/ransomware attacks).
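A very rough sketch of that idea on a Linux fileserver - the share path, the threshold and the "block" action are all placeholders (and on Windows you'd use file-access auditing to get the same events):

```
#!/bin/sh
# Rough sketch: lock out any user writing to "too many" files in a short window.
# /srv/share, the threshold and the usermod lock are assumptions - adapt them.
WINDOW=300
THRESHOLD=5
LOG=/var/tmp/share-write-log

inotifywait -m -r -e close_write --format '%T %w%f' --timefmt '%s' /srv/share |
while read ts file; do
  user=$(stat -c %U "$file" 2>/dev/null) || continue
  echo "$ts $user" >> "$LOG"
  recent=$(awk -v now="$ts" -v w="$WINDOW" -v u="$user" \
             '$1 > now - w && $2 == u' "$LOG" | wc -l)
  if [ "$recent" -gt "$THRESHOLD" ]; then
    logger "write-guard: locking $user ($recent writes in ${WINDOW}s)"
    usermod --lock "$user"     # or kill their SMB session / block their IP
  fi
done
```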
2. No internet access for servers
Only your repo mirror servers should actually need internet access.
A very simple and golden rule.. and again - DETECT when they try to access something they shouldn't. There's your canary in the coal mine.
And this also helps your high availability - as your production won't accidentally be brought to a halt by external issues. (It can be very hard to identify that your website is actually down while your servers are idling - because it's waiting for some external URL that isn't responding :) - and such external dependencies get added by mistake all the time.. this way they get detected immediately (as they won't work in your test, QA/staging or production environments - so the issue will appear exactly when the problematic change is rolled out, and not later when the external URL breaks/changes).
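A minimal sketch of that rule with plain iptables - the addresses are placeholders, your internal ranges and repo mirror will differ:

```
# Sketch: default-deny outbound traffic on a server - and LOG anything unexpected.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT                      # internal networks
iptables -A OUTPUT -d 10.0.0.10 -p tcp --dport 443 -j ACCEPT    # repo mirror
iptables -A OUTPUT -j LOG --log-prefix "EGRESS-DENIED: "        # your canary
iptables -A OUTPUT -j REJECT
```

Ship those "EGRESS-DENIED" log lines into the same Kibana setup as in point 1 and alert on them.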
Do remember that, in the US at least, the statistics say that 60% of security breaches originate from employees.. So the principle of least privilege is VERY IMPORTANT - also to limit the attack surface from your internal networks - so you have your canary there as well.
3. No internet access for BUILD environments
For build servers - you MUST be able to build EVERYTHING you need to build and release your production environments (and preferably also older versions :) - without internet access. Too often I see build environments that just pull down whatever they need and build from there.. Are you ready for your builds to stop working because of something you have no control over? Or for your software to be subverted (injected with malware etc.) because of doing it this way?
There are many sad stories about how the internet broke because someone uploaded a broken package that many used - and their builds died. Also - it HAS happened more than once that attackers have managed to change software that builds download (often for short periods of time, and sometimes only on some mirrors) - and some setups don't actually check GPG signatures on what they download (make sure yours do!). And if you don't download on every run, the statistical chance of getting subverted is a LOT lower.
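Checking a signature on a downloaded artifact is cheap - a hedged sketch, where the URL, filename and key are made up for illustration (and in the spirit of this point, the downloads should of course come from your own mirror):

```
# Sketch: verify a detached GPG signature before trusting a downloaded tarball.
gpg --import trusted-project-key.asc        # ship the trusted key with your build setup
curl -O https://mirror.internal/foo-1.2.3.tar.gz
curl -O https://mirror.internal/foo-1.2.3.tar.gz.asc
gpg --verify foo-1.2.3.tar.gz.asc foo-1.2.3.tar.gz || exit 1
```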
If you look a bit at some of the stories of libraries that got subverted - you'll notice that SOME were actually caught, because the developers USING those in their builds did a source code diff from the old version of the library they were using to the new one, saw something that made no sense.. and didn't ignore it..
So YES - we SHOULD review dependencies.. we will NEVER catch everything.. but if you see code that's obfuscated (which many of these attacks use) - then don't ignore it.. Report it, and perhaps try a different library instead.. so at least you tried to be vigilant. It's not a thorough review of the library code (like you do with your own code, I'm sure!) - but a cursory look at how it works and the benefits it provides (and whether they're worth the dependency!) will benefit you in the long run.
4. Secure your users' computers.
Several large companies have been attacked through stolen identities (e.g. ssh keys) from developers' laptops. I once worked with a security company that attacked companies to verify their security (back in 2003.. it's a long time ago now :) - and one thing that always worked back then was attacking their employees' computers (send them a link to something using a browser exploit) and gaining access from there. It's still done this way today.. Many companies still allow Internet Explorer as their browser (even though, AFAIK, it still has known vulnerabilities that cannot be fixed, since it's part of the operating system)..
I'm in luck - we only hire people who have no issue with working on a Linux laptop (and we don't use tools that don't work on Linux :) - and then you can suddenly do a lot of things to "box in" the browser and email clients, so such exploits don't give undue access to the laptop. But I've always given my kids Linux at home - and most "normal users" don't even notice it's not their usual Windows OS.. they care about how the icon looks :) - so I do believe you could move most users to Linux - and, with proper IT support staff, gain a LOT from doing so (also in the ability to automate your management of employee laptops).
I would recommend https://www.qubes-os.org/ - or using tools like firejail (which uses a profile to specify exactly WHICH kernel operations the software is allowed to perform), or SELinux/AppArmor, on your laptops - ensuring the configuration for your main attack sources (browser and email program) is properly locked down.
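As a sketch of the firejail approach - the stock firefox profile already covers most of this, and which directives are available depends on your firejail version, so test before rolling it out:

```
# ~/.config/firejail/firefox.local - local tightening on top of the stock profile
whitelist ${HOME}/Downloads    # the browser only sees Downloads, not ~/.ssh etc.
private-dev                    # minimal /dev
caps.drop all                  # drop all Linux capabilities
seccomp                        # default seccomp syscall filter
nonewprivs                     # no privilege escalation via setuid binaries
```

Then running "firejail firefox" (or letting firecfg set it up system-wide) picks the profile up automatically.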
5. Be sure you can identify your employees for every operation they do.. and preferably in a way where their identity can't just be stolen.
To do this last part, in our company we've chosen to use the Yubikey (they develop an open standard, so other implementations exist - also with open hardware).
Google also removed a huge part of their security issues by rolling out such keys to support 2-factor login on everything (it supports U2F/FIDO2 as well as HOTP - which means you can just touch it to verify you're you - on e.g. google.com and tonnes of other websites).
Typically, developers already have ssh and gpg agents, and local ssh and gpg keys - so they unlock the ssh/gpg keys with their password and can then use them without doing anything further to authenticate themselves, "until they lock their computer again". The problem with this is that the identity can easily be stolen(!) when you don't trust your laptop - and you REALLY should STOP TRUSTING YOUR COMPUTER (see no. 4).
Whichever hardware you choose - what gives us the benefit, besides 2FA on websites etc., is to use it for our sysadmins and developers (they are pretty much the same bunch, as we do what's called 'devops' :) - so EVERY SSH and GPG operation I perform as an employee REQUIRES me to touch my Yubikey physically (it starts blinking).
This means that as a user, I just enter the password for the Yubikey when I unlock/log in (you have 3 attempts before it locks), and then I don't have to enter my password anywhere all day.. I just touch the key to authorize any operation.
Especially for the users who don't already use ssh and gpg key access - this is magic, and much better than what they do today (entering their password all the time :)
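The setup behind this is essentially letting gpg-agent act as your ssh-agent, with the private keys living on the Yubikey - a sketch using the defaults on most Linux distros:

```
# ~/.gnupg/gpg-agent.conf - let gpg-agent also answer SSH requests
enable-ssh-support

# in your shell profile: point SSH at gpg-agent's socket
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
gpg-connect-agent updatestartuptty /bye >/dev/null
```

With the authentication subkey on the Yubikey and its touch policy enabled (see below), every ssh connection then makes the key blink.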
The places where we DO have passwords we sometimes need, we store them in https://passwordstore.org (it stores passwords encrypted in a git repo - and has a browser plugin for use with website passwords) - which, besides being Open Source, requires a touch for EVERY separate password (file) you want to access.. This means it won't be easy to steal all our passwords from a compromised laptop - even if you do manage to trick the developer into touching their Yubikey a few times by mistake.
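Day-to-day use of pass is just a couple of commands - a quick sketch, where the entry names are made-up examples:

```
pass init <your-gpg-key-id>          # create the store, encrypted to your key(s)
pass git init                        # keep history (and share the store) via git
pass insert work/example.com/admin   # add a password
pass show work/example.com/admin     # decrypting it requires a Yubikey touch
```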
The only places where this touch requirement has been a real issue are with Ansible (as it ssh's directly to the machines it needs to work on) and when I needed to grant a new person access to the company passwordstore (as that requires decrypting and re-encrypting all affected files). Luckily Yubico acknowledged this issue and improved the firmware about a year ago to support caching a touch for X seconds (long press) - to support this use case, so the user can do it when they KNOW it's coming.
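Enabling the touch requirement (and the cached variant for exactly these cases) is done with ykman - a sketch, noting that the exact subcommands differ a bit between ykman versions:

```
# Require a physical touch for every OpenPGP operation on the Yubikey
# (ykman 4.x syntax - older versions use "ykman openpgp touch ...").
ykman openpgp keys set-touch sig on        # signing (git commits etc.)
ykman openpgp keys set-touch aut on        # authentication (ssh)
ykman openpgp keys set-touch enc cached    # decryption: a touch is cached for a short
                                           # while, which makes pass re-encryption bearable
```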
And when every developer has a Yubikey, it's easy to ask everyone to commit with 'git commit -S' - to actually SIGN their commits. With this, and review of code before merging, it should be much harder to get source code changes into your codebase from a compromised laptop, as you can improve your build process to always verify that git commits are from trusted parties (and that no commits are unsigned). With the gitlab-runner, for example, this can be done by setting a "required step" in the gitlab-runner config, so it always runs a local script before running the build (and in that script you'd then check the GPG signatures and history of the git clone you just got).
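A sketch of that idea with gitlab-runner's pre_build_script - the script path is a placeholder, and it assumes the trusted developers' public GPG keys are already imported on the runner (or in the build image):

```
# /etc/gitlab-runner/config.toml (excerpt) - run a check after clone, before the build
[[runners]]
  pre_build_script = "/usr/local/bin/verify-commit-signatures.sh"
```

```
#!/bin/sh
# verify-commit-signatures.sh (sketch): fail the build if the checked-out commit
# is not signed by a key in the runner's GPG keyring. Whether you verify just HEAD
# or every new commit on the branch is up to you.
set -e
git verify-commit HEAD
```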
6. Care about your FROMs in your docker builds and which docker images you use.
Many tend to forget that docker images are just software, and as such they need to have security updates applied. If you just download the latest Python from python.org - YOU have to check for relevant security updates yourself. (To avoid this, we use Ubuntu LTS - and as many of the packages they monitor and deliver security updates for as possible.)
Also - if you pull it from dockerhub on every build, and for your Kubernetes needs etc. - you rely on the availability and security of dockerhub (and of the users who build the image you fetch) EVERY TIME.
So make sure to have a local place where you keep the docker images (path+tag) you use - and again, don't allow build servers to access the internet directly at all (really!).
I prefer to build ALL our images from Ubuntu 20.04 LTS (and we'll migrate to 22.04 when that LTS is out :) - and I would even like to copy the build tools for that image (they are open source), so we can rebuild the base image all our docker images build FROM on a daily basis, using the software updates Ubuntu releases - thus removing any need for dockerhub access in our operations. On a cost/benefit ratio it's actually not hard to implement, and it removes one more potential issue.
Whenever we rebuild our base image, we then trigger a rebuild of all our docker images - and "voila".. everything can easily be kept up2date - also in the docker world - and being based on an LTS ensures as high a degree of trust as you can get that the docker image still works exactly as it did the last time it was built (the FROM is the same, we just push an update to it - just like using :latest tags - except 20.04 is LTS, so not as dangerous :).
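Until you go all the way and rebuild the Ubuntu base image yourself with their open source tooling, a simpler intermediate step is a daily-refreshed base image of your own - a sketch, where the registry name and tags are placeholders:

```
# base/Dockerfile - rebuilt daily so Ubuntu's security updates are always baked in
FROM ubuntu:20.04
RUN apt-get update && apt-get -y dist-upgrade \
 && apt-get clean && rm -rf /var/lib/apt/lists/*
```

```
# daily rebuild + push to the internal registry (names are placeholders)
docker build -t registry.internal/base/ubuntu:20.04 base/
docker push registry.internal/base/ubuntu:20.04
```

All other Dockerfiles then start with "FROM registry.internal/base/ubuntu:20.04", and rebuilding the base triggers rebuilding them.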