A DIY Browser Isolation Solution
The Problem Statement
Virus. Malware. Spyware. Worm. Trojan. These terms have all become somewhat synonymous, although technically they don’t describe the same type of unwanted activity. Fundamentally, they all describe intentional activity by someone else on your system which you probably don’t want. The vast majority of these infections arrive in one of two ways: either by downloading a file with an infected executable component, or by browsing to a site which hosts a malicious browser-based script. For this article, I’d like to focus only on the browser problem.
Problem History
When the Internet was young, you could stay safe by simply not browsing to non-reputable sites. As long as you stayed away from pornography sites, you probably wouldn’t pick up any browser-based malware or spyware. However, over the years, as computers have become more powerful and browsers have gained more features, the threat has shifted to mainstream sites. Copycat sites which VERY closely mimic the real site, using very similar domains, are often surfaced by Google’s algorithms, just waiting for someone to accidentally step on them. These sites often mimic financial institutions, storefront point-of-sale sites, email sites, etc. They are designed to collect your credentials, among other things. However, even if you don’t supply any credentials, the site may still have planted malware on your system.
Possible Protection Solutions
Historically, we solved this problem by running virus/malware detection software on our systems. That worked adequately for the majority of the early 2000s. However, such solutions rely primarily on signatures. Eventually, the need for end-system security outgrew the capabilities of simple virus scanners, so we wound up with enterprise endpoint security solutions to supplement the need, which incorporate behavioral anomaly detection. Such solutions would detect, for example, an office productivity application attempting to connect to the Internet. However, if we are honest, even these systems base their analysis on signatures of expected behavior.
Other solutions have wrapped the entire network with a protective shell which prevents systems from connecting to sites by blocking known-bad or suspicious DNS names or IP ranges. These solutions can provide great value when combined with geo-fencing (which blocks IP addresses from countries with which the company should never have contact). However, they suffer from the same problem: signatures, in the form of lists of IP ranges, known-bad DNS entries, or characteristics of entries.
The Signature Problem
I keep complaining about signatures. What’s the deal!?
Well, let’s take a scenario to explain the problem. Let’s say that your company, for whatever reason, hates Microsoft products. You set a policy that you do not want any Microsoft products anywhere on any system within the environment. How would you go about enforcing that? Well, perhaps you know (or could find) the names of the executables, and could gather the signatures of the files (time stamp, size, etc.), so that would help identify the products. You can probably find all the products back to the beginning of time, and ensure that no one has DOS disks copied to the network servers. But how do you handle it when Microsoft comes out with an update? Or a new product?
Microsoft is a company which sells software, and wants to make that software available to as many consumers as possible. Their software is rather easy to legally acquire and validate, and they publish much of the "signature" information listed above. Therefore, it is rather easy to update your lists of details about Microsoft products. But what if you want to do the same thing with viruses? Unlike Microsoft, “bad actors” with nefarious goals do not publish the details of their products, their source IP networks, their DNS names, etc. In fact, quite the opposite. They want to hide their activity. The signatures for their products will not be known until they are discovered by a security blue team.
Even the tools above may not be effective, because the better-funded groups subscribe to the same security products as the companies they attack, and test their malware against those systems before release. Therefore, companies cannot rely on signature-based solutions. It is an outdated and impractical plan. So what do we do?
Remote Browser Isolation
Well, for the browser problem we have been discussing, the best solution is to not browse at all. But, since that is completely impractical in today’s world, the second-best solution is to isolate the VIEWING from the BROWSING. In other words, let the web page data and all the active components required to make that page operate be rendered within an isolated space, allowing the user to simply see the results. This “browser isolation” provides significant protection for the user, by keeping all of the dangerous activity separate from the user’s meaningful data.
There are many commercial, as well as DIY, ways of making this work, depending on your level of comfort with the technology. The commercial solutions are usually a bit pricey, with varying levels of reliability. The DIY options are sometimes a bit complex, but manageable. I want to dive into how I solved the DIY side within my own home lab network.
The Proof of Concept
I installed Docker on my Windows system and pulled an image which had just Firefox and a VNC server. The idea is to use VNC to view the contents of the container, so that the container renders whatever page I visit via the installed Firefox. Then, when needed, I can destroy the container and rebuild it, which completely removes anything related to that browser session and its components. This literally takes just seconds.
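As a minimal sketch, assuming the jlesage/firefox image (which bundles Firefox with a browser-accessible noVNC front end on port 5800), the proof-of-concept cycle looks roughly like this; the container name and port mapping are illustrative:

```shell
# Pull an image bundling Firefox and a VNC/noVNC front end
docker pull jlesage/firefox

# Run it, publishing the built-in noVNC web port (5800) on localhost only
docker run --name firefox0 -d -p 127.0.0.1:5800:5800 jlesage/firefox

# Browse via http://127.0.0.1:5800; when done, destroy everything in seconds
docker kill firefox0 && docker rm firefox0
```

Removing the container discards the browser profile, cache, cookies, and anything a malicious page may have dropped inside it.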
That worked pretty well, but wasn’t something I could share with my family. Although I can manage the Docker workflow, they wouldn’t find the result worth the trouble. So, I needed something better.
Production
I built a small Alma Linux (a viable replacement for CentOS) server and installed Docker. I pulled the same image to it and was able to duplicate my earlier success. Now I had the problem of how to cycle the Docker container when the user session was complete. I planned to connect to the server on a different port than the one exposed by the container, and let some tool do that translation. Then, that tool (I hoped) could signal when the connection was complete. Since I had only worked with netcat (nc), I started trying to pound that square peg into the round hole, but soon went back to the well for a better solution. I found another utility called socat (socket cat), a far more capable successor to nc, offering forking, TCP keepalive, and several other advancements simply not available in nc. It’s now my new favorite tunneling tool.
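The shape of the socat relay I settled on can be sketched as follows; the port numbers are illustrative and match the per-instance scheme used later in the script:

```shell
# Listen on 4800; fork one child process per client connection;
# relay each connection to the container's noVNC port 5800.
# keepalive helps detect and tear down dead peers.
socat tcp4-listen:4800,fork,reuseaddr,keepalive tcp4:localhost:5800
```

With fork, each client gets its own child relay process, while the parent keeps listening for new connections.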
However, I ran into a snag. When I used fork, which was required, the child processes would exit upon session completion, but the parent listener remained up. I needed the parent to close as well once all the child processes had closed, and I couldn’t find any option or combination of options to make that happen. Well, socat is open source. We have ways of solving such things. So I downloaded the source and added two lines. Below is what I changed in the xiosigchld.c file.
111c111,115
<    if (num_child) num_child--;
---
>    if (num_child) {
>       num_child--;
>       if (!num_child)   /* Cliff add 20230613 */
>          exit(0);       /* Cliff add 20230613 */
>    }
I recompiled, tried it, and it worked… first time… <gasp>! (For people who haven’t suffered through programming: that almost never happens.)
Ok, now I had a tool that could block until the socket was used, and then exit. Next, I needed a way to flush the container for the next use. So, I built a bash script.
#!/usr/bin/bash
NUM=0

# Exit if another copy of this script is already running (ignoring our own PID)
if pidof -x "$0" -o $$ > /dev/null ; then
  #echo "already running!";
  exit;
fi

# Kill any orphaned socat listener left over on port 480$NUM
#echo "Cleaning up"
netstat -lntp | grep 480$NUM | awk '{ print $7 }' | sed 's#/.*##' | xargs kill > /dev/null 2>&1

# Main loop: destroy the container, rebuild it fresh, then block in socat
# until a user session completes
while true; do
  /usr/bin/docker kill firefox$NUM > /dev/null 2>&1
  /usr/bin/docker container rm firefox$NUM > /dev/null 2>&1
  #echo "Image Destroyed"
  /usr/bin/docker run --name firefox$NUM -p 127.0.0.1:580$NUM:5800 -d jlesage/firefox > /dev/null 2>&1
  #echo "Image built"
  /root/socat/socat-master/socat tcp4-listen:480$NUM,fork,reuseaddr,keepalive tcp4:localhost:580$NUM
done
This script is broken up into three parts. By the way, "NUM" distinguishes the different container instances; this copy of the script is for instance 0. There can be any arbitrary number of instances, limited only by available resources.
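The instance-number scheme simply appends NUM to a port prefix: instance 0 listens on 4800 and maps to a container published on 5800, instance 1 uses 4801/5801, and so on. A tiny illustration:

```shell
# Demonstrates the 480$NUM / 580$NUM port naming used by the script
NUM=1
echo "socat listens on 480$NUM and forwards to 580$NUM"
# → socat listens on 4801 and forwards to 5801
```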
The first part checks whether another copy of the script is running. If so, it just exits. This allows me to launch this script regularly from a cron job to ensure that the containers are always available.
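The cron entry can be as simple as the following (the script path is illustrative); thanks to the pidof guard, the extra invocations are harmless no-ops while an instance is already running:

```shell
# root's crontab: relaunch the container-reset script every minute
* * * * * /root/firefox0.sh
```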
The second part kills any existing, orphaned socat process. It is unlikely to ever be needed, but if that line were not there and a socat process became orphaned, it would continuously destroy and rebuild containers, causing lots of unneeded disk and CPU churn on the server.
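The cleanup pipeline works on netstat's PID/Program column. Here it is demonstrated against a canned line of output rather than a live socket table:

```shell
# Simulated `netstat -lntp` line; field 7 is the "PID/program" column
line="tcp  0  0 0.0.0.0:4800  0.0.0.0:*  LISTEN  1234/socat"

# awk grabs the PID/program field; sed strips the slash and program name
pid=$(echo "$line" | awk '{ print $7 }' | sed 's#/.*##')
echo "$pid"   # → 1234
```

In the real script, the resulting PID list is fed to `xargs kill` to clear any orphaned listeners.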
The third part is the main loop. It kills and destroys the existing container, then reruns it. This ensures the container is completely clean. It then launches a fresh copy of the socat as a listener to ensure that the service is available. The script will pause at the socat line until a user connects. Once the user closes their browser, the socat listener will close (thanks to my hack), and allow the script to continue in the loop.
Adding the Functionality
Now, I downloaded the free Kemp load balancer. Incidentally, this is a rather feature-rich free product. I set up a VIP on my network on port 80, balancing to real servers in the back on the various ports specific to each container instance. I didn’t use a health check, because a connect and disconnect on the port would trigger the container reset. However, a health check COULD be run every few minutes if you want to ensure that the container has been flushed. I don’t see a need in my environment.
Security
Now, I need to ensure that if I browse to a site which compromises my container, the container cannot access anything else within my environment. So, I placed the server behind a Fortigate firewall that restricts its access to the rest of the environment but allows it to reach the Internet. Incidentally, the Fortigate is a very powerful, yet tiny, firewall. It’s not freeware, but for a SOHO, or even a small office, it should be on the list of systems to consider.
Here is a logical flow.
                   ┌────────────────────────────────────────────┐
                   │                                            │
                   │       ┌────────────────────────────┐       │
                   │       │                            │       │
┌─────────┐    ┌───┴─────┐ │ ┌─────────┐    ┌─────────┐ │  ┌────┴────┐   ┌──────────┐
│  Client ├───►│  Load   ├─┤ │  socat  ├───►│ Docker  ├─┼─►│ Firewall├──►│ Internet │
│         │    │ Balancer│ │ │         │    │Container│ │  │         │   │          │
└─────────┘    └───┬─────┘ │ └─────────┘    └─────────┘ │  └────┬────┘   └──────────┘
                   │       │        Linux Server        │       │
                   │       └────────────────────────────┘       │
                   │                                            │
                   │              Isolated Network              │
                   └────────────────────────────────────────────┘
Also, to hide my IP space from the sites I access through this server, I set up Cloudflare’s free VPN client (WARP), which tunnels all Internet-bound traffic from my Docker host. As a result, it all appears to come from a Cloudflare IP rather than my home network.
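On the Docker host, the WARP setup boils down to a couple of warp-cli invocations. Treat the exact subcommand names as assumptions; they have changed across warp-cli releases, so verify against `warp-cli --help` on your version:

```shell
# Register this machine with Cloudflare WARP, then bring up the tunnel
# (subcommand names vary across warp-cli releases)
warp-cli register
warp-cli connect
warp-cli status    # confirm the tunnel is up
```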
Operational Considerations
In order to ensure that each user talks only to their own Docker container, I set up source-address persistence. The challenge in using a classic L4 load-balancer solution for this type of design is that when the LB runs out of servers, it will recycle and start connecting to existing ones. This means that another user may connect to a container which is currently in use. That isn’t optimal. Fixing this may require an additional layer which combines user authentication and load balancing. I don’t need such functionality within my environment, so I will only mention it.
Also, with the basic settings, it is possible for an attacker who controls the container to reach the Docker host. There are lots of articles on the best ways to mitigate this. Controlling the Docker host could give an attacker much more capability, primarily using your Internet connection as a launch pad toward the Internet. In a production environment this could get your IP space blacklisted. Within my environment, however, this isn’t a large risk. Since everything about this system is ephemeral, I keep a VM snapshot of the machine and will simply restore it if I ever suspect something is amiss.
The Final Product
Now, here’s a picture of me browsing. Notice that my window (which says "Firefox") is actually just connecting to another browser; the inner Firefox is the one inside the container.
As soon as I close this window, everything will be destroyed. So, read fast. :)
Hopefully this is useful. Thanks for reading.