Remember how in the last post I talked about the Source Interface Filter on FortiGate DNAT policies? And remember how DNAT policies overrule static routes? Well, if you ever find yourself with a guest network that needs to be able to talk to the DMZ, make sure to add the guest network’s interface to your Source Interface Filter. Then, when a system on the guest network tries to reach the IP of something in your DMZ that has an associated DNAT policy, the FortiGate will route the traffic correctly. I guess this is basically a hairpin NAT?
FortiGate DNAT and Routing Table
If you have Destination NAT (DNAT) set up on your FortiGate, you may have noticed this button:

It took me a long while to figure out what this button means. We have two internet uplinks that, at the time of writing, are not aggregated into a redundant link or an SD-WAN link of any kind. We have two static routes set up, and if one ISP goes down, we disable its route and the one with the lower priority becomes active. It’s a less-than-ideal setup that I’ll be fixing in a future post, but for now it makes for an interesting situation that highlights something about how DNAT works on the FortiGate.
I noticed recently that the systems whose local IPs had been set up with DNAT were not able to communicate out to the internet. Traffic initiated from outside coming in worked fine, but from those systems I couldn’t browse out to the internet. In our case, these were Citrix controllers, so it took me a long time to notice, because nobody was using those systems to browse the internet.
When I finally dug into what was going on, I realized just how important that little button up top is. Let’s say we have two static routes set up:
Order | Destination IP | Gateway |
1 | 0.0.0.0/0 | 50.100.123.123 |
2 | 0.0.0.0/0 | 200.100.123.123 |
We also have two DNATs set up:
Order | Details | Interface |
1 | 200.100.123.124 –> 192.168.1.10 | Port 2 |
2 | 50.100.123.124 –> 192.168.1.10 | Port 1 |
In this case, when the system at 192.168.1.10 attempts to reach out to the internet, I would have expected the FortiGate to send that traffic along the active static route, in this case via 50.100.123.123. Instead, the FortiGate checks the DNAT rules first, before the static routes, and matches the local IP against that table. It then sends the traffic out the interface assigned to that DNAT rule, using the public IP address set in the DNAT rule.
Obviously, this can create a lot of confusion because then, on the return trip, the FortiGate gets a packet that doesn’t make sense and doesn’t know how to route it back to the system, so the system can’t communicate over TCP. So how do we fix this? Using that button. It’s fairly obvious in hindsight, but that button means that the DNAT rule only applies if the packet is coming in through the specified interface. By setting that filter to Port 2 in the first DNAT rule, I can be sure that traffic originating from 192.168.1.10 on my DMZ port (we’ll call it Port 3) will still follow the static route, and the DNAT rule will only apply when the packet is coming from Port 2.
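For reference, here’s roughly what that looks like from the CLI, assuming the DNAT rule is defined as a VIP object. The object name below is a placeholder, and depending on your FortiOS version the GUI filter maps to the extintf or srcintf-filter setting:
config firewall vip
    edit "Citrix-DNAT-Port2"
        set extip 200.100.123.124
        set mappedip "192.168.1.10"
        # Only apply this DNAT to traffic arriving on Port 2, so traffic leaving
        # via the DMZ port still follows the static routes
        set extintf "port2"
    next
end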
I hope this helps you figure out why, despite having static routes, firewall policies, and central NAT rules in place, nothing is working and your system can’t talk out to the internet. Maybe this was obvious, and maybe this is how it works on every other router, but this had me well and truly stumped for days.
Cheers!
FortiSwitch Port Configuration via FortiLink (Don’t Do It)
We recently switched from Cisco products to Fortinet products for our network stack. We decided, perhaps unfortunately, to hook the FortiSwitch up to the FortiGate via the FortiLink (I’m not even kidding with this terminology, their branding is legit), but this made it difficult to configure the ports on the FortiSwitch with any granularity.
If you’re reading this and are thinking about doing this, I’d recommend against it. You wind up having to go through the FortiLink to perform any configuration on the FortiSwitch, and some configuration elements are not exposed in the FortiGate GUI, requiring you to go through the CLI to configure them. You might think that you can still get to the FortiSwitch’s web interface via its direct IP, so it’ll be okay, but once you create the link with the FortiGate, changes made through the FortiSwitch’s FortiGUI (sorry, this one’s a joke) will not override configurations made via the FortiLink.
If you really want to do this, though, make sure you have a really straightforward setup, that you desperately want everything in one single pane of glass, and that you’re comfortable doing work in the CLI.
All of that being said, the below is how you’ll get to the FortiSwitch’s port configurations via the CLI from the FortiGate (over the FortiLink).
config switch-controller managed-switch
edit [Switch SN]
config ports
edit [port#]
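To make that concrete, here’s a minimal sketch with the placeholders filled in. The serial number, port name, and description below are made up, and the per-port options available vary by FortiOS and FortiSwitch version:
config switch-controller managed-switch
    edit "S248DF1234567890"
        config ports
            edit "port5"
                # Example per-port setting; VLAN, PoE, and similar options live here too
                set description "Uplink to third-floor AP"
            next
        end
    next
end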
Connect-MgGraph “Invalid provider type specified”
If you’re setting up a certificate-based connection to Microsoft Graph PowerShell (or whatever they’ve decided to name it by the time you’re reading this; you know what I’m talking about) and you’re getting an error when running:
$cert = Get-ChildItem Cert:\LocalMachine\My\$CertThumbnail
Connect-MgGraph -certificate $cert -ClientId $ClientID -TenantId $TenantID
don’t worry, that just means you need to run PowerShell as admin. You’re accessing the local machine’s cert store, so you need admin rights to do this.
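If you want the script to fail fast instead of throwing that cryptic error, here’s a quick guard you could drop in at the top. This is just a sketch and isn’t part of the Graph module itself:
# Bail out early if the session isn't elevated (needed to read the LocalMachine cert store's private keys)
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = [Security.Principal.WindowsPrincipal]$identity
if (-not $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) {
    throw "Run this script from an elevated PowerShell session."
}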
Azure AD Synchronization Customization Cont.
In a previous post I talked about customizing the Azure AD sync rules to do some gymnastics with AD attributes getting imported into Azure AD. Recently I ran into a vendor who required that the email address’s capitalization match the capitalization in their SSO entries in order for the SSO to work. So if, on my side of things, I formatted people’s email addresses with capital letters (First.Last@domain.com), but in the application someone’s email address was entered in all lowercase (first.last@domain.com), these two entries would not match and the SSO would not work.
I know. I’m flabbergasted as well.
To solve this, we resolved to always use lowercase for email addresses in both our AD and the application. But we’re human, people make mistakes, and more importantly people leave jobs and take institutional knowledge like this with them, so we may as well try to make the computers do some of this work for us. As it turns out, the AD Sync synchronization rules editor has functions to convert strings to all uppercase or all lowercase. We’ll use the previous post as a jumping-off point.
Modified: IIF(IsPresent([extensionAttribute1]),LCase([extensionAttribute1]), IIF(IsPresent([userPrincipalName]),[userPrincipalName], IIF(IsPresent([sAMAccountName]),([sAMAccountName]&"@"&%Domain.FQDN%),Error("AccountName is not present"))))
Wrapping [extensionAttribute1] with LCase() will force whatever is in the user’s AD extensionAttribute1 attribute to be sent to Azure AD in all lowercase. This makes sure that, at least from the IT side of things, we won’t have any problems if an address accidentally gets set up with capital letters in AD.
Crontab – Run on the First Tuesday of a Full Week
I recently had a cronjob that I wanted to run on the Tuesday of the first full week of the month. Well, crontab doesn’t handle this on its own, obviously, but it got me thinking about how to figure this one out. As it so happens, that Tuesday will always fall somewhere between the 2nd and the 8th.
I’m defining the first full week of the month as the first week where, starting Monday, every day of that week falls in the same month.
So the earliest full week is one where the 1st falls on a Monday, which puts its Tuesday on the 2nd.

Following that logic, the latest full week would be one where the month starts on a Tuesday, meaning the first full week starts on Monday the 7th. That puts the Tuesday of the first full week on the 8th.

So if we make a cronjob that runs every Tuesday and checks that the day of the month is >= 2 and <= 8, we will always catch the Tuesday of the first full week of the month.
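Here’s a sketch of what that crontab entry could look like; the run time and script path are placeholders. Note that you can’t just restrict both the day-of-month and day-of-week fields, because when both are set cron treats them as an OR rather than an AND, which is why the date check lives in the command itself:
# Every Tuesday at 03:00, but only run the script when the day of the month is 2-8
# (percent signs have to be escaped as \% inside a crontab entry)
0 3 * * 2 [ "$(date +\%d)" -ge 2 ] && [ "$(date +\%d)" -le 8 ] && /path/to/script.sh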
OAuth2 Proxy Cont.
As is any SysAdmin’s wont in life, I’ve been messing around with OAuth2-Proxy and trying to add additional functionality beyond It-Finally-Works. If you haven’t already seen my previous post about setting up OAuth2-Proxy, please check it out since I’ll be working from that foundation.
Sign Out
While it might not have been the most important thing for a wiki page, it would be nice for my users to have the option to sign out if, say, they’re on a public computer (and responsible enough users to actually think of that, but that’s another story altogether).
This should be as simple as putting a “sign out” link somewhere for users to click, but what URL do we use there? Well, there are two things we have to consider: the locally cached cookies, and the actual IDP session. If we clear the first but not the second, we’ll be taken back to a login screen, but as soon as the IDP auth begins, the IDP will say “No need, you’re already logged in” and send us on our way without a username and password prompt. If we end the second but not the first, we won’t even get the sign-in screen, because the cookies will still be cached. Even if you’re logged out from the IDP’s perspective, OAuth2-Proxy still sees the cookies and will let you in without needing to check with your IDP.
OAuth2-Proxy’s documentation tells us we can use the following to clear cookies:
/oauth2/sign_out?rd=https%3A%2F%2Fmy-oidc-provider.example.com%2Fsign_out_page
but then we also need to redirect to the IDP to close the session there as well. For Azure AD, that URL is:
https://login.microsoftonline.com/common/oauth2/v2.0/logout?post_logout_redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
So we’ll need to combine these two into a URL. That monstrosity (including all of the HTML URL encoding necessary) should look something like this (assuming you’re using Azure AD as your IDP):
https://wiki.domain.com/oauth2/sign_out?rd=https%3A%2F%2Flogin.microsoftonline.com%2Fcommon%2Foauth2%2Fv2.0%2Flogout%3Fpost%5Flogout%5Fredirect%5Furi%3Dhttps%3A%2F%2Fwiki.domain.com
This URL will first tell OAuth2-Proxy to remove its cookies, then redirect (rd) to login.microsoftonline.com to log out of Azure AD, then tell Azure AD to re-route back to wiki.domain.com. From the user’s perspective, they’ll click the sign-out link, choose which signed-in account to log out of, get bounced through a couple of informational screens, and then land back at the sign-in page.
There are two more steps we need before we’re done. First, go into the Azure Portal, and go back to your registered app (Azure AD > App Registrations, and click your registered app). In the left-hand panel, go to “Authentication” and in the main panel, scroll down to “Front-channel logout URL.” Here, put in https://wiki.domain.com/oauth2/sign_out. I’m not entirely sure if this is correct, since in my testing I couldn’t quite get single sign-out to work right, but it couldn’t hurt.
Finally, and this is important, go into your config file and add whitelist_domains = "login.microsoftonline.com" (or whatever domain your IDP uses). Without this, OAuth2-Proxy won’t redirect to your IDP.
OAuth2-Proxy
Don’t want to hear me babble and just want to get to the meat? Click here to go straight to the instructions.
My company recently published a company wiki for end users to go to in order to find answers to common tech issues we’ve seen in our environment (wishful thinking, I know). And even more recently, we’ve found that we wanted to put up some more sensitive information that we wouldn’t want out on the public internet. To solve this, I wanted to force users to authenticate using their Azure AD SSO credentials before viewing the wiki.
Our wiki is published through a WordPress site, and considering how many plugins there are for WordPress, I figured it couldn’t be that difficult to find something I could use, right?
Wrong.
Turns out there are a few plug-ins that will allow admins to authenticate with SSO to administrate the site and publish, but nothing that would require visitors to authenticate before viewing the site. After a bunch of searching, I finally found my solution: OAuth2-Proxy.
Now for the catch: this does exactly what I wanted it to do, but the documentation is terrible, and I have an incredibly rudimentary knowledge of how Apache and reverse proxies work. Cue a few days of Just Trying Stuff ™ before finally finding the combination of things that worked.

So here’s all I’m trying to accomplish. I want a user to go to my site (wiki.domain.com), receive an SSO prompt, log in, and then get to my site. Simple, right? Below is a little diagram that OAuth2-Proxy presents that shows what I’m trying to do.

In this case, I’ll be using OAuth2-Proxy as my reverse proxy. Thankfully it has this built-in so I don’t have to go through the headache of making this work with NGINX (something I only barely know how to configure to begin with).
First things first, I need to get things set up in Azure AD, which will be my Auth Provider. Because this is using OAuth2 and not SAML, I can’t create an Enterprise Application in Azure; we’ll use App Registrations under Azure AD instead. Also, because this is Microsoft and they insist on changing their UI nearly constantly, this guide comes with the customary guarantee of 5 feet or 5 minutes, whichever comes first.
Azure AD
- Go to Azure AD and, in the left panel, go to Manage > App Registrations
- Click New Registration
- Give the app a name, leave everything else default.
- Click Register.
- In the app, on the Overview page, note the Application (client) ID and the Directory (tenant) ID.
- In the left panel, in Manage > Authentication, under “Redirect URIs,” add a new one for https://wiki.domain.com/oauth2/callback. Save.
- In the left panel, in Manage > Certificates & secrets, under Client Secrets, create a new client secret. Note the Value (not the Secret ID). Also note the expiration on the secret. This will need to be renewed when the secret expires. Microsoft no longer allows secrets that do not expire.
Linux
I went with Ubuntu as the OS for my OAuth2-Proxy server. I will also note here that I’m primarily a Windows sysadmin who has been allowed to dabble in Linux, so I might be doing stuff all funky like. Don’t @ me.
- Create your working directory (/home/username/oauth2proxy)
- Create a logs directory (/home/username/oauth2proxy/logs)
- Create a www directory (/home/username/oauth2proxy/www)
- Go to https://github.com/oauth2-proxy/oauth2-proxy and download the appropriate binary (wget URL/to/file)
- Extract from the tarball (tar -xf filename)
- Move oauth2-proxy to the root of the working directory (/home/username/oauth2proxy)
- Run dd if=/dev/urandom bs=32 count=1 2>/dev/null | base64 | tr -d -- '\n' | tr -- '+/' '-_'; echo and note the result as your cookie secret
- Obtain a TLS cert and key in PEM format. Easiest to do this with certbot (see the example command after this list).
- (Optional) Place a logo file at /home/username/oauth2proxy/www/logo.png
- Create a config file (/home/username/oauth2proxy/config.cfg) with the following:
provider = "azure"
client_id = <enter client ID here from above>
client_secret = <enter client secret value from above>
oidc_issuer_url = "https://sts.windows.net/<enter tenant id here>/"
cookie_secret = "<enter cookie secret here from above>"
email_domains = "*"
upstreams = "https://<IP address of site behind SSO>:<port>/"
http_address = "127.0.0.1:80"
https_address = ":443"
request_logging = true
standard_logging = true
auth_logging = true
logging_filename = "/home/username/oauth2proxy/logs/log.txt"
ssl_upstream_insecure_skip_verify = "true"
tls_cert_file = "/path/to/cert.pem"
tls_key_file = "/path/to/privkey.pem"
force_https = "true"
custom_sign_in_logo = "/home/username/oauth2proxy/www/logo.png"
- Create a Bash script (oauth2proxy.sh):
#!/bin/bash
/home/username/oauth2proxy/oauth2-proxy --config /home/username/oauth2proxy/config.cfg
- Make the script executable (chmod 755 oauth2proxy.sh)
- Copy the script to /etc/init.d
- Create a symlink to run the script on startup (ln -s /etc/init.d/oauth2proxy.sh /etc/rc3.d/S02oauth2proxy.sh)
- Reboot the server and confirm that the script is running
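For the TLS cert mentioned above, a certbot run along these lines will produce the pem and key files, assuming wiki.domain.com already resolves to this server and port 80 is free for the standalone challenge:
# Certs land under /etc/letsencrypt/live/wiki.domain.com/ (fullchain.pem and privkey.pem)
sudo certbot certonly --standalone -d wiki.domain.com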
DNS and Networking
In DNS, make sure that wiki.domain.com is pointing to the public IP address of your OAuth2-Proxy server. You also want to make sure that the server running the wiki is only allowing http and/or https traffic from your OAuth2-Proxy server, otherwise people can do an end run around your proxy server and access the wiki directly via IP.
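How you do that depends on what’s sitting in front of the wiki. As one illustration, if the wiki server happens to be Linux with ufw, the lockdown might look something like this (the proxy IP is a placeholder, and the order matters because ufw evaluates rules in the order they were added):
# Allow web traffic from the OAuth2-Proxy server only, then block it for everyone else
sudo ufw allow from 203.0.113.10 to any port 80,443 proto tcp
sudo ufw deny 80/tcp
sudo ufw deny 443/tcp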
Stuff That Didn’t Work (And How To Fix It)
Here are some of the issues and roadblocks I ran into while I was implementing this, and how I went about solving them.
Browser gives a “Redirected too many times” error after SSO authentication
In the config file, make sure the syntax for the upstreams parameter is exactly what I have. I had to make sure I included the port to forward traffic to (even when forwarding http traffic to port 80) and had to make sure I ended the line with “/”.
Receiving a 403 Forbidden page after SSO authentication
In the config file, make sure to set the email domains to “*”. I originally had my email domain here, and maybe I need to figure out what the actual correct syntax here is, but I wound up giving it the “Domain Admins” treatment.
Can’t navigate to subpages on the upstream site
So I could go through SSO authentication and get to wiki.domain.com, but I could not then click on any links or get to wiki.domain.com/subpage. Turns out all the links on my site were pointing to http://wiki.domain.com/subpage instead of https://wiki.domain.com/subpage. Changing all of the links (I found a WordPress plugin that would do this for me in the WordPress database) to start with https://wiki.domain.com worked.
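If you’d rather skip the plugin and you have shell access to the WordPress server, WP-CLI’s search-replace command can do the same rewrite against the database. A hypothetical run (preview first, and take a database backup beforehand):
# The dry run shows what would change; the second command actually rewrites the URLs
wp search-replace 'http://wiki.domain.com' 'https://wiki.domain.com' --dry-run
wp search-replace 'http://wiki.domain.com' 'https://wiki.domain.com'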
Delete Files Based On File Age
Ever wanted to delete every file over a certain age? Maybe for pesky log files that are ballooning the storage on your server?
The script below will delete all files in a specified folder that are older than the current date. Modify as necessary to change the age of the files you want to remove. Set up a Windows scheduled task to run it as needed.
$folder = "C:\Path\To\Folder"
# Midnight today; anything last written before this counts as older than the current date
$date = (Get-Date).Date
$files = Get-ChildItem -Path $folder -File | Where-Object { $_.LastWriteTime -lt $date }
$files | Remove-Item
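To change how old a file has to be before it gets deleted, just move the cutoff back. For example, to only delete files older than 30 days, you could swap the $date line for something like this:
# Anything last written more than 30 days ago gets removed
$date = (Get-Date).AddDays(-30)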
Enable Inheritance Without Taking Ownership
Having NTFS permissions that are messed up is a HUGE headache. Fixing them means trying to trick NTFS into letting you do what you need to, and sometimes it just won’t let you. Below is my nuclear option that will, at the very least, get you back to a point where you can make the changes you need.
- Download the NTFSSecurity PowerShell module, unblock the zip file, then extract it to C:\Windows\System32\WindowsPowerShell\v1.0\Modules
- Make sure that the top-level folder has the permissions you want to inherit, and make sure you have permissions on that folder.
- Run PowerShell as admin.
- Run the following commands from the folder you want to propagate inheritance down from:
Import-Module NTFSSecurity
# Enable the backup/restore privileges so you can modify ACLs you don't currently have rights to
Enable-Privileges
# Re-enable ACL inheritance on everything below the current folder
Get-ChildItem -Recurse | Enable-NTFSAccessInheritance