Cybersecurity in Plain English: How Did They Use the Real Email Domain?

Once in a while, I get the chance to pull back the curtain on how threat activity works in this column, and a recent question – “I got a fake email from Microsoft, but it was the REAL microsoft.com domain – how did they do that?” – gives me the opportunity to do so now. Let’s take a look at some of the tricks threat actors use to make you think that spam/threat/phishing email is actually coming from a domain that looks legitimate.

 

Technique 1: Basic Spoofing

Threat actors are able to manipulate emails in many ways, but the most common is to just force your email application to display something other than the real email address they’re sending from. There are several ways to do this, but the most common involves the manipulation of headers. Headers are metadata (data about data) that email systems use to figure out where an email is coming from, where it should go to, who sent it, etc. One of the most common techniques involves using different headers for the display name (what you see in the message before you hover over the From: field) and the actual email address the mail is coming from (which you can see by hovering over the From: field). This would result in a situation where you get an email from “Microsoft Support” with a completely unrelated address behind it – somewhat easy to spot if you hover over the sender and see what email address it’s really from.

If you’re wondering why email systems don’t reject messages like that, it’s because this situation is a valid feature-set of how email works. Simple Mail Transfer Protocol (SMTP) is the method used by the whole world to send emails, and part of that protocol allows for a display name in addition to an email address. This is how your company’s emails can have the name of the person that sent it to you, or a company can give an email account a friendly name – so there’s a trade-off here. While the feature is legitimate, it can be used for malicious purposes, and you need to look at the actual email address of the sender and not just the display name. 
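If you’re curious what that looks like under the hood, here’s a minimal sketch using Python’s standard email tools. The addresses are made up purely for illustration – the point is that the friendly display name and the real sending address are just two parts of the same From: header, and nothing in SMTP forces them to match.

    from email.utils import parseaddr

    # A spoofed From: header pairs a trusted-sounding display name with an
    # address the threat actor actually controls (both made up here).
    from_header = '"Microsoft Support" <randomsender@unrelated-domain.example>'

    display_name, real_address = parseaddr(from_header)
    print(display_name)   # Microsoft Support  <- what most mail apps show by default
    print(real_address)   # randomsender@unrelated-domain.example  <- what you see when you hover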

 

Technique 2: Fake Domains

“OK,” you say, “But I definitely have gotten fake emails that used real email addresses for a company.” While you’re not losing your mind, the emails did not come from the company in question. Threat actors use multiple tricks to ma​ke you be​lieve that the email dom​​ain that message came from is real. For example, in the last sentence, the words “make,” “believe,” and “domain,” aren’t actually those words at all. They have what is known as a “zero-width space” embedded into them. While this space isn’t visible, it’s still there – and my spell-checker flagged each of the words as mis-spelled because they indeed are. Techniques like this allow a threat actor to send an email from “support@m​icrosoft.com” because they registered that email domain with an invisible space between the letters (between the “m” and the “i” in this case). To the naked eye, the domain looks very much real, but from the perspective of an email system, it is not actually the microsoft.com domain, and therefore is not something that would get extra attention from most security tools. 
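For the technically inclined, spotting those invisible characters is easy to script – here’s a small sketch in Python (the sample domain is hypothetical):

    # Common invisible characters used to pad out look-alike domains.
    ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # zero-width space/non-joiner/joiner, BOM

    def has_hidden_characters(domain: str) -> bool:
        return any(ch in ZERO_WIDTH for ch in domain)

    print(has_hidden_characters("microsoft.com"))        # False
    print(has_hidden_characters("m\u200bicrosoft.com"))  # True - zero-width space hiding after the "m"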

This same theory can be used in another way. For example, have a look at AMAΖON.COM – notice anything odd there besides it being in all caps? Well, the “Z” in that domain name isn’t a “Z” at all – it’s the capitalized form of the Greek letter Zeta. Utilizing foreign characters and other Unicode symbols is a common way to trick a user into believing that an email is coming from a domain that they know, when in fact it is coming from a domain specifically set up to mislead the user. 
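Here too, a few lines of Python can reveal the trick. This sketch simply reports any character in a domain that isn’t plain ASCII, and shows how internationalized domains are converted to a very different-looking “punycode” form on the wire (the domains shown are examples, not real senders):

    import unicodedata

    def unusual_characters(domain: str) -> list:
        # Report any character outside plain ASCII, along with its Unicode name.
        return [f"{ch} ({unicodedata.name(ch)})" for ch in domain if ord(ch) > 127]

    print(unusual_characters("AMAZON.COM"))        # []
    print(unusual_characters("AMA\u0396ON.COM"))   # ['Ζ (GREEK CAPITAL LETTER ZETA)']

    # On the wire, a domain containing non-Latin characters is encoded as
    # "punycode", which starts with "xn--" and looks nothing like the brand
    # being imitated.
    print("ama\u03b6on.com".encode("idna"))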

There are two ways to defend against this kind of malicious email activity. The first – and most important – is to follow best practices for cyber hygiene. Don’t click on links or open attachments in email, and never assume that an email is from who you think it is from without proof. Did you get an email from a friend with an attachment that you weren’t expecting? Call or text them to check that they sent it. Get an email from your employer with a link in it? Hover over the link to confirm where it goes – or better yet, reach out to your IT team and make sure you are supposed to click on that link. Most companies have begun to send out pre-event emails such as “You will be receiving an invitation link to register for our upcoming event later today. The email will be from our event partner – myevents.com.” in order to make sure users know what is real and what is suspicious if not outright fake.

The second defense is one you can’t control directly, but is happening all the time. Your email provider (your company, Google for GMail, Outlook.com for Microsoft, etc.) is constantly updating lists of known fake, fraudulent, and/or malicious email domains. Once a fake domain goes on the list, emails that come from there get blocked. While this is an effective defense, it can’t work alone as there will always be some time between when a threat actor starts using a new fake domain and when your email provider discovers and blocks it.

 

In short, that email from that legitimate looking email address may still be fake and looking to trick you. Hovering over the email sender name to see the full and real address and following good cyber hygiene can save you from opening or clicking something that is out to do you, your computer, and/or your company harm.

Cybersecurity in Plain English: Should I Encrypt My Machine?

A common question I get from folks is some variant of “Should I be encrypting my laptop/desktop/phone?” While the idea of encrypting data might sound scary or difficult, the reality is the total opposite of both, so the answer is a resounding “YES!” That being said, many people have no idea how to actually do this, so let’s have a look at the most common Operating Systems (OSs) and how to get the job done.

First, let’s talk about what device and/or disk encryption actually does. Encryption renders the data on your device unusable unless someone has the decryption key – which these days is typically either a passcode/password or some kind of biometric ID like a fingerprint. So, while the device is locked or powered down, if it gets lost or stolen the data cannot be accessed by whoever now has possession of it. Most modern (less than 6 to 8 years old) devices can encrypt invisibly without any major performance impact, so there really isn’t a downside to enabling it beyond having to unlock your device to use it – which you should be doing anyway… hint, hint…

Now, the downside – i.e. what encryption can’t do. First off, if you are on an older device, there may be a performance hit when you use encryption, or the options we talk about below may not be available. There’s a ton of math involved in encryption and decryption in real-time, and older devices might just not be up to the task at hand. This really only applies to much older devices – roughly 6-8 years old or more – and at that point it may be time to start saving up to upgrade your device when you can afford to. Secondly, once the device is unlocked, the data is visible and accessible. What that means is that you still need to maintain good cyber and online hygiene when you’re using your devices. If you allow someone access, or launch malware, your data will be visible to them while the device is unlocked or while that malware is running. So encryption isn’t a magic wand to defend your devices, but it is a very powerful tool to help keep data secure if you lose the device or have it stolen.
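If you want to see the core idea in action, here’s a toy sketch using Python and the third-party “cryptography” package (which you’d have to install separately – this is purely an illustration, not how your device actually does it under the hood): with the key, the data comes back; without it, what sits on the disk is just noise.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # on a real device, the key material lives in secure hardware
    locked = Fernet(key).encrypt(b"my tax documents")

    print(locked)                          # unreadable bytes - all a thief can pull off the disk
    print(Fernet(key).decrypt(locked))     # b'my tax documents' - what you get after unlocking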

So, how do you enable encryption on your devices? Well, for many devices it’s already on, believe it or not. Your company most likely forces the use of device encryption on your corporate phones and laptops, for example. But let’s have a look at the more common devices you might use in your personal life, and how to get them encrypted.

Windows desktops and laptops:

From Windows 10 onward (and on any hardware less than about 5 years old), Microsoft supports a technology called BitLocker to encrypt a device. BitLocker is a native tool in Windows 10 and 11 (and was available for some other versions of Windows) that will encrypt entire volumes – a.k.a. disk drives – including the system drive that Windows itself runs on. There are a couple of ways it can do this encryption, but for most desktops and laptops you want to use the default method of encryption using a Trusted Platform Module (TPM) – basically a hardware chip in the machine that handles security controls with a unique identifier. How the TPM works isn’t really something you need to know; just know that there’s a chip on the board that is unique to your machine, and that allows technologies like BitLocker to encrypt your data uniquely to your machine. Turning on BitLocker is easy – just follow the instructions for your version of Windows 10 or 11 here: https://support.microsoft.com/en-us/windows/turn-on-device-encryption-0c453637-bc88-5f74-5105-741561aae838 – the basic idea being to go into Settings, then Security, then Device Encryption, but it’ll look slightly different depending on which version of Windows you’re using. One important note: if you’re using Windows 10 or 11 Home Edition, you may have to follow the specific instructions on that page for device encryption rather than full BitLocker management. It has the same overall outcome, but uses a slightly different method to get the job done.
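If you’d rather check the current status than trust that it’s on, here’s a small sketch that calls Windows’ built-in manage-bde tool from Python – running “manage-bde -status C:” directly in an administrator Command Prompt does the same thing (the tool needs an elevated prompt either way):

    import subprocess

    # Ask Windows' built-in BitLocker tool for the status of the system drive.
    result = subprocess.run(["manage-bde", "-status", "C:"], capture_output=True, text=True)
    print(result.stdout)  # look for the "Protection Status" and "Percentage Encrypted" lines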

Mac desktops and laptops:

Here’s the good news: if you chose the defaults during your first install/setup, you’re already encrypted. For the last several major versions, macOS has automatically enabled FileVault (Apple’s disk encryption system) when you set up your Mac unless you tell it not to do so. If you have an older macOS version, or you turned it off during the setup, you can still turn it on now. Much like BitLocker, FileVault relies on dedicated security hardware (on modern Macs this is the Secure Enclave, Apple’s equivalent of a TPM) to handle the details of the encryption, but all Macs that are still supported by Apple have that hardware, so unless you are on extremely old hardware (over 8-10 years old), you won’t have to worry about it. Also like Microsoft, Apple has a knowledge base article on how to turn it on manually if you need to do so: https://support.apple.com/guide/mac-help/protect-data-on-your-mac-with-filevault-mh11785/mac
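Want to double-check? macOS ships a built-in command called fdesetup that reports FileVault status – typing “fdesetup status” in Terminal is all it takes, or, as a sketch, from Python:

    import subprocess

    # Ask macOS whether FileVault is enabled.
    result = subprocess.run(["fdesetup", "status"], capture_output=True, text=True)
    print(result.stdout.strip())  # e.g. "FileVault is On."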

Android mobile devices (phones/tablets):

Android includes the ability to encrypt data on your devices as long as you are using a passcode to unlock the phone. You can turn encryption on even if you’re not using a passcode yet, but the setup will make you set a passcode as part of the process. While not every Android device supports encryption, the vast majority made in the last five years or so do, and it is fairly easy to set it up. You can find information on how to set this up for your specific version of Android from Google, such as this knowledge base article: https://support.google.com/pixelphone/answer/2844831?hl=en

Apple mobile devices (iPhone/iPad):

As long as you have a device that’s still supported by Apple themselves, your data is encrypted by default on iPhone and iPad as soon as you set up a passcode to unlock the phone. Since that’s definitely something you REALLY should be doing anyway, if you are, you don’t have to do anything else to make sure the data is encrypted. Note that any form of passcode will work, so if you set up Touch ID or Face ID on your devices, that counts too, and your data is already encrypted. If you have not yet set up a passcode, Touch ID, or Face ID, then there are instructions at this knowledge base article for how to do it: https://support.apple.com/guide/iphone/set-a-passcode-iph14a867ae/ios and similar articles exist for iPad and other Apple mobile devices.

Some closing notes on device encryption: First and foremost, remember that when the device is unlocked, the data can be accessed. It’s therefore important to set a timeout for when the device will lock itself if not in use. This is usually on automatically, but if you turned that feature off on your laptop or phone, you should turn it back on. Secondly, a strong password/passcode/etc. is really necessary to defend the device. If a thief can guess the passcode easily, then they can unlock the device and get access to the data easily as well. Don’t use a simple 4-digit PIN to protect the thing that holds all the most sensitive data about you. As with any other password advice, I recommend the use of a passphrase to make it easy for you to remember, but hard for anyone else to guess. “This is my device password!” is an example of a passphrase – just don’t use that one specifically; go make up your own. If your device supports biometric ID (like a fingerprint scanner), then that’s a great way to limit how many times you need to manually type in a complex password and can make your life easier.

Device encryption (and/or drive encryption) makes it so that if your device is lost or stolen, the data on that device is unusable to whoever finds/steals it. Setting up encryption on most devices is really easy, and the majority of devices won’t even suffer a performance hit to use it. In many cases, you’re already using it and don’t even realize it’s on, though it never hurts to check and be sure about that. So, should you use encryption on your personal devices? Yes, you absolutely should.

 

Cybersecurity in Plain English: My Employer is Spying On My Web Browsing!

A recent Reddit thread had a great situation for us to talk about here. The short version is that a company notified all employees that web traffic would be monitored – including for secure sites – and recommended using mobile devices without using the company WiFi to do any non-business web browsing. This, as you might guess, caused a bit of an uproar with multiple posters calling it illegal (it’s usually not), a violation of privacy (it is), and because it’s Reddit, about 500 other things of various levels of veracity. Let’s talk about the technology in question and how it works.

For about 95% of the Internet these days, the data flowing between you and websites is encrypted via a technology known officially as Transport Layer Security (TLS), but almost universally referred to by the name of the technology TLS replaced some time ago, Secure Sockets Layer (SSL). No matter what you call it, TLS is the tech that is currently used, and what’s responsible for the browser communicating over HTTPS:// instead of HTTP://. Several years ago, non-encrypted web traffic was deprecated – a.k.a. phased out – because Google Chrome, Microsoft Edge, Firefox, Opera, and just about every other browser began to pop up a warning whenever a user went to a non-secure web page. As website owners (myself included) did not want to deal with large numbers of help requests, secured (HTTPS://) websites became the norm; and you’d be hard-pressed to find a non-encrypted site these days.

So, if the data flowing between your browser and the website is encrypted, how can a company see it? Well, the answer is that they normally can’t, but organizations can set up technology that allows them to decrypt the data flowing between you and the site if you are browsing that site on a laptop, desktop, or mobile device that the organization manages and controls. To explain that, we’ll have to briefly talk about a method of threat activity known as a Man in the Middle (MitM) attack:

MitM attacks work by having a threat actor intercept your web traffic, and then relay it to the real website after they’ve seen it and possibly altered it. As you might guess, this could be devastating for financial institutions, healthcare companies, or anyone else that handles sensitive data and information. Without SSL encryption, MitM attacks can’t really be stopped. You think you’re logging into a site, but in reality you’re talking to the threat actor’s web server, and THEY are talking to the real site – so they can see and modify data you send, receive, or both. SSL changes things. The way SSL/TLS works is with a series of security certificates that are used along with some pretty complex math to create encryption keys that both your browser and the website agree to use to encrypt data. That’s a massive oversimplification, but a valid high-level explanation of what’s going on. Your browser and the website do this automatically, and nearly instantly, so you don’t actually see any of it happening unless something goes wrong and you get an error message. If a threat actor tries to put themselves in the middle, then both your browser and the website will immediately see that the chain of security is broken by something/someone, and refuse to continue the data transfer. By moving to nearly universal use of SSL, Man in the Middle attacks have become far less common. It’s still technically possible to perform an MitM attack, but it is far more difficult than before, and certainly more difficult than a lot of other attack methods a threat actor could use.

Then how can your company perform what is effectively a MitM process on your web traffic without being blocked? Simple: they tell your computer that it’s OK for them to do it. The firewalls and other security controls your company uses could decrypt the SSL traffic before it reaches your browser. That part is fairly easy to do, but would result in a lot of users not being able to get to a whole lot of websites successfully. So, they use a loophole that is purposely carved out of the SSL/TLS standards. Each device (desktop/laptop/mobile/etc.) that the company manages is told that it should trust a specific security certificate as if it were part of the certificate chain that would normally be used for SSL. This allows the company to re-encrypt the data flow with that certificate, and have your browser recognize it as still secure. The practice isn’t breaking any of the rules, and in fact is part of how the whole technology stack is designed to work expressly for this kind of purpose, so your browser works as normal even though all the traffic is being viewed unencrypted by the company. I want to be clear here – it’s not a person looking at all this traffic. Outside of extremely small companies, that would be impossible. Automated systems decrypt the traffic, scan it for any malware or threat activity, then re-encrypt it with the company’s special certificate and ferry it on to your browser. A similar process happens in the other direction, but that outbound data is re-encrypted with the website’s certificate instead of the company’s certificate. Imagine that the systems are basically using their own browser to communicate with the websites, and ferrying things back and forth to your browser. That’s another over-simplification just to outline what is going on. Humans only get involved if the automated systems catch something that requires action. That being said, humans *can* review all that data if they want or need to, as it is all logged – it’s just not practical to do that unless there’s an issue that needs to be investigated.
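For the curious, there is a simple way to see whether this is happening to you: look at who issued the certificate your machine actually received for a site. Here’s a hedged sketch in Python – the hostname is just an example, and on a network doing SSL Decryption the issuer will typically be the company’s own inspection certificate rather than a well-known public certificate authority.

    import socket
    import ssl

    hostname = "www.example.com"               # any site you like
    context = ssl.create_default_context()     # uses the certificate store your company manages

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            issuer = dict(pair[0] for pair in cert["issuer"])
            # A public CA name here means no inspection; your company's own
            # name here usually means SSL Decryption is in play.
            print(issuer.get("organizationName"), "/", issuer.get("commonName"))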

That brings us to another question. Why tell everyone it’s happening if it can be done invisibly for any device the company controls and manages? Well, remember way up above when we talked about if it was legal, or a violation of privacy, or a host of other things? Most companies will bypass the decryption for sites they know contain financial information, healthcare info, and other stuff that they really don’t want to examine at all. That being said, it’s not possible to ensure that every bank, every hospital and doctor’s and dentist’s office, every single site that might have sensitive data on it is on the list to bypass the filter. Because of that, many companies will make it known via corporate communications and in employee manuals that all traffic can be visible to the IT and cybersecurity teams. It’s a way to cover themselves if they accidentally decrypt sensitive information that could be a privacy violation or otherwise is something they shouldn’t, or just don’t want to, see. 

Companies are allowed to do this on their own networks, and on devices that they own, control, or otherwise manage. Laws vary by country and locality, and I am not a lawyer, but at least here in the USA they can do this whenever they want as long as employees consent to it happening. The Washington Post did a whole write-up on the subject here: https://www.washingtonpost.com/technology/2021/08/20/work-from-home-computer-monitoring/ (note, this may be paywalled for some site visitors). As long as the company gets that consent (say, for example, having you sign that you have read and agree to all of the stuff in that Employee Handbook), they can monitor traffic that flows across their own networks and devices. Some companies, of course, just want to give employees a heads-up that it’s happening, but most are covering their bases to make sure they’re following the rules for whatever country/locality they and you are in. 

What about using a VPN? That could work, if you can get it to run. Many VPN services would bypass the filtering of SSL Decryption, because they’re encrypting the traffic end-to-end with methods other than SSL/TLS. In short, the browser and every other app are now communicating in an encrypted channel that the firewall and other controls can’t decrypt. Not all VPNs are created equal though, so it isn’t a sure thing. Also keep in mind that most employers who do SSL Decryption also know about VPNs, and will work to block them from working on their networks.

One last note: Don’t confuse security and privacy. Even without SSL Decryption, your employer can absolutely see the web address and IP address of every site you visit. This is because of two factors. First, most Domain Name System (DNS) lookups are not encrypted. That’s changing over time, but right now it is highly likely that your browser looks up where the website is via a non-encrypted system. Second, even if you’re using secure DNS (which exists, but isn’t in widespread use), the company’s network still has to connect to the website’s network – which means at the very least the company will know the IP addresses of the sites you visit. It isn’t difficult to reverse an IP address and figure out what website lives there, so your company can still see where you went – even if they don’t know what you did while you were there.
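As a quick illustration of just how little effort that takes, here’s a tiny Python sketch (the hostname is only an example; reverse records don’t exist for every IP, but the IP alone is usually enough to identify a site):

    import socket

    ip = socket.gethostbyname("www.example.com")   # the forward lookup your company's network can see
    print(ip)

    try:
        print(socket.gethostbyaddr(ip)[0])         # reverse lookup - often names the site or its hosting provider
    except socket.herror:
        print("No reverse DNS record, but the IP alone usually identifies the site")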

To sum up: Can your employer monitor your web surfing even if you’re on a secure website? Yes – provided they have set up the required technology, own and/or manage the device you’re using, and (in most cases) have you agree to it in the employee manual or via other consent methods. Is that legal? Depends on where you live and where the company is located, but for a lot of us the answer is “yes.” Doesn’t it violate my privacy? Yes, though most companies will at least try to avoid looking at traffic to sites that are known to have sensitive data. Your social media feeds, non-company webmail, and a whole lot of other stuff are typically fair game though; so just assume that everywhere you surf, they can see what you’re doing. Can you get around that with a VPN? Maybe, but your company may effectively block VPN services. And finally, does this mean if my company isn’t doing SSL Decryption that I’m invisible? No, there’s still a record of what servers you visited, and most likely what URLs you went to.

Last but not least: with very few exceptions, the process of SSL Decryption is done for legitimate and very real security reasons. The technology helps keep malware out of the company’s network and acts as another link in the chain of security defending the organization. While there are no doubt some companies that do this to spy on their employees, they are the exception rather than the rule. Check Facebook and do your banking on your phone (off the company WiFi), or wait until you get home.

Cybersecurity in Plain English: The Y of the xz Vulnerability

Because of some news that broke on Friday of last week, my inbox was inundated with various forms of the question “What is xz, and why is this a problem?” The xz library is a nearly ubiquitous installation on just about every flavor of Linux out there (and some other Operating Systems as well), so let’s dive into what it is, what happened last week, and what you need to do.

Libraries are collections of source code (application code) that can be brought into larger software projects to help speed up development and take advantage of economies of scale. There are libraries for common Windows functions, different common application behaviors, and thousands of other things overall. One such common function is data compression – such as the zip files that many (if not all) of us have used at some point in our day-to-day work. On Linux systems, the most common library used to create and manage compressed files is called “xz” (pronounced ex-zee in the USA, ex-zed in most of the rest of the world). This library can be found in thousands of applications, and installed on millions of Linux machines – including Cloud systems and application appliances used in overwhelming numbers of organizations. As you might guess, any security issues with xz would be problematic for the security community to say the least.
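To give a sense of just how close to home this library is, even Python’s own standard library ships an lzma module built on liblzma, the same code base as the xz tools. Here’s a tiny sketch of it doing xz-style compression:

    import lzma

    data = b"plain text, compressed the same way an .xz file would be " * 20
    packed = lzma.compress(data)

    print(len(data), "->", len(packed), "bytes")
    print(lzma.decompress(packed) == data)   # True - the same data comes back out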

Late last week, the news broke that researchers had discovered a backdoor had been coded into recent versions of xz. This backdoor would allow an attacker to be able to communicate with a Linux system via SSH (a secure remote shell that is commonly used for remote access) without having to go through the usual authentication process normally required to create an SSH session. SSH is how most Linux systems are managed, so the ability to open a shell session without going through the typically strict authentication sequence first is a nightmare for any Linux user, and just disabling SSH isn’t an option, as they’d lose the ability to access and control those devices legitimately. 

The back door was put into the xz library by a volunteer coder who had worked on the project for some time. Such volunteer maintainers routinely work on Open Source code projects, and are part of a much larger community that builds, maintains, and extends Open Source projects. No information is currently available as to why they put the back door in, what their motivations were, or any possible explanation for how they went about it. Time will no doubt reveal that information, but until then we’re left with the problem in versions 5.6.0 and 5.6.1 of the xz library, with 5.4.6 being the last stable version released before the back door was introduced.

“But,” I hear many saying online – both legitimately and sarcastically – “xz is Open Source, so isn’t it secure?” The idea that Open Source software is inherently more secure is a common misconception, so let’s briefly talk about that. Open Source simply means that anyone who wants to see and (in most cases) use the source code for a library, application, or other component of software development is allowed to. Based on how the code is licensed, anyone may be able to view, edit, even re-write the source code; though they may be required to ensure the original is left alone and/or that they offer the same benefits to anyone else who now wants to use their version of the altered source. Closed Source is the opposite: applications and other software with proprietary code that only the software developer has access to. For most of us, nearly everything we use is Closed Source. This includes Microsoft Office, web applications like SalesForce, etc. What makes things even more confusing is that many of these Closed Source platforms actually use Open Source code along with their own proprietary code – there are entire books written on the subject of how that works if you are interested in learning more.

Open Source code is no more or less secure than Closed Source is. Both can have mistakes made in coding that create vulnerabilities that a threat actor can exploit, and generally speaking both have about the same incidence of that happening. Open Source has the benefit of being available to anyone who wants to look at the source code, which means a vulnerability discovered in Open Source software *can* – with specific stress on *can* – often be patched more quickly because anyone could write the patch. You are not waiting on a software development firm creating and implementing the patch with their own software development team. The drawback is that many Open Source projects are built and maintained by volunteers who may not have the time or resources to write a fix quickly, while Closed Source software relies on customers continuing to license it – which gives the vendor a direct incentive to patch. So while Open Source might get patched faster, that is by no means a guarantee of any kind. We’ve seen Closed Source vulnerabilities get patched immediately, Open Source vulnerabilities never get patched because no one wants to write the fix, and any combination between the two. From a security perspective, you should view Open Source and Closed Source software as equal – and address any issues with either in the same way: the vulnerability has to be patched and/or otherwise mitigated as quickly as possible.

So, what do you need to do with the xz vulnerability? First off, the security community has not – as of today – seen a lot of attempts to leverage the back door, so while there isn’t any doubt this will be a problem at some point, it isn’t a problem right at this moment. That gives us the luxury of time, and we can use that to avoid panic that would lead to drastic measures like we saw with the log4j situation a couple of years back. Make an inventory of all software, both built in-house and obtained from software developers, that uses the xz library. This may mean reaching out to the major software developers you use to ask them if they use the impacted library. For Operating systems, this library is installed on nearly every version of Linux, and in some cases can be installed on MacOS as well. On either system, open a terminal session (it’s in Utilities in Applications on Mac) and type “xz -V” without the quotes. If you get nothing back, then you don’t have the library installed in the OS itself (though you might still be using applications that incorporate the library – see above). If you get a response of 5.6.0 or 5.6.1, then you have to take action. Follow your Operating System’s normal process for downgrading or removing a package – with your goal being to install version 5.4.6 of the xz library – that earlier version is stable, and did not have the back door code in it. Due to the sheer number of package managers on different versions of Linux, you’ll have to do a bit of legwork online to find instructions if you’re not familiar with the process. On MacOS, about the only thing that installs xz into the OS itself is homebrew; a popular package manager for Mac. If you do use homebrew, open a terminal window and run “brew update” and then “brew upgrade” (without the quotes) in Terminal to force the xz package to be downgraded to 5.4.6 automatically. 
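If you need to run that check across more than a handful of machines, here’s a small sketch that scripts the same “xz -V” test and flags the known-bad versions (adjust it for however you manage your fleet):

    import shutil
    import subprocess

    BAD_VERSIONS = ("5.6.0", "5.6.1")

    if shutil.which("xz") is None:
        print("xz is not installed in the OS itself (applications may still bundle the library)")
    else:
        output = subprocess.run(["xz", "-V"], capture_output=True, text=True).stdout
        print(output.strip())
        if any(version in output for version in BAD_VERSIONS):
            print("Affected version found - downgrade to 5.4.6 using your package manager")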

While there isn’t currently any indication of attacks using this vulnerability, there will be over time. When software vendors start releasing patches for the vulnerability within their own applications, threat actors may choose to try to use the vulnerability to exploit organizations that don’t apply the patch. We’ve seen that behavior before, especially in cases like this where exploiting the vulnerability isn’t as straightforward or as easy as many other techniques. Threat actors will continue using easier methods to gain access unless they discover that an exploit to leverage the back door will get them access to something known to be valuable to their end goals. So downgrading to xz version 5.4.6 wherever possible is the best course of action, and should be done as soon as possible for any/all Linux systems in your organization. Where it isn’t possible (because an application requires version 5.6.x, for example), then very close monitoring of SSH connections is mandatory, to ensure that only authorized users are gaining access to the organization’s systems. Of course, once an updated version of the xz library is released without the back door, then upgrading should be done as soon as possible. Also remember to patch all software (Open or Closed Source) that puts out an update to address the vulnerability as soon as it is possible to do so; as threat actors will be on the lookout for valuable targets they can go after once they know an application is vulnerable if not updated.

The xz library vulnerability has the potential to be a major headache, but due to the relative complexity of exploiting it, and the fact that there are currently much easier methods a threat actor can use to try to gain access, we have the luxury of time to identify and mitigate the threat. Having your IT and Security teams take action now will save you a lot of time and panic later. Gathering up information about where the packages are used will also make the future update to a new version without the back door much easier as well, so taking action now has benefits that will extend beyond just the issues discovered last week.