Cybersecurity in Plain English: Should I Encrypt My Machine?

A common question I get from folks is some variant of “Should I be encrypting my laptop/desktop/phone?” While the idea of encrypting data might sound scary or difficult, the reality is the total opposite of both, so the answer is a resounding “YES!” That being said, many people have no idea how to actually do this, so let’s have a look at the most common Operating Systems (OSs) and how to get the job done.

First, let’s talk about what device and/or disk encryption actually does. Encryption renders the data on your device unusable unless someone has the decryption key – which these days is typically either a passcode/password or some kind of biometric ID like a fingerprint. So, while the device is locked or powered down, if it gets lost or stolen the data cannot be accessed by whoever now has possession of it. Most modern (less than 6 to 8 year old) devices can encrypt invisibly without any major performance impact, so there really isn’t a downside to enabling it beyond having to unlock your device to use it – which you should be doing anyway… hint, hint…
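If you’re curious what “rendering data unusable” looks like in practice, here’s a toy Python sketch – a simple XOR scrambler, emphatically NOT a real cipher like the hardened algorithms your devices actually use – just to show that without the key, the stored bytes are gibberish:

```python
from itertools import cycle

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' - illustration only, NOT real encryption."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret = b"Tax returns and family photos"
key = b"correct horse battery staple"

locked = xor_crypt(secret, key)    # what a thief would see on the disk
print(locked)                      # unreadable bytes
print(xor_crypt(locked, key))      # with the key, the data comes right back
```

Real device encryption uses vetted algorithms and hardware-protected keys, but the principle is the same: no key, no data.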

Now, the downside – i.e. what encryption can’t do. First off, if you are on an older device, there may be a performance hit when you use encryption, or the options we talk about below may not be available. There’s a ton of math involved in encrypting and decrypting in real-time, and older devices might just not be up to the task. This really only applies to much older devices – those more than 6-8 years old – and at that point it may be time to start saving up to upgrade when you can afford to. Secondly, once the device is unlocked, the data is visible and accessible. What that means is that you still need to maintain good cyber and online hygiene when you’re using your devices. If you allow someone access, or launch malware, your data will be visible to them for as long as the device is unlocked or the malware is running. So encryption isn’t a magic wand to defend your devices, but it is a very powerful tool to help keep data secure if you lose the device or have it stolen. 

So, how do you enable encryption on your devices? Well, for many devices it’s already on believe it or not. Your company most likely forces the use of device encryption on your corporate phones and laptops, for example. But let’s have a look at the more common devices you might use in your personal life, and how to get them encrypted.

Windows desktops and laptops:

From Windows 10 onward (and on any hardware less than about 5 years old), Microsoft supports a technology called BitLocker to encrypt a device. BitLocker is a native tool in Windows 10 and 11 (and was available for some other versions of Windows) that will encrypt entire volumes – a.k.a. disk drives – including the system drive that Windows itself runs on. There are a couple of ways it can do this encryption, but for most desktops and laptops you want to use the default method: encryption using a Trusted Platform Module (TPM) – basically a hardware chip in the machine that handles security controls with a unique identifier. How the TPM works isn’t really something you need to know; just know that there’s a chip on the board that is unique to your machine, and that allows technologies like BitLocker to encrypt your data uniquely to your machine. Turning on BitLocker is easy: just follow the instructions for your version of Windows 10 or 11 here: https://support.microsoft.com/en-us/windows/turn-on-device-encryption-0c453637-bc88-5f74-5105-741561aae838 – the basic idea being to go into Settings, then Security, then Device Encryption, but it’ll look slightly different depending on which version of Windows you’re using. One important note: if you’re using Windows 10 or 11 Home Edition, you may have to follow the specific instructions on that page to encrypt your drive instead of the whole system. It has the same overall outcome, but uses a slightly different method to get the job done. 

Mac desktops and laptops:

Here’s the good news: unless you changed the defaults during your first install/setup, you’re already encrypted. For the last several versions, MacOS has automatically enabled FileVault (Apple’s disk encryption system) when you set up your Mac unless you tell it not to. If you have an older MacOS version, or you turned it off during setup, you can still turn it on now. Much like BitLocker, FileVault relies on a dedicated security chip (Apple’s Secure Enclave, the rough equivalent of a TPM) to handle the details of the encryption, but all Macs still supported by Apple have one, so unless you are on extremely old hardware (over 8-10 years old), you won’t have to worry about that. Also like Microsoft, Apple has a knowledge base article on how to turn it on manually if you need to do so: https://support.apple.com/guide/mac-help/protect-data-on-your-mac-with-filevault-mh11785/mac

Android mobile devices (phones/tablets):

Android includes the ability to encrypt data on your devices as long as you are using a passcode to unlock the phone. You can turn encryption on even if you’re not using a passcode yet, but the setup will make you set a passcode as part of the process. While not every Android device supports encryption, the vast majority made in the last five years or so do, and it is fairly easy to set it up. You can find information on how to set this up for your specific version of Android from Google, such as this knowledge base article: https://support.google.com/pixelphone/answer/2844831?hl=en

Apple mobile devices (iPhone/iPad):

As long as you have a device that’s still supported by Apple themselves, your data is encrypted by default on iPhone and iPad as soon as you set up a passcode to unlock the phone. Since that’s definitely something you REALLY should be doing anyway, if you are, you don’t have to do anything else to make sure the data is encrypted. Note that any form of passcode will work – so if you set up TouchID or FaceID on your devices, that counts too, and your data is already encrypted. If you have not yet set up a passcode, TouchID, or FaceID, there are instructions at this knowledge base article: https://support.apple.com/guide/iphone/set-a-passcode-iph14a867ae/ios – and similar articles exist for iPad and other Apple mobile devices. 

Some closing notes on device encryption: First and foremost, remember that when the device is unlocked, the data can be accessed. It’s therefore important to set a timeout so the device locks itself when not in use. This is usually on automatically, but if you turned that feature off on your laptop or phone, you should turn it back on. Secondly, a strong password/passcode/etc. is really necessary to defend the device. If a thief can guess the passcode easily, then they can unlock the device and get access to the data just as easily. Don’t use a simple 4-digit PIN to protect the thing that holds all the most sensitive data about you. As with any other password, I recommend the use of a passphrase to make it easy for you to remember, but hard for anyone else to guess. “This is my device password!” is an example of a passphrase – just don’t use that one specifically; go make up your own. If your device supports biometric ID (like a fingerprint scanner), then that’s a great way to limit how many times you need to manually type in a complex password, and can make your life easier.
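To put some rough numbers on that advice, here’s a quick back-of-the-envelope comparison in Python of how many possibilities a thief would face. The 70-character alphabet for the passphrase is my own assumption, but the orders of magnitude tell the story:

```python
# Rough count of possible combinations a thief would have to try.
pin_4_digit = 10 ** 4      # 10,000 possibilities - trivial to brute-force
pin_6_digit = 10 ** 6      # 1,000,000 - better, still small

# A 25-character passphrase drawn from roughly 70 printable characters
# (letters, digits, common punctuation - an assumed alphabet size):
passphrase = 70 ** 25

print(f"4-digit PIN:  {pin_4_digit:,}")
print(f"6-digit PIN:  {pin_6_digit:,}")
print(f"Passphrase:   {passphrase:.2e}")   # astronomically larger
```

Even before rate-limiting and lockouts, the passphrase’s search space is so large that guessing simply isn’t a practical attack.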

Device encryption (and/or drive encryption) makes it so that if your device is lost or stolen, the data on that device is unusable to whoever finds/steals it. Setting up encryption on most devices is really easy, and the majority of devices won’t even suffer a performance hit to use it. In many cases, you’re already using it and don’t even realize it’s on, though it never hurts to check and be sure about that. So, should you use encryption on your personal devices? Yes, you absolutely should.

 

Cybersecurity in Plain English: My Employer is Spying On My Web Browsing!

A recent Reddit thread had a great situation for us to talk about here. The short version is that a company notified all employees that web traffic would be monitored – including for secure sites – and recommended using mobile devices without using the company WiFi to do any non-business web browsing. This, as you might guess, caused a bit of an uproar with multiple posters calling it illegal (it’s usually not), a violation of privacy (it is), and because it’s Reddit, about 500 other things of various levels of veracity. Let’s talk about the technology in question and how it works.

For about 95% of the Internet these days, the data flowing between you and websites is encrypted via a technology officially known as Transport Layer Security (TLS), but almost universally referred to by the name of the technology TLS replaced some time ago: Secure Sockets Layer (SSL). No matter what you call it, TLS is the tech currently in use, and what’s responsible for the browser communicating over HTTPS:// instead of HTTP://. Several years ago, non-encrypted web traffic was deprecated – a.k.a. phased out – as Google Chrome, Microsoft Edge, Firefox, Opera, and just about every other browser began to pop up a warning whenever a user went to a non-secure web page. As website owners (myself included) did not want to deal with large numbers of help requests, secured (HTTPS://) websites became the norm, and you’d be hard-pressed to find a non-encrypted site these days. 

So, if the data flowing between your browser and the website is encrypted, how can a company see it? Well, the answer is that they normally can’t, but organizations can set up technology that allows them to decrypt the data flowing between you and the site if you are browsing that site on a laptop, desktop, or mobile device that the organization manages and controls. To explain that, we’ll have to briefly talk about a method of threat activity known as a Man in the Middle (MitM) attack:

MitM attacks work by having a threat actor intercept your web traffic, and then relay it to the real website after they’ve seen it and possibly altered it. As you might guess, this could be devastating for financial institutions, healthcare companies, or anyone else that handles sensitive data and information. Without SSL encryption, MitM attacks can’t really be stopped. You think you’re logging into a site, but in reality you’re talking to the threat actor’s web server, and THEY are talking to the real site – so they can see and modify data you send, receive, or both. SSL changes things. The way SSL/TLS works is with a series of security certificates that are used along with some pretty complex math to create encryption keys that both your browser and the website agree to use to encrypt data. That’s a massive oversimplification, but a valid high-level explanation of what’s going on. Your browser and the website do this automatically, and nearly instantly, so you don’t actually see any of it happening unless something goes wrong and you get an error message. If a threat actor tries to put themselves in the middle, then both your browser and the website will immediately see that the chain of security has been broken by something/someone, and refuse to continue the data transfer. By moving to nearly universal use of SSL, Man in the Middle attacks have become far less common. It’s still technically possible to perform an MitM attack, but it’s far more difficult than before, and certainly more difficult than a lot of other attack methods a threat actor could use.
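As a small illustration of that “chain of security” check, here’s what it looks like from Python’s standard ssl module: a default TLS connection simply refuses to proceed unless the certificate chain validates and the certificate matches the site’s name – exactly the checks a classic MitM fails:

```python
import ssl

# A default TLS context refuses connections whose certificate chain
# doesn't check out - this is the mechanism that trips up a classic MitM.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: chain must validate
print(ctx.check_hostname)                    # True: cert must match the site
```

Every modern browser enforces the equivalent of these two settings, which is why an interloper without a trusted certificate just produces an error page instead of a successful interception.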

Then how can your company perform what is effectively a MitM process on your web traffic without being blocked? Simple, they tell your computer that it’s OK for them to do it. The firewalls and other security controls your company uses could decrypt the SSL traffic before it reaches your browser. That part is fairly easy to do, but would result in a lot of users not being able to get to a whole lot of websites successfully. So, they use a loophole that is purposely carved out of the SSL/TLS standards. Each device (desktop/laptop/mobile/etc.) that the company manages is told that it should trust a specific security certificate as if it was part of the certificate chain that would normally be used for SSL. This allows the company to re-encrypt the data flow with that certificate, and have your browser recognize it as still secure. The practice isn’t breaking any of the rules, and in fact is part of how the whole technology stack is designed to work expressly for this kind of purpose, so your browser works as normal even though all the traffic is being viewed un-encrypted by the company. I want to be clear here – it’s not a person looking at all this traffic. Outside of extremely small companies that would be impossible. Automated systems decrypt the traffic, scan it for any malware or threat activity, then re-encrypt it with the company’s special certificate and ferry it on to your browser. A similar process happens in the other direction, but that outbound data is re-encrypted with the website’s certificate instead of the company’s certificate. Imagine that the systems are basically using their own browser to communicate with the websites, and ferrying things back and forth to your browser. That’s another over-simplification just to outline what is going on. Humans only get involved if the automated systems catch something that requires action. 
That being said, humans *can* review all that data if they wanted to or needed to as it is all logged – it’s just not practical to do that unless there’s an issue that needs to be investigated.

That brings us to another question: why tell everyone it’s happening if it can be done invisibly for any device the company controls and manages? Well, remember way up above when we talked about whether it was legal, or a violation of privacy, or a host of other things? Most companies will bypass the decryption for sites they know contain financial information, healthcare info, and other stuff they really don’t want to examine at all. That being said, it’s not possible to ensure that every bank, every hospital and doctor’s and dentist’s office – every single site that might have sensitive data on it – is on the list to bypass the filter. Because of that, many companies will make it known via corporate communications and in employee manuals that all traffic can be visible to the IT and cybersecurity teams. It’s a way to cover themselves if they accidentally decrypt sensitive information that could be a privacy violation, or otherwise is something they shouldn’t – or just don’t want to – see. 

Companies are allowed to do this on their own networks, and on devices that they own, control, or otherwise manage. Laws vary by country and locality, and I am not a lawyer, but at least here in the USA they can do this whenever they want as long as employees consent to it happening. The Washington Post did a whole write-up on the subject here: https://www.washingtonpost.com/technology/2021/08/20/work-from-home-computer-monitoring/ (note, this may be paywalled for some site visitors). As long as the company gets that consent (say, for example, having you sign that you have read and agree to all of the stuff in that Employee Handbook), they can monitor traffic that flows across their own networks and devices. Some companies, of course, just want to give employees a heads-up that it’s happening, but most are covering their bases to make sure they’re following the rules for whatever country/locality they and you are in. 

What about using a VPN? That could work, if you can get it to run. Many VPN services will bypass SSL Decryption filtering, because they encrypt the traffic end-to-end with methods other than SSL/TLS. In short, the browser and every other app are now communicating in an encrypted channel that the firewall and other controls can’t decrypt. Not all VPNs are created equal though, so it isn’t a sure thing. Also keep in mind that most employers who do SSL Decryption also know about VPNs, and will work to block them from working on their networks.

One last note: Don’t confuse security and privacy. Even without SSL Decryption, your employer can absolutely see the web address and IP address of every site you visit. This is because of two factors. First, most Domain Name System (DNS) lookups are not encrypted. That’s changing over time, but right now it is highly likely that your browser looks up where the website is via a non-encrypted system. Second, even if you’re using secure DNS (which exists, but isn’t in widespread use), the company’s network still has to connect to the website’s network – which means at the very least the company will know the IP addresses of the sites you visit. It isn’t difficult to reverse that and figure out what website is on a given IP address, so your company can still see where you went – even if they don’t know what you did while you were there.
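You can see that name-to-address lookup for yourself with a couple of lines of Python. I’m using localhost here so the example works without network access; a real lookup such as resolving a public website’s name would go out through your network’s resolver, in full view of the company:

```python
import socket

# Resolving a name to an IP address is the lookup your machine performs
# for every site you visit - and, absent encrypted DNS, the network can
# see it. "localhost" is used so this runs anywhere; something like
# socket.gethostbyname("example.com") would query your actual resolver.
ip = socket.gethostbyname("localhost")
print(ip)   # 127.0.0.1 on virtually every system
```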

To sum up: Can your employer monitor your web surfing even if you’re on a secure website? Yes – provided they have set up the required technology, own and/or manage the device you’re using, and (in most cases) have you agree to it in the employee manual or via other consent methods. Is that legal? It depends on where you live and where the company is located, but for a lot of us the answer is “yes.” Doesn’t it violate my privacy? Yes, though most companies will at least try to avoid looking at traffic to sites that are known to have sensitive data. Your social media feeds, non-company webmail, and a whole lot of other stuff are typically fair game though; so just assume that everywhere you surf, they can see what you’re doing. Can you get around that with a VPN? Maybe, but your company may effectively block VPN services. And finally, does this mean that if my company isn’t doing SSL Decryption I’m invisible? No – there’s still a record of what servers you visited, and most likely what URLs you went to.

Last but not least: with very few exceptions, SSL Decryption is done for legitimate and very real security reasons. The technology helps keep malware out of the company’s network and acts as another link in the chain of security defending the organization. While there are no doubt some companies that do this to spy on their employees, they are the exception rather than the rule. Check Facebook and do your banking on your phone (off WiFi), or wait until you get home. 

Cybersecurity in Plain English: The Y of the xz Vulnerability

Because of some news that broke on Friday of last week, my inbox was inundated with various forms of the question “What is xz, and why is this a problem?” The xz library is a nearly ubiquitous installation on just about every flavor of Linux out there (and some other Operating Systems as well), so let’s dive into what it is, what happened last week, and what you need to do.

Libraries are collections of source code (application code) that can be brought into larger software projects to help speed up development and take advantage of economies of scale. There are libraries for common Windows functions, different common application behaviors, and thousands of other things overall. One such common function is data compression – such as the zip files that many (if not all) of us have used at some point in our day-to-day work. On Linux systems, the most common library used to create and manage compressed files is called “xz” (pronounced ex-zee in the USA, ex-zed in most of the rest of the world). This library can be found in thousands of applications, and installed on millions of Linux machines – including Cloud systems and application appliances used in overwhelming numbers of organizations. As you might guess, any security issues with xz would be problematic for the security community to say the least.
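As a small aside on just how ubiquitous the format is: Python’s own standard library ships an lzma module that reads and writes .xz-format data (and on most builds it links against liblzma, which comes from the xz project itself):

```python
import lzma

data = b"Plain English cybersecurity " * 100  # repetitive data compresses well

compressed = lzma.compress(data)    # produces .xz-format bytes
restored = lzma.decompress(compressed)

print(len(data), "->", len(compressed), "bytes")
print(restored == data)             # lossless round-trip
```

This is exactly the kind of quiet, everywhere-at-once dependency that makes a compromise of the library so serious.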

Late last week, the news broke that researchers had discovered a backdoor had been coded into recent versions of xz. This backdoor would allow an attacker to be able to communicate with a Linux system via SSH (a secure remote shell that is commonly used for remote access) without having to go through the usual authentication process normally required to create an SSH session. SSH is how most Linux systems are managed, so the ability to open a shell session without going through the typically strict authentication sequence first is a nightmare for any Linux user, and just disabling SSH isn’t an option, as they’d lose the ability to access and control those devices legitimately. 

The backdoor was put into the xz library by a volunteer coder who had worked on the project for some time. Such volunteer project maintainers routinely work on Open Source code projects, and are part of a much larger community that builds, maintains, and extends Open Source software. No information is currently available as to why they put the backdoor in, what their motivations were, or any possible explanation for how they went about it. Time will no doubt reveal that information, but until then we’re left with the problem in versions 5.6.0 and 5.6.1 of the xz library, with the most recent stable version before those being 5.4.6. 

“But,” I hear many saying online – both legitimately and sarcastically – “xz is Open Source, so isn’t it secure?” The idea that Open Source software is inherently more secure is a common misconception, so let’s briefly talk about that. Open Source simply means that anyone who wants to see and (in most cases) use the source code for a library, application, or other component of software development is allowed to. Based on how the code is licensed, anyone may be able to view, edit, even re-write the source code; though they may be required to ensure the original is left alone and/or that they offer the same benefits to anyone else who wants to use their version of the altered source. Closed Source is the opposite: applications and other software built from proprietary code that only the software developer has access to. For most of us, nearly everything we use is Closed Source. This includes Microsoft Office, web applications like SalesForce, etc. What makes things even more confusing is that many of these Closed Source platforms actually use Open Source code along with their own proprietary code – there are entire books written on how that works if you are interested in learning more.

Open Source code is no more or less secure than Closed Source is. Both can have mistakes made in coding that create vulnerabilities that a threat actor can exploit, and generally speaking both have the same incidence of that happening. Open Source has the benefit of being available to anyone who wants to look at the source code, which means a vulnerability discovered in Open Source software *can* – with specific stress on *can* – often be patched more quickly because anyone could write the patch. You are not waiting on a software development firm creating and implementing the patch with their own software development team. The drawback is that many Open Source projects are built and maintained by volunteers, while Closed Source software relies on customers continuing to license it. So while Open Source might get patched faster, that is by no means a guarantee of any kind. We’ve seen Closed Source vulnerabilities get patched immediately, Open Source vulnerabilities never get patched because no one wants to write the fix, and any combination between the two. From a security perspective, you should view Open Source and Closed Source software as equal – and address any issues with either in the same way. The vulnerability has to be patched and/or otherwise mitigated as quickly as possible.

So, what do you need to do about the xz vulnerability? First off, the security community has not – as of today – seen a lot of attempts to leverage the backdoor, so while there isn’t any doubt this will be a problem at some point, it isn’t a problem right at this moment. That gives us the luxury of time, and we can use it to avoid the panic that would lead to drastic measures like we saw with the log4j situation a couple of years back. Make an inventory of all software, both built in-house and obtained from software developers, that uses the xz library. This may mean reaching out to the major software developers you use to ask them if they use the impacted library. As for Operating Systems, this library is installed on nearly every version of Linux, and in some cases can be installed on MacOS as well. On either system, open a terminal session (it’s in Utilities in Applications on Mac) and type “xz -V” without the quotes. If you get nothing back, then you don’t have the library installed in the OS itself (though you might still be using applications that incorporate the library – see above). If you get a response of 5.6.0 or 5.6.1, then you have to take action. Follow your Operating System’s normal process for downgrading or removing a package – with your goal being to install version 5.4.6 of the xz library; that earlier version is stable, and did not have the backdoor code in it. Due to the sheer number of package managers on different versions of Linux, you’ll have to do a bit of legwork online to find instructions if you’re not familiar with the process. On MacOS, about the only thing that installs xz into the OS itself is Homebrew, a popular package manager for Mac. If you do use Homebrew, open a Terminal window and run “brew update” and then “brew upgrade” (without the quotes) to force the xz package to be downgraded to 5.4.6 automatically. 
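If you have more than a handful of machines to check, the version test is easy to script. Here’s a hypothetical Python helper – the function name and the parsing are my own illustration, not an official tool – that flags the backdoored versions from the kind of string “xz -V” prints:

```python
# Hypothetical helper: flag the known-backdoored xz versions from a
# version string like the one "xz -V" prints, e.g. "xz (XZ Utils) 5.6.1".
BACKDOORED = {"5.6.0", "5.6.1"}

def xz_is_backdoored(version_output: str) -> bool:
    # Strip parentheses, then look for a known-bad version token.
    tokens = version_output.replace("(", " ").replace(")", " ").split()
    return any(tok in BACKDOORED for tok in tokens)

print(xz_is_backdoored("xz (XZ Utils) 5.6.1"))  # True - downgrade to 5.4.6
print(xz_is_backdoored("xz (XZ Utils) 5.4.6"))  # False - stable, pre-backdoor
```

You could feed this the output of “xz -V” gathered from each machine by whatever remote-management tooling you already use.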

While there isn’t currently any indication of attacks using this vulnerability, there will be over time. When software vendors start releasing patches for the vulnerability within their own applications, threat actors may choose to try to use the vulnerability to exploit organizations that don’t apply the patch. We’ve seen that behavior before, especially in cases like this where exploiting the vulnerability isn’t as straightforward or as easy as many other techniques. Threat actors will continue using easier methods to gain access unless they discover that an exploit leveraging the backdoor will get them access to something known to be valuable to their end goals. So downgrading to xz version 5.4.6 wherever possible is the best course of action, and should be done as soon as possible for any/all Linux systems in your organization. Where it isn’t possible (because an application requires version 5.6.x, for example), then very close monitoring of SSH connections is mandatory, to ensure that only authorized users are gaining access to the organization’s systems. Of course, once an updated version of the xz library is released without the backdoor, then upgrading should be done as soon as possible. Also remember to patch all software (Open or Closed Source) that puts out an update to address the vulnerability as soon as it is possible to do so, as threat actors will be on the lookout for valuable targets they can go after once they know an application is vulnerable if not updated. 

The xz library vulnerability has the potential to be a major headache, but due to the relative complexity of exploiting it, and the fact that there are currently much easier methods a threat actor can use to try to gain access, we have the luxury of time to identify and mitigate the threat. Having your IT and Security teams take action now will save you a lot of time and panic later. Gathering up information about where the packages are used will also make the future update to a new version without the back door much easier as well, so taking action now has benefits that will extend beyond just the issues discovered last week.

Cybersecurity in Plain English: How Does Ransomware Work?

I get a lot of great questions from people in all different areas of business, but one comes up more than most: “How does ransomware even work?” Granted, we know what the goal of ransomware is – to get paid to unlock files that are locked down by a threat actor – but how does it operate, function, do what it does? Let’s dive into this topic.

Ransomware is a generic term to refer to any cyber attack where data is encrypted in order to make it unusable to a person or organization until a payment to the threat actor is made. Because locking up the data by encrypting it renders most businesses partially or totally unable to conduct business, it is a devastatingly effective form of attack, and a preferred method of threat activity these days. How it does what it does, however, is a bit more complicated; as the methods and scope of ransomware have changed over the 20-plus years we’ve been dealing with it as a security community.

Modern ransomware can be broken down into two broad categories: Single-extortion ransomware that just locks the data down, and double-extortion ransomware that also steals a copy of all the impacted data before locking it down. Each has evolved to reduce the ability of an organization to recover from backup or otherwise fix things without having to pay the threat actor, but each category is equally popular among criminal groups. 

Single-extortion ransomware works by first gaining access to a desktop, laptop, or server. This can be through one of many initial access methods, but the more commonly used techniques these days are subterfuge and exploiting a vulnerability. See the previous post at https://www.miketalon.com/2024/02/cybersecurity-in-plain-english-how-do-threat-actors-get-in/ for more info on initial access. Subterfuge includes things like tricking a user into visiting a booby-trapped website, hiding malware in what appears to be a valid software application, or otherwise getting a user (or automated system) to install the threat actor’s software on a machine/virtual machine, etc. Exploitation of a vulnerability requires less (or no) interaction by a user, but rather tricks/forces an application or platform into doing something malicious by taking advantage of a weakness in the software or hardware itself. Note that threat actors are aware that anti-malware exists, and so will attempt to hide what they are doing for as long as possible and avoid triggering the anti-malware whenever possible (see dwell time below). This is referred to as “evasion,” and there are many different techniques that are used to different levels of effectiveness, depending on what anti-malware defenses are in place.  

Once they have the first device compromised, the threat actor will typically attempt to spread their influence to as many other machines as possible (referred to as “propagation”). Since most organizational systems now use some form of Endpoint Detection and Response (an advanced type of anti-malware system), this has to be done carefully and cautiously to avoid tripping detection and defensive systems. In fact, a threat actor can take weeks or even months just moving around a victim network in search of more devices and systems to take control of before they do anything like encrypting data. This is most commonly referred to as “dwell time,” with the average being about 10 days in 2023 – but many threat actors stick around for far longer to gain control of more systems. It isn’t uncommon to see dwell times stretching into months as double-extortion attacks become more common.

More commonly these days, threat actors will also attempt to disable backup solutions, and try to weaken or disable anti-malware solutions as they go. This allows them to spread further, and ensures that once they do spring the trap, the organization won’t have recent backups to restore from. Both actions make it more likely that the victim organization will pay to have their data decrypted. Remember that ransomware is a business – a criminal business, but still a business – so the more likely a victim is to make a payment, the more money the criminal business generates. Additionally, many modern threat actors will install backdoors which allow them to re-enter the organization’s systems if the organization chooses not to pay – so that the threat actor can re-encrypt over and over until they get money. 

Once the threat actor has gotten onto as many systems as possible and made sure things like backups have been rendered useless, single-extortion ransomware enters its final stage. Some, most, or all of the data on each infected machine is encrypted using a key known only to the threat actor. Without going into too much detail here, threat actors use a technique known as asymmetric encryption – meaning that the key which encrypts the data cannot be used to decrypt it. So even if the organization captures the encryption key, it won't be useful in getting back to business. Once done, the threat actor either displays a message on the infected systems and/or directly contacts the organization to demand a ransom in exchange for the decryption key, and the attack is then finished.
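The asymmetric encryption idea can be illustrated with a small sketch. This uses textbook RSA with deliberately tiny numbers purely for demonstration – real attackers use enormous keys and hardened crypto libraries, and every name and number below is made up for the example:

```python
# Toy RSA keypair with tiny primes - for illustration only.
def make_toy_keypair():
    p, q = 61, 53
    n = p * q                  # shared modulus (3233)
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent (used to encrypt)
    d = pow(e, -1, phi)        # private exponent (used to decrypt)
    return (e, n), (d, n)

public_key, private_key = make_toy_keypair()

def apply_key(message: int, key) -> int:
    # Modular exponentiation - the same operation encrypts or
    # decrypts, depending on which key is used.
    exponent, modulus = key
    return pow(message, exponent, modulus)

plaintext = 42
ciphertext = apply_key(plaintext, public_key)

# The encrypting key cannot undo its own work...
print(apply_key(ciphertext, public_key) == plaintext)   # False
# ...only the private key, held by the threat actor, can.
print(apply_key(ciphertext, private_key) == plaintext)  # True
```

The point the sketch demonstrates: capturing the encrypting key gets the victim nowhere, because only the decrypting key – which never leaves the threat actor's hands – can reverse the operation.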

For double-extortion ransomware, the game changes a bit. While all of the above steps still happen, there is another step added between the propagation phase – where the threat actor tries to compromise as many systems as possible without being caught – and the encryption phase. As they move across the organization's systems, the double-extortion threat actor begins stealing a copy of the data that they discover. There are many methods for performing this step, but the most common involve sending a copy of each file to cloud storage that the threat actor has access to. Many have asked why cloud providers don't prohibit this activity and stop double-extortion ransomware; the answer to that question will be in an upcoming article, but suffice it to say that, currently, they really can't police this type of data transfer in order to stop it. Data exfiltration can occur quickly, or very quietly – with different threat actors preferring different techniques in a trade-off between getting everything fast or evading defenses but taking longer to get the job done. 

This dataset is held until after the threat actor encrypts the original data on the organization's systems, and the data theft can go on for as long as the threat actor is able to dwell within the organization. This means that not only can all current data be stolen, but any new data can also be siphoned off as the attack progresses. With dwell times adding up to potentially months, a great deal of current data can be stolen as it is created and modified by employees. 

Once the trap is sprung and the original data is encrypted, the threat actor has two threats they can use to extort a payout from the victim organization. First, they will offer the decryption key in much the same way as with single-extortion ransomware. Second, they offer to destroy their copy of the data if the ransom is paid, but threaten to release that data to the general public if the ransom is not paid. So, even if an organization can recover without paying the ransom, they still must contend with the fact that highly privileged data could be released to the outside world unless they pay. For organizations like law firms, healthcare companies, payment processors, and others that hold extremely privileged information, such a public release could be devastating and even trigger massive regulatory fines and penalties. Even a business that writes off the encrypted data as a loss may not be able to weather all of that data becoming public knowledge to anyone who wishes to view it. The hit to customer trust, regulatory fines, impact to stock prices, loss of investors, and other factors make such a release of data something many companies cannot withstand without going out of business. 

Some ransomware threat actors have even taken things a step further with so-called triple-extortion attacks. The data itself is encrypted, the threat actor threatens to release the stolen data to the general public, and the threat actor also threatens persons and companies that appear in the data to try to get them to pay in addition to the company the data came from. For example, if a ransomware actor compromises a hospital, the data on the hospital's systems is encrypted, the threat actor threatens to release the copy of that data which they hold to the general public, and the threat actor reaches out to individual patients and demands that they also pay to keep their own information in the stolen data-set from becoming public. This maximizes the payout the threat actor can get, and makes it even more likely that the original victim organization (the hospital in this scenario) will pay them to make the whole problem go away. 

Many have asked me if they should pay the ransom. While I can't speak to every situation that ransomware can create, my overall recommendation is not to pay if there is any other way to get back to business. Paying the ransom has several negative effects: First, you're giving money to one or more people who admit they are criminals. There's no guarantee that they'll do what they say they'll do if you pay them, and they may have back-door access to continue harming your organization even if they do give you the decryption keys. There's also no way to validate that they deleted their stolen copy of the data, and in fact law enforcement has found supposedly deleted data on threat actor systems taken over in raids and shutdowns [https://krebsonsecurity.com/2024/03/blackcat-ransomware-group-implodes-after-apparent-22m-ransom-payment-by-change-healthcare/]. Second, every time a threat actor is paid, it encourages more threat actors to get into the ransomware business to make money. Third, depending on who the threat actor is and where you are, it might be against the law to send money to the threat actor at all, exposing your organization to even more regulatory and/or legal issues. Some information on this for US companies can be found here: https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf . While there are some cases where paying the threat actor is the only way to resolve the situation, every organization should think long and hard about the repercussions to their own business and to the greater business world if they do so. 

Ransomware is an insidious threat that is growing every day. With double- and triple-extortion techniques growing in popularity, even the ability to recover without paying the ransom doesn't remove the threat that the criminals can hold over an organization and its customers. That being said, it is not all doom and gloom. By keeping software updated, not interacting with suspicious links or attachments in email, and practicing basic online hygiene, users can thwart a large number of ransomware attacks. Exploitation of weaknesses in software will still be a problem, and organizations must address these by utilizing additional security controls to compensate for the weaknesses, but effective strategies do exist for minimizing the potential to be struck by ransomware. Together, we can make it less lucrative for a threat actor to use ransomware, breaking their business models and making the net a safer place. 

Cybersecurity in Plain English: IAM What?

A reader recently asked, “What is IAM and why is it important?” This is a bit of a complex question, but we can definitely dive into some of the higher-level concepts and details to de-mystify Identity and Access Management (IAM).

IAM is simply the series of technologies that control who is allowed to access what on your corporate systems. The complexity comes about because – while the idea is simple – the actual implementation of IAM is one of the most complex operations that many companies will ever undertake. The reason is straightforward: humans are not generally logical and orderly beings. Because of that, systems which enable humans to do their jobs also tend to be complicated and intertwined, meaning making sure only the right people have access to the right systems and data is often difficult at best. So, let's have a look at the basic ideas behind IAM and what they do.

First, the Principle of Least Access is the starting ground for any IAM solution set. As its name would imply, this principle says that each user should first be given the absolute minimum amount of access to systems and applications, regardless of any other factor. When a user needs access to something more, they get it quickly and efficiently, but they only get the bare minimum access to that "something more" and nothing beyond it. As an example, a new user needs access to things like file servers, email, and some applications. This access would be very specifically defined, giving them access to just the folders on the file server they require, for example. They get an email box, but don't get access to shared mailboxes automatically. They get read-only access to applications, not full access. Then, based on the needs of the user and the approvals of management, the user can request and gain additional access as and when required. While this process can be cumbersome – especially when a user is first starting with an organization – it also avoids over-provisioning access that later must be pulled back. Provisioning and de-provisioning solutions can greatly aid with this process, allowing IT teams to quickly add and remove access as needed with a minimum of manual steps. Note that de-provisioning is as critical as provisioning. When an employee changes roles or leaves the organization, or when an application is reconfigured or replaced, access must also be updated to keep the principle in action, ensuring users have the access they need but no more. 
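To make the principle concrete, here is a minimal sketch of least-access provisioning. The user names, resources, and access levels are hypothetical, and a real IAM platform layers approval workflows, auditing, and automation on top of this basic idea:

```python
# Minimal least-access model: users start with NO access, and each
# grant names one specific resource and level. Names are made up.
class AccessStore:
    def __init__(self):
        self.grants = {}  # user -> set of (resource, level) pairs

    def provision_user(self, user):
        # New users begin with zero access - the least-access baseline.
        self.grants[user] = set()

    def grant(self, user, resource, level):
        # Access is added one specific resource/level at a time,
        # only after a (hypothetical) approval workflow.
        self.grants[user].add((resource, level))

    def revoke(self, user, resource, level):
        # De-provisioning is as important as provisioning.
        self.grants[user].discard((resource, level))

    def can_access(self, user, resource, level):
        return (resource, level) in self.grants.get(user, set())

store = AccessStore()
store.provision_user("new.hire")
store.grant("new.hire", "sales-folder", "read")

print(store.can_access("new.hire", "sales-folder", "read"))   # True
print(store.can_access("new.hire", "sales-folder", "write"))  # False
store.revoke("new.hire", "sales-folder", "read")
print(store.can_access("new.hire", "sales-folder", "read"))   # False
```

Note that "write" access was never implied by the "read" grant – every level of access to every resource is its own explicit decision.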

Second, one source of truth per organization. While it is very possible for every application and site to have its own identity data store, that is a recipe for disaster as a company grows and evolves. Instead, a single source of truth for identity – like Microsoft's Active Directory or a similar solution – allows for much tighter and more effective control over identity and access. Each application then uses that single source to confirm the identity of the person logging in and what they're allowed to access. The most common form of this idea in organizations today is Single Sign-On (SSO) – where you go to log in to an application (like SalesForce) and see your browser redirect to your company login page. SalesForce is checking with your company's single source of identity truth, instead of keeping its own database of users within the app. This is a bit of an oversimplification, as the methods and technologies used for SSO are complex, but the basic theory of using one source of truth to identify users is the goal.

Third, the concept of zero trust. Zero trust has become a bit of a buzzword in the cybersecurity industry of late, but the actual operational methodology is extremely valuable. Zero trust says that whenever a user, system, application, etc. attempts to access anything, they/it must prove they are who they claim to be and must have been granted access for that specific operation. This means that even if the user had already logged into an application, their identity would still be challenged if they attempted to access other areas of the application. A system talking to another system might have to pass an authentication challenge if it tried to access data in another database. This is significantly different from traditional access methods, which say that a user who can use an application has all of their access rights "pre-cached" and ready to go. The reason for zero trust is that a user's device (or a data system itself) could be used in a way that is not appropriate – either because the user is attempting to do something they shouldn't, on purpose or by accident, or because the device has been compromised by a threat actor. This could easily result in access to data and systems that shouldn't be accessible, or where access has been removed but that removal hasn't yet filtered down to the application in question. In short, zero trust gets its name from the fact that a user – even a user who already logged into something – isn't trusted as they move around applications and systems. They must pass identity checks (which often happen invisibly to the actual user) to gain access to additional resources. 
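Here's a hedged sketch of what a zero-trust check might look like in code. The signing scheme and policy table are simplified stand-ins for a real identity provider – the point is simply that identity and authorization are verified on every single request, with nothing pre-cached:

```python
# Zero-trust sketch: every request re-proves identity AND re-checks
# authorization. The token scheme and policy table are invented.
import hashlib
import hmac

SECRET = b"demo-signing-key"             # stand-in for a real identity provider
POLICY = {("alice", "reports", "read")}  # who may do what, checked every time

def sign(user: str) -> str:
    # Issue a token proving identity (a stand-in for a real credential).
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

def authorize(user: str, token: str, resource: str, action: str) -> bool:
    # Step 1: prove identity on THIS request - no remembered sessions.
    if not hmac.compare_digest(token, sign(user)):
        return False
    # Step 2: check that THIS specific operation was granted.
    return (user, resource, action) in POLICY

token = sign("alice")
print(authorize("alice", token, "reports", "read"))              # True
print(authorize("alice", token, "reports", "write"))             # False - never granted
print(authorize("alice", "stolen-or-stale", "reports", "read"))  # False - bad token
```

Even with a valid token, "write" is refused because that specific operation was never granted – which is the heart of the zero-trust model.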

Identity and Access Management attempts to implement all these theories and more, and so can be a complicated strategy for any organization to undertake. By giving users access to only what they require, forcing all applications and systems to use a single source of identity truth, and ensuring that access requests are dynamic and not static; organizations can begin to tame the beast that is IAM without keeping users and systems from effectively doing their jobs. 

Cybersecurity in Plain English: What is a Firewall?

A reader recently asked, “What is a firewall? How does it work, and what is it doing?” Both good questions, so let’s dig in and uncover what this critical network defense system does.

Most of us that have owned a car know that in an automobile, the firewall is the heavy metal panel that sits between the passenger compartment and the engine. Since the engine (in gas-powered and hybrid vehicles) works by exploding petroleum products, the chance that something could cause a fire is not insignificant. Especially in a crash or after taking damage from other sources, engine fires could pose a huge threat to anyone in the car itself. Therefore, the physical firewall does exactly what it says on the tin – it serves as a barrier between a fire in the engine compartment and everyone sitting in the passenger compartment. Physical firewalls are not uncommon in many other areas, such as in boats, different types of home/office areas, etc.

Digital firewalls are one of those things in cybersecurity that sound incredibly confusing, but the basic functions are actually straightforward. While there are advanced firewall platforms which do a ton of additional things, the primary function of a firewall is to control what comes into and goes out of a network. In essence, it serves the same function as the physical firewall – it keeps something burning through the Internet from getting into your controlled networks at home or in the office. It does this by looking at the traffic moving into and out of the network itself, trying to find sources and patterns that just don't belong there and blocking them from entering. The firewall also acts as a boundary to keep internal traffic from going out across the Internet, so that network information doesn't leak. Note that we're not talking about keeping confidential data from getting stolen here – the firewall deals with network traffic and cannot, alone, stop someone from sending a file outside the org if they're sending it to a non-malicious target like Dropbox or OneDrive. 


One bit of clarification before we move on: Commercial firewalls are designed to be used in corporate networks and are capable of seeing and filtering massive amounts of information – up to several gigabits of data per second or more. They also can optionally have many of the advanced features that the rest of this post will describe. Home firewalls are significantly more limited, both in the speed of data they can process and in the features that are available. Your home firewall (most likely built into the router/modem you got from your cable provider or phone company) can cover the basics described below, but most likely can't process more than one gigabit of data per second or handle IDS/IPS and other advanced feature-sets. 


Most of this blocking activity is based on allow and block lists that are updated regularly within the firewall itself. Most commercial (and some home) firewalls come with subscriptions to threat intelligence feeds that provide them with constantly-updated lists of known malicious websites, IP addresses, and known malware file signatures to help make sure that any inbound or outbound traffic isn't coming from or going to somewhere that is known to be a threat to the org. Most commercial and home firewalls can also block traffic that doesn't conform to known-safe patterns, such as when an application attempts to reach out over a weird port and/or some external website or app tries to communicate with your computer without your computer first communicating with that site or app. 
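The core decision a basic firewall makes can be sketched in a few lines. The IP addresses, port lists, and stateful rule below are simplified illustrations, not a real firewall implementation:

```python
# Basic firewall decision sketch: block-list lookups plus a stateful
# default-deny rule for unsolicited inbound traffic. Addresses invented.
BLOCKED_IPS = {"203.0.113.66"}       # fed by a threat-intelligence feed
ALLOWED_OUTBOUND_PORTS = {80, 443}   # normal web traffic only
established = set()                  # outbound connections we initiated

def allow_outbound(dest_ip: str, port: int) -> bool:
    if dest_ip in BLOCKED_IPS or port not in ALLOWED_OUTBOUND_PORTS:
        return False
    established.add((dest_ip, port))  # remember so replies can come back
    return True

def allow_inbound(src_ip: str, port: int) -> bool:
    # Only accept traffic answering something we started (stateful rule).
    return src_ip not in BLOCKED_IPS and (src_ip, port) in established

print(allow_outbound("198.51.100.10", 443))  # True - normal HTTPS request
print(allow_inbound("198.51.100.10", 443))   # True - reply to our request
print(allow_inbound("192.0.2.99", 443))      # False - unsolicited inbound
print(allow_outbound("203.0.113.66", 443))   # False - known-bad destination
```

Real firewalls track far more state and match far richer rules, but the "check the lists, then check whether we asked for this traffic" pattern is the essence of it.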

Threat actors haven't ignored how firewalls act, though, and have begun to take steps to overcome the protection a basic firewall can provide. For example, since most websites now use the more secure HTTPS protocol instead of the plainly visible HTTP, threat actors have started to also use HTTPS communication for their malicious actions. As HTTPS transmissions are encrypted, it's very difficult – if not impossible – for a basic firewall to see if the communication is moving malicious files or performing other forms of bad behavior. While it is still possible to block connections by port or by originating site/URL, the firewall can no longer see the traffic itself, and therefore loses some important functionality. So, how do firewalls evolve to help with this?

Modern firewalls have many additional features, though generally they’re only available on more expensive commercial firewalls or very high-end home firewalls costing about as much as a commercial firewall. These include things like SSL decryption and inspection and Intrusion Detection Systems/Services (IDS) and Intrusion Prevention Systems/Services (IPS). These tool-sets make it significantly more difficult for a threat actor to succeed in getting across the firewall, but also add layers of complexity to cybersecurity that require trained and knowledgeable staff to set up and maintain.

SSL decryption and inspection is exactly what it says on the packet. HTTPS communications are encrypted between the website and the browser/application, and therefore appear as meaningless garbage when viewed by a normal firewall. With SSL decryption, these streams of data are decrypted by the firewall, examined for malicious content or intent, then re-encrypted and passed to the user's device that requested them. Outbound data is also examined in this way, to look for signs that a user's device is compromised, or potentially that an insider threat action is happening.

Because of the nature of HTTPS, you can't just decrypt and then re-encrypt data. That would result in major errors in the applications and browsers communicating over HTTPS and create a lot of headaches for users and app creators as well – apps and browsers automatically look for and block this kind of activity, thinking it is a threat action. So, to set up SSL decryption and inspection, the IT/Cybersecurity team must configure both the network itself and every device that will communicate over it with policies and security certificates which tell the devices that if traffic is re-encrypted by that known firewall, it should be treated as if it was never decrypted in the first place. This is, of course, very specific to the network in question, and only works if the end-user device can confirm that the specific firewall in question was the only device to decrypt and then re-encrypt the data. By implementing SSL decryption and inspection, malware and other malicious traffic can be properly examined before it reaches the end-user device, allowing the firewall to resume its duties even where sites and apps are now sending/receiving data over HTTPS. 
As you might guess, this system requires not only knowledgeable IT/Cybersecurity staff, but also help from Legal, Regulatory, Compliance, and often HR teams to make sure that no privacy or data regulations are being violated – as the organization can now see what would otherwise be unreadable data transmissions to banks, medical providers, and other sensitive/confidential communications.  

IDS/IPS are systems which look at data packets as they are moved into and out of the network like a basic firewall, but they also seek out known patterns and behaviors that are malicious. This is accomplished by keeping track of what is being sent and received, and comparing that information to updated lists of data flows and behaviors which would indicate suspicious or outright malicious activity. A common example is a compromised endpoint receiving data that is identifiable as Command and Control (C2) information from a threat tool or platform like a ransomware operator or criminal threat group. This would indicate that the end-user device is likely being subverted for use by a threat actor. As with blocking known bad websites and URLs, this requires continuously updated data on what activity and network traffic is considered to be such an indicator of compromise, and IDS/IPS service providers will also provide threat feeds that supply this information to the firewall on an ongoing basis. IDS/IPS can be used in conjunction with SSL decryption and inspection to perform even more effective scanning, and it isn't uncommon for both functions to be part of a next-generation firewall platform, while still allowing the IT team to decide if they will use one, the other, or both. The names (Intrusion Detection vs. Intrusion Prevention) refer to two forms of this kind of protective feature: IDS will alert staff if indicators of compromise are detected, but will not actively block traffic, while IPS will both alert and block traffic when it sees suspicious activity. A firewall may offer one or the other, but rarely both, because IPS includes IDS detection features as part of its basic operations. 
Blocking benign traffic can create massive disruptions to business, so IT/Cybersecurity teams must properly configure and regularly tune these systems to make sure the bad stuff gets blocked, good stuff gets through, and anything else is reported and quickly evaluated to determine what to do next. 
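The IDS-versus-IPS distinction described above can be sketched like this – the signatures are invented for illustration, and real systems match far richer patterns fed by threat-intelligence subscriptions:

```python
# IDS vs IPS sketch: both match traffic against known-bad signatures;
# IDS only alerts, while IPS alerts AND blocks. Signatures are made up.
SIGNATURES = [b"C2-BEACON", b"EVIL-PAYLOAD"]  # kept current via threat feeds
alerts = []

def inspect(packet: bytes, mode: str = "ids") -> bool:
    """Return True if the packet is allowed through."""
    for sig in SIGNATURES:
        if sig in packet:
            alerts.append(f"matched {sig!r}")  # both modes raise an alert
            return mode != "ips"               # only IPS drops the packet
    return True

print(inspect(b"GET /index.html", mode="ips"))      # True - clean traffic
print(inspect(b"ping C2-BEACON id=7", mode="ids"))  # True - alerted but passed
print(inspect(b"ping C2-BEACON id=7", mode="ips"))  # False - alerted and blocked
```

Both modes produced an alert for the suspicious packet; only the IPS mode actually stopped it, which is exactly the trade-off teams weigh when tuning these systems.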

Finally, firewalls can be extended to work with other cybersecurity tools and platforms. Endpoint protection solutions can work cooperatively with firewalls to help detect and deal with malware or other activities that involve more than one stream of data and/or multiple endpoints. Data Loss Prevention tools can integrate with a firewall to block the transmission of data outside of extremely restricted endpoints and business applications. The potential list of integrations is nearly limitless, and your IT/Cybersecurity team can set up the right combination of tools, with the right configurations, to best protect the business while still letting users get their work done. 

So, a firewall is a device (usually physical but sometimes virtual) that sits between your internal network and the outside world. Its job is to make sure any communications coming into the network conform to known traffic patterns and aren’t coming from known malicious sites/URLs. Firewalls can be extended to do additional cybersecurity tasks such as decrypting and examining HTTPS communications, and to detect and block known forms of malicious traffic even if they’re coming from otherwise benign sites and services. They can also be extended by integrating firewalls with other cybersecurity systems to enhance all of your cyber resilience plans. This is a bit of an oversimplification of the full depth and breadth of what modern firewalls can do, but it is a good way to visualize their operations and functionality in your networks. 

Cybersecurity in Plain English: Why and how to keep applications updated

A reader recently asked if just running Operating System (OS) updates and anti-virus updates was enough to keep their home devices safe. While that's a good start, it may not be enough to really stay safe out there, so let's dive a little deeper into this topic:

OS updating is critical to making sure your home/personal devices stay safe. This also applies to work devices, but your organization may have tools that make that happen automatically, so check with your IT team to find out if you also need to do this on those laptops/desktops/etc. Anti-virus/anti-malware tools also need regular updating, but nearly all of them do that by themselves. The few that require you to manually update them are generally the free AV tools, but they’re also pretty simple to keep up-to-date. Open the app, go to the settings page, and check for updates. By making sure to keep these two things (the OS and your anti-malware tool) updated, you help to ensure that the majority of threat activity which isn’t coming in via social engineering techniques like phishing will get blocked. Don’t forget to do this for your phones, tablets, smart TVs, and other devices around your home. If it has an OS and connects to the Internet, you’ve got to make sure the device is checking for updates, or that you’re doing it yourself. 

Generally, you should be updating once per week. That’s a good trade-off between time spent doing updates and security for your devices. Set aside 30 minutes once per week to run through the process, and you’ll keep everything running smoothly. At the absolute least, you should be updating once per month, but a weekly cadence is a better choice as different vendors release updates on different schedules. 

That being said, your OS is not the only software running on your home systems. Windows, MacOS, and Linux devices – along with phones, TVs, and other smart devices – all run applications, and those applications can also get out of date. As these apps age, security researchers and threat actors alike find vulnerabilities in the software that can be used to make the app misbehave, gain access to things outside of the app itself, or cause damage to your data and/or steal it. Because of this, you’ll need to make sure you’re updating those apps regularly, but it doesn’t have to be a big time-sink. If an app is no longer supported by its vendor, then it is definitely time to start seeking out an alternative that is actively being updated. Legacy applications (apps that are no longer in active development) are a massive problem in the cybersecurity world, and while updating to a new app or a new version of the old app isn’t easy, it is absolutely necessary. Get that process started as soon as possible to give yourself time to make the change before a security vulnerability is discovered in that legacy application that forces you to migrate with no warning. 

Let’s look at how to do these kinds of updates on the major operating systems and for lots of applications:


To update your OS…


On Windows 7 and higher:

Go to Settings from the Start menu, then look for Windows Update (on Windows 7 and 8, Windows Update lives in the Control Panel instead). Check for Updates, then install anything it finds. You may need to reboot, and if so be sure to check Windows Update again after your reboot to make sure there aren't any further updates to apply.  

On MacOS 12 and higher:

Go to the Apple Menu in the upper-left of the screen and choose System Settings. Then go to General, then Software Update. Let the system check for updates, and if it finds any go ahead and install them. In nearly every case this will require a reboot, but it needs to happen so give your Mac the time it needs.

On Linux:

Open a Terminal window and use your preferred package manager (like APT or yum) to look for updates. If any are found, install them. The good news is that, while you do this via the terminal, package managers also update any other software that was installed via the package manager in question, so you update nearly everything all at once. Reboots are rarely required, but if one is needed then you should let it go ahead and restart.


For applications, things are a bit different for Windows versus MacOS and Linux. Let’s step through the three major OS types and how you can keep up to date.

Windows: The elephant in the room. By default, you can get Microsoft application updates for apps like Office via Windows Update (you may have to tell Windows Update to do that in its own settings page), but any other apps are not included in that check. This means you have to either use an app updater or go app-by-app to check for updates manually. You can typically find the update check in the Settings or Help sections of the application. There are some app managers like PatchMyPC ( https://patchmypc.com/home-updater ) that can help with many apps, and they're worth checking out. Keep in mind that you should not use a patch manager unless you have reviews from trusted sources confirming that it's legitimate and safe. It's unfortunate, but there are several "app manager" tools for Windows that are actually malware/spyware themselves. Microsoft itself has tried to help here, with the Microsoft Store app allowing you to keep any apps you buy through that tool updated, but only a small portion of the available Windows apps are in the Store just yet. 

MacOS: Apps from the App Store can be updated by just going to the Store, then clicking Updates on the left-hand menu. For other apps, you'll need to either check the apps manually (usually in the application's main menu or the Help menu) or use an app manager for any apps you have that didn't come from the App Store. MacUpdater (note the spelling, with an "r" at the end) is a great app manager for MacOS, and is reasonably priced ( https://www.corecode.io/macupdater/ ). It tracks tons of apps, and lets you update with a simple click when it finds one that's outdated. I'm not being compensated by them, I just use the tool myself and know it works really well. The Standard version will get the job done for most, but there are lots of options to choose from. Between the App Store covering a huge number of apps, and tools like MacUpdater taking care of the rest, you will be covered.

Linux: As mentioned in the OS section, package managers for Linux also update any applications installed from packages – which is the vast majority of apps you’d run on Linux. There are exceptions here and there, and you’ll need to manually check those periodically to stay up to date. 

Don't forget to also have your phone, smart TV, smart home devices, and other things connected to the Internet check for updates. This includes both OS and app updates! For example, you can ask Alexa to "Check for software updates," and it will look for any new software it needs. iPhones and iPads can be updated by going to Settings: General: Software Update for the OS, and the Updates page of the App Store for apps. Android is a bit different, but Google has instructions for OS updates here: https://support.google.com/android/answer/7680439?hl=en and the Play store can help keep your apps up to date. 

Keeping both your Operating Systems and applications updated is critical to staying safe. Even with a great anti-malware system, outdated applications can let threat actors perform attacks that can succeed. Taking half an hour once per week to keep things up to date is an easy – and effective – way to make sure you’re not giving an attacker any low-hanging fruit to take advantage of. 

Cybersecurity in Plain English: What Happened With LockBit?

Earlier today, a reader asked “What happened with LockBit today? They’re all over the news.” Probably a question that a lot of people have, so let’s dive in and spell it out!

First things first, who or what is LockBit? Starting life as a ransomware gang some time ago, LockBit has been responsible for attacking the infrastructure and data systems of everything from small sole-proprietorships to multinational organizations. Tactics varied, but their primary operations revolved around double-extortion ransomware: where a copy of victim data is first removed from the environment and sent to LockBit servers in the cloud, then the original data is encrypted and rendered unusable to the victim organization. This allowed LockBit to demand payment for decryption of the data, but also to threaten to make all the stolen data public if the victim org decided they didn’t want to pay for the decryption itself. In this way, LockBit had multiple avenues of extortion to bring to bear in order to get paid by the victim. More recently, LockBit branched out into Ransomware as a Service, where they would create tool-kits and host infrastructure for other criminals to use when performing ransomware attacks against victims, with LockBit getting a cut of the criminally-acquired funds.

Now on to what happened: Early in the morning of Feb 20 here in the US, a coalition of law enforcement groups led by the National Crime Agency (NCA) in the United Kingdom and the FBI in the USA struck hard at the LockBit web infrastructure. In addition to many other operations – including multiple arrests of high-ranking LockBit members in multiple countries – law enforcement took down the dark-web back-end systems and the website that drove the Ransomware as a Service platform, effectively rendering the system useless for hundreds of affiliates of LockBit. The website itself was replaced with new information: First was a fairly standard notification that law enforcement agencies had seized the website and affiliated domains. As this site is where LockBit and their affiliates posted victim information if they didn’t pay up, this was a massive blow to the organization as a whole. Shortly after, however, this placeholder notification was itself replaced with a website that looked very similar to the original LockBit leaks site, but now showing information about the group itself, its members, its operations, and links for victims to get help and assistance from law enforcement. In short, the site returned to doing what it did prior to the seizure, but now hosting information on LockBit instead of on their victims. 

The operation – code-named “Cronos” – was carried out quickly and efficiently, with the entire process taking just a couple of hours from start to finish. The coordinated takedown of the web infrastructure and the arrest of LockBit leaders in multiple countries effectively – and even humorously – crippled the ransomware gang and their affiliates, as LockBit’s own infrastructure was suddenly converted into a weapon against them and their affiliate network. 

It should be noted that this crippling of the gang could be temporary. Not all suspected LockBit leaders were arrested, and dark-web infrastructure has a very nasty habit of being resurrected quickly somewhere else. That being said, for now, I think we can call this a total win for law enforcement and a complete loss for LockBit and their Ransomware as a Service affiliate groups. 

One only wonders: will LockBit now be offering one year of complimentary identity protection services for their affiliates, like many of the organizations they attacked had to do for their customers after suffering a LockBit-affiliated attack?

Cybersecurity in Plain English: What is MFA?

Multi-Factor Authentication can be confusing for those who haven’t used it regularly before, and that leads to lots of questions like “What the heck is MFA, and why should I use it?” Let’s dig into that topic and demystify something that is becoming part of our daily lives more and more often.

Multi-Factor Authentication (MFA) is primarily exactly what it says on the tin: in order to log in, a user must be able to satisfy challenges that revolve around more than one piece of data, information, hardware, or some other combination of factors. If you’ve ever had your bank tell you that you must put in the code they just emailed you when you go to log in, then you’ve experienced an MFA challenge – but not all such challenges are quite as visible. Simply stated, an MFA challenge requires a user to present more than one security factor before they’re allowed to access something. Keep in mind that your username and password – while being two bits of data – are actually just one factor for authentication, so it’s best to see them as a single item to keep things simple as we explore.

Primarily, factors in authentication (the process by which a system confirms you are who you say you are) are broken down into several types:

Something you know: This includes things like your username and password combo. While they are preferably unique to you, it’s entirely possible that two people have the same username/password either by accident or because your data was leaked or stolen. Security questions (“What is your mother’s maiden name,” etc.) are also considered something you know in most security contexts. 

Something you are: Biometric data is a factor used to prove who you are because it is – at least theoretically – entirely unique to you. This factor can include things like your fingerprint, specific topographical maps of your face, the pattern of blood vessels in your retina, etc. While biometric data is difficult to steal or fake, storing it brings with it privacy issues, and accurately collecting and reading it can be challenging for a lot of devices. 

Something you have: Tokens that you have physical and/or digital control of can be used to prove who you are by having you show information on or in those devices and/or present the device itself. While tokens can be stolen, when combined with other factors they can be a great way to show a system you are really you. Some tokens generate one-time passcodes using a physical key-fob or an app on your phone. Others work by generating and sending a unique code through near field communication (NFC) – like holding your phone or a smart-card near a reader. In some cases, your laptop/desktop/phone itself can be this factor – by looking at things like geo-location, installed software, connected networks, etc., authentication systems can confirm that the machine you are using is known to be used by you alone. 

MFA is simply the use of at least two of these factor types in each login/access event. So, for example, when you log into a website, the site may ask for a username and password, and then send a one-time passcode to your phone via text-message. You type the code from the phone (something you have) into the site after you put in your password (something you know) to gain access to the website as a user. Apple devices like iPhones/iPads have been using biometrics as a second factor for some time (TouchID and FaceID), and Windows has begun to use it for laptops and desktops (Windows Hello).
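
Under the hood, the rotating codes from authenticator apps are usually generated with the Time-based One-Time Password algorithm (TOTP, RFC 6238): the app and the server share a secret key, and each one hashes the current 30-second time window with it. Here’s a minimal sketch in Python – the secret used in the example is the RFC’s published test value, not something you’d ever use for real:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, period=30):
    """Compute a time-based one-time passcode (RFC 6238, HMAC-SHA1)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // period                       # which 30-second window we're in
    msg = struct.pack(">Q", counter)                   # counter as 8 big-endian bytes
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# With the RFC's test secret, the window containing T=59 seconds
# yields "287082" – the same value the server computes on its end.
print(totp(b"12345678901234567890", for_time=59))
```

Because both sides derive the code from the shared secret and the clock alone, the server can verify the six digits without ever contacting your phone – which is why authenticator apps keep working even with no network connection.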

Why are you seeing MFA used more and more often? MFA offers much better security than a username/password alone. Since the user must also provide some other proof they are who they say they are, it becomes significantly harder for a threat actor to gain access to things they shouldn’t be able to touch. Usernames are typically easy to figure out – most systems use your email address, which is already public information – and passwords tend to be weak and easy to guess, re-used on multiple sites, frequently stolen, or any combination of the three. A username and password alone just isn’t proof you are who you say you are anymore. MFA therefore becomes necessary to allow a system to know you are who you say you are without relying solely on information that could be in the hands of anyone. 

Not all MFA is created equally, of course. Email and SMS text message one-time-passcodes can be problematic if a threat actor gains access to your email inbox and/or tricks your phone service provider into re-routing text messages to them instead (a technique called “SIM Swapping”). While events like this are rare, they do happen, so email and text validation for MFA are better than nothing, but not the best. Authenticator apps like Microsoft Authenticator, Google Authenticator, and others make things more secure and harder for a threat actor to overcome easily. Biometric factors are even better, but can be difficult to use effectively. Not for the user, who just taps a finger or looks into a camera, but for the technology itself. Fingerprints can be subtly altered based on pressure against the reader. Facial recognition can be impacted by lighting, glasses, and a host of other factors. Retinal scanning requires the user to hold still and stare into a camera. Researchers and vendors have been making these things better and better over time, but they can still be tricky to deal with. 

In the end, MFA is here to stay. Since usernames/passwords alone are considered nearly the same as not authenticating at all these days, more and more organizations are adopting some form of MFA to allow you to gain access to company resources safely. It doesn’t need to be difficult, however. Having an MFA challenge that just asks you to type the number shown on your screen into your phone is easy, fast, and effective – with Microsoft and others adopting this methodology to make life easier for users while making it much harder for threat actors. Leveraging hardware “fingerprints” like the apps you have installed and the location the device appears to be sitting at can reduce the total number of MFA challenges a user has to deal with each day. The combination of known successful defenses with evolving technologies allows for MFA to better protect the organization without putting a burden on the users, allowing for better security while keeping users happy and productive. 

Cybersecurity in Plain English: How Do Threat Actors Get In?

I’ve written a blog series like this for many companies I’ve worked for, now I’m doing it on my own blog for everyone to read. Please drop me questions you’d like answered to me via Twitter/X/whatever it’s called this week @miketalonnyc – I’d love to get you answers explained without the jargon! 

A very common question I get from the field is, “How do threat actors actually get into the network in the first place?” It’s a good question, with some possibly surprising answers, so let’s talk about initial access and how threat actors take that first step.

Initial access is the term used for how a threat actor gains their first entry into a protected environment. This could be your home PC or a corporate network – whatever they’re eventually attempting to get access to within the target environment itself. Generally, the point of initial access is not the end goal of the threat actor, since it’s highly unlikely they land on the machine or system they actually want to get hold of. More often, initial access happens on a user’s laptop, a web server, or an application platform instead, and the threat actor then must jump from system to system to get where they want to be. This means that by minimizing initial access points, you also minimize the ability of the threat actor to do what they want to do.  

So, how do they accomplish that first step? There are quite a few different ways this can be done, but four of them stand out as being (by far) the most commonly encountered techniques. First, compromise of credentials – the threat actor gains control of legitimate usernames and passwords. Second, compromise of a vulnerable application – where a threat actor is able to exploit a vulnerability. Third is coercion or trickery used to get a user to run a malicious application. Finally, there are initial access brokers that use all of the above to amass initial access that they can sell to the highest bidder.

Credential compromise is the most common technique used. Threat actors use phishing, smishing (phishing by text message) and a host of other social engineering techniques to get hold of legitimate credentials that they can use to access systems in your organization. Alternately, they could guess or discover credentials without having to phish or otherwise grab them from users directly. Methods such as exploiting weak and/or default passwords, credential stuffing, or even brute-force attacks can get them what they need if other security controls aren’t in place. Weak passwords that are too short (less than 8 characters), too simple (no punctuation/special characters), and/or extremely common (password123) all allow a threat actor to successfully guess them in just a few tries. Credential stuffing is trying a list of passwords from one breach to attack a totally different organization that shares users who may have re-used passwords. Brute-force is exactly what it sounds like – threat actors simply try password after password until they find one that works. 

In all of these cases of credential compromise, layered defenses can be a huge help in defending the organization. The use of (and enforcement of the use of) multi-factor authentication (MFA) will help to block a threat actor with otherwise valid credentials from actually using them. Enforcing passwords which meet basic complexity rules such as including special characters (?, /, $, !, etc.) and requiring 12 or more characters makes it much more difficult for a threat actor to successfully guess a valid password. Blocking the most common passwords used online outright is also a great method to bring to bear. Troy Hunt (curator of HaveIBeenPwned.com [https://haveibeenpwned.com/]) has worked with many government and private entities to keep lists of the most common passwords. For example, the National Cyber Security Centre of the UK has worked with Troy to produce a list of the top 100,000 [https://www.ncsc.gov.uk/blog-post/passwords-passwords-everywhere]. Enforcing restrictions on the number of incorrect entries a user can try before they’re locked out helps derail brute-force attacks. Encouraging users to not re-use passwords by utilizing password managers helps curtail credential stuffing success.
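
The password rules above are straightforward to enforce in code. Here’s a hedged sketch in Python of what such a check might look like – the blocklist is a tiny illustrative sample, where a real deployment would load a full common-password list like the one described above:

```python
import string

# Tiny illustrative sample – a real system would load a full
# common-password list (e.g. the NCSC/HaveIBeenPwned data) instead.
COMMON_PASSWORDS = {"password123", "123456", "qwerty", "letmein"}

def password_ok(password):
    """Return (accepted, reason) for a candidate password."""
    if len(password) < 12:
        return False, "rejected: 12 or more characters required"
    if password.lower() in COMMON_PASSWORDS:
        return False, "rejected: on the common-password blocklist"
    if not any(ch in string.punctuation for ch in password):
        return False, "rejected: needs a special character (?, /, $, !, etc.)"
    return True, "accepted"
```

Pairing a check like this with a lockout counter (say, a temporary lock after a handful of failed attempts) covers the brute-force side of the problem as well.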

Remember that most usernames are known these days. Users utilize their email address, or some combination of first/last name/initials, so the username is no longer a big secret. Passwords – when well-managed – are still secret, but additional controls are required to ensure that a threat actor can’t walk in the front door. Utilizing complex passwords and MFA, limiting re-use, and blocking commonly known passwords all help to keep the password itself from becoming known and/or useful to a threat actor.  

Exploitation of a vulnerability is common in the quest to gain initial access. If a system or platform has a known vulnerability that can be exploited, then a threat actor will not have to gain credentials – they can just take control of the system or platform itself. Defenses here are two-fold: first, it is important to patch/upgrade systems with known vulnerabilities, but that’s not always a possibility. If budget doesn’t exist for upgrades, or if the patch or upgrade would significantly impact a business process, it’s unlikely that closing the vulnerability directly will be allowed. Here again, compensating controls can save the organization. If a threat actor gains control of one application, then blocking their ability to move through the network or gain control of additional applications becomes a vital step in limiting damage. Endpoint controls (on servers as well as user systems), restricting network access, and the ability to be alerted on anomalous activity all aid in catching a threat actor attempting to move from an exploited system to others in the environment. Of course, patching or upgrading is the optimal strategy and should be done whenever possible; but additional controls may be required when that patch or upgrade just cannot be applied.

Coercion and trickery are incredibly common in the threat landscape today. While forms of social engineering, they typically do not follow the same path as a phishing or smishing attack. Instead, a user may be tricked into installing malicious software that masquerades as legitimate software the business would regularly use. A common example is a threat actor taking over a mis-spelled domain for a popular software tool, so any user who accidentally goes to the mis-spelled site (which might even be performing search engine optimization to trick the user) downloads and installs the malware instead of the real software. Supply-side compromise – where a threat actor replaces the real software with malware on the vendor’s systems – is also a serious threat. Another common technique is the invocation of authority to coerce a user into installing malware. A threat actor may call or email pretending to be a software vendor, a bank, a government agency, or even your own IT department and pressure a user to download and install malware, spyware, or more. Of course, a user who has been doing things they should not be doing online could also be blackmailed into installing malware on company systems; but while fake attempts at this technique are common (such as “clean up software” emails that try to get a user to install something because they were “caught” on a site they shouldn’t have been on), confirmed real use of this tactic is thankfully rare. Proper security awareness training and periodic testing are the key to derailing this form of initial access attack. When users know what to look for, who to ask for help, and where to go to legitimately get software and updates, they are far less likely to accidentally download malware or do so under duress. Combining these methods with strong endpoint controls (like anti-malware tools) can help to ensure that the fake software is blocked from running. 
Last, but not least, running software updates in a lab to ensure that they are legitimate before deploying them across the organization – combined with ensuring vendors are following security best-practices – limits damage from supply-side attacks. 
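
One concrete habit that supports “get software from legitimate sources”: when a vendor publishes a SHA-256 checksum alongside a download, verify the file against it before installing. A minimal sketch in Python – any file path or published hash you plug in would come from the vendor’s official page:

```python
import hashlib
import hmac

def sha256_of_file(path):
    """Hash a file in chunks so large installers don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def download_matches(path, published_hex):
    """True only if the file's SHA-256 matches the vendor's published checksum."""
    return hmac.compare_digest(sha256_of_file(path), published_hex.lower())
```

A mismatch means the file is not the one the vendor published – corrupted in transit or, worse, swapped for malware – and it should not be run.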

Finally, there are entire categories of threat actor groups who perform just initial access attacks, using all of the above methods to accomplish their goals. They curate massive lists of legitimate and validated credentials, previously exploited systems that they still have active access to, automation to perform coercion and trickery on a massive scale, host and deploy malware as valid updates, etc. – but they don’t actually perform other attacks. What they do is sell that information to other threat actors who then use it to perform more extensive attacks like major data theft, ransomware, or disruptive actions. These initial access brokers make good money re-selling the access they gain to the highest bidder, allowing them to gain financial success without having to worry about extorting an actual payoff from a company. Careful monitoring of user activity and network operations to determine if there are anomalies going on can allow you to detect if credentials or systems have become compromised before that access gets sold to another threat actor. Many managed security service providers (MSSPs) can assist with that effort for those organizations who cannot do that kind of monitoring in-house.

Initial access is the first step in a sequence of events that leads to data loss, ransomware payouts, downtime and business loss, and a host of other problems for any organization. Defending against the most common forms of initial access can derail attacks before they get farther than that first step, and help keep your organization safer over time.