Cybersecurity In Plain English: The Three Calls of Social Engineering Attacks

As it is Cybersecurity Awareness Month, I’d like to post a perennial favorite of readers: My tips on how to spot a phishing email, text message, or call. 

While there is no guaranteed way to be 100% sure that a message is real or fake (which is why you should reach out to whoever supposedly sent the message to find out), there are things that the majority of phishing/smishing/vishing scams and attacks hold in common. The reason is simple: threat actors realize they have to make their attack messages seem real, urgent, and mandatory in order for you to follow their instructions. Because of that, you should always be on the lookout for three “calls,” and when you see them you should immediately become much more suspicious of the message.

1 – A Call to Authority

Threat actors know that people will typically show doubt in any unexpected email asking them to do something. Most people even doubt text messages and phone calls these days. In order to overcome this, threat actors will attempt to impersonate a person, company, or government organization you are less likely to question. Common examples are known executives at the company you work for, companies you do business with like Microsoft and Amazon, or government agencies you have to interact with like the Internal Revenue Service here in the USA. By making a call to authority, threat actors have a better chance that you will act on the information in the message without questioning it. This typically takes the form of the email or text appearing to be coming from the person or company in question – either by spoofing a known email address, or just flat out stating they’re from the company or org in question in a text message or phone call. 

2 – A Call to Urgency

We humans have a few common behaviors that threat actors like to take advantage of. One of those is that, when faced with an urgent situation, we tend to examine what is going on less and act immediately more often. Because of this, threat actors will often try to build a sense of urgency into their messages to trick a person into performing an action as a kind of knee-jerk reaction, without taking the time to question what is going on. Common examples of this are when you see messages saying your membership to an online service is locked/cancelled, or an invoice for a service must be paid immediately to avoid it going into collections. Other examples may be someone impersonating your boss demanding that you perform some action – like buying gift cards – that must be done right this minute since they’re in a critical meeting. All of these situations play upon our human nature and tendency to act quickly when we feel we’re in a significantly urgent situation.

3 – A Call to Specific Action

Nearly every message you get on your phone or via email has some kind of call to action. Your spouse reminding you to pick up bananas. Your boss telling you to complete a task. Your friends asking you to choose a restaurant to meet up at. What these normal and legitimate requests have in common is that there are multiple paths that can be taken to complete them. You could step out at lunch to get bananas, or get them on the way home. You can complete the task yourself, or coordinate with your co-workers. You have a selection of restaurants to choose from, or can even suggest that someone else choose this time. While there are legitimate situations where only one path of action is possible, there are far more situations where there are multiple paths that can be taken. A message that is demanding that you take a very specific path of action should raise your suspicion levels significantly. Common examples of this are when you are required to use a specific link in the email to respond, or when you are given a specific phone number for a law enforcement agency in a voicemail that you must call.

When you encounter one of these three calls by itself, there’s a good chance the message is legitimate. Your boss may call you, for example. When you see them stacking up together in a message, it is insanely likely that the message is some form of social engineering. When Apple emails you and demands you immediately call a specific phone number because your account is suspended, this is not a real email from Apple. When your boss texts you to demand you drop everything you’re doing and go buy a specific number of Visa gift cards right this second, that is not a real text from your boss. When you get a phone call from the IRS demanding you immediately wire them funds via a named wire service to cover penalties or risk being arrested, that is not a legitimate call from the IRS.

In each of those cases, there is a call to authority (Apple, your boss, the IRS), there is a call to urgency (this must be done RIGHT NOW!), and a call to a specific action (call a defined phone number, buy specific gift cards, wire money through a specific method). By looking for all three of these calls whenever you get a text, email, or phone call that you can’t confirm the origin of, you can quickly determine the likelihood that this is a scam, phish, or fake. Note that this doesn’t remove the need to use good online hygiene – you still shouldn’t click on a link or open an attachment unless/until you know who sent it and why – but by taking a step back and really looking closely whenever you see all three calls, you can spot the fakes quickly and accurately.
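To make the idea concrete, here is a tiny sketch (in Python) of what “counting the calls” looks like. The keyword lists are made up for this example and are nowhere near what a real mail filter uses; it is just the three-call idea expressed as code.

```python
# A toy illustration of the "three calls" idea - not a real phishing detector.
# The keyword lists below are made-up examples for this sketch only.
AUTHORITY = ["microsoft", "apple", "amazon", "the irs", "your ceo", "sheriff"]
URGENCY = ["immediately", "right now", "within 24 hours", "account suspended", "final notice"]
SPECIFIC_ACTION = ["click this link", "call this number", "buy gift cards", "wire the funds"]

def count_calls(message: str) -> int:
    """Return how many of the three 'calls' appear in a message."""
    text = message.lower()
    groups = [AUTHORITY, URGENCY, SPECIFIC_ACTION]
    return sum(1 for keywords in groups if any(k in text for k in keywords))

msg = "Apple Support: your account suspended. Call this number immediately to restore access."
if count_calls(msg) == 3:
    print("All three calls present - treat this message as highly suspicious.")
```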

So how do you take the next step when you suspect a message may be fake, fraudulent, or phishing? 

1 – Calm Down: No matter how urgent the message may seem, legitimate requests nearly always give you at least a few minutes to figure things out. The sender of that message may not want you to, but you can, and you definitely should. Take a moment to review what is going on before you take action on it.

2 – Confirm: No matter who is reaching out to you, there are other paths you can use to confirm what is happening and if it is legitimate. For companies like Apple, Amazon, Microsoft, Netflix, your bank, etc.; you can independently visit the website in question via a browser (without clicking on links in the email or text), log in, and see if there are any messages or alerts waiting for you. If the Sheriff’s Office is calling you, then you can hang up, find a known and trusted number for them through a web search, and call them directly. If your boss needs something, you can take a moment to check with HR, or call your boss directly via a phone number you know to be assigned to them. Taking a minute or so to confirm what is going on can keep you from taking an action that you’ll regret later.

3 – Continue: After you take a moment to review the situation and confirm the details via some method other than the one the email/text/phone call is demanding, you can then decide what to do next. If you can confirm that the request/demand is actually real, you can act on that. If you can’t confirm it – or if you’re able to confirm the demand is fake – you can report it and/or ignore it. If it has anything to do with your company, you should definitely report it. If it has to do with some other company or government organization, you should still report it via the methods on their corresponding websites, but that’s up to you. Reporting the incident helps everyone else stay safer by allowing companies, your organization, and other organizations to better train their users to spot these fakes in the future, and to involve law enforcement when necessary.

So remember, any time you see all three calls – A Call to Authority, A Call to Urgency, and A Call to Specific Action – in one message, then Calm Down, Confirm, and then – and only then – Continue. Stay safe out there.

Cybersecurity in Plain English: Exploding… Pagers?

Editor’s Note: This is a developing story, and little is known about the facts surrounding the events except that they happened. The article will be updated when more information is published (if that happens). Please remember that, as with any rapidly developing story, the truth of these events may not be known for quite some time. The editor would also like to thank @SchizoDuckie and @UK_Daniel_Card for their rational and technical support in keeping the author from getting derailed and wandering into spy thriller territory.

Update: October 16, 2024 – Reuters has posted a story detailing the results of an investigation around the pager devices.  This story contains details received from unnamed sources which highlight how the devices were given an online backstory, and – while not being able to discover exactly where they were manufactured – does detail what that manufacture looked like structurally.  This post has also been updated with some date corrections. 

Update: September 18, 2024 – Multiple news outlets including NPR and CNN are citing a US official in stating that Israel has claimed responsibility for the pager explosions.

Update 2: September 18, 2024 – Both the Taiwanese first-party manufacturer and holder of the trademark branding for the pagers and the Hungarian third-party manufacturing company that licensed that trademark are denying that they manufactured or sold the pagers to Hezbollah. We may have to wait a significant amount of time for investigations to sort out the truth of where these devices came from.

Update 3: September 18, 2024 – An additional wave of explosions, this time involving two-way radio devices and (possibly) solar devices, has occurred in Lebanon. While no one has claimed responsibility yet, it stands to reason that this was a second strike by the same group that detonated the pagers yesterday, presumably Israel.

Update 4: September 18, 2024 – The Guardian (a UK-based News Agency) has posted a story with more detail on how this may have happened. 

Original Post:

It would seem that there are quite a few things happening these last few months that create an immediate need for an explanation in plain English. Today has continued the trend, as I got bombarded by people asking “What happened in Lebanon with the exploding pagers?” Let’s dive into this topic, and hopefully I can offer some reassurance that a world-wide panic is not needed at this time.

Please note, while I have had training in some forms of chemistry in college, I am NOT an explosives or demolitions expert. The details I provide here were gleaned from hastily-performed research on the subject. This is also a longer article than usual, because the topic is complex and full of twists and turns, so breaking it down into plain English is going to require a lot of words.

TL;DR version: No, your phone is not going to explode unless there was a defect in manufacturing, and even those are rare. What happened was not a cyber attack, but rather an act of war that included a digital transmission component. The devices were built to *be* bombs, not converted into bombs by some kind of software magic. Read on for details.

First, some background. On September 17th, thousands of pagers (those old-school devices that let someone send you a phone number or short text string to let you know to call them back) detonated in dozens of locations throughout Lebanon. All of the pagers (as of this moment) were being carried by members of the group Hezbollah, an extremist group which has carried out numerous terrorist plots and attacks over the last several decades. This link to the New York Times coverage of the event is paywalled for many, but it is one of the better sources of news and information on this particular situation: https://www.nytimes.com/live/2024/09/17/world/israel-hamas-war-news

While no one has yet declared responsibility for the attack, it is likely that this action was carried out by Israel as part of their ongoing conflict in the region. This is one of many points of fact that are not confirmed yet, so it is only a suspicion at this time. Considering the last operation of this scale that Israel was involved in (Stuxnet) has still never been officially declared, we may not know for a very long time at that.

This leads to the inevitable question, “Can a pager (or any other mobile device) that I’m wearing or carrying become a bomb?”

The answer is a bit complex, but the short form is “not unless there was a defect in manufacturing,” and also that it is highly unlikely that today’s events were anywhere near that straight-forward. Rather, it is much more likely the exact opposite – that bombs were fashioned into mobile devices instead of mobile devices being turned into bombs themselves. Let me walk you through that.

The definition of a bomb is simply a massive amount of energy (usually in the form of heat and/or pressure) suddenly being created but trapped inside of an enclosed space. Eventually the heat and pressure exceed the ability of whatever space is containing it, and the result is a rapid dispersal of the heat, pressure, and whatever the container was made of into the immediate surrounding environment; i.e. it explodes. In this case, something caused the pagers – made of plastic with circuit boards, a battery, a small screen, and a few other components – to become the container for all the energy. When the container couldn’t hold back the energy anymore, it exploded; seriously injuring anyone nearby – including whoever was wearing the pager or had it in their pocket at the time.   

Such an explosion could be caused by many different substances. We’ve seen lithium-ion batteries explode before – https://www.cnn.com/2023/03/09/tech/lithium-ion-battery-fires/index.html – but they generally convert themselves to heat more slowly, causing very hot fires but not what we saw in the videos coming out of Lebanon today. On the other side of the equation, there are many compounds that do not take a large amount of space or weight to create significant explosions. I won’t list them out here, but a quick Google search will bring them up if you don’t mind that info being in your browser history.

It is also critical to point out that these were not pagers that you could buy from a local electronics store. These were encrypted devices designed to facilitate communications between members of a known terrorist organization (using the US and UK designation for Hezbollah). Therefore, they had to either have been built for that purpose, or heavily modified to suit that purpose. This becomes vitally important later in this article. 

So, what do we know as of this moment? Two things. First, that specialized pager devices which were being worn and/or carried by thousands of Hezbollah members exploded nearly simultaneously throughout Lebanon. Second, that it is unlikely to have been caused by the batteries or internal electronics of the devices themselves due to the explosions being very different from a standard lithium-ion battery fire. 

That can lead us to a set of conclusions, but this is pending additional information which may – or may not – come out later:

It is likely that an external group – potentially the Israeli security organization Mossad – managed to replace the pagers the Hezbollah members were expecting to get with devices that had been altered to include an explosive charge and a detonation system. Alternately, as noted by Reddit user UrsusArctus – https://www.reddit.com/user/UrsusArctus/ – the pagers may have been built with the capability to be remotely destroyed if they were to be lost or stolen as a Hezbollah security measure. This last scenario is less likely because Hezbollah has not been well-known to maintain such a level of Operational Security (OpSec), but it is possible and should be considered. 

This leaves us with pagers that contain an explosive charge on purpose (put there by either an external group or by Hezbollah themselves), and some way to trigger that charge to go off on-command. In scenario one, whoever diverted and altered the pagers would have built in the ability to trigger the explosion by sending a specific code to the device or through some other remote activation. In the scenario where the devices already had a self-destruct function, a security agency (i.e. spy group) could have found the sequence of codes or other operations which would trigger such a function. On-command, all of the pagers received the code to detonate, and the result is what we saw today.

What does this mean to everyone who is not a Hezbollah agent carrying a pager? Can this be done to a regular mobile phone? A laptop? My doorbell?! – in short, it’s insanely unlikely unless you’re being targeted by a state-sponsored espionage agency, and even then there is very little chance. In cybersecurity, we don’t like using terms like “never,” but this is as close to never as you’re going to get.  

The level of coordination and secrecy necessary to pull off either of the two scenarios (replacing the pagers or infiltrating the self-destruct system) is so massive that we almost never see anyone pull off this kind of attack. It has happened for espionage purposes – see https://www.securityweek.com/chinese-gov-hackers-caught-hiding-in-cisco-router-firmware/ – but it is so rare as to be close to non-existent, and certainly insanely rare for acts of war like we saw today. While it is true that Mossad has rigged exploding mobile phones in the past, each incident was one phone, given to one target by a spy or through some other means – never anything at this massive of a scale.

Remember that in the first scenario, you must have infiltrated and compromised the supply chain for the devices – a supply chain that routinely deals with a terrorist organization who is likely to retaliate with extreme prejudice. This would require that you basically control everything about the supply chain to an extent that no one who is part of the manufacturer or the other suppliers knows you are there, because they will certainly call you out to the bad guys if they figure out you are there.

In the second case, you would have to have had operatives in place within the terrorist group itself long enough for them to acquire access to the self-destruct systems. This is much more possible with really good spies, but still not something that your average threat actor could pull off with any level of success. Also of note, your devices would have to be rigged to explode in the first place, which I can safely assume no one reading this article has built into their iPhone. 

In both cases, it would only be possible to carry out this kind of attack because the devices were specifically built for use by the group that was targeted. These devices worked on an encrypted network, and therefore would have to be purpose-built or modified to function on that network. This allowed whoever carried out the attack to specifically target hardware and users to an incredibly precise degree. Trying to do this with commodity devices like Android phones would make it impossible to ensure that you attack those people you’re looking to attack, and them alone. Using off-the-shelf commercial devices like this also means there is a significantly higher – almost guaranteed – chance that the alterations are discovered before you can put your plan into action. So it isn’t the kind of thing that you’d see being done unless it was directly, highly, and explicitly targeted.

This is also something that can only be done once. That’s it. Now, everyone who uses covert mobile devices is going to be looking to make sure that they haven’t been tampered with; and those with self-destruct systems will disable them until they can re-secure the control systems. 

Finally, there’s no profit in this. Remember that cyber threat actors are typically in this to make money through extortion and/or resale of the data they steal. Blowing up someone’s phone doesn’t aid that goal in any way, since the device and its data are now gone. Not to mention the massive law-enforcement reaction because you either could have or actually did injure and possibly kill people. Even for hacktivists, detonation of a target will not gain them any ground, and will probably cause them to lose quite a lot instead. 

Taken together, this indicates that the attack was a state-motivated and state-sponsored act of war, and not a cybersecurity incident. Technically, it involved a cyber aspect – the devices were remotely detonated through some form of digital connectivity – but would not be classified as a cyber attack itself. This is not something that you are going to see happening frequently, and certainly not something that we’re likely to see be used as part of a cyber attack in the traditional sense. It’s also extremely unlikely that the devices were turned into bombs with just the components that would normally be part of the pager/phone/whatever. Either the devices were substituted for ones that contained an explosive charge, or the devices were built to have a self-destruct feature; they were built to be bombs, they didn’t become bombs through some technological trickery. 

So, for 99% of us, there is no real likelihood that our phones will explode without warning. Or, at least no more of a likelihood than already exists due to accidental manufacturing issues – https://www.wired.com/2017/01/why-the-samsung-galaxy-note-7-kept-exploding/ . Instead, we should maintain focus on actual cyber threats. It is far more likely that you will fall victim to a phishing or text scam, accidentally download and run malware, or do a hundred other things that do not involve explosions at all, but still cause significant damage to your personal digital systems and/or company.

Cybersecurity in Plain English: The Great Social Security Number Leak

Because of the recent news  that 2.9 billion (with a B) Social Security Numbers for US Citizens had been stolen from a background investigation firm, lots of people have been asking me to talk about what they should do.  

 

The short answer is… nothing. 

 

While this latest massive data breach is concerning to be sure, the fact that billions of Social Security Numbers were stolen is not the story. Unfortunately for all of us in the US (or who otherwise have a US Social Security Number), that data is almost definitely already known to the general public and the threat actor community. So, let’s look a little deeper at why you don’t need to be all that worried that your Social Security Number ended up in a huge data dump, again.

First, a bit about Social Security Numbers (SSNs). For those outside the USA, SSNs are numbers used to identify each US citizen in order to track a government-managed welfare program called – you guessed it – Social Security. It’s managed by the Social Security Administration and provides multiple services for citizens during their lifetime. SSNs are usually assigned shortly after a person is born, or shortly after they become a citizen if they immigrated. They are issued once and, with only a few incredibly rare exceptions, they are never changed during a person’s lifetime. So for most of us living here in the USA, we have one that was assigned to us at birth and will be with us until after we die.

While these numbers were never meant to be used as any form of identification, they ended up being used for exactly that purpose over the 80+ years the system has been in active country-wide use. SSNs are used on tax forms, medical records, employment records, financial records, and just about everything else. The issue is that there are zero security controls around these numbers. While organizations who collect them are required to use reasonable and standard practices to protect the data; the actual number is not randomized or anonymized in any way by anyone – including the agency that issues it to you. 

The numbers themselves can be decoded and even guessed if you have enough information on a person. Entire calculators and decoders – such as this site – exist, because the SSN was meant to be decoded so it could be used to route benefits properly. Because of this, SSNs should never be considered privileged or private information – they’re just too easy to figure out.
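For the curious, here is a minimal sketch (in Python) of what those decoders are working with. It covers only the pre-2011 structure, since the Social Security Administration switched to randomized assignment in June 2011, and the “never issued” values in it are the publicly documented ones.

```python
# A minimal sketch of the pre-2011 SSN structure (AAA-GG-SSSS).
# Area numbers were tied to geography before randomization began in June 2011,
# which is one reason SSNs were always easy to guess with enough information.
def decode_ssn(ssn: str) -> dict:
    digits = ssn.replace("-", "")
    if len(digits) != 9 or not digits.isdigit():
        raise ValueError("An SSN is nine digits, usually written AAA-GG-SSSS")
    area, group, serial = digits[:3], digits[3:5], digits[5:]
    # Values the Social Security Administration never issues.
    never_issued = (
        area in ("000", "666") or area.startswith("9")
        or group == "00" or serial == "0000"
    )
    return {"area": area, "group": group, "serial": serial, "possibly_valid": not never_issued}

print(decode_ssn("123-45-6789"))  # an obviously made-up example
```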

Additionally, as with any program that’s been in existence for nearly a century now, just about any organization or agency that’s held SSNs has lost control of some or all of that data over the years. So many data breaches (both physical paper-based access and digital access) have included SSNs that – at this point – you’d be in an ultra-tiny minority if your SSN wasn’t already known to anyone who wanted to find it.

So, what to do about this breach? As I said at the top, there’s really not much to do in this case, nor is there much to worry about. The breach did include much more sensitive information that – when present all together in one place – absolutely could lead to identity fraud and other nefarious activity. Your SSN being in the data dump, on the other hand, really isn’t a big deal. Keep an eye on your credit score/reports, and be very wary of emails, text messages, or phone calls that want you to buy something, pay money, or share additional information. Always remember that the FBI, Apple, Microsoft, Google, the Sheriff’s Office, etc. won’t call you first. When in doubt, ignore the link in the email and/or hang up the phone; then manually go to the website in question and log in or find a number to call to ask about the situation. Trust me, if any government organization or corporation needs you to do something, there will be a web page on their site or a phone number where they can tell you what they want you to do. None of them work exclusively by outbound email or phone calls.

Threat activity generated from data breaches is very real. Follow good online hygiene and be cautious with any phone calls or texts – but you should be doing that even when you aren’t hearing about a massive data leak these days. The fact that SSNs were in the latest breach doesn’t change anything, and should be the issue you’re least concerned about surrounding this ongoing problem. 

Cybersecurity in Plain English: What happened with CrowdStrike?

It’s probably known to just about everyone in the world right now that on Friday, July 19, 2024, millions of computers went offline unexpectedly due to software provided by CrowdStrike – a vendor specializing in cybersecurity tools. Many have asked for an explanation at a high level as to what happened and why, so let’s dive into this topic. Settle in, this is going to be a long one.

Editor’s Note for Disclosure: While the author works for an organization which offers sales and services around CrowdStrike products, they also offer such sales and services for a wide variety of other EDR/XDR solutions. As such, objectivity can be preserved.

First, some background information:

 

CrowdStrike is a well-known and well-respected vendor in the cybersecurity space. They offer a large range of products and services to help businesses with everything from anti-malware defenses to forensic investigations after a cyber attack occurs. For the most part, their software works exceptionally well and their customers are typically happy with them as a company. 

Endpoint Detection and Response (EDR) is the general term for any software that looks both for known malware files on a computer and also looks at what things are actively running on a computer to attempt to determine if they may be some form of yet-unknown malware. These operations are often referred to as “signature/heuristic scanning” and “behavioral detection” respectively. While it isn’t necessary to understand the ins and outs of how this stuff works to understand what happened on Friday, CrowdStrike has a product line (Falcon XDR) which does both signature scanning and behavioral detection. 

EDR solutions have two forms of updates that they regularly get delivered and installed. The first type is one most of us are familiar with, application updates. This is when a vendor needs to update the EDR software itself, much like how Windows receives patches and updates. In the case of an application update, it is the software itself being updated to a new version. These updates are infrequent, and only released when required to correct a software issue or deploy a new feature-set. 

 

The second form of update is policy or definition updates (vendors use different terms for these, we will use “definition updates” for this article). Unlike application updates, definition updates do not change how the software works – they only change what the EDR knows to look for. As an example, every day there are new malicious files discovered in the world. So every day, EDR vendors prepare and send new definitions to allow their EDR to recognize and block these new threats. Definition updates happen multiple times per day for most vendors as new threat forms are discovered, analyzed, and quantified. 

The other term that was heard a lot this weekend was “kernel mode.” This can be a bit complex, but it helps if you visualize your operating system (Windows, MacOS, Linux, etc.) as a physical brick-and-mortar retail store. Most of what the store does happens in the front – customers buy things, clerks stock items, cash is received, credit cards are processed for payment. There are some things, like the counting of cash and the receiving of new stock, that are done in the back office because they are sensitive enough that extra control has to be enforced on them. In a computer operating system, user space is the front of the store where the majority of things get done. Kernel space is the back office, where only restricted and sensitive operations occur. By their nature, EDR solutions run some processes in the kernel space; since they require the ability to view, analyze, and control other software. While this allows an EDR to do what it does, it also means that errors or issues that would not create major problems if they were running in user space can create truly massive problems as they are actually running in kernel space. 

OK, with all that taken care of… what happened on Friday?

Early in the morning (UTC), CrowdStrike pushed a definition update to all devices running their software on Windows operating systems. This is a process that happens many times a day, every day, and would not normally produce any kind of problems. After all, the definition update isn’t changing how the software works or anything like that. This update, however, had a flaw which set the stage for an absolute disaster.

Normally, any changes to software in an enterprise environment (like airlines, banks, etc.) would go through a process called a “staged rollout” – the update is tested in a computer lab, then rolled out to low-impact systems that won’t disrupt business if something goes wrong. Then, and only then, it goes out to all the other systems once the company is sure that it won’t cause trouble. For CrowdStrike application updates, this process happens like any other software update, and they are put through a staged rollout process. However, definition updates are not application updates, and because of both the frequency of definition updates and the nature of their data (supplying new detection methods), they are not subject to staged rollout by the customer. In fact, customers rarely have even the ability to subject definition updates to staged rollouts themselves – the feature just doesn’t exist in nearly all EDR platforms. There are several EDR vendors who do staged rollouts to their customers, but once a definition update is pushed, it is installed immediately for every customer in that phase of the rollout. CrowdStrike pushed this update out to over 8 million systems in a matter of minutes.
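For readers who have never seen one, here is a generic sketch (in Python) of what a staged, ring-based rollout looks like conceptually. This is not CrowdStrike’s actual pipeline; it simply illustrates the idea that each group of machines only gets the update after the previous group survived it.

```python
# A generic sketch of a ring-based (staged) rollout - illustrative only,
# not any vendor's actual process. Each ring gets the update only if the
# previous ring reported no failures.
from typing import Callable, List

def staged_rollout(rings: List[List[str]], push: Callable[[str], bool]) -> None:
    for ring in rings:
        results = {machine: push(machine) for machine in ring}
        failures = [m for m, ok in results.items() if not ok]
        if failures:
            print(f"Failures on {failures}; halting rollout before the next ring.")
            return
    print("Update rolled out to all rings.")

# Example rings: a lab, a small canary group, then everything else.
rings = [["lab-01", "lab-02"], ["canary-01"], ["prod-001", "prod-002", "prod-003"]]
staged_rollout(rings, push=lambda machine: True)  # pretend every push succeeds
```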

This particular definition update had a massive issue. The update itself was improperly coded, which made it attempt to read an area of memory that couldn’t exist. In user space, this problem would just cause the application to crash, but not have any other impact on the system. In kernel space, however, an error of this type can cause the system itself to crash, since in kernel space the “application” is – essentially – the operating system itself. This meant that every machine which attempted to apply the definition update (over 8.5 million at last count) crashed immediately.

To recover from this issue, a machine would need to be booted into Safe Mode – a special function of Windows operating systems that boots up the machine with the absolute bare minimum of stuff running. No 3rd-party applications, no non-essential Windows applications and features, etc. Once booted into Safe Mode, the offending update file could be deleted and the machine rebooted to return to normal. 
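Purely to illustrate what that manual fix was doing, here is a short sketch. In reality this step was performed by hand (or with recovery scripts) from Safe Mode, not with Python, and the file pattern shown is the one publicly reported in CrowdStrike’s remediation guidance, so treat it as illustrative rather than authoritative.

```python
# Illustration only: the manual fix amounted to deleting the bad channel file.
# The path and file pattern below are the publicly reported ones; in practice
# this was done by hand from Safe Mode, not with a Python script.
from pathlib import Path

driver_dir = Path(r"C:\Windows\System32\drivers\CrowdStrike")
for bad_file in driver_dir.glob("C-00000291*.sys"):
    print(f"Removing offending channel file: {bad_file}")
    bad_file.unlink()
```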

So, why did it take days to make this happen if you just had to reboot into Safe Mode and delete a file? Well, there are two reasons this was a problem:

First, Safe Mode booting has to be done manually. On every single impacted device. When we may be talking about tens of thousands of devices in some companies, just the manpower needed to manually perform this process on every single machine is staggering. 

Second, if the machine is using BitLocker (Microsoft’s disk encryption technology) – which they all absolutely should be using, and the majority were using – then a series of steps must be performed to unlock the disk that holds the Windows operating system before you can boot into Safe Mode and fix the problem. This series of steps is also very manual and time consuming, though in the days following the initial incident there were some methods discovered that could make it faster. Again, when applied against tens of thousands of devices, this will take a massive amount of people and time. 

Combined, the requirement to manually boot into Safe Mode after performing the steps to unlock the drive led to company IT teams spending 72 hours and longer un-doing this bad definition update across their organizations. All the while, critical systems which are required to run the business and service customers were offline entirely. That led to the situation we saw this weekend, with airlines, stores, banks, and lots of other businesses being unable to do anything but move through the recovery process as quickly as they could – but it was still taking a long time to accomplish. Of course this led to cancelled flights, no access to government and business services, slow-downs or worse in healthcare organizations, etc. These operations slowly started coming back online over the weekend, with more still being fixed as I write this on Monday.

Now that we’ve got a good handle on what went wrong, let’s answer some other common questions:

“Was this a cyber attack?” No. This was a cyber incident, but does not show any evidence that it was an attack. Incidents are anything that causes an impact to a person or business, and this definitely qualifies. Attacks are purposeful, malicious actions against a person or business, and this doesn’t qualify as that. While the potential that this was threat activity can not yet be entirely ruled out, there are no indications that any threat actor was part of this situation. No group claimed responsibility, no ransom was demanded, no data was stolen. The incident also was not targeted, systems that were impacted were just online when the bad update was available, and therefore it was pretty random. This view may change in future as more details become available, but as of today this does not appear to be an attack. 

“Why did CrowdStrike push out an update on a Friday, when there would be less people available to fix it?” The short answer is that definition updates are pushed several times a day, every day. This wasn’t something that was purposely pushed out on Friday specifically, it was just bad luck that the first update for Friday AM had the error in it.

“How did CrowdStrike not know this would happen? Didn’t they test the update?” We don’t know just yet. While we now know what happened, we do not yet have all the details on how it happened. It would be expected that such information will be disclosed or otherwise come to light in the coming weeks. 

“Why was only Windows impacted?” Definition updates for Windows, MacOS, and Linux are created, managed, and delivered through different channels. That is something that is common for most EDR vendors. This update was only for Windows, so only Windows systems were impacted.

“Was this a Microsoft issue?” Yes and no, but in every important way no. It was not actually Microsoft’s error, but since it only impacted Windows systems it was a Microsoft problem. Microsoft was not responsible for causing the problem, or responsible for fixing it, though they did offer whatever support and tools they could to help, and continue to do so. 

“Couldn’t companies test the update before it rolled out?” No, not in this case. The ability to stage the rollout of definition updates is not generally available in EDR solutions (CrowdStrike or other vendors) – though after this weekend, that might be changing. There are very real reasons why such features aren’t available, but with the issues we just went through, it might be time to change that policy. 

“How can we stop this from ever happening again?” The good news is that many EDR vendors stage the rollout of definition updates across their customers. So while a customer cannot stage the rollouts themselves, at least only a limited number of customers will be impacted by a bad update. No doubt CrowdStrike will be implementing this policy in the very near future. The nature and urgency of definition updates makes traditional staging methods unusable as organizations cannot delay updates for weeks as they do with Windows updates and other application updates. That being said, some method of automated staging of definition updates to specific groups of machines – while truly not optimal – might be necessary in future.

To sum up, CrowdStrike put out a definition update with an error in it, and because this definition update was loaded into a kernel-mode process, it crashed Windows. Over 8.5 million such Windows machines downloaded and applied the update before the error was discovered, causing thousands of businesses to be unable to operate until the situation was corrected. That correction required manual and time-consuming operations to be performed machine by machine, so the process took (and continues to take) a significant amount of time. No data theft or destruction occurred (beyond what would normally happen during a Windows crash), no ransom demanded, no responsibility beyond CrowdStrike claimed. As such, it is highly unlikely that this was any form of cyber attack; but it was definitely a cyber incident since a huge chunk of the business world went offline. 

Cybersecurity in Plain English: A Special Snowflake Disaster

Editor’s Note: This is an emergent story, and as such there may be more information available after the date of publication. 

Many readers have been asking: “What happened with Snowflake, and why is it making the news?” Let’s dive into this situation, as it is a little more complex than many other large-scale attacks we’ve seen recently.

Snowflake is a data management and analytics service provider. What that essentially means is that when companies need to store, manage, and perform intelligence operations on massive amounts of data, Snowflake is one of the larger vendors that has services to allow that to happen. According to SoCRadar [[ https://socradar.io/overview-of-the-snowflake-breach/ ]], in late May of 2024 Snowflake acknowledged that unusual activity had been observed across their platform since mid-April. While the activity indicated that something wasn’t right, the investigation didn’t find any threat activity being run against Snowflake’s systems directly. This was a bit of a confusing period, as usually you would see evidence that the vendor’s own systems were being attacked when you had strange activity going on across the vendor’s networks.

Around the time of that disclosure, Santander Bank and Ticketmaster both reported that their data had been stolen, and was being held ransom by a threat actor. These are two enormous companies, and both reporting data breach activity within days of each other is an event that doesn’t happen often. Sure enough, when both companies investigated independently, they both came to the same conclusion – their data in Snowflake was what had been stolen. Many additional disclosures by both victim companies and the threat actors themselves – a group identified as UNC5537 by Mandiant [[ https://cloud.google.com/blog/topics/threat-intelligence/unc5537-snowflake-data-theft-extortion ]] – occurred over the following weeks. Most recently, AT&T disclosed that they had suffered a massive breach of their data, with over 7 million customers impacted [[ https://about.att.com/story/2024/addressing-data-set-released-on-dark-web.html ]].

So, was Snowflake compromised? Not exactly. What happened here was that Snowflake did not require that customers use Multi-Factor Authentication (MFA) for users logging into the Snowflake platform. This allowed attackers who were able to successfully get malware onto user desktops/devices to grab credentials, and then use those credentials to access and steal that customer’s data in Snowflake. This was primarily done by tricking a user into installing/running an “infostealer” malware, which allowed the attacker to see keystrokes, grab saved credentials, snoop on connections, etc. All the attacker needed to do was infect one machine that was being used by an authorized Snowflake user, and they could then get access to all the data that customer stored in Snowflake. Techniques like the use of password vaults (so there would be no keystrokes to spy on) and the use of MFA (which would require the user to acknowledge a push alert or get a code on a different device) would be good defenses against this kind of attack, but Snowflake didn’t require these techniques to be in use for their customers.

Snowflake did not – at least technically – do anything wrong. They allow customers to use MFA and other login/credential security with their service, they just didn’t mandate it. They also did not have a quick way to turn on the requirement for MFA throughout a customer organization if that customer hadn’t started out making it mandatory for all Snowflake accounts they created. This is a point of contention with the cybersecurity community, but even though it is a violation of best practices it is not something that Snowflake purposely did incorrectly. Because of this, the attacks being seen are not the direct fault of Snowflake, but rather a result of Snowflake not forcing customers to use available security measures. Keep in mind that Snowflake has been around for some time now. When they first started, MFA was not an industry standard and customers starting to work with Snowflake back then were unlikely to have enabled it. 

Snowflake themselves have taken steps to address the issue. Most notably, they implemented a setting in their customer administration panel that lets an organization force the use of MFA for everyone in that company. If any users were not set up for MFA, they would need to configure it the next time they logged in. This is a good step in the right direction, but Snowflake did make a few significant errors in the way they handled the situation overall:

 – Snowflake did not enforce cybersecurity best practices by default, even for new customers. While they have been around long enough that their earlier customers may have started using the service before MFA was a standard security control, not getting those legacy customers to enable MFA was definitely a mistake. 

 – They also immediately tried to shift blame to customers who had suffered breaches. The customers in question were responsible for not implementing MFA and/or other security controls to safeguard their data; but attempting to blame the victim rarely works out in a vendor’s favor. In this case, the backlash from the security community was immediate and vocal. Especially when it came to light that there was no easy way to enable MFA across an entire customer, they lost the high ground quickly.

 – That brings us to the next issue Snowflake faced: they didn’t make it easy to enable MFA. Most vendors these days provide a quick way to enforce MFA across all users at a customer, and many now make it opt-out – meaning users will have to use MFA unless the customer organization opts out of that feature. MFA was opt-in for Snowflake customers, even those signing up more recently when the use of MFA was considered a best practice by the cybersecurity community at large. With no quick switch or toggle to change that, many customers found themselves scrambling to identify each user of Snowflake within their organization and turn MFA on for each, one by one.

Snowflake, in the end, was not responsible for the breaches multiple customers fell victim to. While that is true, their handling of the situation, attempt to blame the victims loudly and immediately, and lack of a critical feature set (to enforce MFA customer-wide) have created a situation where they are seen as at fault, even when they’re not. A great case study for other service providers who may want to plan for potential negative press events before they end up having to deal with them.

If you are a Snowflake customer, you should immediately locate and then enable the switch to enforce MFA on all user accounts. Your users can utilize either Microsoft or Google Authenticator apps, or whatever Single Sign-On/IAM systems your organization uses. 

Is Ransomware Getting Worse, or Does it Just Feel That Way?

A reader contributed a great question recently: “So many more ransomware attacks are getting talked about in the news. Is ransomware growing that quickly, or does it just seem worse than it is?” The answer is “both,” but let’s break things down.

 

According to Security Magazine, ransomware has indeed grown exponentially in the last year, with an 81% increase in attack activity. That’s certainly not good, but may not be telling the whole story. While there’s no doubt that threat actors have increased attacks via Ransomware-as-a-Service (RaaS) and more sophisticated automation; some of what we’re seeing is an increase in the number of reported attacks compared to previous years.  

 

Better automation allows threat actors to perform more attack attempts in the same amount of time than they’d be able to perform manually. Scripting and automation have increased the effectiveness of legitimate organizations in many different ways. Processes like allowing a user access to an application, which would have previously taken days or a week, can now be done in seconds – safely. Stock trades that would take hours in years past are now done in seconds – also safely, usually. As legitimate businesses have embraced automation to make their organizations better, threat actors have done the same. Now, a new exploit that would allow for a new attack, which would normally take weeks or months to see significant spread throughout the world, can become a major world-wide threat in hours. This, of course, means that more attack attempts lead to more successful attacks and higher numbers of organizations compromised year over year.

 

RaaS allows established threat actor cartels to re-package and sell attack protocols they no longer use themselves to lower-tier threat actors. This extends the life of the product (the ransomware attack), and allows the cartel to continue to make money from it for much longer periods of time. By having more threat actors use existing tools against still-unpatched systems, more organizations end up compromised.

 

Both of these factors have led to a marked increase in the total number of ransomware victim organizations over time, and that can’t be dismissed as a statistical blip or anything like that. We’re facing more attacks, more often, across more industries.

 

However, it should be noted that a huge portion of the compromised organizations would not – until recently – have reported the compromise at all. Businesses have many reasons to attempt to hide the fact that they fell victim to a ransomware attack. Loss of customer trust, violation of clauses in contracts, endangering future business – all reasons companies may choose to hide that an attack took place. This isn’t new behavior, as companies would often try to gloss over or bury anything that could impact their bottom line as you would expect – we’re just now talking about impacts caused by digital disasters instead of bad accounting practices, corporate espionage, and other more traditional events. 

 

Generally, if such hidden events and setbacks would cause overall market impact or jeopardize citizens of a country or locality, government agencies create regulation to make it mandatory to report them. This is not something that’s done frequently, and only occurs when the burying of such events would create major fallout in an entire market or for a large group of citizens. Typically, new regulations only occur after such a major impact occurs. Over the last several years, the impact of cybersecurity incidents has indeed begun to cause fallout in markets, and has caused impact to massive numbers of citizens through identity theft and other problems. Because of this, governments have begun to pass legislation that makes it mandatory to quickly disclose any cybersecurity incident which might have a “material impact” to markets and/or consumers. You can read more about one such regulation in a previous post here.

 

In the USA, both the Federal Government (specifically the Securities and Exchange Commission) and several State Governments (most notably New York and California) have already passed regulations which compel organizations to report incidents via public filings. The SEC, for example, requires the filing of an amendment to a regular reporting form (8-K) within four days of any incident that has material impact, and the incident must also be part of the annual 10-K filing every public company and certain other companies must file. Since these reports are public, anyone and everyone can view them. Other US states either have regulations that are being/have been amended to cover cybersecurity incidents, or are creating new legislation to make disclosure mandatory for any companies that do business within that state or territory. The European Union and other nations/coalitions are also either strengthening reporting regulations or implementing new regulations specifically around cybersecurity incident reporting.

 

The practical upshot of this is that significantly more incidents are becoming public knowledge that would not have been publicly reported previously. Incidents that would have been “swept under the rug” in previous years are now becoming public knowledge quickly, leading to a marked uptick in the number of known attack victim organizations. While this number is certainly not enough to account for the total increase in attacks, it has most definitely increased the number of reported attacks over the last few years. The combination has led to massive increases in year-over-year ransomware reports, leading to dramatic news reporting on the problem. As the issue becomes more sensational, everyone hears about it more often and with more volume.

 

So, while it is true that the total number of ransomware attacks has increased sharply due to a combination of the rise of Ransomware-as-a-Service and the use of automation in threat actor activities, it is important to also realize some of the sensational numbers are attributable to companies being required to talk about the problem more than in the past. In total, the issue of ransomware and other cybercrime is taking a much bigger share of the public interest – which is a very good thing – but we must look at all of the factors that lead to such numbers to more fully understand what’s going on.

Cybersecurity in Plain English: How Did They Use the Real Email Domain?

Once in a while, I get the chance to pull back the curtain on how threat activity works in this column, and a recent question “I got a fake email from Microsoft, but it was the REAL microsoft.com domain – how did they do that?” gives me the opportunity to do so now. Let’s take a look at some of the tricks threat actors use to make you think that spam/threat/phishing email is actually coming from a domain that looks legitimate.

 

Technique 1: Basic Spoofing

Threat actors are able to manipulate emails in many ways, but the most common is to just force your email application to display something other than the real email address they’re sending from. There are several ways to do this, but the most common involves the manipulation of headers. Headers are metadata (data about data) that email systems use to figure out where an email is coming from, where it should go to, who sent it, etc. One of the most common techniques involves using different headers for the display name (which shows up before you hover over the From: address in the message) and the actual email address the mail is coming from (which you can see by hovering over the From: field). This would result in a situation where you get an email from “Microsoft Support ()” and is somewhat easy to spot if you hover over the sender and see what email address it’s really from.

If you’re wondering why email systems don’t reject messages like that, it’s because this situation is a valid feature-set of how email works. Simple Mail Transfer Protocol (SMTP) is the method used by the whole world to send emails, and part of that protocol allows for a display name in addition to an email address. This is how your company’s emails can have the name of the person that sent it to you, or a company can give an email account a friendly name – so there’s a trade-off here. While the feature is legitimate, it can be used for malicious purposes, and you need to look at the actual email address of the sender and not just the display name. 
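If you want to see the split between the display name and the real address for yourself, Python’s standard library will happily show it. The sender address below is a made-up example.

```python
# Splitting a From: header into its display name and actual address.
# The address used here is a made-up example.
from email.utils import parseaddr

from_header = '"Microsoft Support" <randomuser@some-unrelated-domain.example>'
display_name, real_address = parseaddr(from_header)

print(display_name)   # "Microsoft Support" - what many mail clients show by default
print(real_address)   # randomuser@some-unrelated-domain.example - who actually sent it
```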

 

Technique 2: Fake Domains

“OK,” you say, “But I definitely have gotten fake emails that used real email addresses for a company.” While you’re not losing your mind, the emails did not come from the company in question. Threat actors use multiple tricks to ma​ke you be​lieve that the email dom​​ain that message came from is real. For example, in the last sentence, the words “make,” “believe,” and “domain,” aren’t actually those words at all. They have what is known as a “zero-width space” embedded into them. While this space isn’t visible, it’s still there – and my spell-checker flagged each of the words as mis-spelled because they indeed are. Techniques like this allow a threat actor to send an email from “support@m​icrosoft.com” because they registered that email domain with an invisible space between the letters (between the “m” and the “i” in this case). To the naked eye, the domain looks very much real, but from the perspective of an email system, it is not actually the microsoft.com domain, and therefore is not something that would get extra attention from most security tools. 

This same theory can be used in another way. For example, have a look at AMAΖON.COM – notice anything odd there besides it being in all caps? Well, the “Z” in that domain name isn’t a “Z” at all – it’s the capitalized form of the Greek letter Zeta. Utilizing foreign characters and other Unicode symbols is a common way to trick a user into believing that an email is coming from a domain that they know, when in fact it is coming from a domain specifically set up to mislead the user. 
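Here is a small sketch showing how both tricks, invisible zero-width characters and non-ASCII lookalike letters, can be spotted programmatically. It only checks a handful of common invisible characters, so consider it an illustration rather than a complete homoglyph detector.

```python
# A minimal sketch that flags the two tricks described above: zero-width
# characters hidden in a name, and non-ASCII lookalike letters.
import unicodedata

ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}  # a few common invisible characters

def suspicious_characters(domain: str) -> list:
    findings = []
    for ch in domain:
        if ch in ZERO_WIDTH:
            findings.append("zero-width character hidden in the name")
        elif ord(ch) > 127:
            findings.append(f"non-ASCII lookalike: {unicodedata.name(ch, 'UNKNOWN')}")
    return findings

print(suspicious_characters("m\u200bicrosoft.com"))  # zero-width space after the 'm'
print(suspicious_characters("ama\u0396on.com"))      # Greek capital Zeta standing in for 'Z'
```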

There are two ways to defend against this kind of malicious email activity. The first – and most important – is to follow best practices for cyber hygiene. Don’t click on links or open attachments in email, and never assume that an email is from who you think it is from without proof. Did you get an email from a friend with an attachment that you weren’t expecting? Call or text them to check that they sent it. Get an email from your employer with a link in it? Hover over the link to confirm where it goes – or better yet, reach out to your IT team and make sure you are supposed to click on that link. Most companies have begun to send out pre-event emails such as “You will be receiving an invitation link to register for our upcoming event later today. The email will be from our event partner – myevents.com.” in order to make sure users know what is real and what is suspicious if not outright fake.

The second defense is one you can’t control directly, but it is happening all the time. Your email provider (your company, Google for GMail, Microsoft for Outlook.com, etc.) is constantly updating lists of known fake, fraudulent, and/or malicious email domains. Once a fake domain goes on the list, emails that come from it get blocked. While this is an effective defense, it can’t work alone, as there will always be some time between when a threat actor starts using a new fake domain and when your email provider discovers and blocks it.


In short, that email from a legitimate-looking email address may still be fake and looking to trick you. Hovering over the sender’s name to see the full, real address and following good cyber hygiene can save you from opening or clicking something that is out to do you, your computer, and/or your company harm.

Cybersecurity in Plain English: Should I Encrypt My Machine?

A common question I get from folks is some variant of “Should I be encrypting my laptop/desktop/phone?” While the idea of encrypting data might sound scary or difficult, the reality is the total opposite of both, so the answer is a resounding “YES!” That being said, many people have no idea how to actually do this, so let’s have a look at the most common Operating Systems (OSs) and how to get the job done.

First, let’s talk about what device and/or disk encryption actually does. Encryption renders the data on your device unusable unless someone has the decryption key – which these days is typically either a passcode/password or some kind of biometric ID like a fingerprint. So, while the device is locked or powered down, if it gets lost or stolen the data cannot be accessed by whoever now has possession of it. Most modern (less than 6 to 8 years old) devices can encrypt invisibly without any major performance impact, so there really isn’t a downside to enabling it beyond having to unlock your device to use it – which you should be doing anyway… hint, hint… 

Now, the downside – i.e. what encryption can’t do. First off, if you are on an older device, there may be a performance hit when you use encryption, or the options we talk about below may not be available. There’s a ton of math involved in encryption and decryption in real-time, and older devices might just not be up to the task at hand. This really only applies to much older devices (roughly 6-8 years old or more), and at that point it may be time to start saving up to upgrade your device when you can afford to. Secondly, once the device is unlocked, the data is visible and accessible. What that means is that you still need to maintain good cyber and online hygiene when you’re using your devices. If you allow someone access, or launch malware, your data will be visible to them while the device is unlocked or while that malware is running. So encryption isn’t a magic wand to defend your devices, but it is a very powerful tool to help keep data secure if you lose the device or have it stolen. 

So, how do you enable encryption on your devices? Well, for many devices it’s already on, believe it or not. Your company most likely forces the use of device encryption on your corporate phones and laptops, for example. But let’s have a look at the more common devices you might use in your personal life, and how to get them encrypted.

Windows desktops and laptops:

From Windows 10 onward (and on any hardware less than about 5 years old), Microsoft supports a technology called BitLocker to encrypt a device. BitLocker is a native tool in Windows 10 and 11 (and was available for some other versions of Windows) that will encrypt entire volumes – a.k.a. disk drives – including the system drive that Windows itself runs on. There are a couple of ways it can do this encryption, but for most desktops and laptops you want to use the default method, which relies on a Trusted Platform Module (TPM) – basically a hardware chip in the machine that handles security controls with a unique identifier. How the TPM works isn’t really something you need to know; just know that there’s a chip on the board that is unique to your machine, and that allows technologies like BitLocker to encrypt your data uniquely to your machine. Turning on BitLocker is easy: just follow the instructions for your version of Windows 10 or 11 here: https://support.microsoft.com/en-us/windows/turn-on-device-encryption-0c453637-bc88-5f74-5105-741561aae838 – the basic idea is to go into Settings, then security, then device encryption, but it’ll look slightly different depending on which version of Windows you’re using. One important note: if you’re using Windows 10 or 11 Home edition, you may need to use the device encryption option described on that page rather than the full BitLocker tools. It has the same overall outcome, but uses a slightly different method to get the job done. 
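
If you’d rather verify from a command line than click through Settings, here is a minimal sketch (assuming you run it in an administrator session on Windows) that simply calls Microsoft’s built-in manage-bde tool and prints its status report:

import subprocess

# Ask Windows' built-in manage-bde tool for BitLocker/device encryption status.
# Run from an elevated (administrator) session.
result = subprocess.run(["manage-bde", "-status"], capture_output=True, text=True)
print(result.stdout)   # look for "Protection On" next to your system drive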

Mac desktops and laptops:

Here’s the good news: unless you changed the defaults during your first install/setup, you’re already encrypted. MacOS has, for the last several major versions, automatically enabled FileVault (Apple’s disk encryption system) when you set up your Mac unless you tell it not to do so. If you have an older MacOS version, or you turned it off during setup, you can still turn it on now. Much like BitLocker, FileVault can take advantage of dedicated security hardware where it exists (the Secure Enclave in modern Macs plays much the same role as a TPM), and it works even without it, so unless you are on extremely old hardware (over 8-10 years old), you won’t have to worry about that. Also like Microsoft, Apple has a knowledge base article on how to turn it on manually if you need to do so: https://support.apple.com/guide/mac-help/protect-data-on-your-mac-with-filevault-mh11785/mac
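
The Mac equivalent of checking from the command line is even simpler; here is a minimal sketch that asks Apple’s built-in fdesetup tool whether FileVault is on:

import subprocess

# fdesetup ships with MacOS and reports FileVault status.
result = subprocess.run(["fdesetup", "status"], capture_output=True, text=True)
print(result.stdout)   # prints "FileVault is On." or "FileVault is Off."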

Android mobile devices (phones/tablets):

Android includes the ability to encrypt data on your devices as long as you are using a passcode to unlock the phone. You can turn encryption on even if you’re not using a passcode yet, but the setup will make you set a passcode as part of the process. While not every Android device supports encryption, the vast majority made in the last five years or so do, and it is fairly easy to set it up. You can find information on how to set this up for your specific version of Android from Google, such as this knowledge base article: https://support.google.com/pixelphone/answer/2844831?hl=en

Apple mobile devices (iPhone/iPad):

As long as you have a device that’s still supported by Apple themselves, your data is encrypted by default on iPhone and iPad as soon as you set up a passcode to unlock the phone. Since that’s definitely something you REALLY should be doing anyway, if you are, you don’t have to do anything else to make sure the data is encrypted. Note that any form of passcode will work, so if you set up TouchID or FaceID on your devices, that counts too, and your data is already encrypted. If you have not yet set up a passcode, TouchID, or FaceID, then there are instructions at this knowledge base article for how to do it: https://support.apple.com/guide/iphone/set-a-passcode-iph14a867ae/ios and similar articles exist for iPad and other Apple mobile devices. 

Some closing notes on device encryption: First and foremost, remember that when the device is unlocked, the data can be accessed. It’s therefore important to set a timeout for when the device will lock itself if not in use. This is usually on automatically, but if you turned that feature off on your laptop or phone, you should turn it back on. Secondly, a strong password/passcode/etc. is really necessary to defend the device. If a thief can guess the passcode easily, then they can unlock the device and get access to the data easily as well. Don’t use a simple 4-digit PIN to protect the thing that holds all the most sensitive data about you. As with any other password advice, I recommend the use of a passphrase to make it easy for you to remember, but hard for anyone else to guess. “This is my device password!” is an example of a passphrase; just don’t use that one specifically – go make up your own. If your device supports biometric ID (like a fingerprint scanner), then that’s a great way to limit how many times you need to manually type in a complex password and can make your life easier.

Device encryption (and/or drive encryption) makes it so that if your device is lost or stolen, the data on that device is unusable to whoever finds/steals it. Setting up encryption on most devices is really easy, and the majority of devices won’t even suffer a performance hit to use it. In many cases, you’re already using it and don’t even realize it’s on, though it never hurts to check and be sure about that. So, should you use encryption on your personal devices? Yes, you absolutely should.


Cybersecurity in Plain English: My Employer is Spying On My Web Browsing!

A recent Reddit thread had a great situation for us to talk about here. The short version is that a company notified all employees that web traffic would be monitored – including for secure sites – and recommended using mobile devices without using the company WiFi to do any non-business web browsing. This, as you might guess, caused a bit of an uproar with multiple posters calling it illegal (it’s usually not), a violation of privacy (it is), and because it’s Reddit, about 500 other things of various levels of veracity. Let’s talk about the technology in question and how it works.

For about 95% of the Internet these days, the data flowing between you and websites is encrypted via a technology known officially as Transport Layer Security (TLS), but almost universally referred to by the name of the technology TLS replaced some time ago, Secure Sockets Layer (SSL). No matter what you call it, TLS is the tech that is currently used, and what’s responsible for the browser communicating over HTTPS:// instead of HTTP://. Several years ago, non-encrypted web traffic was deprecated – a.k.a. phased out – because Google Chrome, Microsoft Edge, Firefox, Opera, and just about every other browser began to pop up a warning whenever a user went to a non-secure web page. As website owners (myself included) did not want to deal with large numbers of help requests, secured (HTTPS://) websites became the norm, and you’d be hard-pressed to find a non-encrypted site these days. 

So, if the data flowing between your browser and the website is encrypted, how can a company see it? Well, the answer is that they normally can’t, but organizations can set up technology that allows them to decrypt the data flowing between you and the site if you are browsing that site on a laptop, desktop, or mobile device that the organization manages and controls. To explain that, we’ll have to briefly talk about a method of threat activity known as a Man in the Middle (MitM) attack:

MitM attacks work by having a threat actor intercept your web traffic, and then relay it to the real website after they’ve seen it and possibly altered it. As you might guess, this could be devastating for financial institutions, healthcare companies, or anyone else that handles sensitive data and information. Without SSL encryption, MitM attacks can’t really be stopped. You think you’re logging into a site, but in reality you’re talking to the threat actor’s web server, and THEY are talking to the real site – so they can see and modify data you send, receive, or both. SSL changes things. The way SSL/TLS works is with a series of security certificates that are used along with some pretty complex math to create encryption keys that both your browser and the website agree to use to encrypt data. That’s a massive oversimplification, but a valid high-level explanation of what’s going on. Your browser and the website do this automatically, and nearly instantly, so you don’t actually see any of it happening unless something goes wrong and you get an error message. If a threat actor tries to put themselves in the middle, then both your browser and the website will immediately see that the chain of security is broken by something/someone, and refuse to continue the data transfer. By moving to nearly universal use of SSL, Man in the Middle attacks have become far less common. It’s still technically possible to perform an MitM attack, but it is far more difficult than before, and certainly more difficult than a lot of other attack methods a threat actor could use.
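
If you’re curious what that chain-of-trust check looks like in practice, here is a minimal sketch using Python’s standard ssl library. It performs the same certificate verification a browser does, and the handshake simply fails with a certificate error if anything in the middle breaks the chain (the site used is just a neutral example):

import socket, ssl

# Build a context that trusts the same root certificate authorities the
# operating system does, then perform a verified TLS handshake.
context = ssl.create_default_context()

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        print("Handshake succeeded, negotiated", tls.version())
# If someone interfered with the connection without presenting a trusted
# certificate chain, wrap_socket() would raise ssl.SSLCertVerificationError
# instead of completing.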

Then how can your company perform what is effectively a MitM process on your web traffic without being blocked? Simple: they tell your computer that it’s OK for them to do it. The firewalls and other security controls your company uses could simply intercept and decrypt the SSL traffic before it reaches your browser; that part is fairly easy to do, but your browser would spot the broken chain of trust, and a lot of users would suddenly not be able to get to a whole lot of websites successfully. So, they use a loophole that is purposely carved out of the SSL/TLS standards. Each device (desktop/laptop/mobile/etc.) that the company manages is told that it should trust a specific security certificate as if it were part of the certificate chain that would normally be used for SSL. This allows the company to re-encrypt the data flow with that certificate, and have your browser recognize it as still secure. The practice isn’t breaking any of the rules, and in fact is part of how the whole technology stack is designed to work expressly for this kind of purpose, so your browser works as normal even though all the traffic is being viewed un-encrypted by the company. I want to be clear here – it’s not a person looking at all this traffic. Outside of extremely small companies that would be impossible. Automated systems decrypt the traffic, scan it for any malware or threat activity, then re-encrypt it with the company’s special certificate and ferry it on to your browser. A similar process happens in the other direction, but that outbound data is re-encrypted with the website’s certificate instead of the company’s certificate. Imagine that the systems are basically using their own browser to communicate with the websites, and ferrying things back and forth to your browser. That’s another oversimplification just to outline what is going on. Humans only get involved if the automated systems catch something that requires action. That being said, humans *can* review all that data if they wanted or needed to, since it is all logged – it’s just not practical to do that unless there’s an issue that needs to be investigated.
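
One practical side effect you can check yourself: on a company-managed machine doing this kind of inspection, the certificate your browser receives is typically issued by the company’s own certificate authority rather than a public one. Here is a minimal sketch (building on the connection example above, with the same neutral example site) that prints who actually issued the certificate you’re seeing:

import socket, ssl

context = ssl.create_default_context()
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        # getpeercert() returns the certificate details; the issuer field names
        # the certificate authority that signed it.
        issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
        print(issuer.get("organizationName"), "/", issuer.get("commonName"))
# On an inspected connection, this often names your company's own CA rather
# than a public certificate authority.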

That brings us to another question. Why tell everyone it’s happening if it can be done invisibly for any device the company controls and manages? Well, remember way up above when we talked about if it was legal, or a violation of privacy, or a host of other things? Most companies will bypass the decryption for sites they know contain financial information, healthcare info, and other stuff that they really don’t want to examine at all. That being said, it’s not possible to ensure that every bank, every hospital and doctor’s and dentist’s office, every single site that might have sensitive data on it is on the list to bypass the filter. Because of that, many companies will make it known via corporate communications and in employee manuals that all traffic can be visible to the IT and cybersecurity teams. It’s a way to cover themselves if they accidentally decrypt sensitive information that could be a privacy violation or otherwise is something they shouldn’t, or just don’t want to, see. 

Companies are allowed to do this on their own networks, and on devices that they own, control, or otherwise manage. Laws vary by country and locality, and I am not a lawyer, but at least here in the USA they can do this whenever they want as long as employees consent to it happening. The Washington Post did a whole write-up on the subject here: https://www.washingtonpost.com/technology/2021/08/20/work-from-home-computer-monitoring/ (note, this may be paywalled for some site visitors). As long as the company gets that consent (say, for example, having you sign that you have read and agree to all of the stuff in that Employee Handbook), they can monitor traffic that flows across their own networks and devices. Some companies, of course, just want to give employees a heads-up that it’s happening, but most are covering their bases to make sure they’re following the rules for whatever country/locality they and you are in. 

What about using a VPN? That could work, if you can get it to run. Many VPN services would bypass the filtering of SSL Decryption, because they encrypt the traffic end-to-end with methods other than SSL/TLS. In short, the browser and every other app are now communicating in an encrypted channel that the firewall and other controls can’t decrypt. Not all VPNs are created equal though, so it isn’t a sure thing. Also keep in mind that most employers who do SSL Decryption also know about VPNs, and will actively block them from working on their networks.

One last note: Don’t confuse security and privacy. Even without SSL Decryption, your employer can absolutely see the web address and IP address of every site you visit. This is because of two factors. First, most Domain Name System (DNS) lookups are not encrypted. That’s changing over time, but right now it is highly likely that your browser looks up where a website is via a non-encrypted system. Second, even if you’re using secure DNS (which exists, but isn’t in widespread use), the company’s network still has to connect to the website’s network – which means at the very least the company will know the IP addresses of the sites you visit. It isn’t difficult to reverse that and figure out what website lives at a given IP address, so your company can still see where you went – even if they don’t know what you did while you were there.
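
Here is a minimal sketch of both lookups described above: the forward DNS lookup your machine makes (which travels unencrypted on most networks today), and the reverse lookup that can often map a destination IP address back to a name (again using a neutral example site):

import socket

# Forward lookup: the name-to-address query your machine sends out.
ip = socket.gethostbyname("www.example.com")
print("Forward lookup:", ip)

# Reverse lookup: mapping that address back to a host name, where a record exists.
try:
    host, _, _ = socket.gethostbyaddr(ip)
    print("Reverse lookup:", host)
except socket.herror:
    print("No reverse record published for", ip)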

To sum up: Can your employer monitor your web surfing even if you’re on a secure website? Yes – provided they have set up the required technology, own and/or manage the device you’re using, and (in most cases) have you agree to it in the employee manual or via other consent methods. Is that legal? Depends on where you live and where the company is located, but for a lot of us the answer is “yes.” Doesn’t it violate my privacy? Yes, though most companies will at least try to avoid looking at traffic to sites that are known to have sensitive data. Your social media feeds, non-company webmail, and a whole lot of other stuff are typically fair game though, so just assume that everywhere you surf, they can see what you’re doing. Can you get around that with a VPN? Maybe, but your company may effectively block VPN services. And finally, does this mean if my company isn’t doing SSL Decryption that I’m invisible? No, there’s still a record of what servers you visited and, most likely, which sites you went to, even if not the exact pages.

Last but not least: with very few exceptions, the process of SSL Decryption is done for legitimate and very real security reasons. The technology helps keep malware out of the company’s network and acts as another link in the chain of security defending the organization. While there are no doubt some companies that do this to spy on their employees, they are the exception rather than the rule. Check Facebook and do your banking on your phone (off the company WiFi), or wait until you get home. 

Cybersecurity in Plain English: The Y of the xz Vulnerability

Because of some news that broke on Friday of last week, my inbox was inundated with various forms of the question “What is xz, and why is this a problem?” The xz library is a nearly ubiquitous installation on just about every flavor of Linux out there (and some other Operating Systems as well), so let’s dive into what it is, what happened last week, and what you need to do.

Libraries are collections of source code (application code) that can be brought into larger software projects to help speed up development and take advantage of economies of scale. There are libraries for common Windows functions, different common application behaviors, and thousands of other things overall. One such common function is data compression – such as the zip files that many (if not all) of us have used at some point in our day-to-day work. On Linux systems, the most common library used to create and manage compressed files is called “xz” (pronounced ex-zee in the USA, ex-zed in most of the rest of the world). This library can be found in thousands of applications, and installed on millions of Linux machines – including Cloud systems and application appliances used in overwhelming numbers of organizations. As you might guess, any security issues with xz would be problematic for the security community to say the least.
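
To give a sense of how widely this code sits under the hood: Python’s own standard library includes an lzma module built on liblzma, the compression engine that ships as part of the same XZ Utils project. Here is a minimal sketch of the kind of work it does:

import lzma

# Compress some repetitive data into the .xz format and verify the round trip.
data = b"Cybersecurity in plain English. " * 100
compressed = lzma.compress(data)            # xz-format output by default
print(len(data), "bytes compressed to", len(compressed))
print(lzma.decompress(compressed) == data)  # True - nothing was lost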

Late last week, the news broke that researchers had discovered a backdoor had been coded into recent versions of xz. This backdoor would allow an attacker to communicate with a Linux system via SSH (Secure Shell, which is commonly used for remote access) without having to go through the usual authentication process normally required to create an SSH session. SSH is how most Linux systems are managed, so the ability to open a shell session without going through the typically strict authentication sequence first is a nightmare for any Linux user, and just disabling SSH isn’t an option, as administrators would lose the ability to access and control those devices legitimately. 

The back door was put into the xz library by a volunteer coder who had worked on the project for some time. Such volunteer project maintainers routinely work on Open Source code projects, and are part of a much larger community that builds, maintains, and extends Open Source software. No information is currently available as to why they put the back door in, what their motivations were, or how exactly they went about it. Time will no doubt reveal that information, but until then we’re left with the problem in versions 5.6.0 and 5.6.1 of the xz library; the most recent stable version from before the back door was added is 5.4.6. 

“But,” I hear many saying online – both legitimately and sarcastically – “xz is Open Source, so isn’t it secure?” That’s a bit of an oversimplification, but the idea that Open Source software is inherently more secure is a common misconception, so let’s briefly talk about that. Open Source simply means that anyone who wants to see and (in most cases) use the source code for a library, application, or other component of software development is allowed to. Based on how the code is licensed, anyone may be able to view, edit, even re-write the source code; though they may be required to ensure the original is left alone and/or that they offer the same benefits to anyone else who wants to use their altered version of the source. Closed Source is the opposite: applications and other software built on proprietary code that only the software developer has access to. For most of us, nearly everything we use is Closed Source. This includes Microsoft Office, web applications like SalesForce, etc. What makes things even more confusing is that many of these Closed Source platforms actually use Open Source code along with their own proprietary code – there are entire books written on the subject of how that works if you are interested in learning more.

Open Source code is no more or less secure than Closed Source is. Both can have mistakes made in coding that create vulnerabilities a threat actor can exploit, and generally speaking both have about the same incidence of that happening. Open Source has the benefit of being available to anyone who wants to look at the source code, which means a vulnerability discovered in Open Source software *can* – with specific stress on *can* – often be patched more quickly, because anyone could write the patch. You are not waiting on a software development firm to create and implement the patch with their own development team. The drawback is that many Open Source projects are built and maintained by volunteers who are under no obligation to write a fix, while Closed Source vendors have paying customers pushing them to produce one. So while Open Source might get patched faster, that is by no means a guarantee of any kind. We’ve seen Closed Source vulnerabilities get patched immediately, Open Source vulnerabilities never get patched because no one wants to write the fix, and every combination in between. From a security perspective, you should view Open Source and Closed Source software as equal – and address any issues with either in the same way: the vulnerability has to be patched and/or otherwise mitigated as quickly as possible.

So, what do you need to do about the xz vulnerability? First off, the security community has not – as of today – seen a lot of attempts to leverage the back door, so while there isn’t any doubt this will be a problem at some point, it isn’t a problem right at this moment. That gives us the luxury of time, and we can use that to avoid panic that would lead to drastic measures like we saw with the log4j situation a couple of years back. Make an inventory of all software, both built in-house and obtained from software developers, that uses the xz library. This may mean reaching out to the major software developers you use to ask them if they use the impacted library. For operating systems, this library is installed on nearly every version of Linux, and in some cases can be installed on MacOS as well. On either system, open a terminal session (on a Mac, Terminal is in the Utilities folder inside Applications) and type “xz -V” without the quotes. If you get nothing back, then you don’t have the library installed in the OS itself (though you might still be using applications that incorporate the library – see above). If you get a response of 5.6.0 or 5.6.1, then you have to take action. Follow your operating system’s normal process for downgrading or removing a package, with your goal being to install version 5.4.6 of the xz library; that earlier version is stable and did not have the back door code in it. Due to the sheer number of package managers on different versions of Linux, you’ll have to do a bit of legwork online to find instructions if you’re not familiar with the process. On MacOS, about the only thing that installs xz into the OS itself is Homebrew, a popular package manager for Mac. If you do use Homebrew, run “brew update” and then “brew upgrade” (without the quotes) in Terminal to force the xz package to be downgraded to 5.4.6 automatically. 
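
If you want to script that “xz -V” check across several machines, here is a minimal sketch (assuming the xz binary is on the system path) that runs the same command and flags the two known-bad versions:

import subprocess

# Run "xz -V" and flag the versions known to contain the back door.
try:
    output = subprocess.run(["xz", "-V"], capture_output=True, text=True).stdout
except FileNotFoundError:
    output = ""

if not output:
    print("xz does not appear to be installed in the OS itself")
elif "5.6.0" in output or "5.6.1" in output:
    print("Affected version found - plan a downgrade to 5.4.6:", output.splitlines()[0])
else:
    print("Not an affected version:", output.splitlines()[0])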

While there isn’t currently any indication of attacks using this vulnerability, there will be over time. When software vendors start releasing patches for the vulnerability within their own applications, threat actors may choose to try to use the vulnerability to exploit organizations that don’t apply the patch. We’ve seen that behavior before, especially in cases like this where exploiting the vulnerability isn’t as straightforward or as easy as many other techniques. Threat actors will continue using easier methods to gain access unless they discover that an exploit leveraging the back door will get them access to something known to be valuable to their end goals. So downgrading to xz version 5.4.6 wherever possible is the best course of action, and should be done as soon as possible for any/all Linux systems in your organization. Where it isn’t possible (because an application requires version 5.6.x, for example), very close monitoring of SSH connections is mandatory to ensure that only authorized users are gaining access to the organization’s systems. Of course, once an updated version of the xz library is released without the back door, then upgrading should be done as soon as possible. Also remember to patch all software (Open or Closed Source) that puts out an update to address the vulnerability as soon as it is possible to do so, as threat actors will be on the lookout for valuable targets they can go after once they know an application is vulnerable if not updated. 

The xz library vulnerability has the potential to be a major headache, but due to the relative complexity of exploiting it, and the fact that there are currently much easier methods a threat actor can use to try to gain access, we have the luxury of time to identify and mitigate the threat. Having your IT and Security teams take action now will save you a lot of time and panic later. Gathering up information about where the packages are used will also make the future update to a new version without the back door much easier, so taking action now has benefits that will extend beyond just the issues discovered last week.