Blue Line

Cybersecurity challenges

January 4, 2022  By Cameron Field and Ian Goertz

Photo credit: Sikov / Adobe Stock

Private sector perspectives on the endless war

In the 20th century, it was said that the burglar always came through a basement window to avoid detection. Well into the 21st century, there is no need for that anymore: thieves need only the technical capability to subvert our technology, or to socially engineer us out of our personal information. Whether these vulnerabilities are observed by police through crime theory or by private companies working to defend themselves and their customers, the growth of cybersecurity threats is overwhelming.

Even so, despite years of innovation and growing cybersecurity budgets, one of the greatest challenges in securing individuals and organizations against a growing list of cybersecurity threats persists: people. In some industry surveys, over 50 per cent of respondents globally identify employees as the greatest perceived security threat to their organization (Kaspersky Lab, 2017), and despite enormous effort, this is unlikely to change.

There is more than one reason for this: bad actors are getting smarter, technology is enabling innovation, and individuals are generally ill-equipped to identify the kinds of threats they face and how to respond. From a criminological standpoint, target hardening of potential victims is often failing.

Social engineering

Social engineering has existed in cyberspace for so long that the U.S. Cybersecurity and Infrastructure Security Agency (CISA) first published an alert on it back in 2009. Many of the tactics leveraged then remain just as prevalent today, with some technological upgrades, and many of the ways to spot these schemes also remain the same. If we are truly listening, then, there is an opportunity to shore up defences against social engineering.


Social engineering refers to the targeting of individuals’ underlying social interaction skills, general societal behaviours and emotions in order to learn information about a potential target or even gain access to a system, network or account. As CISA puts it in their report, “In a social engineering attack, an attacker uses human interaction (social skills) to obtain or compromise information about an organization or its computer systems… By asking questions, he or she may be able to piece together enough information to infiltrate an organization’s network” (CISA 2009).

Beyond the technical changes that have made these campaigns look more realistic, people feel sympathy, panic, curiosity and greed, and for years threat actors have exploited these emotions to create convincing scams or initial points of entry. Over the past decade, tried-and-true social engineering scams have continued to succeed, both in generating illicit revenue and in obtaining initial access for further malicious activity. Attackers exploit the latest events, such as COVID-19 or the holidays; leave USB devices in parking lots or mail them directly to individuals; and run classic fraud schemes leveraging the branding of popular companies or charities. Although these tactics are not new, they remain highly successful. One notorious application of social engineering is the romance scam targeting the elderly; even Canadian public-private partnerships such as Project Chameleon have not stopped the growth of these scams (FINTRAC, 2021).

The major difference, as technology has improved, is that bad actors have expanded their tools and the apparent legitimacy of their activities while keeping the same thematic approach: targeting an individual's social interactions and emotions. The use of spoofed information, such as phone numbers or websites, previously breached details about an individual, and other commonly known information is becoming increasingly standard in malicious campaigns. The growing availability of information, both public and private, has created an environment where, with enough time and determination, just about any bad actor can build a convincing social engineering campaign.

Automation has furthered these strategies by letting both sophisticated and less sophisticated actors streamline and manage their social engineering campaigns with bots and other tools. Bad actors can use these tools to expand campaigns in size and scope with relatively minimal effort, and to manage the various stages of manipulating a target so as to maximize the chance of success. The tools are widely available on underground markets and forums, as well as in communities on free applications such as Discord or Telegram, and they continue to generate significant success.

Social engineering regularly enables actors to circumvent defences worth millions in cybersecurity spending, and it remains one of the most prevalent methods of gaining initial access.

Brute forcing & credential re-use

While brute forcing is unlikely to require understanding or knowledge of a particular individual or target, it does rely on people for its success in a fashion similar to social engineering. Instead of exploiting emotions, it exploits people's tendency to prioritize ease of use and convenience over making their accounts and private information more secure. Brute force attacks are an old and still-popular tactic that uses automated trial and error to guess login details. The tactic benefits greatly from weak passwords and password re-use, both of which are common flaws in individuals' account security.

In a brute force attack, bad actors leverage previous breaches, commonly used passwords, and automated bots and tools to guess credentials systematically. In some cases, brute force attacks may not even be targeted; attackers simply try common passwords against a set of previously breached emails or usernames in the hope of gaining access.

Ultimately, brute forcing and credential re-use attacks are avoidable; doing so, however, requires a level of security know-how and a willingness to prioritize safety over convenience, which, at this stage, people tend to lack. The recurring strategy of private sector companies and law enforcement agencies reminding people to safeguard their passwords and change them frequently has not worked.

Future state – deep fakes

As we look ahead to the future of malicious cyber activity, and in particular the challenge people face as part of the process, there are many tactics, techniques and procedures that will raise an eyebrow. One gaining notoriety is the 'deep fake'.

A 'deep fake' is synthetic media in which an individual's likeness, whether voice or image, is manipulated to suit the creator's aim. Deep fakes currently exist in many forms, largely for comedic or adult entertainment purposes, and can be combined with other technology to produce convincing fake videos. The technology's logical growth into other areas, including those that pose a risk to private and public sector entities alike, law enforcement among them, is already being seen globally.

As deep fake technology improves, the potential for fraudulent and malicious schemes will only increase. These activities are slowly gaining steam, hindered only by the resources required and the availability of the technology. Convincing fakes place even greater importance on targeted individuals' ability to identify fakery and suspicious activity in order to avoid becoming victims.

As the race for cybersecurity continues at a brutal pace, the war is very much ongoing. Private sector entities, law enforcement and individual citizens need to redouble their collective efforts. Citizens cannot rely on the private sector to cover their losses or protect their personal information; law enforcement needs to double down on upskilling and resourcing cyber investigations; and the private sector needs to continue to innovate. The greatest casualty of the information age is our privacy.



Cameron Field, BA, MSc, is the Senior Manager of BMO’s Anti-money Laundering Team.

Ian Goertz, BA, MA, is the Senior Manager of BMO’s Financial Crimes Unit.
