A Study on the Psychology of Social Engineering-Based Cyberattacks and Existing Countermeasures
Abstract
1. Introduction
2. Social Engineering Attacks
2.1. Phishing Attack
2.2. Dumpster Diving
2.3. Scareware
2.4. Water Hole
2.5. Reverse Social Engineering
2.6. Deepfake
3. Influence Methodologies
3.1. Social Influence
3.2. Persuasion
3.3. Attitude and Behavior
3.4. Trust and Deception
3.5. Language and Reasoning
3.6. Countering Social Engineering-Based Cyberattacks
3.7. Machine Learning-Based Countermeasures
3.7.1. Deep Learning
3.7.2. Reinforcement Learning
3.7.3. Natural Language Processing
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---|
SE | social engineering
BEC | business email compromise
RAT | remote access Trojan
ML | machine learning
DSD | distributed spam distraction
GANs | generative adversarial networks
ANNs | artificial neural networks
TPB | theory of planned behavior
IDT | interpersonal deception theory
SMS | short message service
NLP | natural language processing
DL | deep learning
DNN | deep neural network
LSTM | long short-term memory
RL | reinforcement learning
CRM | cyber-resilient mechanism
References
- Abroshan, H.; Devos, J.; Poels, G.; Laermans, E. Phishing happens beyond technology: The effects of human behaviors and demographics on each step of a phishing process. IEEE Access 2021, 9, 44928–44949.
- Siddiqi, M.A.; Mugheri, A.; Oad, K. Advanced persistent threats defense techniques: A review. Pak. J. Comput. Inf. Syst. 2016, 1, 53–65.
- Wang, Z.; Zhu, H.; Sun, L. Social engineering in cybersecurity: Effect mechanisms, human vulnerabilities and attack methods. IEEE Access 2021, 9, 11895–11910.
- Albladi, S.M.; Weir, G.R.S. Predicting individuals’ vulnerability to social engineering in social networks. Cybersecurity 2020, 3, 7.
- Saudi Aramco Confirms Data Leak after Reported Cyber Ransom. Available online: https://www.bloomberg.com/news/articles/2021-07-21/saudi-aramco-confirms-data-leak-after-reported-cyber-extortion (accessed on 6 August 2021).
- Marriott Discloses Data Breach Possibly Affecting over 5 Million Customers. Available online: https://edition.cnn.com/2020/04/01/business/marriott-hack-trnd/index.html (accessed on 10 August 2021).
- Marriott Data Breach FAQ: How Did It Happen and What Was the Impact? Available online: https://www.csoonline.com/article/3441220/marriott-data-breach-faq-how-did-it-happen-and-what-was-the-impact.html (accessed on 7 July 2021).
- Hughes-Lartey, K.; Li, M.; Botchey, F.E.; Qin, Z. Human factor, a critical weak point in the information security of an organization’s Internet of things. Heliyon 2021, 7, 6522–6535.
- Siddiqi, M.A.; Ghani, N. Critical analysis on advanced persistent threats. Int. J. Comput. Appl. 2016, 141, 46–50.
- Americans Lost $29.8 Billion to Phone Scams Alone over the Past Year. Available online: https://www.cnbc.com/2021/06/29/americans-lost-billions-of-dollars-to-phone-scams-over-the-past-year.html (accessed on 8 August 2021).
- Widespread Credential Phishing Campaign Abuses Open Redirector Links. Available online: https://www.microsoft.com/security/blog/2021/08/26/widespread-credential-phishing-campaign-abuses-open-redirector-links/ (accessed on 11 October 2021).
- Twitter Hack: Staff Tricked by Phone Spear-Phishing Scam. Available online: https://www.bbc.com/news/technology-53607374 (accessed on 10 August 2021).
- Shark Tank Host Barbara Corcoran Loses $380,000 in Email Scam. Available online: https://www.forbes.com/sites/rachelsandler/2020/02/27/shark-tank-host-barbara-corcoran-loses-380000-in-email-scam/?sh=73b0935a511a (accessed on 7 October 2021).
- Toyota Parts Supplier Hit by $37 Million Email Scam. Available online: https://www.forbes.com/sites/leemathews/2019/09/06/toyota-parts-supplier-hit-by-37-million-email-scam/?sh=733a2c6e5856 (accessed on 7 October 2021).
- Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. Available online: https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402 (accessed on 11 October 2021).
- Google and Facebook Duped in Huge ‘Scam’. Available online: https://www.bbc.com/news/technology-39744007 (accessed on 15 October 2021).
- Facebook and Google Were Conned out of $100m in Phishing Scheme. Available online: https://www.theguardian.com/technology/2017/apr/28/facebook-google-conned-100m-phishing-scheme (accessed on 12 October 2021).
- Govindankutty, M.S. Is human error paving way to cyber security? Int. Res. J. Eng. Technol. 2021, 8, 4174–4178.
- Siddiqi, M.A.; Pak, W. Optimizing filter-based feature selection method flow for intrusion detection system. Electronics 2020, 9, 2114.
- Human Cyber Risk—The First Line of Defense. Available online: https://www.aig.com/about-us/knowledge-insights/human-cyber-risk-the-first-line-of-defense (accessed on 12 August 2021).
- Pfeffel, K.; Ulsamer, P.; Müller, N. Where the user does look when reading phishing mails—An eye-tracking study. In Proceedings of the International Conference on Human-Computer Interaction (HCII), Orlando, FL, USA, 26–31 July 2019.
- Gratian, M.; Bandi, S.; Cukier, M.; Dykstra, J.; Ginther, A. Correlating human traits and cyber security behavior intentions. Comput. Secur. 2018, 73, 345–358.
- Dhillon, G.; Talib, Y.A.; Picoto, W.N. The mediating role of psychological empowerment in information security compliance intentions. J. Assoc. Inf. Syst. 2020, 21, 152–174.
- 12 Types of Phishing Attacks and How to Identify Them. Available online: https://securityscorecard.com/blog/types-of-phishing-attacks-and-how-to-identify-them (accessed on 16 August 2021).
- Social Engineering Attack Escalation. Available online: https://appriver.com/blog/201708social-engineering-attack-escalation (accessed on 11 September 2021).
- Cross, M. Social Media Security: Leveraging Social Networking While Mitigating Risk, 1st ed.; Syngress Publishing: Rockland, MA, USA, 2014; pp. 161–191.
- Grover, A.; Berghel, H.; Cobb, D. Advances in Computers; Academic Press: Burlington, MA, USA, 2011; Volume 83, pp. 1–50.
- Malin, C.H.; Gudaitis, T.; Holt, T.J.; Kilger, M. Phishing, Watering Holes, and Scareware. In Deception in the Digital Age: Exploiting and Defending Human Targets through Computer-Mediated Communications, 1st ed.; Academic Press: Burlington, MA, USA, 2017; pp. 149–166.
- Malin, C.H.; Gudaitis, T.; Holt, T.J.; Kilger, M. Viral Influence: Deceptive Computing Attacks through Persuasion. In Deception in the Digital Age: Exploiting and Defending Human Targets through Computer-Mediated Communications, 1st ed.; Academic Press: Burlington, MA, USA, 2017; pp. 77–124.
- Social Engineering: What You Can Do to Avoid Being a Victim. Available online: https://www.g2.com/articles/social-engineering (accessed on 26 August 2021).
- Social Engineering Technique: The Watering Hole Attack. Available online: https://medium.com/@thefoursec/social-engineering-technique-the-watering-hole-attack-9ee3d2ca17b4 (accessed on 26 August 2021).
- Shi, Z.R.; Schlenker, A.; Hay, B.; Bittleston, D.; Gao, S.; Peterson, E.; Trezza, J.; Fang, F. Draining the water hole: Mitigating social engineering attacks with cybertweak. In Proceedings of the Thirty-Second Innovative Applications of Artificial Intelligence Conference (IAAI-20), New York, NY, USA, 9–11 February 2020.
- Parthy, P.P.; Rajendran, G. Identification and prevention of social engineering attacks on an enterprise. In Proceedings of the International Carnahan Conference on Security Technology (ICCST), Chennai, India, 1–3 October 2019.
- Irani, D.; Balduzzi, M.; Balzarotti, D.; Kirda, E.; Pu, C. Reverse social engineering attacks in online social networks. In Proceedings of the Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA), Berlin, Germany, 7–8 July 2011.
- Albahar, M.; Almalki, J. Deepfakes: Threats and countermeasures systematic review. J. Theor. Appl. Inf. Technol. 2019, 97, 3242–3250.
- Chi, H.; Maduakor, U.; Alo, R.; Williams, E. Integrating deepfake detection into cybersecurity curriculum. In Proceedings of the Future Technologies Conference (FTC), Virtual Platform, San Francisco, CA, USA, 5–6 November 2020.
- Gass, R.H. International Encyclopedia of the Social & Behavioral Sciences, 2nd ed.; Elsevier: Houston, TX, USA, 2015; pp. 348–354.
- Myers, D. Social Psychology, 10th ed.; McGraw Hill: New York, NY, USA, 2012; pp. 266–304.
- Mamedova, N.; Urintsov, A.; Staroverova, O.; Ivanov, E.; Galahov, D. Social engineering in the context of ensuring information security. In Proceedings of the Current Issues of Linguistics and Didactics: The Interdisciplinary Approach in Humanities and Social Sciences (CILDIAH), Volgograd, Russia, 23–28 April 2019.
- Foa, E.B.; Foa, U.G. Handbook of Social Resource Theory; Springer: New York, NY, USA, 2012; pp. 15–32.
- Wang, Z.; Zhu, H.; Liu, P.; Sun, L. Social engineering in cybersecurity: A domain ontology and knowledge graph application examples. Cybersecurity 2021, 4, 31.
- Collins, N.L.; Miller, L.C. Self-disclosure and liking: A meta-analytic review. Psychol. Bull. 1994, 116, 457–475.
- Hacking Human Psychology: Understanding Social Engineering Hacks. Available online: https://www.relativity.com/blog/hacking-human-psychology-understanding-social-engineering/ (accessed on 2 September 2021).
- Ferreira, A.; Coventry, L.; Lenzini, G. Principles of persuasion in social engineering and their use in phishing. In Proceedings of the Human Aspects of Information Security, Privacy, and Trust (HAS), Los Angeles, CA, USA, 2–7 August 2015.
- Cialdini, R.B. Influence: The Psychology of Persuasion, revised ed.; Harper Business: New York, NY, USA, 2006; pp. 1–12.
- Norton, M.; Frost, J.; Ariely, D. Less is more: The lure of ambiguity, or why familiarity breeds contempt. J. Pers. Soc. Psychol. 2007, 92, 97–105.
- Guadagno, R.E.; Cialdini, R.B. The Social Net: The Social Psychology of the Internet, 1st ed.; Oxford University Press: New York, NY, USA, 2009; pp. 91–113.
- Osterhouse, R.A.; Brock, T.C. Distraction increases yielding to propaganda by inhibiting counterarguing. J. Pers. Soc. Psychol. 1970, 15, 344–358.
- Siadati, H.; Nguyen, T.; Gupta, P.; Jakobsson, M.; Memon, N. Mind your SMSes: Mitigating social engineering in second factor authentication. Comput. Secur. 2017, 65, 14–28.
- Priester, J.; Petty, R. Source attributions and persuasion: Perceived honesty as a determinant of message scrutiny. Pers. Soc. Psychol. Bull. 1995, 21, 637–654.
- Mitnick, K.D.; Simon, W.L.; Wozniak, S. The Art of Deception: Controlling the Human Element of Security, 1st ed.; Wiley: Hoboken, NJ, USA, 2003; pp. 59–71.
- Ajzen, I. The theory of planned behavior: Frequently asked questions. Hum. Behav. Emerg. Technol. 2020, 2, 314–324.
- Gulenko, I. Social against social engineering: Concept and development of a Facebook application to raise security and risk awareness. Inf. Manag. Comput. Secur. 2013, 21, 91–101.
- Leary, M.R. Self-Presentation: Impression Management and Interpersonal Behavior, 1st ed.; Routledge: London, UK, 1996; pp. 25–35.
- Montañez, R.; Golob, E.; Xu, S. Human cognition through the lens of social engineering cyberattacks. Front. Psychol. 2020, 11, 1755–1773.
- Metzger, M.J.; Hartsell, E.H.; Flanagin, A.J. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun. Res. 2020, 47, 3–28.
- Social Engineering as a Threat to Societies: The Cambridge Analytica Case. Available online: https://thestrategybridge.org/the-bridge/2018/7/18/social-engineering-as-a-threat-to-societies-the-cambridge-analytica-case (accessed on 20 September 2021).
- Lahcen, R.A.M.; Caulkins, B.; Mohapatra, R.; Kumar, M. Review and insight on the behavioral aspects of cybersecurity. Cybersecurity 2020, 3, 10.
- You, L.; Lee, Y.H. The bystander effect in cyberbullying on social network sites: Anonymity, group size, and intervention intentions. Telemat. Inform. 2019, 45, 101284.
- Sherchan, W.; Nepal, S.; Paris, C. A survey of trust in social networks. ACM Comput. Surv. 2013, 45, 1–33.
- Molodetska, K.; Solonnikov, V.; Voitko, O.; Humeniuk, I.; Matsko, O.; Samchyshyn, O. Counteraction to information influence in social networking services by means of fuzzy logic system. Int. J. Electr. Comput. Eng. 2021, 11, 2490–2499.
- Campbell, C.C. Solutions for counteracting human deception in social engineering attacks. Inf. Technol. People 2019, 32, 1130–1152.
- Burgoon, J.K.; Buller, D.B. Interpersonal deception theory. Commun. Theory 1996, 6, 203–242.
- Handoko, H.; Putri, D.A.W. Threat language: Cognitive exploitation in social engineering. In Proceedings of the International Conference on Social Sciences, Humanities, Economics and Law (ICSSHEL), Padang, Indonesia, 5–6 September 2018.
- Dorr, B.J.; Bhatia, A.; Dalton, A.; Mather, B.; Hebenstreit, B.; Santhanam, S.; Cheng, Z.; Shaikh, S.; Zemel, A.; Strzalkowski, T. Detecting asks in SE attacks: Impact of linguistic and structural knowledge. arXiv 2020, arXiv:2002.10931.
- Rodríguez-Priego, N.; Bavel, R.V.; Vila, J.; Briggs, P. Framing effects on online security behavior. Front. Psychol. 2020, 11, 2833–2844.
- Yasin, A.; Fatima, R.; Liu, L.; Wang, J.; Ali, R.; Wei, Z. Understanding and deciphering of social engineering attack scenarios. Secur. Priv. 2021, 4, e161.
- Handoko, H.; Putri, D.A.W.; Sastra, G.; Revita, I. The language of social engineering: From persuasion to deception. In Proceedings of the 2nd International Seminar on Linguistics (ISL), Padang, West Sumatra, Indonesia, 12–13 August 2015.
- Comment of NLP and Social Engineering Hacking the Human Mind Article. Available online: https://www.hellboundhackers.org/articles/read-article.php?article_id=8%78 (accessed on 9 September 2021).
- Alkhaiwani, A.H.; Almalki, G.A. Saudi human awareness needs. A survey in how human causes errors and mistakes leads to leak confidential data with proposed solutions in Saudi Arabia. In Proceedings of the National Computing Colleges Conference (NCCC), Taif, Saudi Arabia, 27–28 March 2021.
- Spear Phishing: Top Threats and Trends. Available online: https://assets.barracuda.com/assets/docs/dms/spear-phishing_report_vol6.pdf (accessed on 23 May 2022).
- Sushruth, V.; Reddy, K.R.; Chandavarkar, B.R. Social engineering attacks during the COVID-19 pandemic. SN Comput. Sci. 2021, 2, 78.
- Washo, A.H. An interdisciplinary view of social engineering: A call to action for research. Comput. Hum. Behav. Rep. 2021, 4, 100126.
- Alsulami, M.H.; Alharbi, F.D.; Almutairi, H.M.; Almutairi, B.S.; Alotaibi, M.M.; Alanzi, M.E.; Alotaibi, K.G.; Alharthi, S.S. Measuring awareness of social engineering in the educational sector in the kingdom of Saudi Arabia. Information 2021, 12, 208.
- Aldawood, H.; Skinner, G. Reviewing cyber security social engineering training and awareness programs—Pitfalls and ongoing issues. Future Internet 2019, 11, 73.
- Fan, W.; Lwakatare, K.; Rong, R. Social engineering: I-E based model of human weakness for attack and defense investigations. Int. J. Comput. Netw. Inf. Secur. 2017, 9, 1–11.
- Bakhshi, T. Social engineering: Revisiting end-user awareness and susceptibility to classic attack vectors. In Proceedings of the 13th International Conference on Emerging Technologies (ICET), Islamabad, Pakistan, 27–28 December 2017.
- Sillanpää, M.; Hautamäki, J. Social engineering intrusion: A case study. In Proceedings of the 11th International Conference on Advances in Information Technology (IAIT), Bangkok, Thailand, 1–3 July 2020.
- What Is Social Engineering? A Definition + Techniques to Watch for. Available online: https://us.norton.com/internetsecurity-emerging-threats-what-is-social-engineering.html (accessed on 16 September 2021).
- What Is Social Engineering and How to Prevent It. Available online: https://www.avast.com/c-social-engineering (accessed on 16 September 2021).
- Network Intrusion Detection Techniques Using Machine Learning. Available online: https://www.researchgate.net/publication/349392282_Network_Intrusion_Detection_Techniques_using_Machine_Learning (accessed on 20 May 2022).
- Here’s How Cyber Threats Are Being Detected Using Deep Learning. Available online: https://techhq.com/2021/09/heres-how-cyber-threats-are-being-detected-using-deep-learning (accessed on 4 November 2021).
- Peng, T.; Harris, I.; Sawa, Y. Detecting phishing attacks using natural language processing and machine learning. In Proceedings of the IEEE 12th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 31 January–2 February 2018.
- Tsinganos, N.; Sakellariou, G.; Fouliras, P.; Mavridis, I. Towards an automated recognition system for chat-based social engineering attacks in enterprise environments. In Proceedings of the 13th International Conference on Availability, Reliability and Security (ICARS), Hamburg, Germany, 27–30 August 2018.
- Siddiqi, M.; Pak, W. An agile approach to identify single and hybrid normalization for enhancing machine learning based network intrusion detection. IEEE Access 2021, 9, 137494–137513.
- Lansley, M.; Polatidis, N.; Kapetanakis, S.; Amin, K.; Samakovitis, G.; Petridis, M. Seen the villains: Detecting social engineering attacks using case-based reasoning and deep learning. In Proceedings of the Twenty-Seventh International Conference on Case-Based Reasoning (ICCBR), Otzenhausen, Germany, 28–30 September 2019.
- Ozcan, A.; Catal, C.; Donmez, E.; Senturk, B. A hybrid DNN–LSTM model for detecting phishing URLs. Neural Comput. Appl. 2021, 9, 1–17.
- Vinayakumar, R.; Alazab, M.; Jolfaei, A.; Soman, K.P.; Poornachandran, P. Ransomware triage using deep learning: Twitter as a case study. In Proceedings of the Cybersecurity and Cyberforensics Conference (CCC), Melbourne, Australia, 8–9 May 2019.
- Vinayakumar, R.; Soman, K.P.; Poornachandran, P.; Mohan, V.S.; Kumar, A.D. ScaleNet: Scalable and hybrid framework for cyber threat situational awareness based on DNS, URL, and email data analysis. J. Cyber Secur. Mobil. 2019, 8, 189–240.
- Ketha, S.; Srinivasan, S.; Ravi, V.; Soman, K.P. Deep learning approach for intelligent named entity recognition of cyber security. In Proceedings of the 5th International Symposium on Signal Processing and Intelligent Recognition Systems (SIRS’19), Trivandrum, India, 18–21 December 2019.
- Huang, Y.; Huang, L.; Zhu, Q. Reinforcement learning for feedback-enabled cyber resilience. Annu. Rev. Control 2022, 23, 273–295.
- Bland, J.A.; Petty, M.D.; Whitaker, T.S.; Maxwell, K.P.; Cantrell, W.A. Machine learning cyberattack and defense strategies. Comput. Secur. 2020, 92, 101738.
- Rawindaran, N.; Jayal, A.; Prakash, E.; Hewage, C. Cost benefits of using machine learning features in NIDS for cyber security in UK small medium enterprises (SME). Future Internet 2021, 13, 186.
- Salloum, S.; Gaber, T.; Vadera, S.; Shaalan, K. Phishing email detection using natural language processing techniques: A literature survey. Procedia Comput. Sci. 2021, 189, 19–28.
- Fang, Y.; Zhang, C.; Huang, C.; Liu, L.; Yang, Y. Phishing email detection using improved RCNN model with multilevel vectors and attention mechanism. IEEE Access 2019, 7, 56329–56340.
- Gutierrez, C.N.; Kim, T.; Corte, R.D.; Avery, J.; Goldwasser, D.; Cinque, M.; Bagchi, S. Learning from the ones that got away: Detecting new forms of phishing attacks. IEEE Trans. Dependable Secure Comput. 2018, 15, 988–1001.
- Repke, T.; Krestel, R. Bringing back structure to free text email conversations with recurrent neural networks. In Proceedings of the European Conference on Information Retrieval (ECIR), Grenoble, France, 25–29 March 2018.
- Lan, Y. Chat-oriented social engineering attack detection using attention-based Bi-LSTM and CNN. In Proceedings of the 2nd International Conference on Computing and Data Science (CDS), Stanford, CA, USA, 28–30 January 2021.
- Cano, J. The human factor in information security: The weakest link or the most fatigued? Inf. Syst. Audit. Control Assoc. 2019, 5, 1–7.
- Bian, J.; Li, L.; Sun, J.; Deng, J.; Li, Q.; Zhang, X.; Yan, L. The influence of self-relevance and cultural values on moral orientation. Front. Psychol. 2019, 10, 292.
- Bada, M.; Sasse, A.M.; Nurse, J. Cyber security awareness campaigns: Why do they fail to change behavior? In Proceedings of the International Conference on Cyber Security for Sustainable Society (ICCSSS), Coventry, UK, 26 February 2015.
- Mortan, E.A. Cyber Security and Supply Chain Management: Risk, Challenges, and Solutions, 1st ed.; World Scientific Publishing: Singapore, 2021; pp. 62–63.
- Alkhalil, Z.; Hewage, C.; Nawaf, L.; Khan, I. Phishing attacks: A recent comprehensive study and a new anatomy. Front. Comput. Sci. 2021, 3, 563060.
- Dodge, R.; Carver, C.; Ferguson, A.J. Phishing for user security awareness. Comput. Secur. 2007, 26, 73–80.
- Arachchilage, N.A.G.; Love, S. Security awareness of computer users: A phishing threat avoidance perspective. Comput. Hum. Behav. 2014, 38, 304–312.
- Ani, U.D.; He, H.; Tiwari, A. Human factor security: Evaluating the cybersecurity capacity of the industrial workforce. J. Syst. Inf. Technol. 2019, 21, 2–35.
- Sibrian, J. Sensitive Data? Now That’s a Catch! The Psychology of Phishing. Bachelor’s Thesis, Harvard College, Cambridge, MA, USA, 2021; pp. 17–28.
- Kabay, M.E.; Robertson, B.; Akella, M.; Lang, D.T. Chapter 50: Using Social Psychology to Implement Security Policies. In Computer Security Handbook, 6th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2012; pp. 50.1–50.25.
- What You Need to Know about Cybersecurity in 2022. Available online: https://www.weforum.org/agenda/2022/01/cyber-security-2022-global-outlook (accessed on 4 April 2022).
Reference | Company | Date | Details/Damage | Breach Method/Tools
---|---|---|---|---|
[5] | Saudi Aramco | 2021 | The hackers claimed to hold almost 1 terabyte of Aramco data and demanded USD 50 million as ransom. | Phishing email.
[11] | Microsoft | 2021 | Several MS Office users fell for the phishing email scam; each victim was scammed out of USD 100 to 199. | Business email compromise (BEC), phishing email.
[6,7] | Marriott | 2018–2020 | On both occasions, hackers acquired access to millions of guest records, including guest names, addresses, contact numbers, and encrypted credit card information. | Phishing email, compromised credentials of two Marriott employees, remote access Trojan (RAT), Mimikatz post-exploitation tool.
[12] | Twitter | 2020 | Hackers compromised 130 Twitter accounts, each with at least 1 million followers, and used 45 highly influential accounts to promote a Bitcoin scam. | Impersonation-based SE attack, spear-phishing attacks.
[13] | Shark Tank | 2020 | The Shark Tank host lost USD 380,000 after falling for a scam email. | Phishing email.
[14] | Toyota | 2019 | The Toyota Boshoku Corporation lost USD 37 million after falling victim to a BEC attack. | Phishing email (i.e., BEC).
[15] | Energy firm (UK-based) | 2019 | The Chief Executive Officer (CEO) was deceived and scammed out of USD 243,000. | Deepfake (voice) phishing impersonation.
[16,17] | Google and Facebook | 2013–2015 | The phishing emails cost Google and Facebook over USD 100 million combined. | Spear-phishing emails.
Phishing Attack | Description |
---|---|
Spear | Tailored attack to target a specific individual. For instance, an employee is targeted to gain access to an organization’s network. |
Whaling | The intended target is usually a high-profile individual. The attack requires considerable time to find the opportunity or means to compromise the individual’s credentials. |
Vishing | Vishing, or voice phishing, is a SE-based attack in which a fraudulent call is used to acquire classified information or the credentials of the target individual.
Smishing | Smishing is the text message (SMS) variant of vishing; the attack is delivered via text message rather than a call.
Impersonation/business email compromise (BEC) | BEC is an attack that requires planning and information. In a BEC attack, the attacker impersonates a company executive, outsourced resource, or supplier to acquire classified information, access to an organizational network, etc.
Clone | Clone phishing is an email-based phishing attack in which the malicious actor knows most of the business applications used by a person or organization. Based on this knowledge, the attacker clones a similar email, disguised as an everyday message from one of those applications, to extract critical information or even credentials from the target.
Social media phishing | In social media phishing, the attacker observes the target individual’s social media and other frequently visited sites to collect detailed information, then plans the attack accordingly. The gathered information can be used to trick the victim in many ways.
Distributed spam distraction (DSD) | A DSD attack is executed in two steps. First, the victim is spammed with phishing emails mirroring an authentic or credible source, e.g., a newsletter, magazine, or software company. These fake emails contain a link leading the victim to a web page that copies the authentic company’s website. The second step depends on how the attacker plans to conduct the SE attack; e.g., the fake page may ask the victim for login information (to garner further or confidential information) in order to confirm their identity and proceed.
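Several of the phishing variants above share detectable surface cues, such as urgency-laden subject lines and embedded links whose domains do not match the purported sender. As a purely illustrative sketch (the keyword list, scoring weights, and domains below are invented for this example and are not taken from any detection system in the surveyed literature), such cues can be checked programmatically:

```python
import re

# Hypothetical urgency/scarcity cue words (illustrative only).
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expires"}

def link_domains(body: str) -> set:
    """Extract the host portion of every http(s) link in the message body."""
    return {m.group(1).lower()
            for m in re.finditer(r"https?://([^/\s]+)", body)}

def phishing_score(sender_domain: str, subject: str, body: str) -> int:
    """Count simple heuristic red flags; a higher score is more suspicious."""
    score = 0
    text = (subject + " " + body).lower()
    # Cue 1: urgency/scarcity wording, one point per matched keyword.
    score += sum(1 for word in URGENCY_WORDS if word in text)
    # Cue 2 (clone phishing): a link that does not point at the sender's
    # own domain earns a heavier penalty.
    if any(not d.endswith(sender_domain.lower()) for d in link_domains(body)):
        score += 2
    return score
```

Real detectors in the surveyed literature rely on ML/NLP models rather than fixed keyword lists; the rule-based form here only makes the cues explicit. For instance, `phishing_score("bank.com", "Urgent: verify your account now", "Your account will be suspended. Click https://bank-secure.example/login immediately")` scores 6, while a benign statement notice linking to `www.bank.com` scores 0.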
Types of Social Influence | Description |
---|---|
Group influence [38] | Conformity is a change in behavior to agree with others, and a group can induce such behavior. On social media, online groups may be used to influence victims into falling for a SE attack. For example, a social media group with hundreds of subscribers can present a subscriber with a malicious link and claim that every member must follow the link to remain part of the group for future events and information.
Informative influence/normative influence [39] | In SE attacks, the foe often crafts particular information and setups that exploit informational or normative influence, for example, telling the victim about some free software and convincing the victim to install it by stressing the software’s importance and ease of use. Such an approach can be used to confuse or manipulate the victim into performing certain actions or revealing information that benefits the foe.
Social exchange theory/reciprocity norm [34,40] | Such methods of influence are used in reverse SE attacks. Social exchange theory holds that people weigh (intentionally or unintentionally) the value of a relationship when making decisions. For example, in a corporate environment, coworkers exchange favors based on their relations with each other.
Moral influence/social responsibility [41] | SE attacks use moral influence or social responsibility in two ways. One is to exploit the victim’s helpful nature to extract information or gain favors that facilitate the attack. The other is to exert the pressure of social responsibility norms or moral duty on the victim during the SE attack; this pressure influences the victim’s behavior, especially if the victim is not otherwise keen to offer help. An example could be an online group for helping animals: the malicious actor can identify individuals who are highly motivated and willing to help, then target such a victim for financial gain or fabricate a story that plays on the victim’s moral values to extract information.
Self-disclosure/rapport and relation-building [42,43] | Research shows that, in the process of building social relations, self-disclosure creates a willingness to reveal more to people who appear connected to us. Adversaries use this SE method on victims who feel the need to connect with someone special.
Types of Persuasions | Description |
---|---|
Similarity [46] | Similarity of interests invites liking, while dissimilarity leads to dislike. Criminals use this as an effective approach to gain the victim’s trust. For example, on social media platforms, a foe may join groups similar to those joined by a potential target; such similarities can help build a relationship of trust between hacker and victim.
Distraction/manipulation [25,47,48] | Research shows that moderate distraction facilitates persuasion, and distraction is used as an effective tool in manipulation attacks. An example of a distraction-based SE attack is the DSD attack highlighted in Table 2.
Curiosity [49] | Most individuals are curious by nature, and in a SE attack human curiosity can be exploited in many ways. For example, the attacker can send a phishing email or an infected attachment with a curiosity-provoking subject line, e.g., “You are fired”, “Annual performance report”, or “Employee layoff list”.
Persuasion using authority/credibility [50,51] | Most people tend to comply with an authority figure. On the internet, hackers use symbols and logos that convey authenticity and authority; for example, displaying an official logo of a taxation or law enforcement agency to project authority and credibility can be an effective way to initiate a SE attack.
Methods to Influence Attitudes and Behaviors | Description |
---|---|
Impression/commitment [54,55] | Self-presentation theory highlights that every individual tries to present a likable impression, both internally and to other people, and may put considerable effort into creating a desirable image. Such efforts can be an opening for hackers to conduct SE attacks: for example, an individual’s behavior can be influenced if that person’s social image is threatened. |
Cognitive dissonance theory [56,57] | The theory describes the inner conflict that arises when an individual’s behaviors and beliefs are not aligned. Such conflict can trigger cognitive biases that affect decision-making, and a malicious actor can exploit these biases to extract confidential information. |
Behavior affects attitude [58] | The tendency of an individual who has agreed to a minor request to then comply with a larger one is known as the foot-in-the-door effect. In essence, people build an image by performing a minor favor; to maintain this image, they tend to agree to the next favor. SE attackers can exploit this behavior to initiate an attack. |
Bystander effect [59] | The bystander effect describes the reluctance of an individual to help when bystanders are present. In SE attacks, victims may be lured into a specific situation while in a group and exploited later in a private chat to acquire personal or confidential information. |
Scarcity/time pressure [41] | In SE attacks, the attacker uses scarcity to induce a feeling of panic or urgency, which can impair the victim’s decision-making. Exploiting this confusion, the attacker can persuade the victim into making decisions the attacker deems desirable. |
Sub-Domains of Trust and Deception | Description |
---|---|
Trust/relation [4] | Building trust is one of the most crucial parts of a SE attack. An attacker can use multiple means, such as influence, persuasion, likeness, or reward, to develop a trusted relationship with the target. Research shows that once a relationship of trust is developed, the victim no longer hesitates to be vulnerable in front of the trusted individual. Such risk-taking behavior can assist a SE attack. |
Deception/fraud [55,62,63] | Deception is an intentional act based on strategic interaction by the deceiver. The interpersonal deception theory (IDT) suggests that humans believe they can identify deception, but most of the time they cannot. The IDT also highlights that the malicious actor takes every action according to a strategic plan to manipulate the victim’s behavior. A deception-based SE attack can rely on several methods, e.g., lying, fabricating narratives, telling partial truths, dodging questions, or feigning misunderstanding. |
Sub-Domains of Language/Reasoning | Description |
---|---|
Framing effect/cognitive bias [66] | The framing effect is a cognitive bias in which opinions and decisions are influenced by the way a question or choice is phrased. This language-based framing enables decision manipulation. For example, customers prefer beef labeled “75% lean” over the same product labeled “25% fat”. In SE attacks, the victim’s cognitive bias is exploited through pre-planned language framing. |
Complicating the thinking process [67,68,69] | Language plays an integral role in the thought process for social interaction. This reliance creates an opportunity to invoke ‘thinking confusion’ through language. For example, an attacker can engage the victim with an ungrammatical or ambiguous statement that tempts the victim into acting on presumption, e.g., the statement “that’s touching to hear” may result in the victim touching his ear. Another example is an incomplete statement, such as “I can’t hear”, which could prompt the victim to check or adjust the equipment. |
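The influence cues catalogued above (urgency/scarcity, authority, curiosity-baiting) are precisely the textual signals that keyword-based filters, a precursor to the NLP countermeasures discussed later, try to flag. As a minimal illustrative sketch only — the keyword lists and the `score_message` helper are hypothetical assumptions, not taken from any surveyed countermeasure:

```python
# Minimal rule-based scorer for social engineering cues in message text.
# The cue categories mirror the influence methods discussed in Section 3;
# the keyword lists themselves are illustrative placeholders only.
CUES = {
    "urgency/scarcity": ["urgent", "immediately", "expires", "last chance", "act now"],
    "authority": ["tax office", "law enforcement", "your bank", "it department"],
    "curiosity": ["you are fired", "performance report", "layoff list"],
}

def score_message(text: str) -> dict:
    """Return the cue categories a message triggers, with the matched keywords."""
    lowered = text.lower()
    hits = {cat: [kw for kw in kws if kw in lowered] for cat, kws in CUES.items()}
    return {cat: kws for cat, kws in hits.items() if kws}

flags = score_message("URGENT: your bank account expires today, act now!")
```

Real filters go far beyond substring matching, but even this sketch shows why attackers who avoid obvious urgency and authority vocabulary (as in well-crafted spear phishing) evade simple rule-based defenses.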
No. | Attack | Methods to Influence Victims | Human Vulnerability Exploited |
---|---|---|---|
1 | Impersonation, business email compromise (BEC), clone phishing | Moral influence/social responsibility, similarity, persuasion using authority/credibility, interpersonal deception theory (IDT), complicating the thinking process, curiosity | Being helpful, charity, kindness, trying to be acceptable in social norms |
2 | Spear phishing, pretexting, smishing, whaling, vishing, deepfake | Similarity, persuasion using authority/credibility, impression/commitment, behavior affects attitude, scarcity, deception/fraud, complicating the thinking process | Being helpful, being obedient to authority, helping nature, panic, negligence |
3 | Social media phishing, scareware, reverse SE attack, deepfake | Group influence, informative influence/normative influence, social exchange theory/reciprocity norm, moral influence/social responsibility | Friendly nature, negligence, trustful nature, credibility |
4 | Waterhole attack, deepfake | Persuasion using authority/credibility, framing effect/cognitive bias, complicating the thinking process | Curiosity, greed, excitement, fear |
Ref. | Training | Cyber Security Policies | Communication Policies | Company Equipment | Spam Filter/Antivirus/Firewall | Encrypted Communication | Password/Data Management | Incident Report |
---|---|---|---|---|---|---|---|---|
[8] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||
[33] | ✓ | ✓ | ✓ | ✓ | ✓ | |||
[58] | ✓ | ✓ | ✓ | |||||
[62] | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||
[70] | ✓ | |||||||
[71] | ✓ | ✓ | ✓ | |||||
[72] | ✓ | ✓ | ✓ | ✓ | ||||
[73] | ✓ | ✓ | ||||||
[74] | ✓ | ✓ | ✓ | |||||
[75] | ✓ | ✓ | ✓ | |||||
[76] | ✓ | ✓ | ✓ | |||||
[77] | ✓ | ✓ | ✓ | |||||
[78] | ✓ | ✓ | ||||||
[79] | ✓ | ✓ | ✓ | ✓ | ✓ | |||
[80] | ✓ | ✓ | ✓ |
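The comparison above reads as a coverage matrix: each surveyed work recommends a subset of eight organizational countermeasure categories. A hedged sketch of how such a matrix might be queried programmatically — the category identifiers are abbreviations of the column headers, and the example adoption set is hypothetical, not transcribed from any row of the table:

```python
# The eight countermeasure categories from the comparison table (abbreviated).
CATEGORIES = [
    "training", "security_policies", "communication_policies", "company_equipment",
    "spam_filter", "encrypted_comms", "password_mgmt", "incident_report",
]

def coverage(adopted: set) -> float:
    """Fraction of the eight countermeasure categories an organization covers."""
    return len(adopted & set(CATEGORIES)) / len(CATEGORIES)

# Hypothetical organization adopting four of the eight categories.
org = {"training", "spam_filter", "password_mgmt", "incident_report"}
ratio = coverage(org)
```

Framing the table this way makes the survey's point concrete: no single category suffices, and a defense posture can be audited by how many complementary categories it actually covers.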
Section | Section Summary |
---|---|
Section 1 | This section introduces the concept and workings of SE attacks. To highlight the importance of SE-based cyberattacks, some of the most recent and prominent attacks are presented. |
Section 2 | This section covers the most common types of SE attacks and their methodologies, i.e., phishing, dumpster diving, scareware, water hole, reverse SE, and deepfake. |
Section 3 | This section discusses the methods used to influence or exploit human vulnerabilities in SE attacks, and maps the interconnections between SE attacks, methods of influence, and human vulnerabilities. This mapping plays a key role in understanding and countering SE-based cyberattacks. |
Section 4 | This section presents recent research on methods to counter SE attacks, providing an elaborate overview of the proposed countermeasures, including the most prominent ML-based methods, and covering existing concerns about ML-based countermeasures. |
Section 5 | This section discusses concerns over recently proposed methods to counter SE attacks and emphasizes the need for a multidimensional approach. The limitations of the paper are also highlighted. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Siddiqi, M.A.; Pak, W.; Siddiqi, M.A. A Study on the Psychology of Social Engineering-Based Cyberattacks and Existing Countermeasures. Appl. Sci. 2022, 12, 6042. https://doi.org/10.3390/app12126042