
Safety, Security, Privacy, and Cyber Resilience

Section Information

Artificial intelligence pervades all aspects of contemporary life and increasingly shapes how we work, communicate and organize ourselves as a society. Its rapid diffusion across industry, research and public services will continue to accelerate, leading to increasingly complex and interdependent AI-driven systems. The driving force behind these applications is machine learning and knowledge extraction, and with it the need for resilient, trustworthy and secure approaches grows substantially.

Modern AI applications operate in open, dynamic and often adversarial environments, creating vulnerabilities that range from data poisoning and adversarial manipulation to model drift, misuse and systemic failures.

At the same time, AI has become indispensable for maintaining cyber security itself, supporting anomaly detection, threat intelligence, incident response, compliance checking and the continuous monitoring of complex infrastructures. These developments require not only technical safeguards but also governance structures, auditability, transparency and alignment with organizational and societal norms.

This section therefore invites contributions from researchers and practitioners that address fundamental and applied challenges of machine learning and knowledge extraction (MAKE) in security, safety, privacy and cyber resilience.

Submissions that connect machine learning with knowledge extraction, hybrid or domain-informed methods, causal reasoning, verification techniques or human-centred evaluation are particularly welcome, as they reflect the interdisciplinary scope of the journal.

Work on emerging directions such as secure foundation models, provenance tracking, watermarking, robustness under distributional shift, risk-aware learning, explainability for safety-critical contexts and red-team evaluation is equally welcome.

Scope

Topics of this section include, but are not limited to, the following:

  • Resilience of AI;
  • Trustworthy AI;
  • Responsible AI;
  • Privacy-preserving technologies;
  • AI risk management;
  • AI safety;
  • Governance of AI;
  • Security testing of AI;
  • Auditing of AI;
  • AI for cyber security;
  • AI for compliance;
  • Penetration testing and AI;
  • AI malware;
  • Policy checking;
  • Regulatory monitoring;
  • AI and threat intelligence;
  • Bias mitigation;
  • Fairness;
  • Differential privacy;
  • Robustness of AI.
