Notice: details about the Research Projects might be updated in the coming days.
Curriculum: Foundational Aspects of Cybersecurity
Additional benefits: -
Website: https://www.iit.cnr.it/en/, https://tsp.iit.cnr.it/en/
Description
This PhD project investigates the potential of large language models (LLMs) to perform fact-checking not only by classifying claims as true or false, but also by generating explanations that align with those produced by professional fact-checkers. Current benchmarks typically reduce fact-checking to a binary or scalar task, overlooking the rich reasoning and evidence found in real-world fact-checking reports.
FACTUAL aims to bridge this gap by evaluating both the verdict accuracy and the explanatory coherence of LLMs. The research will build on datasets that pair claims with full fact-check reports from organizations such as PolitiFact, Full Fact, and Facta, capturing the nuanced reasoning behind assessments. The project will explore prompting and fine-tuning strategies to encourage LLMs to not only decide on a claim’s veracity but also provide justifications grounded in real-world evidence and logic.
Evaluation will focus on factual alignment with human-written explanations, and on the consistency and trustworthiness of model outputs. The ultimate goal is to understand whether LLMs can support scalable, interpretable, and semi-automated fact-checking workflows for journalists, researchers, and civil society.
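As a purely illustrative starting point, the sketch below shows the kind of prompting pipeline the project could build on. The call_llm callable, the verdict labels, and the JSON output format are assumptions made for this example, not a committed design.

```python
import json

VERDICTS = ["true", "mostly-true", "half-true", "mostly-false", "false"]  # illustrative label set

PROMPT = """You are a fact-checking assistant.
Claim: {claim}
Evidence: {evidence}

Return a JSON object with two fields:
  "verdict": one of {labels}
  "justification": a short explanation grounded only in the evidence above."""

def check_claim(claim: str, evidence: str, call_llm) -> dict:
    """Ask an LLM for a verdict plus a human-readable justification.

    `call_llm` is a placeholder for whatever chat/completion API is used;
    it is expected to take a prompt string and return the model's text.
    """
    prompt = PROMPT.format(claim=claim, evidence=evidence, labels=VERDICTS)
    raw = call_llm(prompt)
    try:
        return json.loads(raw)          # structured output if the model complied
    except json.JSONDecodeError:
        return {"verdict": None, "justification": raw}  # fall back to free text

# Example usage (with a stub standing in for a real model):
# result = check_claim("The 2024 deficit doubled.", "Official figures show ...", my_llm)
```

The returned justification can then be compared against the explanation in the corresponding fact-check report when evaluating factual alignment.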
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://www.disa.unisa.it/
Description
This doctoral research investigates how intelligent agents can enhance collaborative reasoning through the construction and use of ontologies and knowledge graphs as structured semantic representations of shared knowledge induced from their interactions, behavior, and decision-making processes.
Ontologies provide the logical backbone for conceptual alignment and symbolic inference, while the corresponding knowledge graphs capture evolving representations of entities, relations, and context. These structures allow agents to align their understanding of complex environments, coordinate actions, and explain their behavior in a transparent and logically consistent manner.
Each agent combines sub-symbolic capabilities (e.g., language models, neural classifiers) with symbolic reasoning modules to support interpretable, verifiable decision-making. This hybrid integration enhances both adaptability and explainability, addressing common issues such as hallucinations, inconsistencies, and opacity. Shared semantic structures serve as a basis for distributed inference, conflict resolution, and coordinated task execution.
The framework will be validated through multi-agent scenarios involving knowledge integration, planning, and strategic decision-making under uncertainty, with a focus on disinformation mitigation and cognitive warfare as application domains. Agents will be tested in detecting, interpreting, and countering coordinated influence operations by reasoning over narrative structures, behavioral patterns, and semantic inconsistencies.
Expected outcomes include a formal methodology for extracting ontologies and knowledge graphs from agent interactions, reusable neuro-symbolic reasoning modules, software prototypes, and contributions to the fields of explainable AI, symbolic reasoning, and multi-agent systems. These results will enable the development of reliable, transparent, and collaborative AI systems for cognitive security, disinformation analysis, and intelligent decision support.
Curriculum: Human, Economic, and Legal Aspects in Cybersecurity
Additional benefits: -
Website: www.smartlex.eu
Description
To be adequately developed, the project requires both legal and technological expertise sufficient to design and initiate, at least at an experimental level (not necessarily in production), the design of an AI system: specifying its characteristics, designing the knowledge base, determining how the knowledge base can feed the training of a local model, and defining how that model can interact with the knowledge base used for retrieval-augmented generation (RAG). Programming skills are key to the project.
It is important that the design and deployment plan be demonstrably in line with the legal and ethical constraints of the Italian legal system.
Curriculum: Data Governance & Protection
Additional benefits: -
Website: https://dawsec.dicom.uninsubria.it/elena.ferrari
Description
The PhD project aims to explore the interplay between generative AI (genAI) and data privacy. The project will address two main research challenges. On the one hand, it will explore the main privacy threats related to the massive usage of personal data by genAI models and develop suitable countermeasures. On the other hand, it will investigate novel solutions that leverage genAI tools to help organizations and companies comply with privacy regulations, for example through automated privacy policy/preference analysis, data classification, and automatic detection of rule violations.
Curriculum: Data Governance & Protection
Additional benefits: -
Website: https://www.motionanalytica.com/, https://www.imtlucca.it
Description
In collaboration with Motion Analytica, this research project explores innovative methods for analyzing human mobility through Big Data. The candidate will study "digital mobility traces" from people or devices to identify recurring patterns across space and time, with attention to geographic and functional areas (e.g., residential or commercial zones). The project integrates contextual data such as social information, weather, and app usage to enrich behavioral modeling.
Advanced deep learning techniques will be applied, including Transformers, Auto-encoders, CNNs, RNNs, GNNs, and embedding methods, as well as generative models like Diffusion Models and VAEs. These approaches will support the detection of mobility patterns, the segmentation of urban functions, and the simulation of realistic scenarios, enhancing both insight and privacy.
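By way of illustration, the sketch below shows one of the listed building blocks: a small autoencoder that embeds daily mobility traces into a latent space suitable for clustering. The input representation (24 hourly positions flattened into a 48-dimensional vector) and all hyperparameters are assumptions made only for this example.

```python
import torch
from torch import nn

class TraceAutoencoder(nn.Module):
    """Toy autoencoder embedding a daily mobility trace into a small latent vector.

    The input is assumed to be 24 hourly positions (lat/lon offsets) flattened to a
    48-dimensional vector; real traces would require proper preprocessing.
    """
    def __init__(self, input_dim: int = 48, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, input_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = TraceAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traces = torch.randn(256, 48)          # stand-in for preprocessed mobility traces
for _ in range(100):                    # short training loop, for illustration only
    recon, _ = model(traces)
    loss = nn.functional.mse_loss(recon, traces)
    opt.zero_grad()
    loss.backward()
    opt.step()
# The latent vectors can then be clustered to segment urban functions or user profiles.
```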
The aim is to uncover latent behaviors and interests, offering a deeper understanding of how people interact with urban spaces throughout the day and seasons. Applications include assessing service demand, travel motivations (e.g., tourism, commuting), and clustering urban zones based on functional similarities.
Aligned with the open-lab framework at IMT, this project will address real-world challenges posed by external stakeholders, bridging academic research and public/private interests. A key focus will be on privacy-by-design, leveraging techniques such as homomorphic encryption and secure multi-party computation to ensure ethical, anonymized, and regulation-compliant data analysis.
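The privacy-by-design techniques named above can be illustrated with a minimal, self-contained sketch of secure aggregation via additive secret sharing, one classical building block of secure multi-party computation. The modulus, the number of parties, and the visit-count scenario are arbitrary choices for the example.

```python
import secrets

Q = 2**61 - 1  # large prime modulus; arbitrary choice for illustration

def share(value: int, n_parties: int) -> list[int]:
    """Split an integer into n additive shares that sum to value mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % Q)
    return shares

def aggregate(all_shares: list[list[int]]) -> int:
    """Each party sums the shares it received; only the overall total is revealed."""
    n_parties = len(all_shares[0])
    partial = [sum(s[p] for s in all_shares) % Q for p in range(n_parties)]
    return sum(partial) % Q

# Three users' visit counts for a zone, split among 3 non-colluding parties:
counts = [12, 7, 30]
shared = [share(c, 3) for c in counts]
assert aggregate(shared) == sum(counts)   # total recovered, individual counts hidden
```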
The project also contributes to the “Generative AI for Urban Data” initiative, aiming to develop synthetic datasets for safe and scalable research on urban mobility.
Curriculum: Software, System, and Infrastructure Security
Additional benefits: Might be negotiated on an individual basis
Website: http://www.fitnesslab.eu/
Description
This research project aims to advance the development of secure, privacy-preserving technologies for data sharing and processing across diverse digital domains, with a particular emphasis on sensitive and regulated environments. The initiative will focus on the integration of privacy-by-design principles from the earliest stages of system development, incorporating appropriate building blocks and reference architectures to facilitate compliant, modular, and reusable solutions. Central to the project is the enhancement of data spaces enabling trusted cross-border and cross-sectoral data sharing, aligned with the objectives of the European Data Strategy.
The project will explore innovative cryptographic techniques, such as homomorphic encryption, multi-party computation, and zero-knowledge proofs, alongside decentralized privacy-enhancing technologies to protect data confidentiality and integrity. Special attention will be given to strengthening the capabilities of small and medium-sized enterprises (SMEs) in securely managing and utilizing personal and industrial data.
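As a small, hedged illustration of the homomorphic-encryption direction, the sketch below uses the python-paillier (phe) package, whose availability in the project's toolchain is an assumption. It shows how an untrusted aggregator can add encrypted values without ever decrypting them.

```python
# pip install phe  (python-paillier: partially homomorphic Paillier encryption)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Two data holders encrypt their values; a third party adds them without decrypting.
enc_a = public_key.encrypt(1250)
enc_b = public_key.encrypt(730)
enc_sum = enc_a + enc_b               # addition on ciphertexts (additive homomorphism)

print(private_key.decrypt(enc_sum))   # 1980, recoverable only by the key holder
```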
By fostering the adoption of privacy-enhancing technologies (PETs), the project will contribute to a more secure and resilient digital ecosystem, supporting the trustworthy deployment of AI and other data-driven services. It will also evaluate the usability and scalability of proposed solutions in real-world scenarios, thus ensuring their applicability across domains such as healthcare, finance, and industrial manufacturing.
Curriculum: Foundational Aspects of Cybersecurity
Additional benefits: -
Website: https://www.saiferlab.ai/people/fabioroli
Description
This project deals with the development of novel methods for evaluating the security of AI systems and assessing their resistance against adversarial attacks. Nowadays, the security of AI systems is empirically estimated through the application of one or more adversarial attacks. However, such empirical attacks have been shown to produce suboptimal security evaluations, thus overestimating the security of AI systems. This research project will investigate how to improve the reliability of security evaluations of AI systems.
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://www.santannapisa.it/it/istituto/tecip
Description
This project focuses on the study and analysis of methodologies to assess and certify the compliance of AI mechanisms, whether based on generative AI or traditional ML, and of the content they generate, with respect to model robustness, the ethics of decisions and generated content, and alignment with EU regulations such as the Data Act and the AI Act. The results of this work aim to lay the basis for a compliance verification pipeline for existing and novel AI mechanisms and AI-based applications.
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://www.dmif.uniud.it, https://mads.uniud.it
Description
Microservice and serverless architectures are prevalent in e-commerce, online banking, and eHealth. Service meshes are crucial for securing these applications, providing a transparent infrastructure layer that manages and secures service communication through intelligent proxies. This enables essential features like traffic management, observability, and security mechanisms such as mTLS and policy enforcement.
This PhD project addresses the critical challenge of ensuring robust security in modern microservice and serverless environments. It aims to advance service mesh security by developing adaptive and resilient defense techniques for cloud-native applications, specifically through innovations in anomaly detection and fine-grained access control.
The project pursues two key directions. First, it focuses on creating sophisticated real-time anomaly detection mechanisms for service meshes using deep learning models. The SF-SOINN architecture, often combined with autoencoders, is a promising candidate thanks to its capacity for continuous learning and efficient anomaly detection, which is crucial given the dynamic nature of service mesh traffic and the need to detect novel attacks.
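SF-SOINN itself is not reproduced here; the following is a deliberately simplified, SOINN-flavoured sketch of online novelty detection over per-service traffic features (distance to the nearest learned prototype), with the feature vector, threshold, and learning rate chosen arbitrarily for illustration.

```python
import numpy as np

class OnlineNoveltyDetector:
    """Very simplified, SOINN-inspired detector: keep a set of prototypes and
    flag a sample as anomalous when it is far from all of them."""
    def __init__(self, threshold: float = 2.0, lr: float = 0.1):
        self.prototypes: list[np.ndarray] = []
        self.threshold = threshold   # distance beyond which a sample is "novel"
        self.lr = lr                 # how fast the nearest prototype adapts

    def update(self, x: np.ndarray) -> bool:
        """Return True if x looks anomalous; otherwise absorb it into the model."""
        if not self.prototypes:
            self.prototypes.append(x.copy())
            return False
        dists = [np.linalg.norm(x - p) for p in self.prototypes]
        i = int(np.argmin(dists))
        if dists[i] > self.threshold:
            self.prototypes.append(x.copy())   # remember the new region of space
            return True
        self.prototypes[i] += self.lr * (x - self.prototypes[i])  # continuous learning
        return False

# Features could be, e.g., [requests/s, mean latency, error rate] per service pair.
det = OnlineNoveltyDetector(threshold=2.0)
for sample in np.random.normal(0, 0.3, size=(500, 3)):
    det.update(sample)
print(det.update(np.array([5.0, 5.0, 5.0])))   # an out-of-distribution burst -> True
```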
Second, the research will investigate techniques for automatically synthesizing security policies, particularly fine-grained access control policies for service meshes. The focus will be on developing methods that take security requirements specified in high-level formalisms (e.g., a temporal logic) to ensure clarity and precision, and automatically generate corresponding access control policies in a language suitable for service mesh deployment, such as Cedar. Different approaches to policy synthesis will be explored, including rule-based systems and machine learning techniques, such as the potential use of Large Language Models (LLMs).
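A toy sketch of the policy-synthesis direction is shown below: rendering a high-level allow rule as Cedar-style policy text via templating. The entity types (ServiceAccount, Operation, Service) are illustrative rather than a fixed schema, and in the project the intermediate rule would be derived automatically from a temporal-logic specification rather than written by hand.

```python
CEDAR_TEMPLATE = """permit(
  principal == ServiceAccount::"{caller}",
  action == Operation::"{verb}",
  resource == Service::"{target}"
);"""

def synthesize_policy(rule: dict) -> str:
    """Render one high-level allow rule as Cedar-style policy text.

    `rule` is a toy intermediate representation used only for this sketch.
    """
    return CEDAR_TEMPLATE.format(caller=rule["caller"], verb=rule["verb"], target=rule["target"])

rules = [
    {"caller": "frontend", "verb": "GET",  "target": "orders"},
    {"caller": "billing",  "verb": "POST", "target": "payments"},
]
print("\n\n".join(synthesize_policy(r) for r in rules))
```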
Curriculum: Software, System, and Infrastructure Security
Additional benefits: The PhD candidate may be eligible for financial support for technical and scientific editorial work, as well as opportunities to collaborate with third parties, subject to prior approval by the FUB General Directorate and the PhD Academic Board.
Website: https://fub.it, https://nam.cnit.it
Description
This PhD project, funded by Fondazione Ugo Bordoni and part of the prestigious Cysec XLI doctoral program, aims to develop a cutting-edge penetration testing and security assessment framework tailored specifically for 5G networks. Its core mission is to proactively identify, exploit, and mitigate critical 5G-specific vulnerabilities before they can be leveraged by malicious actors. The research targets four key technical domains:
• 5G-native vulnerabilities (e.g. signaling protocols, communication layers, SDN/NFV-based network functions, and misconfigured security components).
• Advanced attack modeling, including the simulation of realistic threat scenarios and offensive techniques.
• AI-driven penetration testing, leveraging machine learning and threat intelligence to enhance vulnerability detection, exploit path discovery, and prioritization.
• Proactive defense strategies, using artificial intelligence to anticipate threats and support adaptive mitigation in near real-time.
The methodological approach starts with a theoretical foundation: an in-depth study of 5G architecture, attack surfaces, penetration testing methodologies, threat modeling techniques, and AI-based approaches to 5G security, together with familiarization with the experimental infrastructure, ensuring a solid understanding of the tools, environments, and configurations required for effective testing and simulation. This phase is followed by an experimental phase mainly dedicated to the development of a realistic, modular 5G testing environment, the implementation and simulation of realistic and sophisticated attacks, the validation of the framework, and the creation of AI-powered tools for automated vulnerability assessment and anomaly detection.
Curriculum: Data Governance & Protection
Additional benefits: The PhD candidate may be eligible for financial support for technical and scientific editorial work, as well as opportunities to collaborate with third parties, subject to prior approval by the FUB General Directorate and the PhD Academic Board.
Website: https://www.fub.it/competenza/cloud-e-dati/, https://dii.univpm.it/en-gb/cybersec_en/
Description
The growing adoption of cloud solutions in Public Administration (PA) raises crucial concerns about the security of sensitive data, especially in light of emerging threats stemming from quantum computing. Current cryptographic protocols (e.g., RSA, ECC) will be vulnerable to quantum algorithms such as Shor’s, making a transition to quantum-safe approaches increasingly urgent.
Institutions like NIST are leading standardization efforts in post-quantum cryptography (PQC) to ensure long-term cybersecurity. However, securely migrating PA cloud infrastructures requires not only integrating PQC algorithms, but also rethinking data management methods and infrastructure design.
This research aims to define a cloud architecture for PA that supports a secure and effective transition to quantum-resistant systems. The key objectives include:
• Identifying the most suitable PQC algorithms for PA use cases, aligned with NIST standards.
• Exploring integration strategies for PQC within existing cloud infrastructures with minimal operational disruption.
• Defining hybrid cryptography models to enable gradual, backward-compatible transitions.
• Developing data and digital certificate management approaches compatible with PQC standards.
• Assessing the potential impacts of such transitions on security and performance.
The experimental phase is currently under design and will be refined through possible collaboration with interested PA partners. This would allow validation of proposed models and strategies in realistic settings and facilitate knowledge transfer to institutions undergoing similar transitions. The project aligns with the ACN Cybersecurity Research and Innovation Agenda 2023–2026, particularly in the areas of data security, cryptography, and trusted information sharing, contributing both to technological innovation and public sector resilience against future quantum threats.
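As a minimal sketch of the hybrid cryptography objective listed above, the snippet derives a session key from both a classical X25519 exchange and a post-quantum KEM secret, so that the key remains safe if either primitive is broken. The pyca/cryptography package is assumed to be available, and the post-quantum secret is a random placeholder standing in for an ML-KEM encapsulation, since the concrete PQC library is still to be chosen.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Classical part: ordinary X25519 Diffie-Hellman between two parties.
a_priv, b_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = a_priv.exchange(b_priv.public_key())
assert classical_secret == b_priv.exchange(a_priv.public_key())

# Post-quantum part: placeholder for the shared secret of an ML-KEM encapsulation.
pqc_secret = os.urandom(32)  # stand-in only; a real deployment would use a standardized KEM

# Hybrid derivation: the session key depends on BOTH secrets.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"pa-cloud-hybrid-kex"
).derive(classical_secret + pqc_secret)
print(session_key.hex())
```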
Curriculum: Foundational Aspects of Cybersecurity
Additional benefits: -
Website: www.unica.it, www.saiferlab.ai
Description
Malware analysis still poses a huge challenge: newer versions are more sophisticated than older ones, and malicious actions are often hidden using multiple obfuscation and information-hiding approaches. Over the past years, machine learning has proved effective in malware detection and classification into families, but understanding the technical details of a malware sample still requires substantial manual effort by human experts. Information extracted through malware analysis includes the vulnerabilities that are exploited, the malicious actions performed, relationships with other software components, network connections, and so on, which later form the set of "Indicators of Compromise".
As the amount of new malware increases steadily, there is an urgent need for semi-automatic mechanisms that help human experts identify the malicious components, the pre-conditions for the malware to be effective, and the post-conditions after the malware executes. Large Language Models are a promising tool for analysing executables, scripts, and source code, thanks to the huge code base used during their training. This project will investigate different settings for using LLMs and other machine learning tools for malware analysis. On one hand, different representations of malware will be investigated to understand the amount of prior knowledge needed to achieve reliable results. On the other hand, different LLMs will be used, either directly, relying on the training data used by their developers, or through fine-tuning or RAG.
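One possible setting can be sketched as follows: extract printable strings from a sample and ask an LLM to propose candidate Indicators of Compromise for expert review. The query_llm callable is a placeholder for whichever model, fine-tuned variant, or RAG pipeline is eventually selected; its output would still require validation by human analysts.

```python
import re

def extract_strings(path: str, min_len: int = 6) -> list[bytes]:
    """Naively pull printable ASCII strings out of a binary, as `strings` would."""
    data = open(path, "rb").read()
    return re.findall(rb"[ -~]{%d,}" % min_len, data)[:200]  # cap the prompt size

def triage(path: str, query_llm) -> str:
    """Ask an LLM (placeholder callable) to suggest candidate IoCs from the strings."""
    strings = b"\n".join(extract_strings(path)).decode(errors="replace")
    prompt = (
        "The following strings were extracted from a suspicious executable.\n"
        "List candidate indicators of compromise (URLs, IPs, registry keys, file paths) "
        "and the malicious capabilities they may suggest.\n\n" + strings
    )
    return query_llm(prompt)   # expert analysts validate the answer afterwards

# report = triage("sample.bin", my_llm)
```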
Curriculum: Software, System, and Infrastructure Security
Additional benefits: Ca' Foscari provides housing facilities for incoming students (10-minute bus ride from the historic city center of Venice)
Website: https://www.unive.it/pag/28183
Description
The research project focuses on the security of cryptographic software and hardware, aiming to explore disciplined programming practices that reduce vulnerabilities arising from the improper use of cryptographic libraries and mechanisms. The goal is to bridge the gap between cryptographic theory and practical development by designing automated tools and techniques that support secure programming. This includes providing developers with guidance and feedback to help avoid common pitfalls and follow best practices when using cryptographic components.
Depending on the student’s background and interests, the project may take different directions. From a theoretical perspective, it may contribute to the modeling and certification of cryptographic systems using automated tools and interactive proof assistants, enhancing the formal assurance of critical software. From a practical standpoint, the student will analyze real-world implementations to uncover vulnerabilities and propose effective solutions. A key area of interest is side-channel analysis, focusing on detecting and mitigating leaks caused by timing variations, power consumption, or other hardware-level behaviors that could reveal sensitive data. Another critical aspect is the misuse of cryptographic APIs, such as incorrect parameters, insecure algorithm choices, or improper formats, often stemming from limited awareness or reliance on manual implementations.
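The timing side channel mentioned above can be illustrated with a small self-contained experiment: a naive byte-by-byte token comparison exits early on the first mismatch, while hmac.compare_digest is designed to take roughly constant time. The token length and iteration counts are arbitrary, and absolute timings will differ across machines.

```python
import hmac
import time
import secrets

SECRET = secrets.token_bytes(32)

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: execution time depends on where the first mismatch is."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def time_guess(guess: bytes, compare, rounds: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(rounds):
        compare(guess, SECRET)
    return time.perf_counter() - start

wrong_first_byte = bytes([SECRET[0] ^ 1]) + SECRET[1:]
wrong_last_byte = SECRET[:-1] + bytes([SECRET[-1] ^ 1])

# The naive comparison leaks: a late mismatch takes measurably longer than an early one.
print("naive    early/late:", time_guess(wrong_first_byte, naive_equal), time_guess(wrong_last_byte, naive_equal))
# hmac.compare_digest takes (roughly) the same time in both cases.
print("constant early/late:", time_guess(wrong_first_byte, hmac.compare_digest), time_guess(wrong_last_byte, hmac.compare_digest))
```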
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://www.unipa.it/, https://sites.unipa.it/networks
Description
With the increasing adoption of machine learning (ML) in critical sectors such as industry, finance, healthcare, and cybersecurity itself, the reliability of such systems has become a major concern. Although the performance of many ML algorithms is now excellent in controlled (test) environments, most learning models have inherent vulnerabilities that can become critical in open and potentially hostile settings. In real-world scenarios, for instance, adversaries may want to compromise the efficiency and accuracy of the system through data manipulation and targeted attacks, including model extraction, membership inference, evasion, and data poisoning.
The research activity in the field of machine learning security aims to study these and related topics, considering different and complex application scenarios such as cyber-threat analysis, detection and countering of disinformation campaigns, induction and detection of recommendation bias, and, more generally, any situation in which decisions are made based on supervised or unsupervised learning techniques. The candidate interested in this field will study the technical and theoretical characteristics of new attack techniques, analysing their modes of operation, the vulnerabilities they exploit, and their effects on the behaviour of the models. Furthermore, it will be essential to design and implement effective defence mechanisms capable of mitigating or preventing such attacks. This entails developing robust solutions that can adapt to different application contexts and threat scenarios.
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://www.iit.cnr.it/en/, https://ui.iit.cnr.it/en
Description
The increasing proliferation of sensitive data and the emergence of quantum networks open new opportunities and challenges for secure collaborative learning. This PhD program focuses on the design and analysis of secure distributed systems composed of multiple quantum-enabled devices with limited computational capabilities, interconnected via quantum communication links. The objective is to enable these devices to cooperatively train machine learning models while preserving data confidentiality, integrity, and system resilience.
Unlike approaches to distributed quantum computing that assume powerful quantum processors, this research explores architectures where small-scale quantum devices (e.g., quantum sensors, edge nodes, or processors with shallow circuits) contribute partial computations in a federated or multi-party setting. Emphasis is placed on the development of cryptographic and protocol-level mechanisms that leverage quantum properties, such as entanglement and quantum key distribution (QKD), to ensure secure coordination, authenticated communication, and privacy-preserving data exchange during the training process.
Core challenges include the design of quantum-assisted secure aggregation protocols, quantification of information leakage in noisy quantum channels, robustness of privacy guarantees under adversarial behavior, and the hybrid integration of classical and quantum cryptographic techniques. The research also aims to identify realistic deployment scenarios for near-term quantum network infrastructures and assess the practical performance-security tradeoffs in these systems. This program lies at the intersection of quantum communication, distributed systems, machine learning, and cybersecurity, and contributes to building the foundations for trustworthy quantum-enhanced collaborative intelligence.
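The quantum-assisted protocols themselves cannot be captured in a short snippet, but the classical secure-aggregation pattern they would extend can: each pair of nodes shares a random mask that cancels when the coordinator sums the masked model updates. In the envisioned setting the pairwise seeds could be derived from QKD keys; here they are plain integers, and all sizes are stand-ins chosen for illustration.

```python
import numpy as np

def masked_update(node_id: int, update: np.ndarray, pairwise_keys: dict) -> np.ndarray:
    """Add +mask for every pair where this node has the lower id, -mask otherwise.

    `pairwise_keys[(i, j)]` is a seed shared between nodes i and j; in the target
    setting it could come from QKD rather than a classical key exchange.
    """
    masked = update.astype(np.float64).copy()
    for (i, j), seed in pairwise_keys.items():
        if node_id not in (i, j):
            continue
        mask = np.random.default_rng(seed).normal(size=update.shape)
        masked += mask if node_id == min(i, j) else -mask
    return masked

rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]                 # local model updates
keys = {(0, 1): 11, (0, 2): 22, (1, 2): 33}                       # pairwise shared seeds
masked = [masked_update(i, u, keys) for i, u in enumerate(updates)]

# The aggregator sees only masked vectors, yet their sum equals the true sum:
print(np.allclose(sum(masked), sum(updates)))   # True: the masks cancel pairwise
```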
Curriculum: Software, System, and Infrastructure Security
Additional benefits: Canteen, access to FBK Academy, discounts for the transport system, listening and psychological support desk.
Website: https://www.fbk.eu/en/, https://www.fbk.eu/en/cybersecurity/
Description
Software systems are continuously and rapidly evolving, requiring engineers to address increasingly complex and multi-dimensional aspects. These include, for example, the integration of Artificial Intelligence (AI), compliance with new and evolving EU regulations (e.g., the EU AI Act, NIS2, GDPR), and ensuring that systems are secure, ethical, and trustworthy.
To meet these demands, current practices in Secure Software Engineering and DevSecOps (Development, Security, and Operations) must be extended to address these new challenges, especially when considering DevSecOps for Cloud Native Applications, where the attack surface spans multiple layers (e.g., code, container, deployment, orchestrator). As Shannon Lietz, co-author of the "DevSecOps Manifesto", puts it: "The purpose and intent of DevSecOps is to build on the mindset that everyone is responsible for security with the goal of safely distributing security decisions at speed and scale to those who hold the highest level of context without sacrificing the safety required."
DevSecOps is an approach to automating the integration of cybersecurity processes at every phase of the software development lifecycle, from initial design through integration, testing, deployment, and software delivery. It represents a natural and necessary evolution in the way development organizations approach security. For Cloud Native Applications, security concerns multiple levels (code, container, deployment, orchestrator, etc.), and any approach to introducing security should consider all of them. In this context, the thesis aims to investigate one or more of the following topics: Securing and Monitoring the Software Supply Chain in the SDLC; Development of novel techniques for Secure Software Engineering; Application of AI to DevSecOps to support configuration, diagnosis, problem resolution, or compliance with regulations (EU AI Act, NIS2, GDPR, NIST, etc.); and Development of Trustworthy and Transparent Software Systems.
Curriculum: Data Governance & Protection
Additional benefits: -
Website: https://www.wishinnovation.it
Description
The project aims to investigate data control solutions and scenarios to support data sharing in a selective and secure manner, while ensuring functionality, efficiency, and scalability. The data protection solutions developed will enable new application scenarios and introduce new opportunities for sharing data in a controlled manner, while respecting privacy and access restrictions and ensuring the integrity of data and analysis results.
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://www.iit.cnr.it/, https://tsp.iit.cnr.it/en/
Description
The MFA2C project aims to study, design, and develop mechanisms and protocols to ensure data security and privacy in Vehicle-to-Infrastructure (V2I) communications and, more generally, Vehicle-to-Everything (V2X) communications.
Vehicles, considered cyber-physical systems, are vulnerable to both software and hardware attacks. The solutions developed must integrate not only established security mechanisms but also strategies involving the use of physical communication channels such as wireless radio frequency (RF), optical (visible light or infrared), and acoustic (ultrasound or audible sound) channels.
For machine identity management, the project will adopt the Self-Sovereign Identity (SSI) paradigm, which enables a decentralized and secure approach. This will allow vehicles to autonomously manage their identifying information during interactions with third parties. The integration of PUFs (Physically Unclonable Functions) with the SSI paradigm will further enhance security by providing each vehicle with a secure and non-replicable identity, which is essential for authentication and protection against malicious attacks.
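A toy sketch of the challenge-response flow enabled by a PUF-backed identity is given below. The PUF is emulated with an HMAC over a device-unique secret, which is only a software stand-in: a real PUF derives its responses from physical manufacturing variations and stores no secret at all.

```python
import hmac
import hashlib
import secrets

class SimulatedPUF:
    """Software stand-in for a PUF: deterministic per-device challenge->response map."""
    def __init__(self):
        self._device_unique = secrets.token_bytes(32)   # a real PUF has no stored secret
    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._device_unique, challenge, hashlib.sha256).digest()

# Enrollment: the verifier records a few challenge/response pairs (CRPs) per vehicle.
vehicle_puf = SimulatedPUF()
crp_store = {}
for _ in range(5):
    c = secrets.token_bytes(16)
    crp_store[c] = vehicle_puf.respond(c)

# Authentication: the infrastructure sends a stored challenge; each CRP is used only once.
challenge, expected = crp_store.popitem()
claimed = vehicle_puf.respond(challenge)
print("authenticated:", hmac.compare_digest(claimed, expected))
```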
In accordance with Article 21 of the NIS2 Directive, the project includes the adoption of secure identification solutions and multi-factor and continuous authentication for effective risk management.
MFA2C focuses on dynamic vehicular networks, where autonomous physical agents (vehicles) and humans coexist, making it necessary to also consider the impact of the proposed solutions on safety.
In the context of transport infrastructures, authentication plays a key role in accessing various services, including communication with road infrastructure, traffic light priority for emergency vehicles, electric vehicle charging, and other advanced applications for smart mobility.
Curriculum: Foundational Aspects of Cybersecurity
Additional benefits: -
Website: https://clem.dii.unisi.it/~vipp/
Description
This research project explores the theoretical and practical challenges of applying Artificial Intelligence (AI) techniques in adversarial and hostile environments, focusing on security-critical scenarios. The main goal is to investigate how adversaries can compromise AI systems - particularly during inference - by generating adversarial examples designed to cause malfunctions or misclassifications. The project begins with a rigorous taxonomy of threats across the AI lifecycle (from data collection to deployment), followed by a detailed review of current attack and defense methods.
The core theoretical component focuses on analyzing the reasons for the ubiquitous existence of adversarial examples. The project also extends this analysis to complex AI systems, including generative models and inference mechanisms beyond classification.
Parallel to this, a strong experimental component will validate theoretical findings in real-world applications, primarily in biometric authentication and multimedia forensics - domains highly vulnerable to adversarial manipulation.
The second phase of the project addresses defenses, particularly adversarial training and verifiable robustness, examining their robustness-accuracy tradeoffs and dependency on the data distribution. Finally, practical attacks will be designed and tested, particularly those that work in black-box settings and survive real-world, physical conditions, contributing to a deeper understanding of AI's reliability in adversarial scenarios.
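As a concrete instance of the adversarial examples discussed above, the sketch below implements the canonical Fast Gradient Sign Method (FGSM) in PyTorch. The classifier and data are random placeholders, so the printed numbers only show how the attack is wired, not realistic accuracies.

```python
import torch
from torch import nn

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """Fast Gradient Sign Method: one gradient-sign step that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Placeholder classifier and data, just to exercise the attack end to end.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(64, 1, 28, 28)
y = torch.randint(0, 10, (64,))

clean_acc = (model(x).argmax(1) == y).float().mean()
x_adv = fgsm(model, x, y, eps=0.1)
adv_acc = (model(x_adv).argmax(1) == y).float().mean()
print(f"accuracy clean={clean_acc:.2f}  adversarial={adv_acc:.2f}")
```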
Curriculum: Data Governance & Protection
Additional benefits: -
Website: https://www.unimi.it/en, https://spdp.di.unimi.it
Description
Data are the central resource for any modern society. The availability of highly performing systems and services (e.g., cloud/fog/edge/IoT) for gathering, storing, and processing data, as well as of efficient machine learning and AI-based solutions operating on large data collections, brings great benefits at the personal, business, economic, and social levels. On the other hand, data may be sensitive or company-confidential and cannot be shared openly, and their confidentiality, as well as their integrity, should be guaranteed even when not fully trusted parties are involved in data storage or processing.
The goal of the project is to contribute to the development of advanced scientific and technological solutions providing the different actors (e.g., individuals, companies, institutions) with control over their data in the various data release, sharing, and analysis scenarios. The research is in the area of computer science and can entail the investigation of different scientific and technological issues contributing to solving the problem of protecting data in emerging scenarios. Technological aspects that can be investigated include: data modeling for enforcing security and privacy restrictions; access control languages and models; data protection in release, storage, or computation by untrusted parties; data integrity; data security and privacy in artificial intelligence scenarios; and AI-based security and privacy solutions.
Curriculum: Foundational Aspects of Cybersecurity
Additional benefits: -
Website: https://www.iit.cnr.it/, https://ui.iit.cnr.it/en/
Description
Public datasets have widely fueled centralized AI, yet vast untapped data still resides on personal devices, sensors, and embedded systems—data we cannot centralize due to privacy, ownership, and latency constraints. Leveraging this distributed data requires a decentralized, pervasive AI approach, enabling local and collaborative learning at the point of data generation. However, such interconnected intelligence networks are vulnerable to disruptions, node failures, and adversarial threats.
This PhD project focuses on enhancing the resilience and robustness of decentralized learning systems under challenging conditions. The candidate will explore techniques for detecting and mitigating malicious or faulty nodes, developing robust data aggregation strategies, and ensuring reliable knowledge transfer among nodes. Drawing from federated learning, network science, and robust optimization, the research approach—encompassing theoretical modeling, algorithmic innovation, simulations, and practical validations—will be adapted to the candidate’s expertise and interests.
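One of the robust aggregation strategies mentioned above can be sketched in a few lines of NumPy: a coordinate-wise trimmed mean that discards the most extreme updates, so that a small number of malicious or faulty nodes cannot pull the aggregate arbitrarily far. The trimming fraction and the toy poisoned updates are arbitrary choices made for illustration.

```python
import numpy as np

def trimmed_mean(updates: np.ndarray, trim_ratio: float = 0.2) -> np.ndarray:
    """Coordinate-wise trimmed mean over node updates (rows = nodes).

    For each coordinate, drop the k smallest and k largest values before averaging,
    which bounds the influence of up to k Byzantine nodes per coordinate.
    """
    n = updates.shape[0]
    k = int(n * trim_ratio)
    sorted_vals = np.sort(updates, axis=0)
    return sorted_vals[k:n - k].mean(axis=0)

rng = np.random.default_rng(42)
honest = rng.normal(0.0, 0.1, size=(8, 5))          # eight honest local updates
poisoned = np.full((2, 5), 100.0)                    # two nodes send wildly wrong values
updates = np.vstack([honest, poisoned])

print("plain mean  :", updates.mean(axis=0))         # dragged toward the attackers
print("trimmed mean:", trimmed_mean(updates))        # stays close to the honest updates
```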
This project aims to advance decentralized AI, promoting secure, resilient, and efficient distributed intelligence networks.
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: www.icar.cnr.it
Description
The proposed PhD project is positioned within the current context of radical transformation driven by Artificial Intelligence (AI), and in particular by Large Language Models (LLMs), across strategic sectors such as healthcare, finance, manufacturing, and public administration. In these areas, the growing adoption of intelligent systems opens up transformative opportunities while simultaneously introducing new attack surfaces and risk vectors. Emerging threats include malicious attacks, semantic manipulations, sensitive data exfiltration, prompt injection, automated social engineering, unauthorized use in restricted contexts, as well as the use of AI for generating offensive code and planning offensive campaigns.
The project aims to develop new theoretical and practical research directions for AI protection, with a particular focus on machine learning, deep learning, autonomous systems, and advanced language models. The candidate will be expected to study, design, and evaluate innovative solutions to ensure the integrity, confidentiality, and availability of AI systems, both in real-world and simulated scenarios.
Objectives include: the analysis of specific threats (adversarial attacks, data poisoning, model stealing, backdoor attacks); the development of countermeasures and defense techniques (adversarial robustness, watermarking, anomaly detection); the control of information flows in LLM-driven environments to prevent data leakage, unsafe content generation, or the propagation of bias; and the exploration of new foundational architectures, particularly those based on transformers specialized for security.
Curriculum: Foundational Aspects of Cybersecurity
Additional benefits: -
Website: https://cybersecurity.dimes.unical.it
Description
Many critical operations carried out daily by citizens involve the use of strong authentication protocols (e.g., e-banking). These protocols may be flawed in their design or implementation, with obvious consequences for the security of citizens and their data. The aim of the project is to develop a new methodology capable of automating the formal verification of protocols starting from their client-side implementation (e.g., browser agents or mobile apps). This methodology will leverage advanced code analysis techniques to reconstruct protocol flows from client-server interactions, AI tools to generate formal models from execution traces, model checking to identify vulnerabilities, and testing tools to generate exploits for discovered vulnerabilities.
The project aims to develop a framework for the formal analysis of strong authentication protocols directly from the client application code. The goal is to verify the security of protocols used in mobile applications and identify any vulnerabilities that may compromise user authentication.
Project Phases
1. Reverse Engineering and Modeling
• Automated extraction of the client app’s code using decompilation and dynamic slicing techniques.
• Identification of critical APIs and backend interactions to reconstruct the authentication protocol.
2. Translation into Formal Models
• Conversion of the extracted protocol into a formal language (e.g., Tamarin, ProVerif).
• Definition of security properties such as authentication, confidentiality, and forward secrecy.
3. Verification and Validation
• Automated protocol analysis to detect vulnerabilities such as replay attacks, man-in-the-middle, and key compromise impersonation.
• Experimental validation on real applications through dynamic and static testing.
The project leverages several enabling technologies, including:
• Advanced formal analysis tools (Tamarin, ProVerif, Maude).
• Reverse engineering frameworks (Frida, JADX, Ghidra) for protocol extraction.
• Expertise in cybersecurity and collaboration with research centers and companies.
Final Outcome
The output will be an automated tool that enables the verification of authentication protocol security directly from app code, making a significant contribution to the security of digital services.
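Alongside the formal analysis in Tamarin or ProVerif, lightweight checks on reconstructed traces can already surface red flags. The sketch below scans a toy trace for server nonces reused across sessions, a classic precondition for replay attacks; the trace format is invented for illustration only, and the actual verification would be performed on the formal models.

```python
from collections import defaultdict

def find_nonce_reuse(trace: list[dict]) -> list[tuple[str, str]]:
    """Flag the same server nonce appearing in more than one session.

    `trace` is a toy list of reconstructed protocol messages (e.g., produced by
    instrumenting the client app); each entry has a session id and the nonce it carries.
    """
    seen = defaultdict(set)
    findings = []
    for msg in trace:
        nonce, session = msg["nonce"], msg["session"]
        if nonce in seen and session not in seen[nonce]:
            findings.append((session, nonce))   # nonce reused across sessions -> replay risk
        seen[nonce].add(session)
    return findings

trace = [
    {"session": "s1", "nonce": "a1b2"},
    {"session": "s1", "nonce": "a1b2"},   # same session, same nonce: not flagged here
    {"session": "s2", "nonce": "a1b2"},   # reused across sessions: suspicious
]
print(find_nonce_reuse(trace))            # [('s2', 'a1b2')]
```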
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://www.leonardo.com/
Description
This project will develop an AI framework for the autonomous generation and execution of multi-stage cyber-attacks within complex simulated environments. The approach combines symbolic AI planning (logic/graph-based reasoning) for strategic decision-making with Large Language Models (LLMs) and autonomous agents for tactical execution and dynamic plan adaptation. This hybrid architecture enables long-term attack planning while maintaining flexibility in responding to unexpected changes, such as detection or defensive countermeasures.
Validation will take place in cyber ranges and digital twin environments, with a focus on:
- Modeling autonomous adversarial agents capable of emulating realistic multi-step intrusions aligned with the MITRE ATT&CK framework.
- Designing adaptive algorithms for real-time plan refinement under uncertainty and partial observability.
- Measuring effectiveness, stealth, and robustness of attack strategies.
The research methodology includes:
- A state-of-the-art review on AI planning, neural-symbolic systems, and red teaming frameworks.
- Design and implementation of a layered architecture.
- Experimental validation using simulated scenarios relevant to critical infrastructure protection.
Expected outcomes include a functional and modular hybrid AI framework compatible with existing simulators, scientific contributions on AI-driven decision-making in cyber-physical contexts, and an operational demonstrator for training and defensive strategy evaluation.
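The symbolic planning layer can be sketched with a toy example: breadth-first search over a small attack graph whose transitions are labelled with ATT&CK-style techniques, returning the shortest multi-step path from initial access to the objective. The graph, states, and technique labels are illustrative only, and the LLM-driven tactical layer is deliberately not modelled here.

```python
from collections import deque

# Toy attack graph: state -> list of (technique, next_state). Contents are illustrative.
ATTACK_GRAPH = {
    "external":         [("T1566 phishing", "user_workstation")],
    "user_workstation": [("T1068 priv-esc", "local_admin"), ("T1021 lateral", "file_server")],
    "local_admin":      [("T1003 cred-dump", "domain_creds")],
    "file_server":      [("T1080 taint-share", "user_workstation")],
    "domain_creds":     [("T1021 lateral", "domain_controller")],
}

def plan_attack(start: str, goal: str) -> list[str] | None:
    """Breadth-first search: shortest sequence of techniques reaching the goal state."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for technique, nxt in ATTACK_GRAPH.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [technique]))
    return None   # goal unreachable from this starting point

print(plan_attack("external", "domain_controller"))
# ['T1566 phishing', 'T1068 priv-esc', 'T1003 cred-dump', 'T1021 lateral']
```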
Curriculum: Human, Economic, and Legal Aspects in Cybersecurity
Additional benefits: -
Website: https://www.unifi.it
Description
The proposed project addresses the urgent need to innovate and harmonise the regulatory framework for cybersecurity, integrating legal, ethical, social, and economic perspectives into a coherent and strategic policy approach. It aims to train a researcher capable of analysing the evolving EU cybersecurity landscape—particularly the implementation of the NIS2 Directive, the Cyber Resilience Act, and related measures—while assessing their interaction with national legal systems and institutional practices.
The project focuses on new regulatory models, the role of independent authorities, and the challenges posed by the distribution of responsibilities across public and private actors. Particular attention will be paid to the protection of fundamental rights (especially privacy and non-discrimination), risk governance, due diligence obligations, and ethical standards in cybersecurity decision-making.
The researcher will contribute to the development of a digital database mapping legislation, case law, and regulatory actions in the field, supporting both empirical research and policy reflection. The project will be carried out at the CybeRights Centre and the European Observatory on Digital Regulation (EODiR) at the University of Florence.
Curriculum: Software, System, and Infrastructure Security
Additional benefits: -
Website: https://serlab.di.uniba.it
Description
Technological evolution and global digitization have radically changed the way relevant information is collected, analysed, interpreted and disseminated to support strategic decision-making processes and operational activities. At the heart of this transformation lies cyber intelligence, which has reshaped the landscape of national and international security. Consequently, there is a need to collect and analyse the techniques, tactics and procedures employed by each threat actor, to understand the attack vectors and modalities.
This enables the identification and implementation of 'active' security strategies to facilitate effective defence, as well as the development of effective intelligence strategies to enhance decision-making processes in various contexts. These strategies rely on appropriate countermeasures that integrate not only the technological dimension but also psychological, social, and legal aspects.
Therefore, given the multidisciplinary nature of cyber phenomena, this proposal aims to define multidisciplinary methods, techniques, and tools capable of operating within the context of Cyber Social Security, to support cyber intelligence processes that can reinterpret the traditional functions of Detection, Response, and Prevention.