
Formal Methods and Tools Laboratory

To err is human, but to really foul things up you need a computer
(William E. Vaughan - 1969)

Formal Methods

Formal Methods
Formal Methods play a major role in computer engineering and in the broader context of software and systems engineering, where the interaction of machines with humans is also taken into consideration. In fact:
All engineering disciplines make progress by employing mathematically based notations and methods. Research on formal methods follows this model and attempts to identify and develop mathematical approaches that can contribute to the task of creating computer systems (both their hardware and software components).
Cliff Jones, Thinking Tools for the Future of Computing Science [1]
The very origins of FM go back to Mathematical Logic, or, more precisely, to the branch of Mathematical Logic that gave rise to Logic in Computer Science (LICS) and Theoretical Computer Science (TCS). Several fields of LICS and TCS have contributed to the development of the foundations of FM, such as language and automata theory, Petri nets, programming/specification language syntax and semantics, and program verification.
More recently, solid contributions to the specification and analysis of systems, in particular concurrent/distributed systems, have been provided. Among them we recall process algebras, temporal logics, and model checking.
It occurred to me that often complex communication and synchronization protocols were specified by state machines and that an efficient algorithm for checking formulas on models could be used to see if the state machines satisfied their specifications. This was my “Eureka moment”! I realized that the important problem for verification was not the synthesis problem but the problem of checking formulas on finite models. I began to work on a depth-first search algorithm for the Model Checking problem.
Edmund M. Clarke (1945-2020), The Birth of Model Checking [2]
The Formal Methods and Tools (FMT) lab has longstanding experience in the development and application of formal notations, methods and software support tools for the specification, design and verification of complex systems. These efforts are typically based on the foundational concepts and techniques of process algebras, temporal logics, spatial logics, and model checking. FMT is also active in requirements engineering and various branches of software engineering, as well as in computer ethics and in the area of information technology and society. Important application fields concern the railway domain and, more recently, medical image analysis.
The FMT Lab is part of the Institute of Information Science and Technologies (ISTI), one of the institutes of the Italian National Research Council (CNR).

[1] Cliff Jones, Thinking Tools for the Future of Computing Science. In R. Wilhelm, editor, Informatics 10 Years Back 10 Years Ahead, Lecture Notes in Computer Science, volume 2000. Springer, 2000.
[2] Edmund M. Clarke, The Birth of Model Checking. In O. Grumberg and H. Veith, editors, 25 Years of Model Checking: History, Achievements, Perspectives, Lecture Notes in Computer Science, volume 5000. Springer, 2008.

Research Topics

Verification tools and techniques in Formal Methods encompass automated and semi-automated methods to check and prove that a system satisfies a given formal specification, using mathematics and logic. The activity typically includes modeling, specifying properties, and applying verification engines to detect errors or prove correctness.
Key aspects:
  • Model checking: automatically explores the system’s state space against a temporal logic specification (e.g., LTL, CTL) to verify safety and liveness properties or to produce counterexamples when a property is violated. This technique shines in finding defects early but can struggle with very large systems due to state explosion.
  • Deductive verification: generates proof obligations from the system and its specifications and discharges them using interactive or automatic theorem provers. This approach can scale to complex properties but may require formal annotations and substantial reasoning.
  • Abstract interpretation and symbolic methods: use abstractions to reason about infinite or large state spaces, enabling analysis of data-dependent properties while controlling precision.
  • Symbolic execution and SMT-based verification: reason about program paths symbolically and discharge constraints using satisfiability modulo theories solvers, often aiding in proving invariants or finding input conditions that violate properties.
  • Semi-formal modeling techniques: combine formal rigor with pragmatic modeling (often simplified models) to identify defects without incurring the full overhead of formal proofs.
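As a concrete illustration of the first bullet, here is a minimal explicit-state model checker for a safety property, sketched in Python: it explores the reachable state space breadth-first and returns a counterexample trace when a bad state is found. The two-process "mutual exclusion without a lock" system is a toy example invented for this sketch, not a tool used by the lab.

```python
from collections import deque

def check_safety(init, transitions, is_bad):
    """Explore the reachable state space breadth-first; return None if the
    safety property holds, or a counterexample trace to a bad state."""
    frontier = deque([(init, [init])])
    visited = {init}
    while frontier:
        state, trace = frontier.popleft()
        if is_bad(state):
            return trace  # counterexample: path from init to the bad state
        for nxt in transitions(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None  # no bad state is reachable

# Toy system: two processes, each cycling idle -> trying -> critical -> idle,
# with no lock, so both may end up in the critical section at once.
def transitions(state):
    p, q = state
    step = {"idle": "trying", "trying": "critical", "critical": "idle"}
    return [(step[p], q), (p, step[q])]

cex = check_safety(("idle", "idle"), transitions,
                   lambda s: s == ("critical", "critical"))
print(cex)  # a trace ending with both processes in the critical section
```

The counterexample trace is exactly the kind of diagnostic output mentioned under "Typical outputs and goals" below; real model checkers replace the explicit set of visited states with symbolic or partial-order techniques to fight state explosion.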
Typical outputs and goals:
  • Prove that a design satisfies a safety property (e.g., absence of deadlock, assertion violations) or a liveness property (progress guarantees).
  • Produce counterexamples or execution traces that demonstrate property violations.
  • Identify invariants, pre/post-conditions, and refinement relations guiding correct-by-construction development.
  • Provide insights into scalability, tool support, and integration into development workflows.
Common domains and representations:
  • Hardware and digital systems: model checking with timing and concurrency properties.
  • Software systems: verification of algorithms, safety-critical software, and security properties.
  • Safety requirements: prove absence of runtime errors, boundary violations, or correctness of control logic.
How these tools are typically used:
  • Start from a formal specification of desired properties and a model of the system.
  • Choose verification technique(s) aligned with the problem size, properties, and required guarantees.
  • Run automated checks; interpret results, refine models, or add annotations as needed.
  • In practice, a combination of methods (co-verification) often yields the best coverage: quick defect detection via model checking complemented by deeper proofs for critical components.

Requirements Engineering (RE) is a foundational subfield of Software Engineering focused on identifying, documenting, validating, and managing the needs and constraints of stakeholders for a software system. It bridges business goals and technical implementation by capturing functional and quality requirements, ensuring traceability, and guiding design and verification activities throughout a project’s life cycle.
Core focus areas
  • Requirements elicitation and stakeholder analysis: developing methods to uncover needs from diverse users, experts, and external systems, including complex organizational contexts and regulatory constraints.
  • Requirements modeling and specification: creating formal and semi-formal representations (models, ontologies, goal-oriented approaches) that improve communication and traceability across teams.
  • Validation, verification, and quality attributes: techniques to ensure requirements meet stakeholder intent and constraints, including consistency checking, ambiguity detection, and early defect reduction.
  • Traceability and change management: maintaining links from requirements to design, implementation, and testing artifacts to support impact analysis when changes occur.
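The traceability bullet above can be made concrete with a small sketch: representing trace links as a directed graph and computing the transitive closure from a changed artifact gives a basic impact analysis. All artifact names (REQ-1, DES-A, ...) are hypothetical.

```python
# Hypothetical traceability links: requirements -> design -> test artifacts.
links = {
    "REQ-1": ["DES-A"],
    "REQ-2": ["DES-A", "DES-B"],
    "DES-A": ["TEST-01"],
    "DES-B": ["TEST-02", "TEST-03"],
}

def impacted(artifact, links):
    """Transitively collect every artifact reachable from a changed one."""
    result, stack = set(), [artifact]
    while stack:
        for succ in links.get(stack.pop(), []):
            if succ not in result:
                result.add(succ)
                stack.append(succ)
    return result

# Changing REQ-2 potentially affects both design elements and their tests.
print(sorted(impacted("REQ-2", links)))
# ['DES-A', 'DES-B', 'TEST-01', 'TEST-02', 'TEST-03']
```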
Emerging trends
  • Integration with AI and language technologies: using natural language processing and large language models to assist in drafting, clustering, and validating requirements, while addressing reliability and governance concerns.
  • Agile and continuous RE: adapting RE practices to fast-paced, iterative development, emphasizing lightweight elicitation, just-in-time requirements, and continuous stakeholder feedback.
  • RE for special domains: tailoring methods for safety-critical, highly regulated, or data-intensive systems (e.g., embedded, healthcare, automotive, machine learning-enabled systems).
  • Empirical RE research: conducting case studies, experiments, and theory-building to understand RE practices, challenges, and impact on project outcomes.
Methodological approaches
  • Goal-oriented and scenario-based techniques: using goal models, use cases, and scenarios to articulate and reason about requirements at a higher level of abstraction.
  • Formal and semi-formal specification languages: leveraging models like UML, KAOS, or decision networks to improve precision and traceability.
  • Tool support and process integration: developing and evaluating RE-specific tools for requirements capture, versioning, conflict resolution, and traceability within broader ALM (application lifecycle management) ecosystems.
Challenges and open questions
  • Handling uncertainty and changing requirements in dynamic environments.
  • Achieving effective stakeholder involvement across diverse, distributed teams.
  • Balancing rigor with agility to avoid documentation debt while maintaining clarity and verifiability.
  • Ensuring RE practices scale in large, complex systems and when integrating AI components.
Impact and outcomes
  • Improved alignment between stakeholders and delivered software, reducing rework and project risk.
  • Better visibility into system scope, constraints, and dependencies, enabling more informed design and verification decisions.
  • Enhanced ability to manage changes, ensure compliance, and support regulatory audits in critical domains.

Natural Language Processing (NLP) in Software Engineering is an active subfield focused on applying linguistic and statistical methods to software-related text data to improve analysis, design, and testing processes. Researchers study how NLP can bridge human language with software artifacts, enabling more accurate requirements, traceability, and quality assurance.
Key themes and current directions
  • Requirements engineering with NLP: NLP techniques are used to extract, analyze, and clarify requirements from natural language documents, detect ambiguities or inconsistencies, and support automatic traceability between requirements and design or code. This includes tasks like requirement classification, entity extraction, and resolving terminology conflicts.
  • Traceability and impact analysis: NLP tools assist in establishing and maintaining traceability links between requirements, design, code, and tests, as well as analyzing the potential impact of changes in textual artifacts. This is increasingly important for maintaining consistency across the software lifecycle.
  • Software testing and quality assurance: Natural language requirements and defect reports can be interpreted by NLP systems to generate test cases, prioritize testing efforts, and identify language issues that may lead to defects. This area explores semantic understanding and context-aware testing strategies.
  • Language model-enabled development aids: Large language models and other NLP-based assistants are explored as aids in software development, from generating or refining requirements and documentation to assisting in review and collaboration workflows.
  • Repositories and app ecosystem analyses: NLP is applied to analyze app reviews, issue trackers, and repository data to extract trends, feature requests, and user feedback, informing requirements and product decisions.
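A minimal sketch of one NLP4RE task mentioned above, ambiguity detection: flagging vague terms in a requirement sentence via a lexical lookup. The term list is illustrative; real approaches use richer linguistic and semantic analysis.

```python
import re

# Illustrative list of vague terms often flagged in requirements reviews.
WEAK_TERMS = {"should", "may", "appropriate", "fast", "user-friendly",
              "as needed", "etc", "some"}

def flag_ambiguities(requirement):
    """Return the vague terms found in a requirement sentence."""
    words = re.findall(r"[a-z-]+", requirement.lower())
    hits = {w for w in words if w in WEAK_TERMS}
    # multi-word phrases need a direct substring check
    hits |= {phrase for phrase in WEAK_TERMS
             if " " in phrase and phrase in requirement.lower()}
    return hits

req = "The system should respond fast and log errors as needed."
print(sorted(flag_ambiguities(req)))  # ['as needed', 'fast', 'should']
```

Each flagged term would then prompt a reviewer to replace it with a measurable criterion ("respond within 200 ms" rather than "respond fast").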
What makes it distinctive
  • The focus is on translating unstructured or semi-structured natural language text into structured software engineering insights and artifacts.
  • It often combines NLP with domain-specific ontologies, traceability frameworks, and human-in-the-loop approaches to balance automation with domain expertise.
  • The challenges include specialized domain terminology, ambiguity in natural language, and the evolving nature of software requirements, which calls for safeguards against misinterpretation.
Representative directions you might encounter in literature
  • Systematic mappings and surveys of NLP4RE (Natural Language Processing for Requirements Engineering) that categorize techniques, tools, and evaluation metrics.
  • Studies on improving requirement extraction, ambiguity detection, and traceability using NLP plus semi-automated or human-in-the-loop methods.
  • Applications of NLP to software testing, including using natural language requirements to inform test generation and coverage analysis.

Empirical Software Engineering (ESE) is a subfield of software engineering that focuses on using systematic empirical methods to study and evaluate software engineering practices, tools, processes, and human factors. It emphasizes observation, measurement, and analysis of real-world artifacts to build evidence about what works in practice.
Key aspects
  • Definition and scope: ESE, also known as evidence-based software engineering, uses empirical data to validate theories, approaches, and techniques in software development. It covers a broad range of artifacts and activities, including code, defects, requirements, development tools, processes, and team dynamics.
Common methods
  • Controlled experiments, case studies, surveys, action research, and mining software repositories are typical empirical approaches used to gather data and test hypotheses about software engineering practices.
  • Data sources include version control histories, issue trackers, bug reports, and communication traces to understand patterns and outcomes in software projects.
Goals and impact
  • The aim is to provide evidence about the effectiveness of techniques and processes, help improve software quality, productivity, and maintainability, and guide decision-making for practitioners and researchers.
  • ESE acknowledges that software engineering knowledge often resides in real-world contexts and thus requires rigorous empirical validation beyond theoretical or lab-based studies.
Subfields and connections
  • Related areas include software evolution, software maintenance, and mining software repositories, all of which rely heavily on empirical data to draw conclusions about software behavior and performance.
  • ESE integrates quantitative analyses (statistical methods, data mining) and qualitative insights (case narratives, expert judgments) to build robust, generalizable knowledge.

Human factors in Software Engineering is a subfield that studies how people—their cognitive limits, skills, teamwork, communication, and organizational context—shape the design, development, and operation of software systems. It treats software engineering as a sociotechnical activity where human performance, collaboration, and organizational culture impact quality, productivity, and safety.
Key focus areas
  • Human-centered requirements and software design: aligning systems with users’ mental models, workflows, and social practices to improve usability and usefulness.
  • Team collaboration and socio-technical coordination: understanding how developers, testers, managers, and customers coordinate work, share knowledge, and manage dependencies.
  • Program comprehension and knowledge management: how engineers understand codebases, reproduce issues, and transfer tacit knowledge, often addressing onboarding and maintenance productivity.
  • Trust, decision making, and governance: how stakeholders trust architectural choices, data handling, and risk management processes, influencing project outcomes.
  • Culture, diversity, and organizational factors: how distribution, culture, and practices across teams or sites affect communication, conflict resolution, and performance.
  • Empirical methods in SE human factors: applying case studies, surveys, interviews, observational studies, and experiments to uncover how human factors drive success or failure in software efforts.
Why it matters
  • Software quality and safety: user interfaces, error prevention, and system reliability are tightly linked to human performance and usability.
  • Productivity and cost: effective collaboration and knowledge transfer reduce rework, bottlenecks, and onboarding time.
  • Field-specific implications: in safety-critical or mission-critical software (e.g., aviation, healthcare), human factors govern error rates, operability under stress, and compliance with safety standards.
Typical research questions
  • How do team structures and communication patterns affect defect rates and delivery timelines?
  • Which design patterns and interface principles best support developers’ cognition and task switching?
  • How can socio-technical theories explain success or failure in large, distributed software projects?
  • What empirical methods reliably measure the impact of human factors on software outcomes?
Representative concepts and methods
  • Socio-technical coordination theory, trust and reliance in teams, knowledge sharing, and collaboration metrics.
  • Human-centered design, usability testing, cognitive walkthroughs, and program comprehension experiments.
  • Field studies in real development environments, controlled experiments, and systematic literature reviews focusing on human factors in SE.

Software Product Line Engineering (SPLE) formal approaches focus on applying rigorous, mathematical, or machine-checked methods to design, analyze, and verify families of related software products efficiently. The core idea is to treat a family of products as a single, reusable engineering artifact, with formal foundations ensuring consistency across variants and enabling automated reasoning about behavior, safety, and correctness.
Brief description of the subfield
  • Formal modeling and verification: Use formal languages (e.g., state machines, temporal logics, or feature-based modeling formalisms) to describe the common and variable parts of a software product line, and to reason about correctness properties across all products in the family. This supports proving that all derived products satisfy specified safety, liveness, or reliability requirements.
  • Uniform semantic foundations: Establish a shared formal semantics for the product line model (including features, configurations, and bindings) so that multiple views—such as structure, behavior, and constraints—can be analyzed coherently. This enables consistent validation and verification across different viewpoints.
  • Variant management with formal guarantees: Model features and their interdependencies with precise constraints, and use formal analysis to detect conflicts, dead configurations, or unintended interactions among features before product derivation.
  • Tool-supported workflows: Leverage formal methods tools for model checking, theorem proving, and automatic code generation where possible. Executable models allow simulation and rapid prototyping of family behavior, while ensuring that generated implementations preserve formal properties.
  • Early design-time verification: Apply formal approaches at early design stages to catch architectural and behavioral defects before instantiation of products, reducing cost and risk in large-scale families.
  • Integration with lifecycle processes: Tie formal SPLE activities to standard SPLE practices, such as domain engineering (developing core assets and feature catalogs) and application engineering (deriving products), ensuring traceability from requirements to implementations across the product line.
Key research themes
  • Formalization of feature models and their semantics to enable rigorous reasoning about configuration spaces.
  • Formal specifications of common and variable components with compositional verification techniques.
  • Verification of product line variability, ensuring that system-wide invariants hold across all feasible configurations.
  • Formal code generation and ensuring that generated variants preserve the proven properties of the formal models.
  • Scalable methods for large product lines, including modular and incremental verification approaches.
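The first theme above can be illustrated with a brute-force sketch: enumerating all configurations of a toy feature model with cross-tree constraints and reporting dead features, i.e. features selected in no valid configuration. The features and constraints are invented for illustration; realistic analyses encode the model for SAT/SMT solvers instead of enumerating.

```python
from itertools import product

FEATURES = ["WiFi", "GPS", "Nav"]

def valid(cfg):
    """Illustrative cross-tree constraints:
    GPS requires WiFi; Nav requires GPS; Nav excludes WiFi."""
    if cfg["GPS"] and not cfg["WiFi"]:
        return False
    if cfg["Nav"] and not cfg["GPS"]:
        return False
    if cfg["Nav"] and cfg["WiFi"]:
        return False
    return True

# Enumerate the whole configuration space (feasible only for tiny models).
configs = [dict(zip(FEATURES, bits))
           for bits in product([False, True], repeat=len(FEATURES))]
valid_configs = [c for c in configs if valid(c)]

# A feature is dead if no valid configuration selects it.
dead = [f for f in FEATURES if not any(c[f] for c in valid_configs)]
print(dead)  # ['Nav'] -- the constraints make Nav unselectable
```

Detecting such contradictions before product derivation is precisely the "variant management with formal guarantees" activity described above.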
Why it matters
  • Improves confidence in the correctness and safety of a family of products, not just a single product.
  • Enables earlier detection of configuration-level issues that could compromise quality or compliance.
  • Supports automated derivation of correct-by-construction products from a formally validated product line model.

Formal modelling and verification of spatially distributed systems is a subfield of Software Engineering focused on specifying, analyzing, and proving properties of systems whose components operate across multiple locations or nodes and interact through communication networks. Typical targets include correctness, safety, liveness, and reliability under distributed execution, timing, and potential faults.
Key aspects
  • Formal modelling: Use of precise mathematical formalisms (e.g., Petri nets, process algebras, temporal logics, automata) to describe spatially distributed architectures, communications, and coordination mechanisms.
  • Verification techniques: Model checking, theorem proving, and abstract interpretation to confirm that the model satisfies desired specifications such as safety properties (no bad states), mutual exclusion, deadlock-freedom, and correct synchronization across spatially separated components.
  • Temporal and spatial reasoning: Incorporates both time (ordering of events, deadlines) and space (physical or logical locality, broadcast or multi-hop communication) to capture realistic behavior of distributed systems.
  • Common domains: sensor networks, distributed control systems, autonomous vehicle fleets, distributed databases, and cloud-edge environments where components coordinate over networks.
  • Challenges: State-space explosion due to combinatorial growth with added devices and interactions; modeling of imperfect networks (latency, loss, faults); scalability of verification methods; integration with development workflows.
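A minimal sketch of spatial reasoning in this style: on a graph of locations, a "near" operator in the spirit of spatial logics such as SLCS maps a set of points to those points plus their neighbours, and iterating it to a fixed point yields spatial reachability. The sensor-network graph and the "alarm" proposition are illustrative.

```python
# Locations of a toy sensor network and its adjacency (spatial) relation.
adjacent = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"],
}
alarm = {"d"}  # atomic proposition: locations currently raising an alarm

def near(phi, adjacent):
    """Spatial 'near': locations satisfying phi or adjacent to one that does."""
    return phi | {x for x, ns in adjacent.items() if any(n in phi for n in ns)}

def reach(phi, adjacent):
    """Least fixed point of 'near': locations connected to a phi-location."""
    prev, cur = set(), phi
    while cur != prev:
        prev, cur = cur, near(cur, adjacent)
    return cur

print(sorted(near(alarm, adjacent)))   # ['c', 'd']
print(sorted(reach(alarm, adjacent)))  # ['a', 'b', 'c', 'd']
```

Spatial model checkers evaluate such operators (and richer surrounded/propagation variants) over much larger structures, including pixel adjacency graphs of images.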
Typical research directions
  • Compositional and modular verification to manage complexity by proving properties of components and then inferring system-level results.
  • Spatial logics and real-time extensions to capture geographic or network topology constraints.
  • Fault tolerance and resilience models to reason about partial failures and dynamic reconfiguration.
  • Synthesis and runtime verification to automatically generate correct implementations or monitor systems during operation.
  • Toolchains and case studies applying model checking, theorem proving, and runtime monitors to real-world spatially distributed systems.

Formal modelling and verification of service-oriented systems is a subfield of Software Engineering that focuses on using rigorous mathematical formalisms to specify, model, and prove properties of distributed, loosely coupled services that interact over interfaces and networks. The goal is to ensure correctness, reliability, and safety of complex SOA deployments by verifying architectural and behavioral properties such as compatibility, compositionality, communication safety, security, and QoS constraints, often through formal languages, refinement techniques, and automated tooling.
Key aspects
  • Formal specification and modelling: Use precise languages (e.g., Event-B, process algebras, state machines) to capture service interfaces, interactions, and data flows, enabling unambiguous reasoning about behavior.
  • Verification and refinement: Apply model checking, theorem proving, and refinement frameworks to prove properties like deadlock freedom, protocol conformance, invariants on data, and preservation of safety and liveness across service compositions.
  • Service composition and orchestration: Analyze how individual services combine into workflows or orchestrations, ensuring that composite services preserve correctness when components are replaced or updated.
  • Non-functional properties: Extend verification to timing, reliability, security, and dependability aspects, often via formal contracts, security models, or probabilistic analyses.
  • Tool-supported workflows: Employ automated or semi-automated tools to generate, simulate, and verify models, and to derive correct-by-construction implementations from abstract specifications.
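As a small illustration of protocol conformance, the sketch below models a hypothetical service interface as a finite state machine and replays observed operation traces against it, rejecting any trace that takes an undefined transition. The protocol is invented for this example.

```python
# Interface protocol of a hypothetical payment service as a state machine:
# a client must log in before paying, and may pay at most once per session.
PROTOCOL = {
    ("start", "login"):  "ready",
    ("ready", "pay"):    "paid",
    ("ready", "logout"): "start",
    ("paid",  "logout"): "start",
}

def conforms(trace, protocol, initial="start"):
    """Replay a trace of operations; reject on any undefined transition."""
    state = initial
    for op in trace:
        if (state, op) not in protocol:
            return False, f"'{op}' not allowed in state '{state}'"
        state = protocol[(state, op)]
    return True, state

print(conforms(["login", "pay", "logout"], PROTOCOL))  # (True, 'start')
print(conforms(["pay"], PROTOCOL))   # rejected: pay before login
```

Compositional analyses generalize this idea: they check that composed or substituted services still respect each interface protocol, rather than replaying single traces.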
Representative themes
  • Modelling and refinement in Event-B for scalable development of service-oriented systems, enabling stepwise refinement from abstract specifications to concrete implementations with correctness guarantees.
  • Integration of formal methods with software engineering practices to raise the maturity and readiness of formally verified service architectures for practical deployment.
  • Exploration of testing and verification synergies to provide theoretical guarantees about bug detection and to manage the state explosion problem in large SOA models.
Why it matters
  • SOA environments are inherently complex, with asynchronous communication, dynamic service discovery, and evolving contracts. Formal modelling and verification help catch design errors early, reduce risk in critical deployments, and support certified interoperability between services, which is especially valuable in safety- and mission-critical domains.

Quantitative formal methods for Collective Adaptive Systems (CAS) form a subfield of Software Engineering that develops and applies mathematically rigorous techniques to specify, analyze, and verify large, distributed systems in which many heterogeneous components adapt their behavior at runtime based on local observations and interactions. The focus is on deriving quantitative guarantees—such as performance, reliability, safety, and robustness—about emergent collective behavior rather than single-agent correctness.
Key aspects
  • Modeling frameworks: Use of stochastic, probabilistic, and dynamical formalisms (e.g., stochastic Petri nets, process algebras, dynamic logic, differential equations) to capture both the adaptive rules and the interactions among many agents or components.
  • Quantitative properties: Analysis targets include runtime performance metrics (throughput, latency), resource usage, reliability under perturbations, fault tolerance, convergence and stability of collective behaviors, and long-run statistical properties.
  • Verification and analysis techniques: Techniques such as model checking with quantitative extensions, statistical model checking, parameter synthesis, abstraction and refinement for scalability, and simulation-based approximations to handle open, dynamic environments.
  • Scalability challenges: CAS involve open-ended, large populations of agents; methods emphasize scalable abstractions, compositional reasoning, and data-driven or learning-informed analysis (e.g., Gaussian processes, smoothed model checking) to manage state-space explosion.
  • Applications: Self-organizing sensor networks, swarm robotics, crowd management, adaptive cyber-physical systems, and large-scale distributed software architectures where collective adaptation is central to correct operation.
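Statistical model checking, mentioned above, can be sketched in a few lines: simulate many runs of a stochastic model of the collective and estimate the probability that a desired property holds, together with a confidence interval. The gossip model and all its parameters are illustrative.

```python
import math
import random

def run_gossip(n_agents=20, rounds=10, rng=random):
    """One simulated run: a rumour spreads; each informed agent tells one
    uniformly random agent per round. Property: everyone informed at the end."""
    informed = {0}
    for _ in range(rounds):
        for _agent in list(informed):
            informed.add(rng.randrange(n_agents))
    return len(informed) == n_agents

def estimate(n_runs=2000, seed=42):
    """Statistical model checking: estimate P(property) by sampling runs,
    with a simple 95% normal-approximation confidence interval."""
    rng = random.Random(seed)
    hits = sum(run_gossip(rng=rng) for _ in range(n_runs))
    p = hits / n_runs
    half = 1.96 * math.sqrt(p * (1 - p) / n_runs)
    return p, half

p, half = estimate()
print(f"P(all agents informed) = {p:.3f} +/- {half:.3f}")
```

The trade-off is characteristic of the field: instead of an absolute yes/no verdict, one obtains a probabilistic assurance whose precision scales with the number of simulated runs.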
Representative themes and directions
  • Ensemble and aggregation reasoning: treating groups of agents as ensembles to reason about collective properties rather than tracking individuals.
  • Statistical and probabilistic guarantees: moves from absolute correctness to probabilistic assurances of desired collective outcomes under uncertainty.
  • Human-in-the-loop considerations: incorporating human decision-making or oversight into the validation and safety analysis of CAS.
  • Real-time and open systems: addressing runtime changes, dynamic topologies, and partial observability common in CAS.

Railway control systems research within Software Engineering focuses on applying formal methods, modeling techniques, and verification to the safety-critical subsystems that manage train movement, signaling, and interlocking. The goal is to ensure correctness, reliability, and safety under real-time constraints and complex operational environments.
Key subthemes
  • Formal modeling and verification: Using formal languages and tools to specify and prove properties such as deadlock freedom, safety (e.g., collision avoidance), and liveness of train movements. Typical approaches include temporal logics, model checking, and theorem proving.
  • Model-based design and analysis: Creating abstract, executable models of railway operations (e.g., interlocking, signaling, and train control) to simulate scenarios, analyze safety properties, and guide code generation for real-time controllers.
  • Safety-critical architecture and standards: Aligning modeling and analysis practices with railway safety standards (e.g., EN 50128) to ensure process, software, and hardware confidence through lifecycle activities such as requirements traceability, V&V, and security considerations.
  • Dynamic and hybrid systems: Capturing both discrete control logic (signals, points, interlocking states) and continuous device behavior (train dynamics, braking) using hybrid or dynamic state machine formalisms to reason about overall system behavior.
  • Verification and validation techniques: Employing model checking, symbolic analysis, and simulation-based testing, including hardware-in-the-loop or virtual machine approaches, to validate control software against safety requirements and performance criteria.
Representative approaches and emphases
  • Interlocking and route-setting analysis: Formalizing interlocking tables and routing logic to guarantee safe train sequencing and to detect potential conflicts under various scenarios.
  • Railway cybersecurity: Extending models to incorporate threat models and containment strategies to mitigate risks from cyber-physical attacks on signaling or control networks.
  • Tool chains and runtimes: Leveraging domain-specific modeling environments and back-end toolchains that compile models into executable code or verification artifacts, supporting traceability from requirements to implementation.
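A toy version of the interlocking-table analysis described above: routes are modeled as the sets of track sections they lock, two routes conflict when those sets intersect, and the control logic may set a route only if it conflicts with no active one. The layout is invented for illustration; real interlocking tables also cover points, signals, and flank protection.

```python
# Hypothetical interlocking table: route -> track sections it locks.
ROUTES = {
    "R1": {"T1", "T2", "T3"},
    "R2": {"T3", "T4"},
    "R3": {"T5", "T6"},
}

def conflicting_pairs(routes):
    """Two routes conflict if they require a common track section."""
    names = sorted(routes)
    return {(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if routes[a] & routes[b]}

def safe_to_set(active, new_route, routes):
    """A route may be set only if it conflicts with no currently active route."""
    return all(not (routes[new_route] & routes[r]) for r in active)

print(sorted(conflicting_pairs(ROUTES)))  # [('R1', 'R2')]
print(safe_to_set({"R1"}, "R2", ROUTES))  # False: R1 and R2 share T3
print(safe_to_set({"R1"}, "R3", ROUTES))  # True
```

A model checker would go further and prove that no reachable controller state ever violates `safe_to_set`, for every scenario rather than the cases exercised here.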
Why this matters
  • The railway domain demands rigorous assurance due to potential consequences of failures, including collisions and derailments. Formal modeling and analysis help identify invalid states early, enable rigorous safety proofs, and support regulatory compliance throughout the software lifecycle.

Formal approaches to medical image analysis constitute a research area of Software Engineering that applies rigorous formal methods—such as formal specification, modeling, verification, and automated reasoning—to the development, analysis, and validation of medical image analysis systems. The goal is to improve correctness, safety, reliability, and traceability of the entire software stack involved in processing medical images, from image acquisition and preprocessing to segmentation, feature extraction, and decision support.
Key aspects
  • Formal modeling of image processing pipelines: using precise mathematical or logic-based representations to capture the behavior of algorithms and data flows, enabling rigorous reasoning about correctness and robustness.
  • Specification and verification: defining formal requirements for components (e.g., segmentation accuracy, uncertainty handling) and proving properties or using model checking to detect design flaws before deployment.
  • Validation and equivalence reasoning: ensuring that alternative algorithms or configurations preserve expected medical outcomes, and that transformations (e.g., preprocessing steps) do not introduce unintended biases.
  • Safety- and compliance-oriented development: aligning medical image software with regulatory and safety standards by providing verifiable evidence of correctness and traceability from requirements to code.
  • Tool-supported workflows: employing formal methods tools to generate tests, proofs, or artifacts that facilitate certification, reproducibility, and maintenance across evolving medical imaging systems.
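One way to make such properties concrete is small-domain exhaustive checking, sketched below for a hypothetical preprocessing step: min-max intensity normalization must never swap the relative order of two pixel intensities (a monotonicity property). Real pipelines would verify richer, clinically meaningful properties over far larger domains.

```python
from itertools import product

def normalize(pixels):
    """Min-max normalization of intensities to [0, 1]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0.0] * len(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]

def preserves_order(transform, domain, length):
    """Exhaustively check that the transform never swaps the relative
    order of two intensities (monotonicity) on all small inputs."""
    for pixels in product(domain, repeat=length):
        out = transform(list(pixels))
        for i in range(length):
            for j in range(length):
                if pixels[i] < pixels[j] and not out[i] < out[j]:
                    return False
    return True

# Exhaustive verification over tiny images (intensities 0..4, 3 pixels).
print(preserves_order(normalize, range(5), 3))  # True
```

Exhaustive checking on small domains sits between testing and proof: it gives complete coverage of the restricted input space, while a theorem prover would extend the guarantee to all inputs.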
Why it matters
  • Medical image analysis often underpins critical clinical decisions; formal approaches help reduce risk by making assumptions explicit, enabling rigorous validation beyond empirical testing alone.
  • The field benefits from cross-pollination with model-based design, software safety standards, and reproducible research practices, especially as imaging modalities and AI components become more complex.
Representative themes and directions
  • Formalizing image-forming models and reconstruction procedures to ensure stable outputs under variations in acquisition.
  • Verifying segmentation and registration pipelines against clinically meaningful properties (e.g., boundary accuracy, lossless transformations).
  • Combining formal methods with machine learning to reason about uncertainty, explainability, and safety in diagnostic support systems.
  • Integrating formal artifacts into development lifecycles to support traceability and regulatory compliance.

Formal modelling of business processes is a subfield of Software Engineering that studies the rigorous specification, analysis, and refinement of business workflows using formal methods. It aims to create precise, machine-checkable representations of processes so that properties such as correctness, consistency, and performance can be verified mathematically, rather than inferred from informal descriptions.
What it encompasses
  • Formal notations and languages: Using mathematical formalisms such as Petri nets, process algebras, BPMN with formal semantics, temporal logics, and event structures to model tasks, data, resources, and control flows.
  • Verification and validation: Proving properties like deadlock freedom, liveness, safety, reachability, and conformance to requirements; validating models against real-world constraints and stakeholder goals.
  • Refinement-based design: Gradually transforming abstract process models into more concrete implementations while preserving correctness properties through formal refinement techniques.
  • Model checking and theorem proving: Applying automated or interactive tools to exhaustively explore state spaces or derive proofs about process behavior.
  • Integration with business constraints: Linking formal models with real-world constraints such as regulatory requirements, service-level agreements, and data provenance.
Key approaches and paradigms
  • BPMN with formal semantics: Extending or interpreting BPMN diagrams in a mathematically rigorous way to enable formal analysis while retaining familiarity for business stakeholders.
  • Petri nets and workflow nets: Using Petri nets to capture concurrency, synchronization, and resource sharing in business processes; often extended to colored Petri nets for data-rich workflows.
  • Process calculi: Applying algebraic formalisms (e.g., LOTOS, CSP-based calculi) to reason about process composition, communication patterns, and refinement.
  • Model checking: Verifying properties on finite-state representations of processes; common goals include deadlock detection, reachability of goals, and adherence to safety properties.
  • Formal refinement and contracts: Specifying pre/post-conditions, invariants, and service contracts to ensure that evolving process models remain correct as requirements change.
Relation to practice
  • Alignment with requirements engineering: Formal modelling supports precise translation of stakeholder requirements into verifiable process specifications, reducing ambiguity early in the design.
  • Assurance for critical domains: In industries with high compliance or safety needs, formal modelling provides rigorous evidence of correctness and reliability of business processes.
  • Tool-supported workflow analysis: A range of tools exists to encode, simulate, verify, and validate formal models, enabling automated analysis and traceability from requirements to implementation.
A concise example trajectory
  • Start with an informal description of a business process (e.g., order-to-cash), extract key activities, data objects, and constraints.
  • Create a formal representation using a suitable formalism (e.g., a Petri-net-based workflow net).
  • Verify properties such as absence of deadlock, proper sequencing, and data consistency.
  • Refine the model to align with actual IT systems, and prove that the refined version preserves the verified properties.
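The trajectory above can be sketched end-to-end in a few lines. The toy workflow net below (place and transition names are invented for illustration) models a simplified order-to-cash process, and an explicit-state search checks that the final marking is reachable and that no reachable marking deadlocks. This is a minimal sketch of the idea, not one of the lab's tools; it assumes a 1-safe net, so markings can be represented as sets of places.

```python
from collections import deque

# A 1-safe workflow net for a simplified order-to-cash process.
# Each transition maps a set of input places (pre) to output places (post).
TRANSITIONS = {
    "receive_order":  (frozenset({"start"}),               frozenset({"received"})),
    "check_stock":    (frozenset({"received"}),            frozenset({"ready_ship", "ready_bill"})),  # AND-split
    "ship_goods":     (frozenset({"ready_ship"}),          frozenset({"shipped"})),
    "send_invoice":   (frozenset({"ready_bill"}),          frozenset({"invoiced"})),
    "record_payment": (frozenset({"shipped", "invoiced"}), frozenset({"end"})),                       # AND-join
}

def explore(initial, final):
    """Breadth-first search of the reachability graph.

    Returns (final_reached, deadlocks): whether the final marking is
    reachable, and all reachable non-final markings with no enabled transition.
    """
    seen, queue = {initial}, deque([initial])
    final_reached, deadlocks = False, []
    while queue:
        m = queue.popleft()
        if m == final:
            final_reached = True
            continue  # the final marking is the intended terminal state
        successors = [(m - pre) | post
                      for pre, post in TRANSITIONS.values() if pre <= m]
        if not successors:
            deadlocks.append(m)  # stuck before reaching the final marking
        for s in successors:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return final_reached, deadlocks

reached, deadlocks = explore(frozenset({"start"}), frozenset({"end"}))
print("final marking reachable:", reached)  # True for this net
print("deadlocked markings:", deadlocks)    # [] for this net
```

Removing the `send_invoice` transition, for instance, would strand the `ready_bill` token, and the search would report the resulting deadlocked marking instead of an empty list.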

Projects

Recent Projects


Past Projects


People

Lab Head (Contact Person)


Staff Members


  • Giovanna Broccia
  • Research Staff (temporary)
  • giovanna.broccia(AT)isti.cnr.it
  • Publications: DBLP - Iris
  • Roberto Cirillo
  • Research Staff
  • roberto.cirillo(AT)isti.cnr.it
  • Publications: DBLP - Iris
  • Lucio Lelii
  • Research Staff
  • lucio.lelii(AT)isti.cnr.it
  • Publications: DBLP - Iris
  • Giorgio Oronzo Spagnolo
  • Research Staff
  • giorgio.oronzo.spagnolo(AT)isti.cnr.it
  • Publications: DBLP - Iris
  • Gianluca Trentanni
  • Technical Staff
  • gianluca.trentanni(AT)isti.cnr.it
  • Publications: DBLP - Iris
  • Maria Luisa Trivella
  • Administrative Staff
  • mariarita.trivella(AT)isti.cnr.it

Associate Members


  • Francesco Girardello
  • Doctoral Student
  • francesco.girardello (AT) phd.unipi.it
  • Giulia Millitarì
  • Doctoral Student
  • giulia.millitari(AT)isti.cnr.it
  • Publications: DBLP - Iris

Publications

FMT Lab. Publications on ISTI Open Portal

From there you can export BibTeX records or access the online versions of the papers.


A list of publications is also available in the official CNR repository (IRIS): IRIS - Research Products database

Events

Recent Events


Past Events

  • FM 2023 25th International Symposium on Formal Methods, Lübeck, Germany, March 6-10, 2023
  • REFSQ 2023 29th International Working Conference on Requirements Engineering: Foundation for Software Quality, Barcelona, Spain, April 17-20, 2023
  • TiCSA 2023 1st Workshop on Trends in Configurable Systems Analysis, Paris, France, April 23, 2023
  • FSEN 2023 10th IPM International Conference on Fundamentals of Software Engineering, Tehran, Iran, May 4-5, 2023
  • FormaliSE 2023 11th FME workshop on Formal Methods in Software Engineering, Melbourne, Victoria, Australia, May 14-15, 2023
  • COORDINATION 2023 25th International Conference on Coordination Models and Languages, Lisbon, Portugal, June 19-23, 2023
  • SPLC 2023 27th ACM International Systems and Software Product Line Conference, Tokyo, Japan, August 28 - September 1, 2023
  • FMICS 2023 28th International Conference on Formal Methods for Industrial-Critical Systems, Antwerp, Belgium, September 20-22, 2023
  • DataMod 2023 11th International Symposium From Data to Models and Back at SEFM 2023, Eindhoven, The Netherlands, November 6-7, 2023
  • iFM 2023 18th International Conference on integrated Formal Methods, Leiden, the Netherlands, November 13-15, 2023

  • VaMoS 2022 16th International Working Conference on Variability Modelling of Software-Intensive Systems, Florence, Italy, 23-25 February 2022
  • QAVS 2022 3rd Workshop on Quantitative Aspects of Variant-rich Systems at ETAPS 2022, Munich, Germany, 3 April, 2022
  • FormaliSE 2022 10th FME workshop on Formal Methods in Software Engineering, Pittsburgh, PA, USA, 22-23 May, 2022
  • iFM 2022 17th International Conference on integrated Formal Methods, Lugano, Switzerland, 7-10 June, 2022
  • COORDINATION 2022 24th International Conference on Coordination Models and Languages, Lucca, Italy, 13-17 June, 2022
  • RE 2022 30th IEEE International Requirements Engineering Conference, Melbourne, Australia, 15-19 August, 2022

  • QAVS 2021 2nd Workshop on Quantitative Aspects of Variant-rich Systems at ETAPS 2021, Luxembourg, 28 March, 2021
  • NLP4RE 2021 4th Workshop on Natural Language Processing for Requirements Engineering at REFSQ, 12 April, 2021
  • FSEN 2021 9th IPM International Conference on Fundamentals of Software Engineering - Theory and Practice, Tehran, Iran, 19-21 May, 2021
  • FormaliSE 2021 9th FME workshop on Formal Methods in Software Engineering. Madrid, Spain, 23-24 May, 2021
  • SPLC 2021 25th International Systems and Software Product Line Conference, Leicester, UK, 6-10 September, 2021
  • DataMod 2021 10th International Symposium From Data to Models and Back at SEFM 2021, 6-7 December 2021

  • QAVS 2020 1st Workshop on Quantitative Aspects of Variant-rich Systems at eQONFEST 2020, Vienna, Austria, 31 August, 2020
  • AIRE 2020 7th International Workshop on Artificial Intelligence and Requirements Engineering at RE, Zurich, Switzerland, 1 September, 2020
  • DataMod 2020 9th International Symposium From Data to Models and Back at CIKM 2020, 20 October 2020
  • REFSQ 2020 26th International Working Conference on Requirements Engineering: Foundation for Software Quality. Pisa, Italy, March 24-27, 2020
  • FormaliSE 2020 8th FME workshop on Formal Methods in Software Engineering. Seoul, Korea, 25-26 May 2020
  • DisCoTec 2020 15th International Federated Conference on Distributed Computing Techniques. Valletta, Malta, 15-19 June, 2020
  • FMICS 2020 25th International Conference on Formal Methods for Industrial-Critical Systems. Vienna, Austria, September 2-3, 2020

  • FSEN 2019 8th IPM International Conference on Fundamentals of Software Engineering. Tehran, Iran, 1-3 May, 2019
  • FormaliSE 2019 7th FME workshop on Formal Methods in Software Engineering. Montréal, Canada, 27 May 2019
  • DisCoTec 2019 14th International Federated Conference on Distributed Computing Techniques. Lyngby, Denmark, 18-21 June, 2019
  • FM 2019 23rd International Symposium on Formal Methods. Porto, Portugal, 7-11 October, 2019
  • SG65 Colloquium in Honour of Stefania Gnesi. Porto, Portugal, 8 October, 2019
  • Castiglioncello 2019 18th Castiglioncello International Conference. The Crisis of the Arms Control Regime. Pugwash-USPID Joint meeting. Castiglioncello (LI), Italy, 3-5 October, 2019

  • FormaliSE 2018 6th FME workshop on Formal Methods in Software Engineering. Gothenburg, Sweden, 2 June 2018
  • FM 2018 22nd International Symposium on Formal Methods. Oxford, UK, 15-17 July, 2018

  • VaMoS 2017 11th International Workshop on Variability Modeling of Software-Intensive Systems. Eindhoven, The Netherlands, 1-3 February, 2017
  • FormaliSE 2017 5th FME workshop on Formal Methods in Software Engineering. Buenos Aires, Argentina, 27 May 2017
  • COORDINATION 2017 19th International Conference on Coordination Models and Languages. Neuchatel, Switzerland, 19-22 June 2017

  • FormaliSE 2016 4th FME workshop on Formal Methods in Software Engineering. Austin, TX, USA, 15 May 2016
  • FORECAST 2016 Workshop on FORmal methods for the quantitative Evaluation of Collective Adaptive SysTems. Vienna, Austria, 8 July, 2016
  • FMICS-AVoCS 2016 International Workshop on Formal Methods for Industrial Critical Systems and Automated Verification of Critical Systems. Pisa, 26-28 September, 2016
  • FM 2016 21st International Symposium on Formal Methods. Limassol, Cyprus, 7-11 November, 2016

  • FMSPLE 2015 6th International Workshop on Formal Methods and Analysis in Software Product Line Engineering, 11 April 2015, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2015, London, UK, 11-18 April 2015
  • FormaliSE 2015 3rd FME workshop on Formal Methods in Software Engineering. Florence, Italy, 18 May 2015
  • WWV 2015 11th International Workshop on Automated Specification and Verification of Web Systems. Oslo, Norway, 23 June, 2015

  • FASE 2014 17th International Conference on Fundamental Approaches to Software Engineering, Grenoble, France, 5-13 April 2014
  • FM 2014 19th International Symposium on Formal Methods, Singapore, 14-16 May 2014
  • FMSPLE 2014 5th International Workshop on Formal Methods and Analysis in Software Product Line Engineering, 10 October 2014, held as track of the 6th International Symposium On Leveraging Applications of Formal Methods, Verification and Validation, Corfu, Greece, 8-11 October 2014
  • FormaliSE 2014 2nd FME Workshop on Formal Methods in Software Engineering, Hyderabad, India, 3 June 2014
  • SPLC 2014 18th International Software Product Line Conference, Florence, Italy, 15-19 September 2014
  • VaMoS 2014 8th International Workshop on Variability Modelling of Software-intensive Systems, Nice, France, 22-24 January 2014
  • WWV 2014 10th International Workshop on Automated Specification and Verification of Web Systems, Vienna, Austria, 18 July 2014

  • VaMoS 2013 7th International Workshop on Variability Modelling of Software-intensive Systems, Pisa, Italy, 23-25 January 2013
  • FormaliSE 2013 1st FME Workshop on Formal Methods in Software Engineering, San Francisco, CA, USA, 25 May 2013
  • SPLC 2013 17th International Software Product Line Conference, Tokyo, Japan, 26-30 August 2013

  • iFM & ABZ 2012 Joint conference in honour of Egon Börger's 65th birthday for his contribution to state-based formal methods, ISTI-CNR, Pisa, Italy, 18-22 June 2012
  • QAPL 2012 10th Workshop on Quantitative Aspects of Programming Languages, Tallinn, Estonia, March 31 - April 1, 2012. Satellite WS of ETAPS 2012
  • VaMoS 2012 6th International Workshop on Variability Modelling of Software-intensive Systems, Leipzig, Germany, 25-27 January, 2012
  • FM 2012 18th International Symposium on Formal Methods, CNAM, Paris, France, 27-31 August 2012
  • SPLC 2012 16th International Software Product Line Conference, Costa do Sauípe, Brazil, 2-7 September 2012
  • FormSERA 2012 Workshop on Formal Methods in Software Engineering: Rigorous and Agile Approaches, Zurich, Switzerland, 2 June 2012, in conjunction with ICSE 2012
  • WS-FM 2012 9th International Workshop on Web Services and Formal Methods, Tallinn, Estonia, 6-7 September 2012
  • FMSPLE 2012 3rd International Workshop on Formal Methods and Analysis in Software Product Line Engineering, Salvador, Brazil, 2 September 2012

Old Events

  • QAPL 2011

  • SEFM 2010
  • ISARCS 2010
  • WS-FM 2010
  • FMICS 2010
  • ISoLA 2010
  • IFM 2010
  • ICFEM 2010
  • ICSE 2010
  • VaMoS 2010

  • VaMoS 2009
  • ICFEM 2009
  • FM+AM 2009
  • SEFM 2009
  • OPENCERT 2009
  • FM 2009
  • SPLC 2009
  • FMICS 2009
  • YR-SOC 2009

  • SPLC 2008
  • SEFM 2008
  • SofSem 2008
  • ICFEM 2008
  • FMICS 2008
  • Serene 2008
  • STM 2008
  • ISoLA 2008
  • China 2008
  • VODCA 2008
  • FM 2008

  • ESOP 2007
  • FMICS 2007
  • DASD 2007
  • ASSE 2007
  • SPLC 2007

  • REFSQ 2006
  • EFTS 2006
  • SPLC 2006
  • FM 2006
  • BPM 2006
  • VODCA 2006

  • FMICS 2005
  • FORTE 2005
  • SEFM 2005
  • ICEIS 2005
  • FM 2005
  • ICSE 2005

  • VODCA 2004
  • ICFEM 2004
  • ATVA 2004
  • FMICS 2004
  • WS-FM 2004
  • DASD 2004
  • SEFM 2004
  • FORTE 2004

  • PFE-5 2003
  • ICEIS 2003
  • SEFM 2003
  • FMICS 2003
  • FORTE 2003
  • FM 2003

  • FORTE 2002
  • FM&&T Day 2002
  • ISODARCO 2002

  • From Distributed Systems to the Internet - A day held in honour of Norma Lijtmaer

  • FMICS 2002

  • I3 Spring Days 2001
  • FORTE 2001
  • FMICS 2001

  • FMICS 2000
  • IFIP EUROCONFERENCE 2000
  • FORTE 2000
  • CHI 2000
  • Elsewhere 2000

  • FMICS 1999
  • ISODARCO 1999
  • SPIN Workshop 1999
  • FORTE 1999
  • FMICS 1998
  • FMICS 1997
  • FMICS 1996

Tools


FMT has developed the following tools:


Other tools, no longer available

  • JACK, JACK2, JACK3 - Just Another Concurrency Kit
  • HAL - History-dependent Analysis Laboratory - π-calculus verification environment with a Tcl/Tk interface
  • HAL-on-Line - The whole HAL environment on-line
  • AMC - ACTL model checker for fc2 automata
  • BMC - BDD-based ACTL model checker for fc2 networks / regular CCS / regular LOTOS specifications
  • WCS (Witness and Counterexample Server) - Witness and Counterexample Automata Generator for ACTL


Contact Us

Address:

Formal Methods and Tools Laboratory
Istituto di Scienza e Tecnologie dell'Informazione
Consiglio Nazionale delle Ricerche
Area della Ricerca di Pisa
Via G. Moruzzi, 1
56124 Pisa
ITALY

Contact Person:

Maurice ter Beek
Office: +39 050 621 3471
Secretary: +39 050 621 2920
fax: +39 050 315 2810
e-mail: maurice.terbeek (at) isti.cnr.it
e-mail: trivella (at) isti.cnr.it

Reach Us

To reach us, follow the instructions on our institute's site: https://www.isti.cnr.it/en/about/reach-us

 

About this site

Privacy And Cookie Policy

This Privacy Policy is made in accordance with Article 13 of the GDPR 2016/679 (General Data Protection Regulation), with Recommendation no. 2/2001 of the Article 29 Working Party, and with the General Provision of the Italian Data Protection Authority on cookies of 8 May 2014, no. 229.
The following applies only to the site: https://www.fmt.isti.cnr.it
Links to external sites are provided purely as a service to users; no responsibility is accepted for the completeness and correctness of the information they provide.
The Data Controller and the Data Processor of the site have no control over the contents of external sites, their use, or the quality and completeness of their information, for which responsibility falls directly on their owners and/or managers.

Purpose of data processing

The aim is to provide information about the activities and services of the Laboratory.
Any further purposes will be the object of explicit information about the processing of personal data and, where necessary, of the specific and optional consent of the data subject.

Cookie Policy

This website only uses session cookies strictly necessary to allow specific required functions guaranteeing normal navigation and use of the website.

 

Disclaimers

This website incorporates images and portions of text generated using artificial intelligence (AI) technologies. While every effort is made to ensure accuracy and quality, AI-generated content may include errors, omissions, or biases and should not be considered a substitute for professional advice. By using this website, you acknowledge that the AI content is provided "as is" without warranties of any kind, express or implied, including but not limited to accuracy, reliability, or fitness for a particular purpose.

The use of AI-generated content is disclosed in compliance with applicable legal and regulatory frameworks, including transparency obligations under laws such as the European Union's AI Act, the U.S. Federal Trade Commission guidelines, and other relevant regulations. Users are encouraged to independently verify critical information before relying on it.

The website owner disclaims any liability for damages or losses arising directly or indirectly from the use, reliance upon, or interpretation of AI-generated content. Use of this content is at the user's own risk.

 
CNR Logo