Pub Date: 2025-09-01 · Epub Date: 2025-03-07 · DOI: 10.1016/j.scico.2025.103280
Thomas Flinkow , Barak A. Pearlmutter , Rosemary Monahan
Extensive research on formal verification of machine learning systems indicates that learning from data alone often fails to capture underlying background knowledge, such as specifications implicitly available in the data. Various neural network verifiers have been developed to ensure that a machine-learnt model satisfies correctness and safety properties; however, they typically assume a trained network with fixed weights. A promising approach for creating machine learning models that inherently satisfy constraints after training is to encode background knowledge as explicit logical constraints that guide the learning process via so-called differentiable logics. In this paper, we experimentally compare and evaluate various logics from the literature, present our findings, and highlight open problems for future work. We evaluate differentiable logics with respect to their suitability in training, and use a neural network verifier to check their ability to establish formal guarantees. The complete source code for our experiments is available as an easy-to-use framework for training with differentiable logics at https://github.com/tflinkow/comparing-differentiable-logics.
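To make the idea of differentiable logics concrete, here is a minimal illustrative sketch (not the paper's exact formulation, and the operator choices are only one of the many logics the paper compares): a logical constraint is translated into a non-negative loss that is zero exactly when the constraint holds, so it can be added to the training objective and minimized by gradient descent.

```python
# Sketch of a DL2-style translation: each constraint becomes a loss term
# that is 0 when satisfied and grows with the degree of violation.
def dl_leq(a, b):
    """Translate a <= b into a loss: 0 when satisfied, a - b otherwise."""
    return max(a - b, 0.0)

def dl_and(l1, l2):
    """Conjunction: both constraint losses must vanish."""
    return l1 + l2

def dl_or(l1, l2):
    """Disjunction: it suffices that one constraint loss vanishes."""
    return l1 * l2

# Example constraint on a model output p: 0.2 <= p <= 0.8.
def bounded_loss(p):
    return dl_and(dl_leq(0.2, p), dl_leq(p, 0.8))

print(bounded_loss(0.5))  # constraint satisfied -> 0.0
print(bounded_loss(0.9))  # violated by 0.1 -> loss 0.1
```

In training, such a term would be weighted and added to the ordinary data loss, nudging the network toward constraint-satisfying behavior.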
"Comparing differentiable logics for learning with logical constraints" — Science of Computer Programming 244 (2025), Article 103280.
To cope with complexity and scalability, the design of software-intensive systems requires the decomposition of the system into components, each modeled and analyzed separately from the others, and the composition of their analyses. Moreover, compositional model simulation is recognized as the only alternative available in practice when systems are large and complex, as in the cyber-physical domain, and intrinsically require combining the specifications of ensembles of different parts (subsystems). Therefore, simulation engines for composed model execution are attracting growing interest.
Along this research line, this paper presents the results of the compositional modeling and scenario-based validation of an industrial medical system, called MVM-Adapt, which we designed as an adaptive version of an existing mechanical lung ventilator deployed and certified to treat pneumonia during the COVID-19 pandemic.
We exploit the I/O Abstract State Machine formalism to model the device components as separate and interacting sub-systems that communicate through I/O events and adapt the device ventilation mode at run-time based on the health parameters of the patient. An orchestrated simulation coordinates the overall execution of these communicating I/O ASMs by exploiting suitable workflow patterns.
This compositional simulation technique has proved to be useful in practice to validate the new adaptive MVM's behavior and thus to support architects in better understanding this new mode of operation of the prototyped system.
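The orchestration idea above can be sketched informally as follows. This is a toy Python analogue, not the I/O ASM formalism itself; the component names, events, and ventilation modes are illustrative assumptions, not taken from the MVM-Adapt models.

```python
# Two communicating state machines driven by an orchestrator: a monitor
# emits patient events, and a ventilator adapts its mode in response.
class Monitor:
    def __init__(self, readings):
        self.readings = list(readings)

    def step(self):
        """Produce one output event, or None when no reading is left."""
        return self.readings.pop(0) if self.readings else None

class Ventilator:
    def __init__(self):
        self.mode = "PCV"  # pressure-controlled ventilation (initial mode)

    def step(self, event):
        """Consume an input event and adapt the ventilation mode."""
        if event == "spontaneous_breathing":
            self.mode = "PSV"  # switch to pressure-support ventilation
        elif event == "apnea":
            self.mode = "PCV"

def orchestrate(monitor, ventilator):
    """Sequential workflow pattern: route monitor outputs to the ventilator."""
    trace = []
    while True:
        event = monitor.step()
        if event is None:
            return trace
        ventilator.step(event)
        trace.append((event, ventilator.mode))

trace = orchestrate(Monitor(["spontaneous_breathing", "apnea"]), Ventilator())
print(trace)  # [('spontaneous_breathing', 'PSV'), ('apnea', 'PCV')]
```

The orchestrator here realizes only the simplest sequential workflow pattern; the paper's simulation engine coordinates full I/O ASMs with richer patterns.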
"Formal specification and validation of the MVM-Adapt system using Compositional I/O Abstract State Machines" — Silvia Bonfanti, Elvinia Riccobene, Patrizia Scandurra. Science of Computer Programming 244 (2025), Article 103299. DOI: 10.1016/j.scico.2025.103299.
Pub Date: 2025-09-01 · Epub Date: 2025-03-20 · DOI: 10.1016/j.scico.2025.103302
Lucas Carvalho , Tayana Conte
In the development of modern software solutions, architecture plays a crucial role in the success of the solution, as it comprises the structural and behavioral features of the system, along with critical decisions about it. Several studies in the literature have examined the architecture decision-making process, but they are normally conducted with professionals outside the Brazilian context or restricted to a specific company. Although architectural knowledge is the same everywhere, differences in cultural behavior, professional experience, and academic background may arise. For this reason, this study presents architecture decision-making from the viewpoint of Brazilian software practitioners. The results compile findings from 12 semi-structured interviews with senior practitioners from different companies and provide insights into the decision-making process. Regarding the context in which decisions are made, we find that practitioners often guide and influence decisions and use experience as their main foundation, and that the process is normally shaped by characteristics of the company, such as its domain and size. Concerning the decision-making process itself, practitioners in most cases use proofs of concept (PoCs), organizational patterns, or requirements to define the architecture. Group decision-making is common, although no systematic approach is followed. Also, no tools are employed to support decision-making, and the decisions are documented and revised frequently.
"Software architecture decision-making process: The practitioners' view from the Brazilian industry" — Science of Computer Programming 244 (2025), Article 103302.
Pub Date: 2025-09-01 · Epub Date: 2025-03-04 · DOI: 10.1016/j.scico.2025.103297
Dominik Grundt, Astrid Rakow, Philipp Borchers, Eike Möhlmann
Artificial Intelligence (AI) plays an important role in managing the complexity of automated driving. Nonetheless, training AI and ensuring its safety are challenging. Safe generalization from known to unknown situations remains an unsolved problem. Infusing knowledge into AI driving functions seems a promising approach to address generalization, development costs, and training efficiency. We reason that ascertaining the relevance of infused knowledge provides a strong indication that the previous development phases of knowledge infusion were executed correctly. As a causal reason for AI performance, relevant knowledge is important for explaining AI behavior. This paper defines a novel notion of relevant knowledge in knowledge-infused AI and for requirements satisfaction in traffic scenarios. We present a scenario-based testing procedure that not only checks whether a knowledge-infused AI model satisfies a given requirement R but also provides statements on the relevance of the infused knowledge. Finally, we describe a systematic method for generating abstract knowledge scenarios to enable an efficient application of our relevance testing procedure.
"What does AI need to know to drive: Testing relevance of knowledge" — Science of Computer Programming 244 (2025), Article 103297.
Pub Date: 2025-07-01 · Epub Date: 2025-02-18 · DOI: 10.1016/j.scico.2025.103284
Minnan Zhang , Jingdong Jia , Luiz Fernando Capretz , Xin Hou , Huobin Tan
The concept of code smell was first proposed in the late nineties to refer to signals that code may need refactoring. While not necessarily affecting functionality, code smells can hinder the understandability and future scalability of a program. As a result, the precise detection of code smells has become an important topic in software engineering research. However, current detection methods are limited by imbalanced, industrially irrelevant datasets, a lack of sufficient structural and logical information about the code, and overly simple model architectures. Given these limitations, this paper uses a sufficiently large, industry-relevant dataset and develops a graph neural network to better detect code smells. First, we identified Long Method and Blob as our research subjects due to their frequent occurrence and their impact on the maintainability of software. We then designed a modified fuzzy sampling scheme with focal loss to address the issue of data imbalance. Second, to deal with the large volume of data, we proposed a global and local attention scoring mechanism to extract the key information from the code. Third, to design a graph neural network suited to the abstract syntax tree of code, we combined Euclidean and non-Euclidean spaces. Finally, we compared our method with other machine learning and deep learning methods. The results demonstrate that our method outperforms the others on Long Method and Blob, which indicates the effectiveness of our proposed method.
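For readers unfamiliar with focal loss, here is the standard binary formulation (Lin et al.), shown as one plausible reading of how a loss can counter class imbalance; the paper's exact modification and its combination with fuzzy sampling are not reproduced here.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss down-weights easy examples: the factor (1 - p_t)^gamma
    scales the cross-entropy, so confident correct predictions contribute
    little and the training signal concentrates on hard (often minority)
    examples.

    p: predicted probability of the positive class; y: label in {0, 1}.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))

# A confident correct prediction contributes far less than an error:
easy = focal_loss(0.9, 1)   # well-classified positive
hard = focal_loss(0.1, 1)   # badly misclassified positive
print(easy, hard)
```

With `gamma = 0` and `alpha = 0.5` this reduces (up to a constant factor) to ordinary cross-entropy; increasing `gamma` sharpens the focus on hard examples.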
"Graph neural network-based long method and blob code smell detection" — Science of Computer Programming 243 (2025), Article 103284.
Pub Date: 2025-07-01 · Epub Date: 2025-02-19 · DOI: 10.1016/j.scico.2025.103281
Shaowen Li, Hiroyuki Sato
System programming expects programmers to have fine control over available resources to ensure both the correctness and efficiency of the system. Programming languages designed for this type of task provide little abstraction of the underlying hardware. With greater power to interact directly with the machine comes greater responsibility for programmers to manage the hardware themselves to avoid any undefined behavior. C and C++ have been the long-standing de facto languages in this field as they offer both the programming experience of a modern language and the ability to manipulate low-level resources with the abstraction of pointers. However, this responsibility is demanding for programmers, leading to numerous bugs caused by improper resource management.
Rust is a rising system programming language aiming to combine low-level resource manipulation with high-level resource management. The design philosophy of Rust is to make the compiler play a vital role in resource management. A set of static analyses unique to Rust is performed at compile time to ensure resources are handled correctly without runtime cost. Nevertheless, static analysis is inherently conservative, and Rust addresses this by providing a feature called unsafe Rust, which is exempt from its strict static checks. Various unsafe operations, such as raw pointer dereferencing and foreign function calls, are only permitted within an unsafe code block. This is essential to make the language sufficiently expressive. Nonetheless, Rust's unsafe block only matters statically, for type checking, without any runtime assurance. As a consequence, the effects of unsafe operations within an unsafe block can spread to the surrounding safe code and jeopardize its safety.
We present Safety Block Division (SBD), a completely automatic solution to isolate safe Rust from unsafe Rust. The fundamental design of SBD is its safety data-flow analysis, performed on the Rust intermediate representation (IR) to fully incorporate language features. This distinguishes SBD from previous work: past designs primarily operate on LLVM IR and require manual effort or external tools, whereas SBD is built entirely into the Rust compiler, so no programmer involvement is required. We extensively evaluate SBD on popular Rust crates (libraries). Our experiments reveal that SBD incurs negligible overhead in binary size (a 0.31% increase on average) and runtime (a geometric mean increase of 7.52% across eight benchmarks). We also demonstrate that SBD is capable of protecting against real-world vulnerabilities.
"SBD: Securing safe Rust automatically from unsafe Rust" — Science of Computer Programming 243 (2025), Article 103281.
Pub Date: 2025-07-01 · Epub Date: 2025-02-19 · DOI: 10.1016/j.scico.2025.103279
Peter Rivière, Neeraj Kumar Singh, Yamine Ait-Ameur, Guillaume Dupont
EB4EB, standing for Event-B for Event-B, is a framework that supports the formalisation of Event-B models using first-order logic and set theory, so that it becomes possible to manipulate them as first-class objects. This framework relies on an Event-B algebraic theory, which serves as a meta-theory explicitly formalising all of the features of an Event-B machine. States, events, invariants, variants, and so on are formalised through data types and operators. When this meta-theory is instantiated, an Event-B model becomes a first-order logic and set-theoretic formula, described in an Event-B context.
Because it can handle machine elements as terms in formulas, the EB4EB framework enables the definition of new, specific proof obligations and analyses. Such analyses may then be applied to any EB4EB machine in order to establish advanced properties not natively present in the Event-B method, such as deadlock-freeness or liveness requirements. These analyses are non-intrusive since they do not require altering the machine in order to be performed.
In the previous formalisation of the EB4EB framework, only states and events were handled, limiting the expressive reasoning power of the framework. This paper presents an extension of the EB4EB framework to support parameterised events, an important feature of Event-B. This extension is not straightforward: the typing system supported by Event-B theories is not rich enough to describe it in a constructive manner, as was done for the other Event-B features formalised in EB4EB. The proposed solution consists in defining an axiomatic formalisation of event parameter definitions. We also show that the proof obligations and model analyses we have defined scale to handle event parameters. The approach is illustrated with several case studies we have developed.
"Extending the EB4EB framework with parameterised events" — Science of Computer Programming 243 (2025), Article 103279.
Pub Date: 2025-07-01 · Epub Date: 2025-01-23 · DOI: 10.1016/j.scico.2025.103267
Diego Marmsoler, Billy Thornton
Smart contracts are programs stored on the blockchain, often developed in a high-level programming language, the most popular of which is Solidity. Smart contracts are used to automate financial transactions, and thus bugs can lead to large financial losses. In this paper, we address this problem by describing a verification environment for Solidity in Isabelle/HOL. To this end, we first describe a calculus to reason about Solidity smart contracts. The calculus is formalized in Isabelle/HOL and its soundness is mechanically verified. Then, we verify a theorem which guarantees that all instances of an arbitrary contract type satisfy a corresponding invariant. The theorem can be used to verify invariants for Solidity smart contracts. This is demonstrated by a case study in which we use our approach to verify a simple token implemented in Solidity. Our results show that the framework has the potential to significantly reduce the verification effort compared to verifying directly from the semantics.
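To illustrate the kind of invariant involved, here is an informal executable analogy (the paper's verification is deductive, in Isabelle/HOL, over Solidity semantics — this Python model only checks the property by testing): a simple token and the invariant that transfers preserve the total supply.

```python
# Toy token model: `transfer` must preserve the invariant
# total_supply() == constant, i.e. tokens are moved, never created.
class Token:
    def __init__(self, balances):
        self.balances = dict(balances)

    def total_supply(self):
        return sum(self.balances.values())

    def transfer(self, src, dst, amount):
        """Move `amount` from src to dst; raise (like a Solidity revert)
        on insufficient funds, leaving the state unchanged."""
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

token = Token({"alice": 100, "bob": 50})
before = token.total_supply()
token.transfer("alice", "bob", 30)
assert token.total_supply() == before  # invariant preserved on this run
print(token.balances)  # {'alice': 70, 'bob': 80}
```

A deductive approach like the paper's proves this invariant for all reachable states and all contract instances, rather than for the particular executions a test happens to exercise.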
"Deductive verification of Solidity smart contracts with SSCalc" — Science of Computer Programming 243 (2025), Article 103267.
Pub Date: 2025-07-01 · Epub Date: 2025-01-08 · DOI: 10.1016/j.scico.2025.103262
Neeraj Kumar Singh , Cristina David , Meng Sun , Meng Wang
"Introduction to the TASE 2023 Special Issue" — Science of Computer Programming 243 (2025), Article 103262.