Tasks performed
Head of Studies at the Faculty of Computer Sciences, with responsibility for:
- One-year Programme in Information Technology
- Bachelor in Computer Science
- Bachelor in Digital Media and Design
- Bachelor in Information Systems
- Bachelor in Computer Engineering
- Master in Applied Computer Science
Background
I took on the role of Head of Studies at the faculty in August 2013. Prior to that, I worked as an associate professor at the faculty, with software reliability as my main research area. I hold a PhD from the Department of Mathematics at the University of Oslo, where I developed quantitative methods for assessing the reliability of compound software.
Publications

Samuelsen, Terje; Colomo-Palacios, Ricardo & Kristiansen, Monica Lind (2016). Learning software project management in teams with diverse backgrounds. In F.J. García-Peñalvo (ed.), Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality. Association for Computing Machinery (ACM). ISBN 9781450347471. Paper. pp. 127-131.

Due, Beathe; Kristiansen, Monica; Colomo-Palacios, Ricardo & Dang, Ha The Hien (2015). Introducing big data topics: a multi-course experience report from Norway. In G. Alves (ed.), Proceedings of the 3rd International Conference on Technological Ecosystems for Enhancing Multiculturality. ACM Publications. ISBN 9781450334426. Paper. pp. 565-569.

Kristiansen, Monica; Natvig, Bent & Winther, Rune (2014). Assessing software reliability of multi-state systems. In Raphaël Steenbergen, P.H.A.J.M. van Gelder, S. Miraglia & A.C.W.M. Vrouwenvelder (eds.), Safety, reliability and risk analysis: beyond the horizon. Proceedings of the European Safety and Reliability Conference, ESREL 2013, Amsterdam, the Netherlands, 29 September - 2 October 2013. CRC Press. ISBN 9781138001237. Chapter.

Kristiansen, Monica; Nätt, Tom Heine & Heide, Christian F. (2013). Kvantitativ undersøkelse av mulige sammenhenger mellom vurderingsform og karakterer i høyere utdanning. UNIPED. ISSN 1893-8981. 36(2), pp. 62-80. doi: 10.3402/uniped.v36i2.21516

In this article, we investigate whether better grades are awarded when the assessment form is portfolio than when it is a traditional written examination. To do this, we use the grades that students at Østfold University College (HiØ) obtained from the autumn semester of 2006 through the autumn semester of 2010, in all courses at all departments. The results show that students on average receive better grades when the assessment form is portfolio than when it is a written examination. In our data, the average difference between portfolio assessment and written examination is 0.86 grade steps when all departments are considered together. The largest average difference is found at the Department of Information Technology, where the average difference between portfolio assessment and written examination is as much as 1.25 grade steps. Furthermore, our data show that about 80% of the students obtain a better average grade on their portfolio assessments than on their written examinations when all departments are considered together. At the Department of Information Technology, as many as 88.7% of the students obtain a better average grade on their portfolio assessments than on their written examinations. This is, however, also related to the students' grade point average.

Nätt, Tom Heine; Heide, Christian F. & Kristiansen, Monica (2013). Myter og sannheter om årsaker til studenters prestasjoner i programmering. In Erlend Tøssebro & Hein Meling (eds.), Norsk informatikkonferanse NIK 2013, Universitetet i Stavanger, 18-20 November 2013. Akademika forlag. ISBN 9788232103652. Article. pp. 76-87.
In this article, we study the grades awarded over the last eight years in an introductory programming course within the bachelor programmes in computer engineering, informatics, information systems and digital media production, and the one-year programme in information technology at Østfold University College. In particular, we examine whether the students' grades in this course correlate with their grades in other courses that also require abstract thinking (mathematics and object-oriented programming), as well as with their average grade in all other courses. In addition, we examine whether male students generally perform better than female students, and whether this varies between the study programmes. The results show, among other things, that female students score 0.7 grade steps lower than their male fellow students in the introductory course Innføring i programmering, but that this is far from significant for all study programmes. Furthermore, the results show correlations in the range 0.5-0.6 between the grades in the introductory course and the grades in mathematics, in object-oriented programming, and the students' average grade.
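The correlation analysis described above boils down to computing Pearson's r between two grade series. The following is a minimal illustrative sketch (the grade data below is invented, and the A-F grades are assumed to be mapped to the numbers 5-0; this is not the study's actual dataset):

```python
def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented example: grades in the introductory programming course
# versus grades in mathematics for the same eight students.
intro_prog = [5, 4, 2, 3, 1, 4, 5, 2]
mathematics = [4, 4, 1, 3, 2, 3, 5, 1]
print(f"r = {pearson_r(intro_prog, mathematics):.2f}")
```

A value near 0.5-0.6, as reported in the article, would indicate a moderate positive relationship between performance in the two courses.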

Heide, Christian F.; Kristiansen, Monica Lind & Nätt, Tom Heine (2012). Vurderingsform og karakterbruk. In Trond Aalberg (ed.), Norsk informatikkonferanse NIK 2012, Universitetet i Nordland, 19-21 November 2012. Akademika forlag. ISBN 9788232100132. Article. pp. 61-71.
In this article, we consider courses given at different departments at Østfold University College (HiØ) and investigate whether better grades are awarded in courses assessed by portfolio than in courses assessed by written examination. In addition, we examine to what extent this varies between the departments at HiØ. We further investigate whether weak grades are used less frequently in courses assessed by portfolio than in courses assessed by written examination, and whether this varies between departments. The analysis is based on the grades obtained by students at HiØ from the autumn semester of 2004 through the spring semester of 2012. The results show that average grades are generally better in courses assessed by portfolio than in courses assessed by written examination. The data show that average grades in portfolio-assessed courses lie about 0.6-1.0 grade steps above the average grades in courses assessed by written examination. The results further show that the variance in the students' grades is significantly larger in courses with written examination than in courses with portfolio assessment; the standard deviation in courses with written examination is about 0.26-0.46 grade steps larger than in courses with portfolio assessment. Finally, the results clearly show less frequent use of weak grades in courses assessed by portfolio than in courses assessed by written examination.

Kristiansen, Monica Lind; Natvig, Bent & Winther, Rune (2012). A component-based approach for assessing reliability of compound software. In PSAM & ESREL (ed.), 11th International Probabilistic Safety Assessment and Management Conference and the Annual European Safety and Reliability Conference 2012, 25-29 June 2012, Helsinki, Finland. Curran Associates, Inc. ISBN 9781622764365. Chapter. pp. 1823-1832.

Kristiansen, Monica Lind; Winther, Rune & Natvig, Bent (2012). Establishing prior probability distributions for probabilities that pairs of software components fail simultaneously. In Christophe Bérenguer, Antoine Grall & Carlos Guedes Soares (eds.), Advances in Safety, Reliability and Risk Management: Proceedings of the European Safety and Reliability Conference, ESREL 2011. CRC Press. ISBN 9780415683791. Chapter. pp. 96-104.

Kristiansen, Monica Lind; Winther, Rune & Natvig, Bent (2011). A Bayesian hypothesis testing approach for finding upper bounds for probabilities that pairs of software components fail simultaneously. International Journal of Reliability, Quality and Safety Engineering (IJRQSE). ISSN 0218-5393. 18(3), pp. 209-236. doi: 10.1142/S021853931100410X

Kristiansen, Monica Lind; Winther, Rune & Natvig, Bent (2010). On component dependencies in compound software. International Journal of Reliability, Quality and Safety Engineering (IJRQSE). ISSN 0218-5393. 17(5), pp. 465-493. doi: 10.1142/S0218539310003895

Kristiansen, Monica; Winther, Rune & Natvig, Bent (2010). Identifying possible rules for selecting the most important component dependencies in compound software. In Ben Ale, Ioannis Papazoglou & Enrico Zio (eds.), Reliability, risk and safety: back to the future. CRC Press. ISBN 9780415604277. Article. pp. 1561-1568.

Kristiansen, Monica; Winther, Rune & Simensen, John Eldar (2010). Identifying the most important component dependencies in compound software: an experimental study. In Radim Bris, Sebastián Martorell & C. Guedes Soares (eds.), Reliability, Risk and Safety. Theory and Applications. CRC Press. ISBN 9780415555098. Chapter. pp. 1333-1340.
Since it is practically impossible to include all component dependencies in a system's reliability calculation, a more viable approach is to include only those dependencies that have a significant impact on the assessed reliability. In this paper, the concepts of data-serial and data-parallel components are defined. Then a test system consisting of five components is investigated to identify possible rules for selecting the most important component dependencies. To do this, two techniques are applied: 1) direct calculation and 2) Principal Component Analysis (PCA). The results from the analyses clearly show that including partial dependency information may give substantial improvements in the reliability predictions, compared to assuming independence between all software components. However, this holds only as long as the most important component dependencies are included in the reliability calculations. It is also apparent that dependencies between data-parallel components are far more important than dependencies between data-serial components. Further, the analyses indicate that including only dependencies between data-parallel components may give predictions close to the system's true failure probability. Including only dependencies between data-serial components may, however, result in predictions even worse than those obtained by assuming independence between all software components.

Kristiansen, Monica; Winther, Rune; van der Meulen, Meine & Revilla, Miguel A. (2010). The use of metrics to assess software component dependencies. In Carlos Guedes Soares, Radim Briš & Sebastián Martorell (eds.), Reliability, Risk, and Safety: ESREL 2009. CRC Press. ISBN 9780415555098. pp. 1359-1366.
In this paper, we present an experimental study which investigates the relations between a set of internal software metrics (McCabe's cyclomatic complexity, Halstead volume, program depth, Source Lines Of Code, etc.) and stochastic failure dependency between software components. The experiment was performed by analysing a large collection of program versions submitted against the same program specification. By analysing the available source code, a set of relevant internal software metrics was calculated for each of the program versions. Additionally, we knew whether the program versions would fail or succeed for a large set of possible program inputs. This gave us an ideal situation in which to study stochastic failure dependencies between software components.

Sarshar, Sizarta; Kristiansen, Monica Lind & Sivertsen, Terje (2010). Survey on techniques for modeling of dependencies in the digital I&C design phase. In The American Nuclear Society (ed.), Proceedings of the 7th International Topical Meeting on Nuclear Plant Instrumentation, Control and Human-Machine Interface Technologies. American Nuclear Society. ISBN 9780894488436.

Kristiansen, Monica; Winther, Rune & Simensen, John Eldar (2009). Identifying the most important component dependencies in compound software: an experimental study. In Radim Bris, Carlos Guedes Soares & Sebastian Martorell (eds.), Reliability, Risk and Safety - Theory and Applications (papers presented at the 18th European Safety and Reliability Conference, ESREL 2009, Prague, Czech Republic, September 2009). CRC Press. ISBN 9780415555098. Article. pp. 1333-1340.
Predicting the reliability of software systems based on a component approach is inherently difficult, in particular due to failure dependencies between the software components. In this paper we investigate the possibility of including only partial dependency information, i.e. the effect of including only a subset of the actual component dependencies when assessing a system's failure probability. To do this, we have developed a simulator that mimics the failure behaviour of dependent software components. By using the simulator to assess the system's "true" failure probability, we can compare this to the failure probability predictions we get when various component dependencies are ignored. This makes it possible to evaluate whether partial dependency modelling is worthwhile. In addition, it gives us the possibility to identify the component dependencies that are likely to have the highest impact on the system's predicted failure probability. From the simulation results, we clearly see that ignoring failure dependencies between parallel components has the largest impact on the system's predicted failure probability. Furthermore, our test case indicates that including only dependencies between parallel components might be adequate. In fact, the results from our simulations indicate that including dependencies between components in series has a more unpredictable effect on the system's predicted failure probability.
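The effect described above, that ignoring a positive failure dependency between parallel components badly underestimates the system failure probability, can be sketched with a tiny Monte Carlo simulator. This is a hypothetical minimal example (the two-component structure, the probabilities and the function names are illustrative assumptions, not the paper's actual simulator):

```python
import random

# Hypothetical two-component parallel system: the system fails on a demand
# only if BOTH components fail. Marginal failure probabilities are equal,
# but component B is more likely to fail when A fails (positive dependency).
P_A = 0.10            # P(A fails)
P_B_GIVEN_A = 0.50    # P(B fails | A fails) -- strong dependency
# Choose P(B fails | A ok) so that the marginal P(B fails) is also 0.10:
P_B_GIVEN_NOT_A = (0.10 - P_B_GIVEN_A * P_A) / (1 - P_A)

def simulate(n_demands: int, seed: int = 1) -> float:
    """Estimate the system failure probability by Monte Carlo."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_demands):
        a_fails = rng.random() < P_A
        b_fails = rng.random() < (P_B_GIVEN_A if a_fails else P_B_GIVEN_NOT_A)
        if a_fails and b_fails:          # parallel system: both must fail
            failures += 1
    return failures / n_demands

true_pfd = P_A * P_B_GIVEN_A             # exact dependent value: 0.05
independent_pfd = P_A * 0.10             # independence assumption: 0.01
estimated = simulate(200_000)

print(f"simulated system pfd : {estimated:.4f}")
print(f"exact dependent pfd  : {true_pfd:.4f}")
print(f"independence estimate: {independent_pfd:.4f}")
```

With these assumed numbers, assuming independence underestimates the system failure probability by a factor of five, which is the kind of gap the simulator-based comparison is designed to expose.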

Kristiansen, Monica & Winther, Rune (2007). Assessing reliability of compound software. In Risk, reliability and societal safety: proceedings of the European Safety and Reliability Conference 2007 (ESREL 2007). Taylor & Francis. pp. 1731-1738.
An important challenge when assessing the reliability of compound software is to include dependency aspects in the software reliability models. The objective of this paper is to present an approach showing how knowledge of individual software components, as well as of their structural usage, can be used to establish upper bounds for failure probabilities of compound software. This is of relevance not only for systems consisting of general in-house software components, but also for systems where pre-developed software components, e.g. COTS, are used. Although several approaches to constructing component-based software reliability models have been proposed (Hamlet 2001; Krishnamurthy et al. 1997; Kuball et al. 1999), most of them tend to ignore the failure dependencies that usually exist between software components. The approach suggested in this paper utilizes Bayesian hypothesis testing principles (Cukic et al. 2003; Kristiansen 2005; Kristiansen et al. 2004; Smidts et al. 2002), which consider both prior information regarding the software components and testing.

Winther, Rune & Kristiansen, Monica (2006). On the modelling of failure dependencies between software components. In Safety and reliability for managing risk. Balkema. pp. 1443-1450.
The problem of assessing the reliability of software has been a research topic for more than 30 years, and several successful methods for predicting the reliability of an individual piece of software based on testing have been presented. An important reason why we are still struggling with component-based software reliability methods is that software components can rarely be assumed to fail independently. In this paper, we review some of the component-based approaches that have been proposed, with special emphasis on the handling of dependencies. We then discuss the implications of this problem when assessing the reliability of compound software, hoping to provide improved understanding and thus support the development of improved assessment methods.

Kristiansen, Monica (2005). Finding Upper Bounds for Software Failure Probabilities - Experiments and Results. Lecture Notes in Computer Science. ISSN 0302-9743. LNCS 3688, pp. 179-193.
This paper looks into some aspects of using Bayesian hypothesis testing to find upper bounds for software failure probabilities, considering prior information regarding the software component in addition to testing. The paper shows how different choices of prior probability distribution for a software component's failure probability influence the number of tests required to obtain adequate confidence in the component. In addition, it evaluates different choices of prior probability distribution based on their relevance in a software context, emphasising the interpretations of the different priors. As a starting point, this paper concentrates on the assessment of single software components, but the proposed approach will later be extended to assess systems consisting of multiple software components. Software components include both general in-house software components and pre-developed software components (e.g. COTS, SOUP, etc.).

Kristiansen, Monica & Winther, Rune (2004). Finding Upper Bounds for Dependencies between Software Components by Using Bayesian Hypothesis Testing. ?. (1), pp. 9-15.

This paper presents ongoing work on an approach that is intended to make it more feasible to include software component dependencies in software reliability models. By using an approach proposed by Smidts et al. [11] for estimating an upper bound for a system's probability of failure on demand (pfd), consisting of statistical testing and Bayesian hypothesis testing, we believe that prior probabilities for simultaneous failures of sets of components can be confirmed with given confidence levels. Using this approach we will possibly, with a comparatively small effort, be able to find upper bounds for the probabilities of simultaneous failures of sets of components, thus making it possible to include dependency aspects in the reliability models. This has relevance not only for general software component models but also for assessing systems where pre-developed software (PDS), e.g. COTS, is used.
Other works

Due, Beathe; Kristiansen, Monica; Dang, Ha The Hien & Colomo-Palacios, Ricardo (2015). Introducing Big Data topics: a multi-course experience report from Norway.

Kristiansen, Monica; Holone, Harald & Natvig, Bent (2014). A Component-based Approach for Assessing Reliability of Compound Software.

Kristiansen, Monica; Natvig, Bent & Winther, Rune (2013). Assessing software reliability of multi-state systems.

Nätt, Tom Heine; Heide, Christian F. & Kristiansen, Monica (2013). Myter og sannheter om årsaker til studenters prestasjoner i programmering.

Heide, Christian F.; Kristiansen, Monica Lind & Nätt, Tom Heine (2012). Vurderingsform og karakterbruk.

Kristiansen, Monica Lind; Natvig, Bent & Winther, Rune (2012). A component-based approach for assessing reliability of compound software.

Kristiansen, Monica Lind (2011). A component-based approach for assessing reliability of compound software. Series of dissertations submitted to the Faculty of Mathematics and Natural Sciences, University of Oslo. No. 1081.

Kristiansen, Monica Lind; Winther, Rune & Natvig, Bent (2011). A Bayesian hypothesis testing approach for finding upper bounds for probabilities that pairs of software components fail simultaneously. Statistical Research Report (Universitetet i Oslo, Matematisk institutt). No. 1.
Predicting the reliability of software systems based on a component-based approach is inherently difficult, in particular due to failure dependencies between software components. One possible way to assess and include dependency aspects in software reliability models is to find upper bounds for probabilities that software components fail simultaneously and then include these in the reliability models. Earlier research has shown that including partial dependency information may give substantial improvements in predicting the reliability of compound software compared to assuming independence between all software components. Furthermore, it has been shown that including dependencies between pairs of data-parallel components may give predictions close to the system's true reliability. In this paper, a Bayesian hypothesis testing approach for finding upper bounds for probabilities that pairs of software components fail simultaneously is described. This approach consists of two main steps: 1) establishing prior probability distributions for probabilities that pairs of software components fail simultaneously, and 2) updating these prior probability distributions by performing statistical testing. The focus here is on the first step, and two possible procedures for establishing a prior probability distribution for the probability that a pair of software components fails simultaneously are proposed.

Kristiansen, Monica Lind; Winther, Rune & Natvig, Bent (2011). Establishing prior probability distributions for probabilities that pairs of software components fail simultaneously.

Kristiansen, Monica; Winther, Rune & Natvig, Bent (2010). Identifying possible rules for selecting the most important component dependencies in compound software.
Since it is practically impossible to include all component dependencies in a system's reliability calculation, a more viable approach is to include only those dependencies that have a significant impact on the assessed reliability. In this paper, the concepts of data-serial and data-parallel components are defined. Then a test system consisting of five components is investigated to identify possible rules for selecting the most important component dependencies. To do this, two techniques are applied: 1) direct calculation and 2) Principal Component Analysis (PCA). The results from the analyses clearly show that including partial dependency information may give substantial improvements in the reliability predictions, compared to assuming independence between all software components. However, this holds only as long as the most important component dependencies are included in the reliability calculations. It is also apparent that dependencies between data-parallel components are far more important than dependencies between data-serial components. Further, the analyses indicate that including only dependencies between data-parallel components may give predictions close to the system's true failure probability. Including only dependencies between data-serial components may, however, result in predictions even worse than those obtained by assuming independence between all software components.

Kristiansen, Monica; Winther, Rune & Natvig, Bent (2010). On component dependencies in compound software. Statistical Research Report (Universitetet i Oslo, Matematisk institutt). No. 5.
Predicting the reliability of software systems based on a component approach is inherently difficult, in particular due to failure dependencies between the software components. Since it is practically difficult to include all component dependencies in a system's reliability calculation, a more viable approach is to include only those dependencies that have a significant impact on the assessed system reliability. This paper starts out by defining two new concepts: data-serial and data-parallel components. These concepts are illustrated on a simple compound software system, and it is shown how dependencies between data-serial and data-parallel components, as well as combinations of these, can be expressed using conditional probabilities. Secondly, this paper illustrates how the components' marginal reliabilities put direct restrictions on the components' conditional probabilities. It is also shown that there are far fewer degrees of freedom than first anticipated when it comes to conditional probabilities. Finally, this paper investigates three test cases, each representing a well-known software structure, to identify possible rules for selecting the most important component dependencies. To do this, three different techniques are applied: 1) direct calculation, 2) Birnbaum's measure and 3) Principal Component Analysis (PCA). The results from the analyses clearly show that including partial dependency information may give substantial improvements in the reliability predictions, compared to assuming independence between all software components.
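The observation above, that the components' marginal probabilities directly restrict the admissible conditional probabilities, can be made concrete with the classical Fréchet bounds. The following sketch is an illustrative assumption (function name and numbers are hypothetical), not the paper's derivation:

```python
# Given marginals pA = P(A fails) and pB = P(B fails), the joint probability
# P(A and B fail) must lie in [max(0, pA + pB - 1), min(pA, pB)]
# (the Frechet bounds), which in turn bounds P(B fails | A fails).

def conditional_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    """Admissible range for P(B fails | A fails) given the marginals."""
    joint_lo = max(0.0, p_a + p_b - 1.0)   # lower Frechet bound on the joint
    joint_hi = min(p_a, p_b)               # upper Frechet bound on the joint
    return joint_lo / p_a, joint_hi / p_a

lo, hi = conditional_bounds(0.10, 0.05)
print(f"P(B fails | A fails) must lie in [{lo:.2f}, {hi:.2f}]")
```

So with these assumed marginals, the conditional failure probability cannot exceed 0.5 no matter how strongly the components depend on each other, illustrating the reduced degrees of freedom the summary refers to.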

Kristiansen, Monica; Winther, Rune & Simensen, John Eldar (2009). Identifying the most important component dependencies in compound software: an experimental study.
Predicting the reliability of software systems based on a component approach is inherently difficult, in particular due to failure dependencies between the software components. In this paper we investigate the possibility of including only partial dependency information, i.e. the effect of including only a subset of the actual component dependencies when assessing a system's failure probability. To do this, we have developed a simulator that mimics the failure behaviour of dependent software components. By using the simulator to assess the system's "true" failure probability, we can compare this to the failure probability predictions we get when various component dependencies are ignored. This makes it possible to evaluate whether partial dependency modelling is worthwhile. In addition, it gives us the possibility to identify the component dependencies that are likely to have the highest impact on the system's predicted failure probability. From the simulation results, we clearly see that ignoring failure dependencies between parallel components has the largest impact on the system's predicted failure probability. Furthermore, our test case indicates that including only dependencies between parallel components might be adequate. In fact, the results from our simulations indicate that including dependencies between components in series has a more unpredictable effect on the system's predicted failure probability.

Kristiansen, Monica; Winther, Rune; van der Meulen, Meine & Revilla, Miguel A. (2009). The use of metrics to assess software component dependencies.
In this paper, we present an experimental study which investigates the relations between a set of internal software metrics (McCabe's cyclomatic complexity, Halstead volume, program depth, Source Lines Of Code, etc.) and stochastic failure dependency between software components. The experiment was performed by analysing a large collection of program versions submitted against the same program specification. By analysing the available source code, a set of relevant internal software metrics was calculated for each of the program versions. Additionally, we knew whether the program versions would fail or succeed for a large set of possible program inputs. This gave us an ideal situation in which to study stochastic failure dependencies between software components.

Winther, Rune & Kristiansen, Monica (2007). Assessing reliability of compound software.

Winther, Rune & Kristiansen, Monica (2007). Further considerations of dependency aspects in software reliability.
In the first part of the report, the problem of modelling software component dependencies when assessing the reliability of compound software is addressed. We suggest that the mechanisms that cause dependent failure behaviour can be split into two distinct categories:
- Development-cultural aspects (DC-aspects): factors that cause different people, tools, methods, etc. to make the same mistakes.
- Structural aspects (S-aspects): factors that allow a failure in one component to affect the execution of another component.
We believe that some of the mechanisms can be treated better by more detailed models at the failure-mode level, but realize that this will require much more information about the components and the system than is usually assumed available in other methods. In the second part of the report, ideas on how the Bayesian hypothesis testing approach can be extended to assess compound software are discussed further. Through the research, two sources of information relevant for assessing failure dependencies between software components are identified: information for assessing the reliabilities of single software components, and information on how the software components are used in the compound software. In addition, ideas on how to identify the most important component failure dependencies, i.e. those dependencies that have the most impact on system reliability, are presented.

Winther, Rune & Kristiansen, Monica (2006). On the modelling of failure dependencies between software components.
The problem of assessing the reliability of software has been a research topic for more than 30 years, and several successful methods for predicting the reliability of an individual piece of software based on testing have been presented. An important reason why we are still struggling with component-based software reliability methods is that software components can rarely be assumed to fail independently. In this paper, we review some of the component-based approaches that have been proposed, with special emphasis on the handling of dependencies. We then discuss the implications of this problem when assessing the reliability of compound software, hoping to provide improved understanding and thus support the development of improved assessment methods.

Kristiansen, Monica & Winther, Rune (2005). Finding Upper Bounds for Software Failure Probabilities - Experiments and Results.
This report looks into some aspects of using Bayesian hypothesis testing to find upper bounds for software failure probabilities. In the first part, the report evaluates the Bayesian hypothesis testing approach for finding upper bounds for failure probabilities of single software components. The report shows how different choices of prior probability distribution for a software component's failure probability influence the number of tests required to obtain adequate confidence in the component. The evaluation investigates both the effect of the shape of the prior distribution and the effect of one's prior confidence in the software component. In addition, different choices of prior probability distribution are discussed based on their relevance in a software context. In the second part, ideas on how the Bayesian hypothesis testing approach can be extended to assess systems consisting of multiple software components are given. One of the main challenges when assessing such systems is to include dependency aspects in the software reliability models. However, different types of failure dependencies between software components must be modelled differently. Identifying the different types of failure dependencies is therefore an important condition for choosing a prior probability distribution that correctly reflects one's prior belief in the probability of software components failing dependently. In this report, software components include both general in-house software components and pre-developed software components (e.g. COTS, SOUP, etc.).

Helminen, Atte; Gran, Bjørn Axel; Kristiansen, Monica & Winther, Rune (2004). Use of Operational Data for the Assessment of Pre-Existing Software.
To build sufficient confidence in the reliability of the safety systems of nuclear power plants, all available sources of information should be used. One important data source is the operational experience collected for the system. Operational experience is particularly applicable for systems of pre-existing software. Even though systems and devices involving pre-existing software are not considered for the functions of the highest safety levels of nuclear power plants, they will most probably be introduced in functions of lower safety levels and in non-safety-related applications. In the paper we briefly discuss the use of operational experience data for the reliability assessment of pre-existing software in general, and the role of pre-existing software in relation to safety applications. We then discuss the modelling of operational profiles, the application of expert judgement on operational profiles, and the need for a realistic test case. Finally, we discuss the application of operational experience data in Bayesian statistics.

Kristiansen, Monica & Winther, Rune (2004). Finding Upper Bounds for Dependencies between Software Components by using Bayesian Hypothesis Testing.
This paper presents ongoing work on an approach that is intended to make it more feasible to include software component dependencies in software reliability models. Based on an approach proposed by Smidts et al. [4] for estimating an upper bound for a system's probability of failure on demand (pfd), we believe that prior probabilities for simultaneous failures of sets of components can be confirmed with given confidence levels. By further developing this approach we will, possibly with a comparatively small effort, be able to find upper bounds for the probabilities of simultaneous failures of sets of components, thus making it possible to include dependency aspects in the reliability models.
Published June 12, 2018 4:15 PM. Last modified Sep. 13, 2019 12:56 PM.