Category Archives: ppgsigc

Do research institutes benefit from their network positions in research collaboration networks with industries or/and universities?


Publication date: Available online 6 December 2017
Source: Technovation
Author(s): Kaihua Chen, Yi Zhang, Guilong Zhu, Rongping Mu
There is scarce empirical evidence on the impact of inter-organizational collaboration across research institutes, industries and/or universities on the scientific performance of research institutes. This paper fills this gap by examining how research institutes’ bilateral/trilateral collaborations with industries and/or universities influence their research outputs from a network perspective. We construct a unique dataset based on the Chinese Academy of Sciences’ inter-organizational research collaboration networks with industries and/or universities, which enables us to build three inter-organizational research networks (homogeneous, heterogeneous and hybrid) as our multi-scenario sample. Our study confirms that the scientific performance of research institutes is significantly affected by their network positions in research collaboration networks with industries and/or universities. Specifically, in the homogeneous “University-Research Institute” (UR) collaboration network, the degree centrality and structural holes of research institutes affect their scientific performance in an inverted U-shaped manner and a positive linear manner, respectively. By contrast, in both the heterogeneous “Industry-Research Institute” (IR) and the hybrid “Industry-University-Research Institute” (IUR) collaboration networks, the degree centrality and structural holes of research institutes affect their scientific performance in a positive linear manner and an inverted U-shaped manner, respectively. Our findings indicate that the impact pattern of innovative organizations’ network positions on their performance likely varies with network structure and composition across inter-organizational contexts.
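To make the two network-position measures in this abstract concrete, here is a minimal sketch computing degree centrality and a simple structural-holes indicator (Burt's "effective size") on a small, entirely hypothetical collaboration network; the node names and edges are invented for illustration and are not from the study's dataset.

```python
# Toy illustration of two network-position measures: degree centrality
# and Burt's "effective size" (a structural-holes indicator) on an
# undirected, unweighted graph stored as adjacency sets.

def degree_centrality(graph):
    """Degree centrality: degree divided by (n - 1)."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def effective_size(graph, v):
    """Burt's effective size: number of alters minus their average
    number of ties to the other alters (more non-redundant contacts
    means more structural holes spanned)."""
    alters = graph[v]
    k = len(alters)
    if k == 0:
        return 0.0
    # total ties among alters, counted from each alter's side
    ties = sum(len(graph[a] & alters) for a in alters)
    return k - ties / k

# Hypothetical network: one research institute (RI), two universities
# (U1, U2) that also collaborate with each other, and two firms (F1, F2).
graph = {
    "RI": {"U1", "U2", "F1", "F2"},
    "U1": {"RI", "U2"},
    "U2": {"RI", "U1"},
    "F1": {"RI"},
    "F2": {"RI"},
}

print(degree_centrality(graph)["RI"])  # 4 / (5 - 1) = 1.0
print(effective_size(graph, "RI"))     # 4 - 2/4 = 3.5 (U1-U2 tie is redundant)
```

The institute's high effective size here reflects that most of its contacts are not connected to one another, i.e. it spans structural holes between the firms and the universities.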

http://ift.tt/2nAsghW

A Comparative Study of Different Source Code Metrics and Machine Learning Algorithms for Predicting Change Proneness of Object Oriented Systems. (arXiv:1712.07944v1 [cs.SE])

Change-prone classes or modules are software components in the source code that are likely to change in the future. Change-proneness prediction is useful to the maintenance team, as they can optimize and focus their testing resources on the modules with a higher likelihood of change. A change-proneness prediction model can be built by using source code metrics as predictors, or features, within a machine learning classification framework. In this paper, twenty-one source code metrics are computed to develop a statistical model for predicting change-prone modules. Since the performance of the change-proneness model depends on the source code metrics, they are used as independent variables, or predictors, for the model. Eleven different feature selection techniques (including the use of all 21 proposed source code metrics described in the paper) are applied to remove irrelevant features and select the best set of features. The effectiveness of each set of source code metrics is evaluated using eighteen different classification techniques and three ensemble techniques. Experimental results demonstrate that the model based on the selected set of source code metrics, after applying feature selection techniques, achieves better results than the model using all source code metrics as predictors. Our experimental results also reveal that the predictive model developed using LSSVM-RBF yields better results than the other classification techniques.
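The pipeline the abstract describes (metrics as features, feature selection, then classification) can be sketched as follows. The paper uses 21 metrics, 11 selection techniques, and an LSSVM-RBF classifier; this toy version instead uses invented metric values, a simple correlation-based filter, and 1-nearest-neighbour, purely to show the shape of the pipeline.

```python
# Sketch of a change-proneness pipeline: rank metrics by correlation with
# the label, keep the top k, then classify with 1-nearest-neighbour.
# Data, metric names, and thresholds are hypothetical.

def correlation(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(X, y, k):
    """Filter selection: keep the k metrics most correlated with the label."""
    scores = [abs(correlation([row[j] for row in X], y))
              for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

def predict_1nn(X, y, cols, query):
    """1-NN using squared Euclidean distance on the selected columns only."""
    def dist(row):
        return sum((row[j] - query[j]) ** 2 for j in cols)
    return y[min(range(len(X)), key=lambda i: dist(X[i]))]

# Hypothetical training data: rows are classes, columns are metrics
# (e.g. lines of code, weighted methods per class, coupling).
X = [[120, 9, 4], [300, 22, 9], [80, 5, 2], [410, 30, 11]]
y = [0, 1, 0, 1]  # 1 = change-prone

cols = select_features(X, y, 2)
print(predict_1nn(X, y, cols, [350, 25, 10]))  # large, coupled class -> 1
```

The design point the abstract makes carries over even to this toy: the classifier only ever sees the selected columns, so irrelevant metrics cannot degrade its distance computations.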

http://ift.tt/2BvXbSY

The Impact of Tailoring Criteria on Agile Practices Adoption: A Survey with Novice Agile Practitioners in Brazil


Publication date: Available online 13 December 2017
Source: Journal of Systems and Software
Author(s): Amadeu Silveira Campanelli, Ronaldo Darwich Camilo, Fernando Silva Parreiras
The software development industry adopts agile methods in different ways, considering contextual requirements. To fulfill organizational needs, adoption strategies involve tailoring agile methods. However, tailoring to the context of the organization remains an open problem. Literature on criteria for adopting software development methods exists, but not specifically for agile methods. Given this scenario, the following research question arises: what is the impact of software method tailoring criteria on the adoption of agile practices? To answer this question, we conducted a survey among agile practitioners in Brazil to gather data about the importance of the tailoring criteria and the agile practices adopted. A model for agile practices adoption based on the tailoring criteria is proposed using the results of the survey, whose respondents were mostly novice agile practitioners. The proposed model was validated using PLS-SEM (partial least squares structural equation modeling) and the survey data. Results show that the adoption of agile practices was influenced by criteria such as the external environment, previous knowledge and the internal environment. Results also indicate that organizations tend to use hybrid/custom software methods and select agile practices according to their needs.

http://ift.tt/2BikjU2

Predicting bug-fixing time: A replication study using an open source software project

Publication date: February 2018
Source: Journal of Systems and Software, Volume 136
Author(s): Shirin Akbarinasaji, Bora Caglayan, Ayse Bener
Background: On projects with tight schedules and limited budgets, it may not be possible to resolve all known bugs before the next release. Estimates of the time required to fix known bugs (the “bug fixing time”) would assist managers in allocating bug fixing resources when faced with a high volume of bug reports. Aim: In this work, we aim to replicate a model for predicting bug fixing time with open source data from Bugzilla Firefox. Method: To perform the replication study, we follow the replication guidelines put forth by Carver [J. C. Carver, Towards reporting guidelines for experimental replications: a proposal, in: 1st International Workshop on Replication in Empirical Software Engineering, 2010]. Similar to the original study, we apply a Markov-based model to predict the number of bugs that can be fixed monthly. In addition, we employ Monte-Carlo simulation to predict the total fixing time for a given number of bugs. We then use the k-nearest neighbors algorithm to classify fixing times as slow or fast. Result: The results of the replicated study on Firefox are consistent with those of the original study and show that there are similarities in the bug handling behaviour of both systems. Conclusion: We conclude that the model estimating bug fixing time is robust enough to be generalized, and we can rely on it in our future research.

http://ift.tt/2iFsdj8

Root cause analysis in IT infrastructures using ontologies and abduction in Markov Logic Networks

Publication date: Available online 13 November 2017
Source: Information Systems
Author(s): Joerg Schoenfisch, Christian Meilicke, Janno von Stülpnagel, Jens Ortmann, Heiner Stuckenschmidt
Information systems play a crucial role in most of today’s business operations. High availability and reliability of services and hardware and, in the case of outages, short response times are essential. Thus, a high degree of tool support and automation in risk management is desirable to decrease downtime. We propose a new approach for calculating the root cause of an observed failure in an IT infrastructure. Our approach is based on abduction in Markov Logic Networks. Abduction aims to find an explanation for a given observation in the light of some background knowledge. In failure diagnosis, the explanation corresponds to the root cause, the observation to the failure of a component, and the background knowledge to the dependency graph extended by potential risks. We apply a method to extend a Markov Logic Network in order to conduct abductive reasoning, which is not naturally supported in this formalism. Our approach exhibits a high degree of reusability and facilitates modeling by using ontologies as background knowledge. This enables users without specific knowledge of a concrete infrastructure to gain viable insights in the case of an incident. We implemented the method in a tool and illustrate its suitability for root cause analysis by applying it to a sample scenario and testing its scalability on randomly generated infrastructures.
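The correspondence the abstract draws (explanation = root cause, observation = failed component, background knowledge = dependency graph plus risks) can be illustrated with a much simpler stand-in for the Markov Logic Network: rank every component whose failure could propagate to the observed one by an invented prior failure probability. The graph, component names, and priors below are hypothetical, and this toy keeps only the graph-reachability and prior-weight idea, not the weighted-rule inference of the paper.

```python
# Toy abductive root-cause ranking: among all components whose failure
# could explain the observed failure via dependency edges, return the one
# with the highest prior failure probability. All data is hypothetical.

# "A depends on B" edges: a failure of B can explain a failure of A.
depends_on = {
    "web_app": ["app_server", "database"],
    "app_server": ["vm_host"],
    "database": ["vm_host", "storage"],
}

# Invented prior failure probabilities (the "potential risks").
prior = {"vm_host": 0.02, "storage": 0.05, "database": 0.01,
         "app_server": 0.01, "web_app": 0.005}

def candidate_causes(component, graph):
    """All components whose failure could explain `component` failing,
    found by walking the dependency edges transitively."""
    causes, stack = set(), [component]
    while stack:
        c = stack.pop()
        if c not in causes:
            causes.add(c)
            stack.extend(graph.get(c, []))
    return causes

def best_explanation(observed_failure):
    """Most probable explanation: the highest-prior reachable cause."""
    return max(candidate_causes(observed_failure, depends_on),
               key=lambda c: prior[c])

print(best_explanation("web_app"))  # -> "storage"
```

In the full approach, the priors and dependency rules become weighted first-order formulas, so the "most probable explanation" is computed by MLN inference rather than a single max over priors.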

http://ift.tt/2iygL8Q

Motivating the Contributions: An Open Innovation Perspective on What to Share as Open Source Software


Publication date: Available online 2 October 2017
Source: Journal of Systems and Software
Author(s): J. Linåker, H. Munir, K. Wnuk, C.E. Mols
Open Source Software (OSS) ecosystems have reshaped how software-intensive firms develop products and deliver value to customers. However, firms still need support for strategic product planning in terms of what to develop internally and what to share as OSS. Existing models accurately capture commoditization in the software business but lack operational support for deciding what contribution strategy to employ, in terms of what and when to contribute. This study proposes a Contribution Acceptance Process (CAP) model from which firms can adopt contribution strategies that align with product strategies and planning. In a design science influenced case study executed at Sony Mobile, the CAP model was iteratively developed in close collaboration with the firm’s practitioners. The CAP model helps classify artifacts according to business impact and control complexity so that firms may estimate and plan whether an artifact should be contributed. Further, an information meta-model is proposed that helps operationalize the CAP model in the organization. The CAP model provides an operational open innovation (OI) perspective on what firms involved in OSS ecosystems should share, helping them motivate contributions through the creation of contribution strategies. The goal is to help maximize return on investment and sustain the needed influence in OSS ecosystems.
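The abstract's core classification step, mapping an artifact's business impact and control complexity to a contribution strategy, can be sketched as a simple two-by-two decision. The thresholds, scale, and strategy names below are our own simplification for illustration; the paper's actual CAP model is considerably richer.

```python
# Simplified sketch of CAP-style artifact classification: two dimensions
# (business impact, control complexity) map to a contribution strategy.
# Scale, thresholds, and strategy labels are illustrative assumptions.

def contribution_strategy(business_impact, control_complexity):
    """Both inputs on a 1-5 scale; 3 is an illustrative threshold."""
    high_impact = business_impact >= 3
    high_control = control_complexity >= 3
    if not high_impact and not high_control:
        return "contribute openly"      # commodity artifact: share as OSS
    if not high_impact and high_control:
        return "contribute selectively"
    if high_impact and not high_control:
        return "contribute partially, keep differentiators"
    return "keep internal"              # strategic and hard to control

print(contribution_strategy(1, 1))  # -> "contribute openly"
print(contribution_strategy(5, 5))  # -> "keep internal"
```

The value of making the mapping explicit, as the CAP model does, is that the decision of what to share stops being ad hoc and can be planned alongside product strategy.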

http://ift.tt/2g5xEDq

Test case prioritization approaches in regression testing: A systematic literature review

Publication date: Available online 1 September 2017
Source: Information and Software Technology
Author(s): Muhammad Khatibsyarbini, Mohd Adham Isa, Dayang N.A. Jawawi, Rooster Tumeng
Context: Software quality can be assured through the software testing process. However, the testing phase is expensive, as it is time-consuming. By scheduling the execution order of test cases through a prioritization approach, software testing efficiency can be improved, especially during regression testing. Objective: Test case prioritization (TCP) is a notable step in constructing an effective software testing environment and increasing a system’s commercial value. The main idea of this review is to examine and classify current test case prioritization approaches based on the articulated research questions. Method: A set of search keywords, with appropriate repositories, was used to extract the most important studies that fulfill all the defined criteria, classified under the journal, conference paper, symposium and workshop categories. 69 primary studies were selected through the review strategy. Results: The primary studies comprised 40 journal articles, 21 conference papers, three workshop articles, and five symposium articles. The results suggest that TCP approaches are still broadly open to improvement. Each TCP approach has its own potential value, advantages, and limitations. Additionally, we found that variations in the starting point of the TCP process among the approaches provide different timelines and benefits, allowing project managers to choose the approaches that suit the project schedule and available resources. Conclusion: Test case prioritization has already been discussed considerably in the software testing domain. However, quite a number of existing prioritization techniques can still be improved, especially in the data used and the execution process of each approach.
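One widely studied family of prioritization approaches in this literature is coverage-based greedy ordering. As an illustration, here is the classic "additional coverage" strategy: repeatedly pick the test that covers the most not-yet-covered code elements. The coverage data is hypothetical.

```python
# "Additional coverage" greedy test case prioritization: order tests so
# that each next test adds the most new coverage. Coverage sets are
# hypothetical; ties are broken by test name for determinism.

def additional_greedy(coverage):
    """coverage: test name -> set of covered elements. Returns an ordering."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        # pick the test adding the most new coverage (ties -> first by name)
        best = max(sorted(remaining),
                   key=lambda t: len(remaining[t] - covered))
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {
    "t1": {"a", "b"},
    "t2": {"b", "c", "d"},
    "t3": {"e"},
    "t4": {"a", "e"},
}
print(additional_greedy(coverage))  # -> ['t2', 't4', 't1', 't3']
```

Note how t4 is scheduled before t1 even though both cover two elements: after t2 runs, t4 still adds two new elements while t1 adds only one, which is exactly the property that makes the "additional" variant effective for early fault detection.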

http://ift.tt/2wWy2JS

AI will fundamentally change how we manage content

Content management is about to undergo a foundational shift as AI and machine learning bring long-sought order to enterprise content. As the volume of content has increased, the ability to manage it all seems to have eluded us. Ironic, since Content Management Systems were supposed to solve the enterprise content organization problem and help prevent your employees from reinventing the wheel.

http://ift.tt/2vyG9fa

Modeling and measuring attributes influencing DevOps implementation in an enterprise using structural equation modeling

Publication date: Available online 24 July 2017
Source: Information and Software Technology
Author(s): Viral Gupta, P.K. Kapur, Deepak Kumar
Context: DevOps refers to a set of principles that advocate tight integration between development and operations to achieve higher quality with faster turnaround. It is paramount to assess and measure the DevOps attributes in an enterprise. The literature provides references to these attributes, but the detailed assessment of these attributes and the determination of the maturity of a DevOps implementation remain a challenge. Objective: This paper provides important insights for practitioners on assessing and measuring DevOps attributes using statistical analysis and two-way assessment. The proposed framework facilitates the detailed assessment of eighteen attributes to identify key independent attributes and measure them to determine the maturity of a DevOps implementation in an enterprise. Method: The relationships among the eighteen attributes were examined; a structural model was established using exploratory and confirmatory factor analysis, and the model was validated using structural equation modelling. Key independent attributes were identified that influence the other attributes and the overall DevOps implementation. Using two-way assessment, the key independent attributes were measured and the maturity of the DevOps implementation in an enterprise was determined. Results: Using exploratory and confirmatory factor analysis, the 18 attributes were categorized under 4 latent variables, namely Automation, Source Control, Cohesive Teams and Continuous Delivery. Using structural equation modelling, 10 key independent attributes were determined that influence the other attributes and the overall DevOps implementation. Two-way assessment was applied to measure the key independent attributes, and 4 of them were found to be performing below the threshold level. Corrective actions were taken by the management team, and the revised measurement of these attributes demonstrated a 40% improvement in the maturity level of the DevOps implementation. Conclusion: The proposed framework contributes significantly to the field of DevOps by enabling practitioners to conduct a detailed assessment and measurement of DevOps attributes and determine the maturity of a DevOps implementation, helping to achieve higher quality.
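The final measurement step described above, scoring each key attribute, flagging those below a threshold, and rolling the scores up into a maturity figure, can be sketched as follows. The attribute names, scores, scale, and threshold are invented for illustration; the paper's two-way assessment is a fuller procedure than this roll-up.

```python
# Sketch of attribute-level maturity measurement: flag attributes scoring
# below a threshold and compute overall maturity as the share of attributes
# at or above it. All names and numbers are hypothetical.

THRESHOLD = 3.0  # illustrative threshold on a 1-5 scale

scores = {
    "automated builds": 4.2,
    "version-controlled configs": 3.8,
    "automated testing": 2.6,
    "deployment frequency": 2.4,
}

below = [a for a, s in scores.items() if s < THRESHOLD]
maturity = sum(1 for s in scores.values() if s >= THRESHOLD) / len(scores)

print(below)              # attributes needing corrective action
print(f"{maturity:.0%}")  # -> 50%
```

Re-running the same measurement after corrective actions, as the study's management team did, then gives a directly comparable before/after maturity figure.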

http://ift.tt/2wmAFcd

Key Factors that Influence Task Allocation in Global Software Development

Publication date: Available online 5 July 2017
Source: Information and Software Technology
Author(s): Sajjad Mahmood, Sajid Anwer, Mahmood Niazi, Mohammad Alshayeb, Ita Richardson
Context: Planning and managing task allocation in Global Software Development (GSD) projects is both critical and challenging. To date, a number of models that support task allocation have been proposed, including cost models and risk-based multi-criteria optimization models. Objective: The objective of this paper is to identify the factors that influence task allocation in the GSD project management context. Method: First, we implemented a formal Systematic Literature Review (SLR) approach and identified a set of factors that influence task allocation in GSD projects. Second, a questionnaire survey was developed based on the SLR, and we collected feedback from 62 industry practitioners. Results: The findings of this combined SLR and questionnaire survey indicate that site technical expertise, time zone difference, resource cost, task dependency, task size and vendor reliability are the key criteria for the distribution of work units in a GSD project. The results of the t-test show no significant difference between the findings of the SLR and those of the questionnaire survey. However, the industry data indicate that resource cost and task dependency are more important in a centralized GSD project structure, while task size is a key factor in a decentralized GSD project structure. Conclusion: GSD organizations should consider the identified task allocation factors when managing their global software development activities, to better understand, plan and manage work distribution decisions.
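A common way to operationalize factors like these is a weighted multi-criteria score per candidate site. The sketch below does that with the study's key criteria; the weights and site scores are entirely invented, and "cost" and "timezone_fit" are entered as already-inverted scores (higher is better) so all criteria point the same way.

```python
# Weighted-sum multi-criteria scoring of candidate sites for a work unit,
# using the key factors the study identifies. Weights and per-site scores
# are hypothetical; all criteria are normalized to [0, 1], higher = better.

weights = {"expertise": 0.30, "timezone_fit": 0.15, "cost": 0.25,
           "low_dependency": 0.15, "reliability": 0.15}

sites = {
    "site_A": {"expertise": 0.9, "timezone_fit": 0.4, "cost": 0.3,
               "low_dependency": 0.7, "reliability": 0.8},
    "site_B": {"expertise": 0.6, "timezone_fit": 0.8, "cost": 0.9,
               "low_dependency": 0.5, "reliability": 0.7},
}

def score(site):
    """Weighted sum of a site's criterion scores."""
    return sum(weights[k] * sites[site][k] for k in weights)

best = max(sites, key=score)
print(best)  # cheaper, better-overlapping site wins under these weights
```

Shifting weight onto resource cost and task dependency, as the study's centralized projects do, or onto task size for decentralized ones, would simply mean re-tuning the `weights` dictionary per project structure.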

http://ift.tt/2tCF34l