Recommendation System in Software Engineering

Recommendation System (RS)

An RS is a software tool that provides useful suggestions about particular items in order to help the user make a decision, for instance which book to buy or what music to listen to (Ricci et al., 2011). The need for RSs emerged especially with the introduction of e-commerce web sites and the explosive growth of information available on the web. In this perspective, an item denotes whatever the RS recommends to a user, such as a CD, a book, or a movie. There are two types of recommendations: personalized and non-personalized recommendations (Ricci et al., 2011).
Personalized recommendations are usually presented as a ranked list of items. This ranking can be seen as a prediction of the most useful items and is computed from the user's preferences. These preferences can be expressed:
explicitly, e.g. as ratings assigned by the user to particular items; or implicitly, i.e. inferred by interpreting the user's actions, e.g. visiting a particular item's page can be considered an implicit sign of preference for that item.
Non-personalized recommendations are simpler to produce and are often used in magazines and newspapers, for instance the top-ten selections of a magazine. This type of recommendation is not typically addressed by RS research.

RS surveys

Many RS reviews have been conducted in order to present an overview of different existing recommendation techniques and to identify their drawbacks. In this section, we report some relevant RS reviews.
Adomavicius and Tuzhilin (2005) conducted a survey analyzing a sample of recommendation approaches in order to identify various limitations and to propose possible extensions. The analyzed approaches are classified into three categories: Content-Based Filtering (CBF), Collaborative Filtering (CF) and hybrid recommendation approaches.
The CBF approach recommends items similar to the ones the user liked in the past, based on the user's preferences and personal interests. These preferences can be represented as a set of keywords or categories, usually extracted from item descriptions containing textual information; for instance, in a movie RS, preferences can be genres, lead actors, or directors. A weight can be assigned to each keyword in order to determine its relevance in the textual description, e.g. Term Frequency – Inverse Document Frequency (TF-IDF), which weights every term according to its importance (frequency) in a document within a corpus or collection of documents. To compute similarity, the recommendation techniques used can be based on heuristics, e.g. the cosine similarity measure, or based on models using machine learning techniques, e.g. artificial neural networks.
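As a concrete illustration of the CBF idea, the following minimal sketch ranks items by the cosine similarity between their TF-IDF vectors and a profile built from the descriptions of the items the user liked. It assumes scikit-learn is available; the movie titles, descriptions and liked items are made up for the example.

```python
# Minimal content-based filtering sketch on hypothetical movie data:
# items are ranked by the cosine similarity between their TF-IDF vectors
# and a profile built from the descriptions of the items the user liked.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {
    "Heat":       "crime thriller heist De Niro Pacino",
    "Toy Story":  "animation family comedy toys adventure",
    "Collateral": "crime thriller hitman night taxi",
}
liked = ["Heat"]  # items the user rated positively

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(items.values())                 # one TF-IDF row per item
profile = vectorizer.transform([" ".join(items[t] for t in liked)])

scores = cosine_similarity(profile, matrix).ravel()               # similarity to the user profile
ranking = sorted(zip(items, scores), key=lambda p: p[1], reverse=True)
print([title for title, _ in ranking if title not in liked])      # ['Collateral', 'Toy Story']
```

In a real CBF recommender the profile would of course be built from richer item descriptions and weighted by the user's ratings, but the ranking principle is the same.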
Adomavicius and Tuzhilin (2005) identified three main limitations in the CBF approach:
limited content analysis: content can usually be analyzed automatically only when it is textual; otherwise the analysis must be performed manually, which is often time-consuming;
overspecialization, as the recommended items are limited to items similar to those the user already liked and may carry the same information; and
the new user problem, as a new user has not yet rated any items, so the content-based approach cannot learn their preferences and, thus, cannot recommend relevant items.
The CF approach recommends the items most liked by users with preferences similar to those of the active user. For instance, in order to recommend a movie, the CF approach tries to identify users whose movie preferences are similar to those of the active user and then recommends the movies most liked by these similar users. The similarity algorithms used by CF approaches fall into two main categories: heuristic-based algorithms, e.g. the Nearest Neighbors (NN) algorithm, and model-based algorithms, e.g. clustering; a minimal sketch of the NN idea is given after the list of limitations below. Adomavicius and Tuzhilin (2005) discussed the following three limitations of the CF approach:
the new user problem, already identified among the CBF limitations; the new item problem, as a new item cannot be recommended until it has been rated by some users; and rating sparsity, as items rated by only a few users are rarely recommended. The same applies to users whose preferences differ from those of the other users: no similar users can be found for them, so they do not receive relevant recommendations. A possible solution to this limitation is to use the information contained in the user profile when computing similarity.
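The sketch announced above illustrates the heuristic-based (nearest-neighbors) flavor of CF on a tiny hypothetical rating matrix: the rating that the active user would give to an unrated item is predicted as a similarity-weighted average of the ratings given by the most similar users. The rating values and the cosine similarity on co-rated items are illustrative choices, not a prescription from the survey.

```python
# Minimal user-based collaborative filtering sketch on a tiny hypothetical
# rating matrix (rows = users, columns = items, 0 = unrated).
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # active user (item 2 is unrated)
    [4, 5, 5, 1],
    [1, 0, 2, 5],
])
active, item = 0, 2

def cosine(u, v):
    mask = (u > 0) & (v > 0)                  # compare only co-rated items
    if not mask.any():
        return 0.0
    return float(np.dot(u[mask], v[mask]) /
                 (np.linalg.norm(u[mask]) * np.linalg.norm(v[mask])))

# similarity of every other user to the active user (the "nearest neighbors")
sims = np.array([cosine(ratings[active], ratings[u])
                 for u in range(len(ratings)) if u != active])
neigh = np.array([ratings[u, item] for u in range(len(ratings)) if u != active])

rated = neigh > 0                             # keep only neighbors who rated the item
prediction = np.dot(sims[rated], neigh[rated]) / sims[rated].sum()
print(round(prediction, 2))                   # predicted rating of the active user for item 2
```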
The hybrid approach combines the CBF and CF approaches in order to address some of their limitations. This combination can be performed in four different ways:
implementing the CBF and CF techniques separately and then merging their ratings into one final rating (sketched after this list);
adding some CBF characteristics to the CF approach; for instance, maintaining content-based user profiles in the CF approach helps address the rating sparsity limitation, as an unrated item can still be recommended if it matches the user's preferences;
adding some CF characteristics to the CBF approach, for instance performing the collaborative approach on a set of user profiles; and
implementing a single recommendation approach that incorporates both CBF and CF characteristics.
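As an illustration of the first combination strategy, the sketch below merges the two scores with a weighted sum. The weight value and the assumption that both scores are normalised to [0, 1] are made up for the example, not taken from the surveyed approaches.

```python
# Minimal sketch of the first hybridisation strategy: the two recommenders
# are run separately and their scores are merged by a weighted sum.
def hybrid_score(cbf_score: float, cf_score: float, alpha: float = 0.5) -> float:
    """Linear combination of a content-based and a collaborative score,
    both assumed to be normalised to the [0, 1] range."""
    return alpha * cbf_score + (1 - alpha) * cf_score

# e.g. an item the content-based recommender likes (0.9) but that has few
# ratings, so the collaborative score is weak (0.3):
print(hybrid_score(0.9, 0.3, alpha=0.7))   # 0.72
```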

Relevant RSSE works

RSSE surveys

Compared to RS works, only a handful of reviews have been conducted on RSSEs. We report relevant RSSE reviews published in the last decade.
In (Happel and Maalej, 2008), the authors surveyed the RSSE papers published between 2003 and 2008 (six RSSEs). This survey aims to identify the potential and limitations of the analyzed RSSEs, with a particular focus on their architecture (e.g. client/server, web application), their trigger events (proactive or reactive) and the type of recommended information (e.g. methods, project artifacts). Happel and Maalej (2008) outlined a recommendation landscape along two main dimensions:
when to recommend, i.e. whether the recommendation process is triggered proactively (“Propose”) or reactively (“Ask to share”); and what to recommend, i.e. the recommended information, which is classified into development information (e.g. code, project artifacts) and collaboration information (e.g. people to contact).

Building RSSE works

In (Robillard et al., 2010), the authors presented an overview of some relevant recommenders, focusing on how RSSEs can help developers. They also outlined some design dimensions, potentials and gaps of existing RSSEs. The main dimensions identified in this study involve:
a context data collection process, which can be implicit or explicit and mainly involves information such as the user’s past interactions (e.g. browsed components) and the current task (e.g. debugging, adding a new feature);
a recommendation engine that analyzes additional data to generate recommendations using ranking techniques; and a user interface that triggers the recommendation process implicitly or explicitly (i.e. proactive and reactive modes) and presents the results to the user; a minimal sketch of these components follows the list.
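The sketch below pictures these three dimensions as a toy pipeline. It is not the architecture of any surveyed tool: the class names are invented, and the naive term-overlap ranking merely stands in for the real ranking techniques mentioned above.

```python
# Illustrative sketch of the three design dimensions: a context collector,
# a recommendation engine, and a user interface working in proactive mode.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Context:
    browsed_components: List[str] = field(default_factory=list)  # past interactions
    current_task: str = ""                                        # e.g. "debugging"

class RecommendationEngine:
    def recommend(self, context: Context, corpus: List[str]) -> List[str]:
        # Naive term-overlap ranking, a placeholder for real ranking techniques.
        terms = set(context.browsed_components + context.current_task.split())
        scored = [(len(terms & set(artifact.split())), artifact) for artifact in corpus]
        return [a for score, a in sorted(scored, reverse=True) if score > 0]

def proactive_ui(engine: RecommendationEngine, context: Context, corpus: List[str]) -> None:
    # Proactive mode: recommendations are pushed without an explicit request.
    for artifact in engine.recommend(context, corpus):
        print("Suggested artifact:", artifact)

proactive_ui(RecommendationEngine(),
             Context(browsed_components=["Parser"], current_task="debugging Parser crash"),
             ["Parser unit tests", "Logging utilities", "Parser crash bug report"])
```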
This overview revealed some RSSE benefits, such as the proactive mode, which automatically delivers relevant information to developers rather than waiting for an explicit request.
However, several limitations were identified, such as the “cold-start problem” of a project, which could be addressed by leveraging data from other similar projects, and the output form, which in most RSSEs is a plain list of recommendations with limited explanation features.

Evaluating RSSE works

As evaluation is an important step in completing the building process of any software tool, we present an overview of some relevant works on approaches and metrics for evaluating RSSEs.
In (Avazpour et al., 2014), the authors review a range of evaluation metrics, measures and commonly used approaches to evaluate RSSEs. As a first step, the authors investigate a set of dimensions that can be relevant to assess RSSE quality. These dimensions are grouped into four main categories:
Recommendation-centric dimensions, which evaluate the generated recommendations; User-centric dimensions, which assess the degree to which the RSSE fulfills the user’s needs; System-centric dimensions, which evaluate the recommendation system itself; and Delivery-centric dimensions, which gauge the recommendation system in its context of use.
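As an example of a recommendation-centric measure, the following sketch computes the precision and recall of a ranked recommendation list against a set of items known to be relevant (e.g. the files actually changed to resolve a task). The file names are hypothetical.

```python
# Minimal sketch of a recommendation-centric measure: precision and recall
# of the top-k recommendations against the set of known relevant items.
from typing import List, Set

def precision_recall_at_k(recommended: List[str], relevant: Set[str], k: int):
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

print(precision_recall_at_k(["a.java", "b.java", "c.java"], {"a.java", "d.java"}, k=3))
# (0.333..., 0.5): one of the three suggestions is relevant,
# and half of the relevant files are retrieved.
```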

RSSEs supporting developers in change tasks

Software developers, especially newcomers, often encounter difficulties in their change tasks. They usually have to understand the existing code and implement the required modifications without breaking anything in the process.
Thus, they need assistance to accomplish their first tasks. However, allocating an experienced team member to assist newcomers can be expensive and is not always possible over a long period. Even for experienced members, locating the software artifacts relevant to the change task at hand can be error-prone and time-consuming. In this perspective, RSSEs can help by providing useful software artifacts. In the following, we describe some of the selected tools supporting these goals.
Mentor (Malheiros et al., 2012) assists newcomers in the realization of their first tasks by recommending solved change requests and their related source files. The process is triggered explicitly by a developer’s request. It starts with an open change request composed of different fields (summary, description and developers’ comments) written in natural language.
These fields are concatenated and compared to a set of solved change requests stored in a database, which have been processed beforehand into an advanced statistical model. The tool analyzes every stored change request and the version control files in order to identify and store the associated relations (i.e. by scanning the commit messages). To do so, a heuristic based on regular expressions is used. To identify change requests similar to the open one, a comparison is performed using an entropy measure: the entropy of the open change request is calculated against the advanced statistical model of every stored change request. The similar change requests are then ranked according to their entropy scores and presented to the developer. When the developer clicks on a recommended change request, the tool shows the associated revisions and source files.
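The kind of regular-expression heuristic used to link change requests to version-control files can be pictured as follows. The pattern, the commit messages and the file names are assumptions made for illustration, not Mentor's actual implementation.

```python
# Hypothetical sketch of a regex heuristic linking change requests to files:
# commit messages are scanned for change-request identifiers, and the files
# touched by matching commits are associated with those change requests.
import re

ISSUE_ID = re.compile(r"(?:bug|issue|fix(?:es|ed)?)\s*#?(\d+)", re.IGNORECASE)

commits = [
    ("r101", "Fixes bug #4213: null check in parser", ["src/Parser.java"]),
    ("r102", "Refactor logging", ["src/Log.java"]),
]

links = {}  # change request id -> files touched by the commits that mention it
for revision, message, files in commits:
    for issue in ISSUE_ID.findall(message):
        links.setdefault(issue, set()).update(files)

print(links)   # {'4213': {'src/Parser.java'}}
```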
Hipikat (Cubranic et al., 2005) is a similar RSSE which assists newcomers by recommending artifacts from the current project. A project memory is formed implicitly from all the artifacts of the project under development and the links between those artifacts (e.g. the file revisions which implemented a particular change request). An artifact can be: a change task artifact (e.g. a feature request or bug report), a source file version stored in the version control system (CVS), a message (e.g. emails and forum posts), or another document (e.g. design documents). The links between project artifacts are inferred by Hipikat using different heuristics, such as:
clustering, e.g. grouping change task requests that were fixed within the same time window (sketched after this list); and
regular-expression-based heuristics, used for instance to match change requests with the related file versions.
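As announced above, here is a minimal sketch of the time-window heuristic: change tasks whose fix timestamps fall close together are linked in the project memory. The 60-minute window and the task data are assumptions for illustration, not Hipikat's actual parameters.

```python
# Hypothetical sketch: group change tasks whose fix times fall within
# the same time window (here, gaps of at most 60 minutes).
from datetime import datetime, timedelta

def cluster_by_time_window(tasks, window=timedelta(minutes=60)):
    """tasks: list of (task_id, fixed_at) pairs; returns clusters of ids."""
    ordered = sorted(tasks, key=lambda t: t[1])
    clusters, current = [], [ordered[0]]
    for task in ordered[1:]:
        if task[1] - current[-1][1] <= window:
            current.append(task)                    # close enough in time: same cluster
        else:
            clusters.append([tid for tid, _ in current])
            current = [task]
    clusters.append([tid for tid, _ in current])
    return clusters

fixes = [("BUG-12", datetime(2004, 3, 1, 10, 5)),
         ("BUG-15", datetime(2004, 3, 1, 10, 40)),
         ("BUG-20", datetime(2004, 3, 2, 9, 0))]
print(cluster_by_time_window(fixes))   # [['BUG-12', 'BUG-15'], ['BUG-20']]
```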

RSSEs supporting developers in refactoring tasks

Software refactoring aims to make software systems easier to maintain and understand by restructuring their existing source code. However, selecting a refactoring operation relevant to the current project is a challenging task due to the lack of documentation and the size of code bases. In this perspective, RSSEs can be helpful by suggesting relevant refactoring opportunities.
In (Bavota et al., 2014), the authors proposed an approach that recommends refactoring opportunities based on team development activity. The basic assumption of this approach is that code entities (e.g. methods or classes) modified by the same team could be extracted and grouped together in a separate module (e.g. a class or package). A team is defined as a group of developers who have worked on the same source code entities. A code analyzer parses the source code of a project and extracts its change history and the associated authors within a specified time window. The retrieved changes (e.g. methods added, removed or updated) are tokenized and turned into a bag of words (BoW). A clustering technique is then used to group developers working on the same code entities into teams. The output of this technique is a tree, called a dendrogram, whose leaves represent developers and whose remaining nodes are the possible clusters (i.e. teams). To recommend refactoring opportunities, a detection algorithm is applied; for instance, the algorithm detects methods edited by the same team, and if the number of these methods exceeds a threshold, those methods can be extracted into a separate class. Thus, the approach recommends code entities to be extracted (e.g. methods, classes).
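The clustering and threshold steps can be pictured with the sketch below, which uses SciPy's hierarchical clustering to build the dendrogram. The change matrix, the Jaccard distance, the cut distance and the extraction threshold are illustrative assumptions, not the parameters used by Bavota et al. (2014).

```python
# Illustrative sketch: group developers by the code entities they changed
# (hierarchical clustering -> dendrogram), then suggest extracting the
# methods edited by the whole team once a threshold is exceeded.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

developers = ["alice", "bob", "carol"]
methods = ["m1", "m2", "m3", "m4"]
# rows = developers, columns = methods; 1 means the developer changed the method
changes = np.array([[1, 1, 0, 0],
                    [1, 1, 1, 0],
                    [0, 0, 0, 1]])

dendrogram = linkage(changes, method="average", metric="jaccard")
teams = fcluster(dendrogram, t=0.5, criterion="distance")   # e.g. [1, 1, 2]

THRESHOLD = 2   # hypothetical minimum number of shared methods to suggest extraction
for team in set(teams):
    members = changes[teams == team]
    shared = [m for j, m in enumerate(methods) if members[:, j].all()]
    if len(shared) >= THRESHOLD:
        print(f"Team {team}: consider extracting {shared} into a separate class")
```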


Table of Contents

INTRODUCTION
CHAPTER 1 LITERATURE REVIEW 
1.1 Basic concepts
1.1.1 Recommendation System (RS)
1.1.2 Recommendation System in Software Engineering (RSSE)
1.2 RS surveys
1.3 Relevant RSSE works 
1.3.1 RSSE surveys
1.3.2 Building RSSE works
1.3.3 Evaluating RSSE works
1.4 Limitations of existing works
CHAPTER 2 RESEARCH METHODOLOGY
2.1 Planning of the study 
2.1.1 Research questions
2.1.2 Search strategy
2.1.3 Selection criteria
2.1.4 Data extraction strategy
CHAPTER 3 EXECUTION OF THE STUDY 
3.1 RSSEs supporting developers in change tasks
3.2 RSSEs supporting developers in API usage 
3.3 RSSEs supporting developers in refactoring tasks 
3.4 RSSEs supporting developers in solving exception failures, bugs, conflicts and testing tasks
3.5 RSSEs recommending reusable software components and components’ design 
3.6 RSSEs assisting developers in exploring local codebases and visited source locations
3.7 Other RSSEs 
3.7.1 RSSE assisting developers in software prototyping activities
3.7.2 RSSE assisting developers in tagging software artifacts
3.7.3 RSSE recommending experts
3.8 Conclusion 
CHAPTER 4 RESULTS ANALYSIS 
4.1 Context Extraction 
4.1.1 What is context in RSSE?
4.1.2 Context Extraction: An overview
4.1.3 Trigger
4.1.4 Context input
4.1.4.1 Input Scope
4.1.4.2 Specific Elements to extract
4.1.5 Treatment
4.1.6 Output
4.2 Recommendation Engine 
4.2.1 Recommendation Engine: An overview
4.2.2 Corpus
4.2.2.1 Raw Data
4.2.2.2 Treatment
4.2.2.3 Processed Data
4.2.3 Recommendation
4.2.3.1 Treatment
4.2.3.2 Filtering / Ranking
4.2.3.3 Recommendations Nature
CHAPTER 5 DISCUSSION
5.1 Results Synthesis
5.1.1 Context extraction process
5.1.2 Recommendation engine
5.2 Validity threats 
5.2.1 External validity
5.2.2 Internal validity
CONCLUSION
