iSE

Intelligent Software Engineering

Intelligent Software Engineering Lab, Nanjing University

Product

Massive Open Online Collaboration Testing

Based on the MOOCTEST platform, a multilingual online integrated development environment provides an online development and test learning environment for all users with access to the platform. Users can create a project workspace through the WebIDE for online development, execution, submission, and other operations. The environment can be used anytime and anywhere, which greatly reduces the learning cost.

FEAT allows users to plug in their own automated testing techniques by implementing a small set of interfaces. So far, three automated techniques with good compatibility have been integrated: Monkey, AppCrawler, and Appium. You can use FEAT to instrument and compile Android source code, then run the integrated techniques against the resulting builds to obtain test results and evaluate the capabilities of these techniques.
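For illustration, the sketch below (in Python, with hypothetical names; FEAT's real interface is not documented here) shows the kind of plug-in interface an automated testing technique might implement so that FEAT can drive it.

```python
# Hypothetical sketch of a FEAT-style plug-in interface; the class and method
# names are assumptions, not FEAT's actual API.
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class TestResult:
    technique: str                      # e.g. "Monkey", "AppCrawler", "Appium"
    apk_path: str
    crashes: list = field(default_factory=list)   # collected crash logs
    coverage: float = 0.0               # instrumented-code coverage ratio


class AutomatedTestingTechnique(ABC):
    """Interface an external technique implements to be driven by the platform."""

    @abstractmethod
    def setup(self, apk_path: str, device_serial: str) -> None:
        """Install the instrumented APK on the target device."""

    @abstractmethod
    def run(self, timeout_seconds: int) -> TestResult:
        """Execute the technique and report crashes and coverage."""
```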

BugHunter is a platform that addresses the problem of low-quality and duplicate bug reports in crowdsourced testing. With the help of BugHunter, crowd workers can efficiently capture screenshots, write short descriptions, and create bug reports. Bug reports are aggregated online and recommended to other workers in real time. Crowd workers can (1) review, verify, and enrich each other's bug reports; (2) avoid filing duplicate bug reports; and (3) be guided toward more professional testing with the help of collective intelligence. BugHunter improves the quality of the final report and reduces testing costs.

CTRAS is a tool for automatically aggregating and summarizing duplicate crowdsourced test reports on the fly. CTRAS automatically detects duplicates based on both textual information and screenshots, and then aggregates and summarizes the duplicate test reports. It provides end users with a comprehensive and comprehensible understanding of all duplicates by identifying the main topics across a group of aggregated test reports and highlighting supplementary topics mentioned in subgroups of those reports. It also provides the classic facilities of issue tracking systems, such as a project-report dashboard and keyword search, and automates classic functionalities, such as bug triaging and best-fixer recommendation, to assist end users in managing and diagnosing test reports.
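A minimal sketch of duplicate detection that weighs both signals, in the spirit of CTRAS; the TF-IDF text similarity, histogram-based screenshot similarity, weights, and threshold below are illustrative assumptions, not CTRAS's actual algorithm.

```python
# Combine text and screenshot similarity to decide whether two reports are
# duplicates. Requires Pillow and scikit-learn.
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def text_similarity(desc_a: str, desc_b: str) -> float:
    """Cosine similarity of TF-IDF vectors of two report descriptions."""
    tfidf = TfidfVectorizer().fit_transform([desc_a, desc_b])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])


def screenshot_similarity(img_a: str, img_b: str) -> float:
    """Histogram intersection of two screenshots, resized to a common size."""
    hists = []
    for path in (img_a, img_b):
        img = Image.open(path).convert("L").resize((64, 64))
        hist = img.histogram()
        total = sum(hist)
        hists.append([h / total for h in hist])
    return sum(min(a, b) for a, b in zip(*hists))


def is_duplicate(report_a, report_b, w_text=0.6, w_image=0.4, threshold=0.7):
    """Reports whose combined score exceeds the threshold are grouped together."""
    score = (w_text * text_similarity(report_a["desc"], report_b["desc"])
             + w_image * screenshot_similarity(report_a["shot"], report_b["shot"]))
    return score >= threshold
```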

FuRong builds a bug model with complete context information, such as screenshots, execution events, and logs from multiple devices, which are significant for developers, and then induces a classification rule for bugs, which is the foundation for bug classification and deduplication. FuRong classifies bugs and removes redundant bug information, and it also recommends a possible fixing solution for each type of bug. An empirical study of 8 open-source Android applications, with automated testing on 20 devices, has been conducted. The preliminary results show the effectiveness of FuRong, with an average accuracy of 93%.
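A sketch of the kind of bug model and deduplication key FuRong describes; the field names and the log-based signature below are assumptions for illustration.

```python
# A bug record carrying the context FuRong collects, plus a coarse signature
# (exception type + topmost stack frame) usable for classification and dedup.
from dataclasses import dataclass, field


@dataclass
class BugRecord:
    device: str                                        # device model observed on
    screenshots: list = field(default_factory=list)
    event_trace: list = field(default_factory=list)    # executed UI events
    log_excerpt: str = ""                              # relevant logcat lines

    def signature(self) -> tuple:
        exception, top_frame = "", ""
        for line in self.log_excerpt.splitlines():
            if "Exception" in line and not exception:
                exception = line.strip().split(":")[0]
            if line.strip().startswith("at ") and not top_frame:
                top_frame = line.strip()
        return exception, top_frame


def deduplicate(records):
    """Keep one record per distinct signature."""
    unique = {}
    for rec in records:
        unique.setdefault(rec.signature(), rec)
    return list(unique.values())
```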

Crowdsourced testing has become a new trend in testing Android applications. However, on existing platforms the crowd works separately, without collaboration or guidance. MoYe leverages dynamic automated techniques and static analysis to promote crowdsourced testing. MoYe first constructs a window transition graph (WTG) with dynamic automated testing techniques, which provides annotations of suspicious bugs. It then enhances the WTG by extracting window transitions from APK files with static analysis techniques. Next, MoYe builds a recommendation engine based on the enhanced WTG model and user operations intercepted in real time, allocating different test tasks to different crowd workers. Further, MoYe provides real-time guidance for crowd workers to complete test tasks, verifying suspicious bugs or exploring new ones. Preliminary experiments show that MoYe can improve the efficiency of crowdsourced testing. The demo video can be found on YouTube.
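The sketch below illustrates a window transition graph, the merge of statically extracted transitions into the dynamically built graph, and the kind of query a recommendation engine could make; this representation is an assumption for illustration, not MoYe's actual model.

```python
# A WTG as an adjacency map: window -> {target window -> triggering events}.
from collections import defaultdict


class WindowTransitionGraph:
    def __init__(self):
        self.edges = defaultdict(lambda: defaultdict(set))

    def add_transition(self, source: str, event: str, target: str):
        self.edges[source][target].add(event)

    def merge(self, other: "WindowTransitionGraph"):
        """Enhance this (dynamically built) WTG with transitions extracted by
        static analysis; only genuinely new transitions are added."""
        for src, targets in other.edges.items():
            for dst, events in targets.items():
                self.edges[src][dst] |= events

    def unexplored_from(self, window: str, visited_events: set):
        """Events leaving `window` that a crowd worker has not tried yet --
        the basis for recommending the next test task."""
        return [(event, dst) for dst, events in self.edges[window].items()
                for event in events if event not in visited_events]
```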

MAF

MAF is a plagiarism detection technique for test code that relies on a constant similarity threshold to determine whether two pieces of test code constitute plagiarism. However, finding an appropriate threshold is never easy, and a constant threshold cannot be used in every circumstance. To address this issue and make MAF more usable, we developed MAF-2, which applies a stable and reliable classifier based on the Support Vector Machine classification algorithm. Experiments were conducted on three test code data sets, and the results show that MAF-2 can detect plagiarism effectively. The video presentation of MAF-2 is available on YouTube and the source code can be downloaded from GitHub.
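A minimal sketch of the MAF-2 idea: replacing a fixed similarity threshold with an SVM classifier trained on per-pair similarity features. The feature set and training data below are illustrative assumptions.

```python
# Train an SVM to decide plagiarism from similarity features of test-code pairs.
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: similarity features for a pair of test files, e.g.
# [token_similarity, ast_similarity, identifier_overlap] (illustrative).
X_train = [
    [0.92, 0.88, 0.95],   # plagiarized pair
    [0.31, 0.40, 0.22],   # independent pair
    [0.85, 0.79, 0.90],
    [0.18, 0.25, 0.30],
]
y_train = [1, 0, 1, 0]    # 1 = plagiarism, 0 = no plagiarism

classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
classifier.fit(X_train, y_train)

# Classify a new pair of test files by its similarity features.
print(classifier.predict([[0.77, 0.82, 0.69]]))
```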

The full name is "The Design and Implementation of Crowdsourced Feedback Review and Delivery System Based on Test Report Summary". Jie Mei designed and implemented a crowdsourced feedback review and delivery system based on test report summaries. The system introduces a novel mechanism, based on report summarization, to solve the problems of crowdsourced feedback review.

OAS

The online crash analysis system fully adapts to the mainstream categories of Android applications on the market. Extensive experiments have been performed on 10 different open-source applications across 20 Android devices. The results demonstrate that the system not only automatically captures Android application crashes, but also effectively classifies, deduplicates, and visualizes crash information. The classification accuracy reaches 88.1%, and the deduplication rate reaches 60.7%.

GUI TSRS is an Android application GUI test script repair system that improves script reusability by repairing GUI test scripts that fail due to version updates. The system divides the repair process into four parts. First, the GUI components of the Android application are extracted with an automated traversal tool and an event flow graph model is established.

QCS

QCS improves the quality of bug reports and the quality awareness of crowd workers, enables managers to quickly assess the quality of bug reports and identify malicious crowd workers, and makes it easy for managers to review massive numbers of bug reports. It also promotes the study of quality control in the collaborative crowdsourcing mode.

AGS

AGS is an automatic bug report generation system for mobile applications that addresses the low readability of logs and the low efficiency of bug localization in automated testing. By mining the multiple logs generated during automated testing, the system builds a bug model containing complete context information, including the operation sequence, stack information, bug screenshots, etc.

IMS

The system introduces the Isolation Forest algorithm to pre-label monitoring data automatically, thereby reducing the data-labeling workload for operation and maintenance personnel. Furthermore, the accuracy of anomaly detection and trend prediction is improved by iteratively updating data labels and resetting models. The system is divided into four modules; the monitoring module is responsible for the collection and storage of monitoring data.
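A minimal sketch of the Isolation Forest pre-labeling step, using scikit-learn; the simulated metrics, contamination rate, and parameters below are assumptions.

```python
# Flag anomalous monitoring points so operators only review the flagged ones.
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated monitoring series: (cpu_usage, response_time_ms).
rng = np.random.default_rng(0)
normal = rng.normal(loc=[0.4, 120.0], scale=[0.05, 10.0], size=(500, 2))
anomalies = rng.normal(loc=[0.95, 900.0], scale=[0.02, 50.0], size=(5, 2))
metrics = np.vstack([normal, anomalies])

forest = IsolationForest(contamination=0.01, random_state=0)
labels = forest.fit_predict(metrics)        # -1 = anomaly, 1 = normal

# Flagged points become candidate labels that operators confirm or reject;
# confirmed labels can then be fed back to retrain downstream detectors.
print("flagged as anomalous:", int((labels == -1).sum()))
```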

BLS

BLS was initially evaluated on a total of 1,243 bug reports of the open source project ZooKeeper created before October 16, 2018, to verify its bug localization accuracy. The Top-1 hit rate of the bug localization results is 52.55%, the Top-5 hit rate is 78.94%, and the Top-10 hit rate is 87.45%. In addition, the system performance test simulated 500 users concurrently accessing 7 common interfaces. The results show that the average response time of the seven interfaces is 247 ms and the highest is 447 ms; all responses take less than 500 ms and the error rate is 0.
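For clarity, the Top-k hit rate reported above is conventionally computed as follows; this is a generic illustration of the metric, not BLS's code.

```python
# A bug report "hits" at k if at least one truly buggy file appears among the
# top k ranked files returned by the localizer.
def top_k_hit_rate(results, k):
    """results: list of (ranked_files, buggy_files) pairs, one per bug report."""
    hits = sum(1 for ranked, buggy in results
               if any(f in buggy for f in ranked[:k]))
    return hits / len(results)


example = [
    (["A.java", "B.java", "C.java"], {"B.java"}),   # hits for k >= 2
    (["D.java", "E.java", "F.java"], {"Z.java"}),   # never hits
]
print(top_k_hit_rate(example, 1), top_k_hit_rate(example, 5))
```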

CTSR&RS was evaluated with 10 mainstream mobile devices and 5 mobile applications to analyze and verify its availability. The experiments show that the system's recording process supports a wide range of devices on the Android platform, with a cross-device script replay success rate of over 80% and a cross-platform replay success rate of about 60%. The system has been deployed for internal mobile application testing. By recording a test script once and replaying it across platforms, it effectively reduces test script maintenance and the cost of automated testing.

The platform solves the problem of insufficient human and device resources through online real-machine control and crowdsourced bug verification. Crowd workers can remotely record and replay test scripts on the device, and they can modify and refine those scripts. Crowdsourced bug verification allows task requesters to verify bugs through the crowd and improves verification efficiency based on the results submitted by crowd workers and the statistical distribution of those results.
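An illustrative sketch (not the platform's actual rule) of how crowd verification results might be aggregated into a distribution and a simple majority verdict for the task requester.

```python
# Summarize workers' verification votes into a distribution and a verdict.
from collections import Counter


def aggregate_verifications(votes):
    """votes: list of 'reproduced' / 'not_reproduced' strings from workers."""
    distribution = Counter(votes)
    verdict = max(distribution, key=distribution.get)
    confidence = distribution[verdict] / len(votes)
    return distribution, verdict, confidence


dist, verdict, conf = aggregate_verifications(
    ["reproduced", "reproduced", "not_reproduced", "reproduced"])
print(dict(dist), verdict, round(conf, 2))
```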

CQCS is a code quality control system for development teams based on Git. Its main functions are provided by three modules: a code quality control module that extracts quality characteristics of the code, a Git control module that manages the project and mines its evolution history, and a team code quality evaluation module that integrates the data to produce evaluation results. To ensure good scalability, the modules exchange information through metadata, and the invoking module adapts the metadata from the invoked module through a data converter.

The system is built with mainstream frameworks: Angular 2 as the front-end framework, Spring Boot as the back-end framework, Redis as the query cache, and MongoDB as the database. Similar report recommendation is implemented with Word2Vec and the WMD algorithm. Audit task recommendation is implemented with a model-based collaborative filtering approach. Test page recommendation uses a multi-source shortest path method based on users' histories.
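A minimal sketch of the similar-report recommendation step, assuming gensim's Word2Vec and its Word Mover's Distance support (which needs the POT package installed); the corpus and parameters below are stand-ins for illustration.

```python
# Embed report texts with Word2Vec and rank candidates by WMD to the query.
from gensim.models import Word2Vec

corpus = [
    "app crashes when login button is tapped twice".split(),
    "crash after tapping the login button repeatedly".split(),
    "profile picture fails to upload on slow network".split(),
]

# Train word vectors on the report corpus (a real system would use many reports).
model = Word2Vec(sentences=corpus, vector_size=50, min_count=1, epochs=50)

query = "application crashed when I pressed login twice".split()

# Smaller WMD means the candidate report is more similar to the query report.
ranked = sorted(corpus, key=lambda report: model.wv.wmdistance(query, report))
print(" ".join(ranked[0]))
```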


