MoocTest

MEMBERS
Teacher

陈振宇
ZhenYu Chen

Professor

Teacher

刘嘉
Jia Liu

Associate Professor

Teacher

王兴亚
Xingya Wang

Associate Professor

Student

许金
Jin Xu

Ph.D. student

Student

赵源
Yuan Zhao

Ph.D. student

Student

郝蕊
Rui Hao

Ph.D. student

Student

钟怡
Yi Zhong

Ph.D. student

Student

李玉莹
Yuying Li

Ph.D. student

Student

钱瑞祥
Ruixiang Qian

M.Sc. Student

Student

张松涛
Songtao Zhang

M.Sc. Student

Student

李成浩
Chenghao Li

M.Sc. Student

Student

王瑾
Jin Wang

M.Sc. Student

Student

贺璐
Lu He

M.Sc. Student

Student

王睿智
Ruizhi Wang

M.Sc. Student

Student

杨郁芩
Yuqin Yang

M.Sc. Student

Student

郭超
Chao Guo

M.Sc. Student

Student

郭楠楠
Nannan Guo

M.Sc. Student

Student

吉品
Pin Ji

M.Sc. Student

Student

张欢
Huan Zhang

M.Sc. Student

Student

段梦洋
Mengyang Duan

M.Sc. Student

Student

李文龙
Wenlong Li

M.Sc. Student

Student

李紫欣
Zixin Li

M.Sc. Student

Student

梁越勇
Yueyong Liang

M.Sc. Student

Student

刘芳潇
Fangxiao Liu

M.Sc. Student

Student

徐佳炜
Jiawei Xu

M.Sc. Student

Student

徐世诚
Shicheng Xu

M.Sc. Student

Student

薛晓波
Xiaobo Xue

M.Sc. Student

Student

孙加辉
Jiahui Sun

M.Sc. Student

Student

朱齐
Qi Zhu

M.Sc. Student

PRODUCTS

MoocTest

Massive Open Online Collaboration Testing

WebIDE

Based on the MoocTest platform, WebIDE is a multilingual online integrated development environment that provides a development and test-learning environment for all users of the platform. Users can create project workspaces in WebIDE for online development, execution, and submission. Because the environment is available anytime and anywhere, it greatly reduces the learning cost.

FEAT Sources

FEAT allows users to plug in their own automated testing techniques by implementing a simple interface. So far, three techniques with good compatibility have been integrated: Monkey, AppCrawler, and Appium. You can use FEAT to instrument and compile Android source code, then run the integrated techniques against the instrumented apps and collect test results in order to evaluate the capability of each technique.
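
For illustration, a plug-in integration of this kind might look like the following minimal Java sketch; the TestingTechnique and TestResult names are assumptions made for this example, not FEAT's actual interface.

```java
// Illustrative sketch only: FEAT's real plug-in interface may differ.
import java.io.File;
import java.time.Duration;

public interface TestingTechnique {
    /** Human-readable name, e.g. "Monkey", "AppCrawler", or "Appium". */
    String name();

    /** Run the technique against an instrumented APK within a time budget. */
    TestResult run(File instrumentedApk, Duration budget) throws Exception;
}

/** Outcome of a single run, used to compare techniques. */
final class TestResult {
    final double coverage;     // e.g. method coverage in [0, 1]
    final int crashesFound;    // number of distinct crashes observed
    final File logDir;         // raw logs and screenshots for later analysis

    TestResult(double coverage, int crashesFound, File logDir) {
        this.coverage = coverage;
        this.crashesFound = crashesFound;
        this.logDir = logDir;
    }
}
```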

BugHunter

BugHunter is a platform that addresses low-quality and duplicate bug reports in crowdsourced testing. With the help of BugHunter, crowd workers can efficiently capture screenshots, write short descriptions, and create bug reports. Bug reports are aggregated online and recommended to other workers in real time. Crowd workers can (1) review, verify, and enrich each other's bug reports; (2) avoid submitting duplicate bug reports; and (3) be guided to conduct more professional testing with the help of collective intelligence. BugHunter improves the quality of the final report and reduces testing costs.

CTRAS

CTRAS is a tool for automatically aggregating and summarizing duplicate crowdsourced test reports on the fly. CTRAS detects duplicates based on both textual information and screenshots, and then aggregates and summarizes the duplicate test reports. It gives end users a comprehensive and comprehensible understanding of all duplicates by identifying the main topics across the group of aggregated test reports and highlighting supplementary topics mentioned in subgroups of reports. It also provides the classic facilities of issue-tracking systems, such as the project-report dashboard and keyword search, and automates classic tasks such as bug triaging and best-fixer recommendation, to assist end users in managing and diagnosing test reports.
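
As a rough illustration of duplicate detection that fuses text and screenshots, the sketch below combines a textual similarity and an image-feature similarity into one weighted score; the weights, threshold, and helper measures (Jaccard, cosine) are assumptions for this example rather than CTRAS's actual algorithm.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class DuplicateDetector {
    private static final double TEXT_WEIGHT = 0.6;   // assumed weights
    private static final double IMAGE_WEIGHT = 0.4;
    private static final double THRESHOLD = 0.8;     // assumed decision threshold

    /** A report carries a textual description and a screenshot feature vector. */
    public static class Report {
        public final String description;
        public final double[] screenshotFeatures;
        public Report(String description, double[] screenshotFeatures) {
            this.description = description;
            this.screenshotFeatures = screenshotFeatures;
        }
    }

    public boolean isDuplicate(Report a, Report b) {
        double score = TEXT_WEIGHT * jaccard(a.description, b.description)
                     + IMAGE_WEIGHT * cosine(a.screenshotFeatures, b.screenshotFeatures);
        return score >= THRESHOLD;
    }

    /** Jaccard similarity over word sets: a lightweight stand-in for the textual analysis. */
    private static double jaccard(String t1, String t2) {
        Set<String> w1 = new HashSet<>(Arrays.asList(t1.toLowerCase().split("\\s+")));
        Set<String> w2 = new HashSet<>(Arrays.asList(t2.toLowerCase().split("\\s+")));
        Set<String> inter = new HashSet<>(w1);
        inter.retainAll(w2);
        Set<String> union = new HashSet<>(w1);
        union.addAll(w2);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    /** Cosine similarity between screenshot feature vectors (e.g. extracted by an image model). */
    private static double cosine(double[] x, double[] y) {
        double dot = 0, nx = 0, ny = 0;
        for (int i = 0; i < x.length; i++) {
            dot += x[i] * y[i];
            nx += x[i] * x[i];
            ny += y[i] * y[i];
        }
        return (nx == 0 || ny == 0) ? 0.0 : dot / (Math.sqrt(nx) * Math.sqrt(ny));
    }
}
```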

MoYe

Crowdsourced testing has become a new trend in testing Android applications. However, on existing platforms the crowd workers work separately, without collaboration or guidance. MoYe leverages dynamic automated techniques and static analysis to promote crowdsourced testing. MoYe first constructs a window transition graph (WTG) using dynamic automated testing techniques, which provides annotations of suspicious bugs. It then enhances the WTG by extracting window transitions from APK files with static analysis. Based on the enhanced WTG model and user operations intercepted in real time, MoYe builds a recommendation engine that allocates different test tasks to different crowd workers and provides real-time guidance for completing them, either to verify suspicious bugs or to explore new ones. Preliminary experiments show that MoYe can improve the efficiency of crowdsourced testing. The demo video can be found on YouTube.
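
A minimal sketch of a window transition graph with suspicious-bug annotations and a naive recommendation step is shown below; the class and method names are illustrative assumptions, and MoYe's real model and engine are considerably richer.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Minimal window transition graph (WTG): windows are nodes, user events are labeled edges.
 * Field and method names are illustrative; the actual model also carries bug annotations,
 * static-analysis edges, and task allocation state.
 */
public class WindowTransitionGraph {
    /** window -> (event label -> target window) */
    private final Map<String, Map<String, String>> edges = new HashMap<>();
    /** windows annotated as containing suspicious bugs by automated testing */
    private final Set<String> suspiciousWindows = new HashSet<>();

    public void addTransition(String fromWindow, String event, String toWindow) {
        edges.computeIfAbsent(fromWindow, k -> new HashMap<>()).put(event, toWindow);
    }

    public void markSuspicious(String window) {
        suspiciousWindows.add(window);
    }

    /** Recommend events from the current window that lead toward suspicious windows. */
    public Set<String> recommendEvents(String currentWindow) {
        Set<String> recommended = new HashSet<>();
        for (Map.Entry<String, String> e : edges.getOrDefault(currentWindow, Map.of()).entrySet()) {
            if (suspiciousWindows.contains(e.getValue())) {
                recommended.add(e.getKey());
            }
        }
        return recommended;
    }
}
```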

CFR&DS

Its full name is "The Design and Implementation of Crowdsourced Feedback Review and Delivery System Based on Test Report Summary". Jie Mei designed and implemented a crowdsourced review and delivery system based on test report summaries. The system introduces a novel mechanism based on report summarization to address the problems of crowdsourced feedback review.

OAS

The online crash analysis system fully adapts to the mainstream categories of Android applications on the market. Extensive experiments were performed on 10 different open-source applications and 20 Android devices. The results demonstrate that the system not only automatically captures Android application crashes, but also effectively classifies, deduplicates, and visualizes crash information. The classification accuracy reaches 88.1% and the deduplication rate reaches 60.7%.

GUI TSRS

GUI TSRS is an Android application GUI test script repair system that improves script reusability by repairing GUI test scripts that have failed due to version updates. The system divides the repair process into four parts. First, the GUI components of the Android application are extracted with an automated traversal tool and an event flow graph model is established.

QCS

QCS improves the quality of bug reports and the quality awareness of crowd workers, enables managers to quickly assess the quality of bug reports and identify malicious crowd workers, and makes it easier for managers to review large volumes of bug reports. It also promotes the study of quality control in the collaborative crowdsourcing mode.

AGS

AGS is an automatic bug report generation system for mobile applications that addresses the low readability of logs and the low efficiency of bug localization in automated testing. By mining the multiple logs generated during automated testing, the system builds a bug model containing complete context information, including the operation sequence, stack information, bug screenshots, and so on.
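
A bug model of the kind described might be represented as in the following sketch; the field names are illustrative assumptions, not AGS's actual schema.

```java
import java.util.List;

/**
 * Sketch of a bug model aggregating the context mentioned above.
 * Field names are illustrative assumptions only.
 */
public class BugModel {
    public final String appVersion;
    public final String deviceModel;
    public final List<String> operationSequence;   // ordered UI events leading to the bug
    public final String stackTrace;                // crash stack captured from the logs
    public final List<String> screenshotPaths;     // screenshots around the failing step

    public BugModel(String appVersion, String deviceModel, List<String> operationSequence,
                    String stackTrace, List<String> screenshotPaths) {
        this.appVersion = appVersion;
        this.deviceModel = deviceModel;
        this.operationSequence = operationSequence;
        this.stackTrace = stackTrace;
        this.screenshotPaths = screenshotPaths;
    }
}
```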

BLS

BLS was evaluated on 1243 bug reports of the open-source project ZooKeeper created before October 16, 2018, to verify the bug localization accuracy of the system. The Top-1 hit rate of the bug localization results is 52.55%, the Top-5 hit rate is 78.94%, and the Top-10 hit rate is 87.45%. In addition, in the system performance test we simulated 500 users concurrently accessing 7 common interfaces. The results show that the average response time of the seven interfaces is 247 ms and the highest is 447 ms; all responses take less than 500 ms and the error rate is 0.
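
The Top-k hit rates quoted above can be computed as in the sketch below, assuming each bug report comes with a ranked list of suspicious files and a ground-truth set of faulty files; the ranking itself (the localization algorithm) is not shown.

```java
import java.util.List;
import java.util.Set;

public class TopKHitRate {
    /**
     * A bug report "hits" Top-k if at least one truly faulty file appears in the
     * first k entries of its ranked suspicious-file list.
     *
     * @param rankings    one ranked file list per bug report
     * @param groundTruth the set of actually faulty files per bug report (same order)
     */
    public static double hitRate(List<List<String>> rankings, List<Set<String>> groundTruth, int k) {
        int hits = 0;
        for (int i = 0; i < rankings.size(); i++) {
            List<String> top = rankings.get(i).subList(0, Math.min(k, rankings.get(i).size()));
            Set<String> faulty = groundTruth.get(i);
            if (top.stream().anyMatch(faulty::contains)) {
                hits++;
            }
        }
        return rankings.isEmpty() ? 0.0 : (double) hits / rankings.size();
    }
}
```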

CTSR&RS

CTSR&RS was evaluated on 10 mainstream mobile devices and 5 mobile applications to verify system availability. The experiments show that the recording process supports different devices on the Android platform well: the cross-device script replay success rate exceeds 80%, and the cross-platform replay success rate is about 60%. The system has been deployed for internal mobile application testing. Through a single recording of test scripts and cross-platform replay, it effectively reduces test-script maintenance effort and the cost of automated testing.

ACOVP

The platform addresses the shortage of human and device resources through online real-device control and crowdsourced bug verification. Crowd workers can remotely record and replay test scripts on the device, and they can modify and refine the scripts. Crowdsourced bug verification allows task requesters to verify bugs through the crowd, and improves verification efficiency based on the results submitted by crowd workers and the statistical distribution of those results.

CQCS

CQCS is a code quality control system for development teams based on Git. Its main functions are handled by three modules: a code quality control module that extracts quality characteristics of the code, a Git control module that manages and mines the evolution history of the project, and a team code quality evaluation module that integrates the data to produce evaluation results. To ensure good scalability, modules exchange information through metadata, and the invoking module adapts the metadata returned by the invoked module through a data converter.

Middle-end System for Android Application Automation Test

Combining the efficiency of automated testing with the universality and flexibility of a testing middle-end system, we propose and implement an Appium-based middle-end system for Android application automation testing. The tool first analyzes the APK file to be tested, then selects currently idle Android devices and performs an automated traversal test through the Appium framework and ADB tools. It also supports the dynamic execution of custom Java scripts through Groovy, which increases system scalability. By specifying different report generation services, the tool can analyze the intermediate data from the perspectives of performance, functionality, stability, robustness, and compatibility to generate multi-dimensional test reports, and custom report analysis tools can be hot-plugged. At present, the system has been deployed on the Mooctest cloud platform, providing APK grading services for the education version and stable, reliable automated testing services for the enterprise version.
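
For the Groovy-based custom script support mentioned above, dynamic execution could look roughly like the following sketch; it requires the Groovy runtime on the classpath, and the binding variable name "driver" is an assumption for this example.

```java
// Minimal sketch of running a user-supplied script through Groovy at runtime.
import groovy.lang.Binding;
import groovy.lang.GroovyShell;

public class CustomScriptRunner {
    public Object run(String scriptText, Object appiumDriver) {
        Binding binding = new Binding();
        binding.setVariable("driver", appiumDriver);   // expose the Appium driver to the script
        GroovyShell shell = new GroovyShell(binding);
        return shell.evaluate(scriptText);             // compile and run the script on the fly
    }
}
```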

BREGAT

BREGAT generates structured bugs from multi-device automated testing results and classifies and deduplicates them using a defined taxonomy. We first defined GUI inconsistency and device inconsistency based on multi-device automated testing results, and proposed a structured bug model that can be used for inconsistency analysis. Through manual review of the real automated testing results of 50 applications on 20 devices, in which we analyzed the relationship between bugs and inconsistencies, confirmed the root causes of bugs, and extracted common log patterns of similar bugs, we constructed an extensible bug taxonomy with inconsistency labels, containing 67 bug categories in total. Finally, heterogeneous data in the testing results, such as screenshots, test operations, and device logs, are combined to generate reproducible and easy-to-understand bug reports. BREGAT removes 97% of duplicate bugs with a deduplication precision of 100%, which is more efficient than existing tools.
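
As a simplified illustration of log-pattern-based deduplication, the sketch below groups bugs whose normalized log signatures match; BREGAT additionally uses its taxonomy, screenshots, and device information, which are omitted here.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Simplified illustration only: bugs whose normalized exception signatures match are
 * grouped together as duplicates.
 */
public class BugDeduplicator {
    /** Strip volatile details (memory addresses, numbers) from a log excerpt. */
    static String signature(String logExcerpt) {
        return logExcerpt
                .replaceAll("0x[0-9a-fA-F]+", "<ADDR>")
                .replaceAll("\\d+", "<NUM>")
                .trim();
    }

    /** Group raw bug logs by signature; each group represents one deduplicated bug. */
    static Collection<List<String>> deduplicate(List<String> bugLogs) {
        Map<String, List<String>> groups = new LinkedHashMap<>();
        for (String log : bugLogs) {
            groups.computeIfAbsent(signature(log), k -> new ArrayList<>()).add(log);
        }
        return groups.values();
    }
}
```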

ColinTest

ColinTest is an iterative Android application automated testing system that automatically records, extracts, and fuses the operation flows of test users and feeds them into the testing tool. Through iterations of the testing process, user operation information is introduced into the tool, and the tool's output is passed back to the user to generate the next round of user operations. By modifying the Appium framework, ColinTest obtains the user operation flows automatically in the background. Experimental results show that the test results improve significantly compared with the case without user information: with the test time set to one hour, the average code coverage reaches 37.83%, exceeding Monkey under the same conditions, whose average code coverage is 28.90%. In addition, the coverage achieved after introducing user information fully contains the coverage achieved by the user or by the tool alone, with no coverage lost, which demonstrates the usability of the system.

Crowdsourced Testing Requirement Generation System Driven by Android Automatic Testing

This tool implements a crowdsourced test requirement generation system driven by Android automated testing. The tool processes the automated test data of the target application and extracts the exceptions triggered during testing. Based on test operation sequences and screenshots, workers are guided by reproduction steps to find exceptions in different environments. Using decompilation and static analysis, the tool extracts the types of components involved in conditional branch statements in the source code and the windows not covered by automated testing, in order to guide crowd workers to explore new exceptions. Ten Android open-source applications were selected and controlled experiments were carried out on 15 popular mobile devices. The results show that, by guiding crowd workers with the generated requirements, 30% more exceptions were triggered and 15% higher coverage was achieved than in the control group. In summary, this tool provides a service for generating crowdsourced requirements and accepting crowdsourced test results.

Report Generation Service in Web Application Automation Test System

According to the characteristics of Web application testing, the test report service implements report generation for the Web application automation testing process based on monitoring test execution, including the generation of test results and the display of test reports and software quality assessment reports. Given the wide use and high complexity of Web applications, we selected 50 representative websites from nine different industries to evaluate the service. The experiments show that the service provides clear and easy-to-use test reports: the success rate of the test scripts generated by the service reaches 99.8%, the reproduction rate of existing vulnerabilities reaches 89.4%, and the accuracy of vulnerability classification reaches 97.5%. The experiments also show that using scripts to reproduce vulnerabilities is nearly three times more efficient than reproducing them manually. The service has been integrated into the Mooctest platform and is running well, providing testers with good Web application automation test reporting services.

Execution Service in Web Application Automation Testing System

This tool addresses the need for automated testing of complex dynamic Web pages; it can generate vulnerability reports and software quality assessments. The service applies finite-state-machine-based Web page state detection to test execution, searches the Web page state space, and generates executable test paths. It is built on microservice and distributed technologies. We selected common types of websites for a preliminary evaluation of the service: on average, 89 Web application states were detected and 83 vulnerabilities were found per application. The execution success rate of the test scripts generated from the constructed state diagram is 99.8%, the vulnerability reproduction rate after script execution is 89.4%, and the vulnerability classification accuracy is 97.5%. The testing service also improves testing efficiency by 2.12 times compared with manual testing.
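
A minimal sketch of turning a page-state graph into executable test paths via breadth-first search is shown below; the state/action representation is an assumption for this example, not the service's actual model.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Sketch: pages are states, user actions are transitions; a breadth-first search from the
 * start state yields one executable action path to every discovered state.
 */
public class StatePathGenerator {
    /** state -> (action -> next state) */
    private final Map<String, Map<String, String>> fsm = new HashMap<>();

    public void addTransition(String from, String action, String to) {
        fsm.computeIfAbsent(from, k -> new HashMap<>()).put(action, to);
    }

    /** Returns, for each reachable state, the list of actions that reaches it from startState. */
    public Map<String, List<String>> pathsFrom(String startState) {
        Map<String, List<String>> paths = new HashMap<>();
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        paths.put(startState, new ArrayList<>());
        visited.add(startState);
        queue.add(startState);
        while (!queue.isEmpty()) {
            String state = queue.poll();
            for (Map.Entry<String, String> t : fsm.getOrDefault(state, Map.of()).entrySet()) {
                String next = t.getValue();
                if (visited.add(next)) {
                    List<String> path = new ArrayList<>(paths.get(state));
                    path.add(t.getKey());       // record the action taken to reach the next state
                    paths.put(next, path);
                    queue.add(next);
                }
            }
        }
        return paths;
    }
}
```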

Analysis Service in Web Application Automation Testing System

This tool focuses on defect scanning and defect analysis for Web application testing and provides intelligent solutions. The system relies on multi-source "black box" data about the system under test, such as related logs and screenshots, recorded by the execution service and the scheduling component responsible for automated traversal testing. Using methods such as clustering algorithms and feature detection, a system of Web-application-related defects is constructed, and the corresponding defect detectors are implemented and integrated. Finally, the system discovers defects through detection and constructs a test report for the application under test. An experiment on 50 real online websites shows that the system can effectively discover various Web application defects such as broken links, resource loading failures, server-side errors, and JavaScript errors. Based on manual sampling and review, both the accuracy of defect classification and the execution success rate of exported test scripts exceed 95%.

Multidimensional Evaluation System for Software Testing (META)

This system aims to systematically evaluate the effectiveness of students' testing. According to the talent training objectives of universities and the requirements of enterprises, we construct a multi-dimensional evaluation system based on 7 evaluation indicators, combining test products and test behaviors, to evaluate students' testing effectiveness from multiple aspects for three test types: developer unit testing, Web application automation testing, and mobile application automation testing. The proposed tool aims to evaluate students' testing effectiveness more comprehensively. Its evaluation indicators have been widely used in the software testing contests hosted on the teaching platform, and the evaluation results have been unanimously recognized by the contest's expert group. The system is used on the software testing teaching platform to evaluate students' testing effectiveness in multiple ways, helping students find their weaknesses and make progress.
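
As a hedged sketch of how several indicators could be combined into one evaluation score, the example below computes a weighted average over normalized indicator values; the indicator names and weights are placeholders, not META's actual seven indicators.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Sketch of a weighted multi-indicator score; indicators and weights are placeholders. */
public class MultiDimensionalScore {
    public static double combine(Map<String, Double> indicatorValues, Map<String, Double> weights) {
        double score = 0.0, totalWeight = 0.0;
        for (Map.Entry<String, Double> e : indicatorValues.entrySet()) {
            double w = weights.getOrDefault(e.getKey(), 0.0);
            score += w * e.getValue();        // indicator values assumed normalized to [0, 1]
            totalWeight += w;
        }
        return totalWeight == 0 ? 0.0 : score / totalWeight;
    }

    public static void main(String[] args) {
        Map<String, Double> values = new LinkedHashMap<>();
        values.put("codeCoverage", 0.72);      // placeholder indicators
        values.put("bugDetection", 0.55);
        values.put("scriptQuality", 0.80);
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("codeCoverage", 0.4);
        weights.put("bugDetection", 0.4);
        weights.put("scriptQuality", 0.2);
        System.out.println(combine(values, weights)); // weighted average of the indicators
    }
}
```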

Expressive Ability Evaluation System Based on Distributed Task Queue

This tool is an expressive ability evaluation system based on a distributed task queue. It provides evaluation services for ordinary users and related system management functions for administrators. Answers are given as speech in Mandarin; the system records, stores, and scores users' speech. The system consists of four modules: an evaluation module, a permission management module, a question management module, and an asynchronous task module. The system meets the expected performance requirements and provides good support for the quick expressive ability test method. It can reduce the capital, venue, and labor costs of expressive ability testing, improve test efficiency and the objectivity of test results, and offer users a more convenient testing experience. It can be used as a reference for recruitment and as an evaluation or practice platform for people who want to improve their expressive abilities.

Problem Construction and Analysis System for Expression Ability Evaluation

This tool covers the entire life cycle of the core element of the evaluation, the "problem kit", from generation and initialization to use, analysis, and optimization. First, the system regularly crawls network resources and initializes them as candidate questions. Second, the system uses audio analysis and text analysis methods to assess the user's expression level and provide brief comments. Third, the system uses the answer data submitted by users and statistical learning methods to optimize the parameters of the problem kit and improve the stability of the model. In addition, the system provides the necessary support for professionals to intervene in the problem life cycle, generate new problems from the alternate corpus, or modify problem parameters manually. The experimental results show that for more than 75% of the samples the system's score is within 5 points of the expert's score, indicating that the system can quantify a user's expressive ability to some extent.

Human-machine Collaboration-Based System for Java Vulnerability Scanner

To reduce the false positive ratio of current vulnerability scanners and save maintenance costs for developers, we design a human-machine collaborative Java bytecode vulnerability scanning system. It analyzes static vulnerability scanners and common false positive vulnerabilities, and implements bytecode context extraction, code feature extraction, and the related machine learning classification models. It also integrates crowdsourced auditing and takes the vulnerability scanning requirements of real scenarios into account. The system provides better vulnerability scanning services: experiments on the OWASP dataset show that it reduces the false positive ratio by 22% at a recall of 95.39%. Experiments on randomly selected open-source projects from GitHub show that the system effectively reduces the false positive ratio of traditional static vulnerability scanners while keeping the false negative ratio low.

C++ Source Code Vulnerability Static Scanning System

C++ source code vulnerability static scanning digs out potential vulnerabilities in C++ source code using taint analysis and data flow analysis without running the program. To improve the developers' vulnerability review process and reduce its difficulty, such a scanning system needs to lower its false positive ratio without increasing false negatives, helping developers deliver more robust code. This system introduces an iterative, machine-learning-based false alarm filtering mechanism to reduce the false positive ratio of vulnerability scanning. The F1 value of the system is 30% and 22% higher than that of TscanCode and Cppcheck respectively, and it effectively reduces false positives in C++ source code vulnerability static scanning. The system improves the usability of static scanners for C++ source code vulnerabilities, reduces the number of false positive findings, lightens the burden of vulnerability review on developers, and helps guarantee the delivery of highly reliable code.

The Design and Implementation of Static Code Analysis System based on Machine Learning for Java

This system aims to apply academic research to the real world. Targeting Java, one of the most common languages in Web development, the system uses taint analysis, program slicing, and a BLSTM model to provide more accurate code analysis services for development and security engineers. For taint analysis, the system uses the large rule set of Find Security Bugs to keep false negatives low, and it reports taint propagation paths to make the results more readable. To ensure slicing efficiency and stability, the system optimizes the slicer for real Jar packages and proposes an approach called segmented slicing. The experimental results show that the system obtains more accurate scanning results within an acceptable scanning time. In terms of efficiency, the optimized slicing keeps the scan time of each project under 1 hour. In terms of accuracy, the system's precision reaches 90.53%; compared with Find Security Bugs, it eliminates 25.44% of false positives, which greatly reduces code audit work.

Crowdsourced Feedback Review Tasks Distribution System Based on User Features

Based on the crowdsourced review platform, a crowdsourced review task allocation system based on user characteristics is proposed. The system takes into account users' reputation, ability, and behavior characteristics, and provides two modes: static task allocation and dynamic task allocation. For static review tasks, the static assignment mode is invoked, and each user is assigned the same number of test reports to review according to the needs of the crowdsourced review task. For dynamic review tasks, the system invokes the dynamic assignment mode, uses a collaborative filtering algorithm to generate a recommendation list of tasks for each user, and generates a task heat list based on task assignment and completion details; after a comprehensive calculation, the user is assigned a test report. The average review coverage reaches 100%, the accuracy of static task review results reaches 87.8%, and the accuracy of dynamic task review results reaches 95.12%, so the system is highly efficient.
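
A toy version of the collaborative filtering step is sketched below: candidate tasks are ranked by summing, over other workers, the similarity to the target worker times that worker's historical score on the task. The worker feature vectors (reputation, ability, behavior) and the scores are assumptions for this example, not the system's actual data.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

/** Toy user-based collaborative filtering for task recommendation. */
public class TaskRecommender {
    /** Cosine similarity between two worker feature vectors. */
    static double similarity(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    /**
     * Rank candidate tasks for the target worker by summing, over all known workers,
     * similarity(target, worker) * that worker's historical score on the task.
     */
    static List<String> recommend(double[] target, Map<String, double[]> workers,
                                  Map<String, Map<String, Double>> taskScores, List<String> candidates) {
        List<String> ranked = new ArrayList<>(candidates);
        ranked.sort(Comparator.comparingDouble((String task) -> {
            double s = 0;
            for (Map.Entry<String, double[]> w : workers.entrySet()) {
                double score = taskScores.getOrDefault(w.getKey(), Map.of()).getOrDefault(task, 0.0);
                s += similarity(target, w.getValue()) * score;
            }
            return s;
        }).reversed());
        return ranked;
    }
}
```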

Unified Task Allocation System for Crowdsourced Testing

This system focuses on selecting suitable tasks for crowdsourced workers, covering the design and implementation of a unified task allocation system for crowdsourced testing. Relying on a well-known domestic collaborative testing platform, M, the system proposes a priority-based multi-objective dynamic task allocation mechanism to improve testing efficiency. Specifically, the system analyzes the fitness between tasks and workers from four aspects (test ability, experience, willingness, and credibility) and uses breadth-first test traversal and timers to achieve reasonable task allocation. At the same time, a task pricing mechanism based on user choice stimulates the enthusiasm of crowdsourced workers to complete their tasks with high quality while fully preserving workers' freedom of choice. Accordingly, the system is divided into four functional modules: test task adaptation, dynamic task allocation, test task pricing, and recording of crowdsourced workers' execution status.

Crowdtest Data Provenance System Based on Consortium Blockchain

To achieve reliable traceability of crowdsourced test data, a consortium blockchain composed of crowdtesting demanders, crowd workers, and crowdtest platform parties is established, and a crowdtest data provenance system based on this consortium blockchain is designed and implemented. The system collects crowdtest data in real time and attaches source information as provenance data to be stored on the consortium blockchain. Each storage operation is verified by the participants as a transaction in the consortium blockchain network and is completed automatically by a smart contract. The traceability data is maintained as a transaction ledger by multiple parties and is difficult to tamper with. The test results show that the system provides reliable provenance data for crowdsourced testing, and the consortium blockchain maintains good availability at a transaction throughput of 50 tps. The system provides data traceability services for crowdsourced testing, improves trust among participants, and contributes to the development of crowdsourced testing.
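
The system itself relies on a consortium blockchain with smart contracts and multi-party consensus; purely as a conceptual illustration of why chained storage is hard to tamper with, the sketch below hash-links provenance records in plain Java.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/**
 * Conceptual illustration only: each provenance record embeds the hash of the previous one,
 * so modifying any stored record breaks the chain. The real system uses a consortium
 * blockchain, not this simplified structure.
 */
public class ProvenanceRecord {
    public final String previousHash;
    public final String workerId;
    public final String payload;      // e.g. a test report plus its source information
    public final long timestamp;
    public final String hash;

    public ProvenanceRecord(String previousHash, String workerId, String payload, long timestamp) {
        this.previousHash = previousHash;
        this.workerId = workerId;
        this.payload = payload;
        this.timestamp = timestamp;
        this.hash = sha256(previousHash + "|" + workerId + "|" + payload + "|" + timestamp);
    }

    static String sha256(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] bytes = digest.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : bytes) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```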

Crowdsourcing Testing Assets Confirmation System Based on Blockchain

Relying on the MoocTest crowdsourced testing platform, we designed and implemented a crowdsourced testing assets confirmation system based on blockchain technology, aiming to solve problems in current crowdsourced testing. By storing the data involved in the crowdsourced testing process on the blockchain and applying chained data storage, consensus algorithms, smart contracts, and related technologies, the system realizes mutual trust among multiple participants at the data level and makes the data flow of the crowdsourced testing process open and transparent. The results show that the system achieves safe storage, real-time tracing, and assets confirmation of the data; its functions include data storage, querying, verification, and asset confirmation. In terms of performance, the system throughput reaches 300 tps, which supports the business requirements of real scenarios. In terms of security, there are no security holes in the system's blockchain smart contracts, and the system tolerates up to 1/3 faulty nodes.

PerTether

By analyzing Ethereum's PoW consensus protocol and gas mechanism, we identified two performance impact factors for private Ethereum blockchains: Difficulty and Gas Limit. We also introduce fault injection: by analyzing the common faults of distributed systems together with blockchain characteristics, we propose four types of faults for private Ethereum blockchains (application, consensus, smart contract, and network) and divide each fault type into three severity levels. On this basis, we implement a performance testing system for private Ethereum blockchains based on fault injection, so that real-world faults can be simulated in the test environment. We designed and performed two sets of experiments: impact factor verification and fault injection verification. In the impact factor experiments, as the factors changed, throughput decreased and latency increased significantly; in the fault injection experiments, the performance of Ethereum showed a downward trend during fault injection. The results show that the system can reflect the performance and stability of Ethereum under fault injection through its performance indicators.

MuSC

We design and implement a mutation testing system for the Solidity language. Specifically, we study the language characteristics of Solidity in terms of keywords, global variables/functions, exception detection, and contract vulnerabilities, and propose 16 Solidity-specific mutation operators. MuSC can generate a large number of mutants efficiently and accurately at the abstract syntax tree level, and supports automatic deployment and testing of contracts based on the Truffle framework. MuSC also provides other functions such as mutant display, custom test chain creation, and test report generation. Experimental results show that the mutation-testing-based method outperforms the coverage-based method in defect detection rate (96.01% vs. 55.68%), indicating that mutation testing measures the adequacy of Ethereum smart contract (ESC) testing better than code coverage. In addition, we reviewed and classified 729 real ESC bug reports and found that 117 of them are related to the Solidity-specific mutation operators, indicating that the new mutation operators can effectively reveal real defects.
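
MuSC operates on the Solidity abstract syntax tree with its 16 dedicated operators; purely as a toy, text-level illustration of how a single replacement-style operator produces one mutant per mutation site, consider the sketch below (the msg.sender/tx.origin swap is an illustrative example, not necessarily one of MuSC's operators).

```java
import java.util.ArrayList;
import java.util.List;

/** Toy, text-level mutation: one mutant per occurrence of the original token. */
public class KeywordMutator {
    /** Generate one mutant per occurrence of `original`, replacing only that occurrence. */
    public static List<String> mutate(String source, String original, String replacement) {
        List<String> mutants = new ArrayList<>();
        int index = source.indexOf(original);
        while (index >= 0) {
            String mutant = source.substring(0, index)
                    + replacement
                    + source.substring(index + original.length());
            mutants.add(mutant);
            index = source.indexOf(original, index + 1);
        }
        return mutants;
    }

    public static void main(String[] args) {
        String contract = "function pay() public { require(msg.sender == owner); owner.transfer(1 ether); }";
        // Illustrative mutation of the authorization check's sender source.
        for (String m : mutate(contract, "msg.sender", "tx.origin")) {
            System.out.println(m);
        }
    }
}
```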

PROJECTS
  1. National key R&D program of China: R&D and application of integrated crowdsourcing test service platform for information products and technology services (2018YFB1403400), 2019-2021
    国家重点研发计划:信息产品及科技服务集成化众测服务平台研发与应用 (2018YFB1403400),2019-2021

  2. National natural science foundation of China: Human-machine collaborative mobile application testing based on comprehensible information fusion (61802171), 2019-2021
    国家自然科学基金项目:基于可理解信息融合的人机协同移动应用测试研究(61802171), 2019-2021

  3. Nanjing Customs of the People's Republic of China: Technical support services for operational data visualization research, 2018-2019
    中华人民共和国南京海关:运行数据可视化研究相关技术支持服务,2018-2019

  4. Project for Tencent: Training system and experimental platform for software testing course, 2018-2019
    腾讯项目:软件测试课程实训体系与实验平台,2018-2019

  5. Jiangsu Tongxingbao traffic Intelligent transportation technology Co. Ltd.: Electronic account, MTC electronic payment and Su-Card customer behavior analysis and software testing project, 2018-2019
    江苏通行宝智慧交通科技有限公司:电子账户、MTC电子支付和苏卡通客户行为分析+软件测试项目,2018-2019

  6. The Jiangsu planned projects for postdoctoral research funds: Automated test generation for mobile applications based on continuous fusion model(2018K028C), 2018-2019
    江苏省博士后科研资助计划:基于持续融合模型的移动应用测试自动生成(2018K028C),2018-2019

  7. National natural science foundation of China (General Program): Research on report analysis and fusion of collaborative crowdsourcing test (61772014), 2018-2021
    国家自然科学基金项目(面上项目):协作式众包测试报告分析与融合技术研究(61772014), 2018-2021

  8. Nanjing NARI Corporation project: Mobile crowdsourcing test platform(20170413), 2017-2018
    南京南瑞集团项目:移动众测平台软件(20170413),2017-2018

  9. National quality supervision and testing center for software projects (Jiangsu): Crowdsourcing test software testing system platforms (Transfer and implementation of patent), 2015-2016
    国家软件产品质量监督检验中心(江苏):众测软件测试系统平台(专利转让与实施),2015-2016

  10. Prospective industry-academia-research project of Jiangsu province: Research on generation and evolution technologies of mobile application testing (BY2015069-03), 2015-2017
    江苏省产学研前瞻项目:移动应用测试生成与演化技术研究(BY2015069-03),2015-2017

  11. National natural science foundation of China (General Program): Program-characteristics-based testing data diversity analysis and application (61373013), 2014-2017
    国家自然科学基金项目(面上项目):基于程序特征的测试数据多样性分析及其应用(61373013), 2014-2017

  12. National quality supervision and testing center for software projects (Jiangsu): Software testing optimized system (Transfer and implementation of patent), 2014-2015
    国家软件产品质量监督检验中心(江苏):软件测试优化系统(专利转让与实施),2014-2015

  13. The second phase of golden gate project of general administration of customs: Integrated service for Nanjing customs security data switching platform, 2014
    海关总署金关工程二期:南京海关安全数据交换平台集成服务,2014

  14. Key projects employing foreign experts of ministry of education of China: Research on Test Case evolution technology, 2013-2014
    教育部聘请外国专家重点项目:测试用例演化技术研究,2013-2014

  15. National natural science foundation of China (Special Funds): 2013 graduate student Summer School-Application software engineering (J1321010), 2013
    国家自然科学基金项目(专项基金项目):2013年研究生暑期学校-应用软件工程(J1321010), 2013

  16. National natural science foundation of China: Academy for software engineering education and training (61210306018), 2012-2012
    国家自然科学基金:软件工程教育与培训会议(61210306018),2012-2012

  17. National natural science foundation of China: Multi-stage integrated Test Case evolving techniques (61170067), 2012-2015
    国家自然科学基金:多阶段融合测试用例演化技术(61170067),2012-2015

  18. Beijing Baidu Netcom Technology Co., Ltd.: Testing services based on software behavior analysis (Patent licensing and implementation), 2012-2013
    北京百度网讯科技有限公司:基于软件行为分析的测试服务(专利许可与实施),2012-2013

  19. National natural science foundation of China (Key Program):Software dependability evaluation-oriented testing techniques (90818027), 2009-2012
    国家自然科学基金(重点项目):面向软件可信性演化的测试技术(90818027),2009-2012

  20. Jiangsu planned projects for postdoctoral research funds:Some key techniques on software logic testing(0701003B), 2007-2008
    江苏省博士后科研资助计划:软件逻辑测试的若干关键技术(0701003B), 2007-2008

PUBLICATIONS

2020

  1. Di Liu, Yang Feng, Xiaofang Zhang, James A. Jones, Zhenyu Chen. Clustering Crowdsourced Test Reports of Mobile Applications Using Image Understanding. IEEE Transactions on Software Engineering
  2. Zhenfei Cao, Xu Wang, Shengcheng Yu, Yexiao Yun and Chunrong Fang. STIFA: Crowdsourced Mobile Testing Report Selection Based on Text and Image Fusion Analysis. ASE 2020 Demo

2019

  1. Yang Feng, Yi Wang, Chunrong Fang, Nannan Guo and Zhenyu Chen. An approach of developing highly trustworthy crowd workforce (in Chinese). SCIENTIA SINICA Informationis
  2. Zixin Li, Haoran Wu, Jiehui Xu, Xingya Wang, Lingming Zhang, Zhenyu Chen. MuSC: A tool for mutation testing of Ethereum smart contract. ASE 2019-Demo
  3. Yuan Zhao, Yang Feng, Yi Wang, Rui Hao, Zhenyu Chen. Quality assessment of crowdsourced test scripts. NASAC 2019
  4. Yuying Li, Rui Hao, Yang Feng, James Jones, Xiaofang Zhang, Zhenyu Chen. CTRAS: A tool for aggregating and summarizing crowdsourced test reports. ISSTA 2019 Demo
  5. Haoyu Li, Chunrong Fang, Zhibin Wei, Zhenyu Chen. CoCoTest: Collaborative crowdsourced testing for Android applications. ISSTA 2019 Demo
  6. Weiqiang Zhang, Shing-Chi Cheung, Zhenyu Chen, Yuming Zhou, Bin Luo. File-level socio-technical congruence and its relationship with bug proneness in OSS projects. Journal of Systems and Software
  7. Xingya Wang, Weisong Sun, Yuan Zhao, Linghuan Hu, Eric Wong, Zhenyu Chen. Software testing contest-observations and lessons learned. IEEE Computer
  8. Xin Chen, He Jiang, Zhenyu Chen, Tieke He, Liming Nie. Automatic test report augmentation to assist crowdsourced testing. Frontiers of Computer Science
  9. Rui Hao, Yang Feng, James Jones, Yuying Li, Zhenyu Chen. CTRAS: Crowdsourced test report aggregation and summarization. ICSE 2019

2018

  1. Yuan Zhao, Tieke He, Zhenyu Chen. A unified framework for bug report assignment. International Journal of Software Engineering and Knowledge Engineering
  2. W. Eric Wong, Linghuan Hu, Haoliang Wang, and Zhenyu Chen. Improving software testing education via industry sponsored contest. FIE 2018
  3. Weiqin Zou, David Lo, Zhenyu Chen, Xin Xia, Yang Feng, Baowen Xu. How practitioners perceive automated bug report management techniques. IEEE Transactions on Software Engineering (Early Access)
  4. Ruizhi Gao, Yabin Wang, Yang Feng, Zhenyu Chen, W. Eric Wong. Successes, challenges, and rethinking: an industrial investigation on crowdsourced mobile application testing. Empirical Software Engineering
  5. He Jiang, Xin Chen, Tieke He, Zhenyu Chen, Xiaochen Li. Fuzzy clustering of crowdsourced test reports for apps.[Chinese Brief] ACM Transactions on Internet Technology. 18(2): 18:1-18:28
  6. Xin Chen, He Jiang, Xiaochen Li, Tieke He, Zhenyu Chen. Automated quality assessment for crowdsourced test reports of mobile applications.[Chinese Brief] SANER 2018, pp. 368-379

2017

  1. Xiaofang Zhang, Yang Feng, Di Liu, Zhenyu Chen, Baowen Xu. Research progress of crowdsourced software testing. Journal of Software
  2. Tieke He, Zhenyu Chen, Jia Liu, Xiaofang Zhou, Xingzhong Du, Weiqing Wang. An empirical study on user-topic rating based collaborative filtering methods. WWW Journal
  3. Zhiyi Zhang, Zhenyu Chen, Ruizhi Gao, W. Eric Wong, Baowen Xu. An empirical study on constraint optimization techniques for test generation. SCIENCE CHINA Information Sciences 60(1): 12105

2016

  1. Zhiyi Zhang, Zhenyu Chen, Zicong Liu, Qingkai Shi, Baowen Xu. An empirical study on constraint optimization techniques for test generation. Science China-Information Science
  2. Xin Zhang, Zhenyu Chen, Chunrong Fang, Zicong Liu. Guiding the crowds for Android testing.[Chinese Brief] ICSE 2016-Poster, pp. 752-753
  3. Xiaoyuan Xie, Zicong Liu, Shuo Song, Zhenyu Chen, Jifeng Xuan, Baowen Xu. Revisit of automatic debugging via human focus-tracking analysis. ICSE 2016 [Tool][Data], pp. 808-819
  4. Yang Feng, Qin Liu, Mengyu Dou, Jia Liu, Zhenyu Chen. Mubug: a mobile service for rapid bug tracking. Science China-Information Science, 2016 [Demo]
  5. Qingkai Shi, Jeff Huang, Zhenyu Chen, Baowen Xu. Verifying synchronization for atomicity violation fixing. IEEE Transactions on Software Engineering, 2016 [Tool]
  6. Zebao Gao, Zhenyu Chen, Atif Memon, Yunxiao Zou. SITAR: GUI test script repair.[Chinese Brief] IEEE Transactions on Software Engineering, 2016 [Data]

Before 2015

  1. Yang Feng, Zhenyu Chen, James Jones, Chunrong Fang and Baowen Xu. Test report prioritization to assist crowdsourced testing.[Chinese Brief] ESEC/FSE 2015, pp. 225-236
  2. Weiqin Zou, Xin Xia, Weiqiang Zhang, Zhenyu Chen, David Lo. An empirical study of bug fixing rate. COMPSAC 2015, pp. 254-263
  3. Yibiao Yang, Yuming Zhou, Hongmin Lu, Lin Chen, Zhenyu Chen, Baowen Xu, Hareton Leung, Zhengyu Zhang. Are slice-based cohesion metrics actually useful in effort aware fault-proneness prediction? An empirical study. IEEE Transactions on Software Engineering
  4. Weiqiang Zhang, Liming Nie, He Jiang, Zhenyu Chen, Jia Liu. Developer social networks in software engineering: construction, analysis, and application.[Chinese Brief] Science China-F
  5. Haoyu Yang, Chen Wang, Qingkai Shi, Yang Feng, Zhenyu Chen. Bug inducing analysis to prevent fault prone bug fixes. SEKE 2014, pp. 620-625
  6. Xiaoran Xu, Chunrong Fang, Qing Wu, Jia Liu, Zhenyu Chen. Testing as an investment. SEKE 2014
  7. Yunxiao Zou, Zhenyu Chen, Yunhui Zheng, Xiangyu Zhang, Zebao Gao. Virtual DOM coverage for effective testing of dynamic web applications. ISSTA 2014, pp. 60-70
  8. Zicong Liu, Zhenyu Chen, Chunrong Fang, Qingkai Shi. Hybrid test data generation. ICSE 2014, pp. 630-631
  9. Zhenyu Chen, Bin Luo. Quasi-crowdsourcing testing for educational projects.[Chinese Brief] ICSE 2014, pp. 272-275
  10. Yunxiao Zou, Chunrong Fang, Zhenyu Chen, Xiaofang Zhang, Zhihong Zhao. A hybrid coverage criterion for dynamic Web testing (S). SEKE 2013, pp. 210-213
  11. Zhiyi Zhang, Zhenyu Chen, Baowen Xu, Rui Yang. Research progress on test case evolution (in Chinese). Chinese Journal of Software
  12. Chunrong Fang, Zhenyu Chen, Baowen Xu. Comparing logic coverage criteria on test case prioritization. Science China Information Science
  13. Yabin Wang, Zhenyu Chen, Yang Feng, Bin Luo, Yijie Yang. Using weighted attributes to improve cluster test selection. SERE 2012, pp. 138-146
  14. Zhenyu Chen, Yongwei Duan, Zhihong Zhao, Baowen Xu, Ju Qian. Using program slicing to improve the efficiency and effectiveness of cluster test selection.[Chinese Brief] International Journal of Software Engineering and Knowledge Engineering
  15. Zhiyi Zhang, Dongjiang You, Zhenyu Chen, Yuming Zhou, Baowen Xu. Mutation selection: some could be better than all. EAST Workshop
  16. Zhenyu Chen, Xiaofang Zhang, Baowen Xu. A degraded ILP approach for test suite reduction. SEKE 2008, pp. 494-499
  17. Zhenyu Chen, Baowen Xu, Changhai Nie. A detectability analysis of fault classes for boolean specifications. ACM SAC 2008, pp. 390-394
  18. Zhenyu Chen, Baowen Xu, Xiaofang Zhang, Changhai Nie. A novel approach for test suite reduction based on requirement relation contraction. ACM SAC 2008, pp. 826-830
  19. Zhenyu Chen, Zhihong Tao, Hans Kleine Büning, Lifu Wang. Applying variable minimal unsatisfiability in model checking. Chinese Journal of Software, 19(1): 39-47
  20. Zhenyu Chen, Baowen Xu, Changhai Nie. Comparing fault-based testing strategies of general boolean specifications. IEEE COMPSAC 2007, pp. 621-622
  21. Zhenyu Chen, Zhihong Tao, Baowen Xu, Lifu Wang. Implication-based approximating bounded model checking. SEN, LNCS 2007, pp. 350-363
  22. Zhenyu Chen, Decheng Ding. Variable minimal unsatisfiability. TAMC, LNCS 2006, pp. 262-273
  23. Zhenyu Chen, Conghua Zhou, and Decheng Ding. Automatic abstraction refinement for petri nets verification. HLDVT 2005, pp. 168-174