
Shengzhi Zhang
Associate Professor; Associate Chair, Computer Science
Dr. Shengzhi Zhang earned his PhD in Computer Science and Engineering from Penn State University in 2012. His research focuses on cybersecurity, including Internet of Things (IoT) security, automobile security, mobile security, and operating system security. Most recently, he was an assistant professor in the Department of Computer Science at the Florida Institute of Technology. Prior to academia, Dr. Zhang conducted research projects at Cisco, IBM, and Honeywell Aerospace labs. His partnerships, both national and international, include researchers from Ford Motor, IBM, GE, Indiana University, Penn State, Kuwait University, and the Chinese Academy of Sciences. Dr. Zhang has published widely and has served on the program committees of top-tier security conferences and journals.
Research Interests
- IoT security
- Automobile Security
- Mobile Security
- System Security
Courses
- MET CS 544 – Foundations of Analytics and Data Visualization
Scholarly Works
Editor-in-Chief, Journal of Secure Communications and Systems (January 2017–present)
Book Chapters
Peng Liu, Xiaoqi Jia, Shengzhi Zhang, Xi Xiong, Yoon-chan Jhi, Kun Bai, and Jason H. Li. “Cross-Layer Damage Assessment for Cyber Situational Awareness.” In Cyber Situational Awareness: Issues and Research, edited by S. Jajodia, P. Liu, V. Swarup, and C. Wang. Springer International Series on Advances in Information Security (November 2009). ISBN: 978-1-4419-0139-2
Refereed Journal Articles
Shengzhi Zhang, Xiaoqi Jia, and Peng Liu. “Towards service continuity for transactional applications via diverse device drivers.” International Journal of Information and Computer Security 8, no. 4 (2016): 382–400.
Shengzhi Zhang, Wenjie Wang, Haishan Wu, Athanasios V. Vasilakos, and Peng Liu. “Towards transparent and distributed workload management for large scale web servers.” Future Generation Computer Systems 29, no. 4 (2013): 913–925.
Xiaoqi Jia, Rui Wang, Jun Jiang, Shengzhi Zhang, and Peng Liu. “Defending return-oriented programming based on virtualization techniques.” Security and Communication Networks 6, no. 10 (2013): 1236–1249.
Shengzhi Zhang and Sang-Jo Yoo. “Hidden node collision recovery protocol for low rate wireless personal area networks.” Wireless Communications and Mobile Computing 12, no. 15 (2012): 1351–1362.
Shengzhi Zhang, Xiaoqi Jia, Peng Liu, and Jiwu Jing. “PEDA: Comprehensive Damage Assessment for Production Environment Server Systems.” IEEE Transactions on Information Forensics and Security 6, no. 4 (2011): 1323–1334.
Refereed Conference and Workshop Papers
Shengzhi Zhang, Omar Makke, Oleg Yu Gusikhin, Ayush Shah, and Athanasios Vasilakos. “A security model for dependable vehicle middleware and mobile applications connection.” In VEHITS 2018: Proceedings of the 4th International Conference on Vehicle Technology and Intelligent Transport Systems, edited by O. Gusikhin and M. Helfert (2018): 379–386.
Le Guan, Chen Cao, Peng Liu, Xinyu Xing, Xinyang Ge, Shengzhi Zhang, Meng Yu, and Trent Jaeger. “Building a Trustworthy Execution Environment to Defeat Exploits from both Cyber Space and Physical Space for ARM.” IEEE Transactions on Dependable and Secure Computing (July 31, 2018). DOI: 10.1109/TDSC.2018.2861756
Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Kai Chen, Shengzhi Zhang, Heqing Huang, and Xiaofeng Wang. “CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition.” In Proceedings of the 27th Usenix Security Symposium, Baltimore, Maryland (2018): 40–64.
Le Guan, Peng Liu, Xinyu Xing, Xinyang Ge, Shengzhi Zhang, Meng Yu, and Trent Jaeger. “TrustShadow: Secure Execution of Unmodified Applications with ARM TrustZone.” The 15th ACM International Conference on Mobile Systems, Applications, and Services (MobiSys 2017).
Shengzhi Zhang, Xiaoqi Jia, and Weijuan Zhang. “Towards Comprehensive Protection for OpenFlow Controllers.” The 19th Asia-Pacific Network Operations and Management Symposium (APNOMS 2017). (Best paper award.)
Sultan Aldossary, William Allen, and Shengzhi Zhang. “Mathematical Model for Using Moving Target Defense to Mitigate Memory Randomization Weaknesses.” The 4th Annual Conference on Computational Science and Computational Intelligence (2017).
Weijuan Zhang, Xiaoqi Jia, Chang Wang, Shengzhi Zhang, Qingjia Huang, Mingsheng Wang, and Peng Liu. “A Comprehensive Study of Co-residence Threat in Multi-tenant Public PaaS Clouds.” The 18th IEEE International Conference on Information and Communications Security (ICICS 2016).
Zimin Lin, Rui Wang, Xiaoqi Jia, Shengzhi Zhang, and Chuankun Wu. “Classifying Android Malware with Dynamic Behavior Dependency Graphs.” The 15th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TRUSTCOM 2016).
Zimin Lin, Rui Wang, Xiaoqi Jia, Shengzhi Zhang, and Chuankun Wu. “Analyzing Android Repackaged Malware by Decoupling Their Event Behaviors.” International Workshop on Security (IWSEC 2016). (Award paper.)
Mark Fioravanti, Ayush Shah, and Shengzhi Zhang. “A Study of Network Domains Used in Android Applications.” The 9th International Conference on Network and System Security (NSS 2015).
Craig Sanders, Ayush Shah, and Shengzhi Zhang. “Comprehensive Analysis of the Google Play’s Auto-Update Policy.” The 11th International Conference on Information Security Practice and Experience (ISPEC 2015).
Rui Wang, Xiaoqi Jia, Qinlei Li, and Shengzhi Zhang. “Machine Learning based Cross-site Scripting Detection in Online Social Network.” The 6th International Symposium on Cyberspace Safety and Security (CSS 2014).
Shengzhi Zhang and Peng Liu. “Assessing the Trustworthiness of Drivers.” The 15th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2012).
Shengzhi Zhang and Peng Liu. “Letting Applications Operate through Attacks Launched from Compromised Device Drivers.” The 7th ACM Symposium on Information, Computer and Communications Security (ASIACCS 2012).
Jun Jiang, Xiaoqi Jia, Dengguo Feng, Shengzhi Zhang, and Peng Liu. “HyperCrop: A Hypervisor-based Countermeasure for Return Oriented Programming.” In Proceedings of the 13th International Conference on Information and Communications Security (ICICS 2011), November 2011.
Shengzhi Zhang, Haishan Wu, Wenjie Wang, Bo Yang, Peng Liu, and Athanasios V. Vasilakos. “Distributed Workload and Response Time Management for Web Applications.” In Proceedings of the 7th International Conference on Network and Service Management (CNSM 2011), October 2011.
Junfeng Yu, Shengzhi Zhang, Peng Liu, and Zhitang Li. “LeakProber: A Framework for Profiling Sensitive Data Leakage Path.” In Proceedings of the first ACM Conference on Data and Application Security and Privacy (CODASPY ’11), February 2011: 75–84.
Shengzhi Zhang, Xiaoqi Jia, Peng Liu, and Jiwu Jing. “Cross-Layer Comprehensive Intrusion Harm Analysis for Availability-Critical Server Systems.” In Proceedings of the 26th Annual Computer Security Applications Conference (ACSAC ’10), December 2010: 297–306.
Shengzhi Zhang, Xi Xiong, and Peng Liu. “Challenges in Improving the Survivability of Data Centers.” In Proceedings of Workshop on Survivability in Cyberspace, April 2010. (Invited paper.)
Shengzhi Zhang, Xi Xiong, Xiaoqi Jia, and Peng Liu. “Availability-Sensitive Intrusion Recovery.” In Proceedings of the 2nd ACM workshop on Virtual machine security (VMSec ’09), November 2009: 43–48. (Position paper.)
Xiaoqi Jia, Shengzhi Zhang, Jiwu Jing, and Peng Liu. “Using Virtual Machines to Do Cross-Layer Damage Assessment.” In Proceedings of the 1st ACM Workshop on Virtual Machine Security (VMSec ’08), October 2008: 29–38.
Shengzhi Zhang and Sang-Jo Yoo. “Fast Recovery from Hidden Node Collision for IEEE 802.15.4 LR-WPANs.” In Proceedings of the 7th IEEE International Conference on Computer and Information Technology (CIT 2007), November 2007: 393–398.
Technical Report
Shengzhi Zhang, Srivatsan Varadarajan, and Allalaghatta Pavan. “Non-Interference Verification of Zeroization.” Technical report, Platform Systems Group, Honeywell Aerospace.
Posters
Shengzhi Zhang, Xiaoqi Jia, and Peng Liu. “Rupi’s Dance: Cross-Layer Comprehensive Infection Diagnosis for Availability-Critical Server Systems.” Poster session of the 5th ACM SIGOPS EuroSys Conference (EuroSys 2010).
Inventions and Patents
Sang-Jo Yoo, Shengzhi Zhang, and Ju-hyun Lee. “Adaptive Hidden Node Collision Recovery Protocol for IEEE 802.15.4 LR-WPANs.” Patent No. 10-0896986, filed with the Korean Intellectual Property Office on May 4, 2009.
Faculty Q&A
What is your area of expertise?
My research focuses on cybersecurity, especially security issues that affect people’s daily lives, such as AI security, IoT security, automobile security, and smartphone security.
Please tell us about your work. Can you share any current research or recent publications?
Currently, I have three ongoing projects. The first studies security issues in machine learning, especially deep neural networks (DNNs). We craft hard-to-notice “perturbations” into audio, video, or images that can deceive DNN-based recognition systems and cause mispredictions. For instance, a voice recognition system such as Google Assistant may decode “call 911” from a song embedded with our perturbation, while people listening to the song cannot interpret the command. Our research shows that such “adversarial attacks” on machine learning exist not only in voice and image recognition but also in object detection, which is widely used in autonomous driving.
This research was covered by The Register (https://www.theregister.co.uk/2018/01/30/boffins_songs_ai_assistants) and is described in the following publications: “CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition” (Proceedings of the 27th Usenix Security Symposium, Baltimore, Maryland, 2018) and “Practical Adversarial Attack Against Object Detector” (arXiv preprint arXiv:1812.10217).
The second project comprehensively protects the execution environment of unmodified applications running on ARM-based IoT devices. By taking advantage of ARM TrustZone technology, we construct a trusted execution environment for security-critical applications that is isolated from the untrusted operating system. Publications in this area include “TrustShadow: Secure Execution of Unmodified Applications with ARM TrustZone” (Proceedings of the 15th ACM International Conference on Mobile Systems, Applications, and Services, Niagara Falls, New York, 2017) and “Building a Trustworthy Execution Environment to Defeat Exploits from both Cyber Space and Physical Space for ARM” (IEEE Transactions on Dependable and Secure Computing, 2018).
Finally, there is a joint project with Ford Motor Company to design a security model for the head-unit systems of future cars. Car manufacturers such as Ford are developing next-generation head-unit systems with software modules and connectivity, giving passengers and drivers a seamless experience and increasing safety on the road. We are collaborating on a novel security model that integrates cryptography, network security, and system security approaches to mitigate threats to vehicle security. You can read more in “A security model for dependable vehicle middleware and mobile applications connection” (VEHITS 2018: Proceedings of the 4th International Conference on Vehicle Technology and Intelligent Transport Systems).
How does the subject you work in apply in practice? What is its application?
I will take the first project above as an example of the practical impact of our research.
Recently, deep neural networks have advanced artificial intelligence in many areas, such as speech recognition, face recognition, and strategic games, and especially in safety-critical tasks such as autonomous driving and medical diagnostics. However, deep neural networks are known to be vulnerable to adversarial examples, which add small perturbations to original inputs to fool the network into misclassification. Most recent research is limited to image classifiers rather than speech recognition or object detectors. Our research reveals that adversarial attacks can also be crafted against speech recognition and object detection systems. These findings demonstrate that using deep learning techniques without security in mind can significantly impact the safety of everyone’s daily life.
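The core mechanism behind adversarial examples can be sketched numerically. The following is a minimal, hypothetical illustration using a toy linear classifier (not the DNNs or attack pipelines from the papers above): a perturbation bounded by a small epsilon, chosen per-feature against the sign of the model’s gradient, is enough to flip the prediction.

```python
import numpy as np

# Toy sketch of a gradient-guided (FGSM-style) adversarial perturbation.
# The "model" here is a hypothetical linear classifier, purely for illustration:
#   score(x) = w . x;  score > 0 -> "class A", score < 0 -> "class B".
rng = np.random.default_rng(seed=0)
w = rng.normal(size=1000)      # model weights
x = 0.01 * w                   # an input the model confidently labels "class A"

clean_score = float(w @ x)     # positive: classified as "class A"

# For a linear model, the gradient of the score w.r.t. x is just w.
# Nudge every input feature by at most eps, against the gradient's sign.
eps = 0.02
x_adv = x - eps * np.sign(w)

adv_score = float(w @ x_adv)   # negative: the perturbed input is misclassified
print(clean_score, adv_score)
```

Each feature changes by at most `eps`, yet the summed effect across many features is enough to cross the decision boundary; the attacks on speech and object recognition described above exploit the same principle in far higher-dimensional input spaces.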
For instance, our work “CommanderSong” crafts small perturbations into a song, so that the “revised” song is decoded by a speech recognition system as a valid command, even though a human cannot interpret the command. Consider a modern home with an Amazon Echo connected to smart locks, lights, switches, and more. The resident gets home, tired, and streams music from YouTube. If the song from YouTube happens to be a “CommanderSong” uploaded by us (supposing we are hackers), the Echo will be triggered to decode a valid command from the song, for instance, “Echo, open the door” (the exact command can be controlled by the perturbations we add to the original song). The Echo will then execute the command and unlock the door. Because the perturbations are small, the command in the revised song cannot be interpreted by human ears, and based on our survey, a person would find it hard even to notice any anomaly. Given the popularity of speech recognition in smart homes, smartphones, and even cars, security countermeasures are urgently needed when deep neural networks are used for speech recognition.
Another example of an adversarial attack deceives modern DNN-based object detectors, which are widely used in fields such as autonomous driving. Most existing autonomous vehicles rely on cameras to capture the surrounding environment, from which object detectors recognize stop signs, traffic lights, pedestrians, and so forth. Our research demonstrates two kinds of adversarial attacks against object detectors: a “hiding” attack that fools the detector so it cannot recognize an object, and an “appearing” attack that fools the detector into recognizing a non-existent object. For the hiding attack, we attached carefully crafted perturbations to a stop sign, which the object detector was then unable to recognize from different angles and distances. For the appearing attack, the object detector marked our crafted adversarial image as a traffic light, though a driver would not interpret it that way. Since object detectors play a significant role in autonomous driving, such adversarial attacks require immediate attention to assure passenger safety.
What course(s) do you teach at MET?
I teach Foundations of Analytics with R (MET CS 544).
Please highlight a particular project within this course that most interests your students. If you previously worked in industry, what “real-life” exercises do you bring to class?
My students are encouraged to choose datasets based on their own interests and apply the analytics techniques learned in class to analyze the data. Since students taking CS 544 come from diverse backgrounds, including biology, finance, mechanical engineering, and psychology, they typically have a specific dataset they want to analyze. The project allows students to work on datasets of their choosing, including ones closely related to their own research, which makes it highly practical and appealing.
What advice do you have for new students?
Information technology has become a very broad field. Pick the areas within IT that interest you the most. You will enjoy your work, develop a habit of continuous learning, and build experience you can rely on in the future.