Publications

Peer-Reviewed Publications from NortonLifeLock Research Group

Examining the Adoption and Abandonment of Security, Privacy, and Identity Theft Protection Practices

In Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI 2020) (Honorable Mention Award)
Our online survey of 902 individuals examines why users struggle to adhere to expert-recommended security, privacy, and identity-protection practices. We examined 30 such practices, finding that gender, education, technical background, and prior negative experiences correlate with adoption levels. We found that practices were abandoned when they were perceived as low-value or inconvenient, or when they were overridden by subjective judgment. We discuss how tools and expert recommendations can better align with user needs.

Robust Federated Learning via Collaborative Machine Teaching

In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI 2020)
For federated learning systems deployed in the wild, flaws in the data held by local agents are common. On one hand, when a large amount of the training data is corrupted by systematic sensor noise and environmental perturbations, the performance of federated model training can degrade significantly. On the other hand, it is prohibitively expensive for either clients or service providers to set up manual sanity checks to verify the quality of individual data instances. In our study, we address this challenge by proposing a collaborative and privacy-preserving machine teaching method. Specifically, we use a few trusted instances provided by teachers as benign examples in the teaching process. Our collaborative teaching approach jointly seeks the optimal tuning of the distributed training set such that the model learned from the tuned training set correctly predicts the labels of the trusted items. The proposed method couples the teaching and learning processes and thus directly produces a prediction model that is robust to pervasive systematic data corruption. An experimental study on real benchmark data sets demonstrates the validity of our method.
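To make the intuition concrete, here is a minimal, hypothetical sketch (not the paper's algorithm): a local agent greedily drops training points whose removal most reduces a simple model's loss on a few trusted examples, which is one crude way to "tune" a corrupted local training set against teacher-provided benign instances. The learner, function names, and greedy strategy are all illustrative assumptions.

```python
# Illustrative sketch only: greedily prune a (possibly corrupted) local training
# set so that a model fit on the pruned set better predicts a small trusted set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def tune_local_set(X_local, y_local, X_trusted, y_trusted, budget=10):
    """Return a boolean mask of local examples to keep (assumed helper)."""
    keep = np.ones(len(y_local), dtype=bool)
    for _ in range(budget):
        base = LogisticRegression(max_iter=1000).fit(X_local[keep], y_local[keep])
        best_loss = log_loss(y_trusted, base.predict_proba(X_trusted),
                             labels=base.classes_)
        best_i = None
        # Brute-force search: try removing each remaining point in turn.
        for i in np.flatnonzero(keep):
            trial = keep.copy()
            trial[i] = False
            m = LogisticRegression(max_iter=1000).fit(X_local[trial], y_local[trial])
            loss = log_loss(y_trusted, m.predict_proba(X_trusted), labels=m.classes_)
            if loss < best_loss:
                best_i, best_loss = i, loss
        if best_i is None:   # no single removal improves trusted-set loss
            break
        keep[best_i] = False
    return keep
```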

Webs of Trust: Choosing Who to Trust on the Internet

To appear in the Proceedings of the ENISA Annual Privacy Forum (APF 2020)
We discuss the problem of creating an open, decentralized, secure and privacy-aware reputation system for the Internet.

The Many Kinds of Creepware Used for Interpersonal Attacks

In Proceedings of the 41st IEEE Symposium on Security and Privacy (S&P 2020)
Technology increasingly facilitates interpersonal attacks such as stalking, abuse, and other forms of harassment. While prior studies have examined the ecosystem of software designed for stalking, our study uncovers a larger landscape of apps, which we call creepware, used for interpersonal attacks. We discover and report on apps used for harassment, impersonation, fraud, information theft, concealment, hacking, and other attacks, as well as creative defensive apps that victims use to protect themselves.

Tactical Provenance Analysis for Endpoint Detection and Response Systems

In Proceedings of the 41st IEEE Symposium on Security and Privacy (S&P 2020)

Adversarial Campaign Mitigation via ROC-Centric Prognostics

In Proceedings of the Annual Conference of the PHM Society (PHM 2020)
We introduce turbidity detection as a practical superset of the adversarial input detection problem, one that copes with adversarial campaigns rather than statistically invisible one-offs. This perspective is coupled with ROC-theoretic design guidance that prescribes an inexpensive domain adaptation layer at the output of a deep learning model during an attack campaign. The result aims to approximate the Bayes-optimal mitigation that ameliorates the detection model's degraded health.
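As a rough, hedged illustration of an inexpensive output-side adaptation (not the paper's ROC-theoretic procedure), the sketch below re-derives the detector's decision threshold from a freshly measured ROC curve so that a target false-positive rate is maintained while scores drift during a campaign; the function name and the target rate are assumptions.

```python
# Illustrative sketch only: re-pick an operating threshold from recent labeled
# scores so the deployed detector holds a target false-positive rate.
import numpy as np
from sklearn.metrics import roc_curve

def recalibrated_threshold(recent_scores, recent_labels, target_fpr=0.01):
    """Threshold whose measured FPR on recent data is closest to target_fpr."""
    fpr, _, thresholds = roc_curve(recent_labels, recent_scores)
    return thresholds[int(np.argmin(np.abs(fpr - target_fpr)))]

# Usage (hypothetical): flag inputs whose model score clears the new threshold.
# is_adversarial = new_scores >= recalibrated_threshold(recent_scores, recent_labels)
```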

SEAL: Attack Mitigation for Encrypted Databases via Adjustable Leakage

To appear in the Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC '20)
We propose SEAL, a family of new searchable encryption schemes with adjustable leakage. In SEAL, the amount of privacy loss is expressed in leaked bits of search or access pattern and can be defined at setup. As our experiments show, when protecting only a few bits of leakage (e.g., three to four bits of access pattern), which is enough to make existing and even newer, more aggressive attacks fail, SEAL's query execution time remains practical for real-world applications (a little over one order of magnitude slower than traditional SE-based encrypted databases).
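As a toy illustration of the adjustable-leakage idea, and not the SEAL construction itself, the snippet below pads each query's result volume up to the next power of a tunable base x, so an observer sees only a coarse bucket rather than the exact volume; the function name and padding rule are illustrative assumptions.

```python
# Toy illustration only: volume-pattern padding with a tunable base x.
# Larger x hides more of the true result volume at the cost of more dummy records.
def padded_volume(true_volume: int, x: int = 4) -> int:
    """Smallest power of x that is >= true_volume (hypothetical helper)."""
    padded = 1
    while padded < true_volume:
        padded *= x
    return padded

print(padded_volume(37, x=2))   # 64  -> leaks the volume fairly precisely
print(padded_volume(37, x=16))  # 256 -> leaks only a coarse bucket
```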

Recurrent Attention Walk for Semi-supervised Classification

In Proceedings of the 13th ACM International Conference on Web Search and Data Mining (WSDM 2020)
In this paper, we study graph-based semi-supervised learning for classifying nodes in attributed networks, where the nodes and edges carry content information. Recent approaches such as graph convolutional networks and attention mechanisms have been proposed to ensemble first-order neighbors and incorporate the relevant ones. However, it is costly (especially in memory) to consider all neighbors without prior differentiation. We propose to explore the neighborhood in a reinforcement learning setting and find a walk path well tuned for classifying the unlabeled target nodes. We let an agent (for the node classification task) walk over the graph and decide where to move in order to maximize classification accuracy. We define the graph walk as a partially observable Markov decision process (POMDP). The proposed method is flexible enough to work in both transductive and inductive settings. Extensive experiments on four data sets demonstrate that our method outperforms several state-of-the-art methods. Several case studies also illustrate the meaningful movement trajectories made by the agent.
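A hypothetical toy sketch of the walk intuition, not the paper's POMDP/RL formulation: from a target node, take a short walk whose next hop is sampled from an attention-style softmax over the neighbors, then aggregate the features seen along the path for a downstream classifier. The attention vector w and the mean aggregation are illustrative assumptions.

```python
# Illustrative sketch only: a short attention-guided walk that gathers features
# for classifying the start node.
import numpy as np

def attention_walk_features(adj, feats, start, w, steps=3, rng=None):
    """adj: node -> list of neighbors; feats: node -> feature vector (assumed)."""
    rng = rng or np.random.default_rng(0)
    node, path_feats = start, [feats[start]]
    for _ in range(steps):
        nbrs = adj[node]
        if not nbrs:
            break
        # Attention-style scores between a learned direction w and each neighbor.
        scores = np.array([w @ (feats[n] - feats[node]) for n in nbrs])
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        node = nbrs[rng.choice(len(nbrs), p=probs)]
        path_feats.append(feats[node])
    return np.mean(path_feats, axis=0)  # feed this to any downstream classifier
```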

Training Older Adults to Resist Scams with Fraud Bingo and Scam Detection Challenges

In Proceedings of the 2020 CHI Workshop on Designing Interactions for the Ageing Populations - Addressing Global Challenges
Older adults are disproportionately affected by scams, many of which target them specifically. We present Fraud Bingo, an intervention designed by the WISE & Healthy Aging Center in Southern California prior to 2012 that has been played by older adults throughout the United States. We also present the Scam Defender Obstacle Course (SDOC), an interactive web application that tests a user's ability to identify scams and then teaches them how to recognize those scams. SDOC is patterned after existing phishing-recognition training tools for working professionals. We present the results of a workshop with 17 senior citizens, in which we ran a controlled study that used SDOC to measure the effectiveness of Fraud Bingo. We outline the difficulties several participants had completing SDOC, which indicate that tools like SDOC should be tailored to the needs of older adults.

Dirty Clicks: A Study of the Usability and Security Implications of Click-related Behaviors on the Web

In Proceedings of The Web Conference (WWW 2020)
We present the first comprehensive study of the possible security and privacy implications that clicks can have from a user's perspective, analyzing the disconnect between what is shown to users and what actually happens after they click.

Attackability Characterization of Adversarial Evasion Attack on Discrete Data

In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2020)

Cookies from the Past: Timing Server-Side Request Processing Code for History Sniffing

In Digital Threats: Research and Practice (DTRAP 2020) - ACSAC Special Issue

The Tangled Genealogy of IoT Malware

In Proceedings of the 36th Annual Computer Security Applications Conference (ACSAC 2020)

SoK: Cyber Insurance - Technical Challenges and a System Security Roadmap

In Proceedings of the 41st IEEE Symposium on Security and Privacy (S&P 2020)
This paper surveys past research on cyber insurance and classifies previous studies into four areas. It then identifies a set of practical research problems where security experts could help the cyber insurance domain.

Secure Systems

Central to trust in an increasingly digital world is the ability to detect and prevent attacks in modern (and not so modern) information systems. This research includes building secure software, supporting forensics, malware analysis, browser/web/network security, and information-centric security.

Privacy, Identity, and Trust

Consumers and corporations are driven to engage in a digital world that they cannot adequately trust. We are developing paradigms to enable online commerce and facilitate machine learning in ways that provide privacy and protect user identities, by leveraging such concepts as local differential privacy, federated machine learning, identity brokering, and blockchain technology.

Robust and Fair Machine Learning, Data Mining, and Artificial Intelligence

The tremendous growth in the learning capacity of machine learning methods has yet to be matched by a corresponding growth in our ability to understand these models. Equally troubling, our ability to build robust machine learning models has not kept pace with research on adversarial attacks against machine learning. As we increasingly hand over decision making to automated machine learning and AI systems, we must find ways to ensure that the life-altering decisions made by these systems can be audited for fairness, safety, robustness to adversaries, and preservation of the privacy of any personally identifiable information over which they operate.
