My current research is in security techniques for operating systems and the applications that use them. Potential graduate students (at both the Master's and PhD level) are very welcome to contact me via email, especially students with interests and/or experience in the areas of operating systems, compilers or low-level programming. I also maintain an interest in privacy for social networks, especially as it pertains to social applications and the filesystems that support them.

OS and application security

Transparent computing

I am currently working on tools to answer the question, “what caused this bad thing to happen on my host/network, and what other effects did it have?” Answering this question without resorting to stochastic intrusion detection requires far greater insight into the processes and OS kernels running on end hosts than we can currently dream of, as well as distributed-systems work across the network. This lofty goal has been set by the DARPA Transparent Computing program, and I am working as part of a collaborative effort among Memorial, BAE Systems and the University of Cambridge to address it.

Here at Memorial, we are using compiler and OS techniques (based on LLVM and FreeBSD/DTrace, respectively) to yield far greater levels of transparency than current software can provide and to trace strong causal links among program and OS events. This work follows on from the TESLA work described below. In addition to the usual openings for graduate students interested in these topics, I’m currently looking for a research software developer to assist with this project.

Capsicum

I also work in the area of OS security foundations and the applications that use them. I'm a collaborator in the Capsicum project, which brings capability-based confinement to FreeBSD (and has also been ported to Linux by Google). Capsicum allows applications as simple as tcpdump or as complex as Chromium to be sandboxed with robust confinement and a simple API. However, the existence of OS-level primitives does not automatically improve application security: research continues in the areas of application static analysis to improve utilization of sandboxing features and in the provision of tools to support application compartmentalization needs.

This research is supported by the NSERC Discovery and the RDC Ignite R&D programs.

TESLA

The Transparent Computing project above builds on my previous work on TESLA: Temporally Enhanced Security Logic Assertions (part of the CTSRD project). TESLA allows programmers to make assertions about their code that are temporal in nature: rather than asserting that “x is non-negative”, programmers can assert that “between the beginning of the system call and now, an access control check was done” or “the life cycle of this structure follows an explicit finite-state automaton”. This allows programmers to succinctly describe complex systemic invariants (e.g. locking, access control) whose violations are difficult to detect with conventional bug-finding tools.

CHERI

Capsicum provides process-based isolation that works well but does not scale to thousands of security domains. At that scale, the use of virtual-memory–based protection (UNIX processes) collides with limitations of today’s translation lookaside buffers (TLBs). Going further requires new hardware primitives, as embodied by the CHERI research processor developed as part of the CTSRD project (to which I contributed as a PhD student and postdoc at Cambridge), also funded by DARPA.

Privacy in social networks

I completed my PhD, entitled “Privacy engineering in social networks”, at the University of Cambridge Computer Laboratory under the supervision of Dr Frank Stajano. I was also a member of Trinity College.

We all know how useful today's online social networking tools can be, but they often have the unfortunate side effect of sharing your life with people and companies that you never chose to share it with. Aside from demonstrating problems with the status quo, I designed an architecture and prototype implementation to prove that it's possible to have both privacy and the benefits of social networking.

This work showed how users’ desired sharing and security policies can be enforced without trusting any provider to do so.

Some of these ideas are summarised in our May–June 2013 IEEE Security & Privacy article as well as an earlier workshop paper.