Efforts in the Distributed Sanctuaries project have resulted in mechanisms and prototypes examining various aspects of service and information sharing between entities belonging to multiple administrative domains that enter into asymmetric coalition relationships with each other. Our general architecture comprises multiple coalition partners, each with multiple assets (hosts), that can run delegate computations initiated by agents and host objects (representing services) whose access is subject to directives derived from high-level trust relationships between partners.

Our current research is focused on the following four areas:

1.       Distributed Authentication and Access Control Frameworks: When a service request arrives at an object, we need to identify the agent initiating the request and determine the privileges associated with that agent. The problem is complicated by the fact that the service may be completely unaware of the agent, as can happen when the agent belongs to a different organization in the coalition. To address this problem, we have designed a credential-based security framework for dynamic coalitions. Central to this framework is a decentralized trust management system called dRBAC. The framework uses dRBAC to extend intra-member role-based access control to a coalition setting by defining “service visibility” and “role remapping” relationships based on negotiations between coalition partners. The former determines which services are accessible, while the latter dictates the level of service to which a coalition user becomes entitled. Novel aspects of this framework include access control tuples that qualify access given a certain role (e.g., priority labels, rate limits), and the ability to automatically and transitively compute the values of these tuples for coalition relationships that do not define them explicitly.

This framework has been implemented as a Java-based distributed service that handles certificate creation and verification of presented credentials. The access control system uses the accessor’s certificates and the tuples present in the authenticators to allow or deny access: it first finds a verifiable credential chain leading to a role that is permitted access, and then infers the access control tuples associated with that access. This framework is being extended to support transitive access methods, which would allow a person running a computation on site A to spawn new computations on site B that may in turn spawn computations (or access data) on site C. The access control system would track the security context associated with each delegate computation, preventing spoofing of identities in such transitive relationships. An ICDCS’02 paper provides details about dRBAC.
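
To make the chain-finding step concrete, the following is a minimal sketch assuming a greatly simplified credential model; the class and method names are illustrative, not the actual dRBAC API. It searches the presented delegations for a chain from the accessor's role to a permitted role, combining one example tuple (a rate limit) by taking the most restrictive value along the chain.

    import java.util.*;

    // Illustrative sketch only: a simplified model of dRBAC-style chain
    // discovery and tuple inference. Names are hypothetical, not the real API.
    public class ChainSketch {
        // A delegation says: anyone holding 'subject' may act as 'target',
        // qualified by a rate limit (one example of an access control tuple).
        record Delegation(String subject, String target, int rateLimit) {}

        // Breadth-first search for a chain from the accessor's role to a role
        // the object permits, inferring the rate-limit tuple as the most
        // restrictive value seen along the chain.
        static OptionalInt findChain(String accessorRole, String permittedRole,
                                     List<Delegation> delegations) {
            Deque<Map.Entry<String, Integer>> frontier = new ArrayDeque<>();
            Set<String> visited = new HashSet<>();
            frontier.add(Map.entry(accessorRole, Integer.MAX_VALUE));
            while (!frontier.isEmpty()) {
                var cur = frontier.poll();
                if (cur.getKey().equals(permittedRole))
                    return OptionalInt.of(cur.getValue());
                if (!visited.add(cur.getKey())) continue;
                for (Delegation d : delegations)
                    if (d.subject().equals(cur.getKey()))
                        frontier.add(Map.entry(d.target(),
                                Math.min(cur.getValue(), d.rateLimit())));
            }
            return OptionalInt.empty(); // no verifiable chain: deny access
        }

        public static void main(String[] args) {
            // PartnerB.Engineer is remapped to PartnerA.Guest, which may access
            // the service as PartnerA.Reader at 100 requests/min.
            List<Delegation> creds = List.of(
                new Delegation("PartnerB.Engineer", "PartnerA.Guest", 200),
                new Delegation("PartnerA.Guest", "PartnerA.Reader", 100));
            findChain("PartnerB.Engineer", "PartnerA.Reader", creds)
                .ifPresentOrElse(
                    limit -> System.out.println("Grant, rate limit " + limit),
                    () -> System.out.println("Deny"));
        }
    }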

2.       Information Sharing Infrastructures: As part of the work on enabling secure information management across members of a coalition, we have developed a prototype, the Secure Information Management System (SIMS). SIMS consists of a set of information servers that store encrypted data and a smaller set of authentication sites that store the keys, along with a set of certificate authorities, history managers, and access control protocol modules. The key idea in the system is the separation of access control and data storage. The main algorithms are cryptographic protocols for securely reading, writing, and updating the data. As part of the SIMS prototype we have implemented a PKI for issuing and validating certificates.
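
The separation of access control and data storage can be illustrated with a hypothetical read/write path; the classes below are an illustrative sketch, not SIMS code. The information server only ever holds ciphertext, while the authentication site releases the key only after an access check, so neither server alone can expose the data.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import java.nio.charset.StandardCharsets;
    import java.util.*;

    // Illustrative sketch of separating access control from data storage.
    // All names here are hypothetical.
    public class SimsReadSketch {
        // Authentication site: holds keys and performs the access check.
        static class AuthSite {
            private final Map<String, SecretKey> keys = new HashMap<>();
            void store(String docId, SecretKey k) { keys.put(docId, k); }
            SecretKey releaseKey(String docId, String credential) {
                if (!"valid-cert".equals(credential))      // stand-in for a
                    throw new SecurityException("denied"); // certificate check
                return keys.get(docId);
            }
        }

        // Information server: stores ciphertext, never sees keys or plaintext.
        static class InfoServer {
            private final Map<String, byte[]> blobs = new HashMap<>();
            void put(String docId, byte[] ct) { blobs.put(docId, ct); }
            byte[] get(String docId) { return blobs.get(docId); }
        }

        public static void main(String[] args) throws Exception {
            AuthSite auth = new AuthSite();
            InfoServer info = new InfoServer();

            // Write path: encrypt locally, then send the key and the
            // ciphertext to different servers.
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();
            Cipher enc = Cipher.getInstance("AES");
            enc.init(Cipher.ENCRYPT_MODE, key);
            info.put("doc1", enc.doFinal("coalition plan".getBytes(StandardCharsets.UTF_8)));
            auth.store("doc1", key);

            // Read path: obtain the key (subject to access control), then
            // fetch and decrypt the blob.
            Cipher dec = Cipher.getInstance("AES");
            dec.init(Cipher.DECRYPT_MODE, auth.releaseKey("doc1", "valid-cert"));
            System.out.println(new String(dec.doFinal(info.get("doc1")), StandardCharsets.UTF_8));
        }
    }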

3.       Service Distribution Infrastructure and Algorithms: To permit partial caching of coalition services in partly trusted environments, we have developed a security-sensitive service distribution infrastructure called Partitionable Services. A Java-based prototype permits services to be built from components and, in response to a client request, dynamically migrates some of these components downstream closer to the client, while respecting constraints on which components can be placed on which hosts. The service distribution infrastructure permits coherent caching of service components, realized as object views, on partly trusted hosts belonging to coalition members without compromising security. The infrastructure permits continuous control over cached components using a monitored service subscription abstraction: the controlling service can dynamically update the content in cached components and the policies governing their use. Parts of this service distribution infrastructure are being developed as a standalone toolkit, called Disco, that can be reused in several related contexts. Additional details about the Partitionable Services framework can be found in our HPDC’02 and HPDC’03 papers.
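
As a rough illustration of the placement constraints involved (all names and trust levels below are hypothetical, not the Partitionable Services API), the following sketch pushes each component as far downstream as its trust requirement allows:

    import java.util.*;

    // Illustrative sketch of trust-constrained placement: a component may
    // migrate downstream only to hosts whose trust level meets its requirement.
    public class PlacementSketch {
        enum Trust { UNTRUSTED, PARTNER, FULL }

        record Component(String name, Trust required) {}
        record Host(String name, Trust level) {}

        // For each component, pick the host closest to the client (hosts are
        // ordered upstream-to-downstream) that satisfies its trust constraint.
        static Map<String, String> place(List<Component> comps, List<Host> path) {
            Map<String, String> placement = new LinkedHashMap<>();
            for (Component c : comps)
                for (int i = path.size() - 1; i >= 0; i--)   // downstream first
                    if (path.get(i).level().ordinal() >= c.required().ordinal()) {
                        placement.put(c.name(), path.get(i).name());
                        break;
                    }
            return placement;
        }

        public static void main(String[] args) {
            List<Host> path = List.of(
                new Host("origin", Trust.FULL),
                new Host("partner-edge", Trust.PARTNER),
                new Host("client-lan", Trust.UNTRUSTED));
            List<Component> comps = List.of(
                new Component("content-cache", Trust.UNTRUSTED),
                new Component("policy-engine", Trust.PARTNER),
                new Component("key-store", Trust.FULL));
            place(comps, path).forEach((c, h) -> System.out.println(c + " -> " + h));
        }
    }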

We have also investigated “planning” algorithms, embodied in a system called Sekitei, which, starting from the interface requirements of the client, choose both the components to migrate and their placement so as to optimize a metric such as client throughput. Sekitei leverages multiple AI planning techniques and is notable for respecting very general constraints (for instance, that an underlying node must satisfy certain trust properties before a component can be deployed there) and for capturing sophisticated resource effects (e.g., that a component’s resource utilization depends on the requests it satisfies). An IPDPS’03 paper provides details about Sekitei.
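
The flavor of interface-driven planning can be conveyed by a toy sketch that, unlike Sekitei, assumes acyclic requirements and ignores resource effects and optimization: starting from the interface the client needs, it recursively selects components whose own requirements must be met further upstream.

    import java.util.*;

    // Toy backward-planning sketch, far simpler than Sekitei: each component
    // provides one interface and may require another, which must in turn be
    // provided by a component placed further upstream. Names are hypothetical.
    public class PlannerSketch {
        record Component(String name, String provides, String requires) {} // requires may be null

        // Recursively choose a component that provides 'needed', then plan for
        // whatever that component itself requires.
        static List<String> plan(String needed, List<Component> library) {
            for (Component c : library) {
                if (!c.provides().equals(needed)) continue;
                if (c.requires() == null) return new ArrayList<>(List.of(c.name()));
                List<String> rest = plan(c.requires(), library);
                if (rest != null) { rest.add(c.name()); return rest; }
            }
            return null; // no deployment satisfies the requirement
        }

        public static void main(String[] args) {
            List<Component> lib = List.of(
                new Component("db", "RawData", null),
                new Component("filter", "Filtered", "RawData"),
                new Component("transcoder", "Video", "Filtered"));
            System.out.println(plan("Video", lib)); // [db, filter, transcoder]
        }
    }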

4.       Secure Execution Environments: Caching of delegate and object components on partly trusted hosts requires the availability of execution environments that isolate the components and the host from each other. Protecting the host from the component is the classical sandboxing problem, and several researchers have proposed techniques that ensure compliance either statically (e.g., Java bytecode verification and Proof Carrying Code) or dynamically, by actively intercepting an application's interactions with the underlying host hardware and operating system (e.g., Software Fault Isolation and systems such as Janus). We have extended the latter class of techniques to enforce quantitative restrictions on resource usage (e.g., 20% of the CPU) in addition to the traditional qualitative restrictions. Such capacity sandboxes have the advantage that they can prevent an otherwise compliant component from using excessive resources on a host. A further novel component of the approach is its flexible user-level nature. A USENIX-WSS'00 paper reports on the sandbox implementation and performance on Windows NT platforms.
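
The accounting at the heart of such a capacity sandbox can be sketched as follows. The real system enforces the limit transparently by intercepting the application's interactions with the OS; this cooperative Java wrapper (hypothetical names throughout) only demonstrates the rate-limiting arithmetic, capping a task at roughly 20% of one CPU:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadMXBean;

    // Cooperative sketch of capacity-sandbox accounting: the task is held to
    // roughly 'share' of one CPU by comparing its consumed CPU time against a
    // wall-clock allowance and sleeping off any excess.
    public class CapacitySandboxSketch {
        public static void main(String[] args) throws InterruptedException {
            final double share = 0.20; // quantitative limit: 20% of one CPU
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            if (!mx.isThreadCpuTimeSupported()) return; // platform-dependent
            long tid = Thread.currentThread().getId();
            long wallStart = System.nanoTime();

            for (int i = 0; i < 50; i++) {
                spin(20_000_000L); // stand-in for untrusted work: ~20 ms of CPU
                long cpuUsed = mx.getThreadCpuTime(tid);
                long allowance = (long) ((System.nanoTime() - wallStart) * share);
                if (cpuUsed > allowance) // over budget: sleep off the excess
                    Thread.sleep((long) ((cpuUsed - allowance) / share) / 1_000_000L);
            }
            System.out.printf("observed CPU share: %.1f%%%n",
                    100.0 * mx.getThreadCpuTime(tid) / (System.nanoTime() - wallStart));
        }

        private static void spin(long nanos) { // busy loop standing in for work
            long end = System.nanoTime() + nanos;
            while (System.nanoTime() < end) { }
        }
    }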

Protecting the component from a malicious host is the flip side of the secure execution environments problem, but has received relatively little attention. The issues here include guaranteeing that a component's algorithms and internal state cannot be reverse engineered and/or that its interactions with the underlying hardware are not maliciously intercepted and reinterpreted. We are investigating, and have had some success with, mechanisms that allow security-unaware components to execute in partly trusted environments by relying on assistance from tamper-resistant hardware, such as collocated cryptographic coprocessors and smart cards. As part of this work, we have developed a scheme to partition certain application structures between an untrusted host and a tamper-resistant smart card, such that the smart card ensures integrity of execution on the untrusted host. The scheme for executing trusted code on a trusted coprocessor relies on cryptographic keys stored both in the trusted coprocessor and at the host from which the code is to be downloaded. The scheme permits both secure downloading of data and code into a trusted processor and the secure execution and retrieval of results from this code.
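
A minimal sketch of the partitioning idea follows, under the assumption of a toy protocol (none of the names below reflect the actual scheme): the untrusted host performs the bulk of the computation, while the card holds a secret key, re-executes a small integrity-critical step, and authenticates the combined result so that a verifier can detect a host that bypassed or tampered with the card's step.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.HexFormat;

    // Illustrative sketch of host/smart-card partitioning; the protocol and
    // names are hypothetical simplifications.
    public class SmartCardSketch {
        // The "card": holds the secret, re-derives the critical value, and
        // authenticates the combined result. The host never sees the key.
        static class Card {
            private final byte[] secret = "card-private-key".getBytes(StandardCharsets.UTF_8);
            byte[] attest(long hostResult, long criticalInput) throws Exception {
                long critical = criticalInput * 31 + 7;     // card-side step
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secret, "HmacSHA256"));
                return mac.doFinal((hostResult + ":" + critical)
                        .getBytes(StandardCharsets.UTF_8));
            }
        }

        public static void main(String[] args) throws Exception {
            long input = 42;
            long hostResult = input * input;   // bulk work on the untrusted host
            byte[] tag = new Card().attest(hostResult, input);
            // A verifier sharing the card's key can recompute the tag; a host
            // that skipped or altered the card's step cannot produce it.
            System.out.println("tag = " + HexFormat.of().formatHex(tag));
        }
    }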