Persistent Protection of Data

by Jay Wack, President
Tecsec, Inc. USA

IEEE Internet Initiative eNewsletter, July 2017

The use of encryption has expanded into broader use cases in recent years. With the advent of the internet, in addition to concerns about interception (otherwise referred to as Data in Transmission), a new security paradigm has surfaced that is referred to as Data in Storage (or Data at Rest). Beyond protecting the information itself, encryption has been coupled to access control designs, so that encryption enforces who has access to specific information, locations, and functions, and often even who can access the transport layer.

There are other layered variations for encrypting data, but network and content encryption offer the best examples, and the two methodologies differ in a fundamental way. A network solution with encryption yields a secure channel that information passes through and can be viewed as an encrypted pipe; a content solution binds encryption to the information itself, so that each piece of content is encrypted separately. Content encryption can therefore be thought of as persistent protection: the encryption is bound to the content, or to a message, throughout the life cycle of that message.
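
The pipe-versus-binding distinction above can be sketched in a few lines. This is an illustrative toy: the XOR keystream stands in for a real cipher and is not cryptographically secure, and all key values are invented for the example.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256 counter keystream -- a stand-in for a real
    cipher, for illustration only (NOT secure cryptography)."""
    stream = bytearray()
    block = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        block += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Channel model: one session key protects the pipe; once a message
# exits the pipe it is plaintext again.
session_key = b"negotiated-session-key"
on_the_wire = toy_encrypt(session_key, b"patient record 17")
delivered = toy_encrypt(session_key, on_the_wire)  # decrypted at the endpoint

# Content model: each object is sealed under its own key, so the
# protection travels with the object into storage.
object_key = b"key-bound-to-this-object"
sealed = toy_encrypt(object_key, b"patient record 17")
```

In the channel model the protection ends at the endpoint; in the content model `sealed` remains protected wherever it is copied, until a holder of `object_key` unseals it.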

In an overall context, security for information can be summed up as authorizing someone to have access to information and, additionally, enforcing that access through attributes, rules, or roles.

Once authentication is confirmed, access to the information becomes the important question. Identity is a global event, and authentication of that identity is directly related to authorization, which should be thought of as a local event. A person's identity and the authentication process remain with that person regardless of which content or application is being accessed. Access to the information should be thought of as under the control of the asset owner, granted in the context of roles or rules and the use of attributes.

Typically, transport protection has focused on securing the channel through a mechanism such as a Virtual Private Network (VPN). A VPN can be leveraged as the key establishment architecture for a Constructive Key Management (CKM) schema. CKM is a process, codified by multiple standards organizations, that provides the best features of symmetric, asymmetric, and ephemeral key constructs. In addition to securing the channel, protection or signing of information (content) may be extended to manipulating the content directly. Depending on the architectural requirements, a symmetric or an asymmetric CKM schema may be used. The choice among the symmetric, asymmetric, and ephemeral CKM schemas also drives the choice of algorithm for the credential key being applied and the accommodation to the protocol involved (streaming data versus fixed data, for example).

The message may be protected through the network, as in the channel example, in which there is no direct binding of encryption to the message, or it may be protected at the content level, in which case encryption can be bound directly to the content itself.

For content protection, the CKM schema applies to both on-line and off-line communications environments. The result is persistent protection through encryption: the encryption is bound to the message (content) throughout its life cycle. Message content may be in the form of data or information. The CKM encryption schema for content protection may be viewed as a shift from a temporal keying design, used to establish the communications channel, to an ephemeral keying design that also protects the content in transit and in storage.
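
One way to picture the constructive, ephemeral aspect is a per-object working key assembled from key splits: a domain-wide split, a split issued per attribute, and a fresh random component per object. The combiner below is an illustrative HMAC chain, not the construction specified in the standards, and the split names are invented.

```python
import hashlib
import hmac
import os

def combine_splits(*splits: bytes) -> bytes:
    """Fold key splits into one working key with an HMAC chain.
    (Illustrative combiner only; ANSI X9.69 defines the real one.)"""
    key = splits[0]
    for split in splits[1:]:
        key = hmac.new(key, split, hashlib.sha256).digest()
    return key

domain_split = b"organization-wide split"
attr_split = b"split issued to holders of a given attribute"
ephemeral_split = os.urandom(16)  # fresh per object: the ephemeral component

# The working key never needs to be stored or transmitted: anyone holding
# the same splits can reconstruct it, and no one else can.
working_key = combine_splits(domain_split, attr_split, ephemeral_split)
```

Because the ephemeral split changes per object, every object gets a distinct working key even when the credential splits are the same, which is what lets protection persist in storage without a shared session.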

CKM relies upon the assignment of subject attributes to subjects and object attributes to objects and the development of policy that describes the access rules for each. Each object within the system must be tagged or assigned specific object attributes that describe the object and can include usage rules.

Every object within the system must have at least one policy that defines the access rules for the object. This policy is normally derived from documented or procedural rules that describe the business processes and allowable actions within the organization. For example, in a hospital setting a rule may state that only approved medical personnel shall be able to access a patient’s medical record. If a subject has a Personnel Attribute with a value of Non-Medical Support Staff and they are trying to perform the operation Read upon a document with a Record Attribute of Patient Medical Record, access will be denied and the operation will be disallowed.
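
The hospital rule above can be written directly as a default-deny policy function over subject and object attributes. The attribute names mirror the example in the text; everything else here is an invented sketch, not a CKM API.

```python
def decide(subject_attrs: dict, operation: str, object_attrs: dict) -> bool:
    """Render an access decision from attributes. Default deny: access is
    granted only when an explicit rule permits the operation."""
    if object_attrs.get("Record") == "Patient Medical Record":
        return (operation == "Read"
                and subject_attrs.get("Personnel") == "Approved Medical Personnel")
    return False

doctor = {"Personnel": "Approved Medical Personnel"}
support_staff = {"Personnel": "Non-Medical Support Staff"}
medical_record = {"Record": "Patient Medical Record"}
```

Evaluating `decide(support_staff, "Read", medical_record)` yields the denial described above, while the same request from `doctor` is allowed.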

Once object attributes, subject attributes, and policies are established, objects can be protected using CKM. Cryptographically enforced access control mechanisms guard access to the objects by limiting access to allowable operations by allowable subjects, including machines. This approach not only protects the data and controls who or what has access to it; it can also protect the processes running on the machines, providing yet another layer of security under the same security architecture.
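
What distinguishes cryptographic enforcement from a mere policy check is that a subject lacking the right attribute credential simply cannot reconstruct the object's key. The sketch below illustrates that idea with an invented keystore: the support-staff keystore was never issued the medical-records split, so the key derivation fails by absence rather than by a rule lookup. Names and key values are hypothetical.

```python
import hashlib
import hmac

def working_key(domain: bytes, attr_split: bytes) -> bytes:
    """Toy combiner: the object's key exists only for holders of the split."""
    return hmac.new(domain, attr_split, hashlib.sha256).digest()

# Splits are distributed by credential, not by ACL entry: a subject's
# keystore simply lacks splits for attributes it was never issued.
keystore = {
    "doctor": {"Patient Medical Record": b"medical-records-split"},
    "support": {},  # support staff never receives the medical split
}

domain = b"hospital-domain-key"

def unlock(subject: str, attribute: str) -> bytes:
    # Raises KeyError when the subject holds no split for the attribute:
    # enforcement by key absence rather than by a checked rule.
    return working_key(domain, keystore[subject][attribute])
```

A denied subject here holds ciphertext it has no mathematical means to open, which is the persistent, location-independent enforcement the text describes.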

The CKM run-time environment assembles the policy, subject attributes, and object attributes, then renders and enforces a decision based on the logic provided in the policy. CKM run-time is able to manage the workflow required to make and enforce the decision, including determining what policy to retrieve, which attributes to retrieve in what order, and where to retrieve attributes. The run-time environment then performs the computation necessary to render a decision.

The policies that can be implemented in a CKM model are limited only to the degree imposed by the language. This flexibility enables the greatest breadth of subjects to access the greatest breadth of objects without having to specify individual relationships between each subject and each object.

We are all in the midst of an information technology transformation: from the architectural disruptions of the cloud, to the underlying process changes of evolving storage practices, to fundamental data center design evolution. Basic security and compliance requirements now mean data centers need to be more compartmentalized.

Organizations tend to be more distributed, with multiple data centers and applications and services that span them. Broadly, there is greater demand for central services supporting local implementation, allowing business units more autonomy while still being able to manage costs and support compliance and security requirements. And that’s before we even get into cloud, and the constantly increasing requirement to encrypt nearly everything.

Outsourcing isn’t anything new, nor is co-locating in a data center owned and/or managed by someone else (and shared with others), but we see increasing moves towards both, and not always as part of a move towards cloud. This results in hosting data in multi-tenant or less trusted environments. Encrypting the data while still maintaining control of the keys yourself is one of the best ways to keep your data isolated in a shared environment. Containers continue the trend of distributing processing and storage on an even more ephemeral basis, where containers might appear in microseconds and disappear in minutes in response to application and infrastructure demands.

Encryption at the object level, supported by CKM, a standards-based approach codified by ANSI [2], ISO [3], and NIST [4], is one of the security linchpins that can span our IT deployments. It can play a central role in keeping us compliant, maintaining proper separation of duties, enforcing security, and providing the artifacts that keep legal and audit departments satisfied. This is accomplished and enforced on the ubiquitous connectivity platform of the internet, in all of its variations, while providing quantum-safe access control [1] and ensuring confidentiality. The process provides control over who can see data and what they are allowed to do with it, along with an audit trail to validate what has been done with the data and by whom.

The CKM dynamic key design moves encryption and decryption to the edge of the enterprise, removing the bottleneck of server reach-back and other processing choke points, to provide confidentiality and integrity to data and information persistently, indifferent to network configuration and storage decisions.

Part 2 of this article will be published in the next issue:

The Internet Architecture Board, the Internet Engineering Steering Group, and others have recognized that the growth of the internet depends on users having confidence that the network will protect their private information. RFC 1984 documented this need. Since that time, we have seen evidence that the capabilities and activities of attackers are greater and more pervasive than previously known.

Collectively we all now believe it is important for protocol designers, developers, and operators to make encryption the norm for internet traffic and content. This can be done with existing standards, and doing so will make the internet safer for us all.

References:

[1] NISTIR 8105, Report on Post-Quantum Cryptography

[2] ANSI X9.69, X9.73, X9.125

[3] ISO 11568

[4] NIST FIPS 140-2


Jay Wack

Jay has over 45 years in the electronic security industry and has been awarded over a dozen US patents in the areas of cryptography and security product design. A strong supporter of standards, he is an active participant in ANSI, ISO, IEEE, and CIGRE working groups and a subject-matter expert in cryptography, key management, and digital currency.

Editor:

Dr. Waleed Ejaz

Waleed Ejaz (S’12, M’14, SM’16) is a Senior Research Associate at the Department of Electrical and Computer Engineering, Ryerson University, Toronto, Canada. Prior to this, he was a Post-doctoral Fellow at Queen's University, Kingston, Canada. He received his Ph.D. degree in Information and Communication Engineering from Sejong University, Republic of Korea, in 2014. He earned his M.Sc. and B.Sc. degrees in Computer Engineering from the National University of Sciences & Technology, Islamabad, Pakistan, and the University of Engineering & Technology, Taxila, Pakistan, respectively. He has worked as a faculty member at top engineering universities in Pakistan and Saudi Arabia. His current research interests include the Internet of Things (IoT), energy harvesting, 5G cellular networks, and mobile cloud computing. He is currently serving as an Associate Editor of the Canadian Journal of Electrical and Computer Engineering and IEEE Access. In addition, he is handling special issues in IET Communications, IEEE Access, and the Journal of Internet Technology. He also completed certificate courses on Teaching and Learning in Higher Education from the Chang School at Ryerson University.