The foundations of cybersecurity will never change, even in web3

21/12/2022

Information security is a rapidly evolving field and, unlike more established sectors, it faces many of the most complex challenges found in politics, economics, philosophy, and law [1], despite its closer association with technology. Reflecting this, there are several definitions of information security, particularly around its intrinsic characteristics. This paper will first discuss these properties, followed by two related concepts: declarative objectives and operational tools. Lastly, the importance of sustainability and resilience in security will be detailed, with examples given throughout to support the points above.

Regarding the properties of information security, the international standard [2] defines it as the preservation of the confidentiality, integrity and availability of information; these three characteristics are generally referred to as the CIA triad.
More specifically, confidentiality is concerned with ensuring that only the right entity, whether a person, a collective, or a system, can access the information in question. For instance, a confidentiality breach happens when an attacker unrightfully obtains usernames and passwords from a database and accesses the information within, which may be the medical records of a particular hospital patient. Processes such as two-factor authentication or non-replicable biometric parameters, like retina scans, help mitigate this risk [3]. It is also important to highlight a common category error: the claim that authentication should be treated as an extension of the CIA triad, when it is rather a process that strengthens and supports confidentiality [4].
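
As a rough sketch of such operational measures (using only Python's standard library; the function names and the iteration count are illustrative choices, not prescriptions), the example below stores only a salted PBKDF2 hash of each password, so a stolen credentials table does not immediately expose the secrets behind it.

```python
import hashlib
import hmac
import os


def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Derive a salted hash so the database never stores the plaintext password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest


def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    """Compare in constant time so timing does not leak how many bytes matched."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)


# A stolen table of (salt, digest) pairs does not directly reveal passwords,
# and a second factor (e.g. a one-time code or a biometric check) would still
# be required before any patient record is released.
salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```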

In parallel, integrity is the property of the CIA triad that seeks to prevent unauthorised tampering with information, for example by abusing or dishonestly exercising write access to it. To further strengthen the integrity of information, non-repudiation objectives must be put in place. Digital signatures serve this purpose: they mathematically guarantee that, for any transaction, it was indeed the right agent in control of a private key (the sender) who sent the right data (information integrity) to the right recipient. Should any of these attributes change, the hash underlying the digital signature would differ completely from the original one [5]. Furthermore, availability is the property that assures continued and reliable access to the information at just the right time. An attack on the servers of a data centre, whether physical or logical (such as a denial-of-service cyber-attack), may result in damage, permanent or otherwise, that leaves the information unreachable.
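
To make the digital-signature point concrete, here is a minimal sketch using the third-party `cryptography` package and Ed25519 keys (Ethereum itself uses ECDSA over secp256k1, so this illustrates the principle rather than Ethereum's actual scheme): altering any field of the signed "transaction" causes verification to fail.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The sender's private key: only its holder can produce valid signatures.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A toy transaction binding sender (from), recipient (to), and the data sent.
message = b"from=alice;to=bob;amount=10"
signature = private_key.sign(message)

# Anyone holding the public key can verify origin and integrity.
public_key.verify(signature, message)  # passes silently

try:
    public_key.verify(signature, b"from=alice;to=eve;amount=10")
except InvalidSignature:
    print("Tampered transaction rejected")  # any change invalidates the signature
```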

Overall, combining the definitions of confidentiality, integrity, and availability, security can be defined as the process of ensuring that only the right agents (confidentiality) have just the right access to the right information (integrity) at just the right time (availability) [6].

Moreover, the CIA triad can equally support the relationship between security and privacy, although trade-offs arise. For instance, in the retina-scan example above, used to ensure the confidentiality of information, privacy concerns may arise because such a scan can disclose personal, confidential information, such as signs of diabetes.

One definition of privacy holds that it is the notion a user has of the perceived strength of the preventive measures set in place to avoid unauthorised access to their data, making them feel more in control of their own information [7]. The user’s perceived control over their data might sometimes even exceed the actual efficacy of the security measures put in place [8].

Another definition frames privacy as the users’ distrust regarding the loss of personal privacy in e-commerce markets or social media, placing trust as the most important asset a business is built on, and hence establishing an intrinsic dependence between privacy and trust [9].
All in all, it can be argued that the definition of privacy has shifted over time: from the personal right to be left alone [10], to the right to be forgotten (the right to erasure) by controlling the information available [11], to the notion of privacy as a social norm that enables different freedoms [12].

However, the necessary evil of sacrificing privacy for security, or vice versa, might be mitigated by replacing the trusted authority with a trust-less system, as discussed further below.

To better explain how the CIA triad can be applied to other, more general systems, it is necessary to clarify what a system is [13]: a product or component; that plus an operating system; that plus IT staff; any or all of the previous plus internal users and management; and finally any or all of the previous plus external users.

The CIA triad represents the cornerstone of access controls, which form the main security architecture of larger, more general systems. Access controls are primarily divided into three categories: administrative (company policies, regulations, etc.), technical (firewalls, routers, encryption, etc.) and physical (gates, security guards, locks, etc.). Equally, they are divided into three main models: Discretionary Access Controls (DACs), Nondiscretionary Access Controls (NACs), and Mandatory Access Controls (MACs).

Under the first, DACs, users are granted full privileges (read-write-execute) over the files they have been given access to, and they are allowed to share them with other users of the system. NACs, by contrast, grant privileges based on a user’s role rather than on the individual. Finally, MACs focus on enforcing confidentiality above all else: files cannot be shared with other users who lack the proper clearance. This model is often employed in the military and other governmental institutions [14], the Bell-LaPadula model being a well-known instance [15].
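
A nondiscretionary (role-based) check can be sketched in a few lines; the roles, users, and permissions below are purely illustrative and not drawn from any particular product.

```python
# Hypothetical role-to-permission mapping for a nondiscretionary (role-based)
# model: privileges follow the role, not the individual user.
ROLE_PERMISSIONS = {
    "clerk": {"read"},
    "auditor": {"read"},
    "administrator": {"read", "write", "execute"},
}

USER_ROLES = {"alice": "administrator", "bob": "clerk"}


def is_allowed(user: str, action: str) -> bool:
    """Grant an action only if the user's role carries that permission."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())


assert is_allowed("alice", "write")      # administrators may write
assert not is_allowed("bob", "write")    # clerks may only read
```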

Declarative objectives must be differentiated from operational tools, in order to separate the design of these secure, CIA-oriented systems from their implementation, since different systems have different priorities.
Regarding the former, stating that certain documents must remain confidential is a declarative objective that supports confidentiality. Likewise, the rule that passengers may not bring weapons onto an aircraft represents a declarative objective that supports the integrity (safety) of those passengers. Lastly, if all data on a particular server must always be reachable, a declarative objective should be drafted to that effect, placing the focus on availability.

Operational tools, on the other hand, describe how the declarative objectives are to be achieved. For example, to achieve the confidentiality of all documents, passwords and encryption must be implemented. Equally, to ensure that the passenger-integrity objectives on the aircraft are met, security searches must be put in place. Likewise, to achieve availability of all data on the servers, RAID backups might be a suitable operational tool [14].
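
One way to picture the split is as a simple mapping from declarative objectives (what must hold) to operational tools (how it is achieved); the sketch below merely restates the examples above in that form and is illustrative only.

```python
# Declarative objectives (the "what") mapped to operational tools (the "how").
POLICY = {
    "documents remain confidential": ["passwords", "encryption"],
    "no weapons aboard the aircraft": ["security searches at boarding"],
    "server data always reachable": ["RAID arrays", "redundant backups"],
}

# Auditing the design is then a matter of checking that every stated
# objective has at least one operational tool behind it.
for objective, tools in POLICY.items():
    assert tools, f"objective '{objective}' has no operational tool"
    print(f"{objective} -> {', '.join(tools)}")
```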

However, even two organisations within the same field do not generally share the same CIA priorities. An example is the difference between the governance required of a UK bank and that of a DeFi DAO (a decentralised autonomous organisation operating in decentralised finance) on a blockchain.
In terms of general governance, a UK-based bank is run by its CEO, who in turn answers to the board of directors, chaired by an appointed chairman. Should the bank fail to comply with the applicable UK regulations and policies, the UK government can intervene. In any case, this code of conduct and governance is kept strictly private to a few, with the objective of maintaining the secrecy of financial information, among other things.

In contrast, DeFi applications are decentralised applications that live on the Ethereum blockchain, most commonly forming a DAO: a member-owned community without centralised leadership, whose governance is set out in an immutable, public smart contract.
In terms of confidentiality priorities, most banks depend on their employees, which already makes them vulnerable to unauthorised disclosure of personal information, with the centralised bank additionally acting as a single point of failure. To mitigate these risks, operational tools such as passwords, encryption, and two-factor authentication must be put in place.

As opposed to centralised banks, DeFi DAOs live on the blockchain itself, a completely decentralised public ledger. More specifically, although the ledger is public, a public address (from which the digital wallet address is formed) is not tied to IDs, credentials, or any other form of identification. Even though it could be argued that such public addresses can be tied back to a single individual, there are encryption techniques, such as homomorphic encryption, together with zero-knowledge-proof implementations, that allow for true anonymity [16].
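
To illustrate this pseudonymity, an Ethereum-style address is derived purely from key material, with no identity attached: it is the last 20 bytes of the Keccak-256 hash of the public key. The sketch below assumes the third-party `ecdsa` and `pycryptodome` packages and is an illustration, not production wallet code.

```python
from Crypto.Hash import keccak            # pycryptodome
from ecdsa import SECP256k1, SigningKey   # ecdsa

# A fresh key pair: nothing here is tied to a name, ID, or credential.
signing_key = SigningKey.generate(curve=SECP256k1)
public_key_bytes = signing_key.get_verifying_key().to_string()  # 64-byte x||y

# Ethereum-style address: last 20 bytes of Keccak-256 of the public key.
digest = keccak.new(digest_bits=256)
digest.update(public_key_bytes)
address = "0x" + digest.digest()[-20:].hex()

print(address)  # a pseudonymous identifier, derived purely from the key material
```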

In terms of the integrity of transactions, centralised banks must record and store them in a central ledger, which can be manipulated by their employees. To mitigate this risk, operational tools must be implemented, such as software that cross-checks the consistency of every transaction.
For DeFi DAOs, on the other hand, every single transaction is validated on the Ethereum network, which is made up of thousands of computers around the globe; the digital signature attests that the right agent did indeed make the transaction, that the data has not been tampered with, and that it is intended for the right recipient. Replicating the computation needed to falsify even one transaction is practically impossible for any centralised computing power.
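
A deliberately over-simplified hash chain illustrates why rewriting even one transaction is so costly: every block commits to the previous block's hash, so a single change forces all subsequent blocks and their proof of work to be redone faster than the rest of the network can extend the honest chain. This is a toy sketch of the general principle, not of Ethereum's actual consensus mechanism.

```python
import hashlib


def block_hash(prev_hash: str, transactions: str, nonce: int) -> str:
    return hashlib.sha256(f"{prev_hash}|{transactions}|{nonce}".encode()).hexdigest()


def mine(prev_hash: str, transactions: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose hash starts with `difficulty` zeros (toy proof of work)."""
    nonce = 0
    while True:
        digest = block_hash(prev_hash, transactions, nonce)
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1


# Build a tiny chain of two blocks.
nonce1, hash1 = mine("genesis", "alice pays bob 10")
nonce2, hash2 = mine(hash1, "bob pays carol 5")

# Tampering with the first block invalidates its hash, the link from block 2,
# and block 2's proof of work -- all of which would have to be redone while
# racing against the honest majority of the network.
tampered = block_hash("genesis", "alice pays eve 10", nonce1)
assert tampered != hash1
```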

Finally, regarding the availability of information such as account balances, every request must go through the bank, which unavoidably acts as a central point of failure. To reduce this risk, centralised banks may implement operational tools such as continuous monitoring and redundant access.
For DeFi DAOs, by contrast, every transaction contributes to the overall state of the EVM (Ethereum Virtual Machine), which is stored on every full node in the Ethereum network. In addition to this decentralised distribution, Ethereum also inherits the blockchain’s Byzantine fault tolerance, which sits at around 46% [17].

The main trade-off for DeFi DAOs, however, is the computing power required to process every single user transaction, resulting in a lower number of transactions per second when compared to traditional banking. Traditional banking’s main trade-off, in turn, is the dependency, forced trust, and lack of user control over their own data.

As mentioned above, the CIA triad, whilst supporting privacy and security with ideally minimal trade-offs, must equally ensure the long-term, healthy, and non-inflationary longevity of the system; in other words, the system needs to be sustainable. Equally, given that problems eventually arise, such as accidents or attacks that might endanger the survival of the system, declarative objectives backed by operational tools must be correctly put in place to recover from such adversities, a property referred to as resilience. In the traditional-banking example, to mitigate the risk of the central digital system going down and thereby blocking all transactions and functionality, the bank could set up a private Ethereum-based blockchain as a backup system kept up to date with the central mainframe’s private ledger; once the central mainframe is back online, all the transactions captured during the outage are updated on the private ledger.
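
A toy sketch of that recovery pattern (the class and method names are invented for illustration): transactions arriving while the primary ledger is offline are captured in a backup queue and replayed once the primary is back.

```python
from collections import deque


class ResilientLedger:
    """Queue transactions while the primary ledger is offline, then replay them
    once it recovers -- a sketch of the backup idea described above."""

    def __init__(self) -> None:
        self.primary = []          # stands in for the central mainframe's ledger
        self.backlog = deque()     # stands in for the blockchain-style backup
        self.primary_online = True

    def record(self, transaction: str) -> None:
        if self.primary_online:
            self.primary.append(transaction)
        else:
            self.backlog.append(transaction)   # keep serving users during the outage

    def recover(self) -> None:
        self.primary_online = True
        while self.backlog:                    # replay the backlog onto the primary ledger
            self.primary.append(self.backlog.popleft())


ledger = ResilientLedger()
ledger.record("tx-1")
ledger.primary_online = False
ledger.record("tx-2")                          # captured by the backup, not lost
ledger.recover()
assert ledger.primary == ["tx-1", "tx-2"]
```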

To conclude, an ideal equilibrium between the trade-offs of privacy and security, from a socio-political and economic standpoint, would be to develop a set of blockchain-based systems (financial, governmental, etc.) that protect their users’ anonymity and privacy through homomorphic encryption and zero-knowledge models, whilst still allowing governments to submit certain encrypted data so that the system can flag bad actors and release further information as needed.

 

References

[1] R. Anderson and T. Moore, “The Economics of Information Security.”
[2] “ISO/IEC 27002:2005 — Information technology — Security techniques — Code of practice for information security management.” https://www.iso.org/standard/50297.html.
[3] E. Conrad, “Domain 2: Access Control,” Eleventh Hour CISSP, pp. 19–37, Jan. 2011, doi: 10.1016/B978-1-59749-566-0.00002-3.
[4] M. E. Whitman and H. J. Mattord, Principles of Information Security, p. 728.
[5] “Ethereum Whitepaper | ethereum.org.” https://ethereum.org/en/whitepaper/ (accessed Oct. 02, 2022).
[6] C. Ioannidis, D. Pym, and J. Williams, “Information security trade-offs and optimal patching policies,” Eur. J. Oper. Res., vol. 216, no. 2, pp. 434–444, Jan. 2012, doi: 10.1016/j.ejor.2011.05.050.
[7] T. Dinev, H. Xu, J. H. Smith, and P. Hart, “Information privacy and correlates: An empirical attempt to bridge and distinguish privacy-related concepts,” Eur. J. Inf. Syst., vol. 22, no. 3, pp. 295–316, 2013, doi: 10.1057/EJIS.2012.23.
[8] B. Schneier, “The psychology of security,” Lect. Notes Comput. Sci., vol. 5023 LNCS, pp. 50–79, 2008, doi: 10.1007/978-3-540-68164-9_5.
[9] H. Wang, M. K. O. Lee, and C. Wang, “Consumer Privacy Concerns about Internet Marketing,” Commun. ACM, vol. 41, no. 3, pp. 63–70, 1998, doi: 10.1145/272287.272299.
[10] S. Warren and L. Brandeis, “The Right to Privacy.” https://groups.csail.mit.edu/mac/classes/6.805/articles/privacy/Privacy_brand_warr2.html (accessed Oct. 24, 2022).
[11] “Right to erasure | ICO.” https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/individual-rights/right-to-erasure/.
[12] F. Schoeman, Privacy: Philosophical Dimensions, vol. 4, no. 5, 1890.
[13] R. Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems. 2020.
[14] M. Nyanchama and S. Osborn, “Modeling Mandatory Access Control in Role-Based Security Systems,” pp. 129–144, 1996, doi: 10.1007/978-0-387-34932-9_9.
[15] T. Ting, S. Demurjian, and M. Hu, “Requirements, capabilities, and functionalities of user-role based security for an object-oriented design model,” Results of the IFIP WG 11.3 Workshop on Database Security, 1991. [Online]. Available: https://dl.acm.org/doi/abs/10.5555/646112.679622.
[16] S. Steffen, B. Bichsel, R. Baumgartner, and M. Vechev, “ZeeStar: Private Smart Contracts by Homomorphic Encryption and Zero-knowledge Proofs.” [Online]. Available: https://files.sri.inf.ethz.ch/website/papers/sp22-zeestar.pdf.
[17] H. Samy, A. Tammam, A. Fahmy, and B. Hasan, “Enhancing the performance of the blockchain consensus algorithm using multithreading technology,” Ain Shams Eng. J., vol. 12, no. 3, pp. 2709–2716, Sep. 2021, doi: 10.1016/J.ASEJ.2021.01.019.
