Achieving superior blockchain security with SGX and ZK proofs


Including fatal SGX vulnerabilities and ways to solve them with technologies like zero-knowledge proofs or fully homomorphic encryption.


The realm of blockchain technology and cryptographic security continually evolves, with innovations such as ZK-Rollups, circuits, and SNARKs at the forefront of this transformation. However, the complexity and relative immaturity of these technologies pose risks, including potential bugs and security vulnerabilities. A strategic approach to mitigating these risks involves the adoption of a multi-prover system. This system leverages a variety of proof types (like validity proofs and fraud proofs), different proving systems (e.g., SNARK and STARK), and distinct implementations crafted by diverse teams. Such a multi-layered approach significantly enhances the security and robustness of blockchain networks.

This article unfolds in five parts. First, we explore the essence of the multi-prover strategy and its components beyond the proving system itself. Second, we explain the concept and mechanics of SGX (Software Guard Extensions) as a form of validity proof and examine its role as a reliable addition to cryptographic and economic frameworks. Third, we show how a multi-prover system functions within nodes and smart contracts, focusing on its application to rollups. Fourth, we dig into some common and critical vulnerabilities of SGX. Finally, we compare how ZK proofs and FHE can complement this technology so it can be applied widely in the blockchain space.


Challenges in Current Technologies

Blockchain technologies, especially those involving complex constructs like ZK-EVM circuits, are inherently intricate, comprising thousands of lines of code. This complexity inevitably lengthens the path towards a bug-free environment. The risks are manifold, spanning the proving systems, circuit compilers, and the ZK-EVM code itself.

Moreover, the cryptographic primitives underpinning ZK-Rollups, predominantly SNARKs, are relatively new. Their novelty brings an inherent uncertainty in terms of undiscovered vulnerabilities within the proving systems themselves.

The Multi-Prover Solution

In response to these challenges, the multi-prover approach offers a resilient safety net. This strategy involves generating several types of proofs for a single block. Consequently, even if one proof type is compromised, the presence of others obstructs the exploitation of the same vulnerability. In scenarios where a particular proof type fails to validate a block, the multi-prover system’s design could lead to a temporary halt in the chain’s operations, contingent on the acceptance criteria set for validating a block. This additional layer of requiring diverse proof types significantly elevates the overall security.

Disclaimer: Henceforth, our reference to “multi-prover” encapsulates the diversity in proof types.

A Conceptual Analogy

To better grasp the concept of a multi-prover system, we can draw a parallel with the client diversity in Ethereum’s network. This diversity allows Ethereum to withstand and recover from partial network failures. As Vitalik Buterin articulates in his discourse, Ethereum’s approach to using multiple clients is a form of decentralization that brings both technical and social benefits. These benefits range from architectural diversity to the prevention of power centralization within the network’s operational team.

Similarly, the multi-prover philosophy in Rollups champions both technical and social advantages. Technically, it encompasses varied proving systems, architectural designs, and implementations. Socially, it promotes a distribution of power, ensuring that no single entity holds disproportionate influence within the blockchain infrastructure.


The architecture of a multi-prover system is inherently diverse, incorporating various elements that collectively fortify the blockchain against a myriad of vulnerabilities.

Variety in Proof Types

Multi-prover systems don’t just hinge on a single type of proof. They amalgamate different kinds, such as:

    • Fraud Proofs: Commonly utilized in optimistic rollups, these proofs operate on fundamentally different principles compared to validity proofs, greatly reducing the likelihood of correlated vulnerabilities.
    • Validity Proofs: Typically seen in zk-rollups, these proofs are based on different computational assumptions and methodologies.
    • Innovative Proving Systems like SGX: As a less conventional but emerging option, SGX (Software Guard Extensions) provides an additional layer of security. We’ll explore SGX in greater detail later in this article.

Multiple Proving Systems

The multi-prover framework isn’t just about using different types of proofs; it’s also about employing varied proving systems. For instance, in the context of zk-rollups, both SNARK and STARK backends can be utilized. The dual-use of these systems not only introduces redundancy but also mitigates the risk of systemic failures in any single proving system.

Diverse Implementations by Varied Teams

A robust multi-prover approach involves different teams working on separate implementations. This diversity significantly reduces the risk of a universal bug affecting the entire network, as the likelihood of different teams making identical errors in their code is minimal.

Addressing the Risks of Broken Proofs

In a multi-prover ecosystem, the possibility exists that different proof types may yield conflicting results for the same block. This situation necessitates an understanding of two key properties of proofs: completeness and soundness.

    • Completeness Issues: If a proof for a valid block cannot be generated, it implies a breakdown in completeness. In such scenarios, depending on the blockchain’s design (e.g., the number of required proofs), the entire chain might temporarily halt.
    • Soundness Concerns: This arises when multiple proofs for a valid block result in different block hashes. Ideally, only the proof for the correct blockhash should be possible. If the same vulnerability impacts multiple proof types, leading to the submission of incorrect blockhashes, the integrity of the multi-prover system could be compromised.

To address these risks, especially when using two types of proofs, a multi-sig governance mechanism can serve as an emergency resolution method. This entity would determine the correct blockhash and take necessary actions (like updating or excluding the faulty proof type). The decision-making in scenarios involving more than two proof types might hinge on a majority consensus regarding the validity of the proofs, followed by corrective measures against the erring proofs.
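The majority rule sketched above can be made concrete. The following is a minimal illustration, not any production rollup's logic; the proof-type names and hash values are purely illustrative:

```python
from collections import Counter

def resolve_blockhash(submitted):
    """Pick the blockhash agreed on by a strict majority of proof types.

    `submitted` maps a proof-type name (e.g. "snark", "stark", "sgx") to
    the blockhash that proof type attested to. Returns None when no strict
    majority exists, signalling escalation to the governance multi-sig.
    """
    counts = Counter(submitted.values())
    blockhash, votes = counts.most_common(1)[0]
    return blockhash if votes * 2 > len(submitted) else None
```

With three proof types, `resolve_blockhash({"snark": "0xabc", "stark": "0xabc", "sgx": "0xdef"})` resolves to `"0xabc"`, while a two-way split returns `None` and would trigger the corrective measures described above.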


What is SGX?

SGX, or Software Guard Extensions, represents a significant stride in the domain of secure computing. Developed by Intel, SGX is a set of security-related instruction codes that enable the creation of protected memory spaces, known as enclaves.

How SGX Works

At its core, SGX allows for the establishment of enclaves, which act as fortified, isolated compartments within a processor. These enclaves are designed to be a secure space where code and data can be kept protected from external threats, including those originating from the system’s own operating system.

In blockchain applications, SGX can be a game-changer, particularly for private smart contracts. For instance, running an oracle within an SGX enclave can enhance privacy and security, providing proof of data integrity without revealing the data itself.


Secure remote computation. A user relies on a remote computer, owned by an untrusted party, to perform some computation on her data. The user has some assurance of the computation’s integrity and confidentiality

SGX’s Structure and Key Components

    • Trusted Computing Base (TCB): This encompasses the hardware, firmware, and software essential for creating a secure computing environment.
    • Hardware Secrets: Such as the Root Provisioning Key (RPK) and Root Seal Key (RSK), which are integral to the hardware’s secure operations.
    • Attestation Mechanisms: These are crucial for verifying the integrity and security of the code running within the enclaves. SGX offers two attestation forms: local and remote. Local attestation is used within the same trusted computing base, whereas remote attestation allows external entities to verify the enclave’s integrity.
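The remote-attestation flow can be sketched as follows. Note the loud simplification: real SGX attestation uses asymmetric schemes (EPID or ECDSA) rooted in hardware secrets, whereas this toy uses an HMAC as a stand-in so the sketch stays runnable with the standard library:

```python
import hmac
import hashlib

# Stand-in for the platform's secret attestation key; in real SGX this is
# derived from hardware secrets and never leaves the processor.
ATTESTATION_KEY = b"platform-secret-attestation-key"

def attest(measurement, nonce, message):
    """'Sign' the enclave's initial state (measurement), the verifier's
    challenge nonce, and an enclave-produced message, as described above."""
    payload = measurement + nonce + message
    return hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).digest()

def verify_attestation(measurement, nonce, message, signature):
    expected = attest(measurement, nonce, message)
    return hmac.compare_digest(expected, signature)
```

The fresh nonce is what prevents an attacker from replaying an old attestation: a signature produced for one challenge fails verification under any other.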


Software attestation proves to a remote computer that it is communicating with a specific secure container hosted by a trusted platform. The proof is an attestation signature produced by the platform’s secret attestation key. The signature covers the container’s initial state, a challenge nonce produced by the remote computer, and a message produced by the container

Challenges and Limitations of SGX

Despite its robust design, SGX isn’t without its challenges. Trusting Intel’s hardware and the security of the SGX technology itself can be seen as a potential risk. Furthermore, the complexity and novelty of SGX mean that its reliability in various attack scenarios is still an evolving area. Security in SGX also doesn’t exist in isolation and often needs to be supplemented with other cryptographic methods or economic incentives for a well-rounded defense strategy.


1. Trust in the Technology and Provider

Utilizing SGX necessitates a degree of trust in Intel, the technology’s creator. This trust extends to the belief that SGX is robust against various attack vectors and that Intel will be responsive in patching new vulnerabilities as they emerge. However, skepticism remains, especially given the history of delays in addressing known SGX attacks. Furthermore, Intel’s decision in 2021 to deprecate SGX in consumer CPUs raises questions about its long-term viability and support.

2. The Newness of the Technology

SGX, being relatively new, is still under scrutiny regarding its overall reliability. The lack of extensive real-world testing and understanding of potential attack vectors means that the full spectrum of its security capabilities is yet to be established.

3. Security Beyond the Hardware

While SGX provides a secure hardware environment, it isn’t a standalone solution for all security needs. To build a comprehensive security model, SGX should be integrated with other cryptographic methods such as Multi-Party Computation (MPC), Zero-Knowledge Proofs (ZKP), Fully Homomorphic Encryption (FHE), and Oblivious RAM (ORAM). Balancing SGX’s technical strengths with economic incentives and other security mechanisms can create a more resilient overall system.

4. High-level CPU Diagram

The processor contains Static Random Access Memory (SRAM) cells, generally known as registers, which are significantly faster than DRAM cells but also a lot more expensive. An instruction performs a simple computation on its inputs and stores the result in an output location. The processor’s registers make up an execution context that provides the inputs and stores the outputs for most instructions. For example, ADD RDX, RAX, RBX performs an integer addition, where the inputs are the registers RAX and RBX, and the result is stored in the output register RDX.

The registers mentioned in Figure 6 are the instruction pointer (RIP), which stores the memory address of the next instruction to be executed by the processor, and the stack pointer (RSP), which stores the memory address of the topmost element in the call stack used by the processor’s procedural programming support. Under normal circumstances, the processor repeatedly reads an instruction from the memory address stored in RIP, executes the instruction, and updates RIP to point to the following instruction. Unlike many RISC architectures, the Intel architecture uses a variable-size instruction encoding, so the size of an instruction is not known until the instruction has been read from memory.

While executing an instruction, the processor may encounter a fault, which is a situation where the instruction’s preconditions are not met. When a fault occurs, the instruction does not store a result in the output location; instead, the instruction’s result is considered to be the fault that occurred. In a nutshell, the processor first looks up the address of the code that will handle the fault, based on the fault’s nature, and sets up the execution environment in preparation to execute the fault handler.

Each device attached to the system bus decodes the operation codes and addresses of all the messages sent on the bus and ignores the messages that do not require its involvement. For example, when the processor wishes to read a memory location, it sends a message with the operation code READ-REQUEST and the bus address corresponding to the desired memory location. The memory sees the message on the bus and performs the READ operation. At a later time, the memory responds by sending a message with the operation code READ-RESPONSE, the same address as the request, and the data value set to the result of the READ operation. The computer communicates with the outside world via I/O devices, such as keyboards, displays, and network cards, which are connected to the system bus. Devices mostly respond to requests issued by the processor.
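The fetch-decode-execute cycle described above can be sketched as a toy interpreter. This is an illustrative simplification: instructions are Python tuples rather than variable-size x86 encodings, and only the ADD example from the text is implemented:

```python
def run(memory, registers):
    """Toy fetch-decode-execute loop: read the instruction at the address
    in RIP, execute it, then advance RIP to the following instruction."""
    while True:
        instruction = memory[registers["RIP"]]
        opcode = instruction[0]
        if opcode == "HALT":
            return registers
        if opcode == "ADD":
            # e.g. ADD RDX, RAX, RBX: RDX <- RAX + RBX, as in the text
            _, dst, src1, src2 = instruction
            registers[dst] = registers[src1] + registers[src2]
        registers["RIP"] += 1

# The worked example from the text: ADD RDX, RAX, RBX with RAX=2, RBX=3.
program = [("ADD", "RDX", "RAX", "RBX"), ("HALT",)]
state = run(program, {"RIP": 0, "RAX": 2, "RBX": 3, "RDX": 0})
```

After the loop halts, `state["RDX"]` holds 5, the sum of RAX and RBX.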


A processor fetches instructions from the memory and executes them. The RIP register holds the address of the instruction to be executed.


1. Development Tools and Frameworks

To facilitate the integration of SGX into blockchain applications, several Software Development Kits (SDKs) and libraries are available, including Intel’s SGX SDK, Open Enclave, Rust SGX, Teaclave, and others. For existing applications that need to run in isolated environments without significant code modifications, library Operating Systems (libOSes) like Gramine, Occlum, and EGo have been developed. These provide a Linux-like abstraction layer, helping to adapt applications to secure enclave environments.

2. Pioneering Efforts in Blockchain

Organizations like Flashbots and Nethermind have been at the forefront of exploring SGX within the blockchain space. One notable application is running Ethereum’s Geth client inside an SGX enclave. Though feasible, this approach presents challenges, including high memory requirements, long startup times, and potential information leakage risks.

How the memory is handled with SGX:


The virtual memory abstraction gives each process its own virtual address space. The operating system multiplexes the computer’s DRAM between the processes, while application developers build software as if it owns the entire computer’s memory.

3. Block Building and Transaction Privacy within SGX

Another critical application of SGX in blockchain is in the area of private transactions and decentralized block building. By running the block-building process within SGX enclaves, the contents of user transactions can be kept private, and the block construction can be done in a verifiable manner. However, challenges such as managing the large state size of blockchain databases and the additional performance overheads remain topics of active research and development.

Applying SGX to Rollups:

SGX’s application to rollups brings promising possibilities, particularly in verifying the correctness of block execution. In rollups, the verification process can be rendered stateless: the program inside the SGX enclave receives only the necessary data (like the state root, transaction list, and block parameters) as input. It then verifies these inputs against the expected output, ensuring the integrity of the block execution.
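The stateless-verification idea can be sketched as below. A loud caveat: a real rollup enclave would execute EVM transactions against Merkle proofs; here a simple hash chain stands in for the state transition so the sketch is self-contained:

```python
import hashlib
import json

def execute_stateless(state_root, tx_list):
    """Stand-in state transition: fold each transaction into a hash chain
    starting from the pre-state root. The enclave needs only these inputs,
    never direct access to the full state."""
    root = state_root
    for tx in tx_list:
        payload = (root + json.dumps(tx, sort_keys=True)).encode()
        root = hashlib.sha256(payload).hexdigest()
    return root

def verify_block(state_root, tx_list, expected_new_root):
    """What the enclave attests to: re-executing the inputs yields the
    claimed post-state root."""
    return execute_stateless(state_root, tx_list) == expected_new_root
```

The enclave would then sign the verified post-state root, producing the SGX "proof" submitted on chain.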

This method aligns well with the principles of zero-knowledge proofs, where direct access to the state isn’t possible. The SGX enclave thus serves as a trusted component that can attest to the correctness of a block’s execution, adding another layer of security and integrity to the rollup process.

Multi-Prover in a Rollup Setting

In the context of rollups, implementing a multi-prover system requires the participation of nodes dedicated to proof generation. These nodes gather necessary data such as transactions, state accesses, and Merkle proofs. The generation of zk-based proofs might be resource-intensive and time-consuming, whereas SGX-based proofs can be quicker and less resource-demanding.

Once generated, these proofs are submitted to the rollup smart contract collectively, ensuring that all required proofs are available for verification. The contract then checks each proof for correctness. In the case of zk proofs, this involves running a verifier algorithm; for SGX proofs, it might be as simple as verifying an ECDSA signature. The final step is to ensure that all proofs agree on the same blockhash, confirming the block’s validity and marking it as proven on the chain.
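The contract-side acceptance logic described above can be sketched as follows; the proof representation and verifier callables are hypothetical stand-ins for the on-chain SNARK verifier and ECDSA signature check:

```python
def accept_block(proofs, verifiers):
    """Sketch of on-chain acceptance: every required proof type must be
    present, must verify, and all must attest to the same blockhash.

    `verifiers` maps a proof-type name to a callable that returns the
    attested blockhash on success, or None on failure.
    """
    hashes = set()
    for kind, verify in verifiers.items():
        proof = proofs.get(kind)
        if proof is None:
            return None  # a required proof is missing
        blockhash = verify(proof)
        if blockhash is None:
            return None  # the proof failed verification
        hashes.add(blockhash)
    return hashes.pop() if len(hashes) == 1 else None

# Toy verifiers: a "proof" here is just a (is_valid, blockhash) pair.
toy = lambda proof: proof[1] if proof[0] else None
verifiers = {"zk": toy, "sgx": toy}
```

A block is marked proven only when `accept_block` returns a blockhash; a missing proof, a failed verification, or a disagreement all block acceptance.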


1. SGX and Economic Incentives: A Balanced Approach

When considering the security model of blockchain systems, a nuanced approach that combines the technical assurances of SGX with the strategic application of economic incentives is vital. SGX, with its hardware-based encryption and isolated execution environments, offers a robust defense against many attack vectors. However, its effectiveness can be further enhanced when used in conjunction with well-designed economic incentives.

2. Complementing SGX with Economics

The integration of SGX with economic models needs to consider various scenarios:

    • High-Risk Situations: In instances where economic models face extreme stress or breakdown (e.g., market crashes or systemic financial failures), SGX’s technical security can provide a stable defense line, independent of economic fluctuations.
    • Prevention of Exploits: SGX alone, while formidable, can still be susceptible to sophisticated attacks. Therefore, pairing SGX with economic deterrents (like high costs for attempting security breaches) can enhance overall system resilience.

SGX in Blockchain: Beyond the Ideal

While SGX presents an impressive array of features for enhancing blockchain security, it’s crucial to acknowledge that it isn’t a panacea. The complexity and evolving nature of SGX mean that it should be part of a broader strategy incorporating other security measures and techniques. For instance, combining SGX with different cryptographic primitives and economic models can help create a more well-rounded and formidable security framework.


Understanding SGX and Its Operational Model

Before diving into the vulnerabilities, let’s understand the basic functioning of SGX. SGX allows applications to create protected areas in memory, called enclaves, which are designed to be resilient to attacks from both the operating system and the hardware. The cornerstone of SGX is its promise that code and data loaded in the enclave are protected and cannot be seen or modified from outside the enclave.

1. Exploiting Side Channels: A Pathway to Breaching SGX

Side-channel attacks have emerged as a potent threat to SGX, exploiting indirect paths to glean sensitive information. These attacks typically observe variations in physical parameters like execution time or power consumption to infer data or cryptographic keys.

a. Cache Attacks:

Cache attacks like Prime+Probe and Flush+Reload target the processor’s cache. By monitoring cache access patterns, an attacker can deduce the data being processed inside the enclave, thus compromising the confidentiality guaranteed by SGX.


A malicious OS can partition a cache between the software running inside an enclave and its own malicious code. Both the OS and the enclave software have cache sets dedicated to them. When allocating DRAM to itself and to the enclave software, the malicious OS is careful to only use DRAM regions that map to the appropriate cache sets. On a system with an Intel CPU, the OS can partition the L2 cache by manipulating the page tables in a way that is completely oblivious to the enclave’s software.

b. Branch Prediction and Speculative Execution Attacks:

These attacks exploit the CPU’s branch prediction and speculative execution features to access protected memory areas. Notable examples include Spectre and Meltdown, which have also been adapted to target SGX enclaves.

c. Page Fault Attacks:

By inducing and observing page faults, attackers can figure out the access patterns inside the enclave, thereby inferring the data being processed.

2. Specific Vulnerabilities in SGX: xAPIC and MMIO

The xAPIC and MMIO vulnerabilities, as exposed in the Secret Network incident, are particularly concerning.

a. xAPIC Vulnerability:

This vulnerability arises from a flaw in the Advanced Programmable Interrupt Controller (xAPIC) that allows an attacker to infer the data being accessed inside the enclave. By manipulating and observing the interrupt handling mechanisms, sensitive data like the consensus seed in the Secret Network can be extracted.

b. MMIO Vulnerability:

Memory-Mapped I/O (MMIO) attacks exploit the SGX design’s handling of certain memory-mapped I/O regions. Attackers can potentially gain unauthorized access to enclave memory, leading to data leakage.

3. Real-World Impact and Mitigation Efforts

The real-world implications of these vulnerabilities are far-reaching. The Secret Network’s compromise and the potential extraction of AACS2 keys in PowerDVD are testaments to the threats posed by these SGX vulnerabilities.

Mitigating these vulnerabilities often involves firmware and microcode updates, which can be challenging to deploy due to their reliance on BIOS updates, leading to a slow and cumbersome patching process.

4. The Challenge of Patching SGX

Patching SGX vulnerabilities is not straightforward. Each update must be carefully tested to ensure it doesn’t inadvertently introduce new vulnerabilities or impact the performance or functionality of the system. The delay in deploying these updates can leave systems vulnerable for extended periods.

5. The Future of SGX Security

The ongoing discovery of SGX vulnerabilities necessitates a re-evaluation of the trust placed in enclave-based security models. While Intel and other stakeholders have been actively addressing these vulnerabilities, the evolving nature of side-channel attacks means that a single solution is unlikely to be sufficient.



ZKPs, a cryptographic method that enables one party to prove to another that a statement is true without revealing any information beyond the validity of the statement itself, can complement SGX in several ways:

1. Enhancing Privacy Against Side-Channel Attacks:

    • Mitigation Strategy: ZKPs can verify the correctness of a computation (like a smart contract execution) without revealing the input or internal state. This method counters specific side-channel attacks that target the data or the state of computations within SGX.
    • Use Case Example: In blockchain, a ZKP could validate a transaction’s correctness processed within an SGX enclave without exposing the transaction’s details, thus maintaining transaction confidentiality despite SGX vulnerabilities.
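A concrete instance of the "prove without revealing" idea is a Schnorr proof of knowledge of a discrete logarithm, made non-interactive via the Fiat-Shamir heuristic. This is a classic ZKP, not the SNARKs used by production rollups, and the group parameters below are illustrative, not production-grade:

```python
import hashlib
import secrets

# The prover convinces a verifier it knows x with y = g^x mod p,
# without revealing x. Illustrative parameters, not a vetted DL group.
p = 2**255 - 19
g = 5

def _challenge(y, t):
    digest = hashlib.sha256(f"{g}:{y}:{t}".encode()).digest()
    return int.from_bytes(digest, "big") % (p - 1)

def prove(x, y):
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)              # commitment to fresh randomness
    c = _challenge(y, t)          # Fiat-Shamir challenge (hash, not verifier)
    s = (r + c * x) % (p - 1)     # response; x stays hidden behind r
    return t, s

def verify(y, proof):
    t, s = proof
    c = _challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

The verifier learns that the prover knows x, and nothing else; the same shape (commit, challenge, respond) underlies the SNARKs and STARKs discussed earlier.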

2. Building Trust Despite Vulnerable Environments:

    • Trustless Verification: ZKPs enable external validation of data processing in SGX enclaves, building trust even if the enclave itself might be compromised.
    • Blockchain Integration: For blockchain networks, ZKPs can ensure the integrity and correctness of off-chain computations done within SGX enclaves, crucial for maintaining consensus and trust.

Fully Homomorphic Encryption (FHE): A Complementary Shield


FHE allows computations to be performed on encrypted data, and the results of these computations remain encrypted. FHE can address some of SGX’s vulnerabilities:

1. Securing Data in Use:

  • Protection Mechanism: Even if attackers bypass SGX protections, the data, if encrypted using FHE, remains secure as they cannot decrypt it without the corresponding keys.
  • Blockchain Application: Storing encrypted blockchain states or smart contracts and allowing computations (like transaction processing) in their encrypted form enhances security and privacy.
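Homomorphic computation on ciphertexts can be illustrated with a toy Paillier cryptosystem, which is additively homomorphic (full FHE supports arbitrary computation; Paillier supports addition only). The primes below are far too small for real use and are chosen purely so the sketch runs instantly:

```python
import math
import secrets

# Toy Paillier keypair: an untrusted party can add values while seeing
# only ciphertexts. NOT secure parameters; illustration only.
p, q = 2_147_483_647, 2_147_483_629
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # decryption helper

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1          # fresh randomness per ciphertext
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def add_encrypted(c1, c2):
    """Homomorphic addition: multiply ciphertexts, never seeing plaintext."""
    return (c1 * c2) % n2
```

Decrypting `add_encrypted(encrypt(17), encrypt(25))` yields 42, even though the party performing the addition never saw 17 or 25.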

2. Extended Security Boundaries:

  • Data Security Outside Enclaves: FHE keeps data encrypted during processing outside the SGX enclave, mitigating risks related to data exposure in untrusted parts of the system.
  • Blockchain Scenarios: Blockchain nodes leveraging SGX can perform certain computations on encrypted data (e.g., transaction validation) without decrypting it, reducing the attack surface.


Integrating ZKPs and FHE with SGX in blockchain systems must consider the complexity and performance overheads. Implementing such systems requires a deep understanding of cryptographic primitives, SGX architecture, and blockchain protocols.

Implementing ZKPs with SGX:

  1. Designing ZKP Protocols: Choose ZKP types (e.g., SNARKs, STARKs) based on the trade-off between setup requirements, proof size, and computational overhead.
  2. Integration with SGX: Develop SGX enclaves to generate or verify ZKPs, and ensure the enclave’s code is optimized to mitigate known side-channel attacks during ZKP computation.

Utilizing FHE alongside SGX:

  1. Selecting FHE Schemes: Pick suitable FHE schemes that balance between encryption strength and computational efficiency.
  2. SGX-FHE Workflow: Design workflows where data is encrypted via FHE outside SGX, processed within the enclave, and results are sent out still encrypted.


1. Advancements in Technology and Security

The blockchain landscape is continually evolving, with new technologies and security challenges emerging regularly. SGX, as part of this landscape, will also evolve, addressing current limitations and adapting to new threats.

2. Holistic Security Strategies

Future blockchain implementations will likely adopt more holistic security strategies, integrating SGX with other advanced cryptographic methods and adapting to changing economic and technical environments. This approach ensures that blockchain systems remain resilient against a broad spectrum of threats, both known and unforeseen.



The article dug deep into the intricate world of multi-prover systems in blockchain, highlighting their significance, operational mechanisms, and the role of SGX within this framework. The multi-prover approach, by leveraging a diversity of proofs, proving systems, and implementations, significantly enhances the resilience and security of blockchain networks. SGX, with its ability to create secure and isolated execution environments, adds a valuable dimension to this approach, particularly in the context of private smart contracts and rollup verifications.

However, the journey towards achieving a foolproof blockchain security model is ongoing. The continuous evolution of threats and technologies necessitates an adaptable and multi-faceted security strategy, where innovations like SGX are integrated thoughtfully with other cryptographic techniques and economic models. As blockchain technology matures and faces new challenges, the integration of these diverse yet complementary elements will be crucial in building robust, secure, and trustworthy systems.

In conclusion, while the multi-prover strategy and tools like SGX significantly fortify blockchain systems, the search for impenetrable security continues.

Pentestify is committed to ensuring that the new web3 world thrives in abundance by keeping web3 companies resilient throughout their lifecycles, iterations, and even attacks, and we believe this is not possible without continuous smart contract security resilience.

Modular AI and Mojo: Chris Lattner has done it again!



@clattner_llvm has done it again with @Modular_AI!

My mind was blown and filled with new ideas after listening to @lexfridman‘s most recent podcast with him on the future of programming and AI.

Here are the top facts you need to know about Mojo, the new programming language, up to 35,000x faster than Python:


Mojo eliminates the need for programmers to manually adjust parameters for optimal performance on different systems, making the code more portable and efficient. It works by empirically testing different configurations on the target machine and caching the best result for that system.
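The autotune-and-cache idea can be sketched in Python. This is a conceptual stand-in for Mojo's built-in autotuning, not its actual API; the candidate values play the role of machine-specific tuning parameters:

```python
import functools
import time

def autotune(candidates):
    """On first call, empirically time each candidate configuration on
    this machine and cache the winner; later calls reuse the cached best."""
    def decorator(fn):
        cache = {}
        @functools.wraps(fn)
        def wrapper(*args):
            if "best" not in cache:
                timings = []
                for cfg in candidates:
                    start = time.perf_counter()
                    fn(*args, cfg=cfg)
                    timings.append((time.perf_counter() - start, cfg))
                cache["best"] = min(timings)[1]
            return fn(*args, cfg=cache["best"])
        return wrapper
    return decorator

@autotune(candidates=[64, 128, 256])
def double_all(data, cfg):
    # cfg plays the role of a tile/block size chosen per machine; this toy
    # body ignores it so the sketch stays self-contained.
    return [x * 2 for x in data]
```

Because the winning configuration is cached per machine, the same source code adapts itself to different hardware without manual tuning.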


  • Everything in Python is an object, referenced via a pointer on the call stack. By eliminating these pointers and moving values directly into registers, Mojo already achieves a 10x improvement.
  • Unlike Python, Mojo eliminates the GIL and hence enables true multithreading, which is fundamental for modern CPUs, GPUs, TPUs, etc.


Mojo, a superset of Python, brings remarkable improvements to typed programming. By integrating optional typing, Mojo not only maintains Python’s flexibility but also allows for better code optimization and completion. This facilitates gradual and flexible adoption of typing based on developers’ requirements and preferences, providing them the freedom to control code optimization levels.


  • Mojo allows optional typing, offering a balance between flexibility and optimization potential.
  • The introduction of types can be progressive in Mojo, enabling developers to add as many types as they need in their code.
  • Despite supporting types, Mojo remains fully compatible with Python, supporting all Python packages and dynamic operations.
  • Unlike Python, in Mojo, declared types are enforced by the compiler, adding a safety layer.
  • The approach of Mojo towards typing is designed to cater to different development situations, from prototyping to large-scale code bases. It aids in reducing errors, optimizing performance, and facilitating better code understanding.


One of the main advantages of Mojo is immutability, which improves performance. Immutability means that a value, once created, cannot change. This feature prevents bugs that could occur when an unexpected change to a value causes unforeseen consequences in other parts of the code. By providing what’s known as “value semantics,” Mojo ensures that if a data collection like an array or a dictionary is passed around within the code, a logical copy of all the data is made. This means that any changes made to that collection won’t affect the original data.
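The "logical copy" behaviour described above can be imitated in Python, where it must be done explicitly because Python passes references; Mojo's value semantics provide it by default. A minimal sketch:

```python
import copy

def append_item(collection, item):
    """Value-semantics sketch: operate on a logical copy so the caller's
    data is never mutated, mirroring Mojo's default behaviour."""
    local = copy.deepcopy(collection)
    local.append(item)
    return local

original = [1, 2, 3]
updated = append_item(original, 4)
```

After the call, `original` is unchanged: the mutation affected only the copy, which is exactly the bug class value semantics prevents.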


  • Mojo allows for the expression of complex data structures like atomic numbers or uniquely owned database handles.
  • Borrowed conventions in Mojo let the data be used without making copies, thus enhancing the performance.
  • Mutable references in Mojo can change the data without making a copy.
  • The abstraction level in Mojo can be adjusted from very low-level systems to application and scripting, making it a highly scalable system.
  • This level of detail is particularly beneficial to systems programmers who often need to directly manipulate bits and memory layouts.
  • Mojo combines the benefits of high-level languages (like Python) with the performance and flexibility of lower-level languages (like C++ or Rust).


Python was originally created for a completely different world, and Python libraries inherit these shortcomings. Mojo, on the other hand, comes to the rescue for serving complex distributed systems: it scales easily to new hardware without being hardware-specific, removing the need for a software engineer to also be a hardware expert in order to create complex and efficient algorithms.


  • Mojo unlocks new hardware innovation, as it becomes easier to develop specialised hardware for specific needs when creating and designing an AI model for a certain task.
  • Heterogeneous runtimes mean that computing operations (CPU, NPU, and GPU communications) can now be better scheduled, avoiding classical timing problems in computer science.


Extremely optimized AI models currently need their architecture and programming reworked every time third-party hardware vendors (Apple, NVIDIA, Intel, etc.) release new hardware. To remediate this, Mojo offers an abstract, scalable approach that factors this complexity away.


  • The Modular stack can adapt to new hardware designs, so the software engineer does not need to be equally proficient in hardware.
  • Vectorised and parallelised techniques make this possible.
  • Tiling is also used for memory optimisation, ensuring the cache is fully utilised on each piece of hardware.
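The effect of tiling can be sketched in plain Python (a toy `tiled_matmul`, not Modular's implementation): the loops of a matrix multiply are blocked so that each small tile of the operands is reused while it is still resident in cache.

```python
def tiled_matmul(a, b, tile=2):
    """Multiply square matrices a and b block by block.

    Processing tile-by-tile blocks keeps each block of the operands
    hot in cache while it is reused -- the idea behind Mojo's tiling
    optimisation (this is a toy illustration only).
    """
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):          # block row of c
        for jj in range(0, n, tile):      # block column of c
            for kk in range(0, n, tile):  # block of the shared dimension
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        s = c[i][j]
                        for k in range(kk, min(kk + tile, n)):
                            s += a[i][k] * b[k][j]
                        c[i][j] = s
    return c

# The blocked result matches the textbook row-by-column product.
assert tiled_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

Real implementations pick the tile size to match the hardware's cache hierarchy, which is exactly the kind of parameter the Modular stack can tune per target.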


  1. Mojo can use CPython libraries.
  2. CPython objects can be used natively in Mojo.
  3. Mojo creates the ‘one world’ that gets the best from Python, its interpreter, its libraries, etc.
  4. The latest Python performance research (by Microsoft) achieves at best a 20% improvement in recent releases, far from Mojo’s gains.

A few promises from @clattner_llvm himself:

  1. Mojo’s developers will not have to go through the painful change from Python 2 to Python 3, in which entire companies with huge codebases had to completely rewrite their code.
  2. AGI could be run using the Mojo programming language.
  3. Python code can run natively inside Mojo, at least as fast as, and generally faster than, Python alone.
  4. Mojo will NOT be Python 4.0, potentially avoiding a Cold War among Pythonians.

Hope you enjoyed it as much as I did, and can’t wait to see what the world builds with it.

Read more about it here and follow for more similar content:

The History of privacy and how Web3 will change it forever



“Know the past, secure the future – ignorance in history is the gateway to cyber warfare”


Privacy is a crucial concept in the modern world for a variety of reasons. It enables individuals to retain control over their personal information and how it is used and processed, which is essential for maintaining their autonomy, dignity, and security [1]. Moreover, privacy is frequently intertwined with concerns of power and inequality, and it can be a critical instrument for rebalancing power between major companies and individuals or for protecting individuals and marginalised communities [2]. Privacy can be defined differently depending on the context, but it generally refers to an individual’s capacity to determine who gets access to their personal information, thoughts, and actions [3].

This article provides a brief history of privacy, in addition to an examination of its contemporary concerns with regard to individuals, organisations, and states. It will conclude with a discussion of privacy regulations as they pertain to addressing these concerns, followed by a description of different alternatives to addressing these issues more effectively.

How has the concept of privacy changed over time?

The concept of privacy has evolved and changed throughout history, and it continues to evolve today. As society has become more industrialised and technology has progressed, the concept of privacy has grown in significance. A greater emphasis has been placed on protecting personal privacy and data, with many data regulations being approved and revisited constantly to further support the concept of privacy.

Aristotle’s Politics

Some believe Aristotle’s Politics to be one of the earliest references to the private domain and one of the earliest instances in which the concept of privacy is stated, circa 350 B.C.E. In his writing, he makes a clear distinction between the oikos (private family life) and the polis (the public realm). Aristotle characterises the former as a mixture of three hierarchies: master and slave, husband and wife, and father and son. Equally, discussing the polis, he contends that it is the result of coupling diverse private family units, hence placing the public state before the private individual [5].
Of course, nowadays, several notions have changed since then, such as the relationship between husband and wife, which has evolved towards a partnership and continues to do so, reinforcing the concepts of gender equality in terms of rights, responsibilities, and respect, all of which support personal privacy.

Early privacy definition changes

In December 1890, the Harvard Law Review published The Right to Privacy (by Samuel D. Warren and Louis D. Brandeis), which pioneered in the United States a discussion around the concept of the right to be left alone, and entertained new points such as the law of defamation, where, even in the absence of visible physical damage, compensation shall be allowed for injury to feelings, as in the actions of slander and libel [6]. However, the door is left open as to where the limit of the state’s control over privacy should be drawn – shall the courts thus close the front entrance to constituted authority, and open wide the back door to idle or prurient curiosity?

Decades later, in 1967, Alan Westin’s Privacy and Freedom first established the concept of the social value of privacy, which grants individuals and groups in society the preservation of autonomy, emotional release from role-playing, time for self-evaluation, and protected communications – a concern that grew with technology’s rapid evolution [7].

Complementary to previous definitions of privacy, Ferdinand Schoeman in 1992 added the aspects of human dignity, autonomy, and freedom in his work, where he defended views of privacy divided into two categories: the coherence thesis (there is something common to most privacy claims) and the distinctiveness thesis (privacy claims are to be defended morally) [8].

Privacy’s role in early protection laws and vice-versa

Early protection laws put in place to defend the privacy of individuals and their homes date back as early as 1361 in the United Kingdom with the Justices of the Peace Act, under which a Breach of the Peace occurs when there is actual intentional damage to one’s property in their presence [9].
Later, in 1789, the U.S. Constitution, although not explicitly guaranteeing the right to privacy, alludes to such a right in the First, Third, Fourth and Fifth Amendments, against unreasonable searches and seizures [10]. Shortly after, in 1948, the United Nations drafted the UDHR (Universal Declaration of Human Rights), whose Article 12 states: no one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks [11].

Since then, the concept of privacy, especially relative to the individual, has consolidated further as a human right, a pivotal building block and an essential precedent to the advent of technology’s rapid growth.

Privacy in brave new worlds

In contrast to George Orwell’s dystopian novel 1984, written in 1948 about a super-state whose inhabitants completely lacked any right to privacy [12], the parallel proliferation and exponential growth of multiple technologies (computers, internet networks, mobile phones, etc.) awakened the need to better define online privacy and showcased the ever-evolving nature of these technologies. One of the first countries to support the right to online privacy through data protection laws was France, with Article 3 of the law of 6th January 1978, which triggered the digital rights movement that year and states that every person has the right to know and to contest the information and the reasoning used in automated processing whose results are opposed to them [13].

Furthermore, the introduction of smartphones, next-generation computers, the internet’s next evolution of worldwide infrastructure, IoT, and smart wearables, together with their fast adoption, showed that the existing legislative and regulatory system, instead of protecting the individual, organisation, and state by default, was rather playing catch-up with how these technologies were being used to exploit data and information.
Reflecting this, one of the most impactful regulations drafted to support the concept of privacy was the European General Data Protection Regulation (GDPR) in 2016, which outlines the breadth of personal data and provides a broad definition of data processing. Nowadays, the GDPR has inspired many derivative privacy laws, such as the UK’s own Data Protection Act, newer versions of the EU’s ePrivacy Directive, and Ukraine’s most recent Data Privacy Reform of June 2021 [11]. The latter proposes the creation of an independent government agency responsible for both policymaking and its enforcement, reinforcing data privacy.

To summarise the above: while the precise significance and meaning of privacy may vary from person to person and culture to culture, it remains a fundamental right that is essential to our personal and social well-being, encompassing both the boundary between the self and others and the ability to control what to share [14]. It is important to highlight that, given the inherent differences between the physical realm and the ever-changing digital cyberspace, current and future definitions of privacy will be constantly challenged as new technologies arise (virtual reality, cerebral implants, etc.), together with the directives and regulations required to further support online privacy.

What are the challenges of personal privacy in today’s online world?

Individuals, organisations, and states might have vastly different objectives and concerns regarding privacy. Privacy is frequently a key priority for individuals, as they wish to preserve their personal information. In contrast, organisations may emphasise the security of their systems and the protection of their intellectual property over the privacy of individuals. In parallel, it is important to note that both organisations and states might use to their advantage different techniques that strengthen the psychological notion of data security and privacy. In today’s modern, information-driven world, several examples nicely illustrate the costs, benefits, incentives, and trade-offs that individuals, organisations, and states must constantly balance, which will be explained in greater detail below. These include the Snowden revelations and the implications of Web3 (public ledgers, DeFi, etc.).

Snowden Revelations

Edward Snowden’s leaks have had a significant impact on the concept of privacy, especially how individuals, companies, and states perceive it. Prior to Snowden’s revelations, many individuals may have believed that their personal information and communications were mainly private. His leaks revealed that the NSA was collecting massive amounts of data on individuals, including their emails, phone calls, and online activity [15].

For individuals, the loss of privacy can have substantial personal and psychological repercussions. For firms, surveillance can incur economic costs, such as decreased productivity and creativity, as well as brand harm if their consumers and clients view them as untrustworthy or as violators of their privacy. For states, the costs of surveillance might include the expense of building and maintaining the requisite surveillance infrastructure, as well as potential international sanctions [16].

However, there are potential advantages to surveillance, particularly for governments and organisations. For states, surveillance can give intelligence that can be utilised to safeguard national security and prevent crime and terrorism. For organisations, surveillance may give significant insights on customer behaviour and preferences, which can be leveraged to enhance goods and services and gain a competitive edge.

In the debate over monitoring and privacy, incentives and trade-offs are also crucial considerations. The motivation for individuals to safeguard their privacy may include a desire to keep control over their personal information and avoid illegal access to it. The need to preserve the confidence and loyalty of consumers and clients may motivate firms to safeguard personal information. The motivation for nations to engage in surveillance may stem from a desire to safeguard national security and prevent crime and terrorism. Nevertheless, these motivations must be weighed against the potential costs and trade-offs involved, such as the potential loss of privacy and the potential harm to reputations and relationships.

Overall, Edward Snowden’s revelations have had a significant influence on the idea of privacy and have increased awareness of the possible costs, benefits, incentives, and trade-offs involved in surveillance and the protection of personal data, further exposing the shortcomings of personal online privacy in today’s world when combined with the perceived unlimited power of states over online personal data.


Web3 implications

Web3 is the new, more secure, pseudonymous, and decentralised version of the internet. It is powered by blockchain technology, which is decentralised, immutable, permissionless, and interoperable, with the goal of leveraging and implementing advancements in other technologies like AI or VR.
Regarding the individual, it is important to highlight that this technology is not anonymous but rather pseudonymous, an important distinction when it comes to privacy. In Web2, by contrast, individuals are forced to give up their personal information, and control over its processing, to the governing organisations, even though these organisations must comply with data protection laws and regulations such as the European GDPR, which includes the right of data erasure. Contrary to the latter, in Web3, given the blockchain’s immutability property, it might prove impossible for users to delete their data once it has been uploaded and validated by the network’s nodes.

In parallel, regarding an organisation’s use of Web3, DAOs (Decentralised Autonomous Organisations) benefit from significantly faster transactions and lower fees, due to the innate ability to transfer money within the blockchain and the absence of a potentially abusive, monopolistic central banking entity. However, the blockchain’s more complex and difficult technology may raise an organisation’s total cost of development. Its self-governing control mechanism, through publicly available smart contracts, together with its permissionless access, makes it relatively easy to incentivise users and organisations to become early adopters of this new technology. Finally, in terms of trade-offs, a bug in a smart contract might prove deadly, as with YAM’s DAO, where one killed the entire organisation in less than 48 hours [17].

Furthermore, with the traditional internet, states have the power to obtain, store, and process large amounts of personal information on their citizens, which can be used for surveillance and control. Web3, however, makes it harder for states to identify the individuals behind certain transactions, should they wish not to be identified, thanks to the pseudonymity of blockchain technology. Nevertheless, states could theoretically link every individual to their public addresses by issuing blockchain-based passports and adopting cryptocurrencies as valid national currency [18].

Overall, Web3’s privacy advancements will provide individuals, organisations, and governments greater control over their online data.

How does modern privacy regulation address these challenges?

To briefly outline and pinpoint the key parameters that have proven to be challenging when it comes to online privacy, there are five concerns that must be addressed through regulations: extent of personal data, data controllers, data subjects, data processors and scope of the regulation in question. Three main regulations will be presented that support and tackle these five concerns, namely EU’s GDPR, California Consumer Privacy Act, and Canada’s PIPEDA. Equally, their shortcomings and how adequate they are at solving the issues at hand will be discussed, followed by a small conclusion with potential better approaches.


EU’s GDPR

The EU’s General Data Protection Regulation (GDPR) was first implemented in 2018 to safeguard the online privacy and data of any individual by regulating the scope of data processing within European cyberspace. Ironically, the GDPR text contains just one reference to privacy, which relates to another rule known as the ePrivacy Directive. Regarding the scope of personal data and data processing, the GDPR takes a somewhat expansive stance: the former includes any information pertaining to an identified or identifiable natural person, as well as the collecting, recording, structuring, and storing of such information. This greatly enhances an individual’s possible defence against privacy abuses [19] by either states or organisations (data controllers). A tangible illustration of how the GDPR handles these areas can be found on all websites that process online data in Europe, with clear, brief, comprehensible, and easy-to-read pop-ups that all users can read and approve or reject. This might be taken even further by requiring an equal number of clicks for affirmative and negative responses when accepting the terms and conditions. Unfortunately, a considerable number of online businesses make their forms appear compliant even when they do not follow these regulations. On the organisation’s side, many business owners regard this legislation as “high pain, no gain” due to its minimal, if any, benefits, particularly for marketing executives (working with massive amounts of data) and SMEs (small and medium-sized enterprises) demanding a GDPR-lite version.

In conclusion, although the EU’s GDPR is heavily focused on giving control back to the individual (over data collection, processing, storage, etc.), it has also strongly encouraged businesses to rethink their internal IT infrastructure and their true need to collect additional data for later use, both of which benefit the user. This is further reinforced by the heavy penalties non-compliant corporations incur: up to twenty million Euros or four per cent of their annual revenue, whichever proves higher.

California’s Consumer Privacy Act

The California Consumer Privacy Act of 2018 (CCPA) is a California privacy law that became effective on January 1, 2020. It is intended to prevent organizations from collecting and selling the personal information of California residents without their consent.
The CCPA governs the collection of California residents’ personal information by businesses. It defines personal information as any information that directly or indirectly identifies, refers to, characterises, is capable of being associated with, or might reasonably be related to a specific consumer or household. This includes information such as names, addresses, email addresses, phone numbers, and IP addresses.

Companies that gather personal information are deemed data controllers under the CCPA. Data subjects are the individuals whose personal information is being gathered by these businesses. Companies or persons that handle personal information on behalf of data controllers are known as data processors.

The CCPA grants Californians the right to know what personal information is gathered about them, the right to have their personal information deleted, and the right to opt out of the sale of their personal information. It also requires data controllers to give consumers notice regarding the collection of their personal information and to let consumers opt out of the sale of that information.

Overall, the CCPA is an important step towards tackling privacy issues in the digital age. It gives California citizens more control over their personal information and more visibility into how their data is gathered and utilised.

Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA)

The Personal Information Protection and Electronic Documents Act (PIPEDA) is a Canadian federal privacy legislation that regulates the collection, use, and disclosure of personal information in connection with commercial operations across the country. Individuals have the right to access, challenge, and update their personal information maintained by organisations.

Under PIPEDA, data collectors are entities that collect, utilise, or disclose personal information. Before collecting, processing, or disclosing the personal data of individuals (known as data subjects), these entities must seek their consent. This implies that data collectors must be honest about their purposes for collecting personal data, and that data subjects must be informed and allowed to make an educated decision regarding whether or not to provide consent.

PIPEDA also puts requirements on data processors, or businesses that process personal data on behalf of data collectors. Data processors are expected to adopt suitable technological, physical, and administrative protections to secure personal information, and they must only process personal data in line with the data collector’s instructions.

PIPEDA is intended to preserve the privacy of individuals’ personal information and make companies accountable for how they handle such information. It applies to data collectors, data subjects, and data processors, and it helps Canada solve several modern privacy concerns.

Are these adequate?

Modern privacy regulations aim to address the challenges of personal privacy in the current online environment by establishing clear rules and standards for the collection, processing, use, and disclosure of personal information, and by granting individuals certain rights in relation to their personal information. This ensures that people’s privacy is maintained and that they have a choice over how their personal information is used and shared. If online privacy is to remain a priority, it will be crucial for the next generations not to lose sight of these ideals and to ensure that the future never resembles Orwell’s 1984, whether in Web2, Web3, or beyond. Equally, these regulations might need to become even more restrictive on data collection, given advancements in mathematical and computational models able to identify individuals from large collections of diverse datapoints.

Are there any other better approaches?

The General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Personal Information Protection and Electronic Documents Act (PIPEDA) are all key privacy rules designed to safeguard individuals’ personal data and privacy. Nevertheless, other techniques may be explored as alternatives or enhancements to existing regulations.

A potential alternative to current regulations may be worldwide privacy legislation that applies consistently to all companies and individuals, regardless of their location. This would eliminate the need for varying privacy laws across countries and create a standard, predictable framework for data protection and privacy. The use of privacy-enhancing technologies, such as encryption, anonymisation, and pseudonymisation, might be an alternative strategy for protecting personal data and preventing illegal access, disclosure, and abuse. This would move the focus from regulatory compliance to technical solutions, which might increase the security and privacy of personally identifiable information, for instance by using blockchain technology or equivalents.

A third possible method may be to offer individuals greater control over their personal data via personal data stores, data trusts, and other mechanisms. Individuals would be able to manage and control their own personal data, rather than depending on corporations to do so.

Overall, despite the future’s unpredictability, and in an attempt to echo today’s successful regulations, placing the individual’s rights and interests in online privacy before those of any larger organism might be a good start towards building and ensuring a better future.




[1] R. Mahieu, N. J. van Eck, D. van Putten, and J. van den Hoven, “From dignity to security protocols: a scientometric analysis of digital ethics,” Ethics Inf. Technol., vol. 20, no. 3, pp. 175–187, Sep. 2018, doi: 10.1007/s10676-018-9457-5.

[2] “Privacy Technologies, Law and Policy.”

[3] “What is Privacy.” (accessed Dec. 05, 2022).

[4] Dr. Michael Veale, “Privacy Technologies, Law and Policy,” 2022.

[5] “Lessons from the Greeks: Privacy in Aristotelian Thought.” (accessed Dec. 06, 2022).

[6] S. D. Warren and L. D. Brandeis, “The Right to Privacy.” (accessed Oct. 24, 2022).

[7] A. F. Westin, “Privacy and Freedom,” Wash. Lee Law Rev., vol. 25, pp. 3–4 (accessed Dec. 06, 2022).

[8] F. D. Schoeman, “Privacy and Social Freedom,” Google Books (accessed Dec. 06, 2022).

[9] “Justices of the Peace Act 1361.” (accessed Dec. 06, 2022).

[10] “The Constitution of the United States: A Transcription,” National Archives (accessed Dec. 06, 2022).

[11] “Data Privacy Reform in Ukraine: What’s New?” Connect On Tech (accessed Dec. 06, 2022).

[12] “Brave New World by Aldous Huxley,” Goodreads (accessed Dec. 06, 2022).

[13] “Loi n° 78-17 du 6 janvier 1978 relative à l’informatique, aux fichiers et aux libertés,” Légifrance (accessed Dec. 06, 2022).

[14] T. Caulfield and D. Pym, “Philosophy, Politics, and Economics of Security and Privacy, Lecture 2,” 2022.

[15] “Edward Snowden revelations have had limited effect on privacy – Open thread,” The Guardian (accessed Dec. 08, 2022).

[16] “FACT SHEET: Imposing Costs for Harmful Foreign Activities by the Russian Government,” The White House (accessed Dec. 08, 2022).

[17] “What’s the Deal With Yam Finance?” (accessed Dec. 08, 2022).

[18] “Government sets out plan to make UK a global cryptoasset technology hub,” GOV.UK (accessed Dec. 09, 2022).

[19] Dr. Michael Veale, “Privacy Technologies, Law and Policy.”

On war – Pentestify Labs



The modern world is rapidly becoming an increasingly complex system, from new economic models being continuously implemented to new technologies that might affect a country’s critical infrastructure, such as electricity, water, or defence. As a consequence of these capabilities and their inherent complexity, together with globalisation, a rise in technological dependencies – and hence central points of failure – can be observed across different countries, including, but not limited to, the global supply chain, manufacturing, and chip design, which inevitably leads to abuse, attack, and, potentially, war.

This formal, educational article analyses and discusses different concepts of cyber war in the first part, namely around Thomas Rid’s perspective. Secondly, the past, current, and future state of cyber warfare, together with the role of cyber deterrence, will be covered. 


First, to best explain Rid’s work and argue it accordingly, the definition of cyber war must be expounded. Although there has been no single accepted answer over the last thirty years as to what cyber war really is [1], it is appropriate to start from the concept of war itself.

Regarding the definition of war, Carl von Clausewitz’s magnum opus, On War, which contains his dictum that “war is not merely a political act but a real political instrument, a continuation of political intercourse, a carrying out of the same by other means” [2], defends that war must be political, instrumental, and violent, forcing the enemy to comply with a certain political will through violence.

As for cyber war, definitional ambiguity, and even contradictory information, has not discouraged governments, academics, and the military from attempting to define it precisely. Since as early as 1993 there have been provocative and controversial papers, such as Cyber War Is Coming by Arquilla and Ronfeldt [3], which claimed that cyber war was an immediate threat, causing nation-wide alarm in the USA. On the other hand, Thomas Rid’s 2013 book, Cyber War Will Not Take Place [4], argues that no cyber-attack has yet caused a cyber war, given that, for a cyber-attack to be considered a cyber weapon, it must possess a high enough instrumental, political, and violent value, which in turn would lead to many more fatalities than previously recorded.

However, it is the former White House counter-terrorism advisor, Richard Clarke, whose definition of cyber war has proven to be the most influential on Google Scholar, through his bestselling book named Cyber War: The Next Threat to National Security And What To Do About It [5]. Here, the author argues that cyber war is “actions by a nation-state to penetrate another nation’s computers or networks for the purpose of causing damage or disruption”, which does not necessarily have to lead to human deaths or extreme levels of violence.

Rid’s arguments

Thomas Rid discusses that, to better understand why cyber war will not take place, there are three main misconceptions that must be clarified:

Firstly, he mentions that it is wrong to think that there is such a thing as cyber war or cyber peace; neither notion, he argues, makes sense.

Secondly, he argues that it is flawed to think that the government (seemingly the Air Force, in the case of the USA) is in charge of global security against cyber war and has control over it. Instead, he holds that the security of technical systems is up to the individual or the company in question, not the government itself.

Thirdly, an important distinction must be made between the word’s original meaning and its common metaphorical use, as in the war against drugs or cancer, which does not refer to the physical event of war and will not be discussed further in this article.

More specifically, Rid’s arguments seem to use Clausewitz’s as a three-point foundation to delimit what differentiates a cyber-attack from an act of war, given that it must be violent, political, and instrumental [6].


Political

Attached to political significance comes attribution, which refers to the identity of the offender who conducted the attack. Without attribution, Rid argues, an act of war cannot take place, because it is impossible to compel someone to the offender’s will if the offender’s identity is unknown; this further makes it impossible for war to be anything but an isolated event, with no adversarial entity imposing its power.


Instrumental

For a cyber war to take place, Rid holds that the act of war must be instrumental, whereby the enemy is forced to change. As stated by Clausewitz, this might be achieved by employing force as an extension of, or alternative to, other political means. It could be argued that, without the instrument of policy, war itself would lose meaning, lacking its element of subordination and control.


Violent

Rid states that, to comprehend violence in cyber war and as caused by cyber weapons correctly and fully, the nature of violence in traditional war must first be understood, so that the line between a violent and a non-violent act can be drawn. Rid argues that if an attack does not have the potential of force, it is only indirectly violent, its code not justifying a direct use of force. He then outlines that there has been only one major cyber-attack that caused relative physical harm, dating back to 2005 and codenamed Stuxnet, explained below in further detail. Equally, he theorises over the possibility of truly weaponising a cyber weapon, such as a fully automated complex weapon system becoming the subject of a breach, whereby the offender gains complete flexibility and potential for chaos and damage. An example could be, in theory, hijacking a remotely controlled aircraft like the Predator or Reaper drone. In practice, a couple of incidents have come close, such as the Iranian capture of a secret and stealthy CIA drone back in 2011 [7].

Rather than considering past and current attacks to be cyber weapons or acts of war, he explains that they can be considered instruments of war, with a high level of sophistication in three areas: sabotage, espionage, and subversion.


According to Rid, sabotage is rather technical in nature, with the objective of weakening and causing physical harm to an economic or military system, though not always leading to physical destruction and overt violence. To better describe sabotage, Rid uses the US OSS's (the precursor of the CIA) Sabotage Field Manual, released in 1944, to strengthen his view that sabotage "is carried out in such a way as to involve a minimum level of danger of injury, detection or reprisal", and hence involves a limited use of violence [8].


As opposed to the more visible effects of sabotage, Rid indicates that it is very difficult to know the current state of the art of espionage, because of its inherently stealthy nature. This attribute further contributes to the lack of attribution, which makes it an instrument of war rather than an act of war, as discussed above. It is only through publicly criticised and reprimanded figures like Snowden that its real extent can sometimes be known through leaks; in this case, the PDD-20 US document was uploaded to the public domain in 2013, sharing confidential information about nation-wide espionage by the NSA.


Having briefly discussed sabotage and espionage, the third remaining offensive activity is subversion, which, according to Rid, despite its extremely advanced and avant-garde political techniques, has lacked considerable media and academic focus. From al-Qaeda's attack on New York's World Trade Center, to the Occupy Wall Street movement ten years later, to the alter-globalisation movements of more recent times, Rid outlines that these techniques share two constant attributes: the will to undermine the governing authority, and the use of new telecommunication technologies, such as encryption and mobile communications. In fact, it is technology that has mainly lowered the barrier to entry while raising the threshold for success; this, in turn, means that subversion is becoming less violent, as per Rid's conclusions, and is thus an instrument of war at best.

Given the above-mentioned attributes of acts of war, Rid declares that no past or current cyber-attack fulfils all of these criteria at once; instead, this reinforces the idea that they are instruments of war. These very sophisticated cyber-attacks, as opposed to cyber weapons, do not need to be instrumental and intentional, and this lack of intentionality leads to a third problem: the problem of learning agents. This refers to the payload, however state-of-the-art, not being able to actively learn by itself, as seen during Stuxnet's attack. Rid points out, however, that recent advances in machine learning and artificial intelligence might solve this issue in upcoming attacks.

Arguments opposing Thomas Rid

Following Thomas Rid's publication, several scholars reacted to and investigated the phenomenon of cyber war itself, including respected names such as Richard Clarke and John Stone.

For the former, Richard Clarke, in his book Cyber War, explains how, even though the USA has not yet suffered a cyber war, smaller countries like Estonia or Georgia have.

Secondly, John Stone's article "Cyber War Will Take Place!" differs from Thomas Rid's Clausewitzian view of war on two accounts: the level of violence required for an instrument of war to be an act of war, and the disproportionality between the force required to conduct an attack and the violence it creates once executed. Taken together with his looser requirement of attribution for a war to be considered one, limited force, combined with instrumentality and political motivation, can indeed fall within the category of cyber war [10].

Furthermore, Sun Tzu's ideas become particularly relevant when it comes to fighting a war without the use of extensive physical force, as he states that "to seize the enemy without fighting is the most skillful" [11]. Regarding the level of force to be used in a war, even Carl von Clausewitz holds that force might be a necessary tool for the adversary, but he never states how much force must lead to violence, or fatalities. This makes Thomas Rid's arguments overly restrictive [2].

However, it is worth mentioning that Thomas Rid's main objective in the book, whose title is a pun on Jean Giraudoux's play "La guerre de Troie n'aura pas lieu" ("The Trojan War Will Not Take Place"), is to clarify some common misconceptions about the meaning of war, and consequently cyber war, in order to better categorise and distinguish future acts of cyber war and the instrumentality of future cyber weapons, rather than to make predictions.

Why Thomas Rid is not correct 

Firstly, although the answer to whether cyber war has already happened or will happen is not easy to develop, Thomas Rid's perspective and analysis of cyber war seem overly narrow and antiquated. Rid develops his definition of cyber war by restricting Clausewitz's view of force and violence, defending that the force employed in a war should lead to a matching level of violence. Beyond going against other scholars' views, like John Stone's, this fails to take into consideration the low-force yet potentially catastrophic attributes of cyber weapons used as an act of war. Hence, Rid fails to differentiate the terms violence and force, and their potentially disproportional relationship. As a reflection of this, he is not able to consider Stuxnet's attack as anything beyond a mere sophisticated act of sabotage.

Moreover, he presents a rather pessimistic and suboptimal view of the ability of people in the private sector to learn and overcome technical difficulties when engineering a state-of-the-art attack with cyber weapons. However, it is clear that, year on year, not only are there substantially more cyber-attacks (the FBI reports a 400% increase [12]), but the tools and software used to design, build and execute cyber-attacks are becoming increasingly accessible and advanced. One of the main drivers of this trend is modern artificial intelligence models, together with easier access to powerful hardware and resources.

Thirdly, Thomas Rid's thoughts on attribution might apply to the physical world, where it is very difficult to ignore an opponent's kinetic effect. However, he does not accommodate the digital world's stealthy nature and its ability to minimise, delay or even completely bypass attribution. Regardless of what deterrents are in place, a lack of attribution does not make an act of war any less valid; quite the opposite. In fact, under the Law of Armed Conflict and Article 51 of the UN Charter, the exact level of identification required to respond to an act of cyber war is not yet clear, further strengthening the case that Rid's perspective on cyber war and its current capabilities is rather obsolete.


To analyse the extent of past and current cyber-attacks clearly and correctly, it is useful to employ the paradoxical trinity that conceptualises Clausewitz's chaos of war and the tension between its three main elements: the government, the people and the army [13]. Over the past two decades, the world has suffered many cyber-attacks which led, in one way or another, to the weakening or destruction of one or more of those three elements. It is therefore worth analysing how past cyber-attacks and cyber weapons have debilitated those elements, in order to make objective predictions about the future.

Estonian cyber-attacks (2007-2008)

These attacks were carried out in three phases against the Estonian government, following its decision to move the Bronze Soldier, a Soviet World War II memorial, to a military cemetery.

In the first phase, the Russian government's attacks consisted of rather basic operations against Estonia's digital governmental institutions: denial-of-service attacks, website defacement, attacks on DNS servers, and mass email spam.

In the second phase, many of these attacks were reinforced by the use of distributed botnets and proxy servers in other countries. Although this phase lasted only a day, it coincided with Victory Day, which commemorates the defeat of Nazi Germany.

Finally, in the third phase, other institutions of vital importance were attacked, such as banks and informational government websites. These services were attacked from more than 180 countries at the same time.

Stuxnet (2010)

Stuxnet is one of the first cyber-attacks that had a clear kinetic effect in the physical world, back in 2010, with a similar effect to a “cruise missile or commando raid” [14]. 

This attack, targeted at the centrifuges of Iran's nuclear enrichment plant, required the development of four zero-day exploits and very advanced technical knowledge of how the machinery operated. For this, the offenders (now attributed to the US and Israel) had to investigate and study Siemens' PLC hardware, to find and exploit the frequencies at which the centrifuge motors would have to rotate to cause significant and irrecoverable physical damage, whilst simultaneously displaying false system diagnostic reports.

In this case, the attack set the nuclear programme back an estimated 2-3 years, advancing the offenders' political intent.

NotPetya (2017)

In 2017, ransomware appeared on many computers worldwide, apparently targeted at Ukraine and timed around its Constitution Day. It is now believed to be the result of a politically motivated attack by the Russian government to weaken and destabilise the Ukrainian government.

It was built upon the Petya malware and adapted to target systems vulnerable to the EternalBlue exploit, originally developed by the NSA.

After encrypting all files on the computer, it asked for a ransom to be paid in Bitcoin. The pseudonymous payment address made the attack nearly impossible to attribute to a single entity, reinforcing the belief that its purpose was to create chaos. Unfortunately, many victims who paid the ransom never received decryption keys at all; Maersk, for instance, suffered losses estimated in the hundreds of millions of US dollars.

Viasat (2022)

In early 2022, during Russia's war in Ukraine, the Russian government took down the KA-SAT satellite network, which offered one of the last remaining lines of communication for the people stuck in Ukraine during the war. According to Thomas Rid and many other scholars, experts and even governments, this attack was deeply underappreciated, given its technical complexity and the potential negative repercussions it had during the war. It consisted of exploiting software vulnerabilities in the satellite infrastructure, as well as firmware vulnerabilities in the modems used to connect to the satellites, in order to stop all communications between devices using the network. These attacks were most likely the work of the Russian AcidRain wiper malware and groups such as Fancy Bear. This allowed Russia to weaken Ukrainian military communications. However, around six thousand wind turbines in Germany were also affected.

The current state of cyberwarfare norms

As previously discussed, because of the multiplicity of definitions around cyber war, cyber weapons and cybercrime, and around whether certain attacks are acts of war or simply instruments of war or crime, much comes down to each country's own definition. In particular, the UK has developed a National Cyber Strategy which, according to the UK government, is a "plan to ensure that the UK remains confident, capable and resilient in this fast-moving digital world; and that we continue to adapt, innovate and invest in order to protect and promote our interests in cyberspace" [15]. Moreover, other entities like NATO have developed frameworks focused on defence, believing that the best offence is a good defence. Nevertheless, given the lack of a common framework to correctly classify and defend against cyber-attacks, the Tallinn Manual acts as a guidebook, developed by a group of international experts, to outline what constitutes cyber war and how best to respond to it.

How will it continue to evolve in the future?

As shown in the examples of cyber-attacks above, there is a general tendency to abuse telecommunications channels and to develop exploits harnessing the power of AI. An example of such advancements is deepfakes, with which an individual may impersonate an influential figure for political or economic gain. Equally, there will be a great shift in economic models, and hence security policies and vulnerabilities, following the advancement and popularisation of decentralised governance entities.

What role will cyber deterrence play in the future? 

As mentioned previously, one of the elements of cyber war most likely to change in the future is deterrence, given that, as opposed to attacks with physical weapons and humans on the battleground, the next evolution of cyber-attacks and cyber weapons will be stealthy by nature. Fortunately, thanks to advancements in AI and in statistical and probabilistic models, attribution, an ever-growing challenge, should become relatively easier, helping to correctly identify the offender.

Overall, here at Pentestify Labs, we are committed to educating our clients, partners, and the next generation of Web3 users about the real benefits of Web3, while making sure that certain concepts are not abused or ignored, such as governments taking extreme surveillance measures under false or unnecessary pretences, as has previously been seen with COVID-19.


[1] C. Ashraf, “Defining cyberwar: towards a definitional framework,” Def. Secur. Anal., vol. 37, no. 3, pp. 274–294, 2021, doi: 10.1080/14751798.2021.1959141.

[2] “Carl von Clausewitz: ON WAR. Table of Contents.” (accessed Nov. 23, 2022).

[3] J. Arquilla and D. Ronfeldt, “Cyberwar is coming!,” Comp. Strateg., vol. 12, no. 2, pp. 141–165, 1993, doi: 10.1080/01495939308402915.

[4] “NATO: ’’Cyber War Will Not Take Place’’: Dr Thomas Rid presents his book at NATO Headquarters, 07-May.-2013.” (accessed Nov. 23, 2022).

[5] C. K. Borah, “Cyber war: the next threat to national security and what to do about it? by Richard A. Clarke and Robert K. Knake,”, vol. 39, no. 4, pp. 458–460, Jul. 2015, doi: 10.1080/09700161.2015.1047221.

[6] T. Rid, Cyber war will not take place. 2013.

[7] “Iranians Claim Hack Brought Down US Drone Spy Plane | Silicon UK Tech News.” (accessed Nov. 23, 2022).

[8] “Simple Sabotage Field Manual by United States. Office of Strategic Services – Free Ebook.” (accessed Nov. 23, 2022).

[9] “‘Prepare for all-out cyber war’ | The Independent | The Independent.” (accessed Nov. 23, 2022).

[10] J. Stone, “Cyber War Will Take Place!,”, vol. 36, no. 1, pp. 101–108, Feb. 2013, doi: 10.1080/01402390.2012.730485.

[11] Sun Tzu, “The Art of War .” (accessed Nov. 23, 2022).

[12] “FBI sees a 400% increase in reports of cyberattacks since the start of the pandemic | Insurance Business America.” (accessed Nov. 23, 2022).

[13] “The Trinity and the Law of War.” (accessed Nov. 23, 2022).

[14] L. J. Wedermyer, “The Changing Face of War: The Stuxnet Virus and the Need for International Regulation of Cyber Conflict.”

[15] “National Cyber Strategy 2022 (HTML) – GOV.UK.” (accessed Nov. 23, 2022).

The foundations of cybersecurity will never change, even on web3



Information security is a rapidly evolving field and, unlike more established sectors, it faces many of the most complex challenges that politics, economics, philosophy, and law individually face [1], despite its closer association with technology. As a reflection of this, there are several definitions of information security, specifically around its intrinsic characteristics. This paper first discusses these information security properties, followed by two related concepts: declarative objectives and operational tools. Lastly, the importance of sustainability and resilience in security is detailed, with examples to support the points above.

Regarding the properties of information security, according to the international standard [2], information security is defined as the preservation of confidentiality, integrity and availability of information, which are three characteristics generally referred to as the CIA triad.
More specifically, confidentiality is concerned with making sure that only the right entity, whether a person, a collective, or a system, can access the presented information. For instance, a confidentiality breach happens when an attacker unrightfully obtains usernames and passwords from a database and accesses the information within, which may be the medical records of a hospital patient. To mitigate this issue, processes such as two-factor authentication or non-replicable biometric parameters, like retina scans, may be implemented [3]. It is important to highlight a common category error, which holds that authentication should act as an extension of the CIA triad, when it rather represents a process that strengthens and supports confidentiality [4].
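To make the two-factor authentication point concrete, the sketch below shows a minimal time-based one-time password generator in the style of RFC 6238 (which builds on RFC 4226's HOTP), using only the Python standard library. The function name and the shared secret are illustrative, not taken from any particular product; a production system would also handle secret provisioning and clock drift.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Time-based one-time password, following the RFC 6238 construction."""
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and user device derive the same short-lived code from a shared
# secret, so a stolen password alone is no longer enough to authenticate.
print(totp(b"12345678901234567890", at=59, digits=8))  # RFC 6238 test vector: 94287082
```

Because both sides compute the code independently from the secret and the clock, nothing secret ever crosses the wire during login, which is what makes this a useful second factor for confidentiality.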

Similarly, integrity is the property of the CIA triad that seeks to prevent unauthorised tampering with information through the dishonest exercise or abuse of write access. To further strengthen the integrity of information, non-repudiation objectives must be put in place. For this, digital signatures mathematically guarantee that, for any transaction, it was indeed the right agent in control of a private key (the sender) who sent the right data (information integrity) to the right recipient. Should any of these attributes change, the resulting digital signature would completely differ from the original one [5]. Furthermore, availability is the property that assures continued and reliable access to the information at just the right time. An attack (physical or logical) on the servers of a datacentre, such as a denial-of-service attack, may result in damage, whether permanent or not.
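The "any change yields a completely different signature" property can be illustrated with a standard-library sketch. Real schemes like ECDSA sign a digest of the transaction rather than the raw data; the snippet below shows only that digest step, with the signing step left as a comment, and the field names (`from`, `to`, `amount`) are hypothetical.

```python
import hashlib
import json

def tx_digest(tx):
    """Canonical SHA-256 digest of a transaction. In a real scheme it is this
    digest, not the raw data, that the sender signs with their private key
    (e.g. using ECDSA)."""
    payload = json.dumps(tx, sort_keys=True).encode()  # canonical serialisation
    return hashlib.sha256(payload).hexdigest()

original = {"from": "alice", "to": "bob", "amount": 10}
tampered = {"from": "alice", "to": "mallory", "amount": 10}

assert tx_digest(original) == tx_digest(dict(original))  # deterministic
assert tx_digest(original) != tx_digest(tampered)        # any change alters the digest
```

Since the signature is computed over this digest, altering the sender, recipient or amount invalidates the signature, which is exactly the non-repudiation guarantee described above.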

Overall, by gathering the definitions of confidentiality, integrity, and availability, security can be defined as the process that ensures that only the right agents (confidentiality) have just the right access to the right information (integrity) at just the right time (availability) [6].

Moreover, the CIA triad can equally support the relation between security and privacy, though it comes with trade-offs. For instance, in the example above of retina scans used to ensure confidentiality, privacy concerns may arise, given the potential of such a scan to disclose personal, confidential information, like signs of diabetes.

One definition of privacy holds that it is the notion a user has regarding the perceived strength of the preventive measures set in place to avoid unauthorised access to their data, making them feel more in control of their own information [7]. The user's perceived control over their data might sometimes even exceed the actual efficacy of the security measures put in place [8].

Another definition of privacy centres on users' distrust regarding the loss of personal privacy in e-commerce markets or social media, placing trust as the most important asset a business is built on, and hence establishing an intrinsic dependence between privacy and trust [9].
Altogether, it can be argued that the definition of privacy has shifted over time, from the personal right to be left alone [10], to the right to be forgotten (the right to erasure) by controlling the information available [11], to the notion of privacy as a social norm that enables different freedoms [12].

However, the necessary evil of sacrificing privacy for security, or vice versa, might be mitigated by replacing the trusted authority with a trustless system, as discussed below.

To better explain how the CIA triad can be applied to other general systems, it is necessary to clarify what a system is [13]: a product or component; that plus an operating system; that plus IT staff; any or all of the previous plus internal users or management; and finally, any or all of the previous plus external users.

The CIA triad represents the cornerstone of access controls, which form the main security architecture of larger, more general systems. Access controls are primarily divided into three main categories: administrative (company policies, regulations, etc.), technical (firewalls, routers, encryption, etc.) and physical (gates, security guards, locks, etc.). Equally, they are divided into three main models: Discretionary Access Controls (DACs), Nondiscretionary Access Controls (NACs), and Mandatory Access Controls (MACs).

With DACs, users are granted full privileges (read-write-execute) over the files they have been given access to, and they are allowed to share them with other users of the system. With NACs, on the other hand, privileges are based on a user's role rather than on the individual. Finally, MACs focus on enforcing confidentiality above anything else: files cannot be shared with users who lack the proper clearance. This model is often employed in the military and other governmental institutions [14], with instances like the Bell-LaPadula model [15].
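The Bell-LaPadula model mentioned above enforces two well-known rules, "no read up" and "no write down". The toy sketch below encodes them directly; the level names and functions are illustrative only, and a real reference monitor would of course sit in the kernel or access-control layer rather than in application code.

```python
# Clearance/classification levels, ordered from lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_clearance, object_label):
    """Simple security property: a subject may not read above its clearance."""
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance, object_label):
    """*-property: a subject may not write below its clearance,
    preventing high-level secrets from leaking to lower levels."""
    return LEVELS[subject_clearance] <= LEVELS[object_label]

assert can_read("secret", "confidential")          # read down: allowed
assert not can_read("confidential", "top_secret")  # read up: denied
assert can_write("confidential", "secret")         # write up: allowed
assert not can_write("secret", "confidential")     # write down: denied
```

Note how the write rule is the mirror image of the read rule: together they guarantee information can only ever flow upwards in classification, which is why MAC systems prioritise confidentiality over everything else.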

Declarative objectives must be differentiated from operational tools, to better separate the design from the implementation of these secure, CIA-oriented systems, with their different priorities.
For the former, stating that certain documents must remain confidential is a declarative objective that supports confidentiality. Likewise, the regulation prohibiting weapons on board an aircraft represents a declarative objective that supports the integrity (and safety) of the passengers. Lastly, if all data is to always be reachable on a particular server, a declarative objective must be drafted with such indications, granting a focus on availability.

On the other hand, operational tools describe how the declarative objectives are to be achieved. For example, to achieve the confidentiality of all documents, passwords and encryption must be implemented. Equally, to make sure that the passenger integrity objectives on the aircraft are met, security searches must be put in place. Likewise, to achieve availability of all data on the servers, RAID back-ups might be a good operational tool [14].

However, even two organisations within the same field do not generally share the same CIA priorities. An example is the difference in overarching governance between a UK bank and a DeFi DAO (a decentralised finance application governed by a decentralised autonomous organisation) on a blockchain.
In terms of general governance, a UK-based bank is governed by its CEO, who in turn answers to the board of directors, chaired by the appointed chairman. However, should the bank fail to comply with applicable UK regulations and policies, the UK government can intervene. In any case, this code of conduct and governance is kept strictly private to a few, with the objective of maintaining the secrecy of financial information, among others.

In contrast, DeFi applications are decentralised applications that live on the Ethereum blockchain, most commonly forming a DAO: a member-owned community without centralised leadership, whose governance is encoded in an immutable public smart contract.
In terms of confidentiality priorities, most banks depend on their employees, which already makes them vulnerable to unauthorised disclosure of personal information, with the centralised bank also acting as a single point of failure. To mitigate these risks, operational tools such as passwords, encryption, and two-factor authentication must be set in place.

As opposed to centralised banks, DeFi DAOs live on the blockchain itself, a completely decentralised public ledger. More specifically, despite being public, a public address (which forms the digital wallet address) is not tied to IDs, credentials or any other form of identification. Even though it could be argued that these public addresses can be tied to a single individual, there are encryption techniques, such as homomorphic encryption, together with zero-knowledge implementations, that allow for true anonymity [16].

In terms of transaction integrity, banks must record and store transactions in a central ledger, which can be manipulated by employees. To mitigate this risk, operational tools must be implemented, such as software that verifies the integrity of every transaction.
On the other hand, for DeFi DAOs, every single transaction is validated on the Ethereum network, which is made up of thousands of computers around the globe, whereby the digital signature proves that the right agent indeed made the transaction, the data has not been tampered with, and it is intended for the right recipient. The computation needed to falsify even one confirmed transaction is practically out of reach for any centralised computing power.

Finally, regarding the availability of information such as account balances, requests need to go through the bank, which unavoidably acts as a central point of failure. To reduce this issue, banks may implement operational tools such as continuous monitoring and redundant access.
For DeFi DAOs, by contrast, every single transaction makes up the general state of the EVM (Ethereum Virtual Machine), which is saved on every full node that belongs to the Ethereum network. In addition to this decentralised distribution, Ethereum also inherits the blockchain's Byzantine fault tolerance, reported at around 46% [17].

The main trade-off, however, is the computing power required to process every single user transaction, resulting in a lower number of transactions per second when compared to traditional banking. For the latter, the main trade-offs are dependency, forced trust and a lack of user control over their own data.

As previously mentioned, the CIA triad, whilst supporting privacy and security with ideally minimal trade-offs, must equally ensure the long-term, healthy, and non-inflationary longevity of the system; in other words, it needs to be sustainable. Equally, given that problems eventually arise, such as accidents or attacks that might endanger the survival of the system, declarative objectives implemented through operational tools must be correctly put in place to recover from adversities, a property otherwise referred to as resiliency. In the example of traditional banking, to mitigate the risk of the central digital system going down and hence blocking all transactions and functionality, a bank could set up a private Ethereum-based blockchain back-up system that is kept up to date with the central mainframe's private ledger; once the central mainframe is back online, all interim transactions are replayed onto the private ledger.

To conclude, an ideal equilibrium between the trade-offs of privacy and security, from a socio-political and economic standpoint, would be to develop a set of blockchain-based (financial, governing, etc.) systems that protect their users' anonymity and privacy through the use of homomorphic encryption and zero-knowledge models, whilst still allowing governments to input certain encrypted data so that the system can flag bad actors and release further information as needed.



[1] R. Anderson and T. Moore, "The Economics of Information Security."

[2] "ISO – ISO/IEC 27002:2005 – Information technology — Security techniques — Code of practice for information security management."

[3] E. Conrad, "Domain 2: Access Control," Elev. Hour CISSP, pp. 19–37, Jan. 2011, doi: 10.1016/B978-1-

[4] M. E. Whitman and H. J. Mattord, "Principles of Information Security," p. 728.

[5] "Ethereum Whitepaper" (accessed Oct. 02,

[6] C. Ioannidis, D. Pym, and J. Williams, "Information security trade-offs and optimal patching policies," Eur. J. Oper. Res., vol. 216, no. 2, pp. 434–444, Jan. 2012, doi: 10.1016/j.ejor.2011.05.050.

[7] T. Dinev, H. Xu, J. H. Smith, and P. Hart, "Information privacy and correlates: An empirical attempt to bridge and distinguish privacy-related concepts," Eur. J. Inf. Syst., vol. 22, no. 3, pp. 295–316, 2013, doi: 10.1057/EJIS.2012.23.

[8] B. Schneier, "The psychology of security," Lect. Notes Comput. Sci., vol. 5023 LNCS, pp. 50–79, 2008, doi: 10.1007/978-3-540-

[9] H. Wang, M. K. O. Lee, and C. Wang, "Consumer Privacy Concerns about Internet Marketing," Commun. ACM, vol. 41, no. 3, pp. 63–70, 1998, doi: 10.1145/272287.272299.

[10] "Warren and Brandeis, 'The Right to Privacy.'" (accessed Oct. 24, 2022).

[11] "Right to erasure | ICO."

[12] Ferdinand Schoeman, Privacy: Philosophical Dimensions, vol. 4, no. 5. 1890.

[13] R. Anderson, Security Engineering – A Guide to Building Dependable Distributed Systems. 2020.

[14] M. Nyanchama and S. Osborn, "Modeling Mandatory Access Control in Role-Based Security Systems," pp. 129–144, 1996, doi: 10.1007/978-0-387-34932-9_9.

[15] T. Ting, S. Demurjian, and M.-Y. Hu, "Requirements, capabilities, and functionalities of user-role based security for an object-oriented design model," 1991.

[16] S. Steffen, B. Bichsel, R. Baumgartner, and M. Vechev, "ZeeStar: Private Smart Contracts by Homomorphic Encryption and Zero-knowledge Proofs."

[17] H. Samy, A. Tammam, A. Fahmy, and B. Hasan, "Enhancing the performance of the blockchain consensus algorithm using multithreading technology," Ain Shams Eng. J., vol. 12, no. 3, pp. 2709–2716, Sep. 2021, doi: 10.1016/J.ASEJ.2021.01.019.

How Facebook's Metaverse will endanger core human nature and massively help China



What is Technology and the Metaverse?

When it comes to creating new technology, there is always much more at stake than its core technical development, such as its financial, economic and cultural implications for our society, or how it will alter human behaviour. In the case of Facebook's Metaverse, the long-term implications are not that clear.

The metaverse is the convergence of the digital and the physical into a new experience of the internet, and it’s a hot topic at the moment. There’s good reason for all the attention. Consider the ways that augmented reality will overlay digital experiences onto physical spaces, alongside the build-out of virtual reality so robust that it mirrors real-life physical spaces, and there’s the potential to transform the human experience in profound ways.

Please note that this article does not intend to argue against this new technology, but rather to highlight several potentially dangerous outcomes that might indeed happen, in order to help avoid them.


In the Meta universe, each individual user will have an Avatar, a digital representation of oneself. Meta users will therefore be able to have any look they like, reinforcing previous emotional baggage or insecurities, just like Instagram’s version of reality.

Furthermore, given that dear Zuckerberg will want you to stay in his Meta universe for as long as possible, the platform will make you see, hear and sense what you want, regardless of how reality actually is. Just imagine targeted ads on steroids, except in this case it is the entire surrounding universe. Even someone talking to you in this universe might be filtered to suit your comfort.

There is even a psychological term for this, the Dunning-Kruger effect: a widespread cognitive bias in which people who know very little about a subject overestimate their knowledge, skills and abilities. This intellectual blind spot can affect people who lack logical ability but are unaware of it, and the Meta platform will make sure they never become aware, to maximize user retention.


Intercity business travel has already lost its long-term viability due to the rise of online collaboration technologies. Yes, business travel will pick up post-pandemic in the short run, but give Miro, Microsoft Teams, and Google Workspace a decade inside the metaverse, and those platforms will mimic physical spaces with a fidelity that outperforms them in capabilities. With such sophisticated technologies widely available, even a 90-minute journey between Seattle and San Francisco, let alone a 14-hour flight to Shanghai, just to establish eye contact and shake hands will become wasted folly. As a result, everyone on board a future aircraft (or zero-emission airship) will be a leisure traveller of some sort, doing what leisure travellers do: connecting with people and places in ways that the metaverse won’t easily replace, from retreats and vacations to natural wonders and gastronomy.

Add to this the fact that the Metaverse will deliver digital goods (cars, houses, clothes, and what would have been Amazon delivery packages), and commercial goods transportation will shrink notably as well. Keep an eye out, Jeff.


Speaking of virtual goods, brands like Gucci, Apple, Microsoft and Amazon already want in on the game: they are already producing virtual, 3D representations of what would have been physical objects in the ‘real world’.

Of course, most likely not to the liking of central banks, all these goods will be bought and sold in Facebook’s own cryptocurrency, cutting out external dependencies such as banks and taxes, especially after 2021, a year in which Facebook’s platforms showed how much they depended on Apple’s App Store and Google’s Play Store. Facebook now wants to be the foundational platform for all these business verticals.

In the end, it has been Facebook’s philosophy from the very beginning to be not only a social network but also a platform that other businesses interact with and depend upon.


Facebook is traditionally credited with coining the “move fast and break things” mantra now widely practised across the digital industry. Combined with a system of smart sensors that can provide real-time insight into human biological processes and psychology, that approach is far from reassuring.

In terms of potential for breaches, one of the main privacy issues is that a hypothetical VR system may function as a supercharged Alexa-style virtual assistant. Inside of houses, a headset or AR glasses might act as both a camera and a microphone; more powerful VR systems could combine this with heart and respiration rates, bodily motions, and dimensions, and utilize the unique combinations of all of this data for individual identification and tracking.

The potential for consumer harm is far greater than anything observed so far from internet advertising and personal data collection. Even so, the Facebook metaverse will press on.


In virtual worlds, VR and AR enable much more realistic homebuilding. There are furniture store apps that allow you to utilize augmented reality to place products in your home to see if they fit in the designated space. Is it possible that people will scan parts of their homes and insert them into Meta spaces? How about the true-to-life room and furniture replicas?

The worry is that we’ll end up producing scale models that may be utilized for whatever nefarious purpose you can think of. What if you could also make your home’s exterior look like the actual thing? Why stop at your house when you can use public map databases to import the entire street?

You now have a complete digital reproduction of your daily life that anyone may view. They can figure out where you live using this information and OSINT (open source intelligence).


China’s massive governmental surveillance and tech bans, together with the fact that in the coming decade it will most likely become the biggest kid on the block (doubling the USA’s market leadership and value), will indeed slow down, within its borders, the damage the Metaverse will do to core human nature, especially regarding users’ reward systems (and egos), their grasp of reality versus virtual reality, and their knowledge, habits, skills and consumer behaviour.


It is therefore imperative for Facebook to be transparent about how its technology will be used, about its implications and consequences, and about how it handles data. A quick look back at its history shows that Facebook’s record, unfortunately, provides no such comfort, especially around privacy, and its rebranding as Meta is nothing but a marketing scheme to start fresh, at least in the consumer’s eye.

One remedy for such tendencies would be to make the Metaverse open source, so that users can know precisely how their data is being used, strengthen its security, and make society more aware of it overall.

Why Elon Musk’s Starlink will be the death of the Internet



Starlink is a satellite internet constellation being built by SpaceX to provide satellite Internet access. The constellation will consist of thousands of mass-produced small satellites in low Earth orbit, working in combination with ground transceivers to give users access to the Internet, even in areas without the proper infrastructure.

What is the single best quality of the Internet?


The Internet is an open platform that embraces every user’s freedom and innovation without permission or judgement. In this article, I will explain why certain well-known individuals, under an aura of social impact and genius, are indeed about to break the Internet’s fundamental openness, which has been thus far taken for granted. Solutions to alleviate or avoid such a future will also be discussed and proposed.

Sometimes the Internet’s innovations are relatively minor or predictable. But in other instances, both individually and in combination with one another, they can be truly world-changing. I consider myself very fortunate to have experienced first-hand, not only as a software engineer, but rather as a citizen of the internet, what this freedom entails, and how easy it is for public institutions, regulators or even, given the nature of capitalism, private groups and companies to upset this essential, yet utterly delicate equilibrium. To dive into the issue at hand, the following question must be answered: how can we establish such an equilibrium, especially when dealing with the most international platform of them all, the World Wide Web?

To answer this complex question, let’s dive into how the Internet works under the hood:

First, the layered nature of the Internet describes its overall structural architecture. The use of layering means that functional tasks are divided up and assigned to different architectural layers. The internet layer is a group of internetworking methods, protocols, and specifications in the Internet protocol suite that are used to transport network packets from the originating host, across network boundaries if necessary, to the destination host specified by an IP address. A common design aspect in the internet layer is the robustness principle: “Be liberal in what you accept, and conservative in what you send” — as a misbehaving host can deny Internet service to many other users.
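As an illustration only (not code from any real network stack; the function names are my own), the robustness principle can be sketched as a pair of helpers that parse incoming header lines liberally but emit outgoing ones in a single strict form:

```python
def parse_header(line: str) -> tuple[str, str]:
    """Be liberal in what you accept: tolerate case differences
    and stray whitespace around the name and the value."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()


def emit_header(name: str, value: str) -> str:
    """Be conservative in what you send: always produce one
    canonical, well-formed representation."""
    return f"{name.strip().lower()}: {value.strip()}\r\n"


# A sloppy peer's input is still understood...
print(parse_header("  Content-TYPE :  text/html "))
# ...while our own output is always well-formed.
print(emit_header(" Host", "example.com "))
```

A tolerant receiver keeps communication alive even when the other side misbehaves, while a strict sender keeps that misbehaviour from spreading across the network.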

Second, the end-to-end design principle describes where applications are implemented on the Internet. The Internet was designed to allow applications to reside essentially at the ‘edges’ of the network, rather than in the core of the network itself. This is precisely the opposite of traditional telephony and cable networks, where applications and content are implemented in the core, away from the users at the edge.

Third, the design of the Internet Protocol (IP) separates the underlying networks from the services that ride on top of them. IP was designed to be an open standard so that anyone could use it to create new applications and new networks. The Internet does not need to know what is in the packets to convey them to their destination. The Internet routes data equally, without inherently favouring particular applications or content providers over others, and in this way, it is not designed for any particular use. As it turns out, IP quickly became the ubiquitous bearer protocol at the centre of the Internet. Thus, using IP, individuals are free to create new and innovative applications that they know will work on the network in predictable ways. This is one of the most fundamental principles of communication. In order for communication to take place, both parties (in this case, two separate computers, such as the server-client connection of Medium) have to be able to understand each other. Failure to do so will result in these protocols not being able to convey and represent useful information, which ultimately leads to the user not getting the desired result.
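This end-to-end idea can be sketched with Python’s standard socket library (the function names and port handling here are illustrative, not any real network’s code): the “network” below is a TCP echo server that forwards bytes without ever interpreting them, while all application logic — encoding and decoding the message — lives at the two endpoints.

```python
import socket
import threading


def run_echo_server(host: str = "127.0.0.1") -> int:
    """Start a minimal one-shot TCP echo server on an ephemeral port.

    Like the routers in the core of the Internet, the server never
    inspects the payload's meaning: it just moves bytes along,
    leaving all interpretation to the endpoints.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))  # port 0: let the OS pick a free port
    srv.listen(1)

    def serve() -> None:
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(data)  # echo back, content-agnostic
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]  # the actual port chosen by the OS


def send_message(port: int, message: str) -> str:
    """Client-side application logic lives at the edge: it alone
    decides how the bytes are encoded and what they mean."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(message.encode("utf-8"))
        return cli.recv(1024).decode("utf-8")


if __name__ == "__main__":
    port = run_echo_server()
    print(send_message(port, "hello, edge-to-edge"))
```

Swapping the application (chat, file transfer, video) requires changing only the endpoint functions; the transport in between stays exactly the same, which is the point of pushing intelligence to the edges.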

From these different yet interrelated design components, one can see the overarching rationale that no central gatekeeper should exert control over the Internet.

So… when and how exactly did Elon Musk go off the rails, or, in this case, the orbit?

I consider myself very fortunate to be pioneering the tech disruption of the cybersecurity industry, along with a very talented team at Pentestify, which is heavily devoted to breaking the Internet’s infrastructure in order to secure it immediately afterwards, a practice known as white-hat hacking.

It is for this reason that, whilst being completely aware of the Internet’s limitations, I am convinced that there will always be a way to secure its architecture without entirely changing the way computers communicate with each other, and thereby without endangering the fundamental openness and freedom within it.

Therefore, I cannot stress enough that Starlink’s plan to fundamentally change the programming of the Internet protocols used to transmit information between each node (or satellite in orbit), whether for security, privacy, political or personal purposes, will culminate in the death of the open and neutral Internet that the Earth once had and loved.

Let’s just hope that, when it rains, no satellites pour. In the end, not even Musk will be able to face the music in the quiet vacuum of space. To avoid such a fate, the Internet must take advantage of its biggest strength: its worldwide community of users, many of whom recognise the value of freedom and open source, and who must direct its future instead of multi-billion dollar corporations. That is why Pentestify’s vision is, and will always be, to assure a secure future for all Web3 organisations, so that users can safely regain the control that Web2 continuously tries to strip away.