Abstract

“On-chain Data Containers” (ODCs) are used for indexing and storing data in Smart Contracts called “Data Objects” (DOs). Information stored in Data Objects can be accessed and modified by implementing smart contracts called “Data Managers” (DMs). This ERC defines a series of interfaces for the separation of the storage of data from the implementation of the logic functions that govern such data. We introduce the interfaces for ODCs, the structures associated with DOs for abstracting storage, the DMs to access or modify the data, and finally the interfaces for compatibility Registries that enable Data Portability (horizontal mobility) between different implementations of this ERC.

Motivation

As the Ethereum ecosystem grows, so does the demand for on-chain functionalities. The market encourages broader adoption and more complex systems, so there is a constant need for improved efficiency. We have seen times when market hype has driven an explosion of new token standard proposals. While each standard serves its purpose, most of them require additional flexibility to manage interoperability with other standards.

While such diversity spurs innovation, different projects will implement their own bespoke solutions for interoperability, resulting in a highly fragmented landscape. The lack of a standard mechanism for adapting tokenized assets across ERC standards accentuates these interoperability issues.

We recognize there is no “one size fits all” solution to solve the standardization and interoperability challenges. Most assets - Fungible, Non-Fungible, Digital Twins, Real-world Assets, DePin, etc - have multiple mechanisms for representing them as on-chain tokens using different standard interfaces. However, for those assets to be exchanged, traded, or interacted with, protocols must implement compatibility with those standards before accessing and modifying the on-chain data. Moreover, because smart contracts are immutable, future-proofing their implementations so that they can support new tokenization standards becomes important. A collaborative effort must be made to enable interaction between assets tokenized under different standards. This ERC provides the tools for developing such On-chain Adapters.

We aim to abstract the on-chain data handling from both the logical implementation and the ERC interfaces exposing the underlying data. This EIP proposes a series of interfaces for storing and accessing data on-chain, codifying the underlying assets as generic “Data Points” that may be associated with multiple interoperable and even concurrent ERC interfaces. This proposal is designed to work by coexisting with previous and future token standards, providing a flexible, efficient, and coherent mechanism to manage asset interoperability.

  • Data Abstraction: We propose a standardized interface for enabling developers to separate the data storage code from the underlying token utility logic, reducing the need for supporting and implementing multiple inherited -and often clashing- interfaces to achieve asset compatibility. The data (and therefore the assets) can be stored independently of the logic that governs such data.

  • Standard Neutrality: A neutral approach must enable the underlying data of any tokenized asset to transition seamlessly between different token standards. This will significantly improve interoperability among different standards, reducing fragmentation in the landscape. Our proposal aims to separate the storage of data representing an underlying asset from the standard interface used for representing the token.

  • Consistent Interface: A uniform interface of primitive functions abstracts the data storage from the use case, irrespective of the underlying token’s standard or the interface used for exposing such data. Both data and metadata can be stored on-chain and exposed through the same functions.

  • Data Portability: We provide a mechanism for the Horizontal Mobility of data between implementations of this standard, incentivizing the implementation of interoperable solutions and standard adapters.

Specification

Terms

ODC: A uniquely identifiable data structure used for indexing Data Objects or storing Data Points.

ODC Implementation: One or many Smart Contracts implementing the ODC access-management logic.

Data Point: A uniquely identifiable unit of information.

Data Object: One or many Smart Contracts implementing the low-level storage management of Data Points.

Data Manager: One or many Smart Contracts implementing the high-level logic and end-user interface for managing the Data Points.

Data Point Registry: One or many Smart Contracts that define a space of compatible or interoperable Data Points.

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in RFC 2119 and RFC 8174.

ODC Interface

  • ODC SHOULD manage internal IDs for each data container.
  • ODC SHOULD manage the access of Data Managers to Data Objects.
  • ODC SHOULD use the IODC interface:
interface IODC {
    /**
     * @notice Verifies if DataManager is allowed to write specific DataPoint on specific DataObject
     * @param dp Identifier of the DataPoint
     * @param dm Address of DataManager
     * @return if write access is allowed
     */
    function isApprovedDataManager(DataPoint dp, address dm) external view returns(bool);

    /**
     * @notice Defines if DataManager is allowed to write specific DataPoint
     * @param dp Identifier of the DataPoint
     * @param dm Address of DataManager
     * @param approved if DataManager should be approved for the DataPoint
     * @dev Function should be restricted to DataPoint maintainer only
     */
    function allowDataManager(DataPoint dp, address dm, bool approved) external;

    /**
     * @notice Verifies if DataObject is allowed to add Hooks to the DataPoint
     * @param dp Identifier of the DataPoint
     * @param dobj Address of DataObject
     * @return if write access is allowed
     */
    function isApprovedDataObject(DataPoint dp, address dobj) external view returns(bool);

    /**
     * @notice Defines if DataObject is allowed to add Hooks to the DataPoint
     * @param dp Identifier of the DataPoint
     * @param dobj Address of DataObject
     * @param approved if DataObject should be approved for the DataPoint
     * @dev Function should be restricted to datapoint maintainer only
     */
    function allowDataObject(DataPoint dp, address dobj, bool approved) external;

    /**
     * @notice Reads stored data
     * @param dobj Identifier of DataObject
     * @param dp Identifier of the datapoint
     * @param operation Read operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data
     */
    function read(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external view returns(bytes memory);

    /**
     * @notice Stores data
     * @param dobj Identifier of DataObject
     * @param dp Identifier of the datapoint
     * @param operation Write operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data (can be empty)
     * @dev Function should be restricted to allowed DMs only
     */
    function write(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external returns(bytes memory);
}
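
The @dev notes above imply that write() is gated by the Data Manager approvals tracked by the ODC Implementation. The following non-normative sketch shows one way such a gate could look; the contract name, storage layout, and error are assumptions of this example and not part of the interface.

pragma solidity ^0.8.0;

type DataPoint is bytes32; // user-defined value type assumed throughout this ERC

// Subset of the IDataObject interface defined below, repeated here for self-containment
interface IDataObject {
    function write(DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory);
}

// Non-normative sketch of the write() access gate of an ODC Implementation
contract OdcWriteGateSketch {
    error DataManagerNotApproved(DataPoint dp, address dm);

    // dp => dm => approved (hypothetical storage layout)
    mapping(bytes32 => mapping(address => bool)) internal _dmApprovals;

    function isApprovedDataManager(DataPoint dp, address dm) public view returns (bool) {
        return _dmApprovals[DataPoint.unwrap(dp)][dm];
    }

    function write(address dobj, DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory) {
        // Only Data Managers approved for this DataPoint may write through the ODC
        if (!isApprovedDataManager(dp, msg.sender)) revert DataManagerNotApproved(dp, msg.sender);
        // Forward the request to the Data Object holding the DataPoint's storage
        return IDataObject(dobj).write(dp, operation, data);
    }
}

A complete ODC Implementation would additionally expose allowDataManager() and allowDataObject(), restricted to the DataPoint maintainer as reported by the Data Point Registry.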

Data Object Interface

  • Data Object SHOULD implement the logic directly related to handling the data stored on Data Points.
  • Data Object SHOULD implement the logic for transferring management of its Data Points to a different ODC Implementation.
  • Data Object SHOULD use the IDataObject interface:
interface IDataObject {
    /**
     * @notice Reads stored data
     * @param dp Identifier of the DataPoint
     * @param operation Read operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data
     */
    function read(DataPoint dp, bytes4 operation, bytes calldata data) external view returns(bytes memory);

    /**
     * @notice Stores data
     * @param dp Identifier of the DataPoint
     * @param operation Write operation to execute on the data
     * @param data Operation-specific data
     * @return Operation-specific data (can be empty)
     */
    function write(DataPoint dp, bytes4 operation, bytes calldata data) external returns(bytes memory);

    /**
     * @notice Sets ODC Implementation
     * @param dp Identifier of the DataPoint
     * @param newImpl address of the new ODC implementation
     */
    function setOdcImplementation(DataPoint dp, address newImpl) external;
}

Data Objects can receive read() or write() requests when a Data Manager is requesting access to a Data Point.

The function setOdcImplementation() SHOULD enable the delegation of the management functions to an IODC implementation.
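
For illustration, a Data Object skeleton consistent with the two statements above could restrict write() to the ODC Implementation currently registered for each Data Point and let setOdcImplementation() re-point that delegation. This is a non-normative sketch; the contract name, storage layout, and dispatch hook are assumptions of this example, and a production implementation would also restrict setOdcImplementation(), e.g. to the DataPoint admin reported by the Data Point Registry.

pragma solidity ^0.8.0;

type DataPoint is bytes32; // user-defined value type assumed throughout this ERC

// Non-normative Data Object skeleton
abstract contract DataObjectSketch {
    error NotTheOdcImplementation(DataPoint dp, address caller);

    // DataPoint => ODC Implementation currently trusted to manage access to it
    mapping(bytes32 => address) internal _odcOf;

    function write(DataPoint dp, bytes4 operation, bytes calldata data) external returns (bytes memory) {
        // Only the ODC Implementation registered for this DataPoint may write
        if (msg.sender != _odcOf[DataPoint.unwrap(dp)]) revert NotTheOdcImplementation(dp, msg.sender);
        return dispatchWrite(dp, operation, data);
    }

    function setOdcImplementation(DataPoint dp, address newImpl) external {
        // Access control omitted in this sketch; see the note above
        _odcOf[DataPoint.unwrap(dp)] = newImpl;
    }

    // Concrete Data Objects implement their storage handling here (see dispatchWrite() in the Reference Implementation)
    function dispatchWrite(DataPoint dp, bytes4 operation, bytes calldata data) internal virtual returns (bytes memory);
}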

Data Point Structure

  • Data Points SHOULD be bytes32 storage units.
  • Data Points SHOULD use a 4-byte prefix for storing information relevant to the compatibility with other Data Points.
  • Data Points SHOULD use the last 20 bytes for identifying the Registry that allocated them.
  • The RECOMMENDED internal structure of the Data Point is as follows:
/**
 * RECOMMENDED internal DataPoint structure on the Reference Implementation:
 * 0xPPPPVVRRIIIIIIIIHHHHHHHHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
 * - Prefix (bytes4)
 * -- PPPP - Type prefix (i.e. 0x4450 - ASCII representation of letters "DP")
 * -- VV   - Version of DataPoint specification (i.e. 0x00 for the reference implementation)
 * -- RR   - Reserved
 * - Registry-local identifier
 * -- IIIIIIII - 32 bit implementation-specific id of the DataPoint
 * - Chain ID (bytes4)
 * -- HHHHHHHH - 32 bit of chain identifier
 * - REGISTRY Address (bytes20)
 * -- AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA - Address of Registry which allocated the DataPoint
**/
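
As a non-normative illustration of this layout, the helpers below pack and unpack a DataPoint following the recommended structure. The library name, the exact prefix constant, and the use of block.chainid are assumptions of this example.

pragma solidity ^0.8.0;

type DataPoint is bytes32; // user-defined value type assumed throughout this ERC

// Non-normative helpers for the RECOMMENDED DataPoint layout above
library DataPointLib {
    bytes4 internal constant PREFIX = 0x44500000; // "DP" + version 0x00 + reserved byte 0x00

    // Packs a registry-local id, the current chain id, and the allocating registry address into a DataPoint
    function encode(address registryAddr, uint32 id) internal view returns (DataPoint) {
        return DataPoint.wrap(
            bytes32(
                (uint256(uint32(PREFIX)) << 224) |
                    (uint256(id) << 192) |
                    (uint256(uint32(block.chainid)) << 160) |
                    uint256(uint160(registryAddr))
            )
        );
    }

    // Address of the Registry which allocated the DataPoint (last 20 bytes)
    function registry(DataPoint dp) internal pure returns (address) {
        return address(uint160(uint256(DataPoint.unwrap(dp))));
    }

    // 32-bit chain identifier encoded in the DataPoint
    function chainId(DataPoint dp) internal pure returns (uint32) {
        return uint32(uint256(DataPoint.unwrap(dp)) >> 160);
    }

    // 4-byte prefix used for compatibility checks
    function prefix(DataPoint dp) internal pure returns (bytes4) {
        return bytes4(DataPoint.unwrap(dp));
    }
}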

Data Point Registry Interface

  • Data Point Registry SHOULD store Data Point access management data for Data Managers and Data Objects.
  • Data Point Registry SHOULD use the IDataPointRegistry interface:
interface IDataPointRegistry {

    /**
     * @notice Verifies if an address has an Admin role for a DataPoint
     * @param dp DataPoint
     * @param account Account to verify
     */
    function isAdmin(DataPoint dp, address account) external view returns (bool);

    /**
     * @notice Allocates a DataPoint to an owner
     * @param owner Owner of the new DataPoint
     * @dev Owner SHOULD be granted Admin role during allocation
     */
    function allocate(address owner) external payable returns (DataPoint);

    /**
     * @notice Transfers ownership of a DataPoint to a new owner
     * @param dp DataPoint to be transferred
     * @param newOwner New owner of the DataPoint
     */
    function transferOwnership(DataPoint dp, address newOwner) external;

    /**
     * @notice Grant permission to grant/revoke other roles on the DataPoint inside an ODC Implementation
     * This is useful if DataManagers are deployed during the lifecycle of the application.
     * @param dp DataPoint
     * @param account New admin
     * @return If the role was granted (otherwise account already had the role)
     */
    function grantAdminRole(DataPoint dp, address account) external returns (bool);

    /**
     * @notice Revoke permission to grant/revoke other roles on the DataPoint inside an ODC Implementation
     * @param dp DataPoint
     * @param account Old admin
     * @dev If an owner revokes the Admin role from themselves, they can grant it again
     * @return If the role was revoked (otherwise the account didn't have the role)
     */
    function revokeAdminRole(DataPoint dp, address account) external returns (bool);
}
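
To show how the Registry and the ODC Implementation interact, the following non-normative sketch allocates a DataPoint and, acting as its Admin, approves a Data Object and a Data Manager for it. The contract name and the flat import paths are hypothetical, mirroring the import style used in the Reference Implementation below, and the DataPoint type is assumed to be declared alongside those interfaces.

pragma solidity ^0.8.0;

import "IODC.sol";               // hypothetical path for the IODC interface above
import "IDataPointRegistry.sol"; // hypothetical path for the IDataPointRegistry interface above

// Non-normative setup flow: allocate a DataPoint, then wire up a Data Object and a Data Manager
contract DataPointSetupSketch {
    function setup(
        IDataPointRegistry registry,
        IODC odc,
        address dataObject,
        address dataManager
    ) external payable returns (DataPoint dp) {
        // 1. Allocate a new DataPoint; this contract is granted the Admin role during allocation
        dp = registry.allocate{value: msg.value}(address(this));

        // 2. As DataPoint maintainer, allow the Data Object to add Hooks to the DataPoint
        odc.allowDataObject(dp, dataObject, true);

        // 3. As DataPoint maintainer, approve the Data Manager to write the DataPoint
        odc.allowDataManager(dp, dataManager, true);
    }
}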

Data Manager Contract

  • Data Manager MAY use read() or DataObject.read() to read data from Data Objects
  • Data Manager MAY use write() to write data to Data Objects
  • Data Manager MAY share a Data Point with other Data Managers
  • Data Manager MAY use multiple Data Points
  • Data Manager MAY implement the logic for requesting Data Points from a Data Point Registry.

Data Managers are independent smart contracts that implement the business logic. They can read() either through the ODC Implementation or directly from a Data Object address, and they write() through the ODC Implementation that manages the delegated storage of the Data Points.

Rationale

The decision to encode Data Points as bytes32 data containers is primarily driven by flexibility and future-proofing. Using bytes32 allows for a wide range of data encodings, giving developers many options to accommodate diverse use cases. Furthermore, as Ethereum and its standards continue to evolve, encoding as bytes32 ensures that Standard Adapters can support future data types or structures without requiring significant changes to the adapter itself. The Data Point encoding should have a prefix so that the Data Object can efficiently identify compatibility issues when accessing the data storage. Additionally, the prefix should be used to find the Data Point Registry and verify admin access to the Data Point. A suffix identifying the Data Point Registry is also required so that the Data Object can quickly discard malformed transactions that attempt to use a Data Point from a mismatched Data Point Registry.

Data Manager implementations decide which Data Points they will be using. Their allocation is managed through a Data Point Registry, and the access to the Data Point is managed by passing through the ODC Implementation.

The decision to make Data Objects independent, separate Smart Contracts that implement the same read/write interface for communicating with Data Managers is mainly driven by the scalability of the system. Offering a simple interface for this 2-layer structure enables different applications to have their own addresses for the storage of data as well as assets. It is up to each implementation to manage access to this Data Point storage space. This enables a wide array of complex, dynamic, and interactive use cases to be implemented with multiple ERCs as well as other smart contracts.

Data Objects offer flexibility in storing mutable on-chain data that can be modified as per the requirements of each specific use case. This enables the Data Managers to hold mutable states in delegated storage and reflect changes over time, providing a dynamic layer to the otherwise static nature of storage through most other standardized interfaces.

As the Data Points can be set to respond to a specific ODC implementation, Data Managers can decide to migrate the complete storage of a Data Object from one ODC implementation to another. By leveraging multiple implementations of the IODC interface, this standard delivers a powerful framework that amplifies the potential of all ERCs (present and future).

Backwards Compatibility

This ERC is intended to augment the functionality of existing token standards without introducing breaking changes. As such, it does not present any backwards compatibility issues. Tokens of already deployed ERCs can be wrapped as Data Points, or held in Vaults controlled by Data Objects, and later exposed through any implementation of Data Managers.

See Reference Implementation.

Reference Implementation

We present an example implementation of Asset Vaults, a deterministic Asset Vault Factory, and a Data Object specialized in managing Asset Vaults. The following implementation assumes the existence of both IDataPointRegistry and IODC implementations.

The Vault Data Object may hold multiple assets locked under management, which in turn may be exposed through a Data Manager interface implementing a logic equivalent to the fractionalization of the Vault.

Example Vault Interface

Asset Vaults are example smart contracts designed to implement a minimalistic approach to asset management. Any user can deploy Asset Vaults through the factory. The role of an Asset Vault smart contract is to manage assets on behalf of the owner. We recommend Asset Vaults be Ownable2Step for security reasons.

pragma solidity ^0.8.0;
interface IVault {
    /**
     * @notice Executes a state-changing call on a target
     * @param target Contract to call
     * @param data Data sent to the target, including function selector
     * @param value Native coin value sent with the call
     * @dev Access to this function SHOULD be protected
     */
    function execute(address target, bytes calldata data, uint256 value) external returns (bytes memory);

    /**
     * @notice Executes a static call (non state-changing) on a target
     * @param target Contract to call
     * @param data Data sent to the target, including function selector
     */
    function executeStatic(address target, bytes calldata data) external view returns (bytes memory);
  ...
}

Vault Factories are example smart contracts that facilitate the deployment of Asset Vaults.

Example Vault Factory Interface

pragma solidity ^0.8.0;

import "IVault.sol";

interface IVaultFactory {
    function deploy(bytes calldata data) external returns (IVault);
    function deployDeterministic(bytes32 salt, bytes calldata data) external returns (IVault);
    function computeVaultAddress(address deployer, bytes32 salt, bytes calldata data) external view returns (address);
  ...
}

A Vault Data Object is an implementation of Base Data Object for managing assets through Asset Vaults. As an implementation of IDataObject, this smart contract must implement read() and write() functions related to handling the data stored on Data Points under management. The logic of read() can be implemented with the use of dispatchRead() and the write() logic with dispatchWrite(). In this example, the Vault Data Object also implements some signature verification logic.

Example Vault Data Object contract

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

import "IVaultFactory.sol";
import "BaseDataObject.sol";
// OpenZeppelin dependencies assumed by this example
import "@openzeppelin/contracts/interfaces/IERC1271.sol";
import "@openzeppelin/contracts/utils/structs/EnumerableSet.sol";

contract VaultDataObject is IERC1271, BaseDataObject {
    using EnumerableSet for EnumerableSet.AddressSet;

    bytes4 public constant VAULT_FOR_SALT_SELECTOR = bytes4(keccak256("vaultForSalt(bytes32)")); //vaultForSalt(bytes32) returns(address)
    bytes4 public constant ALL_VAULTS_SELECTOR = bytes4(keccak256("allVaults()")); //allVaults() returns(address[] memory)

    bytes4 public constant DEPLOY_VAULT_SELECTOR = bytes4(keccak256("deployVault(address,bytes)")); //deployVault(address factory, bytes calldata data)
    bytes4 public constant DEPLOY_DETERMINISTIC_VAULT_SELECTOR = bytes4(keccak256("deployDeterministicVault(address,bytes32,bytes)")); //deployDeterministicVault(address factory, bytes32 salt, bytes calldata data)
    bytes4 public constant GRANT_SIGNATURE_VALIDATION_SELECTOR = bytes4(keccak256("grantSignatureValidation(address,bytes32,address)")); //grantSignatureValidation(address vault, bytes32 hash, address signer)
    bytes4 public constant REVOKE_SIGNATURE_VALIDATION_SELECTOR = bytes4(keccak256("revokeSignatureValidation(address,bytes32,address)")); //revokeSignatureValidation(address vault, bytes32 hash, address signer)
    bytes4 public constant VAULT_EXECUTE_SELECTOR = bytes4(keccak256("vaultExecute(address,address,bytes,uint256)")); //vaultExecute(address vault, address target, bytes calldata data, uint256 value)
    bytes4 private constant ERC1271_INVALID_SIGNATURE = 0xffffffff;

    error UnknownVault();
    error UnknownSalt();
    error UnknownDataPoint();
    error DeployedVaultAlreadyRegistered();

    event SignatureValidationAdded(address vault, bytes32 hash, address signer);
    event SignatureValidationRevoked(address vault, bytes32 hash, address signer);

    struct SignatureVerificationData {
        mapping(address vault => mapping(address signer => bool valid)) validSigners;
    }

    struct DpData {
        EnumerableSet.AddressSet deployedVaults;
        mapping(bytes32 salt => address vault) deployedDeterministicVaults;
        mapping(bytes32 hash => SignatureVerificationData) validSignatures;
      ...
    }

    mapping(uint256 dpIdx => DpData) private dpDataStorage;
    mapping(address vault => DataPoint) internal vaultsToDataPoints;

    /**
     * @inheritdoc IERC1271
     */
    function isValidSignature(bytes32 hash, bytes memory signature) external view returns (bytes4 magicValue) {
      ...
    }

    function computeDeterministicVaultAddress(DataPoint dp, address factory, bytes32 salt, bytes calldata data) external view returns (address) {
      ...
    }

    function datapointOfVault(address vault) public view returns (DataPoint) {
        DataPoint dp = vaultsToDataPoints[vault];
        if (DataPoint.unwrap(dp) == bytes32(0)) revert UnknownVault();
        return dp;
    }

    function dispatchRead(DataPoint dp, bytes4 operation, bytes calldata data) internal view virtual override returns (bytes memory) {
        if (operation == VAULT_FOR_SALT_SELECTOR) {
            bytes32 salt = abi.decode(data, (bytes32));
            return abi.encode(_vaultForSalt(dp, salt));
        } else if (operation == ALL_VAULTS_SELECTOR) {
            return abi.encode(_allVaults(dp));
        } else {
            revert UnknownReadOperation(operation);
        }
    }

    function dispatchWrite(DataPoint dp, bytes4 operation, bytes calldata data) internal virtual override returns (bytes memory) {
        if (operation == DEPLOY_VAULT_SELECTOR) {
            (address factory, bytes memory factoryData) = abi.decode(data, (address, bytes));
            return abi.encode(_deployVault(dp, factory, factoryData));
        } else if (operation == DEPLOY_DETERMINISTIC_VAULT_SELECTOR) {
            (address factory, bytes32 salt, bytes memory factoryData) = abi.decode(data, (address, bytes32, bytes));
            return abi.encode(_deployDeterministicVault(dp, factory, salt, factoryData));
        } else if (operation == GRANT_SIGNATURE_VALIDATION_SELECTOR) {
            (address vault, bytes32 hash, address signer) = abi.decode(data, (address, bytes32, address));
            _grantSignatureValidation(dp, vault, hash, signer);
            return "";
        } else if (operation == REVOKE_SIGNATURE_VALIDATION_SELECTOR) {
            (address vault, bytes32 hash, address signer) = abi.decode(data, (address, bytes32, address));
            _revokeSignatureValidation(dp, vault, hash, signer);
            return "";
        } else if (operation == VAULT_EXECUTE_SELECTOR) {
            (address vault, address target, bytes memory vaultCallData, uint256 value) = abi.decode(data, (address, address, bytes, uint256));
            return _vaultExecute(dp, vault, target, vaultCallData, value);
        } else {
            revert UnknownWriteOperation(operation);
        }
    }
  ...
}

Finally, we expose a user-facing example Data Manager contract for managing the Data Object. The Data Manager implements read() functions that check users’ balanceOf() directly from the Data Object, and write() functions that route the transfer() logic through the IODC implementation. Since multiple Data Managers can make use of the same Data Point, several concurrent Data Manager implementations can share a Data Point where they delegate their storage.

Example Fungible Token Data Manager implementation

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// OpenZeppelin dependencies assumed by this example; the ERC20* mixins below are part of the reference implementation
import "@openzeppelin/contracts/token/ERC20/IERC20.sol";
import "@openzeppelin/contracts/token/ERC20/extensions/IERC20Metadata.sol";
import "@openzeppelin/contracts/interfaces/draft-IERC6093.sol";
import "@openzeppelin/contracts/access/Ownable.sol";

contract ERC20DataManager is IERC20, IERC20Metadata, IERC20Errors, Ownable, ERC20Approvals, ERC20Transfers, ERC20Burnable, ERC20Mintable, ERC20Metadata {
    bytes4 internal BALANCE_OF_SELECTOR = bytes4(keccak256("balanceOf(address)"));
    bytes4 internal TOTAL_SUPPLY_SELECTOR = bytes4(keccak256("totalSupply()"));
    bytes4 internal MINT_SELECTOR = bytes4(keccak256("mint(address,uint256)"));
    bytes4 internal BURN_SELECTOR = bytes4(keccak256("burn(address,uint256)"));
    bytes4 internal TRANSFER_SELECTOR = bytes4(keccak256("transfer(address,address,uint256)"));

    DataPoint internal immutable datapoint;
    IDataObject public immutable fungibleDO;
    IODC public odc;
    mapping(address account => mapping(address spender => uint256)) private _allowances;

    constructor(
        bytes32 _datapoint,
        address _odc,
        address _fungibleDO,
        string memory name_,
        string memory symbol_
    ) Ownable(msg.sender) ERC20Metadata(name_, symbol_) {
        datapoint = DataPoint.wrap(_datapoint);
        odc = IODC(_odc);
        fungibleDO = IDataObject(_fungibleDO);
    }

    function totalSupply() external view override returns (uint256) {
        return abi.decode(fungibleDO.read(datapoint, TOTAL_SUPPLY_SELECTOR, ""), (uint256));
    }

    function balanceOf(address account) external view override returns (uint256) {
        return abi.decode(fungibleDO.read(datapoint, BALANCE_OF_SELECTOR, abi.encode(account)), (uint256));
    }

    function _checkMinter() internal view override {
        _checkOwner();
    }

    function _writeTransfer(address from, address to, uint256 amount) internal override {
        if (from == address(0)) {
            odc.write(address(fungibleDO), datapoint, MINT_SELECTOR, abi.encode(to, amount));
        } else if (to == address(0)) {
            odc.write(address(fungibleDO), datapoint, BURN_SELECTOR, abi.encode(from, amount));
        } else {
            odc.write(address(fungibleDO), datapoint, TRANSFER_SELECTOR, abi.encode(from, to, amount));
        }
    }
}

Security Considerations

Access control is separated into three layers:

  • Layer 1: The Data Point Registry allocates Data Points for Data Managers and manages ownership (admin/write rights) of Data Points.
  • Layer 2: The ODC smart contract implements Access Control by managing Approvals of Data Managers to Data Points. It uses the Data Point Registry to verify who can grant/revoke this access.
  • Layer 3: The Data Manager exposes functions that can perform write operations on the Data Point by calling the ODC implementation.

No further security considerations are derived specifically from this ERC.

Copyright and related rights waived via CC0.