From 86ca2b60527dff1f98c20e403a0c7bf1e1d4a812 Mon Sep 17 00:00:00 2001
From: Hari Dara
Date: Thu, 18 Dec 2025 15:47:54 +0530
Subject: [PATCH 1/7] HBASE-29368 [Feature] Key management for encryption at
rest
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
This PR implements the key management feature for HBase encryption at rest,
building on the API surface and refactoring introduced in the precursor PR (#7584).
Jira: [HBASE-29368](https://issues.apache.org/jira/browse/HBASE-29368)
Design doc: https://docs.google.com/document/d/1ToW_rveXHXUc1F6eFNQfu5LOeMAjzgq6FcYUDbdZrSM/edit?usp=sharing
Discussion thread: https://lists.apache.org/thread/q7g2rr2xcgl64rkn9j3mnokf6fvohp2y
This PR contains the cumulative changes from the feature branch, corresponding to the following sub-tasks:
1. [Phase 1: Key caching and minimal service](https://issues.apache.org/jira/browse/HBASE-29402)
2. [Phase 2: Integrate key management with existing encryption](https://issues.apache.org/jira/browse/HBASE-29495)
3. [Phase 2: Migration path from current encryption to managed encryption](https://issues.apache.org/jira/browse/HBASE-29617)
4. [Phase 2: Admin API to trigger System Key rotation detection as an alternative to failover](https://issues.apache.org/jira/browse/HBASE-29643)
5. [Phase 3: Additional key management APIs](https://issues.apache.org/jira/browse/HBASE-29666)
This feature introduces a comprehensive key management system that extends HBase's existing encryption-at-rest capabilities. The implementation provides enterprise-grade key lifecycle management, with support for key rotation, hierarchical namespace resolution for key lookup, key caching, and improved integration with external key management systems (KMS) to handle key life cycles and external key changes.
**1. Managed Keys Infrastructure**
- Introduction of the `ManagedKeyProvider` interface for pluggable key provider implementations, along the lines of the existing `KeyProvider` interface.
- The new interface can also return Data Encryption Keys (DEKs) and richer metadata about the keys.
- Comes with the default `ManagedKeyStoreKeyProvider` implementation using Java KeyStore, similar to the existing `KeyStoreKeyProvider`.
- Enables logical key isolation for multi-tenant scenarios through custodian identifiers (reserved for future use cases) and the special default global custodian.
- Hierarchical namespace resolution for DEKs with automatic fallback: explicit CF namespace attribute → constructed `table/family` namespace → table name → global namespace
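The fallback order above can be sketched in plain Java. The helper below is purely illustrative (the real logic lives in `KeyNamespaceUtil` and may differ in detail); the names and the lookup predicate are assumptions for the sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class NamespaceFallbackSketch {
  /**
   * Resolve the key namespace for a column family by trying candidates from
   * most to least specific: explicit CF attribute, "table/family", table name,
   * then the global namespace.
   */
  public static String resolveKeyNamespace(String explicitNs, String table, String family,
      Function<String, Boolean> hasKey, String globalNs) {
    List<String> candidates = Arrays.asList(
        explicitNs,            // explicit CF namespace attribute, may be null
        table + "/" + family,  // constructed table/family namespace
        table,                 // table-level namespace
        globalNs);             // global namespace as the last resort
    for (String ns : candidates) {
      if (ns != null && hasKey.apply(ns)) {
        return ns;
      }
    }
    return globalNs;
  }

  public static void main(String[] args) {
    // Only a table-level key exists, so resolution falls through to "orders".
    Map<String, String> keys = Map.of("orders", "dek-1", "*", "dek-global");
    System.out.println(resolveKeyNamespace(null, "orders", "cf1", keys::containsKey, "*"));
  }
}
```

Each level is consulted only when the more specific one has no key, so a single global key is enough to get started while per-table or per-family keys can be enabled later without config changes.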
**2. System Key (STK) Management**
- Cluster-wide system key for wrapping data encryption keys (DEKs). This is equivalent to the existing master key, but better managed and more operations-friendly.
- Secure storage in HDFS, with support for automatic key rotation at startup.
- Admin API to trigger key rotation and propagation to all RegionServers without requiring a rolling restart.
- Preserves the current double-wrapping architecture: DEKs are wrapped by the STK, and the STK is sourced from an external KMS.
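The double-wrapping scheme can be illustrated with plain JDK crypto. Note this is only a sketch of the two-level structure: HBase's actual wrapping goes through its `Encryption` utilities and protobuf-serialized wrapped keys, not raw `AESWrap`:

```java
import java.security.Key;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;

public class DoubleWrapSketch {
  // Wrap a DEK under the STK (RFC 3394 AES key wrap); only the wrapped bytes
  // plus key metadata need to be persisted (e.g. in hbase:keymeta).
  public static byte[] wrapKey(Key stk, Key dek) throws Exception {
    Cipher c = Cipher.getInstance("AESWrap");
    c.init(Cipher.WRAP_MODE, stk);
    return c.wrap(dek);
  }

  // On read, unwrap with the current (or a previously persisted) STK.
  public static Key unwrapKey(Key stk, byte[] wrapped) throws Exception {
    Cipher c = Cipher.getInstance("AESWrap");
    c.init(Cipher.UNWRAP_MODE, stk);
    return c.unwrap(wrapped, "AES", Cipher.SECRET_KEY);
  }

  public static void main(String[] args) throws Exception {
    KeyGenerator gen = KeyGenerator.getInstance("AES");
    gen.init(128);
    Key stk = gen.generateKey(); // stand-in for the STK from the external KMS
    Key dek = gen.generateKey(); // per-namespace data encryption key
    byte[] wrapped = wrapKey(stk, dek);
    Key recovered = unwrapKey(stk, wrapped);
    System.out.println(java.util.Arrays.equals(dek.getEncoded(), recovered.getEncoded()));
  }
}
```

The point of the indirection is that rotating the STK only requires re-wrapping small DEK blobs, never re-encrypting store files.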
**3. KeymetaAdmin API**
- `enableKeyManagement(keyCust, keyNamespace)` - Enable key management for a custodian/namespace pair
- `getManagedKeys(keyCust, keyNamespace)` - Query key status and metadata
- `rotateSTK()` - Check for and propagate new system keys
- `disableKeyManagement(keyCust, keyNamespace)` - Disable all the keys for a custodian/namespace
- `disableManagedKey(keyCust, keyNamespace, keyMetadataHash)` - Disable a specific key
- `rotateManagedKey(keyCust, keyNamespace)` - Rotate the active key
- `refreshManagedKeys(keyCust, keyNamespace)` - Refresh from external KMS to validate all the keys.
- Internal cache management operations for convenience and meeting SLAs.
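As a rough sketch of the intended lifecycle semantics of these operations (an in-memory stand-in, not the real `KeymetaAdmin` implementation; all types here are hypothetical and only the method names mirror the API above):

```java
import java.util.ArrayList;
import java.util.List;

public class KeymetaAdminFlowSketch {
  enum State { ACTIVE, INACTIVE, DISABLED, FAILED }

  static class ManagedKey {
    final String namespace;
    State state = State.ACTIVE;
    ManagedKey(String namespace) { this.namespace = namespace; }
  }

  // Minimal in-memory stand-in modeling enable/rotate/disable semantics.
  static class InMemoryKeymetaAdmin {
    final List<ManagedKey> keys = new ArrayList<>();

    ManagedKey enableKeyManagement(String ns) {
      ManagedKey k = new ManagedKey(ns);
      keys.add(k);
      return k;
    }

    // Rotation retires the current active key and activates a new one; the
    // old key stays around (INACTIVE) so existing files remain readable.
    ManagedKey rotateManagedKey(String ns) {
      keys.stream().filter(k -> k.namespace.equals(ns) && k.state == State.ACTIVE)
          .forEach(k -> k.state = State.INACTIVE);
      return enableKeyManagement(ns);
    }

    // Disabling affects every key for the custodian/namespace pair.
    void disableKeyManagement(String ns) {
      keys.stream().filter(k -> k.namespace.equals(ns)).forEach(k -> k.state = State.DISABLED);
    }
  }

  public static void main(String[] args) {
    InMemoryKeymetaAdmin admin = new InMemoryKeymetaAdmin();
    admin.enableKeyManagement("orders");
    admin.rotateManagedKey("orders");
    // After rotation: one INACTIVE (retired) key and one ACTIVE key.
    admin.keys.forEach(k -> System.out.println(k.namespace + " " + k.state));
  }
}
```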
**4. Persistent Key Metadata Storage**
- New system table `hbase:keymeta` for storing key metadata and state, which acts as an `L2` cache.
- Tracks key lifecycle: `ACTIVE`, `INACTIVE`, `DISABLED`, `FAILED` states
- Stores wrapped DEKs and metadata for key lookup without depending on external KMS.
- Optimized for high-priority access with in-memory column families
- Key metadata tracking with cryptographic hashes for integrity verification
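The role of the metadata hash can be sketched as follows. The real `ManagedKeyData.constructMetadataHash` may use a different digest; SHA-256 here is an assumption made for the sketch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MetadataHashSketch {
  // Illustrative stand-in for ManagedKeyData.constructMetadataHash: a stable
  // digest over the provider-supplied metadata string, usable as a compact
  // identifier for a specific key version (actual algorithm may differ).
  public static byte[] constructMetadataHash(String keyMetadata) throws Exception {
    return MessageDigest.getInstance("SHA-256")
        .digest(keyMetadata.getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) throws Exception {
    byte[] h1 = constructMetadataHash("{\"kms\":\"vault\",\"version\":7}");
    byte[] h2 = constructMetadataHash("{\"kms\":\"vault\",\"version\":7}");
    // Same metadata always yields the same hash, so the hash can address a
    // specific key version deterministically without storing key material.
    System.out.println(MessageDigest.isEqual(h1, h2) + " " + h1.length);
  }
}
```

Because the hash is deterministic and fixed-width, it works both as a row-key component and as an integrity check that stored metadata has not been tampered with.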
**5. Multi-Layer Caching**
- L1: In-memory Caffeine cache on RegionServers for hot key data
- L2: Keymeta table for persistent key metadata that is shared across all RegionServers.
- L3: Dynamic lookup from the external KMS as a fallback when a key is not found in L2.
- Cache invalidation mechanism for key rotation scenarios
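The three-layer lookup can be sketched with in-memory stand-ins (plain maps in place of the Caffeine cache, the `hbase:keymeta` table, and the KMS; all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class KeyLookupCascadeSketch {
  // L1: process-local map (stand-in for the Caffeine cache on a RegionServer).
  final Map<String, String> l1 = new HashMap<>();
  // L2: stand-in for the hbase:keymeta table shared by all RegionServers.
  final Map<String, String> l2 = new HashMap<>();
  // L3: stand-in for a dynamic fetch from the external KMS.
  final Function<String, Optional<String>> kms;

  KeyLookupCascadeSketch(Function<String, Optional<String>> kms) { this.kms = kms; }

  Optional<String> getKey(String namespace) {
    String hit = l1.get(namespace);
    if (hit != null) return Optional.of(hit);  // L1 hit: no I/O at all
    hit = l2.get(namespace);
    if (hit == null) {
      // L3 fallback: consult the KMS and persist the result into L2 so other
      // RegionServers do not have to repeat the external call.
      Optional<String> fetched = kms.apply(namespace);
      if (fetched.isEmpty()) return Optional.empty();
      hit = fetched.get();
      l2.put(namespace, hit);
    }
    l1.put(namespace, hit);  // promote into the local L1 cache
    return Optional.of(hit);
  }

  public static void main(String[] args) {
    KeyLookupCascadeSketch cache =
        new KeyLookupCascadeSketch(ns -> Optional.of("dek-for-" + ns));
    System.out.println(cache.getKey("orders").get()); // first call walks L3 → L2 → L1
    System.out.println(cache.l1.containsKey("orders"));
  }
}
```

Rotation-time invalidation then amounts to ejecting the affected entries from L1 (and updating L2), which is what the admin cache-management operations expose.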
**6. HBase Shell Integration**
- `enable_key_management` - Enable key management for a custodian and namespace
- `show_key_status` - Display key status and metadata
- `rotate_stk` - Trigger system key rotation
- `disable_key_management` - Disable key management for a custodian and namespace
- `disable_managed_key` - Disable a specific key
- `rotate_managed_key` - Rotate the active key
- `refresh_managed_keys` - Refresh all keys for a custodian and namespace
- **Backward Compatibility:** Changes are fully compatible with existing encryption-at-rest configuration
- **Gradual, step-by-step migration:** Well-defined migration path from the existing configuration to the new configuration
- **Performance:** Minimal overhead through efficient caching and lazy key loading
- **Security:** Cryptographic verification of key metadata, secure key wrapping
- **Operability:** Administrative tools for key life cycle and cache management
- **Extensibility:** Plugin architecture for custom key provider implementations
- **Testing:** Comprehensive unit and integration test coverage
The implementation follows a layered architecture:
1. **Provider Layer:** Pluggable `ManagedKeyProvider` for KMS integration
2. **Management Layer:** `KeymetaAdmin` API for administrative operations
3. **Persistence Layer:** `KeymetaTableAccessor` for metadata storage
4. **Cache Layer:** `ManagedKeyDataCache` and `SystemKeyCache` for performance
5. **Service Layer:** Coprocessor endpoints for client-server communication
I would particularly appreciate feedback on:
1. **API Design:** Is the `KeymetaAdmin` API intuitive and complete for common key management scenarios?
2. **Security Model:** Does the double-wrapping architecture (DEK wrapped by STK, STK from KMS) provide appropriate security guarantees?
3. **Performance:** Are there potential bottlenecks in the caching strategy or table access patterns?
4. **Operational Aspects:** Are the administrative commands sufficient for the needs of operations and monitoring?
5. **Testing Coverage:** Are there additional test scenarios we should cover?
6. **Documentation:** Is the design document clear? What additional documentation would be helpful?
7. **Compatibility:** Any concerns about interaction with existing HBase features?
After incorporating community feedback, I plan to:
1. Address any issues identified during review
2. Implement the work identified for future phases
3. Add additional documentation to the reference guide
This PR introduces changes across multiple modules, so I recommend focusing on these **core components** first:
**Core Architecture:**
1. Design document (linked above) - architectural overview
2. `ManagedKeyProvider`, `KeymetaAdmin`, `ManagedKeyData` interfaces (hbase-common)
3. `ManagedKeys.proto` - protocol definitions
4. `HMaster` and misc. procedure changes - initialization of `keymeta` in a predictable order
5. `FixedFileTrailer` + reader/writer changes - encode/decode additional encryption key in store files
**Key Implementation:**
1. `KeymetaAdminImpl`, `KeymetaTableAccessor`, `ManagedKeyUtils`, `SystemKeyManager`, `SystemKeyAccessor` - admin operations and persistence
2. `ManagedKeyDataCache`, `SystemKeyCache` - caching layer
3. `SecurityUtil` - encryption context creation
**Client & Shell:**
1. `KeymetaAdminClient` - client API
2. Shell commands and Ruby wrappers
**Tests & Examples:**
1. `TestKeymetaAdminImpl`, `TestManagedKeymeta` - for usage patterns
2. `key_provider_keymeta_migration_test.rb` - E2E migration steps
---
.rubocop.yml | 9 +
.../org/apache/hadoop/hbase/client/Admin.java | 24 +
.../hbase/client/AdminOverAsyncAdmin.java | 17 +
.../hadoop/hbase/client/AsyncAdmin.java | 23 +
.../hadoop/hbase/client/AsyncHBaseAdmin.java | 17 +
.../hbase/client/RawAsyncHBaseAdmin.java | 86 ++
.../hbase/keymeta/KeymetaAdminClient.java | 112 +-
.../org/apache/hadoop/hbase/HConstants.java | 51 +
.../hadoop/hbase/io/crypto/Encryption.java | 61 +-
.../hbase/io/crypto/KeyStoreKeyProvider.java | 10 +-
.../hbase/io/crypto/ManagedKeyData.java | 295 ++++-
.../hbase/io/crypto/ManagedKeyProvider.java | 99 ++
.../hbase/io/crypto/ManagedKeyState.java | 103 ++
.../io/crypto/ManagedKeyStoreKeyProvider.java | 145 +++
.../hbase/io/crypto/MockAesKeyProvider.java | 24 +-
.../io/crypto/tls/HBaseHostnameVerifier.java | 4 +-
.../hadoop/hbase/util/CommonFSUtils.java | 37 +-
.../apache/hadoop/hbase/util/GsonUtil.java | 10 +
.../hbase/io/crypto/KeymetaTestUtils.java | 216 ++++
.../io/crypto/MockManagedKeyProvider.java | 182 +++
.../hbase/io/crypto/TestKeyProvider.java | 14 +
.../io/crypto/TestKeyStoreKeyProvider.java | 97 +-
.../hbase/io/crypto/TestManagedKeyData.java | 210 ++++
.../io/crypto/TestManagedKeyProvider.java | 546 ++++++++
.../hbase/io/crypto/TestManagedKeyState.java | 113 ++
.../apache/hadoop/hbase/HBaseServerBase.java | 32 +-
.../hbase/client/AsyncRegionServerAdmin.java | 5 +
.../org/apache/hadoop/hbase/io/HFileLink.java | 9 +
.../apache/hadoop/hbase/io/hfile/HFile.java | 14 +-
.../hbase/io/hfile/HFileWriterImpl.java | 37 +-
.../hbase/keymeta/KeyManagementBase.java | 110 ++
.../hbase/keymeta/KeyManagementService.java | 79 +-
.../hbase/keymeta/KeyManagementUtils.java | 285 +++++
.../hbase/keymeta/KeyNamespaceUtil.java | 93 ++
.../hbase/keymeta/KeymetaAdminImpl.java | 246 +++-
.../hbase/keymeta/KeymetaServiceEndpoint.java | 312 +++++
.../hbase/keymeta/KeymetaTableAccessor.java | 428 ++++++-
.../hbase/keymeta/ManagedKeyDataCache.java | 309 ++++-
.../hbase/keymeta/SystemKeyAccessor.java | 127 +-
.../hadoop/hbase/keymeta/SystemKeyCache.java | 74 +-
.../apache/hadoop/hbase/master/HMaster.java | 102 +-
.../hadoop/hbase/master/MasterFileSystem.java | 4 +
.../hadoop/hbase/master/SystemKeyManager.java | 124 ++
.../hadoop/hbase/regionserver/HRegion.java | 18 +-
.../hbase/regionserver/HRegionServer.java | 5 +-
.../hadoop/hbase/regionserver/HStoreFile.java | 17 +-
.../hbase/regionserver/RSRpcServices.java | 69 +-
.../hbase/regionserver/StoreEngine.java | 3 +-
.../hadoop/hbase/security/SecurityUtil.java | 251 +++-
.../hadoop/hbase/util/EncryptionTest.java | 19 +-
.../hbase/keymeta/DummyKeyProvider.java | 37 +
.../ManagedKeyProviderInterceptor.java | 91 ++
.../hbase/keymeta/ManagedKeyTestBase.java | 128 ++
.../hbase/keymeta/TestKeyManagementBase.java | 86 ++
.../keymeta/TestKeyManagementService.java | 106 ++
.../hbase/keymeta/TestKeyManagementUtils.java | 181 +++
.../hbase/keymeta/TestKeyNamespaceUtil.java | 126 ++
.../hbase/keymeta/TestKeymetaEndpoint.java | 561 +++++++++
.../keymeta/TestKeymetaTableAccessor.java | 591 +++++++++
.../keymeta/TestManagedKeyDataCache.java | 862 +++++++++++++
.../hbase/keymeta/TestManagedKeymeta.java | 443 +++++++
.../hbase/keymeta/TestSystemKeyCache.java | 310 +++++
.../hbase/master/TestKeymetaAdminImpl.java | 1077 ++++++++++++++++
.../hbase/master/TestMasterFailover.java | 145 ++-
.../TestSystemKeyAccessorAndManager.java | 523 ++++++++
.../hbase/master/TestSystemKeyManager.java | 115 ++
.../hbase/regionserver/TestRSRpcServices.java | 315 +++++
.../hbase/rsgroup/VerifyingRSGroupAdmin.java | 16 +
.../hbase/security/TestSecurityUtil.java | 1105 +++++++++++++++++
.../hadoop/hbase/util/TestEncryptionTest.java | 76 ++
hbase-shell/src/main/ruby/hbase/hbase.rb | 28 +-
.../src/main/ruby/hbase/keymeta_admin.rb | 95 ++
hbase-shell/src/main/ruby/hbase_constants.rb | 1 +
hbase-shell/src/main/ruby/shell.rb | 21 +
hbase-shell/src/main/ruby/shell/commands.rb | 4 +
.../shell/commands/disable_key_management.rb | 45 +
.../shell/commands/disable_managed_key.rb | 45 +
.../shell/commands/enable_key_management.rb | 45 +
.../shell/commands/keymeta_command_base.rb | 45 +
.../shell/commands/refresh_managed_keys.rb | 45 +
.../ruby/shell/commands/rotate_managed_key.rb | 45 +
.../main/ruby/shell/commands/rotate_stk.rb | 51 +
.../ruby/shell/commands/show_key_status.rb | 45 +
.../hbase/client/TestKeymetaAdminShell.java | 143 +++
.../hbase/client/TestKeymetaMigration.java | 52 +
.../client/TestKeymetaMockProviderShell.java | 83 ++
.../shell/admin_keymeta_mock_provider_test.rb | 143 +++
.../src/test/ruby/shell/admin_keymeta_test.rb | 193 +++
.../shell/encrypted_table_keymeta_test.rb | 177 +++
.../key_provider_keymeta_migration_test.rb | 663 ++++++++++
.../rotate_stk_keymeta_mock_provider_test.rb | 59 +
.../hbase/thrift2/client/ThriftAdmin.java | 19 +
92 files changed, 13902 insertions(+), 316 deletions(-)
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyProvider.java
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyState.java
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyStoreKeyProvider.java
create mode 100644 hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/KeymetaTestUtils.java
create mode 100644 hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/MockManagedKeyProvider.java
create mode 100644 hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyData.java
create mode 100644 hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyProvider.java
create mode 100644 hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyState.java
create mode 100644 hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementBase.java
create mode 100644 hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementUtils.java
create mode 100644 hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyNamespaceUtil.java
create mode 100644 hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaServiceEndpoint.java
create mode 100644 hbase-server/src/main/java/org/apache/hadoop/hbase/master/SystemKeyManager.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/DummyKeyProvider.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementBase.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java
create mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java
create mode 100644 hbase-shell/src/main/ruby/hbase/keymeta_admin.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/disable_key_management.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/disable_managed_key.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/enable_key_management.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/refresh_managed_keys.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/rotate_managed_key.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/rotate_stk.rb
create mode 100644 hbase-shell/src/main/ruby/shell/commands/show_key_status.rb
create mode 100644 hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaAdminShell.java
create mode 100644 hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMigration.java
create mode 100644 hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java
create mode 100644 hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb
create mode 100644 hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb
create mode 100644 hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb
create mode 100644 hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb
create mode 100644 hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb
diff --git a/.rubocop.yml b/.rubocop.yml
index f877a052eea6..e1eb10a9245b 100644
--- a/.rubocop.yml
+++ b/.rubocop.yml
@@ -9,3 +9,11 @@ Layout/LineLength:
Metrics/MethodLength:
Max: 75
+
+GlobalVars:
+ AllowedVariables:
+ - $CUST1_ENCODED
+ - $CUST1_ALIAS
+ - $GLOB_CUST_ENCODED
+ - $TEST
+ - $TEST_CLUSTER
\ No newline at end of file
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
index 65b3abcd413c..5c5b0efd3b7f 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
@@ -2705,4 +2705,28 @@ List getLogEntries(Set serverNames, String logType, Server
@InterfaceAudience.Private
void restoreBackupSystemTable(String snapshotName) throws IOException;
+
+ /**
+ * Refresh the system key cache on all specified region servers.
+ * @param regionServers the list of region servers to refresh the system key cache on
+ */
+ void refreshSystemKeyCacheOnServers(List<ServerName> regionServers) throws IOException;
+
+ /**
+ * Eject a specific managed key entry from the managed key data cache on all specified region
+ * servers.
+ * @param regionServers the list of region servers to eject the managed key entry from
+ * @param keyCustodian the key custodian
+ * @param keyNamespace the key namespace
+ * @param keyMetadata the key metadata
+ */
+ void ejectManagedKeyDataCacheEntryOnServers(List<ServerName> regionServers, byte[] keyCustodian,
+ String keyNamespace, String keyMetadata) throws IOException;
+
+ /**
+ * Clear all entries in the managed key data cache on all specified region servers without having
+ * to restart the process.
+ * @param regionServers the list of region servers to clear the managed key data cache on
+ */
+ void clearManagedKeyDataCacheOnServers(List<ServerName> regionServers) throws IOException;
}
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdminOverAsyncAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdminOverAsyncAdmin.java
index 7117fd4fd33f..b13084f08b60 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdminOverAsyncAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AdminOverAsyncAdmin.java
@@ -1157,4 +1157,21 @@ public List getCachedFilesList(ServerName serverName) throws IOException
public void restoreBackupSystemTable(String snapshotName) throws IOException {
get(admin.restoreBackupSystemTable(snapshotName));
}
+
+ @Override
+ public void refreshSystemKeyCacheOnServers(List<ServerName> regionServers) throws IOException {
+ get(admin.refreshSystemKeyCacheOnServers(regionServers));
+ }
+
+ @Override
+ public void ejectManagedKeyDataCacheEntryOnServers(List<ServerName> regionServers,
+ byte[] keyCustodian, String keyNamespace, String keyMetadata) throws IOException {
+ get(admin.ejectManagedKeyDataCacheEntryOnServers(regionServers, keyCustodian, keyNamespace,
+ keyMetadata));
+ }
+
+ @Override
+ public void clearManagedKeyDataCacheOnServers(List<ServerName> regionServers) throws IOException {
+ get(admin.clearManagedKeyDataCacheOnServers(regionServers));
+ }
}
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
index 56211cedc493..d0778ecdf580 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
@@ -1892,4 +1892,27 @@ CompletableFuture> getLogEntries(Set serverNames, Str
@InterfaceAudience.Private
CompletableFuture restoreBackupSystemTable(String snapshotName);
+
+ /**
+ * Refresh the system key cache on all specified region servers.
+ * @param regionServers the list of region servers to refresh the system key cache on
+ */
+ CompletableFuture<Void> refreshSystemKeyCacheOnServers(List<ServerName> regionServers);
+
+ /**
+ * Eject a specific managed key entry from the managed key data cache on all specified region
+ * servers.
+ * @param regionServers the list of region servers to eject the managed key entry from
+ * @param keyCustodian the key custodian
+ * @param keyNamespace the key namespace
+ * @param keyMetadata the key metadata
+ */
+ CompletableFuture<Void> ejectManagedKeyDataCacheEntryOnServers(List<ServerName> regionServers,
+ byte[] keyCustodian, String keyNamespace, String keyMetadata);
+
+ /**
+ * Clear all entries in the managed key data cache on all specified region servers.
+ * @param regionServers the list of region servers to clear the managed key data cache on
+ */
+ CompletableFuture<Void> clearManagedKeyDataCacheOnServers(List<ServerName> regionServers);
}
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
index 8132b184809c..3eadf1becdd0 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
@@ -696,6 +696,23 @@ public CompletableFuture updateConfiguration(String groupName) {
return wrap(rawAdmin.updateConfiguration(groupName));
}
+ @Override
+ public CompletableFuture<Void> refreshSystemKeyCacheOnServers(List<ServerName> regionServers) {
+ return wrap(rawAdmin.refreshSystemKeyCacheOnServers(regionServers));
+ }
+
+ @Override
+ public CompletableFuture<Void> ejectManagedKeyDataCacheEntryOnServers(
+ List<ServerName> regionServers, byte[] keyCustodian, String keyNamespace, String keyMetadata) {
+ return wrap(rawAdmin.ejectManagedKeyDataCacheEntryOnServers(regionServers, keyCustodian,
+ keyNamespace, keyMetadata));
+ }
+
+ @Override
+ public CompletableFuture<Void> clearManagedKeyDataCacheOnServers(List<ServerName> regionServers) {
+ return wrap(rawAdmin.clearManagedKeyDataCacheOnServers(regionServers));
+ }
+
@Override
public CompletableFuture rollWALWriter(ServerName serverName) {
return wrap(rawAdmin.rollWALWriter(serverName));
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
index ea51d27b99a4..ec1aa4736df5 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
@@ -76,6 +76,7 @@
import org.apache.hadoop.hbase.client.replication.TableCFs;
import org.apache.hadoop.hbase.client.security.SecurityCapability;
import org.apache.hadoop.hbase.exceptions.DeserializationException;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.ipc.HBaseRpcController;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.quotas.QuotaFilter;
@@ -150,7 +151,10 @@
import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.UpdateConfigurationRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.AdminProtos.UpdateConfigurationResponse;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.EmptyMsg;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.LastHighestWalFilenum;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyEntryRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.NameStringPair;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ProcedureDescription;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.RegionSpecifier.RegionSpecifierType;
@@ -4697,4 +4701,86 @@ MasterProtos.RestoreBackupSystemTableResponse> procedureCall(request,
MasterProtos.RestoreBackupSystemTableResponse::getProcId,
new RestoreBackupSystemTableProcedureBiConsumer());
}
+
+ @Override
+ public CompletableFuture<Void> refreshSystemKeyCacheOnServers(List<ServerName> regionServers) {
+ CompletableFuture<Void> future = new CompletableFuture<>();
+ List<CompletableFuture<Void>> futures =
+ regionServers.stream().map(this::refreshSystemKeyCache).collect(Collectors.toList());
+ addListener(CompletableFuture.allOf(futures.toArray(new CompletableFuture<?>[0])),
+ (result, err) -> {
+ if (err != null) {
+ future.completeExceptionally(err);
+ } else {
+ future.complete(result);
+ }
+ });
+ return future;
+ }
+
+ private CompletableFuture<Void> refreshSystemKeyCache(ServerName serverName) {
+ return this.<Void> newAdminCaller()
+ .action((controller, stub) -> this.<EmptyMsg, EmptyMsg, Void> adminCall(controller, stub,
+ EmptyMsg.getDefaultInstance(),
+ (s, c, req, done) -> s.refreshSystemKeyCache(controller, req, done), resp -> null))
+ .serverName(serverName).call();
+ }
+
+ @Override
+ public CompletableFuture<Void> ejectManagedKeyDataCacheEntryOnServers(
+ List<ServerName> regionServers, byte[] keyCustodian, String keyNamespace, String keyMetadata) {
+ CompletableFuture<Void> future = new CompletableFuture<>();
+ // Create the request once instead of repeatedly for each server
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata);
+ ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(ManagedKeyRequest.newBuilder().setKeyCust(ByteString.copyFrom(keyCustodian))
+ .setKeyNamespace(keyNamespace).build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyMetadataHash)).build();
+ List<CompletableFuture<Void>> futures =
+ regionServers.stream().map(serverName -> ejectManagedKeyDataCacheEntry(serverName, request))
+ .collect(Collectors.toList());
+ addListener(CompletableFuture.allOf(futures.toArray(new CompletableFuture<?>[0])),
+ (result, err) -> {
+ if (err != null) {
+ future.completeExceptionally(err);
+ } else {
+ future.complete(result);
+ }
+ });
+ return future;
+ }
+
+ private CompletableFuture<Void> ejectManagedKeyDataCacheEntry(ServerName serverName,
+ ManagedKeyEntryRequest request) {
+ return this.<Void> newAdminCaller()
+ .action((controller, stub) -> this.<ManagedKeyEntryRequest, EmptyMsg, Void> adminCall(
+ controller, stub, request,
+ (s, c, req, done) -> s.ejectManagedKeyDataCacheEntry(controller, req, done),
+ resp -> null))
+ .serverName(serverName).call();
+ }
+
+ @Override
+ public CompletableFuture<Void> clearManagedKeyDataCacheOnServers(List<ServerName> regionServers) {
+ CompletableFuture<Void> future = new CompletableFuture<>();
+ List<CompletableFuture<Void>> futures =
+ regionServers.stream().map(this::clearManagedKeyDataCache).collect(Collectors.toList());
+ addListener(CompletableFuture.allOf(futures.toArray(new CompletableFuture<?>[0])),
+ (result, err) -> {
+ if (err != null) {
+ future.completeExceptionally(err);
+ } else {
+ future.complete(result);
+ }
+ });
+ return future;
+ }
+
+ private CompletableFuture<Void> clearManagedKeyDataCache(ServerName serverName) {
+ return this.<Void> newAdminCaller()
+ .action((controller, stub) -> this.<EmptyMsg, EmptyMsg, Void> adminCall(controller, stub,
+ EmptyMsg.getDefaultInstance(),
+ (s, c, req, done) -> s.clearManagedKeyDataCache(controller, req, done), resp -> null))
+ .serverName(serverName).call();
+ }
}
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java
index 522ec006a853..9801750e5b7d 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java
@@ -19,71 +19,153 @@
import java.io.IOException;
import java.security.KeyException;
+import java.util.ArrayList;
import java.util.List;
+import org.apache.commons.lang3.NotImplementedException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
import org.apache.yetus.audience.InterfaceAudience;
-/**
- * STUB IMPLEMENTATION - Feature not yet complete. This class will be fully implemented in
- * HBASE-29368 feature PR.
- */
+import org.apache.hbase.thirdparty.com.google.protobuf.ByteString;
+import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
+
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.BooleanMsg;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.EmptyMsg;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.GetManagedKeysResponse;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyEntryRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyResponse;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ManagedKeysProtos;
+
-@InterfaceAudience.Private
+@InterfaceAudience.Public
public class KeymetaAdminClient implements KeymetaAdmin {
+ private ManagedKeysProtos.ManagedKeysService.BlockingInterface stub;
public KeymetaAdminClient(Connection conn) throws IOException {
- // Stub constructor
+ this.stub =
+ ManagedKeysProtos.ManagedKeysService.newBlockingStub(conn.getAdmin().coprocessorService());
}
@Override
public ManagedKeyData enableKeyManagement(byte[] keyCust, String keyNamespace)
- throws IOException, KeyException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ throws IOException {
+ try {
+ ManagedKeyResponse response = stub.enableKeyManagement(null, ManagedKeyRequest.newBuilder()
+ .setKeyCust(ByteString.copyFrom(keyCust)).setKeyNamespace(keyNamespace).build());
+ return generateKeyData(response);
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
}
@Override
public List<ManagedKeyData> getManagedKeys(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ try {
+ GetManagedKeysResponse statusResponse =
+ stub.getManagedKeys(null, ManagedKeyRequest.newBuilder()
+ .setKeyCust(ByteString.copyFrom(keyCust)).setKeyNamespace(keyNamespace).build());
+ return generateKeyDataList(statusResponse);
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
}
@Override
public boolean rotateSTK() throws IOException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ try {
+ BooleanMsg response = stub.rotateSTK(null, EmptyMsg.getDefaultInstance());
+ return response.getBoolMsg();
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
}
@Override
public void ejectManagedKeyDataCacheEntry(byte[] keyCustodian, String keyNamespace,
String keyMetadata) throws IOException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ throw new NotImplementedException(
+ "ejectManagedKeyDataCacheEntry not supported in KeymetaAdminClient");
}
@Override
public void clearManagedKeyDataCache() throws IOException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ throw new NotImplementedException(
+ "clearManagedKeyDataCache not supported in KeymetaAdminClient");
}
@Override
public ManagedKeyData disableKeyManagement(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ try {
+ ManagedKeyResponse response = stub.disableKeyManagement(null, ManagedKeyRequest.newBuilder()
+ .setKeyCust(ByteString.copyFrom(keyCust)).setKeyNamespace(keyNamespace).build());
+ return generateKeyData(response);
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
}
@Override
public ManagedKeyData disableManagedKey(byte[] keyCust, String keyNamespace,
byte[] keyMetadataHash) throws IOException, KeyException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ try {
+ ManagedKeyResponse response = stub.disableManagedKey(null,
+ ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(ManagedKeyRequest.newBuilder().setKeyCust(ByteString.copyFrom(keyCust))
+ .setKeyNamespace(keyNamespace).build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyMetadataHash)).build());
+ return generateKeyData(response);
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
}
@Override
public ManagedKeyData rotateManagedKey(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ try {
+ ManagedKeyResponse response = stub.rotateManagedKey(null, ManagedKeyRequest.newBuilder()
+ .setKeyCust(ByteString.copyFrom(keyCust)).setKeyNamespace(keyNamespace).build());
+ return generateKeyData(response);
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
}
@Override
public void refreshManagedKeys(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
- throw new UnsupportedOperationException("KeymetaAdmin feature not yet implemented");
+ try {
+ stub.refreshManagedKeys(null, ManagedKeyRequest.newBuilder()
+ .setKeyCust(ByteString.copyFrom(keyCust)).setKeyNamespace(keyNamespace).build());
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
+ }
+
+ private static List<ManagedKeyData> generateKeyDataList(GetManagedKeysResponse stateResponse) {
+ List<ManagedKeyData> keyStates = new ArrayList<>();
+ for (ManagedKeyResponse state : stateResponse.getStateList()) {
+ keyStates.add(generateKeyData(state));
+ }
+ return keyStates;
+ }
+
+ private static ManagedKeyData generateKeyData(ManagedKeyResponse response) {
+ // Use hash-only constructor for client-side ManagedKeyData
+ byte[] keyMetadataHash =
+ response.hasKeyMetadataHash() ? response.getKeyMetadataHash().toByteArray() : null;
+ if (keyMetadataHash == null) {
+ return new ManagedKeyData(response.getKeyCust().toByteArray(), response.getKeyNamespace(),
+ ManagedKeyState.forValue((byte) response.getKeyState().getNumber()));
+ } else {
+ return new ManagedKeyData(response.getKeyCust().toByteArray(), response.getKeyNamespace(),
+ ManagedKeyState.forValue((byte) response.getKeyState().getNumber()), keyMetadataHash,
+ response.getRefreshTimestamp());
+ }
}
}
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
index 9af711e7edfd..4260fd906028 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -343,6 +343,7 @@ public enum OperationStatusCode {
/** Parameter name for HBase instance root directory */
public static final String HBASE_DIR = "hbase.rootdir";
+ public static final String HBASE_ORIGINAL_DIR = "hbase.originalRootdir";
/** Parameter name for HBase client IPC pool type */
public static final String HBASE_CLIENT_IPC_POOL_TYPE = "hbase.client.ipc.pool.type";
@@ -1193,6 +1194,11 @@ public enum OperationStatusCode {
/** Temporary directory used for table creation and deletion */
public static final String HBASE_TEMP_DIRECTORY = ".tmp";
+ /**
+ * Directory used for storing master keys for the cluster
+ */
+ public static final String SYSTEM_KEYS_DIRECTORY = ".system_keys";
+ public static final String SYSTEM_KEY_FILE_PREFIX = "system_key.";
/**
* The period (in milliseconds) between computing region server point in time metrics
*/
@@ -1282,6 +1288,14 @@ public enum OperationStatusCode {
public static final String CRYPTO_KEYPROVIDER_PARAMETERS_KEY =
"hbase.crypto.keyprovider.parameters";
+ /** Configuration key for the managed crypto key provider, a class name */
+ public static final String CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY =
+ "hbase.crypto.managed.keyprovider";
+
+ /** Configuration key for the managed crypto key provider parameters */
+ public static final String CRYPTO_MANAGED_KEYPROVIDER_PARAMETERS_KEY =
+ "hbase.crypto.managed.keyprovider.parameters";
+
/** Configuration key for the name of the master key for the cluster, a string */
public static final String CRYPTO_MASTERKEY_NAME_CONF_KEY = "hbase.crypto.master.key.name";
@@ -1305,6 +1319,43 @@ public enum OperationStatusCode {
/** Configuration key for enabling WAL encryption, a boolean */
public static final String ENABLE_WAL_ENCRYPTION = "hbase.regionserver.wal.encryption";
+ /**
+ * Property used by ManagedKeyStoreKeyProvider class to set the alias that identifies the current
+ * system key.
+ */
+ public static final String CRYPTO_MANAGED_KEY_STORE_SYSTEM_KEY_NAME_CONF_KEY =
+ "hbase.crypto.managed_key_store.system.key.name";
+ public static final String CRYPTO_MANAGED_KEY_STORE_CONF_KEY_PREFIX =
+ "hbase.crypto.managed_key_store.cust.";
+
+ /** Enables or disables the key management feature. */
+ public static final String CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY =
+ "hbase.crypto.managed_keys.enabled";
+ public static final boolean CRYPTO_MANAGED_KEYS_DEFAULT_ENABLED = false;
+
+ /**
+ * Enables or disables key lookup during data path as an alternative to static injection of keys
+ * using control path.
+ */
+ public static final String CRYPTO_MANAGED_KEYS_DYNAMIC_LOOKUP_ENABLED_CONF_KEY =
+ "hbase.crypto.managed_keys.dynamic_lookup.enabled";
+ public static final boolean CRYPTO_MANAGED_KEYS_DYNAMIC_LOOKUP_DEFAULT_ENABLED = true;
+
+ /** Maximum number of entries in the managed key data cache. */
+ public static final String CRYPTO_MANAGED_KEYS_L1_CACHE_MAX_ENTRIES_CONF_KEY =
+ "hbase.crypto.managed_keys.l1_cache.max_entries";
+ public static final int CRYPTO_MANAGED_KEYS_L1_CACHE_MAX_ENTRIES_DEFAULT = 1000;
+
+ /** Maximum number of entries in the managed key active keys cache. */
+ public static final String CRYPTO_MANAGED_KEYS_L1_ACTIVE_CACHE_MAX_NS_ENTRIES_CONF_KEY =
+ "hbase.crypto.managed_keys.l1_active_cache.max_ns_entries";
+ public static final int CRYPTO_MANAGED_KEYS_L1_ACTIVE_CACHE_MAX_NS_ENTRIES_DEFAULT = 100;
+
+ /** Enables or disables local key generation per file. */
+ public static final String CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_ENABLED_CONF_KEY =
+ "hbase.crypto.managed_keys.local_key_gen_per_file.enabled";
+ public static final boolean CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_DEFAULT_ENABLED = false;
+
/** Configuration key for setting RPC codec class name */
public static final String RPC_CODEC_CONF_KEY = "hbase.client.rpc.codec";
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java
index 4d25cb750b23..56a6ad211731 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Encryption.java
@@ -29,12 +29,14 @@
import java.util.Arrays;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
+import java.util.function.BiFunction;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.io.IOUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.io.crypto.aes.AES;
import org.apache.hadoop.hbase.util.Bytes;
@@ -554,29 +556,50 @@ public static CipherProvider getCipherProvider(Configuration conf) {
}
}
- static final Map<Pair<String, String>, KeyProvider> keyProviderCache = new ConcurrentHashMap<>();
+ static final Map<Pair<String, String>, Object> keyProviderCache = new ConcurrentHashMap<>();
- public static KeyProvider getKeyProvider(Configuration conf) {
- String providerClassName =
- conf.get(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY, KeyStoreKeyProvider.class.getName());
- String providerParameters = conf.get(HConstants.CRYPTO_KEYPROVIDER_PARAMETERS_KEY, "");
- try {
- Pair<String, String> providerCacheKey = new Pair<>(providerClassName, providerParameters);
- KeyProvider provider = keyProviderCache.get(providerCacheKey);
- if (provider != null) {
- return provider;
- }
- provider = (KeyProvider) ReflectionUtils
- .newInstance(getClassLoaderForClass(KeyProvider.class).loadClass(providerClassName), conf);
- provider.init(providerParameters);
- if (LOG.isDebugEnabled()) {
- LOG.debug("Installed " + providerClassName + " into key provider cache");
+ private static Object createProvider(final Configuration conf, String classNameKey,
+ String parametersKey, Class<?> defaultProviderClass, ClassLoader classLoaderForClass,
+ BiFunction
+ */
+@CoreCoprocessor
+@InterfaceAudience.Private
+public class KeymetaServiceEndpoint implements MasterCoprocessor {
+ private static final Logger LOG = LoggerFactory.getLogger(KeymetaServiceEndpoint.class);
+
+ private MasterServices master = null;
+
+ private final ManagedKeysService managedKeysService = new KeymetaAdminServiceImpl();
+
+ /**
+ * Starts the coprocessor by initializing the reference to the
+ * {@link org.apache.hadoop.hbase.master.MasterServices} instance.
+ * @param env The coprocessor environment.
+ * @throws IOException If an error occurs during initialization.
+ */
+ @Override
+ public void start(CoprocessorEnvironment env) throws IOException {
+ if (!(env instanceof HasMasterServices)) {
+ throw new IOException("Does not implement HasMasterServices");
+ }
+
+ master = ((HasMasterServices) env).getMasterServices();
+ }
+
+ /**
+ * Returns an iterable of the available coprocessor services, which includes the
+ * {@link ManagedKeysService} implemented by
+ * {@link KeymetaServiceEndpoint.KeymetaAdminServiceImpl}.
+ * @return An iterable of the available coprocessor services.
+ */
+ @Override
+ public Iterable<Service> getServices() {
+ return Collections.singleton(managedKeysService);
+ }
+
+ /**
+ * The implementation of the {@link ManagedKeysProtos.ManagedKeysService} interface, which
+ * provides the actual method implementations for enabling key management.
+ */
+ @InterfaceAudience.Private
+ public class KeymetaAdminServiceImpl extends ManagedKeysService {
+
+ /**
+ * Enables key management for a given custodian and namespace, as specified in the provided
+ * request.
+ * @param controller The RPC controller.
+ * @param request The request containing the custodian and namespace specifications.
+ * @param done The callback to be invoked with the response.
+ */
+ @Override
+ public void enableKeyManagement(RpcController controller, ManagedKeyRequest request,
+ RpcCallback<ManagedKeyResponse> done) {
+ ManagedKeyResponse response = null;
+ ManagedKeyResponse.Builder builder = ManagedKeyResponse.newBuilder();
+ try {
+ initManagedKeyResponseBuilder(controller, request, builder);
+ ManagedKeyData managedKeyState = master.getKeymetaAdmin()
+ .enableKeyManagement(request.getKeyCust().toByteArray(), request.getKeyNamespace());
+ response = generateKeyStateResponse(managedKeyState, builder);
+ } catch (IOException | KeyException e) {
+ CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ builder.setKeyState(ManagedKeyState.KEY_FAILED);
+ }
+ if (response == null) {
+ response = builder.build();
+ }
+ done.run(response);
+ }
+
+ @Override
+ public void getManagedKeys(RpcController controller, ManagedKeyRequest request,
+ RpcCallback<GetManagedKeysResponse> done) {
+ GetManagedKeysResponse keyStateResponse = null;
+ ManagedKeyResponse.Builder builder = ManagedKeyResponse.newBuilder();
+ try {
+ initManagedKeyResponseBuilder(controller, request, builder);
+ List<ManagedKeyData> managedKeyStates = master.getKeymetaAdmin()
+ .getManagedKeys(request.getKeyCust().toByteArray(), request.getKeyNamespace());
+ keyStateResponse = generateKeyStateResponse(managedKeyStates, builder);
+ } catch (IOException | KeyException e) {
+ CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ }
+ if (keyStateResponse == null) {
+ keyStateResponse = GetManagedKeysResponse.getDefaultInstance();
+ }
+ done.run(keyStateResponse);
+ }
+
+ /**
+ * Rotates the system key (STK) by checking for a new key and propagating it to all region
+ * servers.
+ * @param controller The RPC controller.
+ * @param request The request (empty).
+ * @param done The callback to be invoked with the response.
+ */
+ @Override
+ public void rotateSTK(RpcController controller, EmptyMsg request,
+ RpcCallback<BooleanMsg> done) {
+ boolean rotated;
+ try {
+ rotated = master.getKeymetaAdmin().rotateSTK();
+ } catch (IOException e) {
+ CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ rotated = false;
+ }
+ done.run(BooleanMsg.newBuilder().setBoolMsg(rotated).build());
+ }
+
+ /**
+ * Disables all managed keys for a given custodian and namespace.
+ * @param controller The RPC controller.
+ * @param request The request containing the custodian and namespace specifications.
+ * @param done The callback to be invoked with the response.
+ */
+ @Override
+ public void disableKeyManagement(RpcController controller, ManagedKeyRequest request,
+ RpcCallback<ManagedKeyResponse> done) {
+ ManagedKeyResponse response = null;
+ ManagedKeyResponse.Builder builder = ManagedKeyResponse.newBuilder();
+ try {
+ initManagedKeyResponseBuilder(controller, request, builder);
+ ManagedKeyData managedKeyState = master.getKeymetaAdmin()
+ .disableKeyManagement(request.getKeyCust().toByteArray(), request.getKeyNamespace());
+ response = generateKeyStateResponse(managedKeyState, builder);
+ } catch (IOException | KeyException e) {
+ CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ builder.setKeyState(ManagedKeyState.KEY_FAILED);
+ }
+ if (response == null) {
+ response = builder.build();
+ }
+ done.run(response);
+ }
+
+ /**
+ * Disables a specific managed key for a given custodian, namespace, and metadata.
+ * @param controller The RPC controller.
+ * @param request The request containing the custodian, namespace, and metadata
+ * specifications.
+ * @param done The callback to be invoked with the response.
+ */
+ @Override
+ public void disableManagedKey(RpcController controller, ManagedKeyEntryRequest request,
+ RpcCallback<ManagedKeyResponse> done) {
+ ManagedKeyResponse response = null;
+ ManagedKeyResponse.Builder builder = ManagedKeyResponse.newBuilder();
+ try {
+ initManagedKeyResponseBuilder(controller, request.getKeyCustNs(), builder);
+ // Convert hash to metadata by looking up the key first
+ byte[] keyMetadataHash = request.getKeyMetadataHash().toByteArray();
+ byte[] keyCust = request.getKeyCustNs().getKeyCust().toByteArray();
+ String keyNamespace = request.getKeyCustNs().getKeyNamespace();
+
+ ManagedKeyData managedKeyState =
+ master.getKeymetaAdmin().disableManagedKey(keyCust, keyNamespace, keyMetadataHash);
+ response = generateKeyStateResponse(managedKeyState, builder);
+ } catch (IOException | KeyException e) {
+ CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ builder.setKeyState(ManagedKeyState.KEY_FAILED);
+ }
+ if (response == null) {
+ response = builder.build();
+ }
+ done.run(response);
+ }
+
+ /**
+ * Rotates the managed key for a given custodian and namespace.
+ * @param controller The RPC controller.
+ * @param request The request containing the custodian and namespace specifications.
+ * @param done The callback to be invoked with the response.
+ */
+ @Override
+ public void rotateManagedKey(RpcController controller, ManagedKeyRequest request,
+ RpcCallback<ManagedKeyResponse> done) {
+ ManagedKeyResponse response = null;
+ ManagedKeyResponse.Builder builder = ManagedKeyResponse.newBuilder();
+ try {
+ initManagedKeyResponseBuilder(controller, request, builder);
+ ManagedKeyData managedKeyState = master.getKeymetaAdmin()
+ .rotateManagedKey(request.getKeyCust().toByteArray(), request.getKeyNamespace());
+ response = generateKeyStateResponse(managedKeyState, builder);
+ } catch (IOException | KeyException e) {
+ CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ builder.setKeyState(ManagedKeyState.KEY_FAILED);
+ }
+ if (response == null) {
+ response = builder.build();
+ }
+ done.run(response);
+ }
+
+ /**
+ * Refreshes all managed keys for a given custodian and namespace.
+ * @param controller The RPC controller.
+ * @param request The request containing the custodian and namespace specifications.
+ * @param done The callback to be invoked with the response.
+ */
+ @Override
+ public void refreshManagedKeys(RpcController controller, ManagedKeyRequest request,
+ RpcCallback<EmptyMsg> done) {
+ try {
+ // Do this just for validation.
+ initManagedKeyResponseBuilder(controller, request, ManagedKeyResponse.newBuilder());
+ master.getKeymetaAdmin().refreshManagedKeys(request.getKeyCust().toByteArray(),
+ request.getKeyNamespace());
+ } catch (IOException | KeyException e) {
+ CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ }
+ done.run(EmptyMsg.getDefaultInstance());
+ }
+ }
+
+ @InterfaceAudience.Private
+ public static ManagedKeyResponse.Builder initManagedKeyResponseBuilder(RpcController controller,
+ ManagedKeyRequest request, ManagedKeyResponse.Builder builder) throws IOException {
+ // We need to set this in advance to make sure builder has non-null values set.
+ builder.setKeyCust(request.getKeyCust());
+ builder.setKeyNamespace(request.getKeyNamespace());
+ if (request.getKeyCust().isEmpty()) {
+ throw new IOException("key_cust must not be empty");
+ }
+ if (request.getKeyNamespace().isEmpty()) {
+ throw new IOException("key_namespace must not be empty");
+ }
+ return builder;
+ }
+
+ // Assumes that all ManagedKeyData objects belong to the same custodian and namespace.
+ @InterfaceAudience.Private
+ public static GetManagedKeysResponse generateKeyStateResponse(
+ List<ManagedKeyData> managedKeyStates, ManagedKeyResponse.Builder builder) {
+ GetManagedKeysResponse.Builder responseBuilder = GetManagedKeysResponse.newBuilder();
+ for (ManagedKeyData keyData : managedKeyStates) {
+ responseBuilder.addState(generateKeyStateResponse(keyData, builder));
+ }
+ return responseBuilder.build();
+ }
+
+ private static ManagedKeyResponse generateKeyStateResponse(ManagedKeyData keyData,
+ ManagedKeyResponse.Builder builder) {
+ builder
+ .setKeyState(ManagedKeyState.forNumber(keyData.getKeyState().getExternalState().getVal()))
+ .setRefreshTimestamp(keyData.getRefreshTimestamp())
+ .setKeyNamespace(keyData.getKeyNamespace());
+
+ // Set metadata hash if available
+ byte[] metadataHash = keyData.getKeyMetadataHash();
+ if (metadataHash != null) {
+ builder.setKeyMetadataHash(ByteString.copyFrom(metadataHash));
+ }
+
+ return builder.build();
+ }
+}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java
index e979a20243a5..4a291ff39d89 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java
@@ -17,20 +17,436 @@
*/
package org.apache.hadoop.hbase.keymeta;
+import java.io.IOException;
+import java.security.Key;
+import java.security.KeyException;
+import java.util.ArrayList;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Set;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.Server;
import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Row;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.filter.PrefixFilter;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.security.EncryptionUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
/**
- * STUB IMPLEMENTATION - Feature not yet complete. This class will be fully implemented in
- * HBASE-29368 feature PR.
+ * Accessor for keymeta table as part of key management.
*/
@InterfaceAudience.Private
-public class KeymetaTableAccessor {
+public class KeymetaTableAccessor extends KeyManagementBase {
+ private static final String KEY_META_INFO_FAMILY_STR = "info";
+
+ public static final byte[] KEY_META_INFO_FAMILY = Bytes.toBytes(KEY_META_INFO_FAMILY_STR);
+
+ public static final TableName KEY_META_TABLE_NAME =
+ TableName.valueOf(NamespaceDescriptor.SYSTEM_NAMESPACE_NAME_STR, "keymeta");
+
+ public static final String DEK_METADATA_QUAL_NAME = "m";
+ public static final byte[] DEK_METADATA_QUAL_BYTES = Bytes.toBytes(DEK_METADATA_QUAL_NAME);
+
+ public static final String DEK_CHECKSUM_QUAL_NAME = "c";
+ public static final byte[] DEK_CHECKSUM_QUAL_BYTES = Bytes.toBytes(DEK_CHECKSUM_QUAL_NAME);
+
+ public static final String DEK_WRAPPED_BY_STK_QUAL_NAME = "w";
+ public static final byte[] DEK_WRAPPED_BY_STK_QUAL_BYTES =
+ Bytes.toBytes(DEK_WRAPPED_BY_STK_QUAL_NAME);
- public static final TableName KEY_META_TABLE_NAME = TableName.valueOf("hbase:keymeta");
- public static final TableName KEYMETA_TABLEDESC = KEY_META_TABLE_NAME;
+ public static final String STK_CHECKSUM_QUAL_NAME = "s";
+ public static final byte[] STK_CHECKSUM_QUAL_BYTES = Bytes.toBytes(STK_CHECKSUM_QUAL_NAME);
+
+ public static final String REFRESHED_TIMESTAMP_QUAL_NAME = "t";
+ public static final byte[] REFRESHED_TIMESTAMP_QUAL_BYTES =
+ Bytes.toBytes(REFRESHED_TIMESTAMP_QUAL_NAME);
+
+ public static final String KEY_STATE_QUAL_NAME = "k";
+ public static final byte[] KEY_STATE_QUAL_BYTES = Bytes.toBytes(KEY_STATE_QUAL_NAME);
public static final TableDescriptorBuilder TABLE_DESCRIPTOR_BUILDER =
- TableDescriptorBuilder.newBuilder(KEY_META_TABLE_NAME);
+ TableDescriptorBuilder.newBuilder(KEY_META_TABLE_NAME).setRegionReplication(1)
+ .setPriority(HConstants.SYSTEMTABLE_QOS)
+ .setColumnFamily(ColumnFamilyDescriptorBuilder
+ .newBuilder(KeymetaTableAccessor.KEY_META_INFO_FAMILY)
+ .setScope(HConstants.REPLICATION_SCOPE_LOCAL).setMaxVersions(1).setInMemory(true).build());
+
+ private Server server;
+
+ public KeymetaTableAccessor(Server server) {
+ super(server.getKeyManagementService());
+ this.server = server;
+ }
+
+ public Server getServer() {
+ return server;
+ }
+
+ /**
+ * Add the specified key to the keymeta table.
+ * @param keyData The key data.
+ * @throws IOException when there is an underlying IOException.
+ */
+ public void addKey(ManagedKeyData keyData) throws IOException {
+ assertKeyManagementEnabled();
+ List<Put> puts = new ArrayList<>(2);
+ if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ puts.add(addMutationColumns(new Put(constructRowKeyForCustNamespace(keyData)), keyData));
+ }
+ final Put putForMetadata =
+ addMutationColumns(new Put(constructRowKeyForMetadata(keyData)), keyData);
+ puts.add(putForMetadata);
+ Connection connection = getServer().getConnection();
+ try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
+ table.put(puts);
+ }
+ }
+
+ /**
+ * Get all the keys for the specified keyCust and keyNamespace.
+ * @param keyCust The key custodian.
+ * @param keyNamespace The namespace
+ * @param includeMarkers Whether to include key management state markers in the result.
+ * @return a list of key data, one for each key, can be empty when none were found.
+ * @throws IOException when there is an underlying IOException.
+ * @throws KeyException when there is an underlying KeyException.
+ */
+ public List<ManagedKeyData> getAllKeys(byte[] keyCust, String keyNamespace,
+ boolean includeMarkers) throws IOException, KeyException {
+ assertKeyManagementEnabled();
+ Connection connection = getServer().getConnection();
+ byte[] prefixForScan = constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ PrefixFilter prefixFilter = new PrefixFilter(prefixForScan);
+ Scan scan = new Scan();
+ scan.setFilter(prefixFilter);
+ scan.addFamily(KEY_META_INFO_FAMILY);
+
+ try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
+ ResultScanner scanner = table.getScanner(scan);
+ Set<ManagedKeyData> allKeys = new LinkedHashSet<>();
+ for (Result result : scanner) {
+ ManagedKeyData keyData =
+ parseFromResult(getKeyManagementService(), keyCust, keyNamespace, result);
+ if (keyData != null && (includeMarkers || keyData.getKeyMetadata() != null)) {
+ allKeys.add(keyData);
+ }
+ }
+ return allKeys.stream().toList();
+ }
+ }
+
+ /**
+ * Get the key management state marker for the specified keyCust and key_namespace.
+ * @param keyCust The key custodian.
+ * @param keyNamespace The namespace
+ * @return the key management state marker data, or null if no key management state marker found
+ * @throws IOException when there is an underlying IOException.
+ * @throws KeyException when there is an underlying KeyException.
+ */
+ public ManagedKeyData getKeyManagementStateMarker(byte[] keyCust, String keyNamespace)
+ throws IOException, KeyException {
+ return getKey(keyCust, keyNamespace, null);
+ }
+
+ /**
+ * Get the specific key identified by keyCust, keyNamespace and keyMetadataHash.
+ * @param keyCust The key custodian.
+ * @param keyNamespace The namespace.
+ * @param keyMetadataHash The metadata hash.
+ * @return the key or {@code null}
+ * @throws IOException when there is an underlying IOException.
+ * @throws KeyException when there is an underlying KeyException.
+ */
+ public ManagedKeyData getKey(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash)
+ throws IOException, KeyException {
+ assertKeyManagementEnabled();
+ Connection connection = getServer().getConnection();
+ try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
+ byte[] rowKey = keyMetadataHash != null
+ ? constructRowKeyForMetadata(keyCust, keyNamespace, keyMetadataHash)
+ : constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ Result result = table.get(new Get(rowKey));
+ return parseFromResult(getKeyManagementService(), keyCust, keyNamespace, result);
+ }
+ }
+
+ /**
+ * Disables a key by removing the wrapped key and updating its state to DISABLED.
+ * @param keyData The key data to disable.
+ * @throws IOException when there is an underlying IOException.
+ */
+ public void disableKey(ManagedKeyData keyData) throws IOException {
+ assertKeyManagementEnabled();
+ Preconditions.checkNotNull(keyData.getKeyMetadata(), "Key metadata cannot be null");
+ byte[] keyCust = keyData.getKeyCustodian();
+ String keyNamespace = keyData.getKeyNamespace();
+ byte[] keyMetadataHash = keyData.getKeyMetadataHash();
+
+ List<Mutation> mutations = new ArrayList<>(3); // Max possible mutations.
+
+ if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ // Delete the CustNamespace row
+ byte[] rowKeyForCustNamespace = constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ mutations.add(new Delete(rowKeyForCustNamespace).setDurability(Durability.SKIP_WAL)
+ .setPriority(HConstants.SYSTEMTABLE_QOS));
+ }
+
+ // Update state to DISABLED and timestamp on Metadata row
+ byte[] rowKeyForMetadata = constructRowKeyForMetadata(keyCust, keyNamespace, keyMetadataHash);
+ addMutationsForKeyDisabled(mutations, rowKeyForMetadata, keyData.getKeyMetadata(),
+ keyData.getKeyState() == ManagedKeyState.ACTIVE
+ ? ManagedKeyState.ACTIVE_DISABLED
+ : ManagedKeyState.INACTIVE_DISABLED,
+ keyData.getKeyState());
+
+ Connection connection = getServer().getConnection();
+ try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
+ table.batch(mutations, null);
+ } catch (InterruptedException e) {
+ Thread.currentThread().interrupt();
+ throw new IOException("Interrupted while disabling key", e);
+ }
+ }
+
+ private void addMutationsForKeyDisabled(List<Mutation> mutations, byte[] rowKey, String metadata,
+ ManagedKeyState targetState, ManagedKeyState currentState) {
+ Put put = new Put(rowKey);
+ if (metadata != null) {
+ put.addColumn(KEY_META_INFO_FAMILY, DEK_METADATA_QUAL_BYTES, Bytes.toBytes(metadata));
+ }
+ Put putForState = addMutationColumnsForState(put, targetState);
+ mutations.add(putForState);
+
+ // Delete wrapped key columns from Metadata row
+ if (currentState == null || ManagedKeyState.isUsable(currentState)) {
+ Delete deleteWrappedKey = new Delete(rowKey).setDurability(Durability.SKIP_WAL)
+ .setPriority(HConstants.SYSTEMTABLE_QOS)
+ .addColumns(KEY_META_INFO_FAMILY, DEK_CHECKSUM_QUAL_BYTES)
+ .addColumns(KEY_META_INFO_FAMILY, DEK_WRAPPED_BY_STK_QUAL_BYTES)
+ .addColumns(KEY_META_INFO_FAMILY, STK_CHECKSUM_QUAL_BYTES);
+ mutations.add(deleteWrappedKey);
+ }
+ }
+
+ /**
+ * Adds a key management state marker to the specified (keyCust, keyNamespace) combination. It
+ * also adds delete markers for the columns unrelated to the marker, in case the state is
+ * transitioning from ACTIVE to DISABLED or FAILED. This method is only used for setting the state
+ * to DISABLED or FAILED. For ACTIVE state, the addKey() method implicitly adds the marker.
+ * @param keyCust The key custodian.
+ * @param keyNamespace The namespace.
+ * @param state The key management state to add.
+ * @throws IOException when there is an underlying IOException.
+ */
+ public void addKeyManagementStateMarker(byte[] keyCust, String keyNamespace,
+ ManagedKeyState state) throws IOException {
+ assertKeyManagementEnabled();
+ Preconditions.checkArgument(ManagedKeyState.isKeyManagementState(state),
+ "State must be a key management state, got: " + state);
+ List<Mutation> mutations = new ArrayList<>(2);
+ byte[] rowKey = constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ addMutationsForKeyDisabled(mutations, rowKey, null, state, null);
+ Connection connection = getServer().getConnection();
+ try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
+ table.batch(mutations, null);
+ } catch (InterruptedException e) {
+ Thread.currentThread().interrupt();
+ throw new IOException("Interrupted while adding key management state marker", e);
+ }
+ }
+
+ /**
+ * Updates the state of a key to one of the ACTIVE or INACTIVE states. The current state can be
+ * any state, but if it is the same, this becomes a no-op.
+ * @param keyData The key data.
+ * @param newState The new state (must be ACTIVE or INACTIVE).
+ * @throws IOException when there is an underlying IOException.
+ */
+ public void updateActiveState(ManagedKeyData keyData, ManagedKeyState newState)
+ throws IOException {
+ assertKeyManagementEnabled();
+ ManagedKeyState currentState = keyData.getKeyState();
+
+ // Validate states
+ Preconditions.checkArgument(ManagedKeyState.isUsable(newState),
+ "New state must be ACTIVE or INACTIVE, got: " + newState);
+ // Even for FAILED keys, we expect the metadata to be non-null.
+ Preconditions.checkNotNull(keyData.getKeyMetadata(), "Key metadata cannot be null");
+
+ // No-op if states are the same
+ if (currentState == newState) {
+ return;
+ }
+
+ List<Mutation> mutations = new ArrayList<>(2);
+ byte[] rowKeyForCustNamespace = constructRowKeyForCustNamespace(keyData);
+ byte[] rowKeyForMetadata = constructRowKeyForMetadata(keyData);
+
+ // First take care of the active key specific row.
+ if (newState == ManagedKeyState.ACTIVE) {
+ // INACTIVE -> ACTIVE: Add CustNamespace row and update Metadata row
+ mutations.add(addMutationColumns(new Put(rowKeyForCustNamespace), keyData));
+ }
+ if (currentState == ManagedKeyState.ACTIVE) {
+ mutations.add(new Delete(rowKeyForCustNamespace).setDurability(Durability.SKIP_WAL)
+ .setPriority(HConstants.SYSTEMTABLE_QOS));
+ }
+
+ // Now take care of the key specific row (for point gets by metadata).
+ if (!ManagedKeyState.isUsable(currentState)) {
+ // For DISABLED and FAILED keys, we don't expect cached key material, so add all columns
+ // similar to what addKey() does.
+ mutations.add(addMutationColumns(new Put(rowKeyForMetadata), keyData));
+ } else {
+ // We expect cached key material, so only update the state and timestamp columns.
+ mutations.add(addMutationColumnsForState(new Put(rowKeyForMetadata), newState));
+ }
+
+ Connection connection = getServer().getConnection();
+ try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
+ table.batch(mutations, null);
+ } catch (InterruptedException e) {
+ Thread.currentThread().interrupt();
+ throw new IOException("Interrupted while updating active state", e);
+ }
+ }
+
+ private Put addMutationColumnsForState(Put put, ManagedKeyState newState) {
+ return addMutationColumnsForState(put, newState, EnvironmentEdgeManager.currentTime());
+ }
+
+ /**
+ * Add only state and timestamp columns to the given Put.
+ */
+ private Put addMutationColumnsForState(Put put, ManagedKeyState newState, long timestamp) {
+ return put.setDurability(Durability.SKIP_WAL).setPriority(HConstants.SYSTEMTABLE_QOS)
+ .addColumn(KEY_META_INFO_FAMILY, KEY_STATE_QUAL_BYTES, new byte[] { newState.getVal() })
+ .addColumn(KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES, Bytes.toBytes(timestamp));
+ }
+
+ /**
+ * Add the mutation columns to the given Put that are derived from the keyData.
+ */
+ private Put addMutationColumns(Put put, ManagedKeyData keyData) throws IOException {
+ ManagedKeyData latestSystemKey =
+ getKeyManagementService().getSystemKeyCache().getLatestSystemKey();
+ if (keyData.getTheKey() != null) {
+ byte[] dekWrappedBySTK = EncryptionUtil.wrapKey(getConfiguration(), null, keyData.getTheKey(),
+ latestSystemKey.getTheKey());
+ put
+ .addColumn(KEY_META_INFO_FAMILY, DEK_CHECKSUM_QUAL_BYTES,
+ Bytes.toBytes(keyData.getKeyChecksum()))
+ .addColumn(KEY_META_INFO_FAMILY, DEK_WRAPPED_BY_STK_QUAL_BYTES, dekWrappedBySTK)
+ .addColumn(KEY_META_INFO_FAMILY, STK_CHECKSUM_QUAL_BYTES,
+ Bytes.toBytes(latestSystemKey.getKeyChecksum()));
+ }
+ Put result =
+ addMutationColumnsForState(put, keyData.getKeyState(), keyData.getRefreshTimestamp());
+
+ // Only add metadata column if metadata is not null
+ String metadata = keyData.getKeyMetadata();
+ if (metadata != null) {
+ result.addColumn(KEY_META_INFO_FAMILY, DEK_METADATA_QUAL_BYTES, Bytes.toBytes(metadata));
+ }
+
+ return result;
+ }
+
+ @InterfaceAudience.Private
+ public static byte[] constructRowKeyForMetadata(ManagedKeyData keyData) {
+ Preconditions.checkNotNull(keyData.getKeyMetadata(), "Key metadata cannot be null");
+ return constructRowKeyForMetadata(keyData.getKeyCustodian(), keyData.getKeyNamespace(),
+ keyData.getKeyMetadataHash());
+ }
+
+ @InterfaceAudience.Private
+ public static byte[] constructRowKeyForMetadata(byte[] keyCust, String keyNamespace,
+ byte[] keyMetadataHash) {
+ return Bytes.add(constructRowKeyForCustNamespace(keyCust, keyNamespace), keyMetadataHash);
+ }
+
+ @InterfaceAudience.Private
+ public static byte[] constructRowKeyForCustNamespace(ManagedKeyData keyData) {
+ return constructRowKeyForCustNamespace(keyData.getKeyCustodian(), keyData.getKeyNamespace());
+ }
+
+ @InterfaceAudience.Private
+ public static byte[] constructRowKeyForCustNamespace(byte[] keyCust, String keyNamespace) {
+ int custLength = keyCust.length;
+ return Bytes.add(Bytes.toBytes(custLength), keyCust, Bytes.toBytes(keyNamespace));
+ }
+
+ @InterfaceAudience.Private
+ public static ManagedKeyData parseFromResult(KeyManagementService keyManagementService,
+ byte[] keyCust, String keyNamespace, Result result) throws IOException, KeyException {
+ if (result == null || result.isEmpty()) {
+ return null;
+ }
+ ManagedKeyState keyState =
+ ManagedKeyState.forValue(result.getValue(KEY_META_INFO_FAMILY, KEY_STATE_QUAL_BYTES)[0]);
+ String dekMetadata =
+ Bytes.toString(result.getValue(KEY_META_INFO_FAMILY, DEK_METADATA_QUAL_BYTES));
+ byte[] dekWrappedByStk = result.getValue(KEY_META_INFO_FAMILY, DEK_WRAPPED_BY_STK_QUAL_BYTES);
+ if (
+ (keyState == ManagedKeyState.ACTIVE || keyState == ManagedKeyState.INACTIVE)
+ && dekWrappedByStk == null
+ ) {
+ throw new IOException(keyState + " key must have a wrapped key");
+ }
+ Key dek = null;
+ if (dekWrappedByStk != null) {
+ long stkChecksum =
+ Bytes.toLong(result.getValue(KEY_META_INFO_FAMILY, STK_CHECKSUM_QUAL_BYTES));
+ ManagedKeyData clusterKey =
+ keyManagementService.getSystemKeyCache().getSystemKeyByChecksum(stkChecksum);
+ if (clusterKey == null) {
+ LOG.error("Dropping key with metadata: {} as STK with checksum: {} is unavailable",
+ dekMetadata, stkChecksum);
+ return null;
+ }
+ dek = EncryptionUtil.unwrapKey(keyManagementService.getConfiguration(), null, dekWrappedByStk,
+ clusterKey.getTheKey());
+ }
+ long refreshedTimestamp =
+ Bytes.toLong(result.getValue(KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES));
+ ManagedKeyData dekKeyData;
+ if (dekMetadata != null) {
+ dekKeyData =
+ new ManagedKeyData(keyCust, keyNamespace, dek, keyState, dekMetadata, refreshedTimestamp);
+ if (dek != null) {
+ long dekChecksum =
+ Bytes.toLong(result.getValue(KEY_META_INFO_FAMILY, DEK_CHECKSUM_QUAL_BYTES));
+ if (dekKeyData.getKeyChecksum() != dekChecksum) {
+ LOG.error(
+ "Dropping key, current key checksum: {} didn't match the expected checksum: {}"
+ + " for key with metadata: {}",
+ dekKeyData.getKeyChecksum(), dekChecksum, dekMetadata);
+ dekKeyData = null;
+ }
+ }
+ } else {
+ // Key management marker.
+ dekKeyData = new ManagedKeyData(keyCust, keyNamespace, keyState, refreshedTimestamp);
+ }
+ return dekKeyData;
+ }
}
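Editorial note on the row-key helpers above: the custodian is length-prefixed so that different (custodian, namespace) splits of the same byte sequence cannot collide, and the metadata row key simply appends the metadata hash to that prefix. A minimal standalone sketch of the same layout (class and method names here are illustrative, not the HBase ones):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RowKeySketch {
  // Layout: [4-byte big-endian custodian length][custodian bytes][namespace bytes]
  public static byte[] custNamespaceKey(byte[] cust, String namespace) {
    byte[] ns = namespace.getBytes(StandardCharsets.UTF_8);
    return ByteBuffer.allocate(4 + cust.length + ns.length)
        .putInt(cust.length).put(cust).put(ns).array();
  }

  // The metadata row key appends the metadata hash to the prefix above.
  public static byte[] metadataKey(byte[] cust, String namespace, byte[] metadataHash) {
    byte[] prefix = custNamespaceKey(cust, namespace);
    byte[] out = new byte[prefix.length + metadataHash.length];
    System.arraycopy(prefix, 0, out, 0, prefix.length);
    System.arraycopy(metadataHash, 0, out, prefix.length, metadataHash.length);
    return out;
  }
}
```

Without the length prefix, custodian "ab" with namespace "c" and custodian "a" with namespace "bc" would encode to the same row key; with it they differ.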
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java
index 546be2fcc192..f93706690ded 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java
@@ -17,40 +17,323 @@
*/
package org.apache.hadoop.hbase.keymeta;
+import com.github.benmanes.caffeine.cache.Cache;
+import com.github.benmanes.caffeine.cache.Caffeine;
import java.io.IOException;
import java.security.KeyException;
+import java.util.Objects;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.function.Function;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
/**
- * STUB IMPLEMENTATION - Feature not yet complete. This class will be fully implemented in
- * HBASE-29368 feature PR.
+ * In-memory cache for ManagedKeyData entries, using key metadata hash as the cache key. Uses two
+ * independent Caffeine caches: one for general key data and one for active keys only with
+ * hierarchical structure for efficient single key retrieval.
*/
@InterfaceAudience.Private
-public class ManagedKeyDataCache {
- public ManagedKeyDataCache(Configuration conf, KeymetaAdmin admin) {
- // Stub constructor - does nothing
+public class ManagedKeyDataCache extends KeyManagementBase {
+ private static final Logger LOG = LoggerFactory.getLogger(ManagedKeyDataCache.class);
+
+ private Cache<Bytes, ManagedKeyData> cacheByMetadataHash; // Key is Bytes wrapper around hash
+ private Cache<ActiveKeysCacheKey, ManagedKeyData> activeKeysCache;
+ private final KeymetaTableAccessor keymetaAccessor;
+
+ /**
+ * Composite key for active keys cache containing custodian and namespace. NOTE: Pair won't work
+ * out of the box because byte[] relies on identity-based equals/hashCode.
+ */
+ @InterfaceAudience.LimitedPrivate({ HBaseInterfaceAudience.UNITTEST })
+ public static class ActiveKeysCacheKey {
+ private final byte[] custodian;
+ private final String namespace;
+
+ public ActiveKeysCacheKey(byte[] custodian, String namespace) {
+ this.custodian = custodian;
+ this.namespace = namespace;
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ if (this == obj) {
+ return true;
+ }
+ if (obj == null || getClass() != obj.getClass()) {
+ return false;
+ }
+ ActiveKeysCacheKey cacheKey = (ActiveKeysCacheKey) obj;
+ return Bytes.equals(custodian, cacheKey.custodian)
+ && Objects.equals(namespace, cacheKey.namespace);
+ }
+
+ @Override
+ public int hashCode() {
+ return Objects.hash(Bytes.hashCode(custodian), namespace);
+ }
}
- public ManagedKeyData getActiveEntry(byte[] keyCustodian, String keyNamespace) {
- return null;
+ /**
+ * Constructs the ManagedKeyDataCache with the given configuration and keymeta accessor. When
+ * keymetaAccessor is null, L2 lookup is disabled and dynamic lookup is enabled.
+ * @param conf The configuration, can't be null.
+ * @param keymetaAccessor The keymeta accessor, can be null.
+ */
+ public ManagedKeyDataCache(Configuration conf, KeymetaTableAccessor keymetaAccessor) {
+ super(conf);
+ this.keymetaAccessor = keymetaAccessor;
+ if (keymetaAccessor == null) {
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_DYNAMIC_LOOKUP_ENABLED_CONF_KEY, true);
+ }
+
+ int maxEntries = conf.getInt(HConstants.CRYPTO_MANAGED_KEYS_L1_CACHE_MAX_ENTRIES_CONF_KEY,
+ HConstants.CRYPTO_MANAGED_KEYS_L1_CACHE_MAX_ENTRIES_DEFAULT);
+ int activeKeysMaxEntries =
+ conf.getInt(HConstants.CRYPTO_MANAGED_KEYS_L1_ACTIVE_CACHE_MAX_NS_ENTRIES_CONF_KEY,
+ HConstants.CRYPTO_MANAGED_KEYS_L1_ACTIVE_CACHE_MAX_NS_ENTRIES_DEFAULT);
+ this.cacheByMetadataHash = Caffeine.newBuilder().maximumSize(maxEntries).build();
+ this.activeKeysCache = Caffeine.newBuilder().maximumSize(activeKeysMaxEntries).build();
+ }
+
+ /**
+ * Retrieves an entry from the cache if it already exists; otherwise null is returned. No
+ * attempt will be made to load from L2 or the provider.
+ * @return the corresponding ManagedKeyData entry, or null if not found
+ */
+ public ManagedKeyData getEntry(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash)
+ throws IOException, KeyException {
+ Bytes metadataHashKey = new Bytes(keyMetadataHash);
+ // Return the entry if it exists in the generic cache or active keys cache, otherwise return
+ // null.
+ ManagedKeyData entry = cacheByMetadataHash.get(metadataHashKey, hashKey -> {
+ return getFromActiveKeysCache(keyCust, keyNamespace, keyMetadataHash);
+ });
+ return entry;
}
- public ManagedKeyData getEntry(byte[] keyCustodian, String keyNamespace, String keyMetadata,
+ /**
+ * Retrieves an entry from the cache, loading it from L2 if KeymetaTableAccessor is available.
+ * When L2 is not available, it will try to load from provider, unless dynamic lookup is disabled.
+ * @param keyCust the key custodian
+ * @param keyNamespace the key namespace
+ * @param keyMetadata the key metadata of the entry to be retrieved
+ * @param wrappedKey The DEK key material encrypted with the corresponding KEK, if available.
+ * @return the corresponding ManagedKeyData entry, or null if not found
+ * @throws IOException if an error occurs while loading from KeymetaTableAccessor
+ * @throws KeyException if an error occurs while loading from KeymetaTableAccessor
+ */
+ public ManagedKeyData getEntry(byte[] keyCust, String keyNamespace, String keyMetadata,
byte[] wrappedKey) throws IOException, KeyException {
+ // Compute hash and use it as cache key
+ byte[] metadataHashBytes = ManagedKeyData.constructMetadataHash(keyMetadata);
+ Bytes metadataHashKey = new Bytes(metadataHashBytes);
+
+ ManagedKeyData entry = cacheByMetadataHash.get(metadataHashKey, hashKey -> {
+ // First check if it's in the active keys cache
+ ManagedKeyData keyData = getFromActiveKeysCache(keyCust, keyNamespace, metadataHashBytes);
+
+ // Try to load from L2
+ if (keyData == null && keymetaAccessor != null) {
+ try {
+ keyData = keymetaAccessor.getKey(keyCust, keyNamespace, metadataHashBytes);
+ } catch (IOException | KeyException e) {
+ LOG.warn(
+ "Failed to load key from L2 for (custodian: {}, namespace: {}) with metadata hash: {}",
+ ManagedKeyProvider.encodeToStr(keyCust), keyNamespace,
+ ManagedKeyProvider.encodeToStr(metadataHashBytes), e);
+ }
+ }
+
+ // If not found in L2 and dynamic lookup is enabled, try with Key Provider
+ if (keyData == null && isDynamicLookupEnabled()) {
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ try {
+ keyData = KeyManagementUtils.retrieveKey(getKeyProvider(), keymetaAccessor, encKeyCust,
+ keyCust, keyNamespace, keyMetadata, wrappedKey);
+ } catch (IOException | KeyException | RuntimeException e) {
+ LOG.warn(
+ "Failed to retrieve key from provider for (custodian: {}, namespace: {}) with "
+ + "metadata hash: {}", ManagedKeyProvider.encodeToStr(keyCust), keyNamespace,
+ ManagedKeyProvider.encodeToStr(metadataHashBytes), e);
+ }
+ }
+
+ if (keyData == null) {
+ keyData =
+ new ManagedKeyData(keyCust, keyNamespace, null, ManagedKeyState.FAILED, keyMetadata);
+ }
+
+ // Also update activeKeysCache if relevant and is missing.
+ if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ activeKeysCache.asMap().putIfAbsent(new ActiveKeysCacheKey(keyCust, keyNamespace), keyData);
+ }
+
+ return keyData;
+ });
+
+ // Verify custodian/namespace match to guard against hash collisions
+ if (entry != null && ManagedKeyState.isUsable(entry.getKeyState())) {
+ if (
+ Bytes.equals(entry.getKeyCustodian(), keyCust)
+ && entry.getKeyNamespace().equals(keyNamespace)
+ ) {
+ return entry;
+ }
+ LOG.warn(
+ "Hash collision or incorrect/mismatched custodian/namespace detected for metadata hash: "
+ + "{} - custodian/namespace mismatch expected: ({}, {}), actual: ({}, {})",
+ ManagedKeyProvider.encodeToStr(metadataHashBytes), ManagedKeyProvider.encodeToStr(keyCust),
+ keyNamespace, ManagedKeyProvider.encodeToStr(entry.getKeyCustodian()),
+ entry.getKeyNamespace());
+ }
return null;
}
+ /**
+ * Retrieves an existing key from the active keys cache.
+ * @param keyCust the key custodian
+ * @param keyNamespace the key namespace
+ * @param keyMetadataHash the key metadata hash
+ * @return the ManagedKeyData if found, null otherwise
+ */
+ private ManagedKeyData getFromActiveKeysCache(byte[] keyCust, String keyNamespace,
+ byte[] keyMetadataHash) {
+ ActiveKeysCacheKey cacheKey = new ActiveKeysCacheKey(keyCust, keyNamespace);
+ ManagedKeyData keyData = activeKeysCache.getIfPresent(cacheKey);
+ if (keyData != null && Bytes.equals(keyData.getKeyMetadataHash(), keyMetadataHash)) {
+ return keyData;
+ }
+ return null;
+ }
+
+ /**
+ * Eject the key identified by the given custodian, namespace and metadata from both the active
+ * keys cache and the generic cache.
+ * @param keyCust the key custodian
+ * @param keyNamespace the key namespace
+ * @param keyMetadataHash the key metadata hash
+ * @return true if the key was ejected from either cache, false otherwise
+ */
+ public boolean ejectKey(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash) {
+ Bytes keyMetadataHashKey = new Bytes(keyMetadataHash);
+ ActiveKeysCacheKey cacheKey = new ActiveKeysCacheKey(keyCust, keyNamespace);
+ AtomicBoolean ejected = new AtomicBoolean(false);
+ AtomicReference<ManagedKeyData> rejectedValue = new AtomicReference<>(null);
+
+ Function<ManagedKeyData, ManagedKeyData> conditionalCompute = (value) -> {
+ if (rejectedValue.get() != null) {
+ return value;
+ }
+ if (
+ Bytes.equals(value.getKeyMetadataHash(), keyMetadataHash)
+ && Bytes.equals(value.getKeyCustodian(), keyCust)
+ && value.getKeyNamespace().equals(keyNamespace)
+ ) {
+ ejected.set(true);
+ return null;
+ }
+ rejectedValue.set(value);
+ return value;
+ };
+
+ // Try to eject from active keys cache by matching hash with collision check
+ activeKeysCache.asMap().computeIfPresent(cacheKey,
+ (key, value) -> conditionalCompute.apply(value));
+
+ // Also remove from generic cache by hash, with collision check
+ cacheByMetadataHash.asMap().computeIfPresent(keyMetadataHashKey,
+ (hash, value) -> conditionalCompute.apply(value));
+
+ if (rejectedValue.get() != null) {
+ LOG.warn(
+ "Hash collision or incorrect/mismatched custodian/namespace detected for metadata "
+ + "hash: {} - custodian/namespace mismatch expected: ({}, {}), actual: ({}, {})",
+ ManagedKeyProvider.encodeToStr(keyMetadataHash), ManagedKeyProvider.encodeToStr(keyCust),
+ keyNamespace, ManagedKeyProvider.encodeToStr(rejectedValue.get().getKeyCustodian()),
+ rejectedValue.get().getKeyNamespace());
+ }
+
+ return ejected.get();
+ }
+
+ /**
+ * Clear all the cached entries.
+ */
public void clearCache() {
- // Stub - does nothing
+ cacheByMetadataHash.invalidateAll();
+ activeKeysCache.invalidateAll();
}
- public void ejectEntry(byte[] keyCustodian, String keyNamespace, String keyMetadata) {
- // Stub - does nothing
+ /**
+ * @return the approximate number of entries in the main cache which is meant for general lookup
+ * by key metadata hash.
+ */
+ public int getGenericCacheEntryCount() {
+ return (int) cacheByMetadataHash.estimatedSize();
}
- public boolean ejectKey(byte[] keyCustodian, String keyNamespace, byte[] keyMetadataHash) {
- return false;
+ /** Returns the approximate number of entries in the active keys cache */
+ public int getActiveCacheEntryCount() {
+ return (int) activeKeysCache.estimatedSize();
+ }
+
+ /**
+ * Retrieves the active entry from the cache based on its key custodian and key namespace. This
+ * method also loads active keys from provider if not found in cache.
+ * @param keyCust The key custodian.
+ * @param keyNamespace the key namespace to search for
+ * @return the ManagedKeyData entry with the given custodian and ACTIVE status, or null if not
+ * found
+ */
+ public ManagedKeyData getActiveEntry(byte[] keyCust, String keyNamespace) {
+ ActiveKeysCacheKey cacheKey = new ActiveKeysCacheKey(keyCust, keyNamespace);
+
+ ManagedKeyData keyData = activeKeysCache.get(cacheKey, key -> {
+ ManagedKeyData retrievedKey = null;
+
+ // Try to load from KeymetaTableAccessor if not found in cache
+ if (keymetaAccessor != null) {
+ try {
+ retrievedKey = keymetaAccessor.getKeyManagementStateMarker(keyCust, keyNamespace);
+ } catch (IOException | KeyException | RuntimeException e) {
+ LOG.warn("Failed to load active key from KeymetaTableAccessor for custodian: {} "
+ + "namespace: {}", ManagedKeyProvider.encodeToStr(keyCust), keyNamespace, e);
+ }
+ }
+
+ // As a last-ditch effort, load the active key from the provider. This typically happens for
+ // standalone tools.
+ if (retrievedKey == null && isDynamicLookupEnabled()) {
+ try {
+ String keyCustEnc = ManagedKeyProvider.encodeToStr(keyCust);
+ retrievedKey = KeyManagementUtils.retrieveActiveKey(getKeyProvider(), keymetaAccessor,
+ keyCustEnc, keyCust, keyNamespace, null);
+ } catch (IOException | KeyException | RuntimeException e) {
+ LOG.warn("Failed to load active key from provider for custodian: {} namespace: {}",
+ ManagedKeyProvider.encodeToStr(keyCust), keyNamespace, e);
+ }
+ }
+
+ if (retrievedKey == null) {
+ retrievedKey = new ManagedKeyData(keyCust, keyNamespace, ManagedKeyState.FAILED);
+ }
+
+ return retrievedKey;
+ });
+
+ // This should never be null, but adding a check just to satisfy spotbugs.
+ if (keyData != null && keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ return keyData;
+ }
+ return null;
}
}
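The NOTE on ActiveKeysCacheKey above exists because Java arrays inherit identity-based equals/hashCode from Object, so a raw byte[] (or a Pair holding one) cannot serve as a hash-map key. A tiny self-contained illustration of the wrapper pattern the cache uses (names are illustrative):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class ArrayKeyDemo {
  // Wrapper with value-based equality, mirroring what ActiveKeysCacheKey does.
  static final class Key {
    final byte[] bytes;
    Key(byte[] bytes) { this.bytes = bytes; }
    @Override public boolean equals(Object o) {
      return o instanceof Key && Arrays.equals(bytes, ((Key) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
  }

  public static boolean rawArrayLookupWorks() {
    Map<byte[], String> m = new HashMap<>();
    m.put(new byte[] { 1, 2 }, "v");
    // Identity-based equals/hashCode: a fresh array with equal contents misses.
    return m.containsKey(new byte[] { 1, 2 });
  }

  public static boolean wrappedLookupWorks() {
    Map<Key, String> m = new HashMap<>();
    m.put(new Key(new byte[] { 1, 2 }), "v");
    // Value-based equals/hashCode: an equal-content wrapper hits.
    return m.containsKey(new Key(new byte[] { 1, 2 }));
  }
}
```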
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java
index 374cd597782c..8de01319e25b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java
@@ -17,13 +17,128 @@
*/
package org.apache.hadoop.hbase.keymeta;
+import static org.apache.hadoop.hbase.HConstants.SYSTEM_KEY_FILE_PREFIX;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.hadoop.hbase.util.Pair;
import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
-/**
- * STUB IMPLEMENTATION - Feature not yet complete. This class will be fully implemented in
- * HBASE-29368 feature PR.
- */
@InterfaceAudience.Private
-public class SystemKeyAccessor {
- // Empty stub class - implementation in feature PR
+public class SystemKeyAccessor extends KeyManagementBase {
+ private static final Logger LOG = LoggerFactory.getLogger(SystemKeyAccessor.class);
+
+ private final FileSystem fs;
+ protected final Path systemKeyDir;
+
+ public SystemKeyAccessor(Server server) throws IOException {
+ this(server.getConfiguration(), server.getFileSystem());
+ }
+
+ public SystemKeyAccessor(Configuration configuration, FileSystem fs) throws IOException {
+ super(configuration);
+ this.systemKeyDir = CommonFSUtils.getSystemKeyDir(configuration);
+ this.fs = fs;
+ }
+
+ /**
+ * Return both the latest system key file and all system key files.
+ * @return a pair of the latest system key file and all system key files
+ * @throws IOException if there is an error getting the latest system key file or no cluster key
+ * is initialized yet.
+ */
+ public Pair<Path, List<Path>> getLatestSystemKeyFile() throws IOException {
+ assertKeyManagementEnabled();
+ List<Path> allClusterKeyFiles = getAllSystemKeyFiles();
+ if (allClusterKeyFiles.isEmpty()) {
+ throw new IOException("No cluster key initialized yet");
+ }
+ int currentMaxSeqNum = SystemKeyAccessor.extractKeySequence(allClusterKeyFiles.get(0));
+ return new Pair<>(new Path(systemKeyDir, SYSTEM_KEY_FILE_PREFIX + currentMaxSeqNum),
+ allClusterKeyFiles);
+ }
+
+ /**
+ * Return all available cluster key files, ordered from latest to oldest. If no cluster key
+ * files are available, an empty list is returned. Key management must be enabled before calling
+ * this method.
+ * @return a list of all available cluster key files
+ * @throws IOException if there is an error getting the cluster key files
+ */
+ public List<Path> getAllSystemKeyFiles() throws IOException {
+ assertKeyManagementEnabled();
+ LOG.info("Getting all system key files from: {} matching prefix: {}", systemKeyDir,
+ SYSTEM_KEY_FILE_PREFIX + "*");
+ Map<Integer, Path> clusterKeys = new TreeMap<>(Comparator.reverseOrder());
+ for (FileStatus st : fs.globStatus(new Path(systemKeyDir, SYSTEM_KEY_FILE_PREFIX + "*"))) {
+ Path keyPath = st.getPath();
+ int seqNum = extractSystemKeySeqNum(keyPath);
+ clusterKeys.put(seqNum, keyPath);
+ }
+ return new ArrayList<>(clusterKeys.values());
+ }
+
+ public ManagedKeyData loadSystemKey(Path keyPath) throws IOException {
+ ManagedKeyProvider provider = getKeyProvider();
+ ManagedKeyData keyData = provider.unwrapKey(loadKeyMetadata(keyPath), null);
+ if (keyData == null) {
+ throw new IOException("Failed to load system key from: " + keyPath);
+ }
+ return keyData;
+ }
+
+ @InterfaceAudience.Private
+ public static int extractSystemKeySeqNum(Path keyPath) throws IOException {
+ if (keyPath.getName().startsWith(SYSTEM_KEY_FILE_PREFIX)) {
+ try {
+ return Integer.parseInt(keyPath.getName().substring(SYSTEM_KEY_FILE_PREFIX.length()));
+ } catch (NumberFormatException e) {
+ LOG.error("Invalid file name for a cluster key: {}", keyPath, e);
+ }
+ }
+ throw new IOException("Couldn't parse key file name: " + keyPath.getName());
+ }
+
+ /**
+ * Extract the key sequence number from the cluster key file name.
+ * @param clusterKeyFile the path to the cluster key file
+ * @return The sequence or {@code -1} if not a valid sequence file.
+ * @throws IOException if the file name is not a valid sequence file
+ */
+ @InterfaceAudience.Private
+ public static int extractKeySequence(Path clusterKeyFile) throws IOException {
+ int keySeq = -1;
+ if (clusterKeyFile.getName().startsWith(SYSTEM_KEY_FILE_PREFIX)) {
+ String seqStr = clusterKeyFile.getName().substring(SYSTEM_KEY_FILE_PREFIX.length());
+ if (!seqStr.isEmpty()) {
+ try {
+ keySeq = Integer.parseInt(seqStr);
+ } catch (NumberFormatException e) {
+ throw new IOException("Invalid file name for a cluster key: " + clusterKeyFile, e);
+ }
+ }
+ }
+ return keySeq;
+ }
+
+ protected String loadKeyMetadata(Path keyPath) throws IOException {
+ try (FSDataInputStream fin = fs.open(keyPath)) {
+ return fin.readUTF();
+ }
+ }
}
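The parsing contract of extractKeySequence above is subtle: a name that does not match the prefix, or a bare prefix with no suffix, yields -1, while a matching name with a non-numeric suffix is an error. A standalone sketch of that same contract (the prefix literal here is assumed, not HBase's actual SYSTEM_KEY_FILE_PREFIX):

```java
public class KeySeqSketch {
  static final String PREFIX = "sys_key_"; // assumed value for illustration

  // Returns the numeric suffix, -1 for a non-matching or prefix-only name,
  // and throws for a matching name with a non-numeric suffix.
  public static int extractSeq(String fileName) {
    if (!fileName.startsWith(PREFIX)) {
      return -1;
    }
    String seqStr = fileName.substring(PREFIX.length());
    if (seqStr.isEmpty()) {
      return -1;
    }
    try {
      return Integer.parseInt(seqStr);
    } catch (NumberFormatException e) {
      throw new IllegalArgumentException("Invalid key file name: " + fileName, e);
    }
  }
}
```

The real accessor surfaces the error as an IOException instead; IllegalArgumentException keeps this sketch free of checked exceptions.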
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java
index c7ac68a827d0..b01af650d764 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java
@@ -17,16 +17,78 @@
*/
package org.apache.hadoop.hbase.keymeta;
+import java.io.IOException;
+import java.util.List;
+import java.util.Map;
+import java.util.TreeMap;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
-/**
- * STUB IMPLEMENTATION - Feature not yet complete. This class will be fully implemented in
- * HBASE-29368 feature PR.
- */
+@SuppressWarnings("checkstyle:FinalClass") // as otherwise it breaks mocking.
@InterfaceAudience.Private
public class SystemKeyCache {
- public SystemKeyCache(Configuration conf) {
- // Stub constructor - does nothing
+ private static final Logger LOG = LoggerFactory.getLogger(SystemKeyCache.class);
+
+ private final ManagedKeyData latestSystemKey;
+ private final Map<Long, ManagedKeyData> systemKeys;
+
+ /**
+ * Create a SystemKeyCache from the specified configuration and file system.
+ * @param configuration the configuration to use
+ * @param fs the file system to use
+ * @return the cache or {@code null} if no keys are found.
+ * @throws IOException if there is an error loading the system keys
+ */
+ public static SystemKeyCache createCache(Configuration configuration, FileSystem fs)
+ throws IOException {
+ SystemKeyAccessor accessor = new SystemKeyAccessor(configuration, fs);
+ return createCache(accessor);
+ }
+
+ /**
+ * Construct the System Key cache from the specified accessor.
+ * @param accessor the accessor to use to load the system keys
+ * @return the cache or {@code null} if no keys are found.
+ * @throws IOException if there is an error loading the system keys
+ */
+ public static SystemKeyCache createCache(SystemKeyAccessor accessor) throws IOException {
+ List<Path> allSystemKeyFiles = accessor.getAllSystemKeyFiles();
+ if (allSystemKeyFiles.isEmpty()) {
+ LOG.warn("No system key files found, skipping cache creation");
+ return null;
+ }
+ ManagedKeyData latestSystemKey = null;
+ Map<Long, ManagedKeyData> systemKeys = new TreeMap<>();
+ for (Path keyPath : allSystemKeyFiles) {
+ ManagedKeyData keyData = accessor.loadSystemKey(keyPath);
+ LOG.info(
+ "Loaded system key with (custodian: {}, namespace: {}), checksum: {} and metadata hash: {} "
+ + " from file: {}",
+ keyData.getKeyCustodianEncoded(), keyData.getKeyNamespace(), keyData.getKeyChecksum(),
+ keyData.getKeyMetadataHashEncoded(), keyPath);
+ if (latestSystemKey == null) {
+ latestSystemKey = keyData;
+ }
+ systemKeys.put(keyData.getKeyChecksum(), keyData);
+ }
+ return new SystemKeyCache(systemKeys, latestSystemKey);
+ }
+
+ private SystemKeyCache(Map<Long, ManagedKeyData> systemKeys, ManagedKeyData latestSystemKey) {
+ this.systemKeys = systemKeys;
+ this.latestSystemKey = latestSystemKey;
+ }
+
+ public ManagedKeyData getLatestSystemKey() {
+ return latestSystemKey;
+ }
+
+ public ManagedKeyData getSystemKeyByChecksum(long checksum) {
+ return systemKeys.get(checksum);
}
}
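SystemKeyCache relies on the accessor returning files latest-first (the reverse-ordered TreeMap in getAllSystemKeyFiles), so taking the first loaded key as the latest is correct. A minimal sketch of that cooperation, reduced to sequence numbers and checksums (names are illustrative):

```java
import java.util.Comparator;
import java.util.Map;
import java.util.TreeMap;

public class SystemKeySelection {
  // Given seqNum -> checksum entries in arbitrary order, sort latest-first and
  // pick the head, mirroring how SystemKeyAccessor and SystemKeyCache cooperate.
  public static long latestChecksum(Map<Integer, Long> checksumBySeq) {
    TreeMap<Integer, Long> ordered = new TreeMap<>(Comparator.reverseOrder());
    ordered.putAll(checksumBySeq);
    return ordered.firstEntry().getValue();
  }
}
```

All keys stay indexed by checksum for point lookups (getSystemKeyByChecksum), while only the head of the ordering is treated as the latest key for wrapping new DEKs.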
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index a0f84c5672c3..8902c0f91174 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -119,9 +119,11 @@
import org.apache.hadoop.hbase.favored.FavoredNodesManager;
import org.apache.hadoop.hbase.http.HttpServer;
import org.apache.hadoop.hbase.http.InfoServer;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
import org.apache.hadoop.hbase.ipc.RpcServer;
import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
+import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor;
import org.apache.hadoop.hbase.log.HBaseMarkers;
import org.apache.hadoop.hbase.master.MasterRpcServices.BalanceSwitchMode;
import org.apache.hadoop.hbase.master.assignment.AssignmentManager;
@@ -246,6 +248,7 @@
import org.apache.hadoop.hbase.rsgroup.RSGroupUtil;
import org.apache.hadoop.hbase.security.AccessDeniedException;
import org.apache.hadoop.hbase.security.SecurityConstants;
+import org.apache.hadoop.hbase.security.SecurityUtil;
import org.apache.hadoop.hbase.security.Superusers;
import org.apache.hadoop.hbase.security.UserProvider;
import org.apache.hadoop.hbase.trace.TraceUtil;
@@ -357,6 +360,7 @@ public class HMaster extends HBaseServerBase implements Maste
// file system manager for the master FS operations
private MasterFileSystem fileSystemManager;
private MasterWalManager walManager;
+ private SystemKeyManager systemKeyManager;
// manager to manage procedure-based WAL splitting, can be null if current
// is zk-based WAL splitting. SplitWALManager will replace SplitLogManager
@@ -994,6 +998,10 @@ private void finishActiveMasterInitialization() throws IOException, InterruptedE
ZKClusterId.setClusterId(this.zooKeeper, fileSystemManager.getClusterId());
this.clusterId = clusterId.toString();
+ systemKeyManager = new SystemKeyManager(this);
+ systemKeyManager.ensureSystemKeyInitialized();
+ buildSystemKeyCache();
+
// Precaution. Put in place the old hbck1 lock file to fence out old hbase1s running their
// hbck1s against an hbase2 cluster; it could do damage. To skip this behavior, set
// hbase.write.hbck1.lock.file to false.
@@ -1156,14 +1164,23 @@ private void finishActiveMasterInitialization() throws IOException, InterruptedE
return;
}
+ this.assignmentManager.initializationPostMetaOnline();
+ this.assignmentManager.joinCluster();
+
+ // If key management is enabled, wait for keymeta table regions to be assigned and online,
+ // which includes creating the table the very first time.
+ // This is to ensure that the encrypted tables can successfully initialize Encryption.Context as
+ // part of the store opening process when processOfflineRegions is called.
+ // Without this, we can end up with race condition where a user store is opened before the
+ // keymeta table regions are online, which would cause the store opening to fail.
+ initKeymetaIfEnabled();
+
TableDescriptor metaDescriptor = tableDescriptors.get(TableName.META_TABLE_NAME);
final ColumnFamilyDescriptor tableFamilyDesc =
metaDescriptor.getColumnFamily(HConstants.TABLE_FAMILY);
final ColumnFamilyDescriptor replBarrierFamilyDesc =
metaDescriptor.getColumnFamily(HConstants.REPLICATION_BARRIER_FAMILY);
- this.assignmentManager.initializationPostMetaOnline();
- this.assignmentManager.joinCluster();
// The below depends on hbase:meta being online.
this.assignmentManager.processOfflineRegions();
// this must be called after the above processOfflineRegions to prevent race
@@ -1525,6 +1542,80 @@ private boolean waitForNamespaceOnline() throws IOException {
return true;
}
+ /**
+ * Creates the keymeta table and waits for all its regions to be online.
+ */
+ private void initKeymetaIfEnabled() throws IOException {
+ if (!SecurityUtil.isKeyManagementEnabled(conf)) {
+ return;
+ }
+
+ String keymetaTableName =
+ KeymetaTableAccessor.KEY_META_TABLE_NAME.getNameWithNamespaceInclAsString();
+ if (!getTableDescriptors().exists(KeymetaTableAccessor.KEY_META_TABLE_NAME)) {
+ LOG.info("initKeymetaIfEnabled: {} table not found. Creating.", keymetaTableName);
+ long keymetaTableProcId =
+ createSystemTable(KeymetaTableAccessor.TABLE_DESCRIPTOR_BUILDER.build(), true);
+
+ LOG.info("initKeymetaIfEnabled: Waiting for {} table creation procedure {} to complete",
+ keymetaTableName, keymetaTableProcId);
+ waitForProcedureToComplete(keymetaTableProcId,
+ "Failed to create keymeta table and add to meta");
+ }
+
+ List<RegionInfo> ris = this.assignmentManager.getRegionStates()
+ .getRegionsOfTable(KeymetaTableAccessor.KEY_META_TABLE_NAME);
+ if (ris.isEmpty()) {
+ throw new RuntimeException(
+ "initKeymetaIfEnabled: No " + keymetaTableName + " table regions found");
+ }
+
+ // First, create assignment procedures for all keymeta regions
+ List<TransitRegionStateProcedure> procs = new ArrayList<>();
+ for (RegionInfo ri : ris) {
+ RegionStateNode regionNode =
+ assignmentManager.getRegionStates().getOrCreateRegionStateNode(ri);
+ regionNode.lock();
+ try {
+ // Only create if region is in CLOSED or OFFLINE state and no procedure is already attached.
+ // The server-online check is really needed only for the mini cluster, as outdated ONLINE
+ // entries pointing to the old RS can be returned after a cluster restart.
+ if (
+ (regionNode.isInState(RegionState.State.CLOSED, RegionState.State.OFFLINE)
+ || !this.serverManager.isServerOnline(regionNode.getRegionLocation()))
+ && regionNode.getProcedure() == null
+ ) {
+ TransitRegionStateProcedure proc = TransitRegionStateProcedure
+ .assign(getMasterProcedureExecutor().getEnvironment(), ri, null);
+ proc.setCriticalSystemTable(true);
+ regionNode.setProcedure(proc);
+ procs.add(proc);
+ }
+ } finally {
+ regionNode.unlock();
+ }
+ }
+ // Then, trigger assignment for all keymeta regions
+ if (!procs.isEmpty()) {
+ LOG.info("initKeymetaIfEnabled: Submitting {} assignment procedures for {} table regions",
+ procs.size(), keymetaTableName);
+ getMasterProcedureExecutor()
+ .submitProcedures(procs.toArray(new TransitRegionStateProcedure[procs.size()]));
+ }
+
+ // Then wait for all regions to come online
+ LOG.info("initKeymetaIfEnabled: Checking/Waiting for {} table {} regions to be online",
+ keymetaTableName, ris.size());
+ for (RegionInfo ri : ris) {
+ if (!isRegionOnline(ri)) {
+ throw new RuntimeException(keymetaTableName + " table region " + ri.getRegionNameAsString()
+ + " could not be brought online");
+ }
+ }
+ LOG.info("initKeymetaIfEnabled: All {} table regions are online", keymetaTableName);
+ }
+
/**
* Adds the {@code MasterQuotasObserver} to the list of configured Master observers to
* automatically remove quotas for a table when that table is deleted.
@@ -1638,7 +1729,12 @@ public MasterWalManager getMasterWalManager() {
@Override
public boolean rotateSystemKeyIfChanged() throws IOException {
- // STUB - Feature not yet implemented
+ ManagedKeyData newKey = this.systemKeyManager.rotateSystemKeyIfChanged();
+ if (newKey != null) {
+ this.systemKeyCache = null;
+ buildSystemKeyCache();
+ return true;
+ }
return false;
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
index 5a43cd98feb9..0ffbfd15c41d 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterFileSystem.java
@@ -66,6 +66,7 @@ public class MasterFileSystem {
private final FileSystem walFs;
// root log directory on the FS
private final Path rootdir;
+ private final Path systemKeyDir;
// hbase temp directory used for table construction and deletion
private final Path tempdir;
// root hbase directory on the FS
@@ -96,6 +97,7 @@ public MasterFileSystem(Configuration conf) throws IOException {
// default localfs. Presumption is that rootdir is fully-qualified before
// we get to here with appropriate fs scheme.
this.rootdir = CommonFSUtils.getRootDir(conf);
+ this.systemKeyDir = CommonFSUtils.getSystemKeyDir(conf);
this.tempdir = new Path(this.rootdir, HConstants.HBASE_TEMP_DIRECTORY);
// Cover both bases, the old way of setting default fs and the new.
// We're supposed to run on 0.20 and 0.21 anyways.
@@ -134,6 +136,7 @@ private void createInitialFileSystemLayout() throws IOException {
HConstants.CORRUPT_DIR_NAME, ReplicationUtils.REMOTE_WAL_DIR_NAME };
// check if the root directory exists
checkRootDir(this.rootdir, conf, this.fs);
+ checkSubDir(this.systemKeyDir, HBASE_DIR_PERMS);
// Check the directories under rootdir.
checkTempDir(this.tempdir, conf, this.fs);
@@ -158,6 +161,7 @@ private void createInitialFileSystemLayout() throws IOException {
if (isSecurityEnabled) {
fs.setPermission(new Path(rootdir, HConstants.VERSION_FILE_NAME), secureRootFilePerms);
fs.setPermission(new Path(rootdir, HConstants.CLUSTER_ID_FILE_NAME), secureRootFilePerms);
+ fs.setPermission(systemKeyDir, secureRootFilePerms);
}
FsPermission currentRootPerms = fs.getFileStatus(this.rootdir).getPermission();
if (
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SystemKeyManager.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SystemKeyManager.java
new file mode 100644
index 000000000000..de0e37dde275
--- /dev/null
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SystemKeyManager.java
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.apache.hadoop.hbase.HConstants.SYSTEM_KEY_FILE_PREFIX;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.UUID;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.keymeta.SystemKeyAccessor;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+@InterfaceAudience.Private
+public class SystemKeyManager extends SystemKeyAccessor {
+ private static final Logger LOG = LoggerFactory.getLogger(SystemKeyManager.class);
+
+ private final MasterServices master;
+
+ public SystemKeyManager(MasterServices master) throws IOException {
+ super(master);
+ this.master = master;
+ }
+
+ public void ensureSystemKeyInitialized() throws IOException {
+ if (!isKeyManagementEnabled()) {
+ return;
+ }
+ List<Path> clusterKeys = getAllSystemKeyFiles();
+ if (clusterKeys.isEmpty()) {
+ LOG.info("Initializing System Key for the first time");
+ // Double check for cluster key as another HMaster might have succeeded.
+ if (rotateSystemKey(null, clusterKeys) == null && getAllSystemKeyFiles().isEmpty()) {
+ throw new RuntimeException("Failed to generate or save System Key");
+ }
+ } else if (rotateSystemKeyIfChanged() != null) {
+ LOG.info("System key has been rotated");
+ } else {
+ LOG.info("System key is already initialized and unchanged");
+ }
+ }
+
+ public ManagedKeyData rotateSystemKeyIfChanged() throws IOException {
+ if (!isKeyManagementEnabled()) {
+ return null;
+ }
+ Pair<Path, List<Path>> latestFileResult = getLatestSystemKeyFile();
+ Path latestFile = latestFileResult.getFirst();
+ String latestKeyMetadata = loadKeyMetadata(latestFile);
+ return rotateSystemKey(latestKeyMetadata, latestFileResult.getSecond());
+ }
+
+ private ManagedKeyData rotateSystemKey(String currentKeyMetadata, List<Path> allSystemKeyFiles)
+ throws IOException {
+ ManagedKeyProvider provider = getKeyProvider();
+ ManagedKeyData clusterKey =
+ provider.getSystemKey(master.getMasterFileSystem().getClusterId().toString().getBytes());
+ if (clusterKey == null) {
+ throw new IOException("Failed to get system key for cluster id: "
+ + master.getMasterFileSystem().getClusterId().toString());
+ }
+ if (clusterKey.getKeyState() != ManagedKeyState.ACTIVE) {
+ throw new IOException("System key is expected to be ACTIVE but it is: "
+ + clusterKey.getKeyState() + " for metadata: " + clusterKey.getKeyMetadata());
+ }
+ if (clusterKey.getKeyMetadata() == null) {
+ throw new IOException("System key is expected to have metadata but it is null");
+ }
+ if (
+ !clusterKey.getKeyMetadata().equals(currentKeyMetadata)
+ && saveLatestSystemKey(clusterKey.getKeyMetadata(), allSystemKeyFiles)
+ ) {
+ return clusterKey;
+ }
+ return null;
+ }
+
+ private boolean saveLatestSystemKey(String keyMetadata, List<Path> allSystemKeyFiles)
+ throws IOException {
+ int nextSystemKeySeq = (allSystemKeyFiles.isEmpty()
+ ? -1
+ : SystemKeyAccessor.extractKeySequence(allSystemKeyFiles.get(0))) + 1;
+ LOG.info("Trying to save a new cluster key at seq: {}", nextSystemKeySeq);
+ MasterFileSystem masterFS = master.getMasterFileSystem();
+ Path nextSystemKeyPath = new Path(systemKeyDir, SYSTEM_KEY_FILE_PREFIX + nextSystemKeySeq);
+ Path tempSystemKeyFile =
+ new Path(masterFS.getTempDir(), nextSystemKeyPath.getName() + UUID.randomUUID());
+ try (
+ FSDataOutputStream fsDataOutputStream = masterFS.getFileSystem().create(tempSystemKeyFile)) {
+ fsDataOutputStream.writeUTF(keyMetadata);
+ boolean succeeded = masterFS.getFileSystem().rename(tempSystemKeyFile, nextSystemKeyPath);
+ if (succeeded) {
+ LOG.info("System key save succeeded for seq: {}", nextSystemKeySeq);
+ } else {
+ LOG.error("System key save failed for seq: {}", nextSystemKeySeq);
+ }
+ return succeeded;
+ } finally {
+ masterFS.getFileSystem().delete(tempSystemKeyFile, false);
+ }
+ }
+}
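[Reviewer note] `saveLatestSystemKey` above publishes the new key metadata by writing to a temp file and renaming it into place, so concurrent masters can never expose a partially written key file. A minimal standalone sketch of that pattern, using `java.nio.file` instead of the Hadoop `FileSystem` API (class and method names here are hypothetical, not HBase code):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.UUID;

// Sketch of the write-to-temp-then-rename publish pattern used by
// saveLatestSystemKey. Names are illustrative only.
public class AtomicPublish {
  // Write the content to a uniquely named temp file, then rename it into
  // place. The rename either installs the complete file or fails, so readers
  // never observe a half-written key file.
  public static boolean publish(Path target, String content) throws IOException {
    Path temp = target.resolveSibling(target.getFileName() + "." + UUID.randomUUID());
    try {
      Files.writeString(temp, content);
      try {
        // Fails if another writer already published this sequence number.
        Files.move(temp, target);
        return true;
      } catch (FileAlreadyExistsException e) {
        return false; // benign: the other master won the race
      }
    } finally {
      Files.deleteIfExists(temp); // no-op when the move succeeded
    }
  }
}
```

The losing writer simply returns false, mirroring how `rotateSystemKey` treats a failed save as "another HMaster already rotated".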
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
index a2e38e532797..ff88f7140e88 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
@@ -170,6 +170,7 @@
import org.apache.hadoop.hbase.regionserver.wal.WALUtil;
import org.apache.hadoop.hbase.replication.ReplicationUtils;
import org.apache.hadoop.hbase.replication.regionserver.ReplicationObserver;
+import org.apache.hadoop.hbase.security.SecurityUtil;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.snapshot.SnapshotDescriptionUtils;
import org.apache.hadoop.hbase.snapshot.SnapshotManifest;
@@ -386,6 +387,8 @@ public class HRegion implements HeapSize, PropagatingConfigurationObserver, Regi
private final Configuration baseConf;
private final int rowLockWaitDuration;
static final int DEFAULT_ROWLOCK_WAIT_DURATION = 30000;
+ private final ManagedKeyDataCache managedKeyDataCache;
+ private final SystemKeyCache systemKeyCache;
private Path regionWalDir;
private FileSystem walFS;
@@ -983,6 +986,17 @@ public HRegion(final HRegionFileSystem fs, final WAL wal, final Configuration co
minBlockSizeBytes = Arrays.stream(this.htableDescriptor.getColumnFamilies())
.mapToInt(ColumnFamilyDescriptor::getBlocksize).min().orElse(HConstants.DEFAULT_BLOCKSIZE);
+
+ if (SecurityUtil.isKeyManagementEnabled(conf)) {
+ if (keyManagementService == null) {
+ keyManagementService = KeyManagementService.createDefault(conf, fs.getFileSystem());
+ }
+ this.managedKeyDataCache = keyManagementService.getManagedKeyDataCache();
+ this.systemKeyCache = keyManagementService.getSystemKeyCache();
+ } else {
+ this.managedKeyDataCache = null;
+ this.systemKeyCache = null;
+ }
}
private void setHTableSpecificConf() {
@@ -2177,11 +2191,11 @@ public BlockCache getBlockCache() {
}
public ManagedKeyDataCache getManagedKeyDataCache() {
- return null;
+ return this.managedKeyDataCache;
}
public SystemKeyCache getSystemKeyCache() {
- return null;
+ return this.systemKeyCache;
}
/**
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index eea82ca511eb..2161ed9c5ad5 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -120,6 +120,7 @@
import org.apache.hadoop.hbase.ipc.RpcServer;
import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
import org.apache.hadoop.hbase.ipc.ServerRpcController;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.log.HBaseMarkers;
import org.apache.hadoop.hbase.mob.MobFileCache;
import org.apache.hadoop.hbase.mob.RSMobFileCleanerChore;
@@ -600,7 +601,6 @@ protected RegionServerCoprocessorHost getCoprocessorHost() {
return getRegionServerCoprocessorHost();
}
- @Override
protected boolean canCreateBaseZNode() {
return !clusterMode();
}
@@ -1453,6 +1453,9 @@ protected void handleReportForDutyResponse(final RegionServerStartupResponse c)
initializeFileSystem();
}
+ buildSystemKeyCache();
+ managedKeyDataCache = new ManagedKeyDataCache(this.getConfiguration(), keymetaAdmin);
+
// hack! Maps DFSClient => RegionServer for logs. HDFS made this
// config param for task trackers, but we can piggyback off of it.
if (this.conf.get("mapreduce.task.attempt.id") == null) {
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
index a90ec97dc3fa..0fb5c2e5f940 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
@@ -45,9 +45,11 @@
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.ReaderContext;
import org.apache.hadoop.hbase.io.hfile.ReaderContext.ReaderType;
+import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTracker;
+import org.apache.hadoop.hbase.security.SecurityUtil;
import org.apache.hadoop.hbase.util.BloomFilterFactory;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
@@ -240,8 +242,9 @@ public long getMaxMemStoreTS() {
*/
public HStoreFile(FileSystem fs, Path p, Configuration conf, CacheConfig cacheConf,
BloomType cfBloomType, boolean primaryReplica, StoreFileTracker sft) throws IOException {
- // Key management not yet implemented - always null
- this(sft.getStoreFileInfo(p, primaryReplica), cfBloomType, cacheConf, null, null, null, null);
+ this(sft.getStoreFileInfo(p, primaryReplica), cfBloomType, cacheConf, null, null,
+ SecurityUtil.isKeyManagementEnabled(conf) ? SystemKeyCache.createCache(conf, fs) : null,
+ SecurityUtil.isKeyManagementEnabled(conf) ? new ManagedKeyDataCache(conf, null) : null);
}
/**
@@ -257,9 +260,13 @@ public HStoreFile(FileSystem fs, Path p, Configuration conf, CacheConfig cacheCo
*/
public HStoreFile(StoreFileInfo fileInfo, BloomType cfBloomType, CacheConfig cacheConf)
throws IOException {
- // Key management not yet implemented - always null
- this(fileInfo, cfBloomType, cacheConf, null, null, // keyNamespace - not yet implemented
- null, null);
+ this(fileInfo, cfBloomType, cacheConf, null, KeyNamespaceUtil.constructKeyNamespace(fileInfo),
+ SecurityUtil.isKeyManagementEnabled(fileInfo.getConf())
+ ? SystemKeyCache.createCache(fileInfo.getConf(), fileInfo.getFileSystem())
+ : null,
+ SecurityUtil.isKeyManagementEnabled(fileInfo.getConf())
+ ? new ManagedKeyDataCache(fileInfo.getConf(), null)
+ : null);
}
/**
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index ba838e2f16ca..61bd92821de7 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -90,6 +90,7 @@
import org.apache.hadoop.hbase.exceptions.TimeoutIOException;
import org.apache.hadoop.hbase.exceptions.UnknownProtocolException;
import org.apache.hadoop.hbase.io.ByteBuffAllocator;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.io.hfile.BlockCache;
import org.apache.hadoop.hbase.ipc.HBaseRpcController;
import org.apache.hadoop.hbase.ipc.PriorityFunction;
@@ -4062,39 +4063,83 @@ public GetCachedFilesListResponse getCachedFilesList(RpcController controller,
}
/**
- * STUB - Refreshes the system key cache on the region server. Feature not yet implemented in
- * precursor PR.
+ * Refreshes the system key cache on the region server by rebuilding it with the latest keys. This
+ * is called by the master when a system key rotation has occurred.
+ * @param controller the RPC controller
+ * @param request the request
+ * @return empty response
*/
@Override
@QosPriority(priority = HConstants.ADMIN_QOS)
public EmptyMsg refreshSystemKeyCache(final RpcController controller, final EmptyMsg request)
throws ServiceException {
- throw new ServiceException(
- new UnsupportedOperationException("Key management feature not yet implemented"));
+ try {
+ checkOpen();
+ requestCount.increment();
+ LOG.info("Received RefreshSystemKeyCache request, rebuilding system key cache");
+ server.rebuildSystemKeyCache();
+ return EmptyMsg.getDefaultInstance();
+ } catch (IOException ie) {
+ LOG.error("Failed to rebuild system key cache", ie);
+ throw new ServiceException(ie);
+ }
}
/**
- * STUB - Ejects a specific managed key entry from the cache. Feature not yet implemented in
- * precursor PR.
+ * Ejects a specific managed key entry from the managed key data cache on the region server.
+ * @param controller the RPC controller
+ * @param request the request containing key custodian, namespace, and metadata hash
+ * @return BooleanMsg indicating whether the key was ejected
*/
@Override
@QosPriority(priority = HConstants.ADMIN_QOS)
public BooleanMsg ejectManagedKeyDataCacheEntry(final RpcController controller,
final ManagedKeyEntryRequest request) throws ServiceException {
- throw new ServiceException(
- new UnsupportedOperationException("Key management feature not yet implemented"));
+ try {
+ checkOpen();
+ } catch (IOException e) {
+ LOG.error("Failed to eject managed key data cache entry", e);
+ throw new ServiceException(e);
+ }
+ requestCount.increment();
+ byte[] keyCustodian = request.getKeyCustNs().getKeyCust().toByteArray();
+ String keyNamespace = request.getKeyCustNs().getKeyNamespace();
+ byte[] keyMetadataHash = request.getKeyMetadataHash().toByteArray();
+
+ if (LOG.isInfoEnabled()) {
+ String keyCustodianEncoded = ManagedKeyProvider.encodeToStr(keyCustodian);
+ String keyMetadataHashEncoded = ManagedKeyProvider.encodeToStr(keyMetadataHash);
+ LOG.info(
+ "Received EjectManagedKeyDataCacheEntry request for key custodian: {}, namespace: {}, "
+ + "metadata hash: {}",
+ keyCustodianEncoded, keyNamespace, keyMetadataHashEncoded);
+ }
+
+ boolean ejected = server.getKeyManagementService().getManagedKeyDataCache()
+ .ejectKey(keyCustodian, keyNamespace, keyMetadataHash);
+ return BooleanMsg.newBuilder().setBoolMsg(ejected).build();
}
/**
- * STUB - Clears all entries in the managed key data cache. Feature not yet implemented in
- * precursor PR.
+ * Clears all entries in the managed key data cache on the region server.
+ * @param controller the RPC controller
+ * @param request the request (empty)
+ * @return empty response
*/
@Override
@QosPriority(priority = HConstants.ADMIN_QOS)
public EmptyMsg clearManagedKeyDataCache(final RpcController controller, final EmptyMsg request)
throws ServiceException {
- throw new ServiceException(
- new UnsupportedOperationException("Key management feature not yet implemented"));
+ try {
+ checkOpen();
+ } catch (IOException ie) {
+ LOG.error("Failed to clear managed key data cache", ie);
+ throw new ServiceException(ie);
+ }
+ requestCount.increment();
+ LOG.info("Received ClearManagedKeyDataCache request, clearing managed key data cache");
+ server.getKeyManagementService().getManagedKeyDataCache().clearCache();
+ return EmptyMsg.getDefaultInstance();
}
RegionScannerContext checkQuotaAndGetRegionScannerContext(ScanRequest request,
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java
index e262abd9ea35..08e710826358 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java
@@ -41,6 +41,7 @@
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.conf.ConfigKey;
import org.apache.hadoop.hbase.io.hfile.BloomFilterMetrics;
+import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
import org.apache.hadoop.hbase.log.HBaseMarkers;
@@ -236,7 +237,7 @@ public HStoreFile createStoreFileAndReader(Path p) throws IOException {
public HStoreFile createStoreFileAndReader(StoreFileInfo info) throws IOException {
info.setRegionCoprocessorHost(coprocessorHost);
HStoreFile storeFile = new HStoreFile(info, ctx.getFamily().getBloomFilterType(),
- ctx.getCacheConf(), bloomFilterMetrics, null, // keyNamespace - not yet implemented
+ ctx.getCacheConf(), bloomFilterMetrics, KeyNamespaceUtil.constructKeyNamespace(ctx),
systemKeyCache, managedKeyDataCache);
storeFile.initReader();
return storeFile;
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java
index 71bf1706ea3a..5fff2a417ebc 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java
@@ -19,13 +19,17 @@
import java.io.IOException;
import java.security.Key;
+import java.security.KeyException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.io.crypto.Cipher;
import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.io.hfile.FixedFileTrailer;
+import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
import org.apache.yetus.audience.InterfaceAudience;
@@ -65,14 +69,12 @@ public static String getPrincipalWithoutRealm(final String principal) {
}
/**
- * Helper to create an encryption context with current encryption key, suitable for writes. STUB
- * IMPLEMENTATION - Key management not yet implemented. Cache parameters are placeholders for
- * future implementation.
+ * Helper to create an encryption context with current encryption key, suitable for writes.
* @param conf The current configuration.
* @param tableDescriptor The table descriptor.
* @param family The current column descriptor.
- * @param managedKeyDataCache The managed key data cache (unused in stub).
- * @param systemKeyCache The system key cache (unused in stub).
+ * @param managedKeyDataCache The managed key data cache.
+ * @param systemKeyCache The system key cache.
* @return The created encryption context.
* @throws IOException if an encryption key for the column cannot be unwrapped
* @throws IllegalStateException in case of encryption related configuration errors
@@ -81,78 +83,273 @@ public static Encryption.Context createEncryptionContext(Configuration conf,
TableDescriptor tableDescriptor, ColumnFamilyDescriptor family,
ManagedKeyDataCache managedKeyDataCache, SystemKeyCache systemKeyCache) throws IOException {
Encryption.Context cryptoContext = Encryption.Context.NONE;
+ boolean isKeyManagementEnabled = isKeyManagementEnabled(conf);
String cipherName = family.getEncryptionType();
-
+ String keyNamespace = null; // Will be set by fallback logic
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Creating encryption context for table: {} and column family: {}",
+ tableDescriptor.getTableName().getNameAsString(), family.getNameAsString());
+ }
if (cipherName != null) {
if (!Encryption.isEncryptionEnabled(conf)) {
throw new IllegalStateException("Encryption for family '" + family.getNameAsString()
+ "' configured with type '" + cipherName + "' but the encryption feature is disabled");
}
-
+ if (isKeyManagementEnabled && systemKeyCache == null) {
+ throw new IOException("Key management is enabled, but SystemKeyCache is null");
+ }
+ Cipher cipher = null;
Key key = null;
- byte[] familyKeyBytes = family.getEncryptionKey();
+ ManagedKeyData kekKeyData =
+ isKeyManagementEnabled ? systemKeyCache.getLatestSystemKey() : null;
- // Unwrap family key if present
+ // Scenario 1: If family has a key, unwrap it and use that as DEK.
+ byte[] familyKeyBytes = family.getEncryptionKey();
if (familyKeyBytes != null) {
- key = EncryptionUtil.unwrapKey(conf, familyKeyBytes);
+ try {
+ if (isKeyManagementEnabled) {
+ // Scenario 1a: If key management is enabled, use STK for both unwrapping and KEK.
+ key = EncryptionUtil.unwrapKey(conf, null, familyKeyBytes, kekKeyData.getTheKey());
+ } else {
+ // Scenario 1b: If key management is disabled, unwrap the key using master key.
+ key = EncryptionUtil.unwrapKey(conf, familyKeyBytes);
+ }
+ LOG.debug("Scenario 1: Use family key for namespace {} cipher: {} "
+ + "key management enabled: {}", keyNamespace, cipherName, isKeyManagementEnabled);
+ } catch (KeyException e) {
+ throw new IOException(e);
+ }
+ } else {
+ if (isKeyManagementEnabled) {
+ boolean localKeyGenEnabled =
+ conf.getBoolean(HConstants.CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_ENABLED_CONF_KEY,
+ HConstants.CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_DEFAULT_ENABLED);
+ // Implement 4-step fallback logic for key namespace resolution in the order of
+ // 1. CF KEY_NAMESPACE attribute
+ // 2. Constructed namespace
+ // 3. Table name
+ // 4. Global namespace
+ String[] candidateNamespaces = { family.getEncryptionKeyNamespace(),
+ KeyNamespaceUtil.constructKeyNamespace(tableDescriptor, family),
+ tableDescriptor.getTableName().getNameAsString(), ManagedKeyData.KEY_SPACE_GLOBAL };
+
+ ManagedKeyData activeKeyData = null;
+ for (String candidate : candidateNamespaces) {
+ if (candidate != null) {
+ // Log information on the table and column family we are looking for the active key in
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Looking for active key for table: {} and column family: {} with "
+ + "(custodian: {}, namespace: {})",
+ tableDescriptor.getTableName().getNameAsString(), family.getNameAsString(),
+ ManagedKeyData.KEY_GLOBAL_CUSTODIAN, candidate);
+ }
+ activeKeyData = managedKeyDataCache
+ .getActiveEntry(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES, candidate);
+ if (activeKeyData != null) {
+ keyNamespace = candidate;
+ break;
+ }
+ }
+ }
+
+ // Scenario 2: There is an active key
+ if (activeKeyData != null) {
+ if (!localKeyGenEnabled) {
+ // Scenario 2a: Use active key as DEK and latest STK as KEK
+ key = activeKeyData.getTheKey();
+ } else {
+ // Scenario 2b: Use active key as KEK and generate local key as DEK
+ kekKeyData = activeKeyData;
+ // TODO: Use the active key as a seed to generate the local key instead of
+ // random generation
+ cipher = getCipherIfValid(conf, cipherName, activeKeyData.getTheKey(),
+ family.getNameAsString());
+ }
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Scenario 2: Use active key with (custodian: {}, namespace: {}) for cipher: {} "
+ + "localKeyGenEnabled: {} for table: {} and column family: {}",
+ activeKeyData.getKeyCustodianEncoded(), activeKeyData.getKeyNamespace(), cipherName,
+ localKeyGenEnabled, tableDescriptor.getTableName().getNameAsString(),
+ family.getNameAsString());
+ }
+ } else {
+ if (LOG.isDebugEnabled()) {
+ LOG.debug("Scenario 3a: No active key found for table: {} and column family: {}",
+ tableDescriptor.getTableName().getNameAsString(), family.getNameAsString());
+ }
+ // Scenario 3a: Do nothing, let a random key be generated as DEK and if key management
+ // is enabled, let STK be used as KEK.
+ }
+ } else {
+ // Scenario 3b: Do nothing, let a random key be generated as DEK, let STK be used as KEK.
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Scenario 3b: Key management is disabled and no ENCRYPTION_KEY attribute "
+ + "set for table: {} and column family: {}",
+ tableDescriptor.getTableName().getNameAsString(), family.getNameAsString());
+ }
+ }
+ }
+ if (LOG.isDebugEnabled() && kekKeyData != null) {
+ LOG.debug(
+ "Using KEK with (custodian: {}, namespace: {}), checksum: {} and metadata hash: {}",
+ kekKeyData.getKeyCustodianEncoded(), kekKeyData.getKeyNamespace(),
+ kekKeyData.getKeyChecksum(), kekKeyData.getKeyMetadataHashEncoded());
}
- Cipher cipher = Encryption.getCipher(conf, cipherName);
if (cipher == null) {
- throw new IllegalStateException("Cipher '" + cipherName + "' is not available");
+ cipher =
+ getCipherIfValid(conf, cipherName, key, key == null ? null : family.getNameAsString());
}
-
- // Generate random key if none provided
if (key == null) {
key = cipher.getRandomKey();
}
-
cryptoContext = Encryption.newContext(conf);
cryptoContext.setCipher(cipher);
cryptoContext.setKey(key);
+ cryptoContext.setKeyNamespace(keyNamespace);
+ cryptoContext.setKEKData(kekKeyData);
}
return cryptoContext;
}
/**
* Create an encryption context from encryption key found in a file trailer, suitable for read.
- * STUB IMPLEMENTATION - Key management not yet implemented. Cache parameters are placeholders for
- * future implementation.
* @param conf The current configuration.
* @param path The path of the file.
* @param trailer The file trailer.
- * @param managedKeyDataCache The managed key data cache (unused in stub).
- * @param systemKeyCache The system key cache (unused in stub).
+ * @param managedKeyDataCache The managed key data cache.
+ * @param systemKeyCache The system key cache.
* @return The created encryption context or null if no key material is available.
* @throws IOException if an encryption key for the file cannot be unwrapped
*/
public static Encryption.Context createEncryptionContext(Configuration conf, Path path,
FixedFileTrailer trailer, ManagedKeyDataCache managedKeyDataCache,
SystemKeyCache systemKeyCache) throws IOException {
+ ManagedKeyData kekKeyData = null;
byte[] keyBytes = trailer.getEncryptionKey();
Encryption.Context cryptoContext = Encryption.Context.NONE;
-
+ LOG.debug("Creating encryption context for path: {}", path);
+ // Check for any key material available
if (keyBytes != null) {
cryptoContext = Encryption.newContext(conf);
- Key key = EncryptionUtil.unwrapKey(conf, keyBytes);
- Cipher cipher = Encryption.getCipher(conf, key.getAlgorithm());
+ Key kek = null;
- if (cipher == null) {
- throw new IllegalStateException("Cipher '" + key.getAlgorithm() + "' is not available");
+ // When there is key material, determine the appropriate KEK
+ boolean isKeyManagementEnabled = isKeyManagementEnabled(conf);
+ if (((trailer.getKEKChecksum() != 0L) || isKeyManagementEnabled) && systemKeyCache == null) {
+ throw new IOException("SystemKeyCache can't be null when using key management feature");
+ }
+      if (trailer.getKEKChecksum() != 0L && !isKeyManagementEnabled) {
+ throw new IOException(
+ "Seeing newer trailer with KEK checksum, but key management is disabled");
+ }
+
+ // Try STK lookup first if checksum is available.
+ if (trailer.getKEKChecksum() != 0L) {
+ LOG.debug("Looking for System Key with checksum: {}", trailer.getKEKChecksum());
+ ManagedKeyData systemKeyData =
+ systemKeyCache.getSystemKeyByChecksum(trailer.getKEKChecksum());
+ if (systemKeyData != null) {
+ kek = systemKeyData.getTheKey();
+ kekKeyData = systemKeyData;
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Found System Key with (custodian: {}, namespace: {}), checksum: {} and "
+ + "metadata hash: {}",
+ systemKeyData.getKeyCustodianEncoded(), systemKeyData.getKeyNamespace(),
+ systemKeyData.getKeyChecksum(), systemKeyData.getKeyMetadataHashEncoded());
+ }
+ }
}
+ // If STK lookup failed or no checksum available, try managed key lookup using metadata
+ if (kek == null && trailer.getKEKMetadata() != null) {
+ if (managedKeyDataCache == null) {
+ throw new IOException("KEK metadata is available, but ManagedKeyDataCache is null");
+ }
+ Throwable cause = null;
+ try {
+ kekKeyData = managedKeyDataCache.getEntry(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES,
+ trailer.getKeyNamespace(), trailer.getKEKMetadata(), keyBytes);
+ } catch (KeyException | IOException e) {
+ cause = e;
+ }
+        // When getEntry returns null, we treat it the same as the exception case.
+ if (kekKeyData == null) {
+ throw new IOException(
+ "Failed to get key data for KEK metadata: " + trailer.getKEKMetadata(), cause);
+ }
+ kek = kekKeyData.getTheKey();
+ } else if (kek == null && isKeyManagementEnabled) {
+ // No checksum or metadata available, fall back to latest system key for backwards
+ // compatibility
+ ManagedKeyData systemKeyData = systemKeyCache.getLatestSystemKey();
+ if (systemKeyData == null) {
+ throw new IOException("Failed to get latest system key");
+ }
+ kek = systemKeyData.getTheKey();
+ kekKeyData = systemKeyData;
+ }
+
+ Key key;
+ if (kek != null) {
+ try {
+ key = EncryptionUtil.unwrapKey(conf, null, keyBytes, kek);
+ } catch (KeyException | IOException e) {
+ throw new IOException("Failed to unwrap key with KEK checksum: "
+ + trailer.getKEKChecksum() + ", metadata: " + trailer.getKEKMetadata(), e);
+ }
+ } else {
+ key = EncryptionUtil.unwrapKey(conf, keyBytes);
+ }
+ // Use the algorithm the key wants
+ Cipher cipher = getCipherIfValid(conf, key.getAlgorithm(), key, null);
cryptoContext.setCipher(cipher);
cryptoContext.setKey(key);
+ cryptoContext.setKeyNamespace(trailer.getKeyNamespace());
+ cryptoContext.setKEKData(kekKeyData);
}
return cryptoContext;
}
/**
- * Check if key management is enabled in configuration. STUB - Always returns false in precursor.
+ * Get the cipher if the cipher name is valid, otherwise throw an exception.
+ * @param conf the configuration
+ * @param cipherName the cipher name to check
+ * @param key the key to check
+ * @param familyName the family name
+ * @return the cipher if the cipher name is valid
+ * @throws IllegalStateException if the cipher name is not valid
+ */
+ private static Cipher getCipherIfValid(Configuration conf, String cipherName, Key key,
+ String familyName) {
+    // Fail if misconfigured. We use the encryption type specified in the column
+    // schema as a sanity check on what the wrapped key is telling us.
+ if (key != null && !key.getAlgorithm().equalsIgnoreCase(cipherName)) {
+ throw new IllegalStateException(
+ "Encryption for family '" + familyName + "' configured with type '" + cipherName
+ + "' but key specifies algorithm '" + key.getAlgorithm() + "'");
+ }
+ // Use the algorithm the key wants
+ Cipher cipher = Encryption.getCipher(conf, cipherName);
+ if (cipher == null) {
+ throw new IllegalStateException("Cipher '" + cipherName + "' is not available");
+ }
+ return cipher;
+ }
+
+ /**
+ * From the given configuration, determine if key management is enabled.
* @param conf the configuration to check
- * @return false in stub implementation
+ * @return true if key management is enabled
*/
public static boolean isKeyManagementEnabled(Configuration conf) {
- return false;
+ return conf.getBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY,
+ HConstants.CRYPTO_MANAGED_KEYS_DEFAULT_ENABLED);
}
}
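
The read-path KEK resolution in `createEncryptionContext` above tries the STK cache by trailer checksum first, then a managed-key lookup by KEK metadata, then the latest system key when key management is enabled, and finally falls back to the legacy unwrap. The following is a dependency-free sketch of that decision ladder only (class and method names are hypothetical, not HBase APIs, and the real code additionally falls through to the metadata lookup on an STK cache miss):

```java
// Simplified illustration of the KEK source selection order on the read path.
// This is NOT the HBase implementation; it only mirrors the branch ordering.
public class KekResolutionSketch {
  static String resolveKekSource(long kekChecksum, String kekMetadata,
      boolean keyManagementEnabled) {
    if (kekChecksum != 0L) {
      // Trailer carries a KEK checksum: look up the System Key (STK) by checksum.
      return "stk-by-checksum";
    }
    if (kekMetadata != null) {
      // No checksum hit, but KEK metadata is present: managed key lookup.
      return "managed-key-by-metadata";
    }
    if (keyManagementEnabled) {
      // Neither checksum nor metadata: fall back to the latest STK.
      return "latest-stk";
    }
    // Key management disabled and no KEK hints: legacy unwrap of the DEK.
    return "legacy-unwrap";
  }

  public static void main(String[] args) {
    if (!resolveKekSource(42L, null, true).equals("stk-by-checksum")) {
      throw new AssertionError();
    }
    if (!resolveKekSource(0L, "meta", true).equals("managed-key-by-metadata")) {
      throw new AssertionError();
    }
    if (!resolveKekSource(0L, null, true).equals("latest-stk")) {
      throw new AssertionError();
    }
    if (!resolveKekSource(0L, null, false).equals("legacy-unwrap")) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```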
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/EncryptionTest.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/EncryptionTest.java
index 192343ae41d3..eb4d72c7745f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/util/EncryptionTest.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/util/EncryptionTest.java
@@ -28,7 +28,9 @@
import org.apache.hadoop.hbase.io.crypto.DefaultCipherProvider;
import org.apache.hadoop.hbase.io.crypto.Encryption;
import org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyStoreKeyProvider;
import org.apache.hadoop.hbase.security.EncryptionUtil;
+import org.apache.hadoop.hbase.security.SecurityUtil;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -48,12 +50,23 @@ private EncryptionTest() {
* Check that the configured key provider can be loaded and initialized, or throw an exception.
*/
public static void testKeyProvider(final Configuration conf) throws IOException {
- String providerClassName =
- conf.get(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY, KeyStoreKeyProvider.class.getName());
+ boolean isKeyManagementEnabled = SecurityUtil.isKeyManagementEnabled(conf);
+ String providerClassName;
+ if (isKeyManagementEnabled) {
+ providerClassName = conf.get(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ ManagedKeyStoreKeyProvider.class.getName());
+ } else {
+ providerClassName =
+ conf.get(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY, KeyStoreKeyProvider.class.getName());
+ }
Boolean result = keyProviderResults.get(providerClassName);
if (result == null) {
try {
- Encryption.getKeyProvider(conf);
+ if (isKeyManagementEnabled) {
+ Encryption.getManagedKeyProvider(conf);
+ } else {
+ Encryption.getKeyProvider(conf);
+ }
keyProviderResults.put(providerClassName, true);
} catch (Exception e) { // most likely a RuntimeException
keyProviderResults.put(providerClassName, false);
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/DummyKeyProvider.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/DummyKeyProvider.java
new file mode 100644
index 000000000000..16fadfd81a15
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/DummyKeyProvider.java
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import java.security.Key;
+import org.apache.hadoop.hbase.io.crypto.KeyProvider;
+
+public class DummyKeyProvider implements KeyProvider {
+ @Override
+ public void init(String params) {
+ }
+
+ @Override
+ public Key[] getKeys(String[] aliases) {
+ return null;
+ }
+
+ @Override
+ public Key getKey(String alias) {
+ return null;
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java
new file mode 100644
index 000000000000..c91539b7ed68
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java
@@ -0,0 +1,91 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import java.io.IOException;
+import java.security.Key;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.mockito.Mockito;
+
+public class ManagedKeyProviderInterceptor extends MockManagedKeyProvider {
+ public final MockManagedKeyProvider delegate;
+ public final MockManagedKeyProvider spy;
+
+ public ManagedKeyProviderInterceptor() {
+ this.delegate = new MockManagedKeyProvider();
+ this.spy = Mockito.spy(delegate);
+ }
+
+ @Override
+ public void initConfig(Configuration conf, String providerParameters) {
+ spy.initConfig(conf, providerParameters);
+ }
+
+ @Override
+ public ManagedKeyData getManagedKey(byte[] custodian, String namespace) throws IOException {
+ return spy.getManagedKey(custodian, namespace);
+ }
+
+ @Override
+ public ManagedKeyData getSystemKey(byte[] systemId) throws IOException {
+ return spy.getSystemKey(systemId);
+ }
+
+ @Override
+ public ManagedKeyData unwrapKey(String keyMetadata, byte[] wrappedKey) throws IOException {
+ return spy.unwrapKey(keyMetadata, wrappedKey);
+ }
+
+ @Override
+ public void init(String params) {
+ spy.init(params);
+ }
+
+ @Override
+ public Key getKey(String alias) {
+ return spy.getKey(alias);
+ }
+
+ @Override
+ public Key[] getKeys(String[] aliases) {
+ return spy.getKeys(aliases);
+ }
+
+ @Override
+ public void setMockedKeyState(String alias, ManagedKeyState state) {
+ delegate.setMockedKeyState(alias, state);
+ }
+
+ @Override
+ public void setMultikeyGenMode(boolean multikeyGenMode) {
+ delegate.setMultikeyGenMode(multikeyGenMode);
+ }
+
+ @Override
+ public ManagedKeyData getLastGeneratedKeyData(String alias, String keyNamespace) {
+ return delegate.getLastGeneratedKeyData(alias, keyNamespace);
+ }
+
+ @Override
+ public void setMockedKey(String alias, java.security.Key key, String keyNamespace) {
+ delegate.setMockedKey(alias, key, keyNamespace);
+ }
+}
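
`ManagedKeyProviderInterceptor` above follows a spy-on-delegate pattern: the lookup methods tests verify (`getManagedKey`, `unwrapKey`, etc.) route through the Mockito spy so interactions are recorded, while test-setup mutators (`setMockedKeyState`, `setMockedKey`) go straight to the delegate so they do not pollute the spy's recorded history. A dependency-free sketch of the same shape, with hypothetical names and a hand-rolled recorder standing in for Mockito:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a Mockito spy: record the invocation, then delegate.
public class InterceptorSketch {
  interface KeyLookup {
    String getKey(String alias);
  }

  static class RecordingSpy implements KeyLookup {
    final KeyLookup delegate;
    final List<String> calls = new ArrayList<>();

    RecordingSpy(KeyLookup delegate) {
      this.delegate = delegate;
    }

    @Override
    public String getKey(String alias) {
      // Verification path: every lookup is recorded before delegating.
      calls.add("getKey(" + alias + ")");
      return delegate.getKey(alias);
    }
  }

  public static void main(String[] args) {
    RecordingSpy spy = new RecordingSpy(alias -> "key-for-" + alias);
    // Lookups go through the spy, like the interceptor's getManagedKey.
    String k = spy.getKey("cf1");
    if (!k.equals("key-for-cf1")) throw new AssertionError();
    if (spy.calls.size() != 1) throw new AssertionError();
    System.out.println("ok");
  }
}
```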
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java
new file mode 100644
index 000000000000..9f2381e849bb
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.junit.After;
+import org.junit.Before;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class ManagedKeyTestBase {
+ private static final Logger LOG = LoggerFactory.getLogger(ManagedKeyTestBase.class);
+
+ protected HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+
+ @Before
+ public void setUp() throws Exception {
+ // Uncomment to enable trace logging for the tests that extend this base class.
+ // Log4jUtils.setLogLevel("org.apache.hadoop.hbase", "TRACE");
+ if (isWithKeyManagement()) {
+ TEST_UTIL.getConfiguration().set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ getKeyProviderClass().getName());
+ TEST_UTIL.getConfiguration().set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
+ TEST_UTIL.getConfiguration().set("hbase.coprocessor.master.classes",
+ KeymetaServiceEndpoint.class.getName());
+ }
+
+ // Start the minicluster if needed
+ if (isWithMiniClusterStart()) {
+ LOG.info("\n\nManagedKeyTestBase.setUp: Starting minicluster\n");
+ startMiniCluster();
+ LOG.info("\n\nManagedKeyTestBase.setUp: Minicluster successfully started\n");
+ }
+ }
+
+ protected void startMiniCluster() throws Exception {
+ startMiniCluster(getSystemTableNameToWaitFor());
+ }
+
+ protected void startMiniCluster(TableName tableNameToWaitFor) throws Exception {
+ TEST_UTIL.startMiniCluster(1);
+ waitForMasterInitialization(tableNameToWaitFor);
+ }
+
+ protected void restartMiniCluster() throws Exception {
+ restartMiniCluster(getSystemTableNameToWaitFor());
+ }
+
+ protected void restartMiniCluster(TableName tableNameToWaitFor) throws Exception {
+ LOG.info("\n\nManagedKeyTestBase.restartMiniCluster: Flushing caches\n");
+ TEST_UTIL.flush();
+
+ LOG.info("\n\nManagedKeyTestBase.restartMiniCluster: Shutting down cluster\n");
+ TEST_UTIL.shutdownMiniHBaseCluster();
+
+ LOG.info("\n\nManagedKeyTestBase.restartMiniCluster: Sleeping a bit\n");
+ Thread.sleep(2000);
+
+ LOG.info("\n\nManagedKeyTestBase.restartMiniCluster: Starting the cluster back up\n");
+ TEST_UTIL.restartHBaseCluster(1);
+
+ waitForMasterInitialization(tableNameToWaitFor);
+ }
+
+ private void waitForMasterInitialization(TableName tableNameToWaitFor) throws Exception {
+ LOG.info(
+ "\n\nManagedKeyTestBase.waitForMasterInitialization: Waiting for master initialization\n");
+ TEST_UTIL.waitFor(60000, () -> TEST_UTIL.getMiniHBaseCluster().getMaster().isInitialized());
+
+ LOG.info(
+ "\n\nManagedKeyTestBase.waitForMasterInitialization: Waiting for regions to be assigned\n");
+ TEST_UTIL.waitUntilAllRegionsAssigned(tableNameToWaitFor);
+ LOG.info("\n\nManagedKeyTestBase.waitForMasterInitialization: Regions assigned\n");
+ }
+
+ @After
+ public void tearDown() throws Exception {
+ LOG.info("\n\nManagedKeyTestBase.tearDown: Shutting down cluster\n");
+ TEST_UTIL.shutdownMiniCluster();
+ LOG.info("\n\nManagedKeyTestBase.tearDown: Cluster successfully shut down\n");
+ // Clear the provider cache to avoid test interference
+ Encryption.clearKeyProviderCache();
+ }
+
+  protected Class<? extends ManagedKeyProvider> getKeyProviderClass() {
+ return MockManagedKeyProvider.class;
+ }
+
+ protected boolean isWithKeyManagement() {
+ return true;
+ }
+
+ protected boolean isWithMiniClusterStart() {
+ return true;
+ }
+
+ protected TableName getSystemTableNameToWaitFor() {
+ return KeymetaTableAccessor.KEY_META_TABLE_NAME;
+ }
+
+ /**
+   * Useful hook for setting a breakpoint while debugging ruby tests: just log a message, and you
+   * can even set a conditional breakpoint on it.
+ */
+ protected void logMessage(String msg) {
+ LOG.info(msg);
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementBase.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementBase.java
new file mode 100644
index 000000000000..3f6ddad6a1ee
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementBase.java
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({ MasterTests.class, SmallTests.class })
+public class TestKeyManagementBase {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeyManagementBase.class);
+
+ @Test
+ public void testGetKeyProviderWithInvalidProvider() throws Exception {
+ // Setup configuration with a non-ManagedKeyProvider
+ Configuration conf = new Configuration();
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ "org.apache.hadoop.hbase.keymeta.DummyKeyProvider");
+
+ MasterServices mockServer = mock(MasterServices.class);
+ when(mockServer.getConfiguration()).thenReturn(conf);
+
+ final KeyManagementBase keyMgmt = new TestKeyManagement(mockServer);
+ assertEquals(mockServer, keyMgmt.getKeyManagementService());
+
+ // Should throw RuntimeException when provider cannot be cast to ManagedKeyProvider
+ RuntimeException exception = assertThrows(RuntimeException.class, () -> {
+ keyMgmt.getKeyProvider();
+ });
+ // The error message will be about ClassCastException since DummyKeyProvider doesn't implement
+ // ManagedKeyProvider
+ assertTrue(exception.getMessage().contains("ClassCastException")
+ || exception.getCause() instanceof ClassCastException);
+
+ exception = assertThrows(RuntimeException.class, () -> {
+ KeyManagementBase keyMgmt2 = new TestKeyManagement(conf);
+ keyMgmt2.getKeyProvider();
+ });
+ assertTrue(exception.getMessage().contains("ClassCastException")
+ || exception.getCause() instanceof ClassCastException);
+
+ assertThrows(IllegalArgumentException.class, () -> {
+ Configuration configuration = null;
+ new TestKeyManagement(configuration);
+ });
+ }
+
+ private static class TestKeyManagement extends KeyManagementBase {
+ public TestKeyManagement(MasterServices server) {
+ super(server);
+ }
+
+ public TestKeyManagement(Configuration configuration) {
+ super(configuration);
+ }
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java
new file mode 100644
index 000000000000..bfd8be319895
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java
@@ -0,0 +1,106 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.apache.hadoop.hbase.HConstants.SYSTEM_KEY_FILE_PREFIX;
+import static org.apache.hadoop.hbase.io.crypto.KeymetaTestUtils.SeekableByteArrayInputStream;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertThrows;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.ByteArrayOutputStream;
+import java.io.DataOutputStream;
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.KeymetaTestUtils;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.testclassification.MiscTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+
+@Category({ MiscTests.class, SmallTests.class })
+public class TestKeyManagementService {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeyManagementService.class);
+
+ @Rule
+ public TestName name = new TestName();
+
+ protected Configuration conf = new Configuration();
+ protected FileSystem mockFileSystem = mock(FileSystem.class);
+
+ @Before
+ public void setUp() throws Exception {
+ // Clear provider cache to avoid interference from other tests
+ Encryption.clearKeyProviderCache();
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ MockManagedKeyProvider.class.getName());
+ conf.set(HConstants.HBASE_ORIGINAL_DIR, "/tmp/hbase");
+ }
+
+ @Test
+ public void testDefaultKeyManagementServiceCreation() throws IOException {
+ // SystemKeyCache needs at least one valid key to be created, so setting up a mock FS that
+ // returns a mock file that returns a known mocked key metadata.
+ MockManagedKeyProvider provider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(conf);
+ ManagedKeyData keyData =
+ provider.getManagedKey("system".getBytes(), ManagedKeyData.KEY_SPACE_GLOBAL);
+ String fileName = SYSTEM_KEY_FILE_PREFIX + "1";
+ Path systemKeyDir = CommonFSUtils.getSystemKeyDir(conf);
+ FileStatus mockFileStatus = KeymetaTestUtils.createMockFile(fileName);
+
+ // Create a real FSDataInputStream that contains the key metadata in UTF format
+ ByteArrayOutputStream baos = new ByteArrayOutputStream();
+ DataOutputStream dos = new DataOutputStream(baos);
+ dos.writeUTF(keyData.getKeyMetadata());
+ dos.close();
+
+ SeekableByteArrayInputStream seekableStream =
+ new SeekableByteArrayInputStream(baos.toByteArray());
+ FSDataInputStream realStream = new FSDataInputStream(seekableStream);
+
+ when(mockFileSystem.open(eq(mockFileStatus.getPath()))).thenReturn(realStream);
+ when(mockFileSystem.globStatus(eq(new Path(systemKeyDir, SYSTEM_KEY_FILE_PREFIX + "*"))))
+ .thenReturn(new FileStatus[] { mockFileStatus });
+
+ KeyManagementService service = KeyManagementService.createDefault(conf, mockFileSystem);
+ assertNotNull(service);
+ assertNotNull(service.getSystemKeyCache());
+ assertNotNull(service.getManagedKeyDataCache());
+ assertThrows(UnsupportedOperationException.class, () -> service.getKeymetaAdmin());
+ }
+}
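
The mock filesystem setup in `testDefaultKeyManagementServiceCreation` writes the key metadata with `DataOutputStream.writeUTF`, so the reader on the other side must recover it with the matching `readUTF` (modified UTF-8 with a 2-byte length prefix). A standalone round-trip sketch of that framing, using only `java.io` (the class name is hypothetical):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class UtfRoundTripSketch {
  // Serialize with writeUTF and read back with readUTF, mirroring how the
  // test above frames key metadata inside the mocked system key file.
  static String roundTrip(String metadata) {
    try {
      ByteArrayOutputStream baos = new ByteArrayOutputStream();
      try (DataOutputStream dos = new DataOutputStream(baos)) {
        dos.writeUTF(metadata);
      }
      try (DataInputStream dis =
          new DataInputStream(new ByteArrayInputStream(baos.toByteArray()))) {
        return dis.readUTF();
      }
    } catch (IOException e) {
      // Byte-array streams do not actually raise IOException here.
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    String metadata = "mock-key-metadata";
    if (!roundTrip(metadata).equals(metadata)) throw new AssertionError();
    System.out.println("ok");
  }
}
```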
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java
new file mode 100644
index 000000000000..36df6a32ccd8
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.security.Key;
+import java.security.KeyException;
+import javax.crypto.KeyGenerator;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+/**
+ * Tests KeyManagementUtils for the difficult to cover error paths.
+ */
+@Category({ MasterTests.class, SmallTests.class })
+public class TestKeyManagementUtils {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeyManagementUtils.class);
+
+ private ManagedKeyProvider mockProvider;
+ private KeymetaTableAccessor mockAccessor;
+ private byte[] keyCust;
+ private String keyNamespace;
+ private String keyMetadata;
+ private byte[] wrappedKey;
+ private Key testKey;
+
+ @Before
+ public void setUp() throws Exception {
+ mockProvider = mock(ManagedKeyProvider.class);
+ mockAccessor = mock(KeymetaTableAccessor.class);
+ keyCust = "testCustodian".getBytes();
+ keyNamespace = "testNamespace";
+ keyMetadata = "testMetadata";
+ wrappedKey = new byte[] { 1, 2, 3, 4 };
+
+ KeyGenerator keyGen = KeyGenerator.getInstance("AES");
+ keyGen.init(256);
+ testKey = keyGen.generateKey();
+ }
+
+ @Test
+ public void testRetrieveKeyWithNullResponse() throws Exception {
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(null);
+
+ KeyException exception = assertThrows(KeyException.class, () -> {
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
+ keyMetadata, wrappedKey);
+ });
+
+ assertNotNull(exception.getMessage());
+ assertEquals(true, exception.getMessage().contains("Invalid key that is null"));
+ }
+
+ @Test
+ public void testRetrieveKeyWithNullMetadata() throws Exception {
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ // Create a mock that returns null for getKeyMetadata()
+ ManagedKeyData mockKeyData = mock(ManagedKeyData.class);
+ when(mockKeyData.getKeyMetadata()).thenReturn(null);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(mockKeyData);
+
+ KeyException exception = assertThrows(KeyException.class, () -> {
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
+ keyMetadata, wrappedKey);
+ });
+
+ assertNotNull(exception.getMessage());
+ assertEquals(true, exception.getMessage().contains("Invalid key that is null"));
+ }
+
+ @Test
+ public void testRetrieveKeyWithMismatchedMetadata() throws Exception {
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ String differentMetadata = "differentMetadata";
+ ManagedKeyData keyDataWithDifferentMetadata =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, differentMetadata);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithDifferentMetadata);
+
+ KeyException exception = assertThrows(KeyException.class, () -> {
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
+ keyMetadata, wrappedKey);
+ });
+
+ assertNotNull(exception.getMessage());
+ assertEquals(true, exception.getMessage().contains("invalid metadata"));
+ }
+
+ @Test
+ public void testRetrieveKeyWithDisabledState() throws Exception {
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ ManagedKeyData keyDataWithDisabledState =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.DISABLED, keyMetadata);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithDisabledState);
+
+ KeyException exception = assertThrows(KeyException.class, () -> {
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
+ keyMetadata, wrappedKey);
+ });
+
+ assertNotNull(exception.getMessage());
+ assertEquals(true,
+ exception.getMessage().contains("Invalid key that is null or having invalid metadata"));
+ }
+
+ @Test
+ public void testRetrieveKeySuccess() throws Exception {
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ ManagedKeyData validKeyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(validKeyData);
+
+ ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust,
+ keyCust, keyNamespace, keyMetadata, wrappedKey);
+
+ assertNotNull(result);
+ assertEquals(keyMetadata, result.getKeyMetadata());
+ assertEquals(ManagedKeyState.ACTIVE, result.getKeyState());
+ }
+
+ @Test
+ public void testRetrieveKeyWithFailedState() throws Exception {
+ // FAILED state is allowed (unlike DISABLED), so this should succeed
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ ManagedKeyData keyDataWithFailedState =
+ new ManagedKeyData(keyCust, keyNamespace, null, ManagedKeyState.FAILED, keyMetadata);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithFailedState);
+
+ ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust,
+ keyCust, keyNamespace, keyMetadata, wrappedKey);
+
+ assertNotNull(result);
+ assertEquals(ManagedKeyState.FAILED, result.getKeyState());
+ }
+
+ @Test
+ public void testRetrieveKeyWithInactiveState() throws Exception {
+ // INACTIVE state is allowed, so this should succeed
+ String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ ManagedKeyData keyDataWithInactiveState =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.INACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithInactiveState);
+
+ ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust,
+ keyCust, keyNamespace, keyMetadata, wrappedKey);
+
+ assertNotNull(result);
+ assertEquals(ManagedKeyState.INACTIVE, result.getKeyState());
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java
new file mode 100644
index 000000000000..1012d2b5a08f
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertThrows;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.HFileLink;
+import org.apache.hadoop.hbase.io.crypto.KeymetaTestUtils;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+import org.apache.hadoop.hbase.regionserver.StoreContext;
+import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
+import org.apache.hadoop.hbase.testclassification.MiscTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({ MiscTests.class, SmallTests.class })
+public class TestKeyNamespaceUtil {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeyNamespaceUtil.class);
+
+ @Test
+ public void testConstructKeyNamespace_FromTableDescriptorAndFamilyDescriptor() {
+ TableDescriptor tableDescriptor = mock(TableDescriptor.class);
+ ColumnFamilyDescriptor familyDescriptor = mock(ColumnFamilyDescriptor.class);
+ when(tableDescriptor.getTableName()).thenReturn(TableName.valueOf("test"));
+ when(familyDescriptor.getNameAsString()).thenReturn("family");
+ String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(tableDescriptor, familyDescriptor);
+ assertEquals("test/family", keyNamespace);
+ }
+
+ @Test
+ public void testConstructKeyNamespace_FromStoreContext() {
+ // Test store context path construction
+ TableName tableName = TableName.valueOf("test");
+ RegionInfo regionInfo = RegionInfoBuilder.newBuilder(tableName).build();
+ HRegionFileSystem regionFileSystem = mock(HRegionFileSystem.class);
+ when(regionFileSystem.getRegionInfo()).thenReturn(regionInfo);
+
+ ColumnFamilyDescriptor familyDescriptor = ColumnFamilyDescriptorBuilder.of("family");
+
+ StoreContext storeContext = StoreContext.getBuilder().withRegionFileSystem(regionFileSystem)
+ .withColumnFamilyDescriptor(familyDescriptor).build();
+
+ String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(storeContext);
+ assertEquals("test/family", keyNamespace);
+ }
+
+ @Test
+ public void testConstructKeyNamespace_FromStoreFileInfo_RegularFile() {
+ // Test a regular (non-linked) store file
+ StoreFileInfo storeFileInfo = mock(StoreFileInfo.class);
+ when(storeFileInfo.isLink()).thenReturn(false);
+ Path path = KeymetaTestUtils.createMockPath("test", "family");
+ when(storeFileInfo.getPath()).thenReturn(path);
+ String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(storeFileInfo);
+ assertEquals("test/family", keyNamespace);
+ }
+
+ @Test
+ public void testConstructKeyNamespace_FromStoreFileInfo_LinkedFile() {
+ // Test a linked store file resolved via HFileLink
+ StoreFileInfo storeFileInfo = mock(StoreFileInfo.class);
+ HFileLink link = mock(HFileLink.class);
+ when(storeFileInfo.isLink()).thenReturn(true);
+ Path path = KeymetaTestUtils.createMockPath("test", "family");
+ when(link.getOriginPath()).thenReturn(path);
+ when(storeFileInfo.getLink()).thenReturn(link);
+ String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(storeFileInfo);
+ assertEquals("test/family", keyNamespace);
+ }
+
+ @Test
+ public void testConstructKeyNamespace_FromPath() {
+ // Test path parsing with different HBase directory structures
+ Path path = KeymetaTestUtils.createMockPath("test", "family");
+ String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(path);
+ assertEquals("test/family", keyNamespace);
+ }
+
+ @Test
+ public void testConstructKeyNamespace_FromStrings() {
+ // Test string-based construction
+ String tableName = "test";
+ String family = "family";
+ String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(tableName, family);
+ assertEquals("test/family", keyNamespace);
+ }
+
+ @Test
+ public void testConstructKeyNamespace_NullChecks() {
+ // Test null inputs for both table name and family
+ assertThrows(NullPointerException.class,
+ () -> KeyNamespaceUtil.constructKeyNamespace(null, "family"));
+ assertThrows(NullPointerException.class,
+ () -> KeyNamespaceUtil.constructKeyNamespace("test", null));
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java
new file mode 100644
index 000000000000..0e9c0eae2393
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java
@@ -0,0 +1,561 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.DISABLED;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyString;
+import static org.mockito.ArgumentMatchers.argThat;
+import static org.mockito.ArgumentMatchers.contains;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+import static org.mockito.Mockito.withSettings;
+
+import java.io.IOException;
+import java.security.KeyException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import javax.crypto.spec.SecretKeySpec;
+import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.coprocessor.HasMasterServices;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.keymeta.KeymetaServiceEndpoint.KeymetaAdminServiceImpl;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mock;
+import org.mockito.Mockito;
+import org.mockito.MockitoAnnotations;
+
+import org.apache.hbase.thirdparty.com.google.protobuf.ByteString;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcCallback;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcController;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.EmptyMsg;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.GetManagedKeysResponse;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyEntryRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyResponse;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyState;
+
+@Category({ MasterTests.class, SmallTests.class })
+public class TestKeymetaEndpoint {
+
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeymetaEndpoint.class);
+
+ private static final String KEY_CUST = "keyCust";
+ private static final String KEY_NAMESPACE = "keyNamespace";
+ private static final String KEY_METADATA1 = "keyMetadata1";
+ private static final String KEY_METADATA2 = "keyMetadata2";
+
+ @Mock
+ private RpcController controller;
+ @Mock
+ private MasterServices master;
+ @Mock
+ private RpcCallback<ManagedKeyResponse> enableKeyManagementDone;
+ @Mock
+ private RpcCallback<GetManagedKeysResponse> getManagedKeysDone;
+ @Mock
+ private RpcCallback<ManagedKeyResponse> disableKeyManagementDone;
+ @Mock
+ private RpcCallback<ManagedKeyResponse> disableManagedKeyDone;
+ @Mock
+ private RpcCallback<ManagedKeyResponse> rotateManagedKeyDone;
+ @Mock
+ private RpcCallback<EmptyMsg> refreshManagedKeysDone;
+
+ KeymetaServiceEndpoint keymetaServiceEndpoint;
+ private ManagedKeyResponse.Builder responseBuilder;
+ private ManagedKeyRequest.Builder requestBuilder;
+ private KeymetaAdminServiceImpl keyMetaAdminService;
+ private ManagedKeyData keyData1;
+ private ManagedKeyData keyData2;
+
+ @Mock
+ private KeymetaAdmin keymetaAdmin;
+
+ @Before
+ public void setUp() throws Exception {
+ MockitoAnnotations.initMocks(this);
+ keymetaServiceEndpoint = new KeymetaServiceEndpoint();
+ CoprocessorEnvironment env =
+ mock(CoprocessorEnvironment.class, withSettings().extraInterfaces(HasMasterServices.class));
+ when(((HasMasterServices) env).getMasterServices()).thenReturn(master);
+ keymetaServiceEndpoint.start(env);
+ keyMetaAdminService =
+ (KeymetaAdminServiceImpl) keymetaServiceEndpoint.getServices().iterator().next();
+ responseBuilder = ManagedKeyResponse.newBuilder().setKeyState(ManagedKeyState.KEY_ACTIVE);
+ requestBuilder =
+ ManagedKeyRequest.newBuilder().setKeyNamespace(ManagedKeyData.KEY_SPACE_GLOBAL);
+ keyData1 = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE,
+ new SecretKeySpec("key1".getBytes(), "AES"), ACTIVE, KEY_METADATA1);
+ keyData2 = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE,
+ new SecretKeySpec("key2".getBytes(), "AES"), ACTIVE, KEY_METADATA2);
+ when(master.getKeymetaAdmin()).thenReturn(keymetaAdmin);
+ }
+
+ @Test
+ public void testCreateResponseBuilderValid() throws IOException {
+ byte[] cust = "testKey".getBytes();
+ ManagedKeyRequest request = requestBuilder.setKeyCust(ByteString.copyFrom(cust)).build();
+
+ ManagedKeyResponse.Builder result = ManagedKeyResponse.newBuilder();
+ KeymetaServiceEndpoint.initManagedKeyResponseBuilder(controller, request, result);
+
+ assertNotNull(result);
+ assertArrayEquals(cust, result.getKeyCust().toByteArray());
+ verify(controller, never()).setFailed(anyString());
+ }
+
+ @Test
+ public void testCreateResponseBuilderEmptyCust() throws IOException {
+ ManagedKeyRequest request = requestBuilder.setKeyCust(ByteString.EMPTY).build();
+
+ IOException exception = assertThrows(IOException.class, () -> KeymetaServiceEndpoint
+ .initManagedKeyResponseBuilder(controller, request, ManagedKeyResponse.newBuilder()));
+
+ assertEquals("key_cust must not be empty", exception.getMessage());
+ }
+
+ @Test
+ public void testGenerateKeyStateResponse() throws Exception {
+ // Arrange
+ ManagedKeyResponse response =
+ responseBuilder.setKeyCust(ByteString.copyFrom(keyData1.getKeyCustodian()))
+ .setKeyNamespace(keyData1.getKeyNamespace()).build();
+ List<ManagedKeyData> managedKeyStates = Arrays.asList(keyData1, keyData2);
+
+ // Act
+ GetManagedKeysResponse result =
+ KeymetaServiceEndpoint.generateKeyStateResponse(managedKeyStates, responseBuilder);
+
+ // Assert
+ assertNotNull(response);
+ assertNotNull(result.getStateList());
+ assertEquals(2, result.getStateList().size());
+ assertEquals(ManagedKeyState.KEY_ACTIVE, result.getStateList().get(0).getKeyState());
+ assertEquals(0, Bytes.compareTo(keyData1.getKeyCustodian(),
+ result.getStateList().get(0).getKeyCust().toByteArray()));
+ assertEquals(keyData1.getKeyNamespace(), result.getStateList().get(0).getKeyNamespace());
+ verify(controller, never()).setFailed(anyString());
+ }
+
+ @Test
+ public void testGenerateKeyStateResponse_Empty() throws Exception {
+ // Arrange
+ ManagedKeyResponse response =
+ responseBuilder.setKeyCust(ByteString.copyFrom(keyData1.getKeyCustodian()))
+ .setKeyNamespace(keyData1.getKeyNamespace()).build();
+ List<ManagedKeyData> managedKeyStates = new ArrayList<>();
+
+ // Act
+ GetManagedKeysResponse result =
+ KeymetaServiceEndpoint.generateKeyStateResponse(managedKeyStates, responseBuilder);
+
+ // Assert
+ assertNotNull(response);
+ assertNotNull(result.getStateList());
+ assertEquals(0, result.getStateList().size());
+ verify(controller, never()).setFailed(anyString());
+ }
+
+ @Test
+ public void testEnableKeyManagement_Success() throws Exception {
+ doTestServiceCallForSuccess((controller, request, done) -> keyMetaAdminService
+ .enableKeyManagement(controller, request, done), enableKeyManagementDone);
+ }
+
+ @Test
+ public void testGetManagedKeys_Success() throws Exception {
+ doTestServiceCallForSuccess(
+ (controller, request, done) -> keyMetaAdminService.getManagedKeys(controller, request, done),
+ getManagedKeysDone);
+ }
+
+ private <T> void doTestServiceCallForSuccess(ServiceCall<T> svc, RpcCallback<T> done)
+ throws Exception {
+ // Arrange
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+ when(keymetaAdmin.enableKeyManagement(any(), any())).thenReturn(keyData1);
+
+ // Act
+ svc.call(controller, request, done);
+
+ // Assert
+ verify(done).run(any());
+ verify(controller, never()).setFailed(anyString());
+ }
+
+ private interface ServiceCall<T> {
+ void call(RpcController controller, ManagedKeyRequest request, RpcCallback<T> done)
+ throws Exception;
+ }
+
+ @Test
+ public void testEnableKeyManagement_InvalidCust() throws Exception {
+ // Arrange
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.EMPTY).setKeyNamespace(KEY_NAMESPACE).build();
+
+ // Act
+ keyMetaAdminService.enableKeyManagement(controller, request, enableKeyManagementDone);
+
+ // Assert
+ verify(controller).setFailed(contains("key_cust must not be empty"));
+ verify(keymetaAdmin, never()).enableKeyManagement(any(), any());
+ verify(enableKeyManagementDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testEnableKeyManagement_IOException() throws Exception {
+ // Arrange
+ when(keymetaAdmin.enableKeyManagement(any(), any())).thenThrow(IOException.class);
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+
+ // Act
+ keyMetaAdminService.enableKeyManagement(controller, request, enableKeyManagementDone);
+
+ // Assert
+ verify(controller).setFailed(contains("IOException"));
+ verify(keymetaAdmin).enableKeyManagement(any(), any());
+ verify(enableKeyManagementDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testGetManagedKeys_IOException() throws Exception {
+ doTestGetManagedKeysError(IOException.class);
+ }
+
+ @Test
+ public void testGetManagedKeys_KeyException() throws Exception {
+ doTestGetManagedKeysError(KeyException.class);
+ }
+
+ private void doTestGetManagedKeysError(Class<? extends Exception> exType) throws Exception {
+ // Arrange
+ when(keymetaAdmin.getManagedKeys(any(), any())).thenThrow(exType);
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+
+ // Act
+ keyMetaAdminService.getManagedKeys(controller, request, getManagedKeysDone);
+
+ // Assert
+ verify(controller).setFailed(contains(exType.getSimpleName()));
+ verify(keymetaAdmin).getManagedKeys(any(), any());
+ verify(getManagedKeysDone).run(GetManagedKeysResponse.getDefaultInstance());
+ }
+
+ @Test
+ public void testGetManagedKeys_InvalidCust() throws Exception {
+ // Arrange
+ ManagedKeyRequest request = requestBuilder.setKeyCust(ByteString.EMPTY).build();
+
+ keyMetaAdminService.getManagedKeys(controller, request, getManagedKeysDone);
+
+ verify(controller).setFailed(contains("key_cust must not be empty"));
+ verify(keymetaAdmin, never()).getManagedKeys(any(), any());
+ verify(getManagedKeysDone).run(argThat(response -> response.getStateList().isEmpty()));
+ }
+
+ @Test
+ public void testDisableKeyManagement_Success() throws Exception {
+ // Arrange
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+ ManagedKeyData disabledKey = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE, DISABLED);
+ when(keymetaAdmin.disableKeyManagement(any(), any())).thenReturn(disabledKey);
+ // Act
+ keyMetaAdminService.disableKeyManagement(controller, request, disableKeyManagementDone);
+
+ // Assert
+ verify(controller, never()).setFailed(anyString());
+ verify(disableKeyManagementDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_INACTIVE));
+ }
+
+ @Test
+ public void testDisableKeyManagement_IOException() throws Exception {
+ doTestDisableKeyManagementError(IOException.class);
+ }
+
+ @Test
+ public void testDisableKeyManagement_KeyException() throws Exception {
+ doTestDisableKeyManagementError(KeyException.class);
+ }
+
+ private void doTestDisableKeyManagementError(Class<? extends Exception> exType) throws Exception {
+ // Arrange
+ when(keymetaAdmin.disableKeyManagement(any(), any())).thenThrow(exType);
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+
+ // Act
+ keyMetaAdminService.disableKeyManagement(controller, request, disableKeyManagementDone);
+
+ // Assert
+ verify(controller).setFailed(contains(exType.getSimpleName()));
+ verify(keymetaAdmin).disableKeyManagement(any(), any());
+ verify(disableKeyManagementDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testDisableKeyManagement_InvalidCust() throws Exception {
+ // Arrange
+ ManagedKeyRequest request = requestBuilder.setKeyCust(ByteString.EMPTY).build();
+
+ keyMetaAdminService.disableKeyManagement(controller, request, disableKeyManagementDone);
+
+ verify(controller).setFailed(contains("key_cust must not be empty"));
+ verify(keymetaAdmin, never()).disableKeyManagement(any(), any());
+ verify(disableKeyManagementDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testDisableKeyManagement_InvalidNamespace() throws Exception {
+ // Arrange
+ ManagedKeyRequest request = requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes()))
+ .setKeyNamespace("").build();
+
+ keyMetaAdminService.disableKeyManagement(controller, request, disableKeyManagementDone);
+
+ verify(controller).setFailed(contains("key_namespace must not be empty"));
+ verify(keymetaAdmin, never()).disableKeyManagement(any(), any());
+ verify(disableKeyManagementDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testDisableManagedKey_Success() throws Exception {
+ // Arrange
+ ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+ when(keymetaAdmin.disableManagedKey(any(), any(), any())).thenReturn(keyData1);
+
+ // Act
+ keyMetaAdminService.disableManagedKey(controller, request, disableManagedKeyDone);
+
+ // Assert
+ verify(disableManagedKeyDone).run(any());
+ verify(controller, never()).setFailed(anyString());
+ }
+
+ @Test
+ public void testDisableManagedKey_IOException() throws Exception {
+ doTestDisableManagedKeyError(IOException.class);
+ }
+
+ @Test
+ public void testDisableManagedKey_KeyException() throws Exception {
+ doTestDisableManagedKeyError(KeyException.class);
+ }
+
+ private void doTestDisableManagedKeyError(Class<? extends Exception> exType) throws Exception {
+ // Arrange
+ when(keymetaAdmin.disableManagedKey(any(), any(), any())).thenThrow(exType);
+ ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+
+ // Act
+ keyMetaAdminService.disableManagedKey(controller, request, disableManagedKeyDone);
+
+ // Assert
+ verify(controller).setFailed(contains(exType.getSimpleName()));
+ verify(keymetaAdmin).disableManagedKey(any(), any(), any());
+ verify(disableManagedKeyDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testDisableManagedKey_InvalidCust() throws Exception {
+ // Arrange
+ ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(
+ requestBuilder.setKeyCust(ByteString.EMPTY).setKeyNamespace(KEY_NAMESPACE).build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+
+ keyMetaAdminService.disableManagedKey(controller, request, disableManagedKeyDone);
+
+ verify(controller).setFailed(contains("key_cust must not be empty"));
+ verify(keymetaAdmin, never()).disableManagedKey(any(), any(), any());
+ verify(disableManagedKeyDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testDisableManagedKey_InvalidNamespace() throws Exception {
+ // Arrange
+ ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes()))
+ .setKeyNamespace("").build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+
+ keyMetaAdminService.disableManagedKey(controller, request, disableManagedKeyDone);
+
+ verify(controller).setFailed(contains("key_namespace must not be empty"));
+ verify(keymetaAdmin, never()).disableManagedKey(any(), any(), any());
+ verify(disableManagedKeyDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testRotateManagedKey_Success() throws Exception {
+ // Arrange
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+ when(keymetaAdmin.rotateManagedKey(any(), any())).thenReturn(keyData1);
+
+ // Act
+ keyMetaAdminService.rotateManagedKey(controller, request, rotateManagedKeyDone);
+
+ // Assert
+ verify(rotateManagedKeyDone).run(any());
+ verify(controller, never()).setFailed(anyString());
+ }
+
+ @Test
+ public void testRotateManagedKey_IOException() throws Exception {
+ doTestRotateManagedKeyError(IOException.class);
+ }
+
+ @Test
+ public void testRotateManagedKey_KeyException() throws Exception {
+ doTestRotateManagedKeyError(KeyException.class);
+ }
+
+ private void doTestRotateManagedKeyError(Class<? extends Exception> exType) throws Exception {
+ // Arrange
+ when(keymetaAdmin.rotateManagedKey(any(), any())).thenThrow(exType);
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+
+ // Act
+ keyMetaAdminService.rotateManagedKey(controller, request, rotateManagedKeyDone);
+
+ // Assert
+ verify(controller).setFailed(contains(exType.getSimpleName()));
+ verify(keymetaAdmin).rotateManagedKey(any(), any());
+ verify(rotateManagedKeyDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testRotateManagedKey_InvalidCust() throws Exception {
+ // Arrange
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.EMPTY).setKeyNamespace(KEY_NAMESPACE).build();
+
+ keyMetaAdminService.rotateManagedKey(controller, request, rotateManagedKeyDone);
+
+ verify(controller).setFailed(contains("key_cust must not be empty"));
+ verify(keymetaAdmin, never()).rotateManagedKey(any(), any());
+ verify(rotateManagedKeyDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testRefreshManagedKeys_Success() throws Exception {
+ // Arrange
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+
+ // Act
+ keyMetaAdminService.refreshManagedKeys(controller, request, refreshManagedKeysDone);
+
+ // Assert
+ verify(refreshManagedKeysDone).run(any());
+ verify(controller, never()).setFailed(anyString());
+ }
+
+ @Test
+ public void testRefreshManagedKeys_IOException() throws Exception {
+ doTestRefreshManagedKeysError(IOException.class);
+ }
+
+ @Test
+ public void testRefreshManagedKeys_KeyException() throws Exception {
+ doTestRefreshManagedKeysError(KeyException.class);
+ }
+
+ private void doTestRefreshManagedKeysError(Class<? extends Exception> exType) throws Exception {
+ // Arrange
+ Mockito.doThrow(exType).when(keymetaAdmin).refreshManagedKeys(any(), any());
+ ManagedKeyRequest request =
+ requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
+
+ // Act
+ keyMetaAdminService.refreshManagedKeys(controller, request, refreshManagedKeysDone);
+
+ // Assert
+ verify(controller).setFailed(contains(exType.getSimpleName()));
+ verify(keymetaAdmin).refreshManagedKeys(any(), any());
+ verify(refreshManagedKeysDone).run(EmptyMsg.getDefaultInstance());
+ }
+
+ @Test
+ public void testRefreshManagedKeys_InvalidCust() throws Exception {
+ // Arrange
+ ManagedKeyRequest request = requestBuilder.setKeyCust(ByteString.EMPTY).build();
+
+ keyMetaAdminService.refreshManagedKeys(controller, request, refreshManagedKeysDone);
+
+ verify(controller).setFailed(contains("key_cust must not be empty"));
+ verify(keymetaAdmin, never()).refreshManagedKeys(any(), any());
+ verify(refreshManagedKeysDone).run(EmptyMsg.getDefaultInstance());
+ }
+
+ @Test
+ public void testRefreshManagedKeys_InvalidNamespace() throws Exception {
+ // Arrange
+ ManagedKeyRequest request = requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes()))
+ .setKeyNamespace("").build();
+
+ // Act
+ keyMetaAdminService.refreshManagedKeys(controller, request, refreshManagedKeysDone);
+
+ // Assert
+ verify(controller).setFailed(contains("key_namespace must not be empty"));
+ verify(keymetaAdmin, never()).refreshManagedKeys(any(), any());
+ verify(refreshManagedKeysDone).run(EmptyMsg.getDefaultInstance());
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java
new file mode 100644
index 000000000000..fde1d81481c1
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java
@@ -0,0 +1,591 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE_DISABLED;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.DISABLED;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.FAILED;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.INACTIVE;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.INACTIVE_DISABLED;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.DEK_CHECKSUM_QUAL_BYTES;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.DEK_METADATA_QUAL_BYTES;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.DEK_WRAPPED_BY_STK_QUAL_BYTES;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.KEY_META_INFO_FAMILY;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.KEY_STATE_QUAL_BYTES;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.REFRESHED_TIMESTAMP_QUAL_BYTES;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.STK_CHECKSUM_QUAL_BYTES;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.constructRowKeyForCustNamespace;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.constructRowKeyForMetadata;
+import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.parseFromResult;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.anyLong;
+import static org.mockito.Mockito.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.NavigableMap;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Durability;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.security.EncryptionUtil;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameter;
+import org.junit.runners.Suite;
+import org.mockito.ArgumentCaptor;
+import org.mockito.Captor;
+import org.mockito.Mock;
+import org.mockito.Mockito;
+import org.mockito.MockitoAnnotations;
+
+@RunWith(Suite.class)
+@Suite.SuiteClasses({ TestKeymetaTableAccessor.TestAdd.class,
+ TestKeymetaTableAccessor.TestAddWithNullableFields.class, TestKeymetaTableAccessor.TestGet.class,
+ TestKeymetaTableAccessor.TestDisableKey.class,
+ TestKeymetaTableAccessor.TestUpdateActiveState.class, })
+@Category({ MasterTests.class, SmallTests.class })
+public class TestKeymetaTableAccessor {
+ protected static final String ALIAS = "custId1";
+ protected static final byte[] CUST_ID = ALIAS.getBytes();
+ protected static final String KEY_NAMESPACE = "namespace";
+ protected static String KEY_METADATA = "metadata1";
+
+ @Mock
+ protected MasterServices server;
+ @Mock
+ protected Connection connection;
+ @Mock
+ protected Table table;
+ @Mock
+ protected ResultScanner scanner;
+ @Mock
+ protected SystemKeyCache systemKeyCache;
+ @Mock
+ protected KeyManagementService keyManagementService;
+
+ protected KeymetaTableAccessor accessor;
+ protected Configuration conf = HBaseConfiguration.create();
+ protected MockManagedKeyProvider managedKeyProvider;
+ protected ManagedKeyData latestSystemKey;
+
+ private AutoCloseable closeableMocks;
+
+ @Before
+ public void setUp() throws Exception {
+ closeableMocks = MockitoAnnotations.openMocks(this);
+
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ MockManagedKeyProvider.class.getName());
+
+ when(server.getConnection()).thenReturn(connection);
+ when(connection.getTable(KeymetaTableAccessor.KEY_META_TABLE_NAME)).thenReturn(table);
+ when(server.getSystemKeyCache()).thenReturn(systemKeyCache);
+ when(server.getConfiguration()).thenReturn(conf);
+ when(server.getKeyManagementService()).thenReturn(keyManagementService);
+ when(keyManagementService.getConfiguration()).thenReturn(conf);
+ when(keyManagementService.getSystemKeyCache()).thenReturn(systemKeyCache);
+
+ accessor = new KeymetaTableAccessor(server);
+ managedKeyProvider = new MockManagedKeyProvider();
+ managedKeyProvider.initConfig(conf, "");
+
+ latestSystemKey = managedKeyProvider.getSystemKey("system-id".getBytes());
+ when(systemKeyCache.getLatestSystemKey()).thenReturn(latestSystemKey);
+ when(systemKeyCache.getSystemKeyByChecksum(anyLong())).thenReturn(latestSystemKey);
+ }
+
+ @After
+ public void tearDown() throws Exception {
+ closeableMocks.close();
+ }
+
+ @RunWith(Parameterized.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestAdd extends TestKeymetaTableAccessor {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE = HBaseClassTestRule.forClass(TestAdd.class);
+
+ @Parameter(0)
+ public ManagedKeyState keyState;
+
+ @Parameterized.Parameters(name = "{index},keyState={0}")
+ public static Collection<Object[]> data() {
+ return Arrays.asList(new Object[][] { { ACTIVE }, { FAILED }, { INACTIVE }, { DISABLED }, });
+ }
+
+ @Captor
+ private ArgumentCaptor<List<Put>> putCaptor;
+
+ @Test
+ public void testAddKey() throws Exception {
+ managedKeyProvider.setMockedKeyState(ALIAS, keyState);
+ ManagedKeyData keyData = managedKeyProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+
+ accessor.addKey(keyData);
+
+ verify(table).put(putCaptor.capture());
+ List<Put> puts = putCaptor.getValue();
+ assertEquals(keyState == ACTIVE ? 2 : 1, puts.size());
+ if (keyState == ACTIVE) {
+ assertPut(keyData, puts.get(0), constructRowKeyForCustNamespace(keyData), ACTIVE);
+ assertPut(keyData, puts.get(1), constructRowKeyForMetadata(keyData), ACTIVE);
+ } else {
+ assertPut(keyData, puts.get(0), constructRowKeyForMetadata(keyData), keyState);
+ }
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestAddWithNullableFields extends TestKeymetaTableAccessor {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestAddWithNullableFields.class);
+
+ @Captor
+ private ArgumentCaptor<List<Mutation>> batchCaptor;
+
+ @Test
+ public void testAddKeyManagementStateMarker() throws Exception {
+ managedKeyProvider.setMockedKeyState(ALIAS, FAILED);
+ ManagedKeyData keyData = new ManagedKeyData(CUST_ID, KEY_SPACE_GLOBAL, FAILED);
+
+ accessor.addKeyManagementStateMarker(keyData.getKeyCustodian(), keyData.getKeyNamespace(),
+ keyData.getKeyState());
+
+ verify(table).batch(batchCaptor.capture(), any());
+ List<Mutation> mutations = batchCaptor.getValue();
+ assertEquals(2, mutations.size());
+ Mutation mutation1 = mutations.get(0);
+ Mutation mutation2 = mutations.get(1);
+ assertTrue(mutation1 instanceof Put);
+ assertTrue(mutation2 instanceof Delete);
+ Put put = (Put) mutation1;
+ Delete delete = (Delete) mutation2;
+
+ // Verify the row key uses state value for metadata hash
+ byte[] expectedRowKey = constructRowKeyForCustNamespace(CUST_ID, KEY_SPACE_GLOBAL);
+ assertEquals(0, Bytes.compareTo(expectedRowKey, put.getRow()));
+
+ Map<Bytes, Bytes> valueMap = getValueMap(put);
+
+ // Verify key-related columns are not present
+ assertNull(valueMap.get(new Bytes(DEK_CHECKSUM_QUAL_BYTES)));
+ assertNull(valueMap.get(new Bytes(DEK_WRAPPED_BY_STK_QUAL_BYTES)));
+ assertNull(valueMap.get(new Bytes(STK_CHECKSUM_QUAL_BYTES)));
+
+ assertEquals(Durability.SKIP_WAL, put.getDurability());
+ assertEquals(HConstants.SYSTEMTABLE_QOS, put.getPriority());
+
+ // Verify state is set correctly
+ assertEquals(new Bytes(new byte[] { FAILED.getVal() }),
+ valueMap.get(new Bytes(KEY_STATE_QUAL_BYTES)));
+
+ // Verify the delete operation properties
+ assertEquals(Durability.SKIP_WAL, delete.getDurability());
+ assertEquals(HConstants.SYSTEMTABLE_QOS, delete.getPriority());
+
+ // Verify the row key is correct for a failure marker
+ assertEquals(0, Bytes.compareTo(expectedRowKey, delete.getRow()));
+ // Verify the key checksum, wrapped key, and STK checksum columns are deleted
+ assertDeleteColumns(delete);
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestGet extends TestKeymetaTableAccessor {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE = HBaseClassTestRule.forClass(TestGet.class);
+
+ @Mock
+ private Result result1;
+ @Mock
+ private Result result2;
+
+ private String keyMetadata2 = "metadata2";
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+
+ when(result1.isEmpty()).thenReturn(false);
+ when(result2.isEmpty()).thenReturn(false);
+ when(result1.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
+ .thenReturn(new byte[] { ACTIVE.getVal() });
+ when(result2.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
+ .thenReturn(new byte[] { FAILED.getVal() });
+ for (Result result : Arrays.asList(result1, result2)) {
+ when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(REFRESHED_TIMESTAMP_QUAL_BYTES)))
+ .thenReturn(Bytes.toBytes(0L));
+ when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(STK_CHECKSUM_QUAL_BYTES)))
+ .thenReturn(Bytes.toBytes(0L));
+ }
+ when(result1.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_METADATA_QUAL_BYTES)))
+ .thenReturn(KEY_METADATA.getBytes());
+ when(result2.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_METADATA_QUAL_BYTES)))
+ .thenReturn(keyMetadata2.getBytes());
+ }
+
+ @Test
+ public void testParseEmptyResult() throws Exception {
+ Result result = mock(Result.class);
+ when(result.isEmpty()).thenReturn(true);
+
+ assertNull(parseFromResult(server, CUST_ID, KEY_NAMESPACE, null));
+ assertNull(parseFromResult(server, CUST_ID, KEY_NAMESPACE, result));
+ }
+
+ @Test
+ public void testGetActiveKeyMissingWrappedKey() throws Exception {
+ Result result = mock(Result.class);
+ when(table.get(any(Get.class))).thenReturn(result);
+ when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
+ .thenReturn(new byte[] { ACTIVE.getVal() }, new byte[] { INACTIVE.getVal() });
+
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(KEY_METADATA);
+ IOException ex;
+ ex = assertThrows(IOException.class,
+ () -> accessor.getKey(CUST_ID, KEY_SPACE_GLOBAL, keyMetadataHash));
+ assertEquals("ACTIVE key must have a wrapped key", ex.getMessage());
+ ex = assertThrows(IOException.class,
+ () -> accessor.getKey(CUST_ID, KEY_SPACE_GLOBAL, keyMetadataHash));
+ assertEquals("INACTIVE key must have a wrapped key", ex.getMessage());
+ }
+
+ @Test
+ public void testGetKeyMissingSTK() throws Exception {
+ when(result1.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_WRAPPED_BY_STK_QUAL_BYTES)))
+ .thenReturn(new byte[] { 0 });
+ when(systemKeyCache.getSystemKeyByChecksum(anyLong())).thenReturn(null);
+ when(table.get(any(Get.class))).thenReturn(result1);
+
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(KEY_METADATA);
+ ManagedKeyData result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+
+ assertNull(result);
+ }
+
+ @Test
+ public void testGetKeyWithWrappedKey() throws Exception {
+ ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
+
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyData.getKeyMetadata());
+ ManagedKeyData result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+
+ verify(table).get(any(Get.class));
+ assertNotNull(result);
+ assertEquals(0, Bytes.compareTo(CUST_ID, result.getKeyCustodian()));
+ assertEquals(KEY_NAMESPACE, result.getKeyNamespace());
+ assertEquals(keyData.getKeyMetadata(), result.getKeyMetadata());
+ assertEquals(0,
+ Bytes.compareTo(keyData.getTheKey().getEncoded(), result.getTheKey().getEncoded()));
+ assertEquals(ACTIVE, result.getKeyState());
+
+ // When DEK checksum doesn't match, we expect a null value.
+ result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+ assertNull(result);
+ }
+
+ @Test
+ public void testGetKeyWithoutWrappedKey() throws Exception {
+ when(table.get(any(Get.class))).thenReturn(result2);
+
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata2);
+ ManagedKeyData result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+
+ verify(table).get(any(Get.class));
+ assertNotNull(result);
+ assertEquals(0, Bytes.compareTo(CUST_ID, result.getKeyCustodian()));
+ assertEquals(KEY_NAMESPACE, result.getKeyNamespace());
+ assertEquals(keyMetadata2, result.getKeyMetadata());
+ assertNull(result.getTheKey());
+ assertEquals(FAILED, result.getKeyState());
+ }
+
+ @Test
+ public void testGetAllKeys() throws Exception {
+ ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
+
+ when(scanner.iterator()).thenReturn(List.of(result1, result2).iterator());
+ when(table.getScanner(any(Scan.class))).thenReturn(scanner);
+
+ List<ManagedKeyData> allKeys = accessor.getAllKeys(CUST_ID, KEY_NAMESPACE, true);
+
+ assertEquals(2, allKeys.size());
+ assertEquals(keyData.getKeyMetadata(), allKeys.get(0).getKeyMetadata());
+ assertEquals(keyMetadata2, allKeys.get(1).getKeyMetadata());
+ verify(table).getScanner(any(Scan.class));
+ }
+
+ @Test
+ public void testGetActiveKey() throws Exception {
+ ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
+
+ when(scanner.iterator()).thenReturn(List.of(result1).iterator());
+ when(table.get(any(Get.class))).thenReturn(result1);
+
+ ManagedKeyData activeKey = accessor.getKeyManagementStateMarker(CUST_ID, KEY_NAMESPACE);
+
+ assertNotNull(activeKey);
+ assertEquals(keyData, activeKey);
+ verify(table).get(any(Get.class));
+ }
+
+ private ManagedKeyData setupActiveKey(byte[] custId, Result result) throws Exception {
+ ManagedKeyData keyData = managedKeyProvider.getManagedKey(custId, KEY_NAMESPACE);
+ byte[] dekWrappedBySTK =
+ EncryptionUtil.wrapKey(conf, null, keyData.getTheKey(), latestSystemKey.getTheKey());
+ when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_WRAPPED_BY_STK_QUAL_BYTES)))
+ .thenReturn(dekWrappedBySTK);
+ when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_CHECKSUM_QUAL_BYTES)))
+ .thenReturn(Bytes.toBytes(keyData.getKeyChecksum()), Bytes.toBytes(0L));
+ // Update the mock to return the correct metadata from the keyData
+ when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_METADATA_QUAL_BYTES)))
+ .thenReturn(keyData.getKeyMetadata().getBytes());
+ when(table.get(any(Get.class))).thenReturn(result);
+ return keyData;
+ }
+ }
+
+ protected void assertPut(ManagedKeyData keyData, Put put, byte[] rowKey,
+ ManagedKeyState targetState) {
+ assertEquals(Durability.SKIP_WAL, put.getDurability());
+ assertEquals(HConstants.SYSTEMTABLE_QOS, put.getPriority());
+ assertTrue(Bytes.compareTo(rowKey, put.getRow()) == 0);
+
+ Map<Bytes, Bytes> valueMap = getValueMap(put);
+
+ if (keyData.getTheKey() != null) {
+ assertNotNull(valueMap.get(new Bytes(DEK_CHECKSUM_QUAL_BYTES)));
+ assertNotNull(valueMap.get(new Bytes(DEK_WRAPPED_BY_STK_QUAL_BYTES)));
+ assertEquals(new Bytes(Bytes.toBytes(latestSystemKey.getKeyChecksum())),
+ valueMap.get(new Bytes(STK_CHECKSUM_QUAL_BYTES)));
+ } else {
+ assertNull(valueMap.get(new Bytes(DEK_CHECKSUM_QUAL_BYTES)));
+ assertNull(valueMap.get(new Bytes(DEK_WRAPPED_BY_STK_QUAL_BYTES)));
+ assertNull(valueMap.get(new Bytes(STK_CHECKSUM_QUAL_BYTES)));
+ }
+ assertEquals(new Bytes(keyData.getKeyMetadata().getBytes()),
+ valueMap.get(new Bytes(DEK_METADATA_QUAL_BYTES)));
+ assertNotNull(valueMap.get(new Bytes(REFRESHED_TIMESTAMP_QUAL_BYTES)));
+ assertEquals(new Bytes(new byte[] { targetState.getVal() }),
+ valueMap.get(new Bytes(KEY_STATE_QUAL_BYTES)));
+ }
+
+ // Verify the key checksum, wrapped key, and STK checksum columns are deleted
+ private static void assertDeleteColumns(Delete delete) {
+ Map<byte[], List<Cell>> familyCellMap = delete.getFamilyCellMap();
+ assertTrue(familyCellMap.containsKey(KEY_META_INFO_FAMILY));
+
+ List<Cell> cells = familyCellMap.get(KEY_META_INFO_FAMILY);
+ assertEquals(3, cells.size());
+
+ // Verify each column is present in the delete
+ Set<byte[]> qualifiers =
+ cells.stream().map(CellUtil::cloneQualifier).collect(Collectors.toSet());
+
+ assertTrue(qualifiers.stream().anyMatch(q -> Bytes.equals(q, DEK_CHECKSUM_QUAL_BYTES)));
+ assertTrue(qualifiers.stream().anyMatch(q -> Bytes.equals(q, DEK_WRAPPED_BY_STK_QUAL_BYTES)));
+ assertTrue(qualifiers.stream().anyMatch(q -> Bytes.equals(q, STK_CHECKSUM_QUAL_BYTES)));
+ }
+
+ private static Map<Bytes, Bytes> getValueMap(Mutation mutation) {
+ NavigableMap<byte[], List<Cell>> familyCellMap = mutation.getFamilyCellMap();
+ List<Cell> cells = familyCellMap.get(KEY_META_INFO_FAMILY);
+ Map<Bytes, Bytes> valueMap = new HashMap<>();
+ for (Cell cell : cells) {
+ valueMap.put(
+ new Bytes(cell.getQualifierArray(), cell.getQualifierOffset(), cell.getQualifierLength()),
+ new Bytes(cell.getValueArray(), cell.getValueOffset(), cell.getValueLength()));
+ }
+ return valueMap;
+ }
+
+ /**
+ * Tests for disableKey() method.
+ */
+ @RunWith(Parameterized.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestDisableKey extends TestKeymetaTableAccessor {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestDisableKey.class);
+
+ // Parameterize the key state
+ @Parameter(0)
+ public ManagedKeyState keyState;
+
+ @Captor
+ private ArgumentCaptor<List<Mutation>> mutationsCaptor;
+
+ @Parameterized.Parameters(name = "{index},keyState={0}")
+ public static Collection<Object[]> data() {
+ return Arrays.asList(new Object[][] { { ACTIVE }, { INACTIVE }, { ACTIVE_DISABLED },
+ { INACTIVE_DISABLED }, { FAILED }, });
+ }
+
+ @Test
+ public void testDisableKey() throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, keyState, "testMetadata");
+
+ accessor.disableKey(keyData);
+
+ verify(table).batch(mutationsCaptor.capture(), any());
+ List<Mutation> mutations = mutationsCaptor.getValue();
+ assertEquals(keyState == ACTIVE ? 3 : keyState == INACTIVE ? 2 : 1, mutations.size());
+ int putIndex = 0;
+ ManagedKeyState targetState = keyState == ACTIVE ? ACTIVE_DISABLED : INACTIVE_DISABLED;
+ if (keyState == ACTIVE) {
+ assertTrue(
+ Bytes.compareTo(constructRowKeyForCustNamespace(keyData), mutations.get(0).getRow())
+ == 0);
+ ++putIndex;
+ }
+ assertPut(keyData, (Put) mutations.get(putIndex), constructRowKeyForMetadata(keyData),
+ targetState);
+ if (keyState == INACTIVE) {
+ assertTrue(
+ Bytes.compareTo(constructRowKeyForMetadata(keyData), mutations.get(putIndex + 1).getRow())
+ == 0);
+ // Verify the key checksum, wrapped key, and STK checksum columns are deleted
+ assertDeleteColumns((Delete) mutations.get(putIndex + 1));
+ }
+ }
+ }
+
+ /**
+ * Tests for updateActiveState() method.
+ */
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestUpdateActiveState extends TestKeymetaTableAccessor {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestUpdateActiveState.class);
+
+ @Captor
+ private ArgumentCaptor<List<Mutation>> mutationsCaptor;
+
+ @Test
+ public void testUpdateActiveStateFromInactiveToActive() throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, INACTIVE, "metadata", 123L);
+ ManagedKeyData systemKey =
+ new ManagedKeyData(new byte[] { 1 }, KEY_SPACE_GLOBAL, null, ACTIVE, "syskey", 100L);
+ when(systemKeyCache.getLatestSystemKey()).thenReturn(systemKey);
+
+ accessor.updateActiveState(keyData, ACTIVE);
+
+ verify(table).batch(mutationsCaptor.capture(), any());
+ List<Mutation> mutations = mutationsCaptor.getValue();
+ assertEquals(2, mutations.size());
+ }
+
+ @Test
+ public void testUpdateActiveStateFromActiveToInactive() throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, ACTIVE, "metadata", 123L);
+
+ accessor.updateActiveState(keyData, INACTIVE);
+
+ verify(table).batch(mutationsCaptor.capture(), any());
+ List<Mutation> mutations = mutationsCaptor.getValue();
+ assertEquals(2, mutations.size());
+ }
+
+ @Test
+ public void testUpdateActiveStateNoOp() throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, ACTIVE, "metadata", 123L);
+
+ accessor.updateActiveState(keyData, ACTIVE);
+
+ verify(table, Mockito.never()).batch(any(), any());
+ }
+
+ @Test
+ public void testUpdateActiveStateFromDisabledToActive() throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, DISABLED, "metadata", 123L);
+ ManagedKeyData systemKey =
+ new ManagedKeyData(new byte[] { 1 }, KEY_SPACE_GLOBAL, null, ACTIVE, "syskey", 100L);
+ when(systemKeyCache.getLatestSystemKey()).thenReturn(systemKey);
+
+ accessor.updateActiveState(keyData, ACTIVE);
+
+ verify(table).batch(mutationsCaptor.capture(), any());
+ List<Mutation> mutations = mutationsCaptor.getValue();
+ // Should have 2 mutations: add CustNamespace row and add all columns to Metadata row
+ assertEquals(2, mutations.size());
+ }
+
+ @Test
+ public void testUpdateActiveStateInvalidNewState() {
+ ManagedKeyData keyData =
+ new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, ACTIVE, "metadata", 123L);
+
+ assertThrows(IllegalArgumentException.class,
+ () -> accessor.updateActiveState(keyData, DISABLED));
+ }
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java
new file mode 100644
index 000000000000..0b00df9e57b6
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java
@@ -0,0 +1,862 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.DISABLED;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.FAILED;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.INACTIVE;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.clearInvocations;
+import static org.mockito.Mockito.doReturn;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.lang.reflect.InvocationTargetException;
+import java.lang.reflect.Method;
+import java.util.Arrays;
+import java.util.stream.Collectors;
+import net.bytebuddy.ByteBuddy;
+import net.bytebuddy.dynamic.loading.ClassLoadingStrategy;
+import net.bytebuddy.implementation.MethodDelegation;
+import net.bytebuddy.implementation.bind.annotation.AllArguments;
+import net.bytebuddy.implementation.bind.annotation.Origin;
+import net.bytebuddy.implementation.bind.annotation.RuntimeType;
+import net.bytebuddy.matcher.ElementMatchers;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Suite;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+import org.mockito.Spy;
+
+@RunWith(Suite.class)
+@Suite.SuiteClasses({ TestManagedKeyDataCache.TestGeneric.class,
+ TestManagedKeyDataCache.TestWithoutL2Cache.class,
+ TestManagedKeyDataCache.TestWithL2CacheAndNoDynamicLookup.class,
+ TestManagedKeyDataCache.TestWithL2CacheAndDynamicLookup.class, })
+@Category({ MasterTests.class, SmallTests.class })
+public class TestManagedKeyDataCache {
+ private static final String ALIAS = "cust1";
+ private static final byte[] CUST_ID = ALIAS.getBytes();
+ private static Class<? extends MockManagedKeyProvider> providerClass;
+
+ @Mock
+ private Server server;
+ @Spy
+ protected MockManagedKeyProvider testProvider;
+ protected ManagedKeyDataCache cache;
+ protected Configuration conf = HBaseConfiguration.create();
+
+ public static class ForwardingInterceptor {
+ static ThreadLocal<MockManagedKeyProvider> delegate = new ThreadLocal<>();
+
+ static void setDelegate(MockManagedKeyProvider d) {
+ delegate.set(d);
+ }
+
+ @RuntimeType
+ public Object intercept(@Origin Method method, @AllArguments Object[] args) throws Throwable {
+ // Translate the InvocationTargetException that results when the provider throws an exception.
+ // This would not be needed if the interceptor delegated directly to the spy.
+ try {
+ return method.invoke(delegate.get(), args); // calls the spy, triggering Mockito
+ } catch (InvocationTargetException e) {
+ throw e.getCause();
+ }
+ }
+ }
+
+ @BeforeClass
+ public static synchronized void setUpInterceptor() {
+ if (providerClass != null) {
+ return;
+ }
+ providerClass = new ByteBuddy().subclass(MockManagedKeyProvider.class)
+ .name("org.apache.hadoop.hbase.io.crypto.MockManagedKeyProviderSpy")
+ .method(ElementMatchers.any()) // Intercept all methods
+ // Using a delegator instead of directly forwarding to testProvider to
+ // facilitate switching the testProvider instance. Besides, it
+ .intercept(MethodDelegation.to(new ForwardingInterceptor())).make()
+ .load(MockManagedKeyProvider.class.getClassLoader(), ClassLoadingStrategy.Default.INJECTION)
+ .getLoaded();
+ }
+
+ @Before
+ public void setUp() {
+ MockitoAnnotations.openMocks(this);
+ ForwardingInterceptor.setDelegate(testProvider);
+
+ Encryption.clearKeyProviderCache();
+
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY, providerClass.getName());
+
+ // Configure the server mock to return the configuration
+ when(server.getConfiguration()).thenReturn(conf);
+
+ testProvider.setMultikeyGenMode(true);
+ }
+
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestGeneric {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestGeneric.class);
+
+ @Test
+ public void testEmptyCache() throws Exception {
+ ManagedKeyDataCache cache = new ManagedKeyDataCache(HBaseConfiguration.create(), null);
+ assertEquals(0, cache.getGenericCacheEntryCount());
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testActiveKeysCacheKeyEqualsAndHashCode() {
+ byte[] custodian1 = new byte[] { 1, 2, 3 };
+ byte[] custodian2 = new byte[] { 1, 2, 3 };
+ byte[] custodian3 = new byte[] { 4, 5, 6 };
+ String namespace1 = "ns1";
+ String namespace2 = "ns2";
+
+ // Reflexive
+ ManagedKeyDataCache.ActiveKeysCacheKey key1 =
+ new ManagedKeyDataCache.ActiveKeysCacheKey(custodian1, namespace1);
+ assertTrue(key1.equals(key1));
+
+ // Symmetric and consistent for equal content
+ ManagedKeyDataCache.ActiveKeysCacheKey key2 =
+ new ManagedKeyDataCache.ActiveKeysCacheKey(custodian2, namespace1);
+ assertTrue(key1.equals(key2));
+ assertTrue(key2.equals(key1));
+ assertEquals(key1.hashCode(), key2.hashCode());
+
+ // Different custodian
+ ManagedKeyDataCache.ActiveKeysCacheKey key3 =
+ new ManagedKeyDataCache.ActiveKeysCacheKey(custodian3, namespace1);
+ assertFalse(key1.equals(key3));
+ assertFalse(key3.equals(key1));
+
+ // Different namespace
+ ManagedKeyDataCache.ActiveKeysCacheKey key4 =
+ new ManagedKeyDataCache.ActiveKeysCacheKey(custodian1, namespace2);
+ assertFalse(key1.equals(key4));
+ assertFalse(key4.equals(key1));
+
+ // Null and different class
+ assertFalse(key1.equals(null));
+ assertFalse(key1.equals("not a key"));
+
+ // Both fields different
+ ManagedKeyDataCache.ActiveKeysCacheKey key5 =
+ new ManagedKeyDataCache.ActiveKeysCacheKey(custodian3, namespace2);
+ assertFalse(key1.equals(key5));
+ assertFalse(key5.equals(key1));
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestWithoutL2Cache extends TestManagedKeyDataCache {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestWithoutL2Cache.class);
+
+ @Before
+ public void setUp() {
+ super.setUp();
+ cache = new ManagedKeyDataCache(conf, null);
+ }
+
+ @Test
+ public void testGenericCacheForInvalidMetadata() throws Exception {
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
+ verify(testProvider).unwrapKey(any(String.class), any());
+ }
+
+ @Test
+ public void testWithInvalidProvider() throws Exception {
+ ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ doThrow(new IOException("Test exception")).when(testProvider).unwrapKey(any(String.class),
+ any());
+ // With no L2 and invalid provider, there will be no entry.
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey1.getKeyMetadata(), null));
+ verify(testProvider).unwrapKey(any(String.class), any());
+ clearInvocations(testProvider);
+
+      // A second call to getEntry should not result in a call to the provider due to the
+      // negative entry.
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey1.getKeyMetadata(), null));
+ verify(testProvider, never()).unwrapKey(any(String.class), any());
+
+      // Now make getManagedKey fail as well and verify negative caching for getActiveEntry.
+ doThrow(new IOException("Test exception")).when(testProvider).getManagedKey(any(),
+ any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+
+      // A second call to getActiveEntry should not result in a call to the provider due to the
+      // negative entry.
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ }
+
+ @Test
+ public void testGenericCache() throws Exception {
+ ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertEquals(globalKey1,
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey1.getKeyMetadata(), null));
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+ ManagedKeyData globalKey2 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertEquals(globalKey2,
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey2.getKeyMetadata(), null));
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+ ManagedKeyData globalKey3 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertEquals(globalKey3,
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey3.getKeyMetadata(), null));
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ }
+
+ @Test
+ public void testActiveKeysCache() throws Exception {
+ assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+ ManagedKeyData activeKey = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(activeKey);
+ assertEquals(activeKey, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ }
+
+ @Test
+ public void testGenericCacheOperations() throws Exception {
+ ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData nsKey1 = testProvider.getManagedKey(CUST_ID, "namespace1");
+ assertGenericCacheEntries(nsKey1, globalKey1);
+ ManagedKeyData globalKey2 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertGenericCacheEntries(globalKey2, nsKey1, globalKey1);
+ ManagedKeyData nsKey2 = testProvider.getManagedKey(CUST_ID, "namespace1");
+ assertGenericCacheEntries(nsKey2, globalKey2, nsKey1, globalKey1);
+ }
+
+ @Test
+ public void testActiveKeyGetNoActive() throws Exception {
+ testProvider.setMockedKeyState(ALIAS, FAILED);
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ }
+
+ @Test
+ public void testActiveKeysCacheOperations() throws Exception {
+ assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertNotNull(cache.getActiveEntry(CUST_ID, "namespace1"));
+ assertEquals(2, cache.getActiveCacheEntryCount());
+
+ cache.clearCache();
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertEquals(1, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testGenericCacheUsingActiveKeysCacheOverProvider() throws Exception {
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key);
+ assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ verify(testProvider, never()).unwrapKey(any(String.class), any());
+ }
+
+ @Test
+ public void testThatActiveKeysCache_SkipsProvider_WhenLoadedViaGenericCache() throws Exception {
+ ManagedKeyData key1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertEquals(key1, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null));
+ ManagedKeyData key2 = testProvider.getManagedKey(CUST_ID, "namespace1");
+ assertEquals(key2, cache.getEntry(CUST_ID, "namespace1", key2.getKeyMetadata(), null));
+ verify(testProvider, times(2)).getManagedKey(any(), any(String.class));
+ assertEquals(2, cache.getActiveCacheEntryCount());
+ clearInvocations(testProvider);
+ assertEquals(key1, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertEquals(key2, cache.getActiveEntry(CUST_ID, "namespace1"));
+ // ACTIVE keys are automatically added to activeKeysCache when loaded
+ // via getEntry, so getActiveEntry will find them there and won't call the provider
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ cache.clearCache();
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testThatNonActiveKey_IsIgnored_WhenLoadedViaGenericCache() throws Exception {
+ testProvider.setMockedKeyState(ALIAS, FAILED);
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ assertEquals(0, cache.getActiveCacheEntryCount());
+
+ testProvider.setMockedKeyState(ALIAS, DISABLED);
+ key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ assertEquals(0, cache.getActiveCacheEntryCount());
+
+ testProvider.setMockedKeyState(ALIAS, INACTIVE);
+ key = testProvider.getManagedKey(CUST_ID, "namespace1");
+ assertEquals(key, cache.getEntry(CUST_ID, "namespace1", key.getKeyMetadata(), null));
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testActiveKeysCacheWithMultipleCustodiansInGenericCache() throws Exception {
+ ManagedKeyData key1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null));
+      String alias2 = "cust2";
+      byte[] custId2 = alias2.getBytes();
+      ManagedKeyData key2 = testProvider.getManagedKey(custId2, KEY_SPACE_GLOBAL);
+      assertNotNull(cache.getEntry(custId2, KEY_SPACE_GLOBAL, key2.getKeyMetadata(), null));
+ assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ // ACTIVE keys are automatically added to activeKeysCache when loaded.
+ assertEquals(2, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testActiveKeysCacheWithMultipleNamespaces() throws Exception {
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key1);
+ assertEquals(key1, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ ManagedKeyData key2 = cache.getActiveEntry(CUST_ID, "namespace1");
+ assertNotNull(key2);
+ assertEquals(key2, cache.getActiveEntry(CUST_ID, "namespace1"));
+ ManagedKeyData key3 = cache.getActiveEntry(CUST_ID, "namespace2");
+ assertNotNull(key3);
+ assertEquals(key3, cache.getActiveEntry(CUST_ID, "namespace2"));
+ verify(testProvider, times(3)).getManagedKey(any(), any(String.class));
+ assertEquals(3, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testEjectKey_ActiveKeysCacheOnly() throws Exception {
+ // Load a key into the active keys cache
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key);
+ assertEquals(1, cache.getActiveCacheEntryCount());
+
+ // Eject the key - should remove from active keys cache
+ boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertTrue("Key should be ejected when metadata matches", ejected);
+ assertEquals(0, cache.getActiveCacheEntryCount());
+
+ // Try to eject again - should return false since it's already gone from active keys cache
+ boolean ejectedAgain = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertFalse("Should return false when key is already ejected", ejectedAgain);
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testEjectKey_GenericCacheOnly() throws Exception {
+ // Load a key into the generic cache
+ ManagedKeyData key = cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL,
+ testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL).getKeyMetadata(), null);
+ assertNotNull(key);
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Eject the key - should remove from generic cache
+ boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertTrue("Key should be ejected when metadata matches", ejected);
+ assertEquals(0, cache.getGenericCacheEntryCount());
+
+ // Try to eject again - should return false since it's already gone from generic cache
+ boolean ejectedAgain = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertFalse("Should return false when key is already ejected", ejectedAgain);
+ assertEquals(0, cache.getGenericCacheEntryCount());
+ }
+
+ @Test
+ public void testEjectKey_Success() throws Exception {
+ // Load a key into the active keys cache
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key);
+ String metadata = key.getKeyMetadata();
+ assertEquals(1, cache.getActiveCacheEntryCount());
+
+ // Also load into the generic cache
+ ManagedKeyData keyFromGeneric = cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, metadata, null);
+ assertNotNull(keyFromGeneric);
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Eject the key with matching metadata - should remove from both caches
+ boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertTrue("Key should be ejected when metadata matches", ejected);
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ assertEquals(0, cache.getGenericCacheEntryCount());
+
+ // Try to eject again - should return false since it's already gone from active keys cache
+ boolean ejectedAgain = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertFalse("Should return false when key is already ejected", ejectedAgain);
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ assertEquals(0, cache.getGenericCacheEntryCount());
+ }
+
+ @Test
+ public void testEjectKey_MetadataMismatch() throws Exception {
+ // Load a key into both caches
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key);
+ assertEquals(1, cache.getActiveCacheEntryCount());
+
+ // Also load into the generic cache
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Try to eject with wrong metadata - should not eject from either cache
+ String wrongMetadata = "wrong-metadata";
+ boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL,
+ ManagedKeyData.constructMetadataHash(wrongMetadata));
+ assertFalse("Key should not be ejected when metadata doesn't match", ejected);
+ assertEquals(1, cache.getActiveCacheEntryCount());
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Verify the key is still in both caches
+ assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertEquals(key.getKeyMetadata(),
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash()).getKeyMetadata());
+ }
+
+ @Test
+ public void testEjectKey_KeyNotPresent() throws Exception {
+ // Try to eject a key that doesn't exist in the cache
+ String nonExistentMetadata = "non-existent-metadata";
+ boolean ejected = cache.ejectKey(CUST_ID, "non-existent-namespace",
+ ManagedKeyData.constructMetadataHash(nonExistentMetadata));
+ assertFalse("Should return false when key is not present", ejected);
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ }
+
+ @Test
+ public void testEjectKey_MultipleKeys() throws Exception {
+ // Load multiple keys into both caches
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key2 = cache.getActiveEntry(CUST_ID, "namespace1");
+ ManagedKeyData key3 = cache.getActiveEntry(CUST_ID, "namespace2");
+ assertNotNull(key1);
+ assertNotNull(key2);
+ assertNotNull(key3);
+ assertEquals(3, cache.getActiveCacheEntryCount());
+
+ // Also load all keys into the generic cache
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null);
+ cache.getEntry(CUST_ID, "namespace1", key2.getKeyMetadata(), null);
+ cache.getEntry(CUST_ID, "namespace2", key3.getKeyMetadata(), null);
+ assertEquals(3, cache.getGenericCacheEntryCount());
+
+ // Eject only the middle key from both caches
+ boolean ejected = cache.ejectKey(CUST_ID, "namespace1", key2.getKeyMetadataHash());
+ assertTrue("Key should be ejected from both caches", ejected);
+ assertEquals(2, cache.getActiveCacheEntryCount());
+ assertEquals(2, cache.getGenericCacheEntryCount());
+
+ // Verify only key2 was ejected - key1 and key3 should still be there
+ clearInvocations(testProvider);
+ assertEquals(key1, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertEquals(key3, cache.getActiveEntry(CUST_ID, "namespace2"));
+ // These getActiveEntry() calls should not trigger provider calls since keys are still cached
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+
+ // Verify generic cache still has key1 and key3
+ assertEquals(key1.getKeyMetadata(),
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null).getKeyMetadata());
+ assertEquals(key3.getKeyMetadata(),
+ cache.getEntry(CUST_ID, "namespace2", key3.getKeyMetadata(), null).getKeyMetadata());
+
+ // Try to eject key2 again - should return false since it's already gone from both caches
+ boolean ejectedAgain = cache.ejectKey(CUST_ID, "namespace1", key2.getKeyMetadataHash());
+ assertFalse("Should return false when key is already ejected", ejectedAgain);
+ assertEquals(2, cache.getActiveCacheEntryCount());
+ assertEquals(2, cache.getGenericCacheEntryCount());
+ }
+
+ @Test
+ public void testEjectKey_DifferentCustodian() throws Exception {
+ // Load a key for one custodian into both caches
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key);
+ String metadata = key.getKeyMetadata();
+ assertEquals(1, cache.getActiveCacheEntryCount());
+
+ // Also load into the generic cache
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Try to eject with a different custodian - should not eject from either cache
+ byte[] differentCustodian = "different-cust".getBytes();
+ boolean ejected =
+ cache.ejectKey(differentCustodian, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertFalse("Should not eject key for different custodian", ejected);
+ assertEquals(1, cache.getActiveCacheEntryCount());
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Verify the original key is still in both caches
+ assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertEquals(metadata,
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, metadata, null).getKeyMetadata());
+ }
+
+ @Test
+ public void testEjectKey_AfterClearCache() throws Exception {
+ // Load a key into both caches
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key);
+ String metadata = key.getKeyMetadata();
+ assertEquals(1, cache.getActiveCacheEntryCount());
+
+ // Also load into the generic cache
+ cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, metadata, null);
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Clear both caches
+ cache.clearCache();
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ assertEquals(0, cache.getGenericCacheEntryCount());
+
+ // Try to eject the key after both caches are cleared
+ boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ assertFalse("Should return false when both caches are empty", ejected);
+ assertEquals(0, cache.getActiveCacheEntryCount());
+ assertEquals(0, cache.getGenericCacheEntryCount());
+ }
+
+ @Test
+ public void testGetEntry_HashCollisionOrMismatchDetection() throws Exception {
+ // Create a key and get it into the cache
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key1);
+
+ // Now simulate a hash collision by trying to get an entry with the same hash
+ // but different custodian/namespace
+ byte[] differentCust = "different-cust".getBytes();
+ String differentNamespace = "different-namespace";
+
+ // This should return null due to custodian/namespace mismatch (collision detection)
+ ManagedKeyData result =
+ cache.getEntry(differentCust, differentNamespace, key1.getKeyMetadata(), null);
+
+ // Result should be null because of hash collision detection
+ // The cache finds an entry with the same metadata hash, but custodian/namespace don't match
+ assertNull("Should return null when hash collision is detected", result);
+ }
+
+ @Test
+ public void testEjectKey_HashCollisionOrMismatchProtection() throws Exception {
+ // Create two keys with potential hash collision scenario
+ byte[] cust1 = "cust1".getBytes();
+ byte[] cust2 = "cust2".getBytes();
+ String namespace1 = "namespace1";
+
+ // Load a key for cust1
+ ManagedKeyData key1 = cache.getActiveEntry(cust1, namespace1);
+ assertNotNull(key1);
+ assertEquals(1, cache.getActiveCacheEntryCount());
+
+ // Try to eject using same metadata hash but different custodian
+ // This should not eject the key due to custodian mismatch protection
+ boolean ejected = cache.ejectKey(cust2, namespace1, key1.getKeyMetadataHash());
+ assertFalse("Should not eject key with different custodian even if hash matches", ejected);
+ assertEquals(1, cache.getActiveCacheEntryCount());
+
+ // Verify the original key is still there
+ assertEquals(key1, cache.getActiveEntry(cust1, namespace1));
+ }
+
+ @Test
+ public void testEjectKey_HashCollisionInBothCaches() throws Exception {
+ // This test covers the scenario where rejectedValue is set during the first cache check
+ // (activeKeysCache) and then the second cache check (cacheByMetadataHash) takes the
+ // early return path because rejectedValue is already set.
+ byte[] cust1 = "cust1".getBytes();
+ byte[] cust2 = "cust2".getBytes();
+ String namespace1 = "namespace1";
+
+ // Load a key for cust1 - this will put it in BOTH activeKeysCache and cacheByMetadataHash
+ ManagedKeyData key1 = cache.getActiveEntry(cust1, namespace1);
+ assertNotNull(key1);
+
+ // Also access via generic cache to ensure it's in both caches
+ ManagedKeyData key1viaGeneric =
+ cache.getEntry(cust1, namespace1, key1.getKeyMetadata(), null);
+ assertNotNull(key1viaGeneric);
+ assertEquals(key1, key1viaGeneric);
+
+ // Verify both cache counts
+ assertEquals(1, cache.getActiveCacheEntryCount());
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Try to eject using same metadata hash but different custodian
+ // This will trigger the collision detection in BOTH caches:
+ // 1. First check in activeKeysCache will detect mismatch and set rejectedValue
+      // 2. Second check in cacheByMetadataHash should take the early return path
+ boolean ejected = cache.ejectKey(cust2, namespace1, key1.getKeyMetadataHash());
+ assertFalse("Should not eject key with different custodian even if hash matches", ejected);
+
+ // Verify both caches still have the entry
+ assertEquals(1, cache.getActiveCacheEntryCount());
+ assertEquals(1, cache.getGenericCacheEntryCount());
+
+ // Verify the original key is still accessible
+ assertEquals(key1, cache.getActiveEntry(cust1, namespace1));
+ assertEquals(key1, cache.getEntry(cust1, namespace1, key1.getKeyMetadata(), null));
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestWithL2CacheAndNoDynamicLookup extends TestManagedKeyDataCache {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestWithL2CacheAndNoDynamicLookup.class);
+ private KeymetaTableAccessor mockL2 = mock(KeymetaTableAccessor.class);
+
+ @Before
+ public void setUp() {
+ super.setUp();
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_DYNAMIC_LOOKUP_ENABLED_CONF_KEY, false);
+ cache = new ManagedKeyDataCache(conf, mockL2);
+ }
+
+ @Test
+ public void testGenericCacheNonExistentKeyInL2Cache() throws Exception {
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
+ verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ clearInvocations(mockL2);
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
+ verify(mockL2, never()).getKey(any(), any(String.class), any(byte[].class));
+ }
+
+ @Test
+ public void testGenericCacheRetrievalFromL2Cache() throws Exception {
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ when(mockL2.getKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash())).thenReturn(key);
+ assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ }
+
+ @Test
+ public void testActiveKeysCacheNonExistentKeyInL2Cache() throws Exception {
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ clearInvocations(mockL2);
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2, never()).getKeyManagementStateMarker(any(), any(String.class));
+ }
+
+ @Test
+ public void testActiveKeysCacheRetrievalFromL2Cache() throws Exception {
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ when(mockL2.getKeyManagementStateMarker(CUST_ID, KEY_SPACE_GLOBAL)).thenReturn(key);
+ assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ clearInvocations(mockL2);
+ assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2, never()).getKeyManagementStateMarker(any(), any(String.class));
+ }
+
+ @Test
+ public void testGenericCacheWithKeymetaAccessorException() throws Exception {
+ when(mockL2.getKey(eq(CUST_ID), eq(KEY_SPACE_GLOBAL), any(byte[].class)))
+ .thenThrow(new IOException("Test exception"));
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
+ verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ clearInvocations(mockL2);
+ assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
+ verify(mockL2, never()).getKey(any(), any(String.class), any(byte[].class));
+ }
+
+ @Test
+ public void testGetActiveEntryWithKeymetaAccessorException() throws Exception {
+ when(mockL2.getKeyManagementStateMarker(CUST_ID, KEY_SPACE_GLOBAL))
+ .thenThrow(new IOException("Test exception"));
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ clearInvocations(mockL2);
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2, never()).getKeyManagementStateMarker(any(), any(String.class));
+ }
+
+ @Test
+ public void testActiveKeysCacheUsesKeymetaAccessorWhenGenericCacheEmpty() throws Exception {
+ // Ensure generic cache is empty
+ cache.clearCache();
+
+ // Mock the keymetaAccessor to return a key
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ when(mockL2.getKeyManagementStateMarker(CUST_ID, KEY_SPACE_GLOBAL)).thenReturn(key);
+
+ // Get the active entry - it should call keymetaAccessor since generic cache is empty
+ assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestWithL2CacheAndDynamicLookup extends TestManagedKeyDataCache {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestWithL2CacheAndDynamicLookup.class);
+ private KeymetaTableAccessor mockL2 = mock(KeymetaTableAccessor.class);
+
+ @Before
+ public void setUp() {
+ super.setUp();
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_DYNAMIC_LOOKUP_ENABLED_CONF_KEY, true);
+ cache = new ManagedKeyDataCache(conf, mockL2);
+ }
+
+ @Test
+    public void testGenericCacheRetrievalFromProviderWhenKeyNotFoundInL2Cache() throws Exception {
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ doReturn(key).when(testProvider).unwrapKey(any(String.class), any());
+ assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ verify(mockL2).addKey(any(ManagedKeyData.class));
+ }
+
+ @Test
+ public void testAddKeyFailure() throws Exception {
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ doReturn(key).when(testProvider).unwrapKey(any(String.class), any());
+ doThrow(new IOException("Test exception")).when(mockL2).addKey(any(ManagedKeyData.class));
+ assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ verify(mockL2).addKey(any(ManagedKeyData.class));
+ }
+
+ @Test
+ public void testActiveKeysCacheDynamicLookupWithUnexpectedException() throws Exception {
+ doThrow(new RuntimeException("Test exception")).when(testProvider).getManagedKey(any(),
+ any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+      // A second invocation should not result in a call to the provider.
+ assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ }
+
+ @Test
+    public void testActiveKeysCacheRetrievalFromProviderWhenKeyNotFoundInL2Cache() throws Exception {
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ doReturn(key).when(testProvider).getManagedKey(any(), any(String.class));
+ assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ }
+
+ @Test
+ public void testGenericCacheUsesActiveKeysCacheFirst() throws Exception {
+ // First populate the active keys cache with an active key
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+
+ // Now get the generic cache entry - it should use the active keys cache first, not call
+ // keymetaAccessor
+ assertEquals(key1, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null));
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+
+      // Look up a different key.
+ ManagedKeyData key2 = cache.getActiveEntry(CUST_ID, "namespace1");
+ assertNotEquals(key1, key2);
+ verify(testProvider).getManagedKey(any(), any(String.class));
+ clearInvocations(testProvider);
+
+ // Now get the generic cache entry - it should use the active keys cache first, not call
+ // keymetaAccessor
+ assertEquals(key2, cache.getEntry(CUST_ID, "namespace1", key2.getKeyMetadata(), null));
+ verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ }
+
+ @Test
+ public void testGetOlderEntryFromGenericCache() throws Exception {
+      // Get one version of the key into the ActiveKeysCache.
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ assertNotNull(key1);
+ clearInvocations(testProvider);
+
+      // Now look up another version of the key; the cache should skip the non-matching active key
+      // and unwrap it via the provider.
+ ManagedKeyData key2 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertEquals(key2, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key2.getKeyMetadata(), null));
+ verify(testProvider).unwrapKey(any(String.class), any());
+ }
+
+ @Test
+ public void testThatActiveKeysCache_PopulatedByGenericCache() throws Exception {
+ // First populate the generic cache with an active key
+ ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ verify(testProvider).unwrapKey(any(String.class), any());
+
+ // Clear invocations to reset the mock state
+ clearInvocations(testProvider);
+
+ // Now get the active entry - it should already be there due to the generic cache first
+ assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ verify(testProvider, never()).unwrapKey(any(String.class), any());
+ }
+ }
+
+ protected void assertGenericCacheEntries(ManagedKeyData... keys) throws Exception {
+ for (ManagedKeyData key : keys) {
+ assertEquals(key,
+ cache.getEntry(key.getKeyCustodian(), key.getKeyNamespace(), key.getKeyMetadata(), null));
+ }
+ assertEquals(keys.length, cache.getGenericCacheEntryCount());
+ int activeKeysCount =
+ Arrays.stream(keys).filter(key -> key.getKeyState() == ManagedKeyState.ACTIVE)
+ .map(key -> new ManagedKeyDataCache.ActiveKeysCacheKey(key.getKeyCustodian(),
+ key.getKeyNamespace()))
+ .collect(Collectors.toSet()).size();
+ assertEquals(activeKeysCount, cache.getActiveCacheEntryCount());
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java
new file mode 100644
index 000000000000..d04dee3853e9
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java
@@ -0,0 +1,443 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.any;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.security.KeyException;
+import java.util.List;
+import org.apache.commons.lang3.NotImplementedException;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.master.HMaster;
+import org.apache.hadoop.hbase.regionserver.HRegionServer;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.ManagedKeysProtos;
+
+/**
+ * Tests the admin API via both RPC and local calls.
+ */
+@Category({ MasterTests.class, MediumTests.class })
+public class TestManagedKeymeta extends ManagedKeyTestBase {
+
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestManagedKeymeta.class);
+
+ /**
+ * Functional interface for setup operations that can throw ServiceException.
+ */
+ @FunctionalInterface
+ interface SetupFunction {
+ void setup(ManagedKeysProtos.ManagedKeysService.BlockingInterface mockStub,
+ ServiceException networkError) throws ServiceException;
+ }
+
+ /**
+ * Functional interface for test operations that can throw checked exceptions.
+ */
+ @FunctionalInterface
+ interface TestFunction {
+ void test(KeymetaAdminClient client) throws IOException, KeyException;
+ }
+
+ @Test
+ public void testEnableLocal() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
+ doTestEnable(keymetaAdmin);
+ }
+
+ @Test
+ public void testEnableOverRPC() throws Exception {
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ doTestEnable(adminClient);
+ }
+
+ private void doTestEnable(KeymetaAdmin adminClient) throws IOException, KeyException {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(master.getConfiguration());
+ managedKeyProvider.setMultikeyGenMode(true);
+ String cust = "cust1";
+ byte[] custBytes = cust.getBytes();
+ ManagedKeyData managedKey =
+ adminClient.enableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertKeyDataSingleKey(managedKey, ManagedKeyState.ACTIVE);
+
+ // Enable must have persisted the key, but it won't be read back until we call into the cache.
+ // We have the multi key gen mode enabled, but since the key should be loaded from L2, we
+ // should get the same key even after ejecting it.
+ HRegionServer regionServer = TEST_UTIL.getHBaseCluster().getRegionServer(0);
+ ManagedKeyDataCache managedKeyDataCache = regionServer.getManagedKeyDataCache();
+ ManagedKeyData activeEntry =
+ managedKeyDataCache.getActiveEntry(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(activeEntry);
+ assertTrue(Bytes.equals(managedKey.getKeyMetadataHash(), activeEntry.getKeyMetadataHash()));
+ assertTrue(managedKeyDataCache.ejectKey(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL,
+ managedKey.getKeyMetadataHash()));
+ activeEntry = managedKeyDataCache.getActiveEntry(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(activeEntry);
+ assertTrue(Bytes.equals(managedKey.getKeyMetadataHash(), activeEntry.getKeyMetadataHash()));
+
+    List<ManagedKeyData> managedKeys =
+      adminClient.getManagedKeys(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertEquals(managedKeyProvider.getLastGeneratedKeyData(cust, ManagedKeyData.KEY_SPACE_GLOBAL)
+ .createClientFacingInstance(), managedKeys.get(0).createClientFacingInstance());
+
+ String nonExistentCust = "nonExistentCust";
+ byte[] nonExistentBytes = nonExistentCust.getBytes();
+ managedKeyProvider.setMockedKeyState(nonExistentCust, ManagedKeyState.FAILED);
+ ManagedKeyData managedKey1 =
+ adminClient.enableKeyManagement(nonExistentBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertKeyDataSingleKey(managedKey1, ManagedKeyState.FAILED);
+
+ String disabledCust = "disabledCust";
+ byte[] disabledBytes = disabledCust.getBytes();
+ managedKeyProvider.setMockedKeyState(disabledCust, ManagedKeyState.DISABLED);
+ ManagedKeyData managedKey2 =
+ adminClient.enableKeyManagement(disabledBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertKeyDataSingleKey(managedKey2, ManagedKeyState.DISABLED);
+ }
+
+  private static void assertKeyDataSingleKey(ManagedKeyData keyData,
+    ManagedKeyState keyState) {
+    assertNotNull(keyData);
+    assertEquals(keyState, keyData.getKeyState());
+  }
+
+ @Test
+ public void testEnableKeyManagementWithExceptionOnGetManagedKey() throws Exception {
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(TEST_UTIL.getConfiguration());
+ managedKeyProvider.setShouldThrowExceptionOnGetManagedKey(true);
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ IOException exception = assertThrows(IOException.class,
+ () -> adminClient.enableKeyManagement(new byte[0], "namespace"));
+ assertTrue(exception.getMessage().contains("key_cust must not be empty"));
+ }
+
+ @Test
+ public void testEnableKeyManagementWithClientSideServiceException() throws Exception {
+ doTestWithClientSideServiceException((mockStub,
+ networkError) -> when(mockStub.enableKeyManagement(any(), any())).thenThrow(networkError),
+ (client) -> client.enableKeyManagement(new byte[0], "namespace"));
+ }
+
+ @Test
+ public void testGetManagedKeysWithClientSideServiceException() throws Exception {
+ // Similar test for getManagedKeys method
+ doTestWithClientSideServiceException((mockStub,
+ networkError) -> when(mockStub.getManagedKeys(any(), any())).thenThrow(networkError),
+ (client) -> client.getManagedKeys(new byte[0], "namespace"));
+ }
+
+ @Test
+ public void testRotateSTKLocal() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
+ doTestRotateSTK(keymetaAdmin);
+ }
+
+ @Test
+ public void testRotateSTKOverRPC() throws Exception {
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ doTestRotateSTK(adminClient);
+ }
+
+ private void doTestRotateSTK(KeymetaAdmin adminClient) throws IOException {
+ // Call rotateSTK - since no actual system key change has occurred,
+ // this should return false (no rotation performed)
+ boolean result = adminClient.rotateSTK();
+ assertFalse("rotateSTK should return false when no key change is detected", result);
+
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ ManagedKeyData currentSystemKey = master.getSystemKeyCache().getLatestSystemKey();
+
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(TEST_UTIL.getConfiguration());
+ // Once we enable multikeyGenMode on MockManagedKeyProvider, every call should return a new key
+ // which should trigger a rotation.
+ managedKeyProvider.setMultikeyGenMode(true);
+ result = adminClient.rotateSTK();
+ assertTrue("rotateSTK should return true when a new key is detected", result);
+
+ ManagedKeyData newSystemKey = master.getSystemKeyCache().getLatestSystemKey();
+ assertNotEquals("newSystemKey should be different from currentSystemKey", currentSystemKey,
+ newSystemKey);
+
+ HRegionServer regionServer = TEST_UTIL.getHBaseCluster().getRegionServer(0);
+ assertEquals("regionServer should have the same new system key", newSystemKey,
+ regionServer.getSystemKeyCache().getLatestSystemKey());
+
+ }
+
+ @Test
+ public void testRotateSTKWithExceptionOnGetSystemKey() throws Exception {
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(TEST_UTIL.getConfiguration());
+ managedKeyProvider.setShouldThrowExceptionOnGetSystemKey(true);
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ IOException exception = assertThrows(IOException.class, () -> adminClient.rotateSTK());
+ assertTrue(exception.getMessage().contains("Test exception on getSystemKey"));
+ }
+
+ @Test
+ public void testRotateSTKWithClientSideServiceException() throws Exception {
+ doTestWithClientSideServiceException(
+ (mockStub, networkError) -> when(mockStub.rotateSTK(any(), any())).thenThrow(networkError),
+ (client) -> client.rotateSTK());
+ }
+
+ private void doTestWithClientSideServiceException(SetupFunction setupFunction,
+ TestFunction testFunction) throws Exception {
+ ManagedKeysProtos.ManagedKeysService.BlockingInterface mockStub =
+ mock(ManagedKeysProtos.ManagedKeysService.BlockingInterface.class);
+
+ ServiceException networkError = new ServiceException("Network error");
+ networkError.initCause(new IOException("Network error"));
+
+ KeymetaAdminClient client = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ // Use reflection to set the stub
+ Field stubField = KeymetaAdminClient.class.getDeclaredField("stub");
+ stubField.setAccessible(true);
+ stubField.set(client, mockStub);
+
+ // Setup the mock
+ setupFunction.setup(mockStub, networkError);
+
+ // Execute test function and expect IOException
+ IOException exception = assertThrows(IOException.class, () -> testFunction.test(client));
+
+ assertTrue(exception.getMessage().contains("Network error"));
+ }
+
+ @Test
+ public void testDisableKeyManagementLocal() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
+ doTestDisableKeyManagement(keymetaAdmin);
+ }
+
+ @Test
+ public void testDisableKeyManagementOverRPC() throws Exception {
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ doTestDisableKeyManagement(adminClient);
+ }
+
+ private void doTestDisableKeyManagement(KeymetaAdmin adminClient)
+ throws IOException, KeyException {
+ String cust = "cust2";
+ byte[] custBytes = cust.getBytes();
+
+ // First enable key management
+ ManagedKeyData managedKey =
+ adminClient.enableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(managedKey);
+ assertKeyDataSingleKey(managedKey, ManagedKeyState.ACTIVE);
+
+ // Now disable it
+ ManagedKeyData disabledKey =
+ adminClient.disableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(disabledKey);
+ assertEquals(ManagedKeyState.DISABLED, disabledKey.getKeyState().getExternalState());
+ }
+
+ @Test
+ public void testDisableKeyManagementWithClientSideServiceException() throws Exception {
+ doTestWithClientSideServiceException(
+ (mockStub, networkError) -> when(mockStub.disableKeyManagement(any(), any()))
+ .thenThrow(networkError),
+ (client) -> client.disableKeyManagement(new byte[0], "namespace"));
+ }
+
+ @Test
+ public void testDisableManagedKeyLocal() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
+ doTestDisableManagedKey(keymetaAdmin);
+ }
+
+ @Test
+ public void testDisableManagedKeyOverRPC() throws Exception {
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ doTestDisableManagedKey(adminClient);
+ }
+
+ private void doTestDisableManagedKey(KeymetaAdmin adminClient) throws IOException, KeyException {
+ String cust = "cust3";
+ byte[] custBytes = cust.getBytes();
+
+ // First enable key management to create a key
+ ManagedKeyData managedKey =
+ adminClient.enableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(managedKey);
+ assertKeyDataSingleKey(managedKey, ManagedKeyState.ACTIVE);
+ byte[] keyMetadataHash = managedKey.getKeyMetadataHash();
+
+ // Now disable the specific key
+ ManagedKeyData disabledKey =
+ adminClient.disableManagedKey(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL, keyMetadataHash);
+ assertNotNull(disabledKey);
+ assertEquals(ManagedKeyState.DISABLED, disabledKey.getKeyState().getExternalState());
+ }
+
+ @Test
+ public void testDisableManagedKeyWithClientSideServiceException() throws Exception {
+ doTestWithClientSideServiceException(
+ (mockStub, networkError) -> when(mockStub.disableManagedKey(any(), any()))
+ .thenThrow(networkError),
+ (client) -> client.disableManagedKey(new byte[0], "namespace", new byte[0]));
+ }
+
+ @Test
+ public void testRotateManagedKeyWithClientSideServiceException() throws Exception {
+ doTestWithClientSideServiceException((mockStub,
+ networkError) -> when(mockStub.rotateManagedKey(any(), any())).thenThrow(networkError),
+ (client) -> client.rotateManagedKey(new byte[0], "namespace"));
+ }
+
+ @Test
+ public void testRefreshManagedKeysWithClientSideServiceException() throws Exception {
+ doTestWithClientSideServiceException((mockStub,
+ networkError) -> when(mockStub.refreshManagedKeys(any(), any())).thenThrow(networkError),
+ (client) -> client.refreshManagedKeys(new byte[0], "namespace"));
+ }
+
+ @Test
+ public void testRotateManagedKeyLocal() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
+ doTestRotateManagedKey(keymetaAdmin);
+ }
+
+ @Test
+ public void testRotateManagedKeyOverRPC() throws Exception {
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ doTestRotateManagedKey(adminClient);
+ }
+
+ private void doTestRotateManagedKey(KeymetaAdmin adminClient) throws IOException, KeyException {
+ // This test covers the success path (line 133 in KeymetaAdminClient for RPC)
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(master.getConfiguration());
+ managedKeyProvider.setMultikeyGenMode(true);
+
+ String cust = "cust1";
+ byte[] custBytes = cust.getBytes();
+
+ // Enable key management first to have a key to rotate
+ adminClient.enableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+
+ // Now rotate the key
+ ManagedKeyData rotatedKey =
+ adminClient.rotateManagedKey(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+
+ assertNotNull("Rotated key should not be null", rotatedKey);
+ assertEquals("Rotated key should be ACTIVE", ManagedKeyState.ACTIVE, rotatedKey.getKeyState());
+ assertEquals("Rotated key should have correct custodian", 0,
+ Bytes.compareTo(custBytes, rotatedKey.getKeyCustodian()));
+ assertEquals("Rotated key should have correct namespace", ManagedKeyData.KEY_SPACE_GLOBAL,
+ rotatedKey.getKeyNamespace());
+ }
+
+ @Test
+ public void testRefreshManagedKeysLocal() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
+ doTestRefreshManagedKeys(keymetaAdmin);
+ }
+
+ @Test
+ public void testRefreshManagedKeysOverRPC() throws Exception {
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ doTestRefreshManagedKeys(adminClient);
+ }
+
+ private void doTestRefreshManagedKeys(KeymetaAdmin adminClient) throws IOException, KeyException {
+ // This test covers the success path (line 148 in KeymetaAdminClient for RPC)
+ String cust = "cust1";
+ byte[] custBytes = cust.getBytes();
+
+ // Enable key management first to have keys to refresh
+ adminClient.enableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+
+ // Should complete without exception - covers the normal return path
+ adminClient.refreshManagedKeys(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+
+ // Verify keys still exist after refresh
+    List<ManagedKeyData> keys =
+      adminClient.getManagedKeys(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull("Keys should exist after refresh", keys);
+ assertFalse("Should have at least one key after refresh", keys.isEmpty());
+ }
+
+ // ========== NotImplementedException Tests ==========
+
+ @Test
+ public void testEjectManagedKeyDataCacheEntryNotSupported() throws Exception {
+ // This test covers lines 89-90 in KeymetaAdminClient
+ KeymetaAdminClient client = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ String cust = "cust1";
+ byte[] custBytes = cust.getBytes();
+
+ NotImplementedException exception = assertThrows(NotImplementedException.class, () -> client
+ .ejectManagedKeyDataCacheEntry(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL, "metadata"));
+
+ assertTrue("Exception message should indicate method is not supported",
+ exception.getMessage().contains("ejectManagedKeyDataCacheEntry not supported"));
+ assertTrue("Exception message should mention KeymetaAdminClient",
+ exception.getMessage().contains("KeymetaAdminClient"));
+ }
+
+ @Test
+ public void testClearManagedKeyDataCacheNotSupported() throws Exception {
+ // This test covers lines 95-96 in KeymetaAdminClient
+ KeymetaAdminClient client = new KeymetaAdminClient(TEST_UTIL.getConnection());
+
+ NotImplementedException exception =
+ assertThrows(NotImplementedException.class, () -> client.clearManagedKeyDataCache());
+
+ assertTrue("Exception message should indicate method is not supported",
+ exception.getMessage().contains("clearManagedKeyDataCache not supported"));
+ assertTrue("Exception message should mention KeymetaAdminClient",
+ exception.getMessage().contains("KeymetaAdminClient"));
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java
new file mode 100644
index 000000000000..f541d4bac18c
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java
@@ -0,0 +1,310 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.security.Key;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import javax.crypto.spec.SecretKeySpec;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+
+/**
+ * Tests for SystemKeyCache class. NOTE: The createCache() method is tested in
+ * TestKeyManagementService.
+ */
+@Category({ MasterTests.class, SmallTests.class })
+public class TestSystemKeyCache {
+
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestSystemKeyCache.class);
+
+ @Mock
+ private SystemKeyAccessor mockAccessor;
+
+ private static final byte[] TEST_CUSTODIAN = "test-custodian".getBytes();
+ private static final String TEST_NAMESPACE = "test-namespace";
+ private static final String TEST_METADATA_1 = "metadata-1";
+ private static final String TEST_METADATA_2 = "metadata-2";
+ private static final String TEST_METADATA_3 = "metadata-3";
+
+ private Key testKey1;
+ private Key testKey2;
+ private Key testKey3;
+ private ManagedKeyData keyData1;
+ private ManagedKeyData keyData2;
+ private ManagedKeyData keyData3;
+ private Path keyPath1;
+ private Path keyPath2;
+ private Path keyPath3;
+
+ @Before
+ public void setUp() {
+ MockitoAnnotations.openMocks(this);
+
+ // Create test keys
+ testKey1 = new SecretKeySpec("test-key-1-bytes".getBytes(), "AES");
+ testKey2 = new SecretKeySpec("test-key-2-bytes".getBytes(), "AES");
+ testKey3 = new SecretKeySpec("test-key-3-bytes".getBytes(), "AES");
+
+ // Create test key data with different checksums
+ keyData1 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, testKey1, ManagedKeyState.ACTIVE,
+ TEST_METADATA_1, 1000L);
+ keyData2 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, testKey2, ManagedKeyState.ACTIVE,
+ TEST_METADATA_2, 2000L);
+ keyData3 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, testKey3, ManagedKeyState.ACTIVE,
+ TEST_METADATA_3, 3000L);
+
+ // Create test paths
+ keyPath1 = new Path("/system/keys/key1");
+ keyPath2 = new Path("/system/keys/key2");
+ keyPath3 = new Path("/system/keys/key3");
+ }
+
+ @Test
+ public void testCreateCacheWithSingleSystemKey() throws Exception {
+ // Setup
+    List<Path> keyPaths = Collections.singletonList(keyPath1);
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
+ when(mockAccessor.loadSystemKey(keyPath1)).thenReturn(keyData1);
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify
+ assertNotNull(cache);
+ assertSame(keyData1, cache.getLatestSystemKey());
+ assertSame(keyData1, cache.getSystemKeyByChecksum(keyData1.getKeyChecksum()));
+ assertNull(cache.getSystemKeyByChecksum(999L)); // Non-existent checksum
+
+ verify(mockAccessor).getAllSystemKeyFiles();
+ verify(mockAccessor).loadSystemKey(keyPath1);
+ }
+
+ @Test
+ public void testCreateCacheWithMultipleSystemKeys() throws Exception {
+ // Setup - keys should be processed in order, first one becomes latest
+    List<Path> keyPaths = Arrays.asList(keyPath1, keyPath2, keyPath3);
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
+ when(mockAccessor.loadSystemKey(keyPath1)).thenReturn(keyData1);
+ when(mockAccessor.loadSystemKey(keyPath2)).thenReturn(keyData2);
+ when(mockAccessor.loadSystemKey(keyPath3)).thenReturn(keyData3);
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify
+ assertNotNull(cache);
+ assertSame(keyData1, cache.getLatestSystemKey()); // First key becomes latest
+
+ // All keys should be accessible by checksum
+ assertSame(keyData1, cache.getSystemKeyByChecksum(keyData1.getKeyChecksum()));
+ assertSame(keyData2, cache.getSystemKeyByChecksum(keyData2.getKeyChecksum()));
+ assertSame(keyData3, cache.getSystemKeyByChecksum(keyData3.getKeyChecksum()));
+
+ // Non-existent checksum should return null
+ assertNull(cache.getSystemKeyByChecksum(999L));
+
+ verify(mockAccessor).getAllSystemKeyFiles();
+ verify(mockAccessor).loadSystemKey(keyPath1);
+ verify(mockAccessor).loadSystemKey(keyPath2);
+ verify(mockAccessor).loadSystemKey(keyPath3);
+ }
+
+ @Test
+ public void testCreateCacheWithNoSystemKeyFiles() throws Exception {
+ // Setup - this covers the uncovered lines 46-47
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(Collections.emptyList());
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify
+ assertNull(cache);
+ verify(mockAccessor).getAllSystemKeyFiles();
+ }
+
+ @Test
+ public void testCreateCacheWithEmptyKeyFilesList() throws Exception {
+ // Setup - alternative empty scenario
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(new ArrayList<>());
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify
+ assertNull(cache);
+ verify(mockAccessor).getAllSystemKeyFiles();
+ }
+
+ @Test
+ public void testGetLatestSystemKeyConsistency() throws Exception {
+ // Setup
+    List<Path> keyPaths = Arrays.asList(keyPath1, keyPath2);
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
+ when(mockAccessor.loadSystemKey(keyPath1)).thenReturn(keyData1);
+ when(mockAccessor.loadSystemKey(keyPath2)).thenReturn(keyData2);
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify - latest key should be consistent across calls
+ ManagedKeyData latest1 = cache.getLatestSystemKey();
+ ManagedKeyData latest2 = cache.getLatestSystemKey();
+
+ assertNotNull(latest1);
+ assertSame(latest1, latest2);
+ assertSame(keyData1, latest1); // First key should be latest
+ }
+
+ @Test
+ public void testGetSystemKeyByChecksumWithDifferentKeys() throws Exception {
+ // Setup
+    List<Path> keyPaths = Arrays.asList(keyPath1, keyPath2, keyPath3);
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
+ when(mockAccessor.loadSystemKey(keyPath1)).thenReturn(keyData1);
+ when(mockAccessor.loadSystemKey(keyPath2)).thenReturn(keyData2);
+ when(mockAccessor.loadSystemKey(keyPath3)).thenReturn(keyData3);
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify each key can be retrieved by its unique checksum
+ long checksum1 = keyData1.getKeyChecksum();
+ long checksum2 = keyData2.getKeyChecksum();
+ long checksum3 = keyData3.getKeyChecksum();
+
+ // Checksums should be different
+    assertTrue(checksum1 != checksum2);
+    assertTrue(checksum2 != checksum3);
+    assertTrue(checksum1 != checksum3);
+
+ // Each key should be retrievable by its checksum
+ assertSame(keyData1, cache.getSystemKeyByChecksum(checksum1));
+ assertSame(keyData2, cache.getSystemKeyByChecksum(checksum2));
+ assertSame(keyData3, cache.getSystemKeyByChecksum(checksum3));
+ }
+
+ @Test
+ public void testGetSystemKeyByChecksumWithNonExistentChecksum() throws Exception {
+ // Setup
+    List<Path> keyPaths = Collections.singletonList(keyPath1);
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
+ when(mockAccessor.loadSystemKey(keyPath1)).thenReturn(keyData1);
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify
+ assertNotNull(cache);
+
+ // Test various non-existent checksums
+ assertNull(cache.getSystemKeyByChecksum(0L));
+ assertNull(cache.getSystemKeyByChecksum(-1L));
+ assertNull(cache.getSystemKeyByChecksum(Long.MAX_VALUE));
+ assertNull(cache.getSystemKeyByChecksum(Long.MIN_VALUE));
+
+ // But the actual checksum should work
+ assertSame(keyData1, cache.getSystemKeyByChecksum(keyData1.getKeyChecksum()));
+ }
+
+ @Test(expected = IOException.class)
+ public void testCreateCacheWithAccessorIOException() throws Exception {
+ // Setup - accessor throws IOException
+ when(mockAccessor.getAllSystemKeyFiles()).thenThrow(new IOException("File system error"));
+
+ // Execute - should propagate the IOException
+ SystemKeyCache.createCache(mockAccessor);
+ }
+
+ @Test(expected = IOException.class)
+ public void testCreateCacheWithLoadSystemKeyIOException() throws Exception {
+ // Setup - loading key throws IOException
+    List<Path> keyPaths = Collections.singletonList(keyPath1);
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
+ when(mockAccessor.loadSystemKey(keyPath1)).thenThrow(new IOException("Key load error"));
+
+ // Execute - should propagate the IOException
+ SystemKeyCache.createCache(mockAccessor);
+ }
+
+ @Test
+ public void testCacheWithKeysHavingSameChecksum() throws Exception {
+ // Setup - create two keys that will have the same checksum (same content)
+ Key sameKey1 = new SecretKeySpec("identical-bytes".getBytes(), "AES");
+ Key sameKey2 = new SecretKeySpec("identical-bytes".getBytes(), "AES");
+
+ ManagedKeyData sameManagedKey1 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, sameKey1,
+ ManagedKeyState.ACTIVE, "metadata-A", 1000L);
+ ManagedKeyData sameManagedKey2 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, sameKey2,
+ ManagedKeyState.ACTIVE, "metadata-B", 2000L);
+
+ // Verify they have the same checksum
+ assertEquals(sameManagedKey1.getKeyChecksum(), sameManagedKey2.getKeyChecksum());
+
+    List<Path> keyPaths = Arrays.asList(keyPath1, keyPath2);
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
+ when(mockAccessor.loadSystemKey(keyPath1)).thenReturn(sameManagedKey1);
+ when(mockAccessor.loadSystemKey(keyPath2)).thenReturn(sameManagedKey2);
+
+ // Execute
+ SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
+
+ // Verify - second key should overwrite first in the map due to same checksum
+ assertNotNull(cache);
+ assertSame(sameManagedKey1, cache.getLatestSystemKey()); // First is still latest
+
+ // The map should contain the second key for the shared checksum
+ ManagedKeyData retrievedKey = cache.getSystemKeyByChecksum(sameManagedKey1.getKeyChecksum());
+ assertSame(sameManagedKey2, retrievedKey); // Last one wins in TreeMap
+ }
+
+ @Test
+  public void testCreateCacheWithRuntimeExceptionOnLoadSystemKey() throws Exception {
+ when(mockAccessor.getAllSystemKeyFiles()).thenReturn(Arrays.asList(keyPath1));
+ when(mockAccessor.loadSystemKey(keyPath1)).thenThrow(new RuntimeException("Key load error"));
+
+ RuntimeException ex = assertThrows(RuntimeException.class, () -> {
+ SystemKeyCache.createCache(mockAccessor);
+ });
+    assertEquals("Key load error", ex.getMessage());
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java
new file mode 100644
index 000000000000..9c3e5991c6e7
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java
@@ -0,0 +1,1077 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.DISABLED;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.FAILED;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.INACTIVE;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assume.assumeTrue;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.anyBoolean;
+import static org.mockito.Mockito.clearInvocations;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.security.Key;
+import java.security.KeyException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.function.Consumer;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.Server;
+import org.apache.hadoop.hbase.client.AsyncAdmin;
+import org.apache.hadoop.hbase.client.AsyncClusterConnection;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.keymeta.KeyManagementService;
+import org.apache.hadoop.hbase.keymeta.KeymetaAdminImpl;
+import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.junit.runner.RunWith;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameter;
+import org.junit.runners.Parameterized.Parameters;
+import org.junit.runners.Suite;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+
+@RunWith(Suite.class)
+@Suite.SuiteClasses({ TestKeymetaAdminImpl.TestWhenDisabled.class,
+ TestKeymetaAdminImpl.TestAdminImpl.class, TestKeymetaAdminImpl.TestForKeyProviderNullReturn.class,
+ TestKeymetaAdminImpl.TestMiscAPIs.class,
+ TestKeymetaAdminImpl.TestNewKeyManagementAdminMethods.class })
+@Category({ MasterTests.class, SmallTests.class })
+public class TestKeymetaAdminImpl {
+
+ private static final String CUST = "cust1";
+ private static final byte[] CUST_BYTES = CUST.getBytes();
+
+ protected final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+
+ @Rule
+ public TestName name = new TestName();
+
+ protected Configuration conf;
+ protected Path testRootDir;
+ protected FileSystem fs;
+
+ protected FileSystem mockFileSystem = mock(FileSystem.class);
+ protected MasterServices mockServer = mock(MasterServices.class);
+ protected KeymetaAdminImplForTest keymetaAdmin;
+ KeymetaTableAccessor keymetaAccessor = mock(KeymetaTableAccessor.class);
+
+ @Before
+ public void setUp() throws Exception {
+ conf = TEST_UTIL.getConfiguration();
+ testRootDir = TEST_UTIL.getDataTestDir(name.getMethodName());
+ fs = testRootDir.getFileSystem(conf);
+
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ MockManagedKeyProvider.class.getName());
+
+ when(mockServer.getKeyManagementService()).thenReturn(mockServer);
+ when(mockServer.getFileSystem()).thenReturn(mockFileSystem);
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ keymetaAdmin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+ }
+
+ @After
+ public void tearDown() throws Exception {
+ // Clear the provider cache to avoid test interference
+ Encryption.clearKeyProviderCache();
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestWhenDisabled extends TestKeymetaAdminImpl {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestWhenDisabled.class);
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "false");
+ }
+
+ @Test
+ public void testDisabled() throws Exception {
+ assertThrows(IOException.class, () -> keymetaAdmin
+ .enableKeyManagement(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES, KEY_SPACE_GLOBAL));
+ assertThrows(IOException.class, () -> keymetaAdmin
+ .getManagedKeys(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES, KEY_SPACE_GLOBAL));
+ }
+ }
+
+ @RunWith(Parameterized.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestAdminImpl extends TestKeymetaAdminImpl {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestAdminImpl.class);
+
+ @Parameter(0)
+ public String keySpace;
+ @Parameter(1)
+ public ManagedKeyState keyState;
+ @Parameter(2)
+ public boolean isNullKey;
+
+ @Parameters(name = "{index},keySpace={0},keyState={1}")
+ public static Collection<Object[]> data() {
+ return Arrays
+ .asList(new Object[][] { { KEY_SPACE_GLOBAL, ACTIVE, false }, { "ns1", ACTIVE, false },
+ { KEY_SPACE_GLOBAL, FAILED, true }, { KEY_SPACE_GLOBAL, DISABLED, true }, });
+ }
+
+ @Test
+ public void testEnableAndGet() throws Exception {
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(conf);
+ managedKeyProvider.setMockedKeyState(CUST, keyState);
+ when(keymetaAccessor.getKeyManagementStateMarker(CUST.getBytes(), keySpace))
+ .thenReturn(managedKeyProvider.getManagedKey(CUST.getBytes(), keySpace));
+
+ ManagedKeyData managedKey = keymetaAdmin.enableKeyManagement(CUST_BYTES, keySpace);
+ assertNotNull(managedKey);
+ assertEquals(keyState, managedKey.getKeyState());
+ verify(keymetaAccessor).getKeyManagementStateMarker(CUST.getBytes(), keySpace);
+
+ keymetaAdmin.getManagedKeys(CUST_BYTES, keySpace);
+ verify(keymetaAccessor).getAllKeys(CUST.getBytes(), keySpace, false);
+ }
+
+ @Test
+ public void testEnableKeyManagement() throws Exception {
+ assumeTrue(keyState == ACTIVE);
+ ManagedKeyData managedKey = keymetaAdmin.enableKeyManagement(CUST_BYTES, "namespace1");
+ assertEquals(ManagedKeyState.ACTIVE, managedKey.getKeyState());
+ assertEquals(ManagedKeyProvider.encodeToStr(CUST_BYTES), managedKey.getKeyCustodianEncoded());
+ assertEquals("namespace1", managedKey.getKeyNamespace());
+
+ // Second call should return the same key since our mock key provider returns the same key
+ ManagedKeyData managedKey2 = keymetaAdmin.enableKeyManagement(CUST_BYTES, "namespace1");
+ assertEquals(managedKey, managedKey2);
+ }
+
+ @Test
+ public void testEnableKeyManagementWithMultipleNamespaces() throws Exception {
+ ManagedKeyData managedKey = keymetaAdmin.enableKeyManagement(CUST_BYTES, "namespace1");
+ assertEquals("namespace1", managedKey.getKeyNamespace());
+
+ ManagedKeyData managedKey2 = keymetaAdmin.enableKeyManagement(CUST_BYTES, "namespace2");
+ assertEquals("namespace2", managedKey2.getKeyNamespace());
+ }
+ }
+
+ @RunWith(Parameterized.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestForKeyProviderNullReturn extends TestKeymetaAdminImpl {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestForKeyProviderNullReturn.class);
+
+ @Parameter(0)
+ public String keySpace;
+
+ @Parameters(name = "{index},keySpace={0}")
+ public static Collection<Object[]> data() {
+ return Arrays.asList(new Object[][] { { KEY_SPACE_GLOBAL }, { "ns1" }, });
+ }
+
+ @Test
+ public void test() throws Exception {
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(conf);
+ String cust = "invalidcust1";
+ byte[] custBytes = cust.getBytes();
+ managedKeyProvider.setMockedKey(cust, null, keySpace);
+ IOException ex = assertThrows(IOException.class,
+ () -> keymetaAdmin.enableKeyManagement(custBytes, keySpace));
+ assertEquals("Invalid null managed key received from key provider", ex.getMessage());
+ }
+ }
+
+ private static class KeymetaAdminImplForTest extends KeymetaAdminImpl {
+ private final KeymetaTableAccessor keymetaAccessor;
+
+ public KeymetaAdminImplForTest(MasterServices mockServer, KeymetaTableAccessor mockAccessor) {
+ super(mockServer);
+ this.keymetaAccessor = mockAccessor;
+ }
+
+ @Override
+ public void addKey(ManagedKeyData keyData) throws IOException {
+ keymetaAccessor.addKey(keyData);
+ }
+
+ @Override
+ public List<ManagedKeyData> getAllKeys(byte[] key_cust, String keyNamespace,
+ boolean includeMarkers) throws IOException, KeyException {
+ return keymetaAccessor.getAllKeys(key_cust, keyNamespace, includeMarkers);
+ }
+
+ @Override
+ public ManagedKeyData getKeyManagementStateMarker(byte[] key_cust, String keyNamespace)
+ throws IOException, KeyException {
+ return keymetaAccessor.getKeyManagementStateMarker(key_cust, keyNamespace);
+ }
+ }
+
+ protected boolean assertKeyData(ManagedKeyData keyData, ManagedKeyState expKeyState,
+ Key expectedKey) {
+ assertNotNull(keyData);
+ assertEquals(expKeyState, keyData.getKeyState());
+ if (expectedKey == null) {
+ assertNull(keyData.getTheKey());
+ } else {
+ byte[] keyBytes = keyData.getTheKey().getEncoded();
+ byte[] expectedKeyBytes = expectedKey.getEncoded();
+ assertEquals(expectedKeyBytes.length, keyBytes.length);
+ assertEquals(new Bytes(expectedKeyBytes), new Bytes(keyBytes));
+ }
+ return true;
+ }
+
+ /**
+ * Tests for rotateSTK and other miscellaneous key management admin APIs.
+ */
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestMiscAPIs extends TestKeymetaAdminImpl {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestMiscAPIs.class);
+
+ private ServerManager mockServerManager = mock(ServerManager.class);
+ private AsyncClusterConnection mockConnection;
+ private AsyncAdmin mockAsyncAdmin;
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ mockConnection = mock(AsyncClusterConnection.class);
+ mockAsyncAdmin = mock(AsyncAdmin.class);
+ when(mockServer.getServerManager()).thenReturn(mockServerManager);
+ when(mockServer.getAsyncClusterConnection()).thenReturn(mockConnection);
+ when(mockConnection.getAdmin()).thenReturn(mockAsyncAdmin);
+ }
+
+ @Test
+ public void testEnableWithInactiveKey() throws Exception {
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(conf);
+ managedKeyProvider.setMockedKeyState(CUST, INACTIVE);
+ when(keymetaAccessor.getKeyManagementStateMarker(CUST.getBytes(), KEY_SPACE_GLOBAL))
+ .thenReturn(managedKeyProvider.getManagedKey(CUST.getBytes(), KEY_SPACE_GLOBAL));
+
+ IOException exception = assertThrows(IOException.class,
+ () -> keymetaAdmin.enableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL));
+ assertTrue(exception.getMessage(),
+ exception.getMessage().contains("Expected key to be ACTIVE, but got an INACTIVE key"));
+ }
+
+ /**
+ * Helper method to test that a method throws IOException when not called on master.
+ * @param adminAction the action to test, taking a KeymetaAdminImpl instance
+ * @param expectedMessageFragment the expected fragment in the error message
+ */
+ private void assertNotOnMasterThrowsException(Consumer<KeymetaAdminImpl> adminAction,
+ String expectedMessageFragment) {
+ // Create a non-master server mock
+ Server mockRegionServer = mock(Server.class);
+ KeyManagementService mockKeyService = mock(KeyManagementService.class);
+ when(mockRegionServer.getKeyManagementService()).thenReturn(mockKeyService);
+ when(mockKeyService.getConfiguration()).thenReturn(conf);
+ when(mockRegionServer.getConfiguration()).thenReturn(conf);
+ when(mockRegionServer.getFileSystem()).thenReturn(mockFileSystem);
+
+ KeymetaAdminImpl admin = new KeymetaAdminImpl(mockRegionServer) {
+ @Override
+ protected AsyncAdmin getAsyncAdmin(MasterServices master) {
+ throw new RuntimeException("Shouldn't be called since we are not on master");
+ }
+ };
+
+ RuntimeException runtimeEx =
+ assertThrows(RuntimeException.class, () -> adminAction.accept(admin));
+ assertTrue(runtimeEx.getCause() instanceof IOException);
+ IOException ex = (IOException) runtimeEx.getCause();
+ assertTrue(ex.getMessage().contains(expectedMessageFragment));
+ }
+
+ /**
+ * Helper method to test that a method throws IOException when key management is disabled.
+ * @param adminAction the action to test, taking a KeymetaAdminImpl instance
+ */
+ private void assertDisabledThrowsException(Consumer<KeymetaAdminImpl> adminAction) {
+ TEST_UTIL.getConfiguration().set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "false");
+
+ KeymetaAdminImpl admin = new KeymetaAdminImpl(mockServer) {
+ @Override
+ protected AsyncAdmin getAsyncAdmin(MasterServices master) {
+ throw new RuntimeException("Shouldn't be called since we are disabled");
+ }
+ };
+
+ RuntimeException runtimeEx =
+ assertThrows(RuntimeException.class, () -> adminAction.accept(admin));
+ assertTrue(runtimeEx.getCause() instanceof IOException);
+ IOException ex = (IOException) runtimeEx.getCause();
+ assertTrue("Exception message should contain 'not enabled', but was: " + ex.getMessage(),
+ ex.getMessage().contains("not enabled"));
+ }
+
+ /**
+ * Test rotateSTK when a new key is detected, exercising the success scenario: 1.
+ * rotateSystemKeyIfChanged() returns true (new key detected) 2. Master initiates a System Key
+ * cache refresh on the region servers via the async admin 3. The refresh completes
+ * successfully 4. Method returns true
+ */
+ @Test
+ public void testRotateSTKWithNewKey() throws Exception {
+ // Setup mocks for MasterServices
+ // Mock rotateSystemKeyIfChanged to report that a new key was detected
+ when(mockServer.rotateSystemKeyIfChanged()).thenReturn(true);
+
+ when(mockAsyncAdmin.refreshSystemKeyCacheOnServers(any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+
+ // Call rotateSTK - should return true since new key was detected
+ boolean result = admin.rotateSTK();
+
+ // Verify the result
+ assertTrue("rotateSTK should return true when new key is detected", result);
+
+ // Verify that rotateSystemKeyIfChanged was called
+ verify(mockServer).rotateSystemKeyIfChanged();
+ verify(mockAsyncAdmin).refreshSystemKeyCacheOnServers(any());
+ }
+
+ /**
+ * Test rotateSTK when no key change is detected, exercising the no-change scenario: 1.
+ * rotateSystemKeyIfChanged() returns false 2. Method returns false immediately without
+ * contacting any region servers 3. No RPC calls are made to region servers
+ */
+ @Test
+ public void testRotateSTKNoChange() throws Exception {
+ // Mock rotateSystemKeyIfChanged to report no key change
+ when(mockServer.rotateSystemKeyIfChanged()).thenReturn(false);
+
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+
+ // Call rotateSTK - should return false since no key change was detected
+ boolean result = admin.rotateSTK();
+
+ // Verify the result
+ assertFalse("rotateSTK should return false when no key change is detected", result);
+
+ // Verify that rotateSystemKeyIfChanged was called
+ verify(mockServer).rotateSystemKeyIfChanged();
+
+ // Verify that getOnlineServersList was never called (short-circuit behavior)
+ verify(mockServerManager, never()).getOnlineServersList();
+ }
+
+ @Test
+ public void testRotateSTKOnIOException() throws Exception {
+ when(mockServer.rotateSystemKeyIfChanged()).thenThrow(new IOException("test"));
+
+ KeymetaAdminImpl admin = new KeymetaAdminImpl(mockServer);
+ IOException ex = assertThrows(IOException.class, () -> admin.rotateSTK());
+ assertEquals("test", ex.getMessage());
+ }
+
+ /**
+ * Test rotateSTK when region server refresh fails.
+ */
+ @Test
+ public void testRotateSTKWithFailedServerRefresh() throws Exception {
+ // Setup mocks for MasterServices
+ // Mock rotateSystemKeyIfChanged to report that a new key was detected
+ when(mockServer.rotateSystemKeyIfChanged()).thenReturn(true);
+
+ CompletableFuture<Void> failedFuture = new CompletableFuture<>();
+ failedFuture.completeExceptionally(new IOException("refresh failed"));
+ when(mockAsyncAdmin.refreshSystemKeyCacheOnServers(any())).thenReturn(failedFuture);
+
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+
+ // Call rotateSTK and expect IOException
+ IOException ex = assertThrows(IOException.class, () -> admin.rotateSTK());
+
+ assertTrue(ex.getMessage()
+ .contains("Failed to initiate System Key cache refresh on one or more region servers"));
+
+ // Verify that rotateSystemKeyIfChanged was called
+ verify(mockServer).rotateSystemKeyIfChanged();
+ verify(mockAsyncAdmin).refreshSystemKeyCacheOnServers(any());
+ }
+
+ @Test
+ public void testRotateSTKNotOnMaster() throws Exception {
+ assertNotOnMasterThrowsException(admin -> {
+ try {
+ admin.rotateSTK();
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }, "rotateSTK can only be called on master");
+ }
+
+ @Test
+ public void testEjectManagedKeyDataCacheEntryNotOnMaster() throws Exception {
+ byte[] keyCustodian = Bytes.toBytes("testCustodian");
+ String keyNamespace = "testNamespace";
+ String keyMetadata = "testMetadata";
+
+ assertNotOnMasterThrowsException(admin -> {
+ try {
+ admin.ejectManagedKeyDataCacheEntry(keyCustodian, keyNamespace, keyMetadata);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }, "ejectManagedKeyDataCacheEntry can only be called on master");
+ }
+
+ @Test
+ public void testClearManagedKeyDataCacheNotOnMaster() throws Exception {
+ assertNotOnMasterThrowsException(admin -> {
+ try {
+ admin.clearManagedKeyDataCache();
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ }, "clearManagedKeyDataCache can only be called on master");
+ }
+
+ @Test
+ public void testRotateSTKWhenDisabled() throws Exception {
+ assertDisabledThrowsException(admin -> {
+ try {
+ admin.rotateSTK();
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ });
+ }
+
+ @Test
+ public void testEjectManagedKeyDataCacheEntryWhenDisabled() throws Exception {
+ byte[] keyCustodian = Bytes.toBytes("testCustodian");
+ String keyNamespace = "testNamespace";
+ String keyMetadata = "testMetadata";
+
+ assertDisabledThrowsException(admin -> {
+ try {
+ admin.ejectManagedKeyDataCacheEntry(keyCustodian, keyNamespace, keyMetadata);
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ });
+ }
+
+ @Test
+ public void testClearManagedKeyDataCacheWhenDisabled() throws Exception {
+ assertDisabledThrowsException(admin -> {
+ try {
+ admin.clearManagedKeyDataCache();
+ } catch (IOException e) {
+ throw new RuntimeException(e);
+ }
+ });
+ }
+
+ /**
+ * Test ejectManagedKeyDataCacheEntry API - verify it calls the AsyncAdmin method with correct
+ * parameters
+ */
+ @Test
+ public void testEjectManagedKeyDataCacheEntry() throws Exception {
+ byte[] keyCustodian = Bytes.toBytes("testCustodian");
+ String keyNamespace = "testNamespace";
+ String keyMetadata = "testMetadata";
+
+ when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+
+ // Call the method
+ admin.ejectManagedKeyDataCacheEntry(keyCustodian, keyNamespace, keyMetadata);
+
+ // Verify the AsyncAdmin method was called
+ verify(mockAsyncAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ }
+
+ /**
+ * Test ejectManagedKeyDataCacheEntry when it fails
+ */
+ @Test
+ public void testEjectManagedKeyDataCacheEntryWithFailure() throws Exception {
+ byte[] keyCustodian = Bytes.toBytes("testCustodian");
+ String keyNamespace = "testNamespace";
+ String keyMetadata = "testMetadata";
+
+ CompletableFuture<Void> failedFuture = new CompletableFuture<>();
+ failedFuture.completeExceptionally(new IOException("eject failed"));
+ when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(failedFuture);
+
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+
+ // Call the method and expect IOException
+ IOException ex = assertThrows(IOException.class,
+ () -> admin.ejectManagedKeyDataCacheEntry(keyCustodian, keyNamespace, keyMetadata));
+
+ assertTrue(ex.getMessage().contains("eject failed"));
+ verify(mockAsyncAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ }
+
+ /**
+ * Test clearManagedKeyDataCache API - verify it calls the AsyncAdmin method
+ */
+ @Test
+ public void testClearManagedKeyDataCache() throws Exception {
+ when(mockAsyncAdmin.clearManagedKeyDataCacheOnServers(any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+
+ // Call the method
+ admin.clearManagedKeyDataCache();
+
+ // Verify the AsyncAdmin method was called
+ verify(mockAsyncAdmin).clearManagedKeyDataCacheOnServers(any());
+ }
+
+ /**
+ * Test clearManagedKeyDataCache when it fails
+ */
+ @Test
+ public void testClearManagedKeyDataCacheWithFailure() throws Exception {
+ CompletableFuture<Void> failedFuture = new CompletableFuture<>();
+ failedFuture.completeExceptionally(new IOException("clear failed"));
+ when(mockAsyncAdmin.clearManagedKeyDataCacheOnServers(any())).thenReturn(failedFuture);
+
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
+
+ // Call the method and expect IOException
+ IOException ex = assertThrows(IOException.class, () -> admin.clearManagedKeyDataCache());
+
+ assertTrue(ex.getMessage().contains("clear failed"));
+ verify(mockAsyncAdmin).clearManagedKeyDataCacheOnServers(any());
+ }
+ }
+
+ /**
+ * Tests for new key management admin methods.
+ */
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestNewKeyManagementAdminMethods {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestNewKeyManagementAdminMethods.class);
+
+ @Mock
+ private MasterServices mockMasterServices;
+ @Mock
+ private AsyncAdmin mockAsyncAdmin;
+ @Mock
+ private AsyncClusterConnection mockAsyncClusterConnection;
+ @Mock
+ private ServerManager mockServerManager;
+ @Mock
+ private KeymetaTableAccessor mockAccessor;
+ @Mock
+ private ManagedKeyProvider mockProvider;
+ @Mock
+ private KeyManagementService mockKeyManagementService;
+
+ @Before
+ public void setUp() throws Exception {
+ MockitoAnnotations.openMocks(this);
+ when(mockMasterServices.getAsyncClusterConnection()).thenReturn(mockAsyncClusterConnection);
+ when(mockAsyncClusterConnection.getAdmin()).thenReturn(mockAsyncAdmin);
+ when(mockMasterServices.getServerManager()).thenReturn(mockServerManager);
+ when(mockServerManager.getOnlineServersList()).thenReturn(new ArrayList<>());
+
+ // Setup KeyManagementService mock
+ Configuration conf = HBaseConfiguration.create();
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+ when(mockKeyManagementService.getConfiguration()).thenReturn(conf);
+ when(mockMasterServices.getKeyManagementService()).thenReturn(mockKeyManagementService);
+ }
+
+ @Test
+ public void testDisableKeyManagement() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData activeKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ ManagedKeyData disabledMarker =
+ new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, ManagedKeyState.DISABLED);
+
+ when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(activeKey)
+ .thenReturn(disabledMarker);
+
+ ManagedKeyData result = admin.disableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ assertNotNull(result);
+ assertEquals(ManagedKeyState.DISABLED, result.getKeyState());
+ verify(mockAccessor, times(2)).getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL);
+ verify(mockAccessor).updateActiveState(activeKey, ManagedKeyState.INACTIVE);
+
+ // Repeat the call for idempotency check.
+ clearInvocations(mockAccessor);
+ when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(disabledMarker);
+ result = admin.disableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL);
+ assertNotNull(result);
+ assertEquals(ManagedKeyState.DISABLED, result.getKeyState());
+ verify(mockAccessor, times(2)).getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL);
+ verify(mockAccessor, never()).updateActiveState(any(), any());
+ }
+
+ @Test
+ public void testDisableManagedKey() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData disabledKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.DISABLED, "metadata1", 123L);
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash("metadata1");
+ when(mockAccessor.getKey(any(), any(), any())).thenReturn(disabledKey);
+
+ CompletableFuture<Void> successFuture = CompletableFuture.completedFuture(null);
+ when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(successFuture);
+
+ IOException exception = assertThrows(IOException.class,
+ () -> admin.disableManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, keyMetadataHash));
+ assertTrue(exception.getMessage(),
+ exception.getMessage().contains("Key is already disabled"));
+ verify(mockAccessor, never()).disableKey(any(ManagedKeyData.class));
+ }
+
+ @Test
+ public void testDisableManagedKeyNotFound() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash("metadata1");
+ // Return null to simulate key not found
+ when(mockAccessor.getKey(any(), any(), any())).thenReturn(null);
+
+ IOException exception = assertThrows(IOException.class,
+ () -> admin.disableManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, keyMetadataHash));
+ assertTrue(exception.getMessage(),
+ exception.getMessage()
+ .contains("Key not found for (custodian: Y3VzdDE=, namespace: *) with metadata hash: "
+ + ManagedKeyProvider.encodeToStr(keyMetadataHash)));
+ verify(mockAccessor).getKey(CUST_BYTES, KEY_SPACE_GLOBAL, keyMetadataHash);
+ }
+
+ @Test
+ public void testRotateManagedKeyNoActiveKey() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ // Return null to simulate no active key exists
+ when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(null);
+
+ IOException exception =
+ assertThrows(IOException.class, () -> admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL));
+ assertTrue(exception.getMessage().contains("No active key found"));
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL);
+ }
+
+ @Test
+ public void testRotateManagedKey() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData currentKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ ManagedKeyData newKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata2", 124L);
+
+ when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(currentKey);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.getManagedKey(any(), any())).thenReturn(newKey);
+
+ ManagedKeyData result = admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ assertNotNull(result);
+ assertEquals(newKey, result);
+ verify(mockAccessor).addKey(newKey);
+ verify(mockAccessor).updateActiveState(currentKey, ManagedKeyState.INACTIVE);
+ }
+
+ @Test
+ public void testRefreshManagedKeysWithNoStateChange() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ List<ManagedKeyData> keys = new ArrayList<>();
+ ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ keys.add(key1);
+
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(key1);
+
+ admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
+ verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).disableKey(any());
+ }
+
+ @Test
+ public void testRotateManagedKeyIgnoresFailedKey() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData currentKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ ManagedKeyData newKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.FAILED, "metadata1", 124L);
+
+ when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(currentKey);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.getManagedKey(any(), any())).thenReturn(newKey);
+ // Mock the AsyncAdmin for ejectManagedKeyDataCacheEntry
+ when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+
+ ManagedKeyData result = admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ assertNull(result);
+ // Verify that the active key was not marked as inactive
+ verify(mockAccessor, never()).addKey(any());
+ verify(mockAccessor, never()).updateActiveState(any(), any());
+ }
+
+ @Test
+ public void testRotateManagedKeyNoRotation() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ // Current and new keys have the same metadata hash, so no rotation should occur
+ ManagedKeyData currentKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+
+ when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(currentKey);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.getManagedKey(any(), any())).thenReturn(currentKey);
+
+ ManagedKeyData result = admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ assertNull(result);
+ verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).addKey(any());
+ verify(mockAsyncAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(),
+ any());
+ }
+
+ @Test
+ public void testRefreshManagedKeysWithStateChange() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ List<ManagedKeyData> keys = new ArrayList<>();
+ ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ keys.add(key1);
+
+ // Refreshed key has a different state (INACTIVE)
+ ManagedKeyData refreshedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.INACTIVE, "metadata1", 123L);
+
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(refreshedKey);
+
+ admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
+ verify(mockAccessor).updateActiveState(key1, ManagedKeyState.INACTIVE);
+ }
+
+ @Test
+ public void testRefreshManagedKeysWithDisabledState() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ List<ManagedKeyData> keys = new ArrayList<>();
+ ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ keys.add(key1);
+
+ // Refreshed key is DISABLED
+ ManagedKeyData disabledKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.DISABLED, "metadata1", 123L);
+
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(any(), any())).thenReturn(disabledKey);
+ // Mock the cache ejection call so the disabled-key path can complete
+ when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+
+ admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
+ verify(mockAccessor).disableKey(key1);
+ // Verify cache ejection was requested on the servers
+ verify(mockAsyncAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ }
+
+ @Test
+ public void testRefreshManagedKeysWithException() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ List<ManagedKeyData> keys = new ArrayList<>();
+ ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ ManagedKeyData key2 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata2", 124L);
+ ManagedKeyData refreshedKey1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.INACTIVE, "metadata1", 123L);
+ keys.add(key1);
+ keys.add(key2);
+
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ // updateActiveState fails for every key; both keys should still be fetched from the provider
+ doThrow(new IOException("Simulated error")).when(mockAccessor)
+ .updateActiveState(any(ManagedKeyData.class), any(ManagedKeyState.class));
+ when(mockProvider.unwrapKey(key1.getKeyMetadata(), null)).thenReturn(refreshedKey1);
+ when(mockProvider.unwrapKey(key2.getKeyMetadata(), null)).thenReturn(key2);
+
+ // The failure is rethrown as an IOException after all keys have been attempted
+ IOException exception = assertThrows(IOException.class,
+ () -> admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL));
+
+ assertTrue(exception.getCause() instanceof IOException);
+ assertTrue(exception.getCause().getMessage(),
+ exception.getCause().getMessage().contains("Simulated error"));
+ verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
+ verify(mockAccessor, never()).disableKey(any());
+ verify(mockProvider).unwrapKey(key1.getKeyMetadata(), null);
+ verify(mockProvider).unwrapKey(key2.getKeyMetadata(), null);
+ verify(mockAsyncAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(),
+ any());
+ }
+
+ @Test
+ public void testRefreshKeyWithMetadataValidationFailure() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData originalKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ // Refreshed key has different metadata (which should not happen and indicates a serious
+ // error)
+ ManagedKeyData refreshedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata2", 124L);
+
+ List<ManagedKeyData> keys = Arrays.asList(originalKey);
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(originalKey.getKeyMetadata(), null)).thenReturn(refreshedKey);
+
+ // The metadata mismatch triggers a KeyException which gets wrapped in an IOException
+ IOException exception = assertThrows(IOException.class,
+ () -> admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL));
+ assertTrue(exception.getCause() instanceof KeyException);
+ assertTrue(exception.getCause().getMessage(),
+ exception.getCause().getMessage().contains("Key metadata changed during refresh"));
+ verify(mockProvider).unwrapKey(originalKey.getKeyMetadata(), null);
+ // No state updates should happen due to the exception
+ verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).disableKey(any());
+ }
+
+ @Test
+ public void testRefreshKeyWithFailedStateIgnored() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData originalKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ // Refreshed key is in FAILED state (provider issue)
+ ManagedKeyData failedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.FAILED, "metadata1", 124L);
+
+ List<ManagedKeyData> keys = Arrays.asList(originalKey);
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(originalKey.getKeyMetadata(), null)).thenReturn(failedKey);
+
+ admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ // Should not update state when refreshed key is FAILED
+ verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).disableKey(any());
+ verify(mockProvider).unwrapKey(originalKey.getKeyMetadata(), null);
+ }
+
+ @Test
+ public void testRefreshKeyRecoveryFromPriorEnableFailure() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ // FAILED state marker with null metadata, left behind by a prior failed enable attempt
+ ManagedKeyData failedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, FAILED, 123L);
+
+ // Provider returns a recovered key
+ ManagedKeyData recoveredKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyState.ACTIVE, "metadata1", 124L);
+
+ List<ManagedKeyData> keys = Arrays.asList(failedKey);
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL))
+ .thenReturn(failedKey);
+ when(mockProvider.getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace()))
+ .thenReturn(recoveredKey);
+
+ admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ // Should call getManagedKey for the FAILED key with null metadata
+ verify(mockProvider).getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace());
+ // Should add the recovered key
+ verify(mockAccessor).addKey(recoveredKey);
+ }
+
+ @Test
+ public void testRefreshKeyNoRecoveryFromPriorEnableFailure() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ // FAILED key with null metadata
+ ManagedKeyData failedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, FAILED, 123L);
+
+ // Provider returns another FAILED key (recovery didn't work)
+ ManagedKeyData stillFailedKey =
+ new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, ManagedKeyState.FAILED, 124L);
+
+ List<ManagedKeyData> keys = Arrays.asList(failedKey);
+ when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL))
+ .thenReturn(failedKey);
+ when(mockProvider.getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace()))
+ .thenReturn(stillFailedKey);
+
+ admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ // Should call getManagedKey for FAILED key with null metadata
+ verify(mockProvider).getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace());
+ verify(mockAccessor, never()).addKey(any());
+ }
+
+ private class KeymetaAdminImplForTest extends KeymetaAdminImpl {
+ private final KeymetaTableAccessor accessor;
+
+ public KeymetaAdminImplForTest(MasterServices server, KeymetaTableAccessor accessor)
+ throws IOException {
+ super(server);
+ this.accessor = accessor;
+ }
+
+ @Override
+ protected AsyncAdmin getAsyncAdmin(MasterServices master) {
+ return mockAsyncAdmin;
+ }
+
+ @Override
+ public List<ManagedKeyData> getAllKeys(byte[] keyCust, String keyNamespace,
+ boolean includeMarkers) throws IOException, KeyException {
+ return accessor.getAllKeys(keyCust, keyNamespace, includeMarkers);
+ }
+
+ @Override
+ public ManagedKeyData getKey(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash)
+ throws IOException, KeyException {
+ return accessor.getKey(keyCust, keyNamespace, keyMetadataHash);
+ }
+
+ @Override
+ public void disableKey(ManagedKeyData keyData) throws IOException {
+ accessor.disableKey(keyData);
+ }
+
+ @Override
+ public ManagedKeyData getKeyManagementStateMarker(byte[] keyCust, String keyNamespace)
+ throws IOException, KeyException {
+ return accessor.getKeyManagementStateMarker(keyCust, keyNamespace);
+ }
+
+ @Override
+ public void addKeyManagementStateMarker(byte[] keyCust, String keyNamespace,
+ ManagedKeyState state) throws IOException {
+ accessor.addKeyManagementStateMarker(keyCust, keyNamespace, state);
+ }
+
+ @Override
+ public ManagedKeyProvider getKeyProvider() {
+ return accessor.getKeyProvider();
+ }
+
+ @Override
+ public void addKey(ManagedKeyData keyData) throws IOException {
+ accessor.addKey(keyData);
+ }
+
+ @Override
+ public void updateActiveState(ManagedKeyData keyData, ManagedKeyState newState)
+ throws IOException {
+ accessor.updateActiveState(keyData, newState);
+ }
+ }
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
index 5e6b8db58243..71b24ecc954a 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
@@ -29,6 +29,7 @@
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.SingleProcessHBaseCluster;
import org.apache.hadoop.hbase.StartTestingClusterOption;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyTestBase;
import org.apache.hadoop.hbase.master.RegionState.State;
import org.apache.hadoop.hbase.regionserver.HRegionServer;
import org.apache.hadoop.hbase.testclassification.FlakeyTests;
@@ -40,10 +41,16 @@
import org.junit.Test;
import org.junit.experimental.categories.Category;
import org.junit.rules.TestName;
+import org.junit.runner.RunWith;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Suite;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@Category({ FlakeyTests.class, LargeTests.class })
+@RunWith(Suite.class)
+@Suite.SuiteClasses({ TestMasterFailover.TestMasterFailoverDefaultConfig.class,
+ TestMasterFailover.TestSimpleMasterFailoverWithKeymeta.class })
public class TestMasterFailover {
@ClassRule
@@ -54,19 +61,11 @@ public class TestMasterFailover {
@Rule
public TestName name = new TestName();
- /**
- * Simple test of master failover.
- *
- * Starts with three masters. Kills a backup master. Then kills the active master. Ensures the
- * final master becomes active and we can still contact the cluster.
- */
- @Test
- public void testSimpleMasterFailover() throws Exception {
+ protected static void doTestSimpleMasterFailover(HBaseTestingUtil TEST_UTIL) throws Exception {
final int NUM_MASTERS = 3;
final int NUM_RS = 3;
// Start the cluster
- HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
try {
StartTestingClusterOption option = StartTestingClusterOption.builder().numMasters(NUM_MASTERS)
.numRegionServers(NUM_RS).numDataNodes(NUM_RS).build();
@@ -168,50 +167,90 @@ public void testSimpleMasterFailover() throws Exception {
}
}
- /**
- * Test meta in transition when master failover. This test used to manipulate region state up in
- * zk. That is not allowed any more in hbase2 so I removed that messing. That makes this test
- * anemic.
- */
- @Test
- public void testMetaInTransitionWhenMasterFailover() throws Exception {
- // Start the cluster
- HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
- TEST_UTIL.startMiniCluster();
- try {
- SingleProcessHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
- LOG.info("Cluster started");
-
- HMaster activeMaster = cluster.getMaster();
- ServerName metaServerName = cluster.getServerHoldingMeta();
- HRegionServer hrs = cluster.getRegionServer(metaServerName);
-
- // Now kill master, meta should remain on rs, where we placed it before.
- LOG.info("Aborting master");
- activeMaster.abort("test-kill");
- cluster.waitForMasterToStop(activeMaster.getServerName(), 30000);
- LOG.info("Master has aborted");
-
- // meta should remain where it was
- RegionState metaState = MetaTableLocator.getMetaRegionState(hrs.getZooKeeper());
- assertEquals("hbase:meta should be online on RS", metaState.getServerName(), metaServerName);
- assertEquals("hbase:meta should be online on RS", State.OPEN, metaState.getState());
-
- // Start up a new master
- LOG.info("Starting up a new master");
- activeMaster = cluster.startMaster().getMaster();
- LOG.info("Waiting for master to be ready");
- cluster.waitForActiveAndReadyMaster();
- LOG.info("Master is ready");
-
- // ensure meta is still deployed on RS
- metaState = MetaTableLocator.getMetaRegionState(activeMaster.getZooKeeper());
- assertEquals("hbase:meta should be online on RS", metaState.getServerName(), metaServerName);
- assertEquals("hbase:meta should be online on RS", State.OPEN, metaState.getState());
-
- // Done, shutdown the cluster
- } finally {
- TEST_UTIL.shutdownMiniCluster();
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ FlakeyTests.class, LargeTests.class })
+ public static class TestMasterFailoverDefaultConfig {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestMasterFailoverDefaultConfig.class);
+
+ /**
+ * Simple test of master failover.
+ *
+ * Starts with three masters. Kills a backup master. Then kills the active master. Ensures the
+ * final master becomes active and we can still contact the cluster.
+ */
+ @Test
+ public void testSimpleMasterFailover() throws Exception {
+ HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+ doTestSimpleMasterFailover(TEST_UTIL);
+ }
+
+ /**
+ * Test meta in transition when master failover. This test used to manipulate region state up in
+ * zk. That is not allowed any more in hbase2 so I removed that messing. That makes this test
+ * anemic.
+ */
+ @Test
+ public void testMetaInTransitionWhenMasterFailover() throws Exception {
+ // Start the cluster
+ HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+ TEST_UTIL.startMiniCluster();
+ try {
+ SingleProcessHBaseCluster cluster = TEST_UTIL.getHBaseCluster();
+ LOG.info("Cluster started");
+
+ HMaster activeMaster = cluster.getMaster();
+ ServerName metaServerName = cluster.getServerHoldingMeta();
+ HRegionServer hrs = cluster.getRegionServer(metaServerName);
+
+ // Now kill master, meta should remain on rs, where we placed it before.
+ LOG.info("Aborting master");
+ activeMaster.abort("test-kill");
+ cluster.waitForMasterToStop(activeMaster.getServerName(), 30000);
+ LOG.info("Master has aborted");
+
+ // meta should remain where it was
+ RegionState metaState = MetaTableLocator.getMetaRegionState(hrs.getZooKeeper());
+ assertEquals("hbase:meta should be online on RS", metaState.getServerName(),
+ metaServerName);
+ assertEquals("hbase:meta should be online on RS", State.OPEN, metaState.getState());
+
+ // Start up a new master
+ LOG.info("Starting up a new master");
+ activeMaster = cluster.startMaster().getMaster();
+ LOG.info("Waiting for master to be ready");
+ cluster.waitForActiveAndReadyMaster();
+ LOG.info("Master is ready");
+
+ // ensure meta is still deployed on RS
+ metaState = MetaTableLocator.getMetaRegionState(activeMaster.getZooKeeper());
+ assertEquals("hbase:meta should be online on RS", metaState.getServerName(),
+ metaServerName);
+ assertEquals("hbase:meta should be online on RS", State.OPEN, metaState.getState());
+
+ // Done, shutdown the cluster
+ } finally {
+ TEST_UTIL.shutdownMiniCluster();
+ }
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ FlakeyTests.class, LargeTests.class })
+ public static class TestSimpleMasterFailoverWithKeymeta extends ManagedKeyTestBase {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestSimpleMasterFailoverWithKeymeta.class);
+
+ @Test
+ public void testSimpleMasterFailoverWithKeymeta() throws Exception {
+ doTestSimpleMasterFailover(TEST_UTIL);
+ }
+
+ @Override
+ protected boolean isWithMiniClusterStart() {
+ return false;
}
}
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java
new file mode 100644
index 000000000000..09e409b11e7d
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java
@@ -0,0 +1,523 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.apache.hadoop.hbase.HConstants.SYSTEM_KEY_FILE_PREFIX;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.INACTIVE;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertThrows;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.security.Key;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.stream.IntStream;
+import javax.crypto.spec.SecretKeySpec;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.ClusterId;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.crypto.KeymetaTestUtils;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.keymeta.SystemKeyAccessor;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.CommonFSUtils;
+import org.apache.hadoop.hbase.util.Pair;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.rules.TestName;
+import org.junit.runner.RunWith;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameter;
+import org.junit.runners.Parameterized.Parameters;
+import org.junit.runners.Suite;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+
+@RunWith(Suite.class)
+@Suite.SuiteClasses({ TestSystemKeyAccessorAndManager.TestAccessorWhenDisabled.class,
+ TestSystemKeyAccessorAndManager.TestManagerWhenDisabled.class,
+ TestSystemKeyAccessorAndManager.TestAccessor.class,
+ TestSystemKeyAccessorAndManager.TestForInvalidFilenames.class,
+ TestSystemKeyAccessorAndManager.TestManagerForErrors.class,
+ TestSystemKeyAccessorAndManager.TestAccessorMisc.class
+})
+@Category({ MasterTests.class, SmallTests.class })
+public class TestSystemKeyAccessorAndManager {
+ private static final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
+
+ @Rule
+ public TestName name = new TestName();
+
+ protected Configuration conf;
+ protected Path testRootDir;
+ protected FileSystem fs;
+
+ protected FileSystem mockFileSystem = mock(FileSystem.class);
+ protected MasterServices mockMaster = mock(MasterServices.class);
+ protected SystemKeyManager systemKeyManager;
+
+ @Before
+ public void setUp() throws Exception {
+ conf = TEST_UTIL.getConfiguration();
+ testRootDir = TEST_UTIL.getDataTestDir(name.getMethodName());
+ fs = testRootDir.getFileSystem(conf);
+
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
+
+ when(mockMaster.getFileSystem()).thenReturn(mockFileSystem);
+ when(mockMaster.getConfiguration()).thenReturn(conf);
+ systemKeyManager = new SystemKeyManager(mockMaster);
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestAccessorWhenDisabled extends TestSystemKeyAccessorAndManager {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestAccessorWhenDisabled.class);
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "false");
+ }
+
+ @Test
+ public void test() throws Exception {
+ assertThrows(IOException.class, () -> systemKeyManager.getAllSystemKeyFiles());
+ assertThrows(IOException.class, () -> systemKeyManager.getLatestSystemKeyFile().getFirst());
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestManagerWhenDisabled extends TestSystemKeyAccessorAndManager {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestManagerWhenDisabled.class);
+
+ @Override
+ public void setUp() throws Exception {
+ super.setUp();
+ conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "false");
+ }
+
+ @Test
+ public void test() throws Exception {
+ systemKeyManager.ensureSystemKeyInitialized();
+ assertNull(systemKeyManager.rotateSystemKeyIfChanged());
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestAccessor extends TestSystemKeyAccessorAndManager {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestAccessor.class);
+
+ @Test
+ public void testGetLatestWithNone() throws Exception {
+ when(mockFileSystem.globStatus(any())).thenReturn(new FileStatus[0]);
+
+ RuntimeException ex =
+ assertThrows(RuntimeException.class, () -> systemKeyManager.getLatestSystemKeyFile());
+ assertEquals("No cluster key initialized yet", ex.getMessage());
+ }
+
+ @Test
+ public void testGetWithSingle() throws Exception {
+ String fileName = SYSTEM_KEY_FILE_PREFIX + "1";
+ FileStatus mockFileStatus = KeymetaTestUtils.createMockFile(fileName);
+
+ Path systemKeyDir = CommonFSUtils.getSystemKeyDir(conf);
+ when(mockFileSystem.globStatus(eq(new Path(systemKeyDir, SYSTEM_KEY_FILE_PREFIX + "*"))))
+ .thenReturn(new FileStatus[] { mockFileStatus });
+
+ List<Path> files = systemKeyManager.getAllSystemKeyFiles();
+ assertEquals(1, files.size());
+ assertEquals(fileName, files.get(0).getName());
+
+ Pair<Path, List<Path>> latestSystemKeyFileResult = systemKeyManager.getLatestSystemKeyFile();
+ assertEquals(fileName, latestSystemKeyFileResult.getFirst().getName());
+
+ assertEquals(1,
+ SystemKeyAccessor.extractSystemKeySeqNum(latestSystemKeyFileResult.getFirst()));
+ }
+
+ @Test
+ public void testGetWithMultiple() throws Exception {
+ FileStatus[] mockFileStatuses = IntStream.rangeClosed(1, 3)
+ .mapToObj(i -> KeymetaTestUtils.createMockFile(SYSTEM_KEY_FILE_PREFIX + i))
+ .toArray(FileStatus[]::new);
+
+ Path systemKeyDir = CommonFSUtils.getSystemKeyDir(conf);
+ when(mockFileSystem.globStatus(eq(new Path(systemKeyDir, SYSTEM_KEY_FILE_PREFIX + "*"))))
+ .thenReturn(mockFileStatuses);
+
+ List<Path> files = systemKeyManager.getAllSystemKeyFiles();
+ assertEquals(3, files.size());
+
+ Pair<Path, List<Path>> latestSystemKeyFileResult = systemKeyManager.getLatestSystemKeyFile();
+ assertEquals(3,
+ SystemKeyAccessor.extractSystemKeySeqNum(latestSystemKeyFileResult.getFirst()));
+ }
+
+ @Test
+ public void testExtractKeySequenceForInvalidFilename() throws Exception {
+ assertEquals(-1,
+ SystemKeyAccessor.extractKeySequence(KeymetaTestUtils.createMockFile("abcd").getPath()));
+ }
+ }
+
+ @RunWith(Parameterized.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestForInvalidFilenames extends TestSystemKeyAccessorAndManager {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestForInvalidFilenames.class);
+
+ @Parameter(0)
+ public String fileName;
+ @Parameter(1)
+ public String expectedErrorMessage;
+
+ @Parameters(name = "{index},fileName={0}")
+ public static Collection<Object[]> data() {
+ return Arrays.asList(new Object[][] { { "abcd", "Couldn't parse key file name: abcd" },
+ { SYSTEM_KEY_FILE_PREFIX + "abcd",
+ "Couldn't parse key file name: " + SYSTEM_KEY_FILE_PREFIX + "abcd" },
+ // Add more test cases here
+ });
+ }
+
+ @Test
+ public void test() throws Exception {
+ FileStatus mockFileStatus = KeymetaTestUtils.createMockFile(fileName);
+
+ IOException ex = assertThrows(IOException.class,
+ () -> SystemKeyAccessor.extractSystemKeySeqNum(mockFileStatus.getPath()));
+ assertEquals(expectedErrorMessage, ex.getMessage());
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestManagerForErrors extends TestSystemKeyAccessorAndManager {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestManagerForErrors.class);
+
+ private static final String CLUSTER_ID = "clusterId";
+
+ @Mock
+ ManagedKeyProvider mockKeyProvide; // note: retains original (misspelled) field name used throughout this class
+ @Mock
+ MasterFileSystem masterFS;
+
+ private MockSystemKeyManager manager;
+ private AutoCloseable closeableMocks;
+
+ @Before
+ public void setUp() throws Exception {
+ super.setUp();
+ closeableMocks = MockitoAnnotations.openMocks(this);
+
+ when(mockFileSystem.globStatus(any())).thenReturn(new FileStatus[0]);
+ ClusterId clusterId = mock(ClusterId.class);
+ when(mockMaster.getMasterFileSystem()).thenReturn(masterFS);
+ when(masterFS.getClusterId()).thenReturn(clusterId);
+ when(clusterId.toString()).thenReturn(CLUSTER_ID);
+ when(masterFS.getFileSystem()).thenReturn(mockFileSystem);
+
+ manager = new MockSystemKeyManager(mockMaster, mockKeyProvide);
+ }
+
+ @After
+ public void tearDown() throws Exception {
+ closeableMocks.close();
+ }
+
+ @Test
+ public void testEnsureSystemKeyInitialized_WithNoSystemKeys() throws Exception {
+ when(mockKeyProvide.getSystemKey(any())).thenReturn(null);
+
+ IOException ex = assertThrows(IOException.class, manager::ensureSystemKeyInitialized);
+ assertEquals("Failed to get system key for cluster id: " + CLUSTER_ID, ex.getMessage());
+ }
+
+ @Test
+ public void testEnsureSystemKeyInitialized_WithNoNonActiveKey() throws Exception {
+ String metadata = "key-metadata";
+ ManagedKeyData keyData = mock(ManagedKeyData.class);
+ when(keyData.getKeyState()).thenReturn(INACTIVE);
+ when(keyData.getKeyMetadata()).thenReturn(metadata);
+ when(mockKeyProvide.getSystemKey(any())).thenReturn(keyData);
+
+ IOException ex = assertThrows(IOException.class, manager::ensureSystemKeyInitialized);
+ assertEquals(
+ "System key is expected to be ACTIVE but it is: INACTIVE for metadata: " + metadata,
+ ex.getMessage());
+ }
+
+ @Test
+ public void testEnsureSystemKeyInitialized_WithInvalidMetadata() throws Exception {
+ ManagedKeyData keyData = mock(ManagedKeyData.class);
+ when(keyData.getKeyState()).thenReturn(ACTIVE);
+ when(mockKeyProvide.getSystemKey(any())).thenReturn(keyData);
+
+ IOException ex = assertThrows(IOException.class, manager::ensureSystemKeyInitialized);
+ assertEquals("System key is expected to have metadata but it is null", ex.getMessage());
+ }
+
+ @Test
+ public void testEnsureSystemKeyInitialized_WithSaveFailure() throws Exception {
+ String metadata = "key-metadata";
+ ManagedKeyData keyData = mock(ManagedKeyData.class);
+ when(keyData.getKeyState()).thenReturn(ACTIVE);
+ when(mockKeyProvide.getSystemKey(any())).thenReturn(keyData);
+ when(keyData.getKeyMetadata()).thenReturn(metadata);
+ when(mockFileSystem.globStatus(any())).thenReturn(new FileStatus[0]);
+ Path rootDir = CommonFSUtils.getRootDir(conf);
+ when(masterFS.getTempDir()).thenReturn(rootDir);
+ FSDataOutputStream mockStream = mock(FSDataOutputStream.class);
+ when(mockFileSystem.create(any())).thenReturn(mockStream);
+ when(mockFileSystem.rename(any(), any())).thenReturn(false);
+
+ RuntimeException ex =
+ assertThrows(RuntimeException.class, manager::ensureSystemKeyInitialized);
+ assertEquals("Failed to generate or save System Key", ex.getMessage());
+ }
+
+ @Test
+ public void testEnsureSystemKeyInitialized_RaceCondition() throws Exception {
+ String metadata = "key-metadata";
+ ManagedKeyData keyData = mock(ManagedKeyData.class);
+ when(keyData.getKeyState()).thenReturn(ACTIVE);
+ when(mockKeyProvide.getSystemKey(any())).thenReturn(keyData);
+ when(keyData.getKeyMetadata()).thenReturn(metadata);
+ when(mockFileSystem.globStatus(any())).thenReturn(new FileStatus[0]);
+ Path rootDir = CommonFSUtils.getRootDir(conf);
+ when(masterFS.getTempDir()).thenReturn(rootDir);
+ FSDataOutputStream mockStream = mock(FSDataOutputStream.class);
+ when(mockFileSystem.create(any())).thenReturn(mockStream);
+ when(mockFileSystem.rename(any(), any())).thenReturn(false);
+ String fileName = SYSTEM_KEY_FILE_PREFIX + "1";
+ FileStatus mockFileStatus = KeymetaTestUtils.createMockFile(fileName);
+ when(mockFileSystem.globStatus(any())).thenReturn(new FileStatus[0],
+ new FileStatus[] { mockFileStatus });
+
+ manager.ensureSystemKeyInitialized();
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestAccessorMisc extends TestSystemKeyAccessorAndManager {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestAccessorMisc.class);
+
+ @Test
+ public void testLoadSystemKeySuccess() throws Exception {
+ Path testPath = new Path("/test/key/path");
+ String testMetadata = "test-metadata";
+
+ // Create test key data
+ Key testKey = new SecretKeySpec("test-key-bytes".getBytes(), "AES");
+ ManagedKeyData testKeyData = new ManagedKeyData("custodian".getBytes(), "namespace", testKey,
+ ManagedKeyState.ACTIVE, testMetadata, 1000L);
+
+ // Mock key provider
+ ManagedKeyProvider realProvider = mock(ManagedKeyProvider.class);
+ when(realProvider.unwrapKey(testMetadata, null)).thenReturn(testKeyData);
+
+ // Create testable SystemKeyAccessor that overrides both loadKeyMetadata and getKeyProvider
+ SystemKeyAccessor testAccessor = new SystemKeyAccessor(mockMaster) {
+ @Override
+ protected String loadKeyMetadata(Path keyPath) throws IOException {
+ assertEquals(testPath, keyPath);
+ return testMetadata;
+ }
+
+ @Override
+ public ManagedKeyProvider getKeyProvider() {
+ return realProvider;
+ }
+ };
+
+ ManagedKeyData result = testAccessor.loadSystemKey(testPath);
+ assertEquals(testKeyData, result);
+
+ // Verify the key provider was called correctly
+ verify(realProvider).unwrapKey(testMetadata, null);
+ }
+
+ @Test(expected = RuntimeException.class)
+ public void testLoadSystemKeyNullResult() throws Exception {
+ Path testPath = new Path("/test/key/path");
+ String testMetadata = "test-metadata";
+
+ // Mock key provider to return null
+ ManagedKeyProvider realProvider = mock(ManagedKeyProvider.class);
+ when(realProvider.unwrapKey(testMetadata, null)).thenReturn(null);
+
+ SystemKeyAccessor testAccessor = new SystemKeyAccessor(mockMaster) {
+ @Override
+ protected String loadKeyMetadata(Path keyPath) throws IOException {
+ assertEquals(testPath, keyPath);
+ return testMetadata;
+ }
+
+ @Override
+ public ManagedKeyProvider getKeyProvider() {
+ return realProvider;
+ }
+ };
+
+ testAccessor.loadSystemKey(testPath);
+ }
+
+ @Test
+ public void testExtractSystemKeySeqNumValid() throws Exception {
+ Path testPath1 = new Path(SYSTEM_KEY_FILE_PREFIX + "1");
+ Path testPath123 = new Path(SYSTEM_KEY_FILE_PREFIX + "123");
+ Path testPathMax = new Path(SYSTEM_KEY_FILE_PREFIX + Integer.MAX_VALUE);
+
+ assertEquals(1, SystemKeyAccessor.extractSystemKeySeqNum(testPath1));
+ assertEquals(123, SystemKeyAccessor.extractSystemKeySeqNum(testPath123));
+ assertEquals(Integer.MAX_VALUE, SystemKeyAccessor.extractSystemKeySeqNum(testPathMax));
+ }
+
+ @Test(expected = IOException.class)
+ public void testGetAllSystemKeyFilesIOException() throws Exception {
+ when(mockFileSystem.globStatus(any())).thenThrow(new IOException("Filesystem error"));
+ systemKeyManager.getAllSystemKeyFiles();
+ }
+
+ @Test(expected = IOException.class)
+ public void testLoadSystemKeyIOExceptionFromMetadata() throws Exception {
+ Path testPath = new Path("/test/key/path");
+
+ SystemKeyAccessor testAccessor = new SystemKeyAccessor(mockMaster) {
+ @Override
+ protected String loadKeyMetadata(Path keyPath) throws IOException {
+ assertEquals(testPath, keyPath);
+ throw new IOException("Metadata read failed");
+ }
+
+ @Override
+ public ManagedKeyProvider getKeyProvider() {
+ return mock(ManagedKeyProvider.class);
+ }
+ };
+
+ testAccessor.loadSystemKey(testPath);
+ }
+
+ @Test(expected = RuntimeException.class)
+ public void testLoadSystemKeyProviderException() throws Exception {
+ Path testPath = new Path("/test/key/path");
+ String testMetadata = "test-metadata";
+
+ SystemKeyAccessor testAccessor = new SystemKeyAccessor(mockMaster) {
+ @Override
+ protected String loadKeyMetadata(Path keyPath) throws IOException {
+ assertEquals(testPath, keyPath);
+ return testMetadata;
+ }
+
+ @Override
+ public ManagedKeyProvider getKeyProvider() {
+ throw new RuntimeException("Key provider not available");
+ }
+ };
+
+ testAccessor.loadSystemKey(testPath);
+ }
+
+ @Test
+ public void testExtractSystemKeySeqNumBoundaryValues() throws Exception {
+ // Test boundary values
+ Path testPath0 = new Path(SYSTEM_KEY_FILE_PREFIX + "0");
+ Path testPathMin = new Path(SYSTEM_KEY_FILE_PREFIX + Integer.MIN_VALUE);
+
+ assertEquals(0, SystemKeyAccessor.extractSystemKeySeqNum(testPath0));
+ assertEquals(Integer.MIN_VALUE, SystemKeyAccessor.extractSystemKeySeqNum(testPathMin));
+ }
+
+ @Test
+ public void testExtractKeySequenceEdgeCases() throws Exception {
+ // Test various edge cases for extractKeySequence
+ Path validZero = new Path(SYSTEM_KEY_FILE_PREFIX + "0");
+ Path validNegative = new Path(SYSTEM_KEY_FILE_PREFIX + "-1");
+
+ // Valid cases should still work
+ assertEquals(0, SystemKeyAccessor.extractKeySequence(validZero));
+ assertEquals(-1, SystemKeyAccessor.extractKeySequence(validNegative));
+ }
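The sequence-number parsing these tests exercise can be sketched as follows. This is an illustrative sketch, not the actual `SystemKeyAccessor` code: the prefix value and method body are assumptions, chosen only to match the behavior the assertions above imply (prefix plus a decimal sequence number, which may be negative).

```java
// Illustrative sketch of extractSystemKeySeqNum-style parsing (assumed, not
// the HBase implementation): file name = fixed prefix + decimal sequence.
public class SeqNumSketch {
  static final String SYSTEM_KEY_FILE_PREFIX = "system_key_"; // assumed prefix value

  static int extractSeqNum(String fileName) {
    if (!fileName.startsWith(SYSTEM_KEY_FILE_PREFIX)) {
      throw new IllegalArgumentException("Not a system key file: " + fileName);
    }
    // parseInt accepts a leading '-', so negative sequence numbers round-trip too
    return Integer.parseInt(fileName.substring(SYSTEM_KEY_FILE_PREFIX.length()));
  }

  public static void main(String[] args) {
    System.out.println(extractSeqNum(SYSTEM_KEY_FILE_PREFIX + "123"));
    System.out.println(extractSeqNum(SYSTEM_KEY_FILE_PREFIX + "-1"));
    System.out.println(extractSeqNum(SYSTEM_KEY_FILE_PREFIX + Integer.MAX_VALUE));
  }
}
```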
+
+ @Test
+ public void testCreateCacheFactoryMethod() {
+ // Test static factory method
+ }
+
+ @Test
+ public void testCreateCacheWithNoKeys() {
+ // Test behavior when no system keys are available
+ }
+ }
+
+ private static class MockSystemKeyManager extends SystemKeyManager {
+ private final ManagedKeyProvider keyProvider;
+
+ public MockSystemKeyManager(MasterServices master, ManagedKeyProvider keyProvider)
+ throws IOException {
+ super(master);
+ this.keyProvider = keyProvider;

+ }
+
+ @Override
+ public ManagedKeyProvider getKeyProvider() {
+ return keyProvider;
+ }
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java
new file mode 100644
index 000000000000..54bfb5e0a120
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java
@@ -0,0 +1,115 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+
+import java.io.IOException;
+import java.security.Key;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyTestBase;
+import org.apache.hadoop.hbase.keymeta.SystemKeyAccessor;
+import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.MediumTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({ MasterTests.class, MediumTests.class })
+public class TestSystemKeyManager extends ManagedKeyTestBase {
+
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestSystemKeyManager.class);
+
+ @Test
+ public void testSystemKeyInitializationAndRotation() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ ManagedKeyProvider keyProvider = Encryption.getManagedKeyProvider(master.getConfiguration());
+ assertNotNull(keyProvider);
+    assertTrue(keyProvider instanceof MockManagedKeyProvider);
+ MockManagedKeyProvider pbeKeyProvider = (MockManagedKeyProvider) keyProvider;
+ ManagedKeyData initialSystemKey = validateInitialState(master, pbeKeyProvider);
+
+ restartSystem();
+ master = TEST_UTIL.getHBaseCluster().getMaster();
+ validateInitialState(master, pbeKeyProvider);
+
+    // Test rotation of the cluster key by changing the key that the key provider provides and
+    // restarting the master.
+ String newAlias = "new_cluster_key";
+ pbeKeyProvider.setClusterKeyAlias(newAlias);
+    Key newClusterKey = MockManagedKeyProvider.generateSecretKey();
+    pbeKeyProvider.setMockedKey(newAlias, newClusterKey, ManagedKeyData.KEY_SPACE_GLOBAL);
+
+    restartSystem();
+    master = TEST_UTIL.getHBaseCluster().getMaster();
+    SystemKeyAccessor systemKeyAccessor = new SystemKeyAccessor(master);
+    assertEquals(2, systemKeyAccessor.getAllSystemKeyFiles().size());
+    SystemKeyCache systemKeyCache = master.getSystemKeyCache();
+    assertEquals(0, Bytes.compareTo(newClusterKey.getEncoded(),
+      systemKeyCache.getLatestSystemKey().getTheKey().getEncoded()));
+ assertEquals(initialSystemKey,
+ systemKeyAccessor.loadSystemKey(systemKeyAccessor.getAllSystemKeyFiles().get(1)));
+ assertEquals(initialSystemKey,
+ systemKeyCache.getSystemKeyByChecksum(initialSystemKey.getKeyChecksum()));
+ }
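The cache behavior this rotation test asserts (the latest key changes after rotation, while the previous key stays reachable by its checksum) can be pictured with a minimal sketch. The class below and its CRC32 checksum are illustrative stand-ins, not the real `SystemKeyCache` or its checksum scheme:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.zip.CRC32;

// Minimal sketch (names and checksum are assumptions, not the HBase API):
// track the latest system key and index every key ever added by checksum.
public class SystemKeyCacheSketch {
  private final Map<Long, byte[]> byChecksum = new HashMap<>();
  private byte[] latest;

  static long checksum(byte[] keyBytes) {
    CRC32 crc = new CRC32(); // stand-in for the real key checksum scheme
    crc.update(keyBytes);
    return crc.getValue();
  }

  void addKey(byte[] keyBytes) {
    byChecksum.put(checksum(keyBytes), keyBytes);
    latest = keyBytes; // most recently added key becomes the latest
  }

  byte[] getLatest() { return latest; }
  byte[] getByChecksum(long c) { return byChecksum.get(c); }

  public static void main(String[] args) {
    SystemKeyCacheSketch cache = new SystemKeyCacheSketch();
    byte[] k1 = { 1, 2, 3 };
    byte[] k2 = { 4, 5, 6 };
    cache.addKey(k1);
    long c1 = checksum(k1);
    cache.addKey(k2); // simulates a rotation
    System.out.println(cache.getLatest() == k2);       // new key is latest
    System.out.println(cache.getByChecksum(c1) == k1); // old key still reachable
  }
}
```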
+
+ @Test
+ public void testWithInvalidSystemKey() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ ManagedKeyProvider keyProvider = Encryption.getManagedKeyProvider(master.getConfiguration());
+ MockManagedKeyProvider pbeKeyProvider = (MockManagedKeyProvider) keyProvider;
+
+ // Test startup failure when the cluster key is INACTIVE
+ SystemKeyManager tmpCKM = new SystemKeyManager(master);
+ tmpCKM.ensureSystemKeyInitialized();
+ pbeKeyProvider.setMockedKeyState(pbeKeyProvider.getSystemKeyAlias(), ManagedKeyState.INACTIVE);
+ assertThrows(IOException.class, tmpCKM::ensureSystemKeyInitialized);
+ }
+
+ private ManagedKeyData validateInitialState(HMaster master, MockManagedKeyProvider pbeKeyProvider)
+ throws IOException {
+ SystemKeyAccessor systemKeyAccessor = new SystemKeyAccessor(master);
+ assertEquals(1, systemKeyAccessor.getAllSystemKeyFiles().size());
+ SystemKeyCache systemKeyCache = master.getSystemKeyCache();
+ assertNotNull(systemKeyCache);
+ ManagedKeyData clusterKey = systemKeyCache.getLatestSystemKey();
+ assertEquals(pbeKeyProvider.getSystemKey(master.getClusterId().getBytes()), clusterKey);
+ assertEquals(clusterKey, systemKeyCache.getSystemKeyByChecksum(clusterKey.getKeyChecksum()));
+ return clusterKey;
+ }
+
+ private void restartSystem() throws Exception {
+ TEST_UTIL.shutdownMiniHBaseCluster();
+ Thread.sleep(2000);
+ TEST_UTIL.restartHBaseCluster(1);
+ TEST_UTIL.waitFor(60000, () -> TEST_UTIL.getMiniHBaseCluster().getMaster().isInitialized());
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java
index ca7e20f5869d..9efce81d9573 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java
@@ -18,16 +18,32 @@
package org.apache.hadoop.hbase.regionserver;
import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Optional;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.ipc.RpcCall;
import org.apache.hadoop.hbase.ipc.RpcServer;
+import org.apache.hadoop.hbase.keymeta.KeyManagementService;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.testclassification.MediumTests;
import org.apache.hadoop.hbase.testclassification.RegionServerTests;
+import org.apache.hadoop.hbase.util.Bytes;
import org.junit.ClassRule;
import org.junit.Test;
import org.junit.experimental.categories.Category;
@@ -35,6 +51,15 @@
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.com.google.protobuf.ByteString;
+import org.apache.hbase.thirdparty.com.google.protobuf.RpcController;
+import org.apache.hbase.thirdparty.com.google.protobuf.ServiceException;
+
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.BooleanMsg;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.EmptyMsg;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyEntryRequest;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyRequest;
+
/**
* Test parts of {@link RSRpcServices}
*/
@@ -69,4 +94,294 @@ public void testRegionScannerHolderToString() throws UnknownHostException {
null, null, false, false, clientIpAndPort, userNameTest);
LOG.info("rsh: {}", rsh);
}
+
+ /**
+ * Test the refreshSystemKeyCache RPC method that is used to rebuild the system key cache on
+ * region servers when a system key rotation has occurred.
+ */
+ @Test
+ public void testRefreshSystemKeyCache() throws Exception {
+ // Create mocks
+ HRegionServer mockServer = mock(HRegionServer.class);
+ Configuration conf = HBaseConfiguration.create();
+ FileSystem mockFs = mock(FileSystem.class);
+
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ when(mockServer.isOnline()).thenReturn(true);
+ when(mockServer.isAborted()).thenReturn(false);
+ when(mockServer.isStopped()).thenReturn(false);
+ when(mockServer.isDataFileSystemOk()).thenReturn(true);
+ when(mockServer.getFileSystem()).thenReturn(mockFs);
+
+ // Create RSRpcServices
+ RSRpcServices rpcServices = new RSRpcServices(mockServer);
+
+ // Create request
+ EmptyMsg request = EmptyMsg.getDefaultInstance();
+ RpcController controller = mock(RpcController.class);
+
+ // Call the RPC method
+ EmptyMsg response = rpcServices.refreshSystemKeyCache(controller, request);
+
+ // Verify the response is not null
+ assertNotNull("Response should not be null", response);
+
+ // Verify that rebuildSystemKeyCache was called on the server
+ verify(mockServer).rebuildSystemKeyCache();
+
+ LOG.info("refreshSystemKeyCache test completed successfully");
+ }
+
+ /**
+   * Test that refreshSystemKeyCache throws ServiceException when the server is stopped
+ */
+ @Test
+ public void testRefreshSystemKeyCacheWhenServerStopped() throws Exception {
+ // Create mocks
+ HRegionServer mockServer = mock(HRegionServer.class);
+ Configuration conf = HBaseConfiguration.create();
+ FileSystem mockFs = mock(FileSystem.class);
+
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ when(mockServer.isOnline()).thenReturn(true);
+ when(mockServer.isAborted()).thenReturn(false);
+ when(mockServer.isStopped()).thenReturn(true); // Server is stopped
+ when(mockServer.isDataFileSystemOk()).thenReturn(true);
+ when(mockServer.getFileSystem()).thenReturn(mockFs);
+
+ // Create RSRpcServices
+ RSRpcServices rpcServices = new RSRpcServices(mockServer);
+
+ // Create request
+ EmptyMsg request = EmptyMsg.getDefaultInstance();
+ RpcController controller = mock(RpcController.class);
+
+ // Call the RPC method and expect ServiceException
+ try {
+ rpcServices.refreshSystemKeyCache(controller, request);
+ fail("Expected ServiceException when server is stopped");
+ } catch (ServiceException e) {
+ // Expected
+ assertTrue("Exception should mention server stopping",
+ e.getCause().getMessage().contains("stopping"));
+ LOG.info("Correctly threw ServiceException when server is stopped");
+ }
+ }
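The guard-then-wrap pattern these stopped-server tests exercise can be sketched independently of HBase. `ServiceException` and `Server` below are local stand-ins for the shaded protobuf `ServiceException` and `HRegionServer`; the method body is an assumption illustrating the pattern, not the actual `RSRpcServices` code:

```java
public class RpcGuardSketch {
  // Local stand-in for org.apache.hbase.thirdparty...ServiceException.
  static class ServiceException extends Exception {
    ServiceException(Throwable cause) { super(cause); }
  }

  interface Server { boolean isStopped(); }

  // Each key-cache RPC first verifies the server is open; any IOException
  // from the check or the actual work is wrapped in ServiceException.
  static String refresh(Server server) throws ServiceException {
    try {
      if (server.isStopped()) {
        throw new java.io.IOException("Server is stopping");
      }
      return "ok";
    } catch (java.io.IOException e) {
      throw new ServiceException(e);
    }
  }

  public static void main(String[] args) {
    try {
      refresh(() -> true); // a stopped server
      System.out.println("no exception");
    } catch (ServiceException e) {
      // The cause's message is what the tests above inspect.
      System.out.println(e.getCause().getMessage().contains("stopping"));
    }
  }
}
```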
+
+ /**
+ * Test that refreshSystemKeyCache throws ServiceException when rebuildSystemKeyCache fails
+ */
+ @Test
+ public void testRefreshSystemKeyCacheWhenRebuildFails() throws Exception {
+ // Create mocks
+ HRegionServer mockServer = mock(HRegionServer.class);
+ Configuration conf = HBaseConfiguration.create();
+ FileSystem mockFs = mock(FileSystem.class);
+
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ when(mockServer.isOnline()).thenReturn(true);
+ when(mockServer.isAborted()).thenReturn(false);
+ when(mockServer.isStopped()).thenReturn(false);
+ when(mockServer.isDataFileSystemOk()).thenReturn(true);
+ when(mockServer.getFileSystem()).thenReturn(mockFs);
+
+ // Make rebuildSystemKeyCache throw IOException
+ IOException testException = new IOException("Test failure rebuilding cache");
+ doThrow(testException).when(mockServer).rebuildSystemKeyCache();
+
+ // Create RSRpcServices
+ RSRpcServices rpcServices = new RSRpcServices(mockServer);
+
+ // Create request
+ EmptyMsg request = EmptyMsg.getDefaultInstance();
+ RpcController controller = mock(RpcController.class);
+
+ // Call the RPC method and expect ServiceException
+ try {
+ rpcServices.refreshSystemKeyCache(controller, request);
+ fail("Expected ServiceException when rebuildSystemKeyCache fails");
+ } catch (ServiceException e) {
+ // Expected
+ assertEquals("Test failure rebuilding cache", e.getCause().getMessage());
+ LOG.info("Correctly threw ServiceException when rebuildSystemKeyCache fails");
+ }
+
+ // Verify that rebuildSystemKeyCache was called
+ verify(mockServer).rebuildSystemKeyCache();
+ }
+
+ /**
+ * Test the ejectManagedKeyDataCacheEntry RPC method that is used to eject a specific managed key
+ * entry from the cache on region servers.
+ */
+ @Test
+ public void testEjectManagedKeyDataCacheEntry() throws Exception {
+ // Create mocks
+ HRegionServer mockServer = mock(HRegionServer.class);
+ Configuration conf = HBaseConfiguration.create();
+ FileSystem mockFs = mock(FileSystem.class);
+ KeyManagementService mockKeyService = mock(KeyManagementService.class);
+ ManagedKeyDataCache mockCache = mock(ManagedKeyDataCache.class);
+
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ when(mockServer.isOnline()).thenReturn(true);
+ when(mockServer.isAborted()).thenReturn(false);
+ when(mockServer.isStopped()).thenReturn(false);
+ when(mockServer.isDataFileSystemOk()).thenReturn(true);
+ when(mockServer.getFileSystem()).thenReturn(mockFs);
+ when(mockServer.getKeyManagementService()).thenReturn(mockKeyService);
+ when(mockKeyService.getManagedKeyDataCache()).thenReturn(mockCache);
+ // Mock the ejectKey to return true
+ when(mockCache.ejectKey(any(), any(), any())).thenReturn(true);
+
+ // Create RSRpcServices
+ RSRpcServices rpcServices = new RSRpcServices(mockServer);
+
+ // Create request
+ byte[] keyCustodian = Bytes.toBytes("testCustodian");
+ String keyNamespace = "testNamespace";
+ String keyMetadata = "testMetadata";
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata);
+
+ ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(ManagedKeyRequest.newBuilder().setKeyCust(ByteString.copyFrom(keyCustodian))
+ .setKeyNamespace(keyNamespace).build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyMetadataHash)).build();
+
+ RpcController controller = mock(RpcController.class);
+
+ // Call the RPC method
+ BooleanMsg response = rpcServices.ejectManagedKeyDataCacheEntry(controller, request);
+
+ // Verify the response is not null and contains the expected boolean value
+ assertNotNull("Response should not be null", response);
+ assertTrue("Response should indicate key was ejected", response.getBoolMsg());
+
+ // Verify that ejectKey was called on the cache
+ verify(mockCache).ejectKey(keyCustodian, keyNamespace, keyMetadataHash);
+
+ LOG.info("ejectManagedKeyDataCacheEntry test completed successfully");
+ }
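`ManagedKeyData.constructMetadataHash` is not shown in this diff; as an illustration of the kind of stable digest the eject request carries, here is a sketch using SHA-256. The algorithm choice is an assumption, made only to show that the hash is deterministic and fixed-length:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MetadataHashSketch {
  // Illustrative only: the real constructMetadataHash may use a different
  // algorithm; SHA-256 here is an assumption.
  static byte[] metadataHash(String metadata) throws Exception {
    return MessageDigest.getInstance("SHA-256")
        .digest(metadata.getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) throws Exception {
    byte[] h1 = metadataHash("testMetadata");
    byte[] h2 = metadataHash("testMetadata");
    System.out.println(h1.length);                     // SHA-256 yields 32 bytes
    System.out.println(MessageDigest.isEqual(h1, h2)); // deterministic
  }
}
```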
+
+ /**
+ * Test that ejectManagedKeyDataCacheEntry throws ServiceException when server is stopped
+ */
+ @Test
+ public void testEjectManagedKeyDataCacheEntryWhenServerStopped() throws Exception {
+ // Create mocks
+ HRegionServer mockServer = mock(HRegionServer.class);
+ Configuration conf = HBaseConfiguration.create();
+ FileSystem mockFs = mock(FileSystem.class);
+
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ when(mockServer.isOnline()).thenReturn(true);
+ when(mockServer.isAborted()).thenReturn(false);
+ when(mockServer.isStopped()).thenReturn(true); // Server is stopped
+ when(mockServer.isDataFileSystemOk()).thenReturn(true);
+ when(mockServer.getFileSystem()).thenReturn(mockFs);
+
+ // Create RSRpcServices
+ RSRpcServices rpcServices = new RSRpcServices(mockServer);
+
+ // Create request
+ byte[] keyCustodian = Bytes.toBytes("testCustodian");
+ String keyNamespace = "testNamespace";
+ String keyMetadata = "testMetadata";
+ byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata);
+
+ ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
+ .setKeyCustNs(ManagedKeyRequest.newBuilder().setKeyCust(ByteString.copyFrom(keyCustodian))
+ .setKeyNamespace(keyNamespace).build())
+ .setKeyMetadataHash(ByteString.copyFrom(keyMetadataHash)).build();
+
+ RpcController controller = mock(RpcController.class);
+
+ // Call the RPC method and expect ServiceException
+ try {
+ rpcServices.ejectManagedKeyDataCacheEntry(controller, request);
+ fail("Expected ServiceException when server is stopped");
+ } catch (ServiceException e) {
+ // Expected
+ assertTrue("Exception should mention server stopping",
+ e.getCause().getMessage().contains("stopping"));
+ LOG.info("Correctly threw ServiceException when server is stopped");
+ }
+ }
+
+ /**
+ * Test the clearManagedKeyDataCache RPC method that is used to clear all cached entries in the
+ * ManagedKeyDataCache.
+ */
+ @Test
+ public void testClearManagedKeyDataCache() throws Exception {
+ // Create mocks
+ HRegionServer mockServer = mock(HRegionServer.class);
+ Configuration conf = HBaseConfiguration.create();
+ FileSystem mockFs = mock(FileSystem.class);
+ KeyManagementService mockKeyService = mock(KeyManagementService.class);
+ ManagedKeyDataCache mockCache = mock(ManagedKeyDataCache.class);
+
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ when(mockServer.isOnline()).thenReturn(true);
+ when(mockServer.isAborted()).thenReturn(false);
+ when(mockServer.isStopped()).thenReturn(false);
+ when(mockServer.isDataFileSystemOk()).thenReturn(true);
+ when(mockServer.getFileSystem()).thenReturn(mockFs);
+ when(mockServer.getKeyManagementService()).thenReturn(mockKeyService);
+ when(mockKeyService.getManagedKeyDataCache()).thenReturn(mockCache);
+
+ // Create RSRpcServices
+ RSRpcServices rpcServices = new RSRpcServices(mockServer);
+
+ // Create request
+ EmptyMsg request = EmptyMsg.getDefaultInstance();
+ RpcController controller = mock(RpcController.class);
+
+ // Call the RPC method
+ EmptyMsg response = rpcServices.clearManagedKeyDataCache(controller, request);
+
+ // Verify the response is not null
+ assertNotNull("Response should not be null", response);
+
+ // Verify that clearCache was called on the cache
+ verify(mockCache).clearCache();
+
+ LOG.info("clearManagedKeyDataCache test completed successfully");
+ }
+
+ /**
+ * Test that clearManagedKeyDataCache throws ServiceException when server is stopped
+ */
+ @Test
+ public void testClearManagedKeyDataCacheWhenServerStopped() throws Exception {
+ // Create mocks
+ HRegionServer mockServer = mock(HRegionServer.class);
+ Configuration conf = HBaseConfiguration.create();
+ FileSystem mockFs = mock(FileSystem.class);
+
+ when(mockServer.getConfiguration()).thenReturn(conf);
+ when(mockServer.isOnline()).thenReturn(true);
+ when(mockServer.isAborted()).thenReturn(false);
+ when(mockServer.isStopped()).thenReturn(true); // Server is stopped
+ when(mockServer.isDataFileSystemOk()).thenReturn(true);
+ when(mockServer.getFileSystem()).thenReturn(mockFs);
+
+ // Create RSRpcServices
+ RSRpcServices rpcServices = new RSRpcServices(mockServer);
+
+ // Create request
+ EmptyMsg request = EmptyMsg.getDefaultInstance();
+ RpcController controller = mock(RpcController.class);
+
+ // Call the RPC method and expect ServiceException
+ try {
+ rpcServices.clearManagedKeyDataCache(controller, request);
+ fail("Expected ServiceException when server is stopped");
+ } catch (ServiceException e) {
+ // Expected
+ assertTrue("Exception should mention server stopping",
+ e.getCause().getMessage().contains("stopping"));
+ LOG.info("Correctly threw ServiceException when server is stopped");
+ }
+ }
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/rsgroup/VerifyingRSGroupAdmin.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/rsgroup/VerifyingRSGroupAdmin.java
index b0e3eb051fc6..862da727d60a 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/rsgroup/VerifyingRSGroupAdmin.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/rsgroup/VerifyingRSGroupAdmin.java
@@ -1011,4 +1011,20 @@ public boolean isReplicationPeerModificationEnabled() throws IOException {
return admin.isReplicationPeerModificationEnabled();
}
+ @Override
+  public void refreshSystemKeyCacheOnServers(List<ServerName> regionServers) throws IOException {
+ admin.refreshSystemKeyCacheOnServers(regionServers);
+ }
+
+ @Override
+  public void ejectManagedKeyDataCacheEntryOnServers(List<ServerName> regionServers,
+ byte[] keyCustodian, String keyNamespace, String keyMetadata) throws IOException {
+ admin.ejectManagedKeyDataCacheEntryOnServers(regionServers, keyCustodian, keyNamespace,
+ keyMetadata);
+ }
+
+ @Override
+  public void clearManagedKeyDataCacheOnServers(List<ServerName> regionServers) throws IOException {
+ admin.clearManagedKeyDataCacheOnServers(regionServers);
+ }
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java
new file mode 100644
index 000000000000..e648d8a1c217
--- /dev/null
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java
@@ -0,0 +1,1105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.security;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.when;
+
+import java.io.IOException;
+import java.security.Key;
+import java.security.KeyException;
+import java.util.Arrays;
+import java.util.Collection;
+import javax.crypto.spec.SecretKeySpec;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.io.crypto.Cipher;
+import org.apache.hadoop.hbase.io.crypto.CipherProvider;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.KeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.MockAesKeyProvider;
+import org.apache.hadoop.hbase.io.hfile.FixedFileTrailer;
+import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
+import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
+import org.apache.hadoop.hbase.testclassification.SecurityTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameter;
+import org.junit.runners.Suite;
+
+@RunWith(Suite.class)
+@Suite.SuiteClasses({ TestSecurityUtil.TestBasic.class,
+ TestSecurityUtil.TestCreateEncryptionContext_ForWrites.class,
+ TestSecurityUtil.TestCreateEncryptionContext_ForReads.class,
+ TestSecurityUtil.TestCreateEncryptionContext_WithoutKeyManagement_UnwrapKeyException.class, })
+@Category({ SecurityTests.class, SmallTests.class })
+public class TestSecurityUtil {
+
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestSecurityUtil.class);
+
+ // Test constants to eliminate magic strings and improve maintainability
+ protected static final String TEST_NAMESPACE = "test-namespace";
+ protected static final String TEST_FAMILY = "test-family";
+ protected static final String HBASE_KEY = "hbase";
+ protected static final String TEST_KEK_METADATA = "test-kek-metadata";
+ protected static final long TEST_KEK_CHECKSUM = 12345L;
+ protected static final String TEST_KEY_16_BYTE = "test-key-16-byte";
+ protected static final String TEST_DEK_16_BYTE = "test-dek-16-byte";
+ protected static final String INVALID_KEY_DATA = "invalid-key-data";
+ protected static final String INVALID_WRAPPED_KEY_DATA = "invalid-wrapped-key-data";
+ protected static final String INVALID_SYSTEM_KEY_DATA = "invalid-system-key-data";
+ protected static final String UNKNOWN_CIPHER = "UNKNOWN_CIPHER";
+ protected static final String AES_CIPHER = "AES";
+ protected static final String DES_CIPHER = "DES";
+
+ protected Configuration conf;
+ protected HBaseTestingUtil testUtil;
+ protected Path testPath;
+ protected ColumnFamilyDescriptor mockFamily;
+ protected TableDescriptor mockTableDescriptor;
+ protected ManagedKeyDataCache mockManagedKeyDataCache;
+ protected SystemKeyCache mockSystemKeyCache;
+ protected FixedFileTrailer mockTrailer;
+ protected ManagedKeyData mockManagedKeyData;
+ protected Key testKey;
+ protected byte[] testWrappedKey;
+ protected Key kekKey;
+ protected String testTableNamespace;
+
+ /**
+ * Configuration builder for setting up different encryption test scenarios.
+ */
+ protected static class TestConfigBuilder {
+ private boolean encryptionEnabled = true;
+ private boolean keyManagementEnabled = false;
+ private boolean localKeyGenEnabled = false;
+ private String cipherProvider = "org.apache.hadoop.hbase.io.crypto.DefaultCipherProvider";
+ private String keyProvider = MockAesKeyProvider.class.getName();
+ private String masterKeyName = HBASE_KEY;
+
+ public TestConfigBuilder withEncryptionEnabled(boolean enabled) {
+ this.encryptionEnabled = enabled;
+ return this;
+ }
+
+ public TestConfigBuilder withKeyManagement(boolean localKeyGen) {
+ this.keyManagementEnabled = true;
+ this.localKeyGenEnabled = localKeyGen;
+ return this;
+ }
+
+ public TestConfigBuilder withNullCipherProvider() {
+ this.cipherProvider = NullCipherProvider.class.getName();
+ return this;
+ }
+
+ public void apply(Configuration conf) {
+ conf.setBoolean(Encryption.CRYPTO_ENABLED_CONF_KEY, encryptionEnabled);
+ conf.set(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY, keyProvider);
+ conf.set(HConstants.CRYPTO_MASTERKEY_NAME_CONF_KEY, masterKeyName);
+ conf.set(HConstants.CRYPTO_KEYPROVIDER_PARAMETERS_KEY, "true");
+ conf.set(HConstants.CRYPTO_CIPHERPROVIDER_CONF_KEY, cipherProvider);
+
+ if (keyManagementEnabled) {
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_ENABLED_CONF_KEY,
+ localKeyGenEnabled);
+ } else {
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, false);
+ }
+ }
+ }
+
+ protected static TestConfigBuilder configBuilder() {
+ return new TestConfigBuilder();
+ }
+
+ protected void setUpEncryptionConfig() {
+ // Set up real encryption configuration using default AES cipher
+ conf.setBoolean(Encryption.CRYPTO_ENABLED_CONF_KEY, true);
+ conf.set(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY, MockAesKeyProvider.class.getName());
+ conf.set(HConstants.CRYPTO_MASTERKEY_NAME_CONF_KEY, HBASE_KEY);
+ // Enable key caching
+ conf.set(HConstants.CRYPTO_KEYPROVIDER_PARAMETERS_KEY, "true");
+ // Use DefaultCipherProvider for real AES encryption functionality
+ conf.set(HConstants.CRYPTO_CIPHERPROVIDER_CONF_KEY,
+ "org.apache.hadoop.hbase.io.crypto.DefaultCipherProvider");
+ }
+
+ protected void setUpEncryptionConfigWithNullCipher() {
+ configBuilder().withNullCipherProvider().apply(conf);
+ }
+
+ // ==== Mock Setup Helpers ====
+
+ protected void setupManagedKeyDataCache(String namespace, ManagedKeyData keyData) {
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(namespace))).thenReturn(keyData);
+ }
+
+ protected void setupManagedKeyDataCache(String namespace, String globalSpace,
+ ManagedKeyData keyData) {
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(namespace))).thenReturn(null);
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(globalSpace))).thenReturn(keyData);
+ }
+
+ protected void setupTrailerMocks(byte[] keyBytes, String metadata, Long checksum,
+ String namespace) {
+ when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
+ when(mockTrailer.getKEKMetadata()).thenReturn(metadata);
+ if (checksum != null) {
+ when(mockTrailer.getKEKChecksum()).thenReturn(checksum);
+ }
+ when(mockTrailer.getKeyNamespace()).thenReturn(namespace);
+ }
+
+ protected void setupSystemKeyCache(Long checksum, ManagedKeyData keyData) {
+ when(mockSystemKeyCache.getSystemKeyByChecksum(checksum)).thenReturn(keyData);
+ }
+
+ protected void setupSystemKeyCache(ManagedKeyData latestKey) {
+ when(mockSystemKeyCache.getLatestSystemKey()).thenReturn(latestKey);
+ }
+
+ protected void setupManagedKeyDataCacheEntry(String namespace, String metadata, byte[] keyBytes,
+ ManagedKeyData keyData) throws IOException, KeyException {
+ when(mockManagedKeyDataCache.getEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(namespace), eq(metadata), eq(keyBytes))).thenReturn(keyData);
+ }
+
+ // ==== Exception Testing Helpers ====
+
+ protected <T extends Exception> void assertExceptionContains(Class<T> expectedType,
+ String expectedMessage, Runnable testCode) {
+ T exception = assertThrows(expectedType, () -> testCode.run());
+ assertTrue("Exception message should contain: " + expectedMessage,
+ exception.getMessage().contains(expectedMessage));
+ }
+
+ protected void assertEncryptionContextThrowsForWrites(Class<? extends Exception> expectedType,
+ String expectedMessage) {
+ Exception exception = assertThrows(Exception.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, mockTableDescriptor, mockFamily,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+ });
+ assertTrue("Expected exception type: " + expectedType.getName() + ", but got: "
+ + exception.getClass().getName(), expectedType.isInstance(exception));
+ assertTrue("Exception message should contain: " + expectedMessage,
+ exception.getMessage().contains(expectedMessage));
+ }
+
+ protected void assertEncryptionContextThrowsForReads(Class<? extends Exception> expectedType,
+ String expectedMessage) {
+ Exception exception = assertThrows(Exception.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ mockSystemKeyCache);
+ });
+ assertTrue("Expected exception type: " + expectedType.getName() + ", but got: "
+ + exception.getClass().getName(), expectedType.isInstance(exception));
+ assertTrue("Exception message should contain: " + expectedMessage,
+ exception.getMessage().contains(expectedMessage));
+ }
+
+ @Before
+ public void setUp() throws Exception {
+ conf = HBaseConfiguration.create();
+ testUtil = new HBaseTestingUtil(conf);
+ testPath = testUtil.getDataTestDir("test-file");
+
+ // Setup mocks (only for objects that don't have encryption logic)
+ mockFamily = mock(ColumnFamilyDescriptor.class);
+ mockTableDescriptor = mock(TableDescriptor.class);
+ mockManagedKeyDataCache = mock(ManagedKeyDataCache.class);
+ mockSystemKeyCache = mock(SystemKeyCache.class);
+ mockTrailer = mock(FixedFileTrailer.class);
+ mockManagedKeyData = mock(ManagedKeyData.class);
+
+ // Use a real test key with exactly 16 bytes for AES-128
+ testKey = new SecretKeySpec(TEST_KEY_16_BYTE.getBytes(), AES_CIPHER);
+
+ // Configure mocks
+ when(mockFamily.getEncryptionType()).thenReturn(AES_CIPHER);
+ when(mockFamily.getNameAsString()).thenReturn(TEST_FAMILY);
+ when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // Default to null for fallback
+ // logic
+ when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf("test:table"));
+ when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
+
+ testTableNamespace = KeyNamespaceUtil.constructKeyNamespace(mockTableDescriptor, mockFamily);
+
+ // Set up default encryption config
+ setUpEncryptionConfig();
+
+ // Create test wrapped key
+ KeyProvider keyProvider = Encryption.getKeyProvider(conf);
+ kekKey = keyProvider.getKey(HBASE_KEY);
+ Key key = keyProvider.getKey(TEST_DEK_16_BYTE);
+ testWrappedKey = EncryptionUtil.wrapKey(conf, null, key, kekKey);
+ }
+
+ private static byte[] createRandomWrappedKey(Configuration conf) throws IOException {
+ Cipher cipher = Encryption.getCipher(conf, "AES");
+ Key key = cipher.getRandomKey();
+ return EncryptionUtil.wrapKey(conf, HBASE_KEY, key);
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ SecurityTests.class, SmallTests.class })
+ public static class TestBasic extends TestSecurityUtil {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestBasic.class);
+
+ @Test
+ public void testGetUserFromPrincipal() {
+ // Test with slash separator
+ assertEquals("user1", SecurityUtil.getUserFromPrincipal("user1/host@REALM"));
+ assertEquals("user2", SecurityUtil.getUserFromPrincipal("user2@REALM"));
+
+ // Test with no realm
+ assertEquals("user3", SecurityUtil.getUserFromPrincipal("user3"));
+
+ // Test with multiple slashes
+ assertEquals("user4", SecurityUtil.getUserFromPrincipal("user4/host1/host2@REALM"));
+ }
+
+ @Test
+ public void testGetPrincipalWithoutRealm() {
+ // Test with realm
+ assertEquals("user1/host", SecurityUtil.getPrincipalWithoutRealm("user1/host@REALM"));
+ assertEquals("user2", SecurityUtil.getPrincipalWithoutRealm("user2@REALM"));
+
+ // Test without realm
+ assertEquals("user3", SecurityUtil.getPrincipalWithoutRealm("user3"));
+ assertEquals("user4/host", SecurityUtil.getPrincipalWithoutRealm("user4/host"));
+ }
+
+ @Test
+ public void testIsKeyManagementEnabled() {
+ Configuration conf = HBaseConfiguration.create();
+
+ // Test default behavior (should be false)
+ assertFalse(SecurityUtil.isKeyManagementEnabled(conf));
+
+ // Test with key management enabled
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+ assertTrue(SecurityUtil.isKeyManagementEnabled(conf));
+
+ // Test with key management disabled
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, false);
+ assertFalse(SecurityUtil.isKeyManagementEnabled(conf));
+ }
+ }
+
+ // Tests for the first createEncryptionContext method (for ColumnFamilyDescriptor)
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ SecurityTests.class, SmallTests.class })
+ public static class TestCreateEncryptionContext_ForWrites extends TestSecurityUtil {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestCreateEncryptionContext_ForWrites.class);
+
+ @Test
+ public void testWithNoEncryptionOnFamily() throws IOException {
+ when(mockFamily.getEncryptionType()).thenReturn(null);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ assertEquals(Encryption.Context.NONE, result);
+ }
+
+ @Test
+ public void testWithEncryptionDisabled() throws IOException {
+ configBuilder().withEncryptionEnabled(false).apply(conf);
+ assertEncryptionContextThrowsForWrites(IllegalStateException.class,
+ "encryption feature is disabled");
+ }
+
+ @Test
+ public void testWithKeyManagement_LocalKeyGen() throws IOException {
+ configBuilder().withKeyManagement(true).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ }
+
+ @Test
+ public void testWithKeyManagement_NoActiveKey_NoSystemKeyCache() throws IOException {
+ // Test backwards compatibility: when no active key found and system cache is null, should
+ // throw
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, ManagedKeyData.KEY_SPACE_GLOBAL, null);
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+
+ // With null system key cache, should still throw IOException
+ Exception exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, mockTableDescriptor, mockFamily,
+ mockManagedKeyDataCache, null);
+ });
+ assertTrue("Should reference system key cache",
+ exception.getMessage().contains("SystemKeyCache"));
+ }
+
+ @Test
+ public void testWithKeyManagement_NoActiveKey_WithSystemKeyCache() throws IOException {
+ // Test backwards compatibility: when no active key found but system cache available, should
+ // work
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, ManagedKeyData.KEY_SPACE_GLOBAL, null);
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Should use system key as KEK and generate random DEK
+ assertEquals(mockManagedKeyData, result.getKEKData());
+ }
+
+ @Test
+ public void testWithKeyManagement_LocalKeyGen_WithUnknownKeyCipher() throws IOException {
+ when(mockFamily.getEncryptionType()).thenReturn(UNKNOWN_CIPHER);
+ Key unknownKey = mock(Key.class);
+ when(unknownKey.getAlgorithm()).thenReturn(UNKNOWN_CIPHER);
+ when(mockManagedKeyData.getTheKey()).thenReturn(unknownKey);
+
+ configBuilder().withKeyManagement(true).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ assertEncryptionContextThrowsForWrites(RuntimeException.class,
+ "Cipher 'UNKNOWN_CIPHER' is not");
+ }
+
+ @Test
+ public void testWithKeyManagement_LocalKeyGen_WithKeyAlgorithmMismatch() throws IOException {
+ Key desKey = mock(Key.class);
+ when(desKey.getAlgorithm()).thenReturn(DES_CIPHER);
+ when(mockManagedKeyData.getTheKey()).thenReturn(desKey);
+
+ configBuilder().withKeyManagement(true).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ assertEncryptionContextThrowsForWrites(IllegalStateException.class,
+ "Encryption for family 'test-family' configured with type 'AES' but key specifies "
+ + "algorithm 'DES'");
+ }
+
+ @Test
+ public void testWithKeyManagement_UseSystemKeyWithNSSpecificActiveKey() throws IOException {
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ setupSystemKeyCache(mockManagedKeyData);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ }
+
+ @Test
+ public void testWithKeyManagement_UseSystemKeyWithoutNSSpecificActiveKey() throws IOException {
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, ManagedKeyData.KEY_SPACE_GLOBAL,
+ mockManagedKeyData);
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ }
+
+ @Test
+ public void testWithoutKeyManagement_WithFamilyProvidedKey() throws Exception {
+ byte[] wrappedKey = createRandomWrappedKey(conf);
+ when(mockFamily.getEncryptionKey()).thenReturn(wrappedKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result, false);
+ }
+
+ @Test
+ public void testWithoutKeyManagement_KeyAlgorithmMismatch() throws Exception {
+ // Create a key with different algorithm and wrap it
+ Key differentKey = new SecretKeySpec(TEST_KEY_16_BYTE.getBytes(), DES_CIPHER);
+ byte[] wrappedDESKey = EncryptionUtil.wrapKey(conf, HBASE_KEY, differentKey);
+ when(mockFamily.getEncryptionKey()).thenReturn(wrappedDESKey);
+
+ assertEncryptionContextThrowsForWrites(IllegalStateException.class,
+ "Encryption for family 'test-family' configured with type 'AES' but key specifies "
+ + "algorithm 'DES'");
+ }
+
+ @Test
+ public void testWithUnavailableCipher() throws IOException {
+ when(mockFamily.getEncryptionType()).thenReturn(UNKNOWN_CIPHER);
+ setUpEncryptionConfigWithNullCipher();
+ assertEncryptionContextThrowsForWrites(IllegalStateException.class,
+ "Cipher 'UNKNOWN_CIPHER' is not available");
+ }
+
+ // ---- New backwards compatibility test scenarios ----
+
+ @Test
+ public void testBackwardsCompatibility_Scenario1_FamilyKeyWithKeyManagement()
+ throws IOException {
+ // Scenario 1: Family has encryption key -> use as DEK, latest STK as KEK
+ when(mockFamily.getEncryptionKey()).thenReturn(testWrappedKey);
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that system key is used as KEK
+ assertEquals(mockManagedKeyData, result.getKEKData());
+ }
+
+ @Test
+ public void testBackwardsCompatibility_Scenario2a_ActiveKeyAsDeK() throws IOException {
+ // Scenario 2a: Active key exists, local key gen disabled -> use active key as DEK, latest STK
+ // as KEK
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ ManagedKeyData mockSystemKey = mock(ManagedKeyData.class);
+ when(mockSystemKey.getTheKey()).thenReturn(kekKey);
+ setupSystemKeyCache(mockSystemKey);
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that active key is used as DEK and system key as KEK
+ assertEquals(testKey, result.getKey()); // Active key should be the DEK
+ assertEquals(mockSystemKey, result.getKEKData()); // System key should be the KEK
+ }
+
+ @Test
+ public void testBackwardsCompatibility_Scenario2b_ActiveKeyAsKekWithLocalKeyGen()
+ throws IOException {
+ // Scenario 2b: Active key exists, local key gen enabled -> use active key as KEK, generate
+ // random DEK
+ configBuilder().withKeyManagement(true).apply(conf);
+ setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that active key is used as KEK and a generated key as DEK
+ assertNotNull("DEK should be generated", result.getKey());
+ assertEquals(mockManagedKeyData, result.getKEKData()); // Active key should be the KEK
+ }
+
+ @Test
+ public void testBackwardsCompatibility_Scenario3a_NoActiveKeyGenerateLocalKey()
+ throws IOException {
+ // Scenario 3: No active key -> generate random DEK, latest STK as KEK
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupManagedKeyDataCache(TEST_NAMESPACE, ManagedKeyData.KEY_SPACE_GLOBAL, null); // No active
+ // key
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that a random key is generated as DEK and system key as KEK
+ assertNotNull("DEK should be generated", result.getKey());
+ assertEquals(mockManagedKeyData, result.getKEKData()); // System key should be the KEK
+ }
+
+ @Test
+ public void testWithoutKeyManagement_Scenario3b_WithRandomKeyGeneration() throws IOException {
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result, false);
+ // Here system key with a local key gen, so no namespace is set.
+ assertNull(result.getKeyNamespace());
+ }
+
+ @Test
+ public void testFallbackRule1_CFKeyNamespaceAttribute() throws IOException {
+ // Test Rule 1: Column family has KEY_NAMESPACE attribute
+ String cfKeyNamespace = "cf-specific-namespace";
+ when(mockFamily.getEncryptionKeyNamespace()).thenReturn(cfKeyNamespace);
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ // Mock managed key data cache to return active key only for CF namespace
+ setupManagedKeyDataCache(cfKeyNamespace, mockManagedKeyData);
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that CF-specific namespace was used
+ assertEquals(cfKeyNamespace, result.getKeyNamespace());
+ }
+
+ @Test
+ public void testFallbackRule2_ConstructedNamespace() throws IOException {
+ when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // No CF namespace
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+ setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupSystemKeyCache(mockManagedKeyData);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that constructed namespace was used
+ assertEquals(testTableNamespace, result.getKeyNamespace());
+ }
+
+ @Test
+ public void testFallbackRule3_TableNameAsNamespace() throws IOException {
+ // Test Rule 3: Use table name as namespace when CF namespace and constructed namespace fail
+ when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // No CF namespace
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ String tableName = "test:table";
+ when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf(tableName));
+
+ // Mock cache to fail for CF and constructed namespace, succeed for table name
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(testTableNamespace))).thenReturn(null); // Constructed namespace fails
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(tableName))).thenReturn(mockManagedKeyData); // Table name succeeds
+
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that table name was used as namespace
+ assertEquals(tableName, result.getKeyNamespace());
+ }
+
+ @Test
+ public void testFallbackRule4_GlobalNamespace() throws IOException {
+ // Test Rule 4: Fall back to global namespace when all other rules fail
+ when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // No CF namespace
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ String tableName = "test:table";
+ when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf(tableName));
+
+ // Mock cache to fail for all specific namespaces, succeed only for global
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(testTableNamespace))).thenReturn(null); // Constructed namespace fails
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(tableName))).thenReturn(null); // Table name fails
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(ManagedKeyData.KEY_SPACE_GLOBAL))).thenReturn(mockManagedKeyData); // Global succeeds
+
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that global namespace was used
+ assertEquals(ManagedKeyData.KEY_SPACE_GLOBAL, result.getKeyNamespace());
+ }
+
+ @Test
+ public void testFallbackRuleOrder() throws IOException {
+ // Test that the rules are tried in the correct order
+ String cfKeyNamespace = "cf-namespace";
+ String tableName = "test:table";
+
+ when(mockFamily.getEncryptionKeyNamespace()).thenReturn(cfKeyNamespace);
+ when(mockFamily.getEncryptionKey()).thenReturn(null);
+ when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf(tableName));
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ // Set up mocks so that CF namespace fails but table name would succeed
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(cfKeyNamespace))).thenReturn(null); // CF namespace fails
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(testTableNamespace))).thenReturn(mockManagedKeyData); // Constructed namespace succeeds
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq(tableName))).thenReturn(mockManagedKeyData); // Table name would also succeed
+
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Verify that constructed namespace was used (Rule 2), not table name (Rule 3)
+ assertEquals(testTableNamespace, result.getKeyNamespace());
+ }
+
+ @Test
+ public void testBackwardsCompatibility_Scenario1_FamilyKeyWithoutKeyManagement()
+ throws IOException {
+ // Scenario 1 variation: Family has encryption key but key management disabled -> use as DEK,
+ // no KEK
+ byte[] wrappedKey = createRandomWrappedKey(conf);
+ when(mockFamily.getEncryptionKey()).thenReturn(wrappedKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
+ mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result, false); // No key management, so no KEK data
+ }
+
+ @Test
+ public void testWithKeyManagement_FamilyKey_UnwrapKeyException() throws Exception {
+ // Test for KeyException->IOException wrapping when family has key bytes with key management
+ // enabled
+ // This covers the exception block at lines 103-105 in SecurityUtil.java
+
+ // Create a properly wrapped key first, then corrupt it to cause unwrapping failure
+ Key wrongKek = new SecretKeySpec("bad-kek-16-bytes".getBytes(), AES_CIPHER); // Exactly 16
+ // bytes
+ byte[] validWrappedKey = EncryptionUtil.wrapKey(conf, null, testKey, wrongKek);
+
+ when(mockFamily.getEncryptionKey()).thenReturn(validWrappedKey);
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupSystemKeyCache(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(kekKey); // Different KEK for unwrapping
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, mockTableDescriptor, mockFamily,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+ });
+
+ // The IOException should wrap a KeyException from the unwrapping process
+ assertNotNull("Exception should have a cause", exception.getCause());
+ assertTrue("Exception cause should be a KeyException",
+ exception.getCause() instanceof KeyException);
+ }
+
+ // Tests for the second createEncryptionContext method (for reading files)
+
+ @Test
+ public void testWithNoKeyMaterial() throws IOException {
+ when(mockTrailer.getEncryptionKey()).thenReturn(null);
+ when(mockTrailer.getKeyNamespace()).thenReturn(TEST_NAMESPACE);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+
+ assertEquals(Encryption.Context.NONE, result);
+ }
+ }
+
+ // Tests for the second createEncryptionContext method (for reading files)
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ SecurityTests.class, SmallTests.class })
+ public static class TestCreateEncryptionContext_ForReads extends TestSecurityUtil {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestCreateEncryptionContext_ForReads.class);
+
+ @Test
+ public void testWithKEKMetadata_STKLookupFirstThenManagedKey() throws Exception {
+ // Test new logic: STK lookup happens first, then metadata lookup if STK fails
+ // Set up scenario where both checksum and metadata are available
+ setupTrailerMocks(testWrappedKey, TEST_KEK_METADATA, TEST_KEK_CHECKSUM, null);
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ // STK lookup should succeed and be used (first priority)
+ ManagedKeyData stkKeyData = mock(ManagedKeyData.class);
+ when(stkKeyData.getTheKey()).thenReturn(kekKey);
+ setupSystemKeyCache(TEST_KEK_CHECKSUM, stkKeyData);
+
+ // Also set up managed key cache (but it shouldn't be used since STK succeeds)
+ setupManagedKeyDataCacheEntry(testTableNamespace, TEST_KEK_METADATA, testWrappedKey,
+ mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey())
+ .thenThrow(new RuntimeException("This should not be called"));
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Should use STK data, not managed key data
+ assertEquals(stkKeyData, result.getKEKData());
+ }
+
+ @Test
+ public void testWithKEKMetadata_STKFailsThenManagedKeySucceeds() throws Exception {
+ // Test fallback: STK lookup fails, metadata lookup succeeds
+ setupTrailerMocks(testWrappedKey, TEST_KEK_METADATA, TEST_KEK_CHECKSUM, testTableNamespace);
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ // STK lookup should fail (returns null)
+ when(mockSystemKeyCache.getSystemKeyByChecksum(TEST_KEK_CHECKSUM)).thenReturn(null);
+
+ // Managed key lookup should succeed
+ setupManagedKeyDataCacheEntry(testTableNamespace, TEST_KEK_METADATA, testWrappedKey,
+ mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ // Should use managed key data since STK failed
+ assertEquals(mockManagedKeyData, result.getKEKData());
+ }
+
+ @Test
+ public void testWithKeyManagement_KEKMetadataAndChecksumFailure()
+ throws IOException, KeyException {
+ // Test scenario where both STK lookup and managed key lookup fail
+ byte[] keyBytes = "test-encrypted-key".getBytes();
+ String kekMetadata = "test-kek-metadata";
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
+ when(mockTrailer.getKEKMetadata()).thenReturn(kekMetadata);
+ when(mockTrailer.getKEKChecksum()).thenReturn(TEST_KEK_CHECKSUM);
+ when(mockTrailer.getKeyNamespace()).thenReturn("test-namespace");
+
+ // STK lookup should fail
+ when(mockSystemKeyCache.getSystemKeyByChecksum(TEST_KEK_CHECKSUM)).thenReturn(null);
+
+ // Managed key lookup should also fail
+ when(mockManagedKeyDataCache.getEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ eq("test-namespace"), eq(kekMetadata), eq(keyBytes)))
+ .thenThrow(new IOException("Key not found"));
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ mockSystemKeyCache);
+ });
+
+ assertTrue(
+ exception.getMessage().contains("Failed to get key data for KEK metadata: " + kekMetadata));
+ assertTrue(exception.getCause().getMessage().contains("Key not found"));
+ }
+
+ @Test
+ public void testWithKeyManagement_UseSystemKey() throws IOException {
+ // Test STK lookup by checksum (first priority in new logic)
+ setupTrailerMocks(testWrappedKey, null, TEST_KEK_CHECKSUM, null);
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupSystemKeyCache(TEST_KEK_CHECKSUM, mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ assertEquals(mockManagedKeyData, result.getKEKData());
+ }
+
+ @Test
+ public void testBackwardsCompatibility_WithKeyManagement_LatestSystemKeyNotFound()
+ throws IOException {
+ // Test when both STK lookup by checksum fails and latest system key is null
+ byte[] keyBytes = "test-encrypted-key".getBytes();
+
+ when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
+
+ // Enable key management
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+
+ // Both checksum lookup and latest system key lookup should fail
+ when(mockSystemKeyCache.getLatestSystemKey()).thenReturn(null);
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ mockSystemKeyCache);
+ });
+
+ assertTrue(exception.getMessage().contains("Failed to get latest system key"));
+ }
+
+ @Test
+ public void testBackwardsCompatibility_FallbackToLatestSystemKey() throws IOException {
+ // Test fallback to latest system key when both checksum and metadata are unavailable
+ setupTrailerMocks(testWrappedKey, null, 0L, TEST_NAMESPACE); // No checksum, no metadata
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ ManagedKeyData latestSystemKey = mock(ManagedKeyData.class);
+ when(latestSystemKey.getTheKey()).thenReturn(kekKey);
+ when(mockSystemKeyCache.getLatestSystemKey()).thenReturn(latestSystemKey);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result);
+ assertEquals(latestSystemKey, result.getKEKData());
+ }
+
+ @Test
+ public void testWithoutKeyManagementEnabled() throws IOException {
+ byte[] wrappedKey = createRandomWrappedKey(conf);
+ when(mockTrailer.getEncryptionKey()).thenReturn(wrappedKey);
+ when(mockTrailer.getKEKMetadata()).thenReturn(null);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result, false);
+ }
+
+ @Test
+ public void testKeyManagementBackwardsCompatibility() throws Exception {
+ when(mockTrailer.getEncryptionKey()).thenReturn(testWrappedKey);
+ when(mockSystemKeyCache.getLatestSystemKey()).thenReturn(mockManagedKeyData);
+ when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
+ configBuilder().withKeyManagement(false).apply(conf);
+
+ Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
+ mockManagedKeyDataCache, mockSystemKeyCache);
+
+ verifyContext(result, true);
+ }
+
+ @Test
+ public void testWithoutKeyManagement_UnwrapFailure() throws IOException {
+ byte[] invalidKeyBytes = INVALID_KEY_DATA.getBytes();
+ when(mockTrailer.getEncryptionKey()).thenReturn(invalidKeyBytes);
+ when(mockTrailer.getKEKMetadata()).thenReturn(null);
+
+ Exception exception = assertThrows(Exception.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ mockSystemKeyCache);
+ });
+
+ // The exception should indicate that unwrapping failed - could be IOException or
+ // RuntimeException
+ assertNotNull(exception);
+ }
+
+ @Test
+ public void testCreateEncryptionContext_WithoutKeyManagement_UnavailableCipher()
+ throws Exception {
+ // Create a DES key and wrap it first with working configuration
+ Key desKey = new SecretKeySpec("test-key-16-byte".getBytes(), "DES");
+ byte[] wrappedDESKey = EncryptionUtil.wrapKey(conf, HBASE_KEY, desKey);
+
+ when(mockTrailer.getEncryptionKey()).thenReturn(wrappedDESKey);
+ when(mockTrailer.getKEKMetadata()).thenReturn(null);
+
+ // Disable key management and use null cipher provider
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, false);
+ setUpEncryptionConfigWithNullCipher();
+
+ RuntimeException exception = assertThrows(RuntimeException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ mockSystemKeyCache);
+ });
+
+ assertTrue(exception.getMessage().contains("Cipher 'AES' not available"));
+ }
+
+ @Test
+ public void testCreateEncryptionContext_WithKeyManagement_NullKeyManagementCache()
+ throws IOException {
+ byte[] keyBytes = "test-encrypted-key".getBytes();
+ String kekMetadata = "test-kek-metadata";
+
+ when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
+ when(mockTrailer.getKEKMetadata()).thenReturn(kekMetadata);
+ when(mockTrailer.getKeyNamespace()).thenReturn("test-namespace");
+
+ // Enable key management
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, null, mockSystemKeyCache);
+ });
+
+ assertTrue(exception.getMessage().contains("ManagedKeyDataCache is null"));
+ }
+
+ @Test
+ public void testCreateEncryptionContext_WithKeyManagement_NullSystemKeyCache()
+ throws IOException {
+ byte[] keyBytes = "test-encrypted-key".getBytes();
+
+ when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
+ when(mockTrailer.getKEKMetadata()).thenReturn(null);
+ when(mockTrailer.getKeyNamespace()).thenReturn("test-namespace");
+
+ // Enable key management
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ null);
+ });
+
+ assertTrue(exception.getMessage()
+ .contains("SystemKeyCache can't be null when using key management feature"));
+ }
+ }
+
+ @RunWith(Parameterized.class)
+ @Category({ SecurityTests.class, SmallTests.class })
+ public static class TestCreateEncryptionContext_WithoutKeyManagement_UnwrapKeyException
+ extends TestSecurityUtil {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE = HBaseClassTestRule
+ .forClass(TestCreateEncryptionContext_WithoutKeyManagement_UnwrapKeyException.class);
+
+ @Parameter(0)
+ public boolean isKeyException;
+
+ @Parameterized.Parameters(name = "{index},isKeyException={0}")
+ public static Collection data() {
+ return Arrays.asList(new Object[][] { { true }, { false }, });
+ }
+
+ @Test
+ public void testWithDEK() throws IOException, KeyException {
+ byte[] wrappedKey = createRandomWrappedKey(conf);
+ MockAesKeyProvider keyProvider = (MockAesKeyProvider) Encryption.getKeyProvider(conf);
+ keyProvider.clearKeys(); // Let a new key be instantiated and cause an unwrap failure.
+
+ setupTrailerMocks(wrappedKey, null, 0L, null);
+ setupManagedKeyDataCacheEntry(TEST_NAMESPACE, TEST_KEK_METADATA, wrappedKey,
+ mockManagedKeyData);
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ mockSystemKeyCache);
+ });
+
+ assertTrue(exception.getMessage().contains("Key was not successfully unwrapped"));
+ // The root cause should be some kind of parsing/unwrapping exception
+ assertNotNull(exception.getCause());
+ }
+
+ @Test
+ public void testWithSystemKey() throws IOException {
+ // Use invalid key bytes to trigger unwrapping failure
+ byte[] invalidKeyBytes = INVALID_SYSTEM_KEY_DATA.getBytes();
+
+ setupTrailerMocks(invalidKeyBytes, null, TEST_KEK_CHECKSUM, null);
+ configBuilder().withKeyManagement(false).apply(conf);
+ setupSystemKeyCache(TEST_KEK_CHECKSUM, mockManagedKeyData);
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
+ mockSystemKeyCache);
+ });
+
+ assertTrue(exception.getMessage().contains(
+ "Failed to unwrap key with KEK checksum: " + TEST_KEK_CHECKSUM + ", metadata: null"));
+ // The root cause should be some kind of parsing/unwrapping exception
+ assertNotNull(exception.getCause());
+ }
+ }
+
+ protected void verifyContext(Encryption.Context context) {
+ verifyContext(context, true);
+ }
+
+ protected void verifyContext(Encryption.Context context, boolean withKeyManagement) {
+ assertNotNull(context);
+ assertNotNull("Context should have a cipher", context.getCipher());
+ assertNotNull("Context should have a key", context.getKey());
+ if (withKeyManagement) {
+ assertNotNull("Context should have KEK data when key management is enabled",
+ context.getKEKData());
+ } else {
+ assertNull("Context should not have KEK data when key management is disabled",
+ context.getKEKData());
+ }
+ }
+
+ /**
+ * Null cipher provider for testing error cases.
+ */
+ public static class NullCipherProvider implements CipherProvider {
+ private Configuration conf;
+
+ @Override
+ public Configuration getConf() {
+ return conf;
+ }
+
+ @Override
+ public void setConf(Configuration conf) {
+ this.conf = conf;
+ }
+
+ @Override
+ public String getName() {
+ return "null";
+ }
+
+ @Override
+ public String[] getSupportedCiphers() {
+ return new String[0];
+ }
+
+ @Override
+ public Cipher getCipher(String name) {
+ return null; // Always return null to simulate unavailable cipher
+ }
+ }
+}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java
index f0cc2febd6e8..7b67b838659b 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestEncryptionTest.java
@@ -17,6 +17,7 @@
*/
package org.apache.hadoop.hbase.util;
+import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
import java.io.IOException;
@@ -30,7 +31,9 @@
import org.apache.hadoop.hbase.io.crypto.DefaultCipherProvider;
import org.apache.hadoop.hbase.io.crypto.Encryption;
import org.apache.hadoop.hbase.io.crypto.KeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyStoreKeyProvider;
import org.apache.hadoop.hbase.io.crypto.MockAesKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
import org.apache.hadoop.hbase.testclassification.MiscTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.junit.ClassRule;
@@ -130,6 +133,71 @@ public void testTestEnabledWhenCryptoIsExplicitlyDisabled() throws Exception {
EncryptionTest.testEncryption(conf, algorithm, null);
}
+ // Utility methods for configuration setup
+ private Configuration createManagedKeyProviderConfig() {
+ Configuration conf = HBaseConfiguration.create();
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ MockManagedKeyProvider.class.getName());
+ return conf;
+ }
+
+ @Test
+ public void testManagedKeyProvider() throws Exception {
+ Configuration conf = createManagedKeyProviderConfig();
+ EncryptionTest.testKeyProvider(conf);
+ assertTrue("Managed provider should be cached", EncryptionTest.keyProviderResults
+ .containsKey(conf.get(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY)));
+ }
+
+ @Test(expected = IOException.class)
+ public void testBadManagedKeyProvider() throws Exception {
+ Configuration conf = HBaseConfiguration.create();
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ FailingManagedKeyProvider.class.getName());
+ EncryptionTest.testKeyProvider(conf);
+ fail("Instantiation of bad managed key provider should have failed check");
+ }
+
+ @Test
+ public void testEncryptionWithManagedKeyProvider() throws Exception {
+ Configuration conf = createManagedKeyProviderConfig();
+ String algorithm = conf.get(HConstants.CRYPTO_KEY_ALGORITHM_CONF_KEY, HConstants.CIPHER_AES);
+ EncryptionTest.testEncryption(conf, algorithm, null);
+ assertTrue("Managed provider should be cached", EncryptionTest.keyProviderResults
+ .containsKey(conf.get(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY)));
+ }
+
+ @Test(expected = IOException.class)
+ public void testUnknownCipherWithManagedKeyProvider() throws Exception {
+ Configuration conf = createManagedKeyProviderConfig();
+ EncryptionTest.testEncryption(conf, "foobar", null);
+ fail("Test for bogus cipher should have failed with managed key provider");
+ }
+
+ @Test(expected = IOException.class)
+ public void testManagedKeyProviderWhenCryptoIsExplicitlyDisabled() throws Exception {
+ Configuration conf = createManagedKeyProviderConfig();
+ String algorithm = conf.get(HConstants.CRYPTO_KEY_ALGORITHM_CONF_KEY, HConstants.CIPHER_AES);
+ conf.setBoolean(Encryption.CRYPTO_ENABLED_CONF_KEY, false);
+ EncryptionTest.testEncryption(conf, algorithm, null);
+ assertTrue("Managed provider should be cached", EncryptionTest.keyProviderResults
+ .containsKey(conf.get(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY)));
+ }
+
+ @Test(expected = IOException.class)
+ public void testManagedKeyProviderWithKeyManagementDisabled() throws Exception {
+ Configuration conf = HBaseConfiguration.create();
+ conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, false);
+ // Using the managed key provider without enabling key management should fail
+ conf.set(HConstants.CRYPTO_KEYPROVIDER_CONF_KEY, ManagedKeyStoreKeyProvider.class.getName());
+
+ EncryptionTest.testKeyProvider(conf);
+ fail("Should have failed when using managed provider with key management disabled");
+ }
+
public static class FailingKeyProvider implements KeyProvider {
@Override
@@ -181,4 +249,12 @@ public Cipher getCipher(String name) {
}
}
+
+ // Helper class for testing failing managed key provider
+ public static class FailingManagedKeyProvider extends MockManagedKeyProvider {
+ @Override
+ public void initConfig(Configuration conf, String params) {
+ throw new RuntimeException("BAD MANAGED PROVIDER!");
+ }
+ }
}
diff --git a/hbase-shell/src/main/ruby/hbase/hbase.rb b/hbase-shell/src/main/ruby/hbase/hbase.rb
index a9b35ed1de21..a7e531806cfe 100644
--- a/hbase-shell/src/main/ruby/hbase/hbase.rb
+++ b/hbase-shell/src/main/ruby/hbase/hbase.rb
@@ -1,3 +1,5 @@
+# frozen_string_literal: true
+
#
#
# Licensed to the Apache Software Foundation (ASF) under one
@@ -29,6 +31,7 @@
require 'hbase/visibility_labels'
module Hbase
+ # Main HBase class for connection and admin operations
class Hbase
attr_accessor :configuration
@@ -45,18 +48,21 @@ def initialize(config = nil)
end
def connection
- if @connection.nil?
- @connection = ConnectionFactory.createConnection(configuration)
- end
+ @connection = ConnectionFactory.createConnection(configuration) if @connection.nil?
@connection
end
+
# Returns ruby's Admin class from admin.rb
def admin
- ::Hbase::Admin.new(self.connection)
+ ::Hbase::Admin.new(connection)
end
def rsgroup_admin
- ::Hbase::RSGroupAdmin.new(self.connection)
+ ::Hbase::RSGroupAdmin.new(connection)
+ end
+
+ def keymeta_admin
+ ::Hbase::KeymetaAdmin.new(connection)
end
def taskmonitor
@@ -65,7 +71,7 @@ def taskmonitor
# Create new one each time
def table(table, shell)
- ::Hbase::Table.new(self.connection.getTable(TableName.valueOf(table)), shell)
+ ::Hbase::Table.new(connection.getTable(TableName.valueOf(table)), shell)
end
def replication_admin
@@ -73,21 +79,19 @@ def replication_admin
end
def security_admin
- ::Hbase::SecurityAdmin.new(self.connection.getAdmin)
+ ::Hbase::SecurityAdmin.new(connection.getAdmin)
end
def visibility_labels_admin
- ::Hbase::VisibilityLabelsAdmin.new(self.connection.getAdmin)
+ ::Hbase::VisibilityLabelsAdmin.new(connection.getAdmin)
end
def quotas_admin
- ::Hbase::QuotasAdmin.new(self.connection.getAdmin)
+ ::Hbase::QuotasAdmin.new(connection.getAdmin)
end
def shutdown
- if @connection != nil
- @connection.close
- end
+ @connection&.close
end
end
end
diff --git a/hbase-shell/src/main/ruby/hbase/keymeta_admin.rb b/hbase-shell/src/main/ruby/hbase/keymeta_admin.rb
new file mode 100644
index 000000000000..12cd5445b066
--- /dev/null
+++ b/hbase-shell/src/main/ruby/hbase/keymeta_admin.rb
@@ -0,0 +1,95 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'java'
+java_import org.apache.hadoop.hbase.io.crypto.ManagedKeyData
+java_import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider
+java_import org.apache.hadoop.hbase.keymeta.KeymetaAdminClient
+
+module Hbase
+ # KeymetaAdmin provides a Ruby interface to the HBase Key Management API.
+ class KeymetaAdmin
+ def initialize(connection)
+ @connection = connection
+ @admin = KeymetaAdminClient.new(connection)
+ @hb_admin = @connection.getAdmin
+ end
+
+ def close
+ @admin.close
+ end
+
+ def enable_key_management(key_info)
+ cust, namespace = extract_cust_info(key_info)
+ @admin.enableKeyManagement(cust, namespace)
+ end
+
+ def get_key_statuses(key_info)
+ cust, namespace = extract_cust_info(key_info)
+ @admin.getManagedKeys(cust, namespace)
+ end
+
+ def disable_key_management(key_info)
+ cust, namespace = extract_cust_info(key_info)
+ @admin.disableKeyManagement(cust, namespace)
+ end
+
+ def disable_managed_key(key_info, key_metadata_hash_base64)
+ cust, namespace = extract_cust_info(key_info)
+ key_metadata_hash_bytes = decode_to_bytes(key_metadata_hash_base64)
+ @admin.disableManagedKey(cust, namespace, key_metadata_hash_bytes)
+ end
+
+ def rotate_managed_key(key_info)
+ cust, namespace = extract_cust_info(key_info)
+ @admin.rotateManagedKey(cust, namespace)
+ end
+
+ def refresh_managed_keys(key_info)
+ cust, namespace = extract_cust_info(key_info)
+ @admin.refreshManagedKeys(cust, namespace)
+ end
+
+ def rotate_stk
+ @admin.rotateSTK
+ end
+
+ def extract_cust_info(key_info)
+ cust_info = key_info.split(':')
+ raise(ArgumentError, 'Invalid cust:namespace format') unless [1, 2].include?(cust_info.length)
+
+ custodian = cust_info[0]
+ namespace = cust_info.length > 1 ? cust_info[1] : ManagedKeyData::KEY_SPACE_GLOBAL
+ cust_bytes = decode_to_bytes custodian
+
+ [cust_bytes, namespace]
+ end
+
+ def decode_to_bytes(base64_string)
+ ManagedKeyProvider.decodeToBytes(base64_string)
+ rescue Java::JavaIo::IOException => e
+ message = e.cause&.message || e.message
+ raise(ArgumentError, "Failed to decode Base64 encoded string '#{base64_string}': #{message}")
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/hbase_constants.rb b/hbase-shell/src/main/ruby/hbase_constants.rb
index d4df1f8f5821..67892e5538c0 100644
--- a/hbase-shell/src/main/ruby/hbase_constants.rb
+++ b/hbase-shell/src/main/ruby/hbase_constants.rb
@@ -138,3 +138,4 @@ def self.promote_constants(constants)
require 'hbase/security'
require 'hbase/visibility_labels'
require 'hbase/rsgroup_admin'
+require 'hbase/keymeta_admin'
diff --git a/hbase-shell/src/main/ruby/shell.rb b/hbase-shell/src/main/ruby/shell.rb
index 81baaf76d306..9f9ff203bed6 100644
--- a/hbase-shell/src/main/ruby/shell.rb
+++ b/hbase-shell/src/main/ruby/shell.rb
@@ -150,6 +150,10 @@ def hbase_rsgroup_admin
@rsgroup_admin ||= hbase.rsgroup_admin
end
+ def hbase_keymeta_admin
+ @hbase_keymeta_admin ||= hbase.keymeta_admin
+ end
+
##
# Create singleton methods on the target receiver object for all the loaded commands
#
@@ -618,6 +622,23 @@ def self.exception_handler(hide_traceback)
]
)
+Shell.load_command_group(
+ 'keymeta',
+ full_name: 'Keymeta',
+ comment: "NOTE: The KeyMeta Coprocessor Endpoint must be enabled on the Master, else commands fail
+ with: UnknownProtocolException: No registered Master Coprocessor Endpoint found for
+ ManagedKeysService",
+ commands: %w[
+ enable_key_management
+ show_key_status
+ rotate_stk
+ disable_key_management
+ disable_managed_key
+ refresh_managed_keys
+ rotate_managed_key
+ ]
+)
+
Shell.load_command_group(
'rsgroup',
full_name: 'RSGroups',
diff --git a/hbase-shell/src/main/ruby/shell/commands.rb b/hbase-shell/src/main/ruby/shell/commands.rb
index a40f737e7908..a97dddc4e6a0 100644
--- a/hbase-shell/src/main/ruby/shell/commands.rb
+++ b/hbase-shell/src/main/ruby/shell/commands.rb
@@ -105,6 +105,10 @@ def rsgroup_admin
@shell.hbase_rsgroup_admin
end
+ def keymeta_admin
+ @shell.hbase_keymeta_admin
+ end
+
#----------------------------------------------------------------------
# Creates formatter instance first time and then reuses it.
def formatter
diff --git a/hbase-shell/src/main/ruby/shell/commands/disable_key_management.rb b/hbase-shell/src/main/ruby/shell/commands/disable_key_management.rb
new file mode 100644
index 000000000000..ead9dce96e8f
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/disable_key_management.rb
@@ -0,0 +1,45 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'shell/commands/keymeta_command_base'
+
+module Shell
+ module Commands
+ # DisableKeyManagement provides a Ruby interface to disable key management via the
+ # HBase Key Management API.
+ class DisableKeyManagement < KeymetaCommandBase
+ def help
+ <<-EOF
+Disable key management for a given cust:namespace (cust is Base64 encoded).
+If no namespace is specified, the global namespace (*) is used.
+
+Example:
+ hbase> disable_key_management 'cust:namespace'
+ hbase> disable_key_management 'cust'
+ EOF
+ end
+
+ def command(key_info)
+ statuses = [keymeta_admin.disable_key_management(key_info)]
+ print_key_statuses(statuses)
+ end
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/shell/commands/disable_managed_key.rb b/hbase-shell/src/main/ruby/shell/commands/disable_managed_key.rb
new file mode 100644
index 000000000000..4384c0f3c825
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/disable_managed_key.rb
@@ -0,0 +1,45 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'shell/commands/keymeta_command_base'
+
+module Shell
+ module Commands
+ # DisableManagedKey provides a Ruby interface to disable a managed key via the
+ # HBase Key Management API.
+ class DisableManagedKey < KeymetaCommandBase
+ def help
+ <<-EOF
+Disable a managed key for a given cust:namespace (cust is Base64 encoded) and key metadata hash
+(Base64 encoded). If no namespace is specified, the global namespace (*) is used.
+
+Example:
+ hbase> disable_managed_key 'cust:namespace key_metadata_hash_base64'
+ hbase> disable_managed_key 'cust key_metadata_hash_base64'
+ EOF
+ end
+
+ def command(key_info, key_metadata_hash_base64)
+ statuses = [keymeta_admin.disable_managed_key(key_info, key_metadata_hash_base64)]
+ print_key_statuses(statuses)
+ end
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/shell/commands/enable_key_management.rb b/hbase-shell/src/main/ruby/shell/commands/enable_key_management.rb
new file mode 100644
index 000000000000..d594fa024b68
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/enable_key_management.rb
@@ -0,0 +1,45 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'shell/commands/keymeta_command_base'
+
+module Shell
+ module Commands
+ # EnableKeyManagement provides a Ruby interface to enable key management via the
+ # HBase Key Management API.
+ class EnableKeyManagement < KeymetaCommandBase
+ def help
+ <<-EOF
+Enable key management for a given cust:namespace (cust is Base64 encoded).
+If no namespace is specified, the global namespace (*) is used.
+
+Example:
+ hbase> enable_key_management 'cust:namespace'
+ hbase> enable_key_management 'cust'
+ EOF
+ end
+
+ def command(key_info)
+ statuses = [keymeta_admin.enable_key_management(key_info)]
+ print_key_statuses(statuses)
+ end
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb b/hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb
new file mode 100644
index 000000000000..98a57766831a
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb
@@ -0,0 +1,45 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+module Shell
+ module Commands
+ # KeymetaCommandBase is a base class for all key management commands.
+ class KeymetaCommandBase < Command
+ def print_key_statuses(statuses)
+ formatter.header(%w[ENCODED-KEY NAMESPACE STATUS METADATA METADATA-HASH REFRESH-TIMESTAMP])
+ statuses.each { |status| formatter.row(format_status_row(status)) }
+ formatter.footer(statuses.size)
+ end
+
+ private
+
+ def format_status_row(status)
+ [
+ status.getKeyCustodianEncoded,
+ status.getKeyNamespace,
+ status.getKeyState.toString,
+ status.getKeyMetadata,
+ status.getKeyMetadataHashEncoded,
+ status.getRefreshTimestamp
+ ]
+ end
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/shell/commands/refresh_managed_keys.rb b/hbase-shell/src/main/ruby/shell/commands/refresh_managed_keys.rb
new file mode 100644
index 000000000000..f4c462ceee19
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/refresh_managed_keys.rb
@@ -0,0 +1,45 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'shell/commands/keymeta_command_base'
+
+module Shell
+ module Commands
+ # RefreshManagedKeys provides a Ruby interface to refresh managed keys via the
+ # HBase Key Management API.
+ class RefreshManagedKeys < KeymetaCommandBase
+ def help
+ <<-EOF
+Refresh all managed keys for a given cust:namespace (cust is Base64 encoded).
+If no namespace is specified, the global namespace (*) is used.
+
+Example:
+ hbase> refresh_managed_keys 'cust:namespace'
+ hbase> refresh_managed_keys 'cust'
+ EOF
+ end
+
+ def command(key_info)
+ keymeta_admin.refresh_managed_keys(key_info)
+ puts "Managed keys refreshed successfully"
+ end
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/shell/commands/rotate_managed_key.rb b/hbase-shell/src/main/ruby/shell/commands/rotate_managed_key.rb
new file mode 100644
index 000000000000..6372d30839e5
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/rotate_managed_key.rb
@@ -0,0 +1,45 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'shell/commands/keymeta_command_base'
+
+module Shell
+ module Commands
+ # RotateManagedKey provides a Ruby interface to rotate a managed key via the
+ # HBase Key Management API.
+ class RotateManagedKey < KeymetaCommandBase
+ def help
+ <<-EOF
+Rotate the ACTIVE managed key for a given cust:namespace (cust is Base64 encoded).
+If no namespace is specified, the global namespace (*) is used.
+
+Example:
+ hbase> rotate_managed_key 'cust:namespace'
+ hbase> rotate_managed_key 'cust'
+ EOF
+ end
+
+ def command(key_info)
+ statuses = [keymeta_admin.rotate_managed_key(key_info)]
+ print_key_statuses(statuses)
+ end
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/shell/commands/rotate_stk.rb b/hbase-shell/src/main/ruby/shell/commands/rotate_stk.rb
new file mode 100644
index 000000000000..f1c754487c40
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/rotate_stk.rb
@@ -0,0 +1,51 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'shell/commands/keymeta_command_base'
+
+module Shell
+ module Commands
+ # RotateStk provides a Ruby interface to rotate the System Key (STK)
+ # via the HBase Key Management API.
+ class RotateStk < KeymetaCommandBase
+ def help
+ <<-EOF
+Rotate the System Key (STK) if a new key is detected.
+This command checks for a new system key and propagates it to all region servers.
+Returns true if a new key was detected and rotated, false otherwise.
+
+Example:
+ hbase> rotate_stk
+ EOF
+ end
+
+ def command
+ result = keymeta_admin.rotate_stk
+ if result
+ formatter.row(['System Key rotation was performed successfully and cache was refreshed ' \
+ 'on all region servers'])
+ else
+ formatter.row(['No System Key change was detected'])
+ end
+ result
+ end
+ end
+ end
+end
diff --git a/hbase-shell/src/main/ruby/shell/commands/show_key_status.rb b/hbase-shell/src/main/ruby/shell/commands/show_key_status.rb
new file mode 100644
index 000000000000..d3670d094ed3
--- /dev/null
+++ b/hbase-shell/src/main/ruby/shell/commands/show_key_status.rb
@@ -0,0 +1,45 @@
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# frozen_string_literal: true
+
+require 'shell/commands/keymeta_command_base'
+
+module Shell
+ module Commands
+ # ShowKeyStatus is a class that provides a Ruby interface to show key statuses via
+ # HBase Key Management API.
+ class ShowKeyStatus < KeymetaCommandBase
+ def help
+ <<-EOF
+Show key statuses for a given cust:namespace (cust in Base64 format).
+If no namespace is specified, the global namespace (*) is used.
+
+Example:
+ hbase> show_key_status 'cust:namespace'
+ hbase> show_key_status 'cust'
+ EOF
+ end
+
+ def command(key_info)
+ statuses = keymeta_admin.get_key_statuses(key_info)
+ print_key_statuses(statuses)
+ end
+ end
+ end
+end
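The help text above encodes two conventions: the custodian id is Base64-encoded, and a missing namespace falls back to the global `*`. A self-contained sketch of that argument shape (the real parsing lives in `KeymetaCommandBase`; `parse_key_spec` is a hypothetical helper for illustration):

```ruby
require 'base64'

# Hypothetical helper mirroring the cust[:namespace] argument convention from
# the help text: Base64-encoded custodian id, namespace defaulting to '*'.
# The actual parsing is done by KeymetaCommandBase, not this sketch.
def parse_key_spec(spec)
  cust, namespace = spec.split(':', 2)
  [cust, namespace || '*']
end

cust_encoded = Base64.strict_encode64('cust1') # => "Y3VzdDE="
parse_key_spec("#{cust_encoded}:test_namespace")
parse_key_spec(cust_encoded)
```

Base64's alphabet contains no `:`, so splitting on the first `:` is safe for encoded custodians.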
diff --git a/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaAdminShell.java b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaAdminShell.java
new file mode 100644
index 000000000000..8315d05f3feb
--- /dev/null
+++ b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaAdminShell.java
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.crypto.KeymetaTestUtils;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyStoreKeyProvider;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyTestBase;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.IntegrationTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.jruby.embed.ScriptingContainer;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({ ClientTests.class, IntegrationTests.class })
+public class TestKeymetaAdminShell extends ManagedKeyTestBase implements RubyShellTest {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeymetaAdminShell.class);
+
+ private final ScriptingContainer jruby = new ScriptingContainer();
+
+ @Before
+ public void setUp() throws Exception {
+ final Configuration conf = TEST_UTIL.getConfiguration();
+ // Uncomment these to debug without timing out.
+ // conf.set("zookeeper.session.timeout", "6000000");
+ // conf.set("hbase.rpc.timeout", "6000000");
+ // conf.set("hbase.rpc.read.timeout", "6000000");
+ // conf.set("hbase.rpc.write.timeout", "6000000");
+ // conf.set("hbase.client.operation.timeout", "6000000");
+ // conf.set("hbase.client.scanner.timeout.period", "6000000");
+ // conf.set("hbase.ipc.client.socket.timeout.connect", "6000000");
+ // conf.set("hbase.ipc.client.socket.timeout.read", "6000000");
+ // conf.set("hbase.ipc.client.socket.timeout.write", "6000000");
+ // conf.set("hbase.master.start.timeout.localHBaseCluster", "6000000");
+ // conf.set("hbase.master.init.timeout.localHBaseCluster", "6000000");
+ // conf.set("hbase.client.sync.wait.timeout.msec", "6000000");
+ // conf.set("hbase.client.retries.number", "1000");
+ Map cust_to_key = new HashMap<>();
+ Map cust_to_alias = new HashMap<>();
+ String clusterId = UUID.randomUUID().toString();
+ String SYSTEM_KEY_ALIAS = "system-key-alias";
+ String CUST1 = "cust1";
+ String CUST1_ALIAS = "cust1-alias";
+ String CF_NAMESPACE = "test_table/f";
+ String GLOB_CUST_ALIAS = "glob-cust-alias";
+ String CUSTOM_NAMESPACE = "test_namespace";
+ String CUSTOM_NAMESPACE_ALIAS = "custom-namespace-alias";
+ String CUSTOM_GLOBAL_NAMESPACE = "test_global_namespace";
+ String CUSTOM_GLOBAL_NAMESPACE_ALIAS = "custom-global-namespace-alias";
+ if (isWithKeyManagement()) {
+ String providerParams = KeymetaTestUtils.setupTestKeyStore(TEST_UTIL, true, true, store -> {
+ Properties p = new Properties();
+ try {
+ KeymetaTestUtils.addEntry(conf, 128, store, CUST1_ALIAS, CUST1, true, cust_to_key,
+ cust_to_alias, p);
+ KeymetaTestUtils.addEntry(conf, 128, store, CUST1_ALIAS, CUST1, true, cust_to_key,
+ cust_to_alias, p, CF_NAMESPACE);
+ KeymetaTestUtils.addEntry(conf, 128, store, GLOB_CUST_ALIAS, "*", true, cust_to_key,
+ cust_to_alias, p);
+ KeymetaTestUtils.addEntry(conf, 128, store, SYSTEM_KEY_ALIAS, clusterId, true,
+ cust_to_key, cust_to_alias, p);
+ KeymetaTestUtils.addEntry(conf, 128, store, CUSTOM_NAMESPACE_ALIAS, CUST1, true,
+ cust_to_key, cust_to_alias, p, CUSTOM_NAMESPACE);
+ KeymetaTestUtils.addEntry(conf, 128, store, CUSTOM_GLOBAL_NAMESPACE_ALIAS, "*", true,
+ cust_to_key, cust_to_alias, p, CUSTOM_GLOBAL_NAMESPACE);
+ } catch (Exception e) {
+ throw new RuntimeException(e);
+ }
+ return p;
+ });
+ // byte[] systemKey = cust_to_key.get(new Bytes(clusterId.getBytes())).get();
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_STORE_SYSTEM_KEY_NAME_CONF_KEY, SYSTEM_KEY_ALIAS);
+ conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_PARAMETERS_KEY, providerParams);
+ }
+ RubyShellTest.setUpConfig(this);
+ super.setUp();
+ RubyShellTest.setUpJRubyRuntime(this);
+ RubyShellTest.doTestSetup(this);
+ addCustodianRubyEnvVars(jruby, "GLOB_CUST", "*");
+ addCustodianRubyEnvVars(jruby, "CUST1", CUST1);
+ jruby.put("$TEST", this);
+ }
+
+ @Override
+ public HBaseTestingUtil getTEST_UTIL() {
+ return TEST_UTIL;
+ }
+
+ @Override
+ public ScriptingContainer getJRuby() {
+ return jruby;
+ }
+
+ @Override
+ public String getSuitePattern() {
+ return "**/*_keymeta_test.rb";
+ }
+
+ @Test
+ public void testRunShellTests() throws Exception {
+ RubyShellTest.testRunShellTests(this);
+ }
+
+ @Override
+ protected Class<? extends ManagedKeyProvider> getKeyProviderClass() {
+ return ManagedKeyStoreKeyProvider.class;
+ }
+
+ public static void addCustodianRubyEnvVars(ScriptingContainer jruby, String custId,
+ String custodian) {
+ jruby.put("$" + custId, custodian);
+ jruby.put("$" + custId + "_ALIAS", custodian + "-alias");
+ jruby.put("$" + custId + "_ENCODED", ManagedKeyProvider.encodeToStr(custodian.getBytes()));
+ }
+}
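`addCustodianRubyEnvVars` publishes three JRuby globals per custodian that the Ruby suites consume (`$CUST1`, `$CUST1_ALIAS`, `$CUST1_ENCODED`). A Ruby-side sketch of that naming convention; treating `ManagedKeyProvider.encodeToStr` as plain Base64 is an assumption here, consistent with the "cust in Base64 format" shell help text:

```ruby
# Sketch of the JRuby globals the Java harness exports per custodian: the raw
# id, an "-alias"-suffixed alias, and the encoded form. The Base64 encoding of
# the encoded form is an assumption, not taken from ManagedKeyProvider itself.
def custodian_globals(cust_id, custodian)
  {
    "$#{cust_id}" => custodian,
    "$#{cust_id}_ALIAS" => "#{custodian}-alias",
    "$#{cust_id}_ENCODED" => [custodian].pack('m0') # strict Base64, no newline
  }
end
```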
diff --git a/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMigration.java b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMigration.java
new file mode 100644
index 000000000000..efe124989e56
--- /dev/null
+++ b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMigration.java
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.IntegrationTests;
+import org.junit.ClassRule;
+import org.junit.experimental.categories.Category;
+
+@Category({ ClientTests.class, IntegrationTests.class })
+public class TestKeymetaMigration extends TestKeymetaAdminShell {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeymetaMigration.class);
+
+ @Override
+ public String getSuitePattern() {
+ return "**/*_keymeta_migration_test.rb";
+ }
+
+ @Override
+ protected boolean isWithKeyManagement() {
+ return false;
+ }
+
+ @Override
+ protected boolean isWithMiniClusterStart() {
+ return false;
+ }
+
+ @Override
+ protected TableName getSystemTableNameToWaitFor() {
+ return TableName.META_TABLE_NAME;
+ }
+}
diff --git a/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java
new file mode 100644
index 000000000000..cc4aabe4ff4e
--- /dev/null
+++ b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyTestBase;
+import org.apache.hadoop.hbase.testclassification.ClientTests;
+import org.apache.hadoop.hbase.testclassification.IntegrationTests;
+import org.jruby.embed.ScriptingContainer;
+import org.junit.Before;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+
+@Category({ ClientTests.class, IntegrationTests.class })
+public class TestKeymetaMockProviderShell extends ManagedKeyTestBase implements RubyShellTest {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeymetaMockProviderShell.class);
+
+ private final ScriptingContainer jruby = new ScriptingContainer();
+
+ @Before
+ @Override
+ public void setUp() throws Exception {
+ // Uncomment these to debug without timing out.
+ // final Configuration conf = TEST_UTIL.getConfiguration();
+ // conf.set("zookeeper.session.timeout", "6000000");
+ // conf.set("hbase.rpc.timeout", "6000000");
+ // conf.set("hbase.rpc.read.timeout", "6000000");
+ // conf.set("hbase.rpc.write.timeout", "6000000");
+ // conf.set("hbase.client.operation.timeout", "6000000");
+ // conf.set("hbase.client.scanner.timeout.period", "6000000");
+ // conf.set("hbase.ipc.client.socket.timeout.connect", "6000000");
+ // conf.set("hbase.ipc.client.socket.timeout.read", "6000000");
+ // conf.set("hbase.ipc.client.socket.timeout.write", "6000000");
+ // conf.set("hbase.master.start.timeout.localHBaseCluster", "6000000");
+ // conf.set("hbase.master.init.timeout.localHBaseCluster", "6000000");
+ // conf.set("hbase.client.sync.wait.timeout.msec", "6000000");
+ // conf.set("hbase.client.retries.number", "1000");
+ RubyShellTest.setUpConfig(this);
+ super.setUp();
+ RubyShellTest.setUpJRubyRuntime(this);
+ RubyShellTest.doTestSetup(this);
+ jruby.put("$TEST", this);
+ }
+
+ @Override
+ public HBaseTestingUtil getTEST_UTIL() {
+ return TEST_UTIL;
+ }
+
+ @Override
+ public ScriptingContainer getJRuby() {
+ return jruby;
+ }
+
+ @Override
+ public String getSuitePattern() {
+ return "**/*_keymeta_mock_provider_test.rb";
+ }
+
+ @Test
+ public void testRunShellTests() throws Exception {
+ RubyShellTest.testRunShellTests(this);
+ }
+}
diff --git a/hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb b/hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb
new file mode 100644
index 000000000000..061e3fc71230
--- /dev/null
+++ b/hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb
@@ -0,0 +1,143 @@
+# frozen_string_literal: true
+
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase_shell'
+require 'stringio'
+require 'hbase_constants'
+require 'hbase/hbase'
+require 'hbase/table'
+
+java_import org.apache.hadoop.hbase.io.crypto.Encryption
+java_import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider
+java_import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider
+java_import org.apache.hadoop.hbase.ipc.RemoteWithExtrasException
+
+module Hbase
+ # Test class for keymeta admin functionality with MockManagedKeyProvider
+ class KeymetaAdminMockProviderTest < Test::Unit::TestCase
+ include TestHelpers
+
+ def setup
+ setup_hbase
+ @key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
+ # Enable multikey generation mode for dynamic key creation on rotate
+ @key_provider.setMultikeyGenMode(true)
+
+ # Set up custodian variables
+ @glob_cust = '*'
+ @glob_cust_encoded = ManagedKeyProvider.encodeToStr(@glob_cust.bytes.to_a)
+ end
+
+ define_test 'Test rotate managed key operation' do
+ test_rotate_key(@glob_cust_encoded, '*')
+ test_rotate_key(@glob_cust_encoded, 'test_namespace')
+ end
+
+ def test_rotate_key(cust, namespace)
+ cust_and_namespace = "#{cust}:#{namespace}"
+ puts "Testing rotate_managed_key for #{cust_and_namespace}"
+
+ # 1. Enable key management first
+ output = capture_stdout { @shell.command('enable_key_management', cust_and_namespace) }
+ puts "enable_key_management output: #{output}"
+ assert(output.include?("#{cust} #{namespace} ACTIVE"),
+ "Expected ACTIVE key after enable, got: #{output}")
+
+ # Verify initial state - should have 1 ACTIVE key
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status before rotation: #{output}"
+ assert(output.include?('1 row(s)'), "Expected 1 key before rotation, got: #{output}")
+
+ # 2. Rotate the managed key (mock provider will generate a new key due to multikeyGenMode)
+ output = capture_stdout { @shell.command('rotate_managed_key', cust_and_namespace) }
+ puts "rotate_managed_key output: #{output}"
+ assert(output.include?("#{cust} #{namespace}"),
+ "Expected key info in rotation output, got: #{output}")
+
+ # 3. Verify we now have both ACTIVE and INACTIVE keys
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status after rotation: #{output}"
+ assert(output.include?('ACTIVE'),
+ "Expected ACTIVE key after rotation, got: #{output}")
+ assert(output.include?('INACTIVE'),
+ "Expected INACTIVE key after rotation, got: #{output}")
+ assert(output.include?('2 row(s)'),
+ "Expected 2 keys after rotation, got: #{output}")
+
+ # 4. Rotate again to test multiple rotations
+ output = capture_stdout { @shell.command('rotate_managed_key', cust_and_namespace) }
+ puts "rotate_managed_key (second) output: #{output}"
+ assert(output.include?("#{cust} #{namespace}"),
+ "Expected key info in second rotation output, got: #{output}")
+
+ # Should now have 3 keys: 1 ACTIVE, 2 INACTIVE
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status after second rotation: #{output}"
+ assert(output.include?('3 row(s)'),
+ "Expected 3 keys after second rotation, got: #{output}")
+
+ # Cleanup - disable all keys
+ @shell.command('disable_key_management', cust_and_namespace)
+ end
+
+ define_test 'Test rotate without active key fails' do
+ cust_and_namespace = "#{@glob_cust_encoded}:nonexistent_namespace"
+ puts 'Testing rotate_managed_key on non-existent namespace'
+
+ # Attempt to rotate when no key management is enabled should fail
+ e = assert_raises(RemoteWithExtrasException) do
+ @shell.command('rotate_managed_key', cust_and_namespace)
+ end
+ assert_true(e.is_do_not_retry)
+ end
+
+ define_test 'Test refresh managed keys with mock provider' do
+ cust_and_namespace = "#{@glob_cust_encoded}:test_refresh"
+ puts "Testing refresh_managed_keys for #{cust_and_namespace}"
+
+ # 1. Enable key management
+ output = capture_stdout { @shell.command('enable_key_management', cust_and_namespace) }
+ puts "enable_key_management output: #{output}"
+ assert(output.include?("#{@glob_cust_encoded} test_refresh ACTIVE"))
+
+ # 2. Rotate to create multiple keys
+ output = capture_stdout { @shell.command('rotate_managed_key', cust_and_namespace) }
+ puts "rotate_managed_key output: #{output}"
+ assert(output.include?("#{@glob_cust_encoded} test_refresh"),
+ "Expected key info in rotation output, got: #{output}")
+
+ # 3. Refresh managed keys - should succeed without changing state
+ output = capture_stdout { @shell.command('refresh_managed_keys', cust_and_namespace) }
+ puts "refresh_managed_keys output: #{output}"
+ assert(output.include?('Managed keys refreshed successfully'),
+ "Expected success message, got: #{output}")
+
+ # Verify keys still exist after refresh
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ assert(output.include?('ACTIVE'), "Expected ACTIVE key after refresh")
+ assert(output.include?('INACTIVE'), "Expected INACTIVE key after refresh")
+
+ # Cleanup
+ @shell.command('disable_key_management', cust_and_namespace)
+ end
+ end
+end
+
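The row-count assertions in the rotation test above follow a simple invariant: enabling creates one ACTIVE key, and each rotation retires the current ACTIVE key to INACTIVE while minting a fresh ACTIVE one. A sketch of the expected counts after n rotations (illustrative helper, not part of the patch):

```ruby
# Invariant behind the '1 row(s)' / '2 row(s)' / '3 row(s)' assertions:
# enable_key_management yields one ACTIVE key; each rotate_managed_key retires
# it to INACTIVE and creates a new ACTIVE key, so n rotations leave n+1 keys.
def expected_key_counts(rotations)
  { active: 1, inactive: rotations, total: rotations + 1 }
end
```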
diff --git a/hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb b/hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb
new file mode 100644
index 000000000000..ab413ecbb0bb
--- /dev/null
+++ b/hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb
@@ -0,0 +1,193 @@
+# frozen_string_literal: true
+
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase_shell'
+require 'stringio'
+require 'hbase_constants'
+require 'hbase/hbase'
+require 'hbase/table'
+
+module Hbase
+ # Test class for keymeta admin functionality
+ class KeymetaAdminTest < Test::Unit::TestCase
+ include TestHelpers
+
+ def setup
+ setup_hbase
+ end
+
+ define_test 'Test enable key management' do
+ test_key_management($CUST1_ENCODED, '*')
+ test_key_management($CUST1_ENCODED, 'test_table/f')
+ test_key_management($CUST1_ENCODED, 'test_namespace')
+ test_key_management($GLOB_CUST_ENCODED, '*')
+
+ puts 'Testing that cluster can be restarted when key management is enabled'
+ $TEST.restartMiniCluster
+ puts 'Cluster restarted, testing key management again'
+ setup_hbase
+ test_key_management($GLOB_CUST_ENCODED, '*')
+ puts 'Key management test complete'
+ end
+
+ def test_key_management(cust, namespace)
+ # Repeat the enable twice in a loop and ensure multiple enables succeed and return the
+ # same output.
+ 2.times do
+ cust_and_namespace = "#{cust}:#{namespace}"
+ output = capture_stdout { @shell.command('enable_key_management', cust_and_namespace) }
+ puts "enable_key_management output: #{output}"
+ assert(output.include?("#{cust} #{namespace} ACTIVE"))
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status output: #{output}"
+ assert(output.include?("#{cust} #{namespace} ACTIVE"))
+ assert(output.include?('1 row(s)'))
+ end
+ end
+
+ define_test 'Decode failure raises friendly error' do
+ assert_raises(ArgumentError) do
+ @shell.command('enable_key_management', '!!!:namespace')
+ end
+
+ error = assert_raises(ArgumentError) do
+ @shell.command('show_key_status', '!!!:namespace')
+ end
+ assert_match(/Failed to decode Base64 encoded string '!!!'/, error.message)
+ end
+
+ define_test 'Test key management operations without rotation' do
+ test_key_operations($CUST1_ENCODED, '*')
+ test_key_operations($CUST1_ENCODED, 'test_namespace')
+ test_key_operations($GLOB_CUST_ENCODED, '*')
+ end
+
+ def test_key_operations(cust, namespace)
+ cust_and_namespace = "#{cust}:#{namespace}"
+ puts "Testing key management operations for #{cust_and_namespace}"
+
+ # 1. Enable key management
+ output = capture_stdout { @shell.command('enable_key_management', cust_and_namespace) }
+ puts "enable_key_management output: #{output}"
+ assert(output.include?("#{cust} #{namespace} ACTIVE"),
+ "Expected ACTIVE key after enable, got: #{output}")
+
+ # 2. Get the initial key metadata hash for use in disable_managed_key test
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status output: #{output}"
+ # Extract the key metadata hash from the output (the 4th whitespace-separated field)
+ # Output format: ENCODED-KEY NAMESPACE STATUS METADATA METADATA-HASH REFRESH-TIMESTAMP
+ lines = output.split("\n")
+ key_line = lines.find { |line| line.include?(cust) && line.include?(namespace) }
+ assert_not_nil(key_line, "Could not find key line in output")
+ # Parse the key metadata hash (Base64 encoded)
+ key_metadata_hash = key_line.split[3]
+ assert_not_nil(key_metadata_hash, "Could not extract key metadata hash")
+ puts "Extracted key metadata hash: #{key_metadata_hash}"
+
+ # 3. Refresh managed keys
+ output = capture_stdout { @shell.command('refresh_managed_keys', cust_and_namespace) }
+ puts "refresh_managed_keys output: #{output}"
+ assert(output.include?('Managed keys refreshed successfully'),
+ "Expected success message, got: #{output}")
+ # Verify keys still exist after refresh
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status after refresh: #{output}"
+ assert(output.include?('ACTIVE'), "Expected ACTIVE key after refresh, got: #{output}")
+
+ # 4. Disable a specific managed key
+ output = capture_stdout do
+ @shell.command('disable_managed_key', cust_and_namespace, key_metadata_hash)
+ end
+ puts "disable_managed_key output: #{output}"
+ assert(output.include?("#{cust} #{namespace} DISABLED"),
+ "Expected DISABLED key, got: #{output}")
+ # Verify the key is now DISABLED
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status after disable_managed_key: #{output}"
+ assert(output.include?('DISABLED'), "Expected DISABLED state, got: #{output}")
+
+ # 5. Re-enable key management for next step
+ @shell.command('enable_key_management', cust_and_namespace)
+
+ # 6. Disable all key management
+ output = capture_stdout { @shell.command('disable_key_management', cust_and_namespace) }
+ puts "disable_key_management output: #{output}"
+ assert(output.include?("#{cust} #{namespace} DISABLED"),
+ "Expected DISABLED keys, got: #{output}")
+ # Verify all keys are now INACTIVE
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status after disable_key_management: #{output}"
+ # All rows should show INACTIVE state
+ lines = output.split("\n")
+ key_lines = lines.select { |line| line.include?(cust) && line.include?(namespace) }
+ key_lines.each do |line|
+ assert(line.include?('INACTIVE'), "Expected all keys to be INACTIVE, but found: #{line}")
+ end
+
+ # 7. Refresh shouldn't do anything since the key management is disabled.
+ output = capture_stdout do
+ @shell.command('refresh_managed_keys', cust_and_namespace)
+ end
+ puts "refresh_managed_keys output: #{output}"
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status after refresh_managed_keys: #{output}"
+ assert(!output.include?(' ACTIVE '), "Expected all keys to be INACTIVE, but found: #{output}")
+
+ # 8. Enable key management again
+ @shell.command('enable_key_management', cust_and_namespace)
+
+ # 9. Get the key metadata hash for the enabled key
+ output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
+ puts "show_key_status after enable_key_management: #{output}"
+ assert(output.include?('ACTIVE'), "Expected ACTIVE key after enable_key_management, got: #{output}")
+ assert(output.include?('1 row(s)'))
+ end
+
+ define_test 'Test refresh error handling' do
+ # Test refresh on non-existent key management (should not fail, just no-op)
+ cust_and_namespace = "#{$CUST1_ENCODED}:nonexistent_namespace"
+ output = capture_stdout do
+ @shell.command('refresh_managed_keys', cust_and_namespace)
+ end
+ puts "refresh_managed_keys on non-existent namespace: #{output}"
+ assert(output.include?('Managed keys refreshed successfully'),
+ "Expected success message even for non-existent namespace, got: #{output}")
+ end
+
+ define_test 'Test disable operations error handling' do
+ # Test disable_managed_key with invalid metadata hash
+ cust_and_namespace = "#{$CUST1_ENCODED}:*"
+ error = assert_raises(ArgumentError) do
+ @shell.command('disable_managed_key', cust_and_namespace, '!!!invalid!!!')
+ end
+ assert_match(/Failed to decode Base64 encoded string '!!!invalid!!!'/, error.message)
+
+ # Test disable_key_management on non-existent namespace (should succeed, no-op)
+ cust_and_namespace = "#{$CUST1_ENCODED}:nonexistent_for_disable"
+ output = capture_stdout { @shell.command('disable_key_management', cust_and_namespace) }
+ puts "disable_key_management on non-existent namespace: #{output}"
+ # Disabling on a fresh namespace still records a single DISABLED entry
+ assert(output.include?('1 row(s)'))
+ assert(output.include?(" DISABLED "), "Expected DISABLED key, got: #{output}")
+ end
+ end
+end
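The metadata-hash extraction in `test_key_operations` whitespace-splits a status row and takes `split[3]`. A standalone sketch of that parsing; the sample row below is made up for illustration and is not the server's exact output:

```ruby
# Mirror of the test's key_line.split[3] extraction: whitespace-split a status
# row and take the 4th field as the key metadata hash. The sample row is a
# fabricated illustration of the column layout, not real command output.
def metadata_hash_from(line)
  line.split[3]
end

row = 'Y3VzdDE= test_namespace ACTIVE c29tZWhhc2g= 1700000000'
metadata_hash_from(row) # => 'c29tZWhhc2g='
```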
diff --git a/hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb b/hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb
new file mode 100644
index 000000000000..35ad85785e0f
--- /dev/null
+++ b/hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb
@@ -0,0 +1,177 @@
+# frozen_string_literal: true
+
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase_shell'
+require 'stringio'
+require 'hbase_constants'
+require 'hbase/hbase'
+require 'hbase/table'
+
+java_import org.apache.hadoop.conf.Configuration
+java_import org.apache.hadoop.fs.FSDataInputStream
+java_import org.apache.hadoop.hbase.CellUtil
+java_import org.apache.hadoop.hbase.HConstants
+java_import org.apache.hadoop.hbase.client.Get
+java_import org.apache.hadoop.hbase.io.crypto.Encryption
+java_import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider
+java_import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider
+java_import org.apache.hadoop.hbase.io.hfile.CorruptHFileException
+java_import org.apache.hadoop.hbase.io.hfile.FixedFileTrailer
+java_import org.apache.hadoop.hbase.io.hfile.HFile
+java_import org.apache.hadoop.hbase.io.hfile.CacheConfig
+java_import org.apache.hadoop.hbase.util.Bytes
+
+module Hbase
+ # Test class for encrypted table keymeta functionality
+ class EncryptedTableKeymetaTest < Test::Unit::TestCase
+ include TestHelpers
+
+ def setup
+ setup_hbase
+ @test_table = "enctest#{Time.now.to_i}"
+ @connection = $TEST_CLUSTER.connection
+ end
+
+ define_test 'Test table put/get with encryption' do
+ # Custodian is currently not supported, so this will end up falling back to local key
+ # generation.
+ test_table_put_get_with_encryption($CUST1_ENCODED, '*',
+ { 'NAME' => 'f', 'ENCRYPTION' => 'AES' },
+ true)
+ end
+
+ define_test 'Test table with custom namespace attribute in Column Family' do
+ custom_namespace = 'test_global_namespace'
+ test_table_put_get_with_encryption(
+ $GLOB_CUST_ENCODED, custom_namespace,
+ { 'NAME' => 'f', 'ENCRYPTION' => 'AES', 'ENCRYPTION_KEY_NAMESPACE' => custom_namespace },
+ false
+ )
+ end
+
+ def test_table_put_get_with_encryption(cust, namespace, table_attrs, fallback_scenario)
+ cust_and_namespace = "#{cust}:#{namespace}"
+ output = capture_stdout { @shell.command('enable_key_management', cust_and_namespace) }
+ assert(output.include?("#{cust} #{namespace} ACTIVE"))
+ @shell.command(:create, @test_table, table_attrs)
+ test_table = table(@test_table)
+ test_table.put('1', 'f:a', '2')
+ puts "Added a row, now flushing table #{@test_table}"
+ command(:flush, @test_table)
+
+ table_name = TableName.valueOf(@test_table)
+ store_file_info = nil
+ $TEST_CLUSTER.getRSForFirstRegionInTable(table_name).getRegions(table_name).each do |region|
+ region.getStores.each do |store|
+ store.getStorefiles.each do |storefile|
+ store_file_info = storefile.getFileInfo
+ end
+ end
+ end
+ assert_not_nil(store_file_info)
+ hfile_info = store_file_info.getHFileInfo
+ assert_not_nil(hfile_info)
+ live_trailer = hfile_info.getTrailer
+ assert_trailer(live_trailer)
+ assert_equal(namespace, live_trailer.getKeyNamespace)
+
+ # When the active key is expected to be used, we can validate the key bytes in the
+ # context against the actual key from the provider.
+ unless fallback_scenario
+ encryption_context = hfile_info.getHFileContext.getEncryptionContext
+ assert_not_nil(encryption_context)
+ assert_not_nil(encryption_context.getKeyBytes)
+ key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
+ key_data = key_provider.getManagedKey(ManagedKeyProvider.decodeToBytes(cust), namespace)
+ assert_not_nil(key_data)
+ assert_equal(namespace, key_data.getKeyNamespace)
+ assert_equal(key_data.getTheKey.getEncoded, encryption_context.getKeyBytes)
+ end
+
+ ## Disable table to ensure that the stores are not cached.
+ command(:disable, @test_table)
+ assert(!command(:is_enabled, @test_table))
+
+ # Open FSDataInputStream to the path pointed to by the store_file_info
+ fs = store_file_info.getFileSystem
+ fio = fs.open(store_file_info.getPath)
+ assert_not_nil(fio)
+ # Read the trailer using FixedFileTrailer
+ offline_trailer = FixedFileTrailer.readFromStream(
+ fio, fs.getFileStatus(store_file_info.getPath).getLen
+ )
+ fio.close
+ assert_trailer(offline_trailer, live_trailer)
+
+ # Test the ability to read an encrypted HFile in an offline manner
+ reader = HFile.createReader(fs, store_file_info.getPath, CacheConfig::DISABLED, true,
+ $TEST_CLUSTER.getConfiguration)
+ assert_not_nil(reader)
+ offline_trailer = reader.getTrailer
+ assert_trailer(offline_trailer, live_trailer)
+ scanner = reader.getScanner($TEST_CLUSTER.getConfiguration, false, false)
+ assert_true(scanner.seekTo)
+ cell = scanner.getCell
+ assert_equal('1', Bytes.toString(CellUtil.cloneRow(cell)))
+ assert_equal('2', Bytes.toString(CellUtil.cloneValue(cell)))
+ assert_false(scanner.next)
+
+ # Confirm that offline reading fails when the configured key provider cannot supply the key
+ Encryption.clearKeyProviderCache
+ conf = Configuration.new($TEST_CLUSTER.getConfiguration)
+ conf.set(HConstants::CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ MockManagedKeyProvider.java_class.getName)
+ # This is expected to fail with CorruptHFileException.
+ e = assert_raises(CorruptHFileException) do
+ reader = HFile.createReader(fs, store_file_info.getPath, CacheConfig::DISABLED, true, conf)
+ end
+ assert_true(e.message.include?(
+ "Problem reading HFile Trailer from file #{store_file_info.getPath}"
+ ))
+ Encryption.clearKeyProviderCache
+
+ ## Re-enable the table so it can be queried.
+ command(:enable, @test_table)
+ assert(command(:is_enabled, @test_table))
+
+ get = Get.new(Bytes.toBytes('1'))
+ res = test_table.table.get(get)
+ puts "res for row '1' and column f:a: #{res}"
+ assert_false(res.isEmpty)
+ assert_equal('2', Bytes.toString(res.getValue(Bytes.toBytes('f'), Bytes.toBytes('a'))))
+ end
+
+ def assert_trailer(offline_trailer, live_trailer = nil)
+ assert_not_nil(offline_trailer)
+ assert_not_nil(offline_trailer.getEncryptionKey)
+ assert_not_nil(offline_trailer.getKEKMetadata)
+ assert_not_nil(offline_trailer.getKEKChecksum)
+ assert_not_nil(offline_trailer.getKeyNamespace)
+
+ return unless live_trailer
+
+ assert_equal(live_trailer.getEncryptionKey, offline_trailer.getEncryptionKey)
+ assert_equal(live_trailer.getKEKMetadata, offline_trailer.getKEKMetadata)
+ assert_equal(live_trailer.getKEKChecksum, offline_trailer.getKEKChecksum)
+ assert_equal(live_trailer.getKeyNamespace, offline_trailer.getKeyNamespace)
+ end
+ end
+end
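The offline-read checks above expect a `CorruptHFileException` when the reader's key configuration cannot supply the right key. The same property can be sketched in plain Ruby (no HBase or JRuby classes; AES-256-GCM via the `openssl` stdlib stands in for HBase's AES encryption, and all names here are illustrative): data sealed under one key is not readable under another, and the failure is an explicit authentication error rather than garbage output.

```ruby
require 'openssl'
require 'securerandom'

# Encrypt plaintext under an AES-256-GCM key; return iv, ciphertext and auth tag.
def seal(key, plaintext)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
  cipher.key = key
  iv = cipher.random_iv
  ct = cipher.update(plaintext) + cipher.final
  [iv, ct, cipher.auth_tag]
end

# Decrypt; raises OpenSSL::Cipher::CipherError if the key or tag is wrong.
def open_sealed(key, iv, ct, tag)
  cipher = OpenSSL::Cipher.new('aes-256-gcm').decrypt
  cipher.key = key
  cipher.iv = iv
  cipher.auth_tag = tag
  cipher.update(ct) + cipher.final
end

good_key = SecureRandom.bytes(32)
iv, ct, tag = seal(good_key, 'row1/f:a=2')
puts open_sealed(good_key, iv, ct, tag) # round-trips with the right key

begin
  open_sealed(SecureRandom.bytes(32), iv, ct, tag)
rescue OpenSSL::Cipher::CipherError
  puts 'decryption with the wrong key fails, as the offline-read test expects'
end
```

This mirrors why the test clears the key provider cache and swaps in `MockManagedKeyProvider` before re-reading: the trailer decryption step fails fast instead of yielding corrupt data.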
diff --git a/hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb b/hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb
new file mode 100644
index 000000000000..d527eea8240c
--- /dev/null
+++ b/hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb
@@ -0,0 +1,663 @@
+# frozen_string_literal: true
+
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase_shell'
+require 'stringio'
+require 'hbase_constants'
+require 'hbase/hbase'
+require 'hbase/table'
+require 'tempfile'
+require 'fileutils'
+
+java_import org.apache.hadoop.conf.Configuration
+java_import org.apache.hadoop.fs.FSDataInputStream
+java_import org.apache.hadoop.hbase.CellUtil
+java_import org.apache.hadoop.hbase.HConstants
+java_import org.apache.hadoop.hbase.TableName
+java_import org.apache.hadoop.hbase.client.Get
+java_import org.apache.hadoop.hbase.client.Scan
+java_import org.apache.hadoop.hbase.io.crypto.Encryption
+java_import org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider
+java_import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider
+java_import org.apache.hadoop.hbase.io.crypto.ManagedKeyStoreKeyProvider
+java_import org.apache.hadoop.hbase.io.hfile.FixedFileTrailer
+java_import org.apache.hadoop.hbase.io.hfile.HFile
+java_import org.apache.hadoop.hbase.io.hfile.CacheConfig
+java_import org.apache.hadoop.hbase.util.Bytes
+java_import org.apache.hadoop.hbase.keymeta.KeymetaServiceEndpoint
+java_import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor
+java_import org.apache.hadoop.hbase.security.EncryptionUtil
+java_import java.security.KeyStore
+java_import java.security.MessageDigest
+java_import javax.crypto.spec.SecretKeySpec
+java_import java.io.FileOutputStream
+java_import java.net.URLEncoder
+java_import java.util.Base64
+
+module Hbase
+ # Test class for key provider migration functionality
+ class KeyProviderKeymetaMigrationTest < Test::Unit::TestCase
+ include TestHelpers
+
+ def setup
+ @test_timestamp = Time.now.to_i.to_s
+ @master_key_alias = 'masterkey'
+ @shared_key_alias = 'sharedkey'
+ @table_key_alias = 'tablelevelkey'
+ @cf_key1_alias = 'cfkey1'
+ @cf_key2_alias = 'cfkey2'
+ @keystore_password = 'password'
+
+ # Test table names
+ @table_no_encryption = "no_enc_#{@test_timestamp}"
+ @table_random_key = "random_key_#{@test_timestamp}"
+ @table_table_key = "table_key_#{@test_timestamp}"
+ @table_shared_key1 = "shared1_#{@test_timestamp}"
+ @table_shared_key2 = "shared2_#{@test_timestamp}"
+ @table_cf_keys = "cf_keys_#{@test_timestamp}"
+
+ # Unified table metadata with CFs and expected namespaces
+ @tables_metadata = {
+ @table_no_encryption => {
+ cfs: ['f'],
+ expected_namespace: { 'f' => nil },
+ no_encryption: true
+ },
+ @table_random_key => {
+ cfs: ['f'],
+ expected_namespace: { 'f' => nil }
+ },
+ @table_table_key => {
+ cfs: ['f'],
+ expected_namespace: { 'f' => @table_table_key }
+ },
+ @table_shared_key1 => {
+ cfs: ['f'],
+ expected_namespace: { 'f' => 'shared-global-key' }
+ },
+ @table_shared_key2 => {
+ cfs: ['f'],
+ expected_namespace: { 'f' => 'shared-global-key' }
+ },
+ @table_cf_keys => {
+ cfs: %w[cf1 cf2],
+ expected_namespace: {
+ 'cf1' => "#{@table_cf_keys}/cf1",
+ 'cf2' => "#{@table_cf_keys}/cf2"
+ }
+ }
+ }
+
+ # Setup initial KeyStoreKeyProvider
+ setup_old_key_provider
+ puts ' >> Starting Cluster'
+ $TEST.startMiniCluster
+ puts ' >> Cluster started'
+
+ setup_hbase
+ end
+
+ define_test 'Test complete key provider migration' do
+ puts "\n=== Starting Key Provider Migration Test ==="
+
+ # Step 1-3: Setup old provider and create tables
+ create_test_tables
+ puts "\n--- Validating initial table operations ---"
+ validate_pre_migration_operations(false)
+
+ # Step 4: Setup new provider and restart
+ setup_new_key_provider
+ restart_cluster_and_validate
+
+ # Step 5: Perform migration
+ migrate_tables_step_by_step
+
+ # Step 6: Cleanup and final validation
+ cleanup_old_provider_and_validate
+
+ puts "\n=== Migration Test Completed Successfully ==="
+ end
+
+ private
+
+ def setup_old_key_provider
+ puts "\n--- Setting up old KeyStoreKeyProvider ---"
+
+ # Use proper test directory (similar to KeymetaTestUtils.setupTestKeyStore)
+ test_data_dir = $TEST_CLUSTER.getDataTestDir("old_keystore_#{@test_timestamp}").toString
+ FileUtils.mkdir_p(test_data_dir)
+ @old_keystore_file = File.join(test_data_dir, 'keystore.jceks')
+ puts " >> Old keystore file: #{@old_keystore_file}"
+
+ # Create keystore with only the master key
+ # ENCRYPTION_KEY attributes generate their own keys and don't use keystore entries
+ create_keystore(@old_keystore_file, { @master_key_alias => generate_key(@master_key_alias) })
+
+ # Configure old KeyStoreKeyProvider
+ provider_uri = "jceks://#{File.expand_path(@old_keystore_file)}?" \
+ "password=#{@keystore_password}"
+ $TEST_CLUSTER.getConfiguration.set(HConstants::CRYPTO_KEYPROVIDER_CONF_KEY,
+ KeyStoreKeyProvider.java_class.name)
+ $TEST_CLUSTER.getConfiguration.set(HConstants::CRYPTO_KEYPROVIDER_PARAMETERS_KEY,
+ provider_uri)
+ $TEST_CLUSTER.getConfiguration.set(HConstants::CRYPTO_MASTERKEY_NAME_CONF_KEY,
+ @master_key_alias)
+
+ puts " >> Old KeyStoreKeyProvider configured with keystore: #{@old_keystore_file}"
+ end
+
+ def create_test_tables
+ puts "\n--- Creating test tables ---"
+
+ # 1. Table without encryption
+ command(:create, @table_no_encryption, { 'NAME' => 'f' })
+ puts " >> Created table #{@table_no_encryption} without encryption"
+
+ # 2. Table with random key (no explicit key set)
+ command(:create, @table_random_key, { 'NAME' => 'f', 'ENCRYPTION' => 'AES' })
+ puts " >> Created table #{@table_random_key} with random key"
+
+ # 3. Table with table-level key
+ command(:create, @table_table_key, { 'NAME' => 'f', 'ENCRYPTION' => 'AES',
+ 'ENCRYPTION_KEY' => @table_key_alias })
+ puts " >> Created table #{@table_table_key} with table-level key"
+
+ # 4. First table with shared key
+ command(:create, @table_shared_key1, { 'NAME' => 'f', 'ENCRYPTION' => 'AES',
+ 'ENCRYPTION_KEY' => @shared_key_alias })
+ puts " >> Created table #{@table_shared_key1} with shared key"
+
+ # 5. Second table with shared key
+ command(:create, @table_shared_key2, { 'NAME' => 'f', 'ENCRYPTION' => 'AES',
+ 'ENCRYPTION_KEY' => @shared_key_alias })
+ puts " >> Created table #{@table_shared_key2} with shared key"
+
+ # 6. Table with column family specific keys
+ command(:create, @table_cf_keys,
+ { 'NAME' => 'cf1', 'ENCRYPTION' => 'AES', 'ENCRYPTION_KEY' => @cf_key1_alias },
+ { 'NAME' => 'cf2', 'ENCRYPTION' => 'AES', 'ENCRYPTION_KEY' => @cf_key2_alias })
+ puts " >> Created table #{@table_cf_keys} with CF-specific keys"
+ end
+
+ def validate_pre_migration_operations(is_key_management_enabled)
+ @tables_metadata.each do |table_name, metadata|
+ puts " >> test_table_operations on table: #{table_name} with CFs: " \
+ "#{metadata[:cfs].join(', ')}"
+ next if metadata[:no_encryption]
+
+ test_table_operations(table_name, metadata[:cfs])
+ check_hfile_trailers_pre_migration(table_name, metadata[:cfs], is_key_management_enabled)
+ end
+ end
+
+ def test_table_operations(table_name, column_families)
+ puts " >> Testing operations on table #{table_name}"
+
+ test_table = table(table_name)
+
+ column_families.each do |cf|
+ puts " >> Running put operations on CF: #{cf} in table: #{table_name}"
+ # Put data
+ test_table.put('row1', "#{cf}:col1", 'value1')
+ test_table.put('row2', "#{cf}:col2", 'value2')
+ end
+
+ # Flush table
+ puts " >> Flushing table: #{table_name}"
+ $TEST_CLUSTER.flush(TableName.valueOf(table_name))
+
+ # Get data and validate
+ column_families.each do |cf|
+ puts " >> Validating data in CF: #{cf} in table: #{table_name}"
+ get_result = test_table.table.get(Get.new(Bytes.toBytes('row1')))
+ assert_false(get_result.isEmpty)
+ assert_equal('value1',
+ Bytes.toString(get_result.getValue(Bytes.toBytes(cf), Bytes.toBytes('col1'))))
+ end
+
+ puts " >> Operations validated for #{table_name}"
+ end
+
+ def setup_new_key_provider
+ puts "\n--- Setting up new ManagedKeyStoreKeyProvider ---"
+
+ # Use proper test directory (similar to KeymetaTestUtils.setupTestKeyStore)
+ test_data_dir = $TEST_CLUSTER.getDataTestDir("new_keystore_#{@test_timestamp}").toString
+ FileUtils.mkdir_p(test_data_dir)
+ @new_keystore_file = File.join(test_data_dir, 'managed_keystore.jceks')
+ puts " >> New keystore file: #{@new_keystore_file}"
+
+ # Extract wrapped keys from encrypted tables and unwrap them
+ migrated_keys = extract_and_unwrap_keys_from_tables
+
+ # Create new keystore with migrated keys
+ create_keystore(@new_keystore_file, migrated_keys)
+
+ # Configure ManagedKeyStoreKeyProvider
+ provider_uri = "jceks://#{File.expand_path(@new_keystore_file)}?" \
+ "password=#{@keystore_password}"
+ $TEST_CLUSTER.getConfiguration.set(HConstants::CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, 'true')
+ $TEST_CLUSTER.getConfiguration.set(HConstants::CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
+ ManagedKeyStoreKeyProvider.java_class.name)
+ $TEST_CLUSTER.getConfiguration.set(HConstants::CRYPTO_MANAGED_KEYPROVIDER_PARAMETERS_KEY,
+ provider_uri)
+ $TEST_CLUSTER.getConfiguration.set(
+ HConstants::CRYPTO_MANAGED_KEY_STORE_SYSTEM_KEY_NAME_CONF_KEY,
+ 'system_key'
+ )
+
+ # Setup key configurations for ManagedKeyStoreKeyProvider
+ # Shared key configuration
+ $TEST_CLUSTER.getConfiguration.set(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.shared-global-key.alias",
+ 'shared_global_key'
+ )
+ $TEST_CLUSTER.getConfiguration.setBoolean(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.shared-global-key.active",
+ true
+ )
+
+ # Table-level key configuration - let system determine namespace automatically
+ $TEST_CLUSTER.getConfiguration.set(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_table_key}.alias",
+ "#{@table_table_key}_key"
+ )
+ $TEST_CLUSTER.getConfiguration.setBoolean(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_table_key}.active",
+ true
+ )
+
+ # CF-level key configurations - let system determine namespace automatically
+ $TEST_CLUSTER.getConfiguration.set(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_cf_keys}/cf1.alias",
+ "#{@table_cf_keys}_cf1_key"
+ )
+ $TEST_CLUSTER.getConfiguration.setBoolean(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_cf_keys}/cf1.active",
+ true
+ )
+
+ $TEST_CLUSTER.getConfiguration.set(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_cf_keys}/cf2.alias",
+ "#{@table_cf_keys}_cf2_key"
+ )
+ $TEST_CLUSTER.getConfiguration.setBoolean(
+ "hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_cf_keys}/cf2.active",
+ true
+ )
+
+ # Enable KeyMeta coprocessor
+ $TEST_CLUSTER.getConfiguration.set('hbase.coprocessor.master.classes',
+ KeymetaServiceEndpoint.java_class.name)
+
+ puts ' >> New ManagedKeyStoreKeyProvider configured'
+ end
+
+ def restart_cluster_and_validate
+ puts "\n--- Restarting cluster with managed key store key provider ---"
+
+ $TEST.restartMiniCluster(KeymetaTableAccessor::KEY_META_TABLE_NAME)
+ puts ' >> Cluster restarted with ManagedKeyStoreKeyProvider'
+ setup_hbase
+
+ # Validate key management service is functional
+ output = capture_stdout { command(:show_key_status, "#{$GLOB_CUST_ENCODED}:*") }
+ assert(output.include?('0 row(s)'), "Expected 0 rows from show_key_status, got: #{output}")
+ puts ' >> Key management service is functional'
+
+ # Test operations still work and check HFile trailers
+ puts "\n--- Validating operations after restart ---"
+ validate_pre_migration_operations(true)
+ end
+
+ def check_hfile_trailers_pre_migration(table_name, column_families, is_key_management_enabled)
+ puts " >> Checking HFile trailers for #{table_name} with CFs: " \
+ "#{column_families.join(', ')}"
+
+ column_families.each do |cf_name|
+ validate_hfile_trailer(table_name, cf_name, false, is_key_management_enabled, false)
+ end
+ end
+
+ def migrate_tables_step_by_step
+ puts "\n--- Performing step-by-step table migration ---"
+
+ # Migrate shared key tables first
+ migrate_shared_key_tables
+
+ # Migrate table-level key
+ migrate_table_level_key
+
+ # Migrate CF-level keys
+ migrate_cf_level_keys
+ end
+
+ def migrate_shared_key_tables
+ puts "\n--- Migrating shared key tables ---"
+
+ # Enable key management for shared global key
+ cust_and_namespace = "#{$GLOB_CUST_ENCODED}:shared-global-key"
+ output = capture_stdout { command(:enable_key_management, cust_and_namespace) }
+ assert(output.include?("#{$GLOB_CUST_ENCODED} shared-global-key ACTIVE"),
+ "Expected ACTIVE status for shared key, got: #{output}")
+ puts ' >> Enabled key management for shared global key'
+
+ # Migrate first shared key table
+ migrate_table_to_managed_key(@table_shared_key1, 'f', 'shared-global-key',
+ use_namespace_attribute: true)
+
+ # Migrate second shared key table
+ migrate_table_to_managed_key(@table_shared_key2, 'f', 'shared-global-key',
+ use_namespace_attribute: true)
+ end
+
+ def migrate_table_level_key
+ puts "\n--- Migrating table-level key ---"
+
+ # Enable key management for table namespace
+ cust_and_namespace = "#{$GLOB_CUST_ENCODED}:#{@table_table_key}"
+ output = capture_stdout { command(:enable_key_management, cust_and_namespace) }
+ assert(output.include?("#{$GLOB_CUST_ENCODED} #{@table_table_key} ACTIVE"),
+ "Expected ACTIVE status for table key, got: #{output}")
+ puts ' >> Enabled key management for table-level key'
+
+ # Migrate the table - no namespace attribute, let system auto-determine
+ migrate_table_to_managed_key(@table_table_key, 'f', @table_table_key)
+ end
+
+ def migrate_cf_level_keys
+ puts "\n--- Migrating CF-level keys ---"
+
+ # Enable key management for CF1
+ cf1_namespace = "#{@table_cf_keys}/cf1"
+ cust_and_namespace = "#{$GLOB_CUST_ENCODED}:#{cf1_namespace}"
+ output = capture_stdout { command(:enable_key_management, cust_and_namespace) }
+ assert(output.include?("#{$GLOB_CUST_ENCODED} #{cf1_namespace} ACTIVE"),
+ "Expected ACTIVE status for CF1 key, got: #{output}")
+ puts ' >> Enabled key management for CF1'
+
+ # Enable key management for CF2
+ cf2_namespace = "#{@table_cf_keys}/cf2"
+ cust_and_namespace = "#{$GLOB_CUST_ENCODED}:#{cf2_namespace}"
+ output = capture_stdout { command(:enable_key_management, cust_and_namespace) }
+ assert(output.include?("#{$GLOB_CUST_ENCODED} #{cf2_namespace} ACTIVE"),
+ "Expected ACTIVE status for CF2 key, got: #{output}")
+ puts ' >> Enabled key management for CF2'
+
+ # Migrate CF1
+ migrate_table_to_managed_key(@table_cf_keys, 'cf1', cf1_namespace)
+
+ # Migrate CF2
+ migrate_table_to_managed_key(@table_cf_keys, 'cf2', cf2_namespace)
+ end
+
+ def migrate_table_to_managed_key(table_name, cf_name, namespace,
+ use_namespace_attribute: false)
+ puts " >> Migrating table #{table_name}, CF #{cf_name} to namespace #{namespace}"
+
+ # Use atomic alter operation to remove ENCRYPTION_KEY and optionally add
+ # ENCRYPTION_KEY_NAMESPACE
+ if use_namespace_attribute
+ # For shared key tables: remove ENCRYPTION_KEY and add ENCRYPTION_KEY_NAMESPACE atomically
+ command(:alter, table_name,
+ { 'NAME' => cf_name,
+ 'CONFIGURATION' => { 'ENCRYPTION_KEY' => '',
+ 'ENCRYPTION_KEY_NAMESPACE' => namespace } })
+ else
+ # For table/CF level keys: just remove ENCRYPTION_KEY, let system auto-determine namespace
+ command(:alter, table_name,
+ { 'NAME' => cf_name, 'CONFIGURATION' => { 'ENCRYPTION_KEY' => '' } })
+ end
+
+ puts " >> Altered #{table_name} CF #{cf_name} to use namespace #{namespace}"
+
+ # The CF alter should trigger an online schema change and should cause the stores to be
+ # reopened and the encryption context to be reinitialized, but it is asynchronous and may take
+ # some time, so we sleep for 5s, following the same pattern as
+ # TestEncryptionKeyRotation.testCFKeyRotation().
+ sleep(5)
+
+ # Scan all existing data to verify accessibility
+ scan_and_validate_table(table_name)
+
+ # Add new data
+ test_table = table(table_name)
+ test_table.put('new_row', "#{cf_name}:new_col", 'new_value')
+
+ # Flush and validate trailer
+ $TEST_CLUSTER.flush(TableName.valueOf(table_name))
+ validate_hfile_trailer(table_name, cf_name, true, true, false, namespace)
+
+ puts " >> Migration completed for #{table_name} CF #{cf_name}"
+ end
+
+ def scan_and_validate_table(table_name)
+ puts " >> Scanning and validating existing data in #{table_name}"
+
+ test_table = table(table_name)
+ scan = Scan.new
+ scanner = test_table.table.getScanner(scan)
+
+ row_count = 0
+ while (result = scanner.next)
+ row_count += 1
+ assert_false(result.isEmpty)
+ end
+ scanner.close
+
+ assert(row_count.positive?, "Expected to find existing data in #{table_name}")
+ puts " >> Found #{row_count} rows, all accessible"
+ end
+
+ def validate_hfile_trailer(table_name, cf_name, is_post_migration, is_key_management_enabled,
+ is_compacted, expected_namespace = nil)
+ context = is_post_migration ? 'migrated' : 'pre-migration'
+ puts " >> Validating HFile trailer for #{context} table #{table_name}, CF: #{cf_name}"
+
+ table_name_obj = TableName.valueOf(table_name)
+ region_servers = $TEST_CLUSTER.getRSForFirstRegionInTable(table_name_obj)
+ regions = region_servers.getRegions(table_name_obj)
+
+ regions.each do |region|
+ region.getStores.each do |store|
+ next unless store.getColumnFamilyName == cf_name
+
+ puts " >> store file count for CF: #{cf_name} in table: #{table_name} is " \
+ "#{store.getStorefiles.size}"
+ if is_compacted
+ assert_equal(1, store.getStorefiles.size)
+ else
+ assert_true(!store.getStorefiles.empty?)
+ end
+ store.getStorefiles.each do |storefile|
+ puts " >> Checking HFile trailer for storefile: #{storefile.getPath.getName} " \
+ "with sequence id: #{storefile.getMaxSequenceId} against max sequence id of " \
+ "store: #{store.getMaxSequenceId.getAsLong}"
+ # The flush would have created new HFiles, but the old ones would still be there,
+ # so we need to make sure to check only the latest store file.
+ next unless storefile.getMaxSequenceId == store.getMaxSequenceId.getAsLong
+
+ store_file_info = storefile.getFileInfo
+ next unless store_file_info
+
+ hfile_info = store_file_info.getHFileInfo
+ next unless hfile_info
+
+ trailer = hfile_info.getTrailer
+
+ assert_not_nil(trailer.getEncryptionKey)
+
+ if is_key_management_enabled
+ assert_not_nil(trailer.getKEKMetadata)
+ assert_not_equal(0, trailer.getKEKChecksum)
+ else
+ assert_nil(trailer.getKEKMetadata)
+ assert_equal(0, trailer.getKEKChecksum)
+ end
+
+ if is_post_migration
+ assert_equal(expected_namespace, trailer.getKeyNamespace)
+ puts " >> Trailer validation passed - namespace: #{trailer.getKeyNamespace}"
+ else
+ assert_nil(trailer.getKeyNamespace)
+ puts ' >> Trailer validation passed - using legacy key format'
+ end
+ end
+ end
+ end
+ end
+
+ def cleanup_old_provider_and_validate
+ puts "\n--- Cleaning up old key provider and final validation ---"
+
+ # Remove old KeyProvider configurations
+ $TEST_CLUSTER.getConfiguration.unset(HConstants::CRYPTO_KEYPROVIDER_CONF_KEY)
+ $TEST_CLUSTER.getConfiguration.unset(HConstants::CRYPTO_KEYPROVIDER_PARAMETERS_KEY)
+ $TEST_CLUSTER.getConfiguration.unset(HConstants::CRYPTO_MASTERKEY_NAME_CONF_KEY)
+
+ # Remove old keystore
+ FileUtils.rm_f(@old_keystore_file) if File.exist?(@old_keystore_file)
+ puts ' >> Removed old keystore and configuration'
+
+ # Restart cluster
+ $TEST.restartMiniCluster(KeymetaTableAccessor::KEY_META_TABLE_NAME)
+ puts ' >> Cluster restarted without old key provider'
+ setup_hbase
+
+ # Validate all data is still accessible
+ validate_all_tables_final
+
+ # Perform major compaction and validate
+ perform_major_compaction_and_validate
+ end
+
+ def validate_all_tables_final
+ puts "\n--- Final validation - scanning all tables ---"
+
+ @tables_metadata.each do |table_name, metadata|
+ next if metadata[:no_encryption]
+
+ puts " >> Final validation for table: #{table_name} with CFs: #{metadata[:cfs].join(', ')}"
+ scan_and_validate_table(table_name)
+ puts " >> #{table_name} - all data accessible"
+ end
+ end
+
+ def perform_major_compaction_and_validate
+ puts "\n--- Performing major compaction and final validation ---"
+
+ $TEST_CLUSTER.compact(true)
+
+ @tables_metadata.each do |table_name, metadata|
+ next if metadata[:no_encryption]
+
+ puts " >> Validating post-compaction HFiles for table: #{table_name} with " \
+ "CFs: #{metadata[:cfs].join(', ')}"
+ metadata[:cfs].each do |cf_name|
+ # When using a random key from the system key, there is no namespace
+ validate_hfile_trailer(table_name, cf_name, true, true, true,
+ metadata[:expected_namespace][cf_name])
+ end
+ end
+ end
+
+ # Utility methods
+
+ def extract_and_unwrap_keys_from_tables
+ puts ' >> Extracting and unwrapping keys from encrypted tables'
+
+ keys = {}
+
+ # Reuse existing master key from old keystore as system key
+ old_key_provider = Encryption.getKeyProvider($TEST_CLUSTER.getConfiguration)
+ master_key_bytes = old_key_provider.getKey(@master_key_alias).getEncoded
+ keys['system_key'] = master_key_bytes
+
+ # Extract wrapped keys from table descriptors and unwrap them
+ # Only call extract_key_from_table for tables that have ENCRYPTION_KEY attribute
+
+ # For shared key tables (both use same key)
+ shared_key = extract_key_from_table(@table_shared_key1, 'f')
+ keys['shared_global_key'] = shared_key
+
+ # For table-level key
+ table_key = extract_key_from_table(@table_table_key, 'f')
+ keys["#{@table_table_key}_key"] = table_key
+
+ # For CF-level keys
+ cf1_key = extract_key_from_table(@table_cf_keys, 'cf1')
+ keys["#{@table_cf_keys}_cf1_key"] = cf1_key
+
+ cf2_key = extract_key_from_table(@table_cf_keys, 'cf2')
+ keys["#{@table_cf_keys}_cf2_key"] = cf2_key
+
+ puts " >> Extracted #{keys.size} keys for migration"
+ keys
+ end
+
+ def extract_key_from_table(table_name, cf_name)
+ # Get table descriptor
+ admin = $TEST_CLUSTER.getAdmin
+ table_descriptor = admin.getDescriptor(TableName.valueOf(table_name))
+ cf_descriptor = table_descriptor.getColumnFamily(Bytes.toBytes(cf_name))
+
+ # Get the wrapped key bytes from ENCRYPTION_KEY attribute
+ wrapped_key_bytes = cf_descriptor.getEncryptionKey
+
+ # Use EncryptionUtil.unwrapKey with master key alias as subject
+ unwrapped_key = EncryptionUtil.unwrapKey($TEST_CLUSTER.getConfiguration,
+ @master_key_alias, wrapped_key_bytes)
+
+ unwrapped_key.getEncoded
+ end
+
+ def generate_key(alias_name)
+ MessageDigest.getInstance('SHA-256').digest(Bytes.toBytes(alias_name))
+ end
+
+ def create_keystore(keystore_path, key_entries)
+ store = KeyStore.getInstance('JCEKS')
+ password_chars = @keystore_password.to_java.toCharArray
+ store.load(nil, password_chars)
+
+ key_entries.each do |alias_name, key_bytes|
+ secret_key = SecretKeySpec.new(key_bytes, 'AES')
+ store.setEntry(alias_name, KeyStore::SecretKeyEntry.new(secret_key),
+ KeyStore::PasswordProtection.new(password_chars))
+ end
+
+ fos = FileOutputStream.new(keystore_path)
+ begin
+ store.store(fos, password_chars)
+ ensure
+ fos.close
+ end
+ end
+
+ def teardown
+ # Cleanup temporary test directories (keystore files will be cleaned up with the directories)
+ test_base_dir = $TEST_CLUSTER.getDataTestDir.toString
+ Dir.glob(File.join(test_base_dir, "*keystore_#{@test_timestamp}*")).each do |dir|
+ FileUtils.rm_rf(dir) if File.directory?(dir)
+ end
+ end
+ end
+end
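The migration test above derives deterministic per-alias test keys in `generate_key` via Java's `MessageDigest` under JRuby. The same idea can be shown as a self-contained plain-Ruby sketch (the `derive_test_key` name is invented for illustration): hashing the alias with SHA-256 yields a stable 32-byte value, which is exactly the length of an AES-256 key, so the same alias always maps to the same test key across cluster restarts.

```ruby
require 'digest'

# Derive a deterministic 256-bit test key from an alias, mirroring the
# generate_key helper in the migration test (which uses Java's MessageDigest).
def derive_test_key(alias_name)
  Digest::SHA256.digest(alias_name)
end

master = derive_test_key('masterkey')
puts master.bytesize                          # => 32 (a valid AES-256 key length)
puts derive_test_key('masterkey') == master   # => true: derivation is deterministic
puts derive_test_key('sharedkey') == master   # => false: distinct aliases, distinct keys
```

Deterministic keys are what make the old-to-new keystore migration verifiable: the test can rebuild the expected key material from aliases alone, without persisting keys between phases.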
diff --git a/hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb b/hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb
new file mode 100644
index 000000000000..77a2a339552e
--- /dev/null
+++ b/hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb
@@ -0,0 +1,59 @@
+# frozen_string_literal: true
+
+#
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+require 'hbase_shell'
+require 'stringio'
+require 'hbase_constants'
+require 'hbase/hbase'
+require 'hbase/table'
+
+java_import org.apache.hadoop.hbase.io.crypto.Encryption
+java_import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider
+module Hbase
+ # Test class for rotate_stk command
+ class RotateSTKKeymetaTest < Test::Unit::TestCase
+ include TestHelpers
+
+ def setup
+ setup_hbase
+ end
+
+ define_test 'Test rotate_stk command' do
+ puts 'Testing rotate_stk command'
+
+ # this should return false (no rotation performed)
+ output = capture_stdout { @shell.command(:rotate_stk) }
+ puts "rotate_stk output: #{output}"
+ assert(output.include?('No System Key change was detected'),
+ "Expected output to contain rotation status message, but got: #{output}")
+
+ key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
+ # Once we enable multikeyGenMode on MockManagedKeyProvider, every call should return a new key
+ # which should trigger a rotation.
+ key_provider.setMultikeyGenMode(true)
+ output = capture_stdout { @shell.command(:rotate_stk) }
+ puts "rotate_stk output: #{output}"
+ assert(output.include?('System Key rotation was performed successfully and cache was ' \
+ 'refreshed on all region servers'),
+ "Expected output to contain rotation status message, but got: #{output}")
+ end
+ end
+end
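The `rotate_stk` test drives rotation detection by making the mock provider return a new key, so the cluster sees a mismatch with what it previously cached. A minimal plain-Ruby sketch of that comparison (all names here are hypothetical; HBase's actual check lives in the managed-key server code, not in this form) is: keep a checksum of the last-seen System Key and report a rotation whenever the provider's current key hashes differently.

```ruby
require 'digest'

# Hypothetical sketch: a rotation is detected when the checksum of the
# provider's current key differs from the checksum cached at last check.
def system_key_changed?(cached_checksum, current_key_bytes)
  Digest::SHA256.hexdigest(current_key_bytes) != cached_checksum
end

old_key = 'A' * 32
cached  = Digest::SHA256.hexdigest(old_key)

puts system_key_changed?(cached, old_key)   # => false: "No System Key change was detected"
puts system_key_changed?(cached, 'B' * 32)  # => true: rotation would be performed
```

This is why enabling `setMultikeyGenMode(true)` on `MockManagedKeyProvider` flips the command's outcome: every subsequent fetch yields a key whose checksum no longer matches the cached one.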
diff --git a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/client/ThriftAdmin.java b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/client/ThriftAdmin.java
index 76a8b41481be..2c7c8f1c06ad 100644
--- a/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/client/ThriftAdmin.java
+++ b/hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/client/ThriftAdmin.java
@@ -1386,4 +1386,23 @@ public boolean isReplicationPeerModificationEnabled() throws IOException {
throw new NotImplementedException(
"isReplicationPeerModificationEnabled not supported in ThriftAdmin");
}
+
+ @Override
+ public void refreshSystemKeyCacheOnServers(List regionServers) throws IOException {
+ throw new NotImplementedException(
+ "refreshSystemKeyCacheOnServers not supported in ThriftAdmin");
+ }
+
+ @Override
+ public void ejectManagedKeyDataCacheEntryOnServers(List regionServers,
+ byte[] keyCustodian, String keyNamespace, String keyMetadata) throws IOException {
+ throw new NotImplementedException(
+ "ejectManagedKeyDataCacheEntryOnServers not supported in ThriftAdmin");
+ }
+
+ @Override
+ public void clearManagedKeyDataCacheOnServers(List regionServers) throws IOException {
+ throw new NotImplementedException(
+ "clearManagedKeyDataCacheOnServers not supported in ThriftAdmin");
+ }
}
From 083df81eb20418f558620b2ef31ba7c1104c5f78 Mon Sep 17 00:00:00 2001
From: Hari Dara
Date: Fri, 3 Apr 2026 13:49:23 +0530
Subject: [PATCH 2/7] Significant improvements and cleanup in Key Management.
The main purpose was to avoid byte[] garbage as much as possible. The following describes the overall changes:
- Removed key namespace from trailer in favor of key identity.
- Removed the support for dynamic key namespaces based on table and CF names, to avoid negative key-lookup buildup in the cache
- Optimization to reduce byte[] garbage due to key namespace handling
- Refactor caches and facilitate more efficient lookups with reduced garbage generation
- Streamline the cache usage as part of making cache lookups more efficient
- Consolidate managed key identity types and shared row-key encoding so hot paths reuse backing byte[] and avoid redundant copies, further reducing allocation on lookups
- Also updated the disabledKeyManagement to update all key states to INACTIVE.
- Change ManagedKeyProvider interface to take identity where it makes sense and avoid byte[] copy
- Added a new admin API setManagedKey
- Changed KeymetaAdminImpl -> KeymetaTableAccessor relationship as composition for a cleaner relationship
- More consistent exception handling
- Refactor ManagedKeyData and keymeta to configurable digest and partial/full identity and defaulted to a hash that uses 8 bytes instead of 16.
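The configurable digest in the last item is backed by the new DigestAlgorithms enum, where each algorithm carries a bit position (so the set of algorithms applied can be recorded in a single byte) and writes its digest into a caller-provided buffer to avoid allocation. A simplified, self-contained sketch of that contract using only the JDK's MD5 (class and method names here are illustrative; the real enum also wires in xxHash-family functions via net.openhft zero-allocation-hashing):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DigestSketch {
  // Bit positions mirror DigestAlgorithms: one bit per algorithm, so a
  // single byte can encode which digests were applied to the key metadata.
  static final byte XXH3 = 0x01, XXHASH64 = 0x02, MD5 = 0x04;

  // Same shape as DigestAlgorithms.digest(input, inOff, inLen, out, outOff):
  // the digest is written into a caller-provided buffer, not returned,
  // so no new byte[] is allocated per lookup.
  static void md5Into(byte[] input, int inOff, int inLen, byte[] out, int outOff) {
    try {
      MessageDigest md = MessageDigest.getInstance("MD5");
      md.update(input, inOff, inLen);
      System.arraycopy(md.digest(), 0, out, outOff, 16);
    } catch (NoSuchAlgorithmException e) {
      throw new RuntimeException(e); // MD5 is always provided by the JDK
    }
  }

  public static void main(String[] args) {
    byte[] metadata = "example-key-metadata".getBytes(StandardCharsets.UTF_8);
    byte used = (byte) (XXH3 | MD5);       // record that two algorithms ran
    byte[] partialIdentity = new byte[16]; // sized for the largest digest (MD5)
    md5Into(metadata, 0, metadata.length, partialIdentity, 0);
    System.out.println(Integer.toBinaryString(used & 0xFF)); // 101
  }
}
```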
---
.../hbase/client/ColumnFamilyDescriptor.java | 3 +
.../client/ColumnFamilyDescriptorBuilder.java | 5 +
.../hbase/client/RawAsyncHBaseAdmin.java | 4 +-
.../hbase/keymeta/KeymetaAdminClient.java | 33 +-
hbase-common/pom.xml | 4 +
.../org/apache/hadoop/hbase/HConstants.java | 10 +-
.../hadoop/hbase/io/crypto/Context.java | 10 -
.../hbase/io/crypto/DigestAlgorithms.java | 126 ++
.../hbase/io/crypto/ManagedKeyData.java | 239 +--
.../hbase/io/crypto/ManagedKeyProvider.java | 34 +-
.../io/crypto/ManagedKeyStoreKeyProvider.java | 35 +-
.../hbase/keymeta/KeyIdentityBytesBacked.java | 189 +++
.../keymeta/KeyIdentityPrefixBytesBacked.java | 191 +++
.../keymeta/KeyIdentitySingleArrayBacked.java | 288 ++++
.../hadoop/hbase/keymeta/KeymetaAdmin.java | 30 +-
.../hbase/keymeta/ManagedKeyIdentity.java | 138 ++
.../keymeta/ManagedKeyIdentityUtils.java | 210 +++
.../org/apache/hadoop/hbase/util/Bytes.java | 31 +-
.../hadoop/hbase/util/CommonFSUtils.java | 8 +-
.../io/crypto/MockManagedKeyProvider.java | 23 +-
.../hbase/io/crypto/TestManagedKeyData.java | 183 ++-
.../io/crypto/TestManagedKeyProvider.java | 139 +-
.../hbase/keymeta/TestManagedKeyIdentity.java | 1429 +++++++++++++++++
.../src/main/protobuf/HBase.proto | 5 +
.../main/protobuf/server/ManagedKeys.proto | 2 +
.../src/main/protobuf/server/io/HFile.proto | 3 +-
.../apache/hadoop/hbase/HBaseServerBase.java | 19 +-
.../org/apache/hadoop/hbase/io/HFileLink.java | 9 -
.../hbase/io/hfile/FixedFileTrailer.java | 56 +-
.../hbase/io/hfile/HFileWriterImpl.java | 8 +-
.../hbase/keymeta/KeyManagementBase.java | 1 +
.../hbase/keymeta/KeyManagementService.java | 8 +
.../hbase/keymeta/KeyManagementUtils.java | 174 +-
.../hbase/keymeta/KeyNamespaceUtil.java | 93 --
.../hbase/keymeta/KeymetaAdminImpl.java | 151 +-
.../hbase/keymeta/KeymetaServiceEndpoint.java | 91 +-
.../hbase/keymeta/KeymetaTableAccessor.java | 238 ++-
.../hbase/keymeta/ManagedKeyDataCache.java | 266 ++-
.../hbase/keymeta/SystemKeyAccessor.java | 2 +-
.../hadoop/hbase/keymeta/SystemKeyCache.java | 31 +-
.../apache/hadoop/hbase/master/HMaster.java | 14 +-
.../hbase/regionserver/HRegionServer.java | 4 +-
.../hadoop/hbase/regionserver/HStoreFile.java | 16 +-
.../hbase/regionserver/RSRpcServices.java | 12 +-
.../hbase/regionserver/StoreEngine.java | 4 +-
.../hbase/regionserver/StoreFileInfo.java | 3 +-
.../hadoop/hbase/security/SecurityUtil.java | 152 +-
.../hbase/io/hfile/TestFixedFileTrailer.java | 2 +-
.../ManagedKeyProviderInterceptor.java | 9 +-
.../hbase/keymeta/ManagedKeyTestBase.java | 4 +
.../keymeta/TestKeyManagementService.java | 6 +-
.../hbase/keymeta/TestKeyManagementUtils.java | 244 ++-
.../hbase/keymeta/TestKeyNamespaceUtil.java | 126 --
.../hbase/keymeta/TestKeymetaEndpoint.java | 65 +-
.../keymeta/TestKeymetaTableAccessor.java | 413 ++++-
.../keymeta/TestManagedKeyDataCache.java | 636 ++++----
.../hbase/keymeta/TestManagedKeymeta.java | 77 +-
.../hbase/keymeta/TestSystemKeyCache.java | 137 +-
.../hbase/master/TestKeymetaAdminImpl.java | 639 +++++---
.../TestSystemKeyAccessorAndManager.java | 15 +-
.../hbase/master/TestSystemKeyManager.java | 10 +-
.../hbase/regionserver/TestRSRpcServices.java | 12 +-
.../hbase/regionserver/TestStoreFileInfo.java | 3 +-
.../hbase/security/TestSecurityUtil.java | 288 ++--
hbase-shell/pom.xml | 25 +
.../shell/commands/keymeta_command_base.rb | 5 +-
.../client/TestKeymetaMockProviderShell.java | 13 +-
.../shell/admin_keymeta_mock_provider_test.rb | 7 +-
.../src/test/ruby/shell/admin_keymeta_test.rb | 34 +-
.../shell/encrypted_table_keymeta_test.rb | 108 +-
.../key_provider_keymeta_migration_test.rb | 39 +-
.../rotate_stk_keymeta_mock_provider_test.rb | 4 +-
pom.xml | 11 +
73 files changed, 5702 insertions(+), 1954 deletions(-)
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DigestAlgorithms.java
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityBytesBacked.java
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityPrefixBytesBacked.java
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentitySingleArrayBacked.java
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentity.java
create mode 100644 hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentityUtils.java
create mode 100644 hbase-common/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyIdentity.java
delete mode 100644 hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyNamespaceUtil.java
delete mode 100644 hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.java
index ea8d81043694..a40c23a3d877 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptor.java
@@ -117,6 +117,9 @@ public interface ColumnFamilyDescriptor {
/** Returns the encryption key namespace for this family */
String getEncryptionKeyNamespace();
+ /** Returns the encryption key namespace for this family as a {@link Bytes} object */
+ Bytes getEncryptionKeyNamespaceBytes();
+
/** Returns Return the encryption algorithm in use by this family */
String getEncryptionType();
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.java
index 6635645c0760..93142a479935 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ColumnFamilyDescriptorBuilder.java
@@ -1352,6 +1352,11 @@ public String getEncryptionKeyNamespace() {
return getStringOrDefault(ENCRYPTION_KEY_NAMESPACE_BYTES, Function.identity(), null);
}
+ @Override
+ public Bytes getEncryptionKeyNamespaceBytes() {
+ return getValue(ENCRYPTION_KEY_NAMESPACE_BYTES);
+ }
+
/**
* Set the encryption key namespace attribute for the family
* @param keyNamespace the key namespace, or null to remove existing setting
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
index ec1aa4736df5..f6dcd4a4a160 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
@@ -76,8 +76,8 @@
import org.apache.hadoop.hbase.client.replication.TableCFs;
import org.apache.hadoop.hbase.client.security.SecurityCapability;
import org.apache.hadoop.hbase.exceptions.DeserializationException;
-import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.ipc.HBaseRpcController;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.quotas.QuotaFilter;
import org.apache.hadoop.hbase.quotas.QuotaSettings;
@@ -4731,7 +4731,7 @@ public CompletableFuture<Void> ejectManagedKeyDataCacheEntryOnServers(
List<ServerName> regionServers, byte[] keyCustodian, String keyNamespace, String keyMetadata) {
CompletableFuture<Void> future = new CompletableFuture<>();
// Create the request once instead of repeatedly for each server
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata);
+ byte[] keyMetadataHash = ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata);
ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
.setKeyCustNs(ManagedKeyRequest.newBuilder().setKeyCust(ByteString.copyFrom(keyCustodian))
.setKeyNamespace(keyNamespace).build())
diff --git a/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java b/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java
index 9801750e5b7d..8dc779348c51 100644
--- a/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java
+++ b/hbase-client/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminClient.java
@@ -25,6 +25,7 @@
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.hbase.thirdparty.com.google.protobuf.ByteString;
@@ -37,10 +38,10 @@
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyEntryRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyResponse;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.SetManagedKeyRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ManagedKeysProtos;
@InterfaceAudience.Public
-@InterfaceAudience.Private
public class KeymetaAdminClient implements KeymetaAdmin {
private ManagedKeysProtos.ManagedKeysService.BlockingInterface stub;
@@ -147,6 +148,21 @@ public void refreshManagedKeys(byte[] keyCust, String keyNamespace)
}
}
+ @Override
+ public ManagedKeyData setManagedKey(byte[] keyCust, String keyNamespace, String keyMetadata)
+ throws IOException, KeyException {
+ try {
+ ManagedKeyResponse response = stub.setManagedKey(null,
+ SetManagedKeyRequest
+ .newBuilder().setKeyCustNs(ManagedKeyRequest.newBuilder()
+ .setKeyCust(ByteString.copyFrom(keyCust)).setKeyNamespace(keyNamespace).build())
+ .setKeyMetadata(keyMetadata).build());
+ return generateKeyData(response);
+ } catch (ServiceException e) {
+ throw ProtobufUtil.handleRemoteException(e);
+ }
+ }
+
private static List<ManagedKeyData> generateKeyDataList(GetManagedKeysResponse stateResponse) {
List<ManagedKeyData> keyStates = new ArrayList<>();
for (ManagedKeyResponse state : stateResponse.getStateList()) {
@@ -156,15 +172,22 @@ private static List<ManagedKeyData> generateKeyDataList(GetManagedKeysResponse s
}
private static ManagedKeyData generateKeyData(ManagedKeyResponse response) {
- // Use hash-only constructor for client-side ManagedKeyData
+ // Convert namespace String from RPC response to byte[] once at this boundary.
+ byte[] keyCust = response.getKeyCust().toByteArray();
+ byte[] keyNamespace = Bytes.toBytes(response.getKeyNamespace());
byte[] keyMetadataHash =
response.hasKeyMetadataHash() ? response.getKeyMetadataHash().toByteArray() : null;
+ ManagedKeyIdentity fullKeyIdentity =
+ new KeyIdentityBytesBacked(new Bytes(keyCust), new Bytes(keyNamespace),
+ keyMetadataHash != null
+ ? new Bytes(keyMetadataHash)
+ : ManagedKeyIdentity.KEY_NULL_IDENTITY_BYTES);
if (keyMetadataHash == null) {
- return new ManagedKeyData(response.getKeyCust().toByteArray(), response.getKeyNamespace(),
+ return new ManagedKeyData(fullKeyIdentity,
ManagedKeyState.forValue((byte) response.getKeyState().getNumber()));
} else {
- return new ManagedKeyData(response.getKeyCust().toByteArray(), response.getKeyNamespace(),
- ManagedKeyState.forValue((byte) response.getKeyState().getNumber()), keyMetadataHash,
+ return new ManagedKeyData(fullKeyIdentity,
+ ManagedKeyState.forValue((byte) response.getKeyState().getNumber()),
response.getRefreshTimestamp());
}
}
diff --git a/hbase-common/pom.xml b/hbase-common/pom.xml
index 9a30ae406d09..a033e218db4f 100644
--- a/hbase-common/pom.xml
+++ b/hbase-common/pom.xml
@@ -110,6 +110,10 @@
<groupId>org.apache.commons</groupId>
<artifactId>commons-crypto</artifactId>
</dependency>
+ <dependency>
+ <groupId>net.openhft</groupId>
+ <artifactId>zero-allocation-hashing</artifactId>
+ </dependency>
<dependency>
<groupId>org.junit.jupiter</groupId>
<artifactId>junit-jupiter-api</artifactId>
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
index 4260fd906028..7d9a94de1d2a 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -343,7 +343,7 @@ public enum OperationStatusCode {
/** Parameter name for HBase instance root directory */
public static final String HBASE_DIR = "hbase.rootdir";
- public static final String HBASE_ORIGINAL_DIR = "hbase.originalRootdir";
+ public static final String HBASE_ORIGINAL_ROOT_DIR = "hbase.originalRootdir";
/** Parameter name for HBase client IPC pool type */
public static final String HBASE_CLIENT_IPC_POOL_TYPE = "hbase.client.ipc.pool.type";
@@ -1356,6 +1356,14 @@ public enum OperationStatusCode {
"hbase.crypto.managed_keys.local_key_gen_per_file.enabled";
public static final boolean CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_DEFAULT_ENABLED = false;
+ /**
+ * Comma-separated list of digest algorithm names for the key metadata digest (partial
+ * identity). Up to 2 algorithms from {@link DigestAlgorithms} (e.g. XXH3, XXHASH64, MD5). Default is xxh3.
+ */
+ public static final String CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY =
+ "hbase.crypto.managed_key.metadata.digest.algorithms";
+ public static final String CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_DEFAULT = "xxh3";
+
/** Configuration key for setting RPC codec class name */
public static final String RPC_CODEC_CONF_KEY = "hbase.client.rpc.codec";
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java
index 7e816b917628..95d372e1f37d 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/Context.java
@@ -35,7 +35,6 @@ public class Context implements Configurable {
private Cipher cipher;
private Key key;
private ManagedKeyData kekData;
- private String keyNamespace;
private String keyHash;
Context(Configuration conf) {
@@ -100,15 +99,6 @@ public Context setKey(Key key) {
return this;
}
- public Context setKeyNamespace(String keyNamespace) {
- this.keyNamespace = keyNamespace;
- return this;
- }
-
- public String getKeyNamespace() {
- return keyNamespace;
- }
-
public Context setKEKData(ManagedKeyData kekData) {
this.kekData = kekData;
return this;
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DigestAlgorithms.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DigestAlgorithms.java
new file mode 100644
index 000000000000..cdea1def40c0
--- /dev/null
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/DigestAlgorithms.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.io.crypto;
+
+import java.nio.ByteBuffer;
+import java.security.MessageDigest;
+import java.security.NoSuchAlgorithmException;
+import net.openhft.hashing.LongHashFunction;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Digest algorithms for computing partial identity (digest of key metadata). Each value has a bit
+ * position (for encoding which algorithms were used) and a fixed digest size.
+ */
+@InterfaceAudience.Private
+public enum DigestAlgorithms {
+ /** XXH3 64-bit; 8 bytes. */
+ XXH3((byte) 0x01, 8) {
+
+ @Override
+ public void digest(byte[] input, int inputOffset, int inputLength, byte[] out, int outOffset) {
+ long h = LongHashFunction.xx3().hashBytes(ByteBuffer.wrap(input, inputOffset, inputLength));
+ Bytes.putLong(out, outOffset, h);
+ }
+ },
+ /** XXHash64; 8 bytes. */
+ XXHASH64((byte) 0x02, 8) {
+ @Override
+ public void digest(byte[] input, int inputOffset, int inputLength, byte[] out, int outOffset) {
+ long h = LongHashFunction.xx().hashBytes(ByteBuffer.wrap(input, inputOffset, inputLength));
+ Bytes.putLong(out, outOffset, h);
+ }
+ },
+ /** MD5; 16 bytes. */
+ MD5((byte) 0x04, 16) {
+ @Override
+ public void digest(byte[] input, int inputOffset, int inputLength, byte[] out, int outOffset) {
+ try {
+ MessageDigest md = MessageDigest.getInstance("MD5");
+ md.update(input, inputOffset, inputLength);
+ byte[] d = md.digest();
+ System.arraycopy(d, 0, out, outOffset, getDigestSizeBytes());
+ } catch (NoSuchAlgorithmException e) {
+ throw new RuntimeException(e);
+ }
+ }
+ },
+ /** MetroHash64; 8 bytes. */
+ METRO64((byte) 0x08, 8) {
+ @Override
+ public void digest(byte[] input, int inputOffset, int inputLength, byte[] out, int outOffset) {
+ long h = LongHashFunction.metro().hashBytes(ByteBuffer.wrap(input, inputOffset, inputLength));
+ Bytes.putLong(out, outOffset, h);
+ }
+ },
+ /** wyhash version 3; 8 bytes. */
+ WYHASH3((byte) 0x10, 8) {
+ @Override
+ public void digest(byte[] input, int inputOffset, int inputLength, byte[] out, int outOffset) {
+ long h = LongHashFunction.wy_3().hashBytes(ByteBuffer.wrap(input, inputOffset, inputLength));
+ Bytes.putLong(out, outOffset, h);
+ }
+ };
+
+ private final byte bitPosition;
+ private final int digestSizeBytes;
+
+ DigestAlgorithms(byte bitPosition, int digestSizeBytes) {
+ this.bitPosition = bitPosition;
+ this.digestSizeBytes = digestSizeBytes;
+ }
+
+ public byte getBitPosition() {
+ return bitPosition;
+ }
+
+ public int getDigestSizeBytes() {
+ return digestSizeBytes;
+ }
+
+ /**
+ * Compute the digest of the input bytes and write the result into {@code out} at
+ * {@code outOffset}. Caller must ensure {@code out} has at least {@link #getDigestSizeBytes()}
+ * bytes at {@code outOffset}. Uses {@link Bytes#putLong} for 8-byte digests to avoid allocating a
+ * new byte array.
+ * @param input the input bytes
+ * @param inputOffset offset into input
+ * @param inputLength number of bytes to hash
+ * @param out output buffer
+ * @param outOffset offset into out where digest is written
+ */
+ public abstract void digest(byte[] input, int inputOffset, int inputLength, byte[] out,
+ int outOffset);
+
+ /**
+ * Parse a name (case-insensitive) to a {@link DigestAlgorithms} value, or null if not recognized.
+ */
+ public static DigestAlgorithms fromName(String name) {
+ if (name == null) {
+ return null;
+ }
+ String n = name.trim();
+ for (DigestAlgorithms a : values()) {
+ if (a.name().equalsIgnoreCase(n)) {
+ return a;
+ }
+ }
+ return null;
+ }
+}
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyData.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyData.java
index e35649113f54..32f88f889208 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyData.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyData.java
@@ -17,17 +17,18 @@
*/
package org.apache.hadoop.hbase.io.crypto;
+import java.nio.charset.StandardCharsets;
import java.security.Key;
-import java.security.MessageDigest;
-import java.security.NoSuchAlgorithmException;
import java.util.Arrays;
-import org.apache.commons.lang3.builder.EqualsBuilder;
-import org.apache.commons.lang3.builder.HashCodeBuilder;
+import java.util.Objects;
import org.apache.hadoop.hbase.HBaseInterfaceAudience;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
import org.apache.hadoop.util.DataChecksum;
import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.yetus.audience.InterfaceStability;
+
import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
/**
@@ -44,10 +45,10 @@
*
* The class provides methods to retrieve, as well as to compute a checksum for the key data. The
* checksum is used to ensure the integrity of the key data. The class also provides a method to
- * generate an MD5 hash of the key metadata, which can be used for validation and identification.
+ * generate a digest of the key metadata (partial identity), which can be used for validation and
+ * identification.
*/
@InterfaceAudience.Public
-@InterfaceStability.Evolving
public class ManagedKeyData {
/**
* Special value to be used for custodian or namespace to indicate that it is global, meaning it
@@ -56,131 +57,135 @@ public class ManagedKeyData {
public static final String KEY_SPACE_GLOBAL = "*";
/**
- * Special value to be used for custodian to indicate that it is global, meaning it is not
- * associated with a specific custodian.
+ * Byte representation of the {@link #KEY_SPACE_GLOBAL}.
*/
- public static final byte[] KEY_GLOBAL_CUSTODIAN_BYTES = KEY_SPACE_GLOBAL.getBytes();
+ public static final Bytes KEY_SPACE_GLOBAL_BYTES =
+ new Bytes(KEY_SPACE_GLOBAL.getBytes(StandardCharsets.UTF_8));
/**
* Encoded form of global custodian.
*/
- public static final String KEY_GLOBAL_CUSTODIAN =
- ManagedKeyProvider.encodeToStr(KEY_GLOBAL_CUSTODIAN_BYTES);
+ public static final String GLOBAL_CUST_ENCODED =
+ ManagedKeyProvider.encodeToStr(KEY_SPACE_GLOBAL_BYTES.get());
+
+ /** Maximum length, in bytes, of the key custodian and of the key namespace. */
+ public static final short MAX_UNSIGNED_BYTE = 255;
- private final byte[] keyCustodian;
- private final String keyNamespace;
+ private final ManagedKeyIdentity keyIdentity;
private final Key theKey;
private final ManagedKeyState keyState;
private final String keyMetadata;
private final long refreshTimestamp;
private volatile long keyChecksum = 0;
- private byte[] keyMetadataHash;
/**
- * Constructs a new instance with the given parameters.
+ * Constructs a new instance with the given parameters. Convenience constructor to build a
+ * ManagedKeyIdentity from the custodian, namespace and metadata.
* @param key_cust The key custodian.
- * @param key_namespace The key namespace.
+ * @param key_namespace The key namespace as a byte array.
* @param theKey The actual key, can be {@code null}.
* @param keyState The state of the key.
* @param keyMetadata The metadata associated with the key.
* @throws NullPointerException if any of key_cust, keyState or keyMetadata is null.
*/
- public ManagedKeyData(byte[] key_cust, String key_namespace, Key theKey, ManagedKeyState keyState,
+ public ManagedKeyData(byte[] key_cust, byte[] key_namespace, Key theKey, ManagedKeyState keyState,
String keyMetadata) {
- this(key_cust, key_namespace, theKey, keyState, keyMetadata,
- EnvironmentEdgeManager.currentTime());
+ this(ManagedKeyIdentityUtils.buildIdentityFromMetadata(key_cust, key_namespace, keyMetadata),
+ theKey, keyState, keyMetadata, EnvironmentEdgeManager.currentTime());
}
/**
- * Constructs a new instance with the given parameters including refresh timestamp.
- * @param key_cust The key custodian.
- * @param key_namespace The key namespace.
- * @param theKey The actual key, can be {@code null}.
- * @param keyState The state of the key.
- * @param refreshTimestamp The refresh timestamp for the key.
+ * Constructs a new instance with the given parameters. Convenience constructor to build a
+ * ManagedKeyIdentity from the custodian, namespace and metadata.
+ * @param custodian The key custodian.
+ * @param namespace The key namespace as a {@link Bytes} wrapper.
+ * @param theKey The actual key, can be {@code null}.
+ * @param keyState The state of the key.
+ * @param keyMetadata The metadata associated with the key.
* @throws NullPointerException if any of key_cust, keyState or keyMetadata is null.
*/
- public ManagedKeyData(byte[] key_cust, String key_namespace, Key theKey, ManagedKeyState keyState,
- String keyMetadata, long refreshTimestamp) {
- Preconditions.checkNotNull(key_cust, "key_cust should not be null");
- Preconditions.checkNotNull(key_namespace, "key_namespace should not be null");
- Preconditions.checkNotNull(keyState, "keyState should not be null");
- Preconditions.checkNotNull(keyMetadata, "metadata should not be null");
+ public ManagedKeyData(Bytes custodian, Bytes namespace, Key theKey, ManagedKeyState keyState,
+ String keyMetadata) {
+ this(ManagedKeyIdentityUtils.buildIdentityFromMetadata(custodian, namespace, keyMetadata),
+ theKey, keyState, keyMetadata, EnvironmentEdgeManager.currentTime());
+ }
- this.keyCustodian = key_cust;
- this.keyNamespace = key_namespace;
- this.keyState = keyState;
- this.theKey = theKey;
- this.keyMetadata = keyMetadata;
- this.keyMetadataHash = constructMetadataHash(keyMetadata);
- this.refreshTimestamp = refreshTimestamp;
+ // ---------------------------------------------------------------------------
+ // ManagedKeyIdentity-based constructors; these are the primary constructors that assign fields.
+ // ---------------------------------------------------------------------------
+
+ /**
+ * Constructs a new instance from a {@link ManagedKeyIdentity} with a key and metadata.
+ * @param fullIdentity The full key identity (custodian + namespace + partial identity).
+ * @param theKey The actual key, can be {@code null}.
+ * @param keyState The state of the key.
+ * @param keyMetadata The metadata associated with the key.
+ * @throws NullPointerException if any of fullIdentity, keyState or keyMetadata is null.
+ */
+ public ManagedKeyData(ManagedKeyIdentity fullIdentity, Key theKey, ManagedKeyState keyState,
+ String keyMetadata) {
+ this(fullIdentity, theKey, keyState, keyMetadata, EnvironmentEdgeManager.currentTime());
}
/**
- * Client-side constructor using only metadata hash. This constructor is intended for use by
- * client code where the original metadata string is not available.
- * @param key_cust The key custodian.
- * @param key_namespace The key namespace.
+ * Constructs a new instance from a {@link ManagedKeyIdentity} with a key and metadata.
+ * @param fullIdentity The full key identity (custodian + namespace + partial identity).
+ * @param theKey The actual key, can be {@code null}.
* @param keyState The state of the key.
- * @param keyMetadataHash The pre-computed metadata hash.
+ * @param keyMetadata The metadata associated with the key.
* @param refreshTimestamp The refresh timestamp for the key.
- * @throws NullPointerException if any of key_cust, keyState or keyMetadataHash is null.
+ * @throws NullPointerException if any of fullIdentity, keyState or keyMetadata is null.
*/
- public ManagedKeyData(byte[] key_cust, String key_namespace, ManagedKeyState keyState,
- byte[] keyMetadataHash, long refreshTimestamp) {
- Preconditions.checkNotNull(key_cust, "key_cust should not be null");
- Preconditions.checkNotNull(key_namespace, "key_namespace should not be null");
+ public ManagedKeyData(ManagedKeyIdentity fullIdentity, Key theKey, ManagedKeyState keyState,
+ String keyMetadata, long refreshTimestamp) {
+ Preconditions.checkNotNull(fullIdentity, "fullIdentity should not be null");
Preconditions.checkNotNull(keyState, "keyState should not be null");
- Preconditions.checkNotNull(keyMetadataHash, "keyMetadataHash should not be null");
- this.keyCustodian = key_cust;
- this.keyNamespace = key_namespace;
+ Preconditions.checkNotNull(keyMetadata, "metadata should not be null");
+ this.keyIdentity = fullIdentity;
+ this.theKey = theKey;
this.keyState = keyState;
- this.keyMetadataHash = keyMetadataHash;
+ this.keyMetadata = keyMetadata;
this.refreshTimestamp = refreshTimestamp;
- this.theKey = null;
- this.keyMetadata = null;
}
/**
- * Constructs a new instance for the given key management state with the current timestamp.
- * @param key_cust The key custodian.
- * @param key_namespace The key namespace.
- * @param keyState The state of the key.
- * @throws NullPointerException if any of key_cust or key_namespace is null.
- * @throws IllegalArgumentException if keyState is not a key management state.
+ * Constructs a new instance from a {@link ManagedKeyIdentity} without a key or metadata (used for
+ * client-side or key-management-state instances).
+ * @param fullIdentity The full key identity (custodian + namespace + partial identity).
+ * @param keyState The state of the key.
+ * @throws NullPointerException if any of fullIdentity or keyState is null.
*/
- public ManagedKeyData(byte[] key_cust, String key_namespace, ManagedKeyState keyState) {
- this(key_cust, key_namespace, keyState, EnvironmentEdgeManager.currentTime());
+ public ManagedKeyData(ManagedKeyIdentity fullIdentity, ManagedKeyState keyState) {
+ this(fullIdentity, keyState, EnvironmentEdgeManager.currentTime());
}
/**
- * Constructs a new instance for the given key management state.
- * @param key_cust The key custodian.
- * @param key_namespace The key namespace.
- * @param keyState The state of the key.
- * @throws NullPointerException if any of key_cust or key_namespace is null.
- * @throws IllegalArgumentException if keyState is not a key management state.
+ * Constructs a new instance from a {@link ManagedKeyIdentity} without a key or metadata (used for
+ * client-side or key-management-state instances).
+ * @param fullIdentity The full key identity (custodian + namespace + partial identity).
+ * @param keyState The state of the key.
+ * @param refreshTimestamp The refresh timestamp for the key.
+ * @throws NullPointerException if any of fullIdentity or keyState is null.
*/
- public ManagedKeyData(byte[] key_cust, String key_namespace, ManagedKeyState keyState,
+ public ManagedKeyData(ManagedKeyIdentity fullIdentity, ManagedKeyState keyState,
long refreshTimestamp) {
- Preconditions.checkNotNull(key_cust, "key_cust should not be null");
- Preconditions.checkNotNull(key_namespace, "key_namespace should not be null");
+ Preconditions.checkNotNull(fullIdentity, "fullIdentity should not be null");
Preconditions.checkNotNull(keyState, "keyState should not be null");
- Preconditions.checkArgument(ManagedKeyState.isKeyManagementState(keyState),
- "keyState must be a key management state, got: " + keyState);
- this.keyCustodian = key_cust;
- this.keyNamespace = key_namespace;
+ this.keyIdentity = fullIdentity;
this.keyState = keyState;
this.refreshTimestamp = refreshTimestamp;
this.theKey = null;
this.keyMetadata = null;
- this.keyMetadataHash = null;
}
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.UNITTEST)
public ManagedKeyData createClientFacingInstance() {
- return new ManagedKeyData(keyCustodian, keyNamespace, keyState.getExternalState(),
- keyMetadataHash, refreshTimestamp);
+ return new ManagedKeyData(keyIdentity, keyState, refreshTimestamp);
+ }
+
+ /** Returns The full key identity. */
+ public ManagedKeyIdentity getKeyIdentity() {
+ return keyIdentity;
}
/**
@@ -188,7 +193,7 @@ public ManagedKeyData createClientFacingInstance() {
* @return The key custodian as a byte array.
*/
public byte[] getKeyCustodian() {
- return keyCustodian;
+ return keyIdentity.copyCustodian();
}
/**
@@ -196,15 +201,26 @@ public byte[] getKeyCustodian() {
* @return the encoded key custodian
*/
public String getKeyCustodianEncoded() {
- return ManagedKeyProvider.encodeToStr(keyCustodian);
+ return keyIdentity.getCustodianEncoded();
}
/**
- * Returns the namespace associated with the key.
+ * Returns the namespace associated with the key as a String. Intended for logging, RPC responses,
+ * and external-facing display only. Internal code should use {@link #getKeyNamespaceBytes()} to
+ * avoid unnecessary String allocation.
* @return The namespace as a {@code String}.
*/
public String getKeyNamespace() {
- return keyNamespace;
+ return keyIdentity.getNamespaceString();
+ }
+
+ /**
+ * Returns the namespace associated with the key as a byte array. Use this in internal keymeta
+ * code (row-key building, cache keys, table accessor) to avoid String conversion.
+ * @return The namespace as a byte array.
+ */
+ public byte[] getKeyNamespaceBytes() {
+ return keyIdentity.copyNamespace();
}
/**
@@ -241,9 +257,10 @@ public long getRefreshTimestamp() {
@Override
public String toString() {
- return "ManagedKeyData{" + "keyCustodian=" + Arrays.toString(keyCustodian) + ", keyNamespace='"
- + keyNamespace + '\'' + ", keyState=" + keyState + ", keyMetadata='" + keyMetadata + '\''
- + ", refreshTimestamp=" + refreshTimestamp + ", keyChecksum=" + getKeyChecksum() + '}';
+ return "ManagedKeyData{" + "keyCustodian=" + Arrays.toString(getKeyCustodian())
+ + ", keyNamespace='" + getKeyNamespace() + '\'' + ", keyState=" + keyState + ", keyMetadata='"
+ + keyMetadata + '\'' + ", refreshTimestamp=" + refreshTimestamp + ", keyChecksum="
+ + getKeyChecksum() + '}';
}
/**
@@ -268,56 +285,40 @@ public static long constructKeyChecksum(byte[] data) {
}
/**
- * Computes the hash of the key metadata. If the hash has already been computed, this method
- * returns the previously computed value. The hash is computed using the MD5 algorithm.
- * @return The hash of the key metadata as a byte array.
+ * Returns the partial identity (digest of key metadata). If the digest has already been computed,
+ * this method returns the previously computed value.
+ * @return The partial identity as a byte array, or {@code null} if not available.
*/
- public byte[] getKeyMetadataHash() {
- return keyMetadataHash;
+ public byte[] getPartialIdentity() {
+ Bytes pi = keyIdentity.getPartialIdentityView();
+ return pi.getLength() == 0 ? null : pi.copyBytesIfNecessary();
}
/**
- * Return the hash of key metadata in Base64 encoded form.
- * @return the encoded hash or {@code null} if no metadata is available.
+ * Return the partial identity in Base64 encoded form.
+ * @return the encoded partial identity or {@code null} if no metadata is available.
*/
- public String getKeyMetadataHashEncoded() {
- byte[] hash = getKeyMetadataHash();
- if (hash != null) {
- return ManagedKeyProvider.encodeToStr(hash);
+ public String getPartialIdentityEncoded() {
+ byte[] id = getPartialIdentity();
+ if (id != null) {
+ return ManagedKeyProvider.encodeToStr(id);
}
return null;
}
- public static byte[] constructMetadataHash(String metadata) {
- MessageDigest md5;
- try {
- md5 = MessageDigest.getInstance("MD5");
- } catch (NoSuchAlgorithmException e) {
- throw new RuntimeException(e);
- }
- return md5.digest(metadata.getBytes());
- }
-
@Override
public boolean equals(Object o) {
- if (this == o) {
- return true;
- }
-
- if (o == null || getClass() != o.getClass()) {
- return false;
- }
+ if (this == o) return true;
+ if (o == null || getClass() != o.getClass()) return false;
ManagedKeyData that = (ManagedKeyData) o;
- return new EqualsBuilder().append(keyCustodian, that.keyCustodian)
- .append(keyNamespace, that.keyNamespace).append(theKey, that.theKey)
- .append(keyState, that.keyState).append(keyMetadata, that.keyMetadata).isEquals();
+ return Objects.equals(keyIdentity, that.keyIdentity) && Objects.equals(theKey, that.theKey)
+ && Objects.equals(keyState, that.keyState) && Objects.equals(keyMetadata, that.keyMetadata);
}
@Override
public int hashCode() {
- return new HashCodeBuilder(17, 37).append(keyCustodian).append(keyNamespace).append(theKey)
- .append(keyState).append(keyMetadata).toHashCode();
+ return Objects.hash(keyIdentity, theKey, keyState, keyMetadata);
}
}
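Aside: the removed `constructMetadataHash` computed an MD5 digest of the key metadata, and the partial identity that replaces it is likewise described above as a "digest of key metadata". A minimal standalone sketch of such a digest, assuming MD5 over UTF-8 bytes (the actual digest computed by `ManagedKeyIdentityUtils.buildIdentityFromMetadata` may differ):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class MetadataDigest {
  /**
   * Digests key metadata into a fixed-length partial identity.
   * Assumption: MD5 (16-byte output), as used by the removed constructMetadataHash.
   */
  static byte[] digest(String metadata) {
    try {
      return MessageDigest.getInstance("MD5").digest(metadata.getBytes(StandardCharsets.UTF_8));
    } catch (NoSuchAlgorithmException e) {
      throw new RuntimeException(e); // MD5 is available on every conformant JVM
    }
  }

  public static void main(String[] args) {
    byte[] id = digest("{\"alias\":\"k1\"}");
    // MD5 digests are always 16 bytes, regardless of input length.
    if (id.length != 16) {
      throw new AssertionError(id.length);
    }
    System.out.println(id.length); // prints 16
  }
}
```

A fixed-length digest keeps the partial-identity segment bounded, which matters for the single-byte length prefix used in the row-key encoding.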
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyProvider.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyProvider.java
index 308625fbfb17..41c9319d62ab 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyProvider.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyProvider.java
@@ -19,8 +19,11 @@
import edu.umd.cs.findbugs.annotations.NonNull;
import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
import java.util.Base64;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
import org.apache.yetus.audience.InterfaceAudience;
/**
@@ -45,22 +48,26 @@ public interface ManagedKeyProvider {
ManagedKeyData getSystemKey(byte[] systemId) throws IOException;
/**
- * Retrieve a managed key for the specified prefix. The returned key is expected to be in the
- * ACTIVE state, but it may be in any other state, such as FAILED and DISABLED. A key provider is
- * typically expected to return a key with no associated metadata, but it is not required, instead
- * it is permitted to throw IOException which will be treated as a retrieval FAILURE.
- * @param key_cust The key custodian.
- * @param key_namespace Key namespace
+ * Retrieve a managed key for the specified identity (custodian + namespace prefix). The returned
+ * key is expected to be in the ACTIVE state, but it may be in any other state, such as FAILED and
+ * DISABLED. A key provider is typically expected to return a key with no associated metadata,
+ * but this is not required; instead, it may throw an IOException, which will be treated as a
+ * retrieval FAILURE.
+ * @param keyIdentity custodian and namespace (partial identity segment is ignored); must not be
+ * {@code null}
* @return ManagedKeyData for the managed key and is expected to be not {@code null}
* @throws IOException if an error occurs while retrieving the key
*/
- ManagedKeyData getManagedKey(byte[] key_cust, String key_namespace) throws IOException;
+ ManagedKeyData getManagedKey(ManagedKeyIdentity keyIdentity) throws IOException;
/**
* Retrieve a key identified by the key metadata. The key metadata is typically generated by the
* same key provider via the {@link #getSystemKey(byte[])} or
- * {@link #getManagedKey(byte[], String)} methods. If key couldn't be retrieved using metadata and
- * the wrappedKey is provided, the implementation may try to decrypt it as a fallback operation.
+ * {@link #getManagedKey(ManagedKeyIdentity)} methods. If key couldn't be retrieved using metadata
+ * and the wrappedKey is provided, the implementation may try to decrypt it as a fallback
+ * operation.
+ * @param keyIdentity The key identity prefix or full identity associated with the request; may
+ *   be {@code null}.
* @param keyMetaData Key metadata, must not be {@code null}.
* @param wrappedKey The DEK key material encrypted with the corresponding KEK, if available.
* @return ManagedKeyData for the key represented by the metadata and is expected to be not
@@ -68,7 +75,8 @@ public interface ManagedKeyProvider {
* @throws IOException if an error occurs while generating the key
*/
@NonNull
- ManagedKeyData unwrapKey(String keyMetaData, byte[] wrappedKey) throws IOException;
+ ManagedKeyData unwrapKey(ManagedKeyIdentity keyIdentity, String keyMetaData, byte[] wrappedKey)
+ throws IOException;
/**
* Decode the given key custodian which is encoded as Base64 string.
@@ -96,4 +104,10 @@ static String encodeToStr(byte[] key_cust) {
return Base64.getEncoder().encodeToString(key_cust);
}
+ static String encodeToStr(byte[] data, int offset, int length) {
+ ByteBuffer buffer = ByteBuffer.wrap(data, offset, length);
+ ByteBuffer encodedBuffer = Base64.getEncoder().encode(buffer);
+ return new String(encodedBuffer.array(), 0, encodedBuffer.remaining(),
+ StandardCharsets.US_ASCII);
+ }
}
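The new `encodeToStr(byte[], int, int)` overload avoids copying the slice before Base64-encoding it, by wrapping the region in a `ByteBuffer`. A standalone sketch of the same technique using only the JDK (not the HBase classes):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class Base64Slice {
  /** Encodes data[offset, offset+length) as Base64 without first copying the slice. */
  static String encodeSlice(byte[] data, int offset, int length) {
    // Base64.Encoder.encode(ByteBuffer) allocates only the output buffer;
    // the input region is read in place.
    ByteBuffer encoded = Base64.getEncoder().encode(ByteBuffer.wrap(data, offset, length));
    return new String(encoded.array(), 0, encoded.remaining(), StandardCharsets.US_ASCII);
  }

  public static void main(String[] args) {
    byte[] data = "xxhelloyy".getBytes(StandardCharsets.US_ASCII);
    String sliced = encodeSlice(data, 2, 5);
    // Must match encoding a copied slice of the same region.
    String copied = Base64.getEncoder().encodeToString(Arrays.copyOfRange(data, 2, 7));
    if (!sliced.equals(copied)) {
      throw new AssertionError(sliced);
    }
    System.out.println(sliced); // prints aGVsbG8= (Base64 of "hello")
  }
}
```

US_ASCII is safe for the output string because the Base64 alphabet is pure ASCII.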
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyStoreKeyProvider.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyStoreKeyProvider.java
index 15e49bd692e4..1b014fbf24a2 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyStoreKeyProvider.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/io/crypto/ManagedKeyStoreKeyProvider.java
@@ -24,6 +24,9 @@
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.GsonUtil;
import org.apache.yetus.audience.InterfaceAudience;
@@ -62,16 +65,15 @@ public ManagedKeyData getSystemKey(byte[] clusterId) {
// Encode clusterId too for consistency with that of key custodian.
String keyMetadata = generateKeyMetadata(systemKeyAlias,
ManagedKeyProvider.encodeToStr(clusterId), ManagedKeyData.KEY_SPACE_GLOBAL);
- return new ManagedKeyData(clusterId, ManagedKeyData.KEY_SPACE_GLOBAL, key,
+ return new ManagedKeyData(clusterId, ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.copyBytes(), key,
ManagedKeyState.ACTIVE, keyMetadata);
}
@Override
- public ManagedKeyData getManagedKey(byte[] key_cust, String key_namespace) throws IOException {
+ public ManagedKeyData getManagedKey(ManagedKeyIdentity keyIdentity) throws IOException {
checkConfig();
- String encodedCust = ManagedKeyProvider.encodeToStr(key_cust);
-
- // Handle null key_namespace by defaulting to global namespace
+ String encodedCust = keyIdentity.getCustodianEncoded();
+ String key_namespace = keyIdentity.getNamespaceString();
if (key_namespace == null) {
key_namespace = ManagedKeyData.KEY_SPACE_GLOBAL;
}
@@ -85,15 +87,17 @@ public ManagedKeyData getManagedKey(byte[] key_cust, String key_namespace) throw
// If no alias is configured for this custodian+namespace combination, treat as key not found
if (alias == null) {
- return new ManagedKeyData(key_cust, key_namespace, null, ManagedKeyState.FAILED, keyMetadata);
+ return new ManagedKeyData(keyIdentity.getCustodianView(), keyIdentity.getNamespaceView(),
+ null, ManagedKeyState.FAILED, keyMetadata);
}
// Namespaces match, proceed to get the key
- return unwrapKey(keyMetadata, null);
+ return unwrapKey(keyIdentity, keyMetadata, null);
}
@Override
- public ManagedKeyData unwrapKey(String keyMetadataStr, byte[] wrappedKey) throws IOException {
+ public ManagedKeyData unwrapKey(ManagedKeyIdentity keyIdentity, String keyMetadataStr,
+ byte[] wrappedKey) throws IOException {
Map keyMetadata =
GsonUtil.getDefaultInstance().fromJson(keyMetadataStr, KEY_METADATA_TYPE);
String encodedCust = keyMetadata.get(KEY_METADATA_CUST);
@@ -104,14 +108,23 @@ public ManagedKeyData unwrapKey(String keyMetadataStr, byte[] wrappedKey) throws
}
String activeStatusConfKey = buildActiveStatusConfKey(encodedCust, namespace);
boolean isActive = conf.getBoolean(activeStatusConfKey, true);
- byte[] key_cust = ManagedKeyProvider.decodeToBytes(encodedCust);
String alias = keyMetadata.get(KEY_METADATA_ALIAS);
Key key = alias != null ? getKey(alias) : null;
+ ManagedKeyIdentity fullKeyIdentity;
+ if (keyIdentity == null) {
+ fullKeyIdentity = ManagedKeyIdentityUtils.buildIdentityFromMetadata(
+ ManagedKeyProvider.decodeToBytes(encodedCust), Bytes.toBytes(namespace), keyMetadataStr);
+ } else {
+ fullKeyIdentity = keyIdentity.getPartialIdentityLength() > 0
+ ? keyIdentity
+ : ManagedKeyIdentityUtils.buildIdentityFromMetadata(keyIdentity.getCustodianView(),
+ keyIdentity.getNamespaceView(), keyMetadataStr);
+ }
if (key != null) {
- return new ManagedKeyData(key_cust, namespace, key,
+ return new ManagedKeyData(fullKeyIdentity, key,
isActive ? ManagedKeyState.ACTIVE : ManagedKeyState.INACTIVE, keyMetadataStr);
}
- return new ManagedKeyData(key_cust, namespace, null,
+ return new ManagedKeyData(fullKeyIdentity, null,
isActive ? ManagedKeyState.FAILED : ManagedKeyState.DISABLED, keyMetadataStr);
}
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityBytesBacked.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityBytesBacked.java
new file mode 100644
index 000000000000..a8092650b4c5
--- /dev/null
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityBytesBacked.java
@@ -0,0 +1,189 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+/**
+ * Immutable value type representing a key identity (custodian + namespace + partial identity)
+ * backed by three {@link Bytes} objects. The backing {@link Bytes} objects will not be duplicated
+ * backed by three {@link Bytes} objects. The backing {@link Bytes} objects are not duplicated, to
+ * optimize memory usage, so only share {@link Bytes} objects that are truly immutable and owned by
+ * the caller. The same applies to the convenience constructor that takes byte arrays, which are
+ * simply wrapped in {@link Bytes} objects. The partial identity is optional but must not be
+ * {@code null}: pass a zero-length array, in which case {@link KeyIdentityPrefixBytesBacked} is more efficient.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class KeyIdentityBytesBacked implements ManagedKeyIdentity {
+ private final Bytes custodian;
+ private final Bytes namespace;
+ private final Bytes partialIdentity;
+
+ public KeyIdentityBytesBacked(byte[] custodian, byte[] namespace, byte[] partialIdentity) {
+ this(custodian == null ? null : new Bytes(custodian),
+ namespace == null ? null : new Bytes(namespace),
+ partialIdentity == null ? null : new Bytes(partialIdentity));
+ }
+
+ public KeyIdentityBytesBacked(Bytes custodian, Bytes namespace, Bytes partialIdentity) {
+ Preconditions.checkNotNull(custodian, "custodian cannot be null");
+ Preconditions.checkNotNull(namespace, "namespace cannot be null");
+ Preconditions.checkNotNull(partialIdentity, "partialIdentity cannot be null");
+ Preconditions.checkArgument(custodian.getLength() >= 1,
+ "Custodian length must be at least 1, got %s", custodian.getLength());
+ Preconditions.checkArgument(namespace.getLength() >= 1,
+ "Namespace length must be at least 1, got %s", namespace.getLength());
+ Preconditions.checkArgument(partialIdentity.getLength() >= 0,
+ "Partial identity length must be >= 0, got %s", partialIdentity.getLength());
+ this.custodian = custodian;
+ this.namespace = namespace;
+ this.partialIdentity = partialIdentity;
+ }
+
+ @Override
+ public Bytes getCustodianView() {
+ return custodian;
+ }
+
+ @Override
+ public Bytes getNamespaceView() {
+ return namespace;
+ }
+
+ @Override
+ public Bytes getPartialIdentityView() {
+ return partialIdentity;
+ }
+
+ @Override
+ public Bytes getFullIdentityView() {
+ return new Bytes(ManagedKeyIdentityUtils.constructRowKeyForIdentity(custodian.get(),
+ namespace.get(), partialIdentity.get()));
+ }
+
+ @Override
+ public Bytes getIdentityPrefixView() {
+ return new Bytes(
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(custodian.get(), namespace.get()));
+ }
+
+ @Override
+ public ManagedKeyIdentity getKeyIdentityPrefix() {
+ if (partialIdentity.getLength() == 0) {
+ return this;
+ }
+ return new KeyIdentityPrefixBytesBacked(custodian, namespace);
+ }
+
+ @Override
+ public byte[] copyCustodian() {
+ return custodian.copyBytes();
+ }
+
+ @Override
+ public byte[] copyNamespace() {
+ return namespace.copyBytes();
+ }
+
+ @Override
+ public byte[] copyPartialIdentity() {
+ return partialIdentity.copyBytes();
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return ManagedKeyIdentity.contentEquals(this, obj);
+ }
+
+ @Override
+ public int hashCode() {
+ return ManagedKeyIdentity.contentHashCode(this);
+ }
+
+ @Override
+ public KeyIdentityBytesBacked clone() {
+ return new KeyIdentityBytesBacked(custodian.clone(), namespace.clone(),
+ partialIdentity.clone());
+ }
+
+ @Override
+ public int getCustodianLength() {
+ return custodian.getLength();
+ }
+
+ @Override
+ public int getNamespaceLength() {
+ return namespace.getLength();
+ }
+
+ @Override
+ public int getPartialIdentityLength() {
+ return partialIdentity.getLength();
+ }
+
+ @Override
+ public String getCustodianEncoded() {
+ return ManagedKeyProvider.encodeToStr(custodian.get());
+ }
+
+ @Override
+ public String getNamespaceString() {
+ return Bytes.toString(namespace.get());
+ }
+
+ @Override
+ public String getPartialIdentityEncoded() {
+ return ManagedKeyProvider.encodeToStr(partialIdentity.get());
+ }
+
+ @Override
+ public int compareCustodian(byte[] otherCustodian) {
+ return compareCustodian(otherCustodian, 0, otherCustodian.length);
+ }
+
+ @Override
+ public int compareCustodian(byte[] otherCustodian, int otherOffset, int otherLength) {
+ return custodian.compareTo(otherCustodian, otherOffset, otherLength);
+ }
+
+ @Override
+ public int compareNamespace(byte[] otherNamespace) {
+ return compareNamespace(otherNamespace, 0, otherNamespace.length);
+ }
+
+ @Override
+ public int compareNamespace(byte[] otherNamespace, int otherOffset, int otherLength) {
+ return namespace.compareTo(otherNamespace, otherOffset, otherLength);
+ }
+
+ @Override
+ public int comparePartialIdentity(byte[] otherPartialIdentity) {
+ return comparePartialIdentity(otherPartialIdentity, 0, otherPartialIdentity.length);
+ }
+
+ @Override
+ public int comparePartialIdentity(byte[] otherPartialIdentity, int otherOffset, int otherLength) {
+ return partialIdentity.compareTo(otherPartialIdentity, otherOffset, otherLength);
+ }
+}
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityPrefixBytesBacked.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityPrefixBytesBacked.java
new file mode 100644
index 000000000000..e146e0b82f23
--- /dev/null
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentityPrefixBytesBacked.java
@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+/**
+ * Immutable value type for custodian + namespace only (same binary prefix as
+ * {@link ManagedKeyIdentityUtils#constructRowKeyForCustNamespace(byte[], byte[])}). Use as a
+ * {@link ManagedKeyIdentity} where partial identity is intentionally absent, equivalent to
+ * {@link KeyIdentityBytesBacked} with {@link ManagedKeyIdentity#KEY_NULL_IDENTITY_BYTES} for the
+ * partial segment, but without storing a third backing array, so it is more space efficient. The
+ * backing {@link Bytes} objects are not duplicated, to optimize memory usage, so only share
+ * {@link Bytes} objects that are truly immutable and owned by the caller. The same applies to the
+ * convenience constructor that takes byte arrays, as they are simply wrapped in
+ * {@link Bytes} objects.
+ *
+ * {@link #getPartialIdentityView()} returns {@link ManagedKeyIdentity#KEY_NULL_IDENTITY_BYTES} so
+ * {@link ManagedKeyIdentity#contentEquals} and {@link ManagedKeyIdentity#contentHashCode} remain
+ * consistent with a bytes-backed identity that uses that sentinel. Operations that copy, encode, or
+ * lexicographically compare the partial segment throw {@link UnsupportedOperationException} because
+ * there is no distinct partial payload (only the shared empty sentinel).
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class KeyIdentityPrefixBytesBacked implements ManagedKeyIdentity {
+ private static final String NO_PARTIAL =
+ "This identity has no partial identity segment; use " + ManagedKeyIdentity.class.getSimpleName()
+ + ".KEY_NULL_IDENTITY_BYTES via getPartialIdentityView() if an empty view is required";
+
+ private final Bytes custodian;
+ private final Bytes namespace;
+
+ public KeyIdentityPrefixBytesBacked(byte[] custodian, byte[] namespace) {
+ this(custodian == null ? null : new Bytes(custodian),
+ namespace == null ? null : new Bytes(namespace));
+ }
+
+ public KeyIdentityPrefixBytesBacked(Bytes custodian, Bytes namespace) {
+ Preconditions.checkNotNull(custodian, "custodian cannot be null");
+ Preconditions.checkNotNull(namespace, "namespace cannot be null");
+ Preconditions.checkArgument(custodian.getLength() >= 1,
+ "Custodian length must be at least 1, got %s", custodian.getLength());
+ Preconditions.checkArgument(namespace.getLength() >= 1,
+ "Namespace length must be at least 1, got %s", namespace.getLength());
+ this.custodian = custodian;
+ this.namespace = namespace;
+ }
+
+ @Override
+ public Bytes getCustodianView() {
+ return custodian;
+ }
+
+ @Override
+ public Bytes getNamespaceView() {
+ return namespace;
+ }
+
+ @Override
+ public Bytes getPartialIdentityView() {
+ return ManagedKeyIdentity.KEY_NULL_IDENTITY_BYTES;
+ }
+
+ @Override
+ public Bytes getFullIdentityView() {
+ throw new UnsupportedOperationException(NO_PARTIAL);
+ }
+
+ @Override
+ public Bytes getIdentityPrefixView() {
+ return new Bytes(
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(custodian.get(), namespace.get()));
+ }
+
+ @Override
+ public ManagedKeyIdentity getKeyIdentityPrefix() {
+ return this;
+ }
+
+ @Override
+ public byte[] copyCustodian() {
+ return custodian.copyBytes();
+ }
+
+ @Override
+ public byte[] copyNamespace() {
+ return namespace.copyBytes();
+ }
+
+ @Override
+ public byte[] copyPartialIdentity() {
+ throw new UnsupportedOperationException(NO_PARTIAL);
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return ManagedKeyIdentity.contentEquals(this, obj);
+ }
+
+ @Override
+ public int hashCode() {
+ return ManagedKeyIdentity.contentHashCode(this);
+ }
+
+ @Override
+ public KeyIdentityPrefixBytesBacked clone() {
+ return new KeyIdentityPrefixBytesBacked(custodian.clone(), namespace.clone());
+ }
+
+ @Override
+ public int getCustodianLength() {
+ return custodian.getLength();
+ }
+
+ @Override
+ public int getNamespaceLength() {
+ return namespace.getLength();
+ }
+
+ @Override
+ public int getPartialIdentityLength() {
+ return 0;
+ }
+
+ @Override
+ public String getCustodianEncoded() {
+ return ManagedKeyProvider.encodeToStr(custodian.get());
+ }
+
+ @Override
+ public String getNamespaceString() {
+ return Bytes.toString(namespace.get());
+ }
+
+ @Override
+ public String getPartialIdentityEncoded() {
+ throw new UnsupportedOperationException(NO_PARTIAL);
+ }
+
+ @Override
+ public int compareCustodian(byte[] otherCustodian) {
+ return compareCustodian(otherCustodian, 0, otherCustodian.length);
+ }
+
+ @Override
+ public int compareCustodian(byte[] otherCustodian, int otherOffset, int otherLength) {
+ return custodian.compareTo(otherCustodian, otherOffset, otherLength);
+ }
+
+ @Override
+ public int compareNamespace(byte[] otherNamespace) {
+ return compareNamespace(otherNamespace, 0, otherNamespace.length);
+ }
+
+ @Override
+ public int compareNamespace(byte[] otherNamespace, int otherOffset, int otherLength) {
+ return namespace.compareTo(otherNamespace, otherOffset, otherLength);
+ }
+
+ @Override
+ public int comparePartialIdentity(byte[] otherPartialIdentity) {
+ throw new UnsupportedOperationException(NO_PARTIAL);
+ }
+
+ @Override
+ public int comparePartialIdentity(byte[] otherPartialIdentity, int otherOffset, int otherLength) {
+ throw new UnsupportedOperationException(NO_PARTIAL);
+ }
+}
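The identity classes in this patch share a length-prefixed row-key layout: `custLen, cust, nsLen, ns, partialLen, partial`, each length a single unsigned byte (per the `KeyIdentitySingleArrayBacked` javadoc, with the partial segment optionally absent for prefix marker rows). A hedged standalone sketch of encoding and parsing that layout, independent of `ManagedKeyIdentityUtils` (whose real row keys may add further framing):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class IdentityLayout {
  /** Builds [custLen][cust][nsLen][ns][partialLen][partial]; each segment must fit in 255 bytes. */
  static byte[] encode(byte[] cust, byte[] ns, byte[] partial) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    out.write(cust.length);                 // one-byte length prefix
    out.write(cust, 0, cust.length);
    out.write(ns.length);
    out.write(ns, 0, ns.length);
    out.write(partial.length);              // 0 is allowed: partial identity is optional
    out.write(partial, 0, partial.length);
    return out.toByteArray();
  }

  /** Extracts the namespace segment by skipping over the custodian segment. */
  static byte[] namespaceOf(byte[] identity) {
    int custLen = identity[0] & 0xFF;       // unsigned read of the length byte
    int nsOff = 1 + custLen;                // offset of the namespace length byte
    int nsLen = identity[nsOff] & 0xFF;
    return Arrays.copyOfRange(identity, nsOff + 1, nsOff + 1 + nsLen);
  }

  public static void main(String[] args) {
    byte[] id = encode("cust1".getBytes(StandardCharsets.UTF_8),
      "ns1".getBytes(StandardCharsets.UTF_8), new byte[0]);
    System.out.println(new String(namespaceOf(id), StandardCharsets.UTF_8)); // prints ns1
  }
}
```

Because every segment carries its own length prefix, a custodian+namespace prefix is itself a valid byte prefix of the full identity, which is what makes prefix scans over the keymeta table possible.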
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentitySingleArrayBacked.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentitySingleArrayBacked.java
new file mode 100644
index 000000000000..d4162c63ab2c
--- /dev/null
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeyIdentitySingleArrayBacked.java
@@ -0,0 +1,288 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+/**
+ * Immutable value type representing a key identity (custodian + namespace + optional partial
+ * identity) backed by a single byte array. The backing byte array will not be duplicated for the
+ * sake of optimizing the memory usage, so only share those byte arrays that are truly immutable and
+ * owned by the caller. For the format of the byte array refer to
+ * {@link ManagedKeyIdentityUtils#constructRowKeyForIdentity}. In addition to a 0-length partial
+ * identity encoding, this implementation also works with a partial identity that is completely
+ * missing, so it works with marker rows created by
+ * {@link ManagedKeyIdentityUtils#constructRowKeyForCustNamespace}.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public final class KeyIdentitySingleArrayBacked implements ManagedKeyIdentity {
+
+  /** Backing key full identity bytes; never {@code null}. */
+ private final byte[] keyIdentity;
+ private final int offset;
+ private final int length;
+
+ /**
+ * Creates a KeyIdentity backed by the given key full identity bytes. Validates format and parses
+ * segment offsets/lengths; no array copies.
+ * @param keyFullIdentity the key full identity byte array (format: custLen, cust, nsLen, ns,
+ * partialLen, partial)
+ * @throws IllegalArgumentException if keyFullIdentity is null or malformed
+ */
+ public KeyIdentitySingleArrayBacked(byte[] keyFullIdentity) {
+ this(keyFullIdentity, 0, keyFullIdentity != null ? keyFullIdentity.length : 0);
+ }
+
+ /**
+ * Creates a KeyIdentity backed by a slice of the given array.
+ * @param keyFullIdentity the key full identity byte array
+ * @param offset start offset
+ * @param length length of the key full identity segment
+ */
+ public KeyIdentitySingleArrayBacked(byte[] keyFullIdentity, int offset, int length) {
+ Preconditions.checkArgument(keyFullIdentity != null, "keyFullIdentity cannot be null");
+ int minLen = 1 + 1 + 1 + 1; // custLen byte + min 1 cust + nsLen byte + min 1 ns
+ Preconditions.checkArgument(length >= minLen, "keyFullIdentity appears to be too short: %s",
+ length);
+ this.keyIdentity = keyFullIdentity;
+ this.offset = offset;
+ this.length = length;
+ }
+
+ /**
+ * Returns the backing key full identity byte array (never null). When offset is 0 and length
+ * equals the array length, this is directly usable as the row key for a Get.
+ */
+ public byte[] getBackingArray() {
+ return keyIdentity;
+ }
+
+ /** Returns the offset into the backing array. */
+ public int getOffset() {
+ return offset;
+ }
+
+ /** Returns the length of the key full identity segment. */
+ public int getLength() {
+ return length;
+ }
+
+ /** Returns a Bytes view of the custodian segment (no copy). */
+ @Override
+ public Bytes getCustodianView() {
+ int off = getCustodianOffset();
+ int custLen = keyIdentity[off++] & 0xFF;
+ return new Bytes(keyIdentity, off, custLen);
+ }
+
+ /** Returns a Bytes view of the namespace segment (no copy). */
+ @Override
+ public Bytes getNamespaceView() {
+ int off = getNamespaceOffset();
+ int nsLen = keyIdentity[off++] & 0xFF;
+ return new Bytes(keyIdentity, off, nsLen);
+ }
+
+ /** Returns a Bytes view of the partial identity segment (no copy). */
+ @Override
+ public Bytes getPartialIdentityView() {
+ int off = getPartialIdentityOffset();
+ if (off == -1) {
+ return ManagedKeyIdentity.KEY_NULL_IDENTITY_BYTES;
+ }
+ int partialLen = keyIdentity[off++] & 0xFF;
+ return new Bytes(keyIdentity, off, partialLen);
+ }
+
+ @Override
+ public Bytes getFullIdentityView() {
+ return new Bytes(keyIdentity, offset, length);
+ }
+
+ @Override
+ public Bytes getIdentityPrefixView() {
+ int prefixEndExclusive = getNamespaceOffset() + 1 + getNamespaceLength();
+ return new Bytes(keyIdentity, offset, prefixEndExclusive - offset);
+ }
+
+ @Override
+ public ManagedKeyIdentity getKeyIdentityPrefix() {
+ int partialOffset = getPartialIdentityOffset();
+ if (partialOffset == -1) {
+ return this;
+ }
+ return new KeyIdentitySingleArrayBacked(keyIdentity, offset, partialOffset - offset);
+ }
+
+ /** Returns a copy of the custodian bytes (owned). */
+ @Override
+ public byte[] copyCustodian() {
+ return getCustodianView().copyBytes();
+ }
+
+ /** Returns a copy of the namespace bytes (owned). */
+ @Override
+ public byte[] copyNamespace() {
+ return getNamespaceView().copyBytes();
+ }
+
+ /** Returns a copy of the partial identity bytes (owned). */
+ @Override
+ public byte[] copyPartialIdentity() {
+ return getPartialIdentityView().copyBytes();
+ }
+
+ @Override
+ public boolean equals(Object obj) {
+ return ManagedKeyIdentity.contentEquals(this, obj);
+ }
+
+ @Override
+ public int hashCode() {
+ return ManagedKeyIdentity.contentHashCode(this);
+ }
+
+ private int getCustodianOffset() {
+ int off = offset;
+ int custLen = keyIdentity[off] & 0xFF;
+ Preconditions.checkArgument(custLen >= 1, "Custodian length must be at least 1, got %s",
+ custLen);
+    Preconditions.checkArgument(off + 1 + custLen <= offset + length,
+      "keyIdentity too short for custodian length expected %s, got %s", off + 1 + custLen,
+      offset + length);
+ return off;
+ }
+
+ private int getNamespaceOffset() {
+ int off = getCustodianOffset();
+ int custLen = keyIdentity[off++] & 0xFF;
+ off += custLen;
+ int nsLen = keyIdentity[off] & 0xFF;
+ Preconditions.checkArgument(nsLen >= 1, "Namespace length must be at least 1, got %s", nsLen);
+    Preconditions.checkArgument(off + 1 + nsLen <= offset + length,
+      "keyIdentity too short for namespace length expected %s, got %s", off + 1 + nsLen,
+      offset + length);
+ return off;
+ }
+
+ private int getPartialIdentityOffset() {
+ int off = getNamespaceOffset();
+ int nsLen = keyIdentity[off++] & 0xFF;
+ off += nsLen;
+    if (off >= offset + length) {
+ return -1;
+ }
+ int partialLen = keyIdentity[off] & 0xFF;
+ Preconditions.checkArgument(partialLen >= 0,
+ "Partial identity length must be at least 0, got %s", partialLen);
+ Preconditions.checkArgument(off + 1 + partialLen == offset + length,
+      "keyIdentity too short for partialIdentity length expected %s, got %s", off + 1 + partialLen,
+ offset + length);
+ return off;
+ }
+
+ @Override
+ public int getCustodianLength() {
+ return keyIdentity[getCustodianOffset()] & 0xFF;
+ }
+
+ @Override
+ public int getNamespaceLength() {
+ return keyIdentity[getNamespaceOffset()] & 0xFF;
+ }
+
+ @Override
+ public int getPartialIdentityLength() {
+ int off = getPartialIdentityOffset();
+ if (off == -1) {
+ return 0;
+ }
+ return keyIdentity[off] & 0xFF;
+ }
+
+ @Override
+ public KeyIdentitySingleArrayBacked clone() {
+ return new KeyIdentitySingleArrayBacked(Bytes.copy(keyIdentity, offset, length), 0, length);
+ }
+
+ @Override
+ public String getCustodianEncoded() {
+ return ManagedKeyProvider.encodeToStr(keyIdentity, getCustodianOffset() + 1,
+ getCustodianLength());
+ }
+
+ @Override
+ public String getNamespaceString() {
+ return Bytes.toString(keyIdentity, getNamespaceOffset() + 1, getNamespaceLength());
+ }
+
+ @Override
+ public String getPartialIdentityEncoded() {
+ int off = getPartialIdentityOffset();
+ if (off == -1) {
+ return null;
+ }
+ return ManagedKeyProvider.encodeToStr(keyIdentity, off + 1, getPartialIdentityLength());
+ }
+
+ @Override
+ public int compareCustodian(byte[] otherCustodian) {
+ return compareCustodian(otherCustodian, 0, otherCustodian.length);
+ }
+
+ @Override
+ public int compareCustodian(byte[] otherCustodian, int otherOffset, int otherLength) {
+ return Bytes.compareTo(keyIdentity, getCustodianOffset() + 1, getCustodianLength(),
+ otherCustodian, otherOffset, otherLength);
+ }
+
+ @Override
+ public int compareNamespace(byte[] otherNamespace) {
+ return compareNamespace(otherNamespace, 0, otherNamespace.length);
+ }
+
+ @Override
+ public int compareNamespace(byte[] otherNamespace, int otherOffset, int otherLength) {
+ return Bytes.compareTo(keyIdentity, getNamespaceOffset() + 1, getNamespaceLength(),
+ otherNamespace, otherOffset, otherLength);
+ }
+
+ @Override
+ public int comparePartialIdentity(byte[] otherPartialIdentity) {
+ return comparePartialIdentity(otherPartialIdentity, 0, otherPartialIdentity.length);
+ }
+
+ @Override
+ public int comparePartialIdentity(byte[] otherPartialIdentity, int otherOffset, int otherLength) {
+ int off = getPartialIdentityOffset();
+ if (off == -1) { // partial identity segment omitted (length 0).
+ if (otherLength == 0) {
+ return 0;
+ }
+ return -1;
+ }
+ return Bytes.compareTo(keyIdentity, off + 1, getPartialIdentityLength(), otherPartialIdentity,
+ otherOffset, otherLength);
+ }
+}
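The offset arithmetic above walks a single backing array laid out as [custLen (1 byte)][custodian][nsLen (1 byte)][namespace][piLen (1 byte)][partialIdentity], with the trailing partial-identity segment omitted entirely for prefix rows. A minimal standalone sketch of that walk (the class and helper names below are illustrative, not part of this patch):

```java
import java.util.Arrays;

// Illustrative parser for the single-array identity layout described above.
public class IdentityLayoutSketch {
  /** Returns { custodian, namespace, partialIdentity } parsed from the layout. */
  public static byte[][] parse(byte[] id) {
    int off = 0;
    int custLen = id[off++] & 0xFF; // custodian length byte
    byte[] cust = Arrays.copyOfRange(id, off, off + custLen);
    off += custLen;
    int nsLen = id[off++] & 0xFF; // namespace length byte
    byte[] ns = Arrays.copyOfRange(id, off, off + nsLen);
    off += nsLen;
    byte[] pi = new byte[0];
    if (off < id.length) { // partial identity segment present
      int piLen = id[off++] & 0xFF;
      pi = Arrays.copyOfRange(id, off, off + piLen);
    }
    return new byte[][] { cust, ns, pi };
  }

  public static void main(String[] args) {
    byte[] id = { 2, 'c', 'u', 3, 'n', 's', 'p', 2, 7, 9 };
    byte[][] parts = parse(id);
    assert Arrays.equals(parts[0], new byte[] { 'c', 'u' });
    assert Arrays.equals(parts[1], new byte[] { 'n', 's', 'p' });
    assert Arrays.equals(parts[2], new byte[] { 7, 9 });
    // Prefix row: partial identity segment omitted entirely.
    assert parse(new byte[] { 1, 'c', 1, 'n' })[2].length == 0;
  }
}
```

Each length is an unsigned byte (`& 0xFF`), which is why every segment is capped at 255 bytes.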
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdmin.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdmin.java
index 4506de9e9d2e..2ae91d4bb1b9 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdmin.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdmin.java
@@ -21,15 +21,14 @@
import java.security.KeyException;
import java.util.List;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.yetus.audience.InterfaceStability;
/**
* KeymetaAdmin is an interface for administrative functions related to managed keys. It handles the
* following methods:
*/
@InterfaceAudience.Public
-@InterfaceStability.Evolving
public interface KeymetaAdmin {
/**
* Enables key management for the specified custodian and namespace.
@@ -78,13 +77,15 @@ void ejectManagedKeyDataCacheEntry(byte[] keyCustodian, String keyNamespace, Str
void clearManagedKeyDataCache() throws IOException;
/**
- * Disables key management for the specified custodian and namespace. This marks any ACTIVE keys
- * as INACTIVE and adds a DISABLED state marker such that no new ACTIVE key is retrieved, so the
- * new data written will not be encrypted.
+ * Disables key management for the specified custodian and namespace. First adds a DISABLED state
+ * marker for the (custodian, namespace) so no new ACTIVE key is retrieved; then enumerates all
+ * managed keys for that scope and disables each via the same semantics as
+ * {@link #disableManagedKey(byte[], String, byte[])}. New data written for this scope will not be
+ * encrypted.
* @param keyCust The key custodian identifier.
* @param keyNamespace The namespace for the key management.
- * @return The {@link ManagedKeyData} object identifying the previously active key and its current
- * state.
+ * @return The {@link ManagedKeyData} object for the DISABLED state marker (or a synthetic marker
+ * if read-back is null).
* @throws IOException if an error occurs while disabling key management.
* @throws KeyException if an error occurs while disabling key management.
*/
@@ -123,4 +124,19 @@ ManagedKeyData rotateManagedKey(byte[] keyCust, String keyNamespace)
* @throws KeyException if an error occurs while refreshing managed keys.
*/
void refreshManagedKeys(byte[] keyCust, String keyNamespace) throws IOException, KeyException;
+
+ /**
+   * Resolves {@code keyMetadata} through the configured managed key provider (with no wrapped key
+   * bytes), then persists the result via {@code addKey}. The key is recorded as {@code ACTIVE} and
+   * is used to encrypt new writes, as in other admin flows.
+ * @param keyCust key custodian; must match the unwrapped key's custodian
+ * @param keyNamespace key namespace; must match the unwrapped key's namespace
+ * @param keyMetadata argument to
+ * {@link ManagedKeyProvider#unwrapKey(ManagedKeyIdentity, String, byte[])}
+ * @return persisted key data
+ * @throws IOException if metadata is empty, persistence fails, or an I/O error occurs
+ * @throws KeyException if the provider returns an invalid key or scope mismatches the request
+ */
+ ManagedKeyData setManagedKey(byte[] keyCust, String keyNamespace, String keyMetadata)
+ throws IOException, KeyException;
}
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentity.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentity.java
new file mode 100644
index 000000000000..6ef49dca30a6
--- /dev/null
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentity.java
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import java.util.Objects;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.apache.yetus.audience.InterfaceStability;
+
+/**
+ * Abstraction representing a key identity (custodian + namespace + optional partial identity). The
+ * underlying representation can be anything: a single byte array, or multiple individual byte
+ * arrays. The interface is designed so that byte arrays can be passed around with the least amount
+ * of copying.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public interface ManagedKeyIdentity extends Cloneable {
+
+ Bytes KEY_NULL_IDENTITY_BYTES = new Bytes(new byte[0]);
+
+ /** Returns a Bytes view of the custodian segment (no copy). */
+ public Bytes getCustodianView();
+
+ /** Returns a Bytes view of the namespace segment (no copy). */
+ public Bytes getNamespaceView();
+
+ /** Returns a Bytes view of the partial identity segment (no copy). */
+ public Bytes getPartialIdentityView();
+
+ /** Returns a Bytes view of the full identity avoiding a copy when possible. */
+ public Bytes getFullIdentityView();
+
+ /**
+ * Returns a Bytes view of the identity prefix suitable for use as a prefix filter to scan for all
+ * keys with the same custodian and namespace.
+ */
+ public Bytes getIdentityPrefixView();
+
+ /**
+ * Returns a {@link ManagedKeyIdentity} representing only the custodian and namespace segments
+ * (partial identity stripped). Implementations should avoid byte copies when possible.
+ *
+ * If this instance has no partial identity segment, this method should return {@code this}.
+ */
+ public ManagedKeyIdentity getKeyIdentityPrefix();
+
+ /** Returns the length of the custodian segment. */
+ public int getCustodianLength();
+
+ /** Returns the length of the namespace segment. */
+ public int getNamespaceLength();
+
+ /** Returns the length of the partial identity segment. */
+ public int getPartialIdentityLength();
+
+ /** Returns a copy of the custodian bytes (owned). */
+ public byte[] copyCustodian();
+
+ /** Returns a copy of the namespace bytes (owned). */
+ public byte[] copyNamespace();
+
+ /** Returns a copy of the partial identity bytes (owned). */
+ public byte[] copyPartialIdentity();
+
+ /** Returns the custodian encoded as a Base64 string. */
+ public String getCustodianEncoded();
+
+ /** Returns the namespace as a string. */
+ public String getNamespaceString();
+
+ /** Returns the partial identity encoded as a Base64 string. */
+ public String getPartialIdentityEncoded();
+
+  /** Clones this ManagedKeyIdentity. */
+ public ManagedKeyIdentity clone();
+
+ /** Compares the custodian bytes with the other custodian bytes. */
+ public int compareCustodian(byte[] otherCustodian);
+
+ /** Compares the custodian bytes with the other custodian bytes. */
+ public int compareCustodian(byte[] otherCustodian, int otherOffset, int otherLength);
+
+ /** Compares the namespace bytes with the other namespace bytes. */
+ public int compareNamespace(byte[] otherNamespace);
+
+ /** Compares the namespace bytes with the other namespace bytes. */
+ public int compareNamespace(byte[] otherNamespace, int otherOffset, int otherLength);
+
+ /** Compares the partial identity bytes with the other partial identity bytes. */
+ public int comparePartialIdentity(byte[] otherPartialIdentity);
+
+ /** Compares the partial identity bytes with the other partial identity bytes. */
+ public int comparePartialIdentity(byte[] otherPartialIdentity, int otherOffset, int otherLength);
+
+ /**
+ * Content-based equality so that all implementations are interchangeable as map keys. Two
+ * identities are equal iff their custodian, namespace, and partial identity bytes are equal.
+ * Implementations should override {@link Object#equals} and delegate to this method.
+ */
+ static boolean contentEquals(ManagedKeyIdentity self, Object obj) {
+ if (self == obj) {
+ return true;
+ }
+ if (!(obj instanceof ManagedKeyIdentity)) {
+ return false;
+ }
+ ManagedKeyIdentity that = (ManagedKeyIdentity) obj;
+ return self.getCustodianView().equals(that.getCustodianView())
+ && self.getNamespaceView().equals(that.getNamespaceView())
+ && self.getPartialIdentityView().equals(that.getPartialIdentityView());
+ }
+
+ /**
+   * Content-based hash so that all implementations are interchangeable as map keys. Uses
+   * {@link Objects#hash} over the three Bytes views; each view's hashCode is content-based.
+   * Implementations should override {@link Object#hashCode} and delegate to this method.
+ */
+ static int contentHashCode(ManagedKeyIdentity self) {
+ return Objects.hash(self.getCustodianView(), self.getNamespaceView(),
+ self.getPartialIdentityView());
+ }
+}
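The content-based equals/hashCode contract above is what lets differently backed implementations stand in for one another as map keys. A small JDK analogy using `ByteBuffer`, whose equals/hashCode are likewise content-based over the remaining bytes (this is an illustration, not code from the patch):

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Two views with different backing arrays but the same content behave as
// the same map key, mirroring the contentEquals/contentHashCode guarantee.
public class ContentKeySketch {
  public static void main(String[] args) {
    byte[] big = { 0, 1, 2, 3, 4 };
    ByteBuffer sliceBacked = ByteBuffer.wrap(big, 1, 3).slice(); // view of {1,2,3}
    ByteBuffer arrayBacked = ByteBuffer.wrap(new byte[] { 1, 2, 3 });

    Map<ByteBuffer, String> map = new HashMap<>();
    map.put(sliceBacked, "hit");
    assert sliceBacked.equals(arrayBacked);
    assert sliceBacked.hashCode() == arrayBacked.hashCode();
    assert "hit".equals(map.get(arrayBacked)); // different backing, same key
  }
}
```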
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentityUtils.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentityUtils.java
new file mode 100644
index 000000000000..fb186e8489de
--- /dev/null
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyIdentityUtils.java
@@ -0,0 +1,210 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import java.nio.charset.StandardCharsets;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.LinkedHashSet;
+import java.util.List;
+import java.util.Set;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.io.crypto.DigestAlgorithms;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
+/**
+ * Binary encoding for key full identity row keys (custodian + namespace + partial identity). Shared
+ * by {@link KeyIdentitySingleArrayBacked} and server-side persistence utilities.
+ */
+@InterfaceAudience.Private
+public final class ManagedKeyIdentityUtils {
+ private static final Logger LOG = LoggerFactory.getLogger(ManagedKeyIdentityUtils.class);
+ private static final short MAX_UNSIGNED_BYTE = 255;
+ /** Cached digest algorithm list; parsed once on first use and not updated for JVM life. */
+  private static List<DigestAlgorithms> DIGEST_ALGOS = null;
+
+ private ManagedKeyIdentityUtils() {
+ }
+
+ /**
+ * Build row key prefix for custodian + namespace. Format: [custLen (1 byte)][keyCust][nsLen (1
+ * byte)][keyNamespace bytes].
+ */
+ public static byte[] constructRowKeyForCustNamespace(byte[] keyCust, byte[] keyNamespace) {
+ return constructRowKey(keyCust, keyNamespace, null);
+ }
+
+ /**
+ * Build full row key for identity row. Format: prefix from
+   * {@link #constructRowKeyForCustNamespace} + [partialIdentityLen (1 byte)][partialIdentity].
+   * Partial identity length is encoded in a single byte (max 255); when the partial identity is
+   * 0-length, both the length byte and the segment are omitted.
+ * @param keyCustodian key custodian bytes
+ * @param keyNamespace key namespace bytes
+   * @param partialIdentity partial identity bytes (digest of metadata); may be 0-length for a
+   *                        marker row.
+ * @return full identity byte array suitable for SystemKeyCache and HFile trailer
+ */
+ public static byte[] constructRowKeyForIdentity(byte[] keyCustodian, byte[] keyNamespace,
+ byte[] partialIdentity) {
+ Preconditions.checkNotNull(partialIdentity, "partialIdentity cannot be null");
+ Preconditions.checkArgument(
+ partialIdentity.length >= 0 && partialIdentity.length <= MAX_UNSIGNED_BYTE,
+ "Partial identity length must be 0-255, got %s", partialIdentity.length);
+ return constructRowKey(keyCustodian, keyNamespace, partialIdentity);
+ }
+
+ private static byte[] constructRowKey(byte[] keyCustodian, byte[] keyNamespace,
+ byte[] partialIdentity) {
+ validateCustodianAndNamespaceLength(keyCustodian, keyNamespace);
+ int nsLen = keyNamespace.length;
+ int piLen = partialIdentity == null ? 0 : partialIdentity.length;
+ byte[] result = new byte[1 + keyCustodian.length + 1 + nsLen + (piLen > 0 ? 1 : 0) + piLen];
+ int off = 0;
+ result[off++] = (byte) keyCustodian.length;
+ System.arraycopy(keyCustodian, 0, result, off, keyCustodian.length);
+ off += keyCustodian.length;
+ result[off++] = (byte) nsLen;
+ System.arraycopy(keyNamespace, 0, result, off, nsLen);
+ off += nsLen;
+ if (piLen > 0) {
+ result[off++] = (byte) piLen;
+ System.arraycopy(partialIdentity, 0, result, off, piLen);
+ }
+ return result;
+ }
+
+ /**
+ * Validates that key custodian and key namespace length are between 1 and the maximum allowed.
+ */
+ private static void validateCustodianAndNamespaceLength(byte[] keyCust, byte[] keyNamespace) {
+ Preconditions.checkArgument(keyCust != null, "Key custodian cannot be null");
+ Preconditions.checkArgument(keyNamespace != null, "Key namespace cannot be null");
+ Preconditions.checkArgument(keyCust.length >= 1 && keyCust.length <= MAX_UNSIGNED_BYTE,
+ "Key custodian length must be 1-%s, got %s", MAX_UNSIGNED_BYTE, keyCust.length);
+ Preconditions.checkArgument(
+ keyNamespace.length >= 1 && keyNamespace.length <= MAX_UNSIGNED_BYTE,
+ "Key namespace length must be 1-%s, got %s", MAX_UNSIGNED_BYTE, keyNamespace.length);
+ }
+
+ /**
+ * Construct the partial identity (digest) for the given metadata string. Uses default algorithm
+ * (xxh3) when no configuration is available. The result is prefixed with a single byte encoding
+   * the algorithm(s) used (bitwise OR of DigestAlgorithms bit positions).
+ * @param metadata the key metadata string
+ * @return partial identity bytes: [algoSelectorByte][digest1][digest2...]
+ */
+ public static byte[] constructMetadataHash(String metadata) {
+ byte[] input = metadata.getBytes(StandardCharsets.UTF_8);
+ int totalDigestSize = 0;
+ for (DigestAlgorithms a : getDigestAlgos()) {
+ totalDigestSize += a.getDigestSizeBytes();
+ }
+ byte[] result = new byte[1 + totalDigestSize];
+ byte selector = 0;
+ int outOff = 1;
+ for (DigestAlgorithms a : getDigestAlgos()) {
+ selector |= a.getBitPosition();
+ a.digest(input, 0, input.length, result, outOff);
+ outOff += a.getDigestSizeBytes();
+ }
+ result[0] = selector;
+ return result;
+ }
+
+  public static List<DigestAlgorithms> getDigestAlgos() {
+ if (DIGEST_ALGOS == null) {
+ initDigestAlgos(null);
+ }
+ return DIGEST_ALGOS;
+ }
+
+ /**
+ * Initializes the list of digest algorithms to use (up to 2), sorted by bitPosition. Dedupes by
+ * algorithm and picks first 2; logs a warning if config lists more than 2. Parsed once on first
+ * use and cached for the life of the JVM.
+ * @param conf the configuration to use
+ */
+ public static void initDigestAlgos(Configuration conf) {
+ if (DIGEST_ALGOS == null) {
+ String algoList = conf != null
+ ? conf.get(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY,
+ HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_DEFAULT)
+ : HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_DEFAULT;
+ String[] names = algoList.split(",");
+      Set<DigestAlgorithms> deduped = new LinkedHashSet<>();
+ for (String name : names) {
+ DigestAlgorithms a = DigestAlgorithms.fromName(name);
+ if (a != null) {
+ deduped.add(a);
+ }
+ }
+      List<DigestAlgorithms> sorted = new ArrayList<>(deduped);
+ sorted.sort(Comparator.comparingInt(a -> a.getBitPosition() & 0xFF));
+ if (sorted.size() > 2) {
+ LOG.warn(
+ "Configured digest algorithms list has more than 2 entries; using first 2 by bitPosition: "
+ + sorted.get(0).name() + ", " + sorted.get(1).name());
+ sorted = sorted.subList(0, 2);
+ }
+ if (sorted.isEmpty()) {
+ sorted = new ArrayList<>();
+ sorted.add(DigestAlgorithms.XXH3);
+ }
+ DIGEST_ALGOS = Collections.unmodifiableList(sorted);
+ }
+ }
+
+ /**
+ * Creates a {@link ManagedKeyIdentity} from custodian, namespace, and key metadata. The partial
+ * identity is computed as the hash of the metadata string via {@link #constructMetadataHash}.
+ * @param custodian custodian bytes
+ * @param namespace namespace bytes
+ * @param metadata key metadata string
+   * @return ManagedKeyIdentity for the given custodian, namespace, and metadata
+ */
+ public static ManagedKeyIdentity fullKeyIdentityFromMetadata(Bytes custodian, Bytes namespace,
+ String metadata) {
+ Preconditions.checkNotNull(custodian, "custodian should not be null");
+ Preconditions.checkNotNull(namespace, "namespace should not be null");
+ Preconditions.checkNotNull(metadata, "metadata should not be null");
+ return new KeyIdentityBytesBacked(custodian, namespace,
+ new Bytes(constructMetadataHash(metadata)));
+ }
+
+ public static ManagedKeyIdentity buildIdentityFromMetadata(byte[] key_cust, byte[] key_namespace,
+ String keyMetadata) {
+ return fullKeyIdentityFromMetadata(key_cust == null ? null : new Bytes(key_cust),
+ key_namespace == null ? null : new Bytes(key_namespace), keyMetadata);
+ }
+
+ public static ManagedKeyIdentity buildIdentityFromMetadata(Bytes custodian, Bytes namespace,
+ String keyMetadata) {
+ Preconditions.checkNotNull(custodian, "custodian should not be null");
+ Preconditions.checkNotNull(namespace, "namespace should not be null");
+ Preconditions.checkNotNull(keyMetadata, "metadata should not be null");
+ return new KeyIdentityBytesBacked(custodian, namespace,
+ new Bytes(constructMetadataHash(keyMetadata)));
+ }
+}
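As described above, `constructMetadataHash` prefixes the digest bytes with a selector byte formed by OR-ing each algorithm's bit position, so a reader can tell which digests follow. A sketch of that scheme with an illustrative enum (the real `DigestAlgorithms` constants, bit positions, and digest sizes may differ):

```java
// Illustrative selector-byte scheme: one bit per digest algorithm, OR-ed
// together into the first output byte. The enum below is hypothetical.
public class SelectorByteSketch {
  public enum Algo {
    XXH3((byte) 0x01, 8), SHA256((byte) 0x02, 32);

    final byte bitPosition;
    final int digestSizeBytes;

    Algo(byte bitPosition, int digestSizeBytes) {
      this.bitPosition = bitPosition;
      this.digestSizeBytes = digestSizeBytes;
    }
  }

  /** OR together the bit positions of the algorithms applied. */
  public static byte selectorFor(Algo... algos) {
    byte selector = 0;
    for (Algo a : algos) {
      selector |= a.bitPosition;
    }
    return selector;
  }

  public static void main(String[] args) {
    assert selectorFor(Algo.XXH3) == 0x01;
    assert selectorFor(Algo.XXH3, Algo.SHA256) == 0x03;
  }
}
```

With at most two algorithms and distinct single-bit positions, any combination is recoverable from the selector byte alone.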
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
index 96b3dbd4a8a5..0259aa9764d8 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/Bytes.java
@@ -59,7 +59,7 @@
value = "EQ_CHECK_FOR_OPERAND_NOT_COMPATIBLE_WITH_THIS",
justification = "It has been like this forever")
@SuppressWarnings("MixedMutabilityReturnType")
-public class Bytes implements Comparable<Bytes> {
+public class Bytes implements Comparable<Bytes>, Cloneable {
// Using the charset canonical name for String/byte[] conversions is much
// more efficient due to use of cached encoders/decoders.
@@ -236,7 +236,18 @@ public int compareTo(Bytes that) {
* smaller than right.
*/
public int compareTo(final byte[] that) {
- return BYTES_RAWCOMPARATOR.compare(this.bytes, this.offset, this.length, that, 0, that.length);
+ return compareTo(that, 0, that.length);
+ }
+
+ /**
+ * Compares the bytes in this object to the specified byte array with the specified offset and
+ * length
+ * @return Positive if left is bigger than right, 0 if they are equal, and negative if left is
+ * smaller than right.
+ */
+ public int compareTo(final byte[] that, int thatOffset, int thatLength) {
+ return BYTES_RAWCOMPARATOR.compare(this.bytes, this.offset, this.length, that, thatOffset,
+ thatLength);
}
@Override
@@ -269,11 +280,27 @@ public static byte[][] toArray(final List<byte[]> array) {
return results;
}
+ @Override
+ public Bytes clone() {
+ return new Bytes(copyBytes(), 0, length);
+ }
+
/** Returns a copy of the bytes referred to by this writable */
public byte[] copyBytes() {
return Arrays.copyOfRange(bytes, offset, offset + length);
}
+ /**
+   * Returns the internal bytes array if fully representative, otherwise a copy. Preferable over
+   * {@link #copyBytes()} when getting access to the byte[] is the goal instead of copying.
+ */
+ public byte[] copyBytesIfNecessary() {
+ if (bytes != null && offset == 0 && length == bytes.length) {
+ return bytes;
+ }
+ return copyBytes();
+ }
+
/** Byte array comparator class. */
@InterfaceAudience.Public
public static class ByteArrayComparator implements RawComparator<byte[]> {
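The copy-avoidance rule added in `copyBytesIfNecessary` can be shown in isolation: the backing array is handed back directly only when the view spans it exactly; otherwise a range copy is made. The helper below is a standalone illustration, not the `Bytes` class itself:

```java
import java.util.Arrays;

// Standalone version of the copyBytesIfNecessary check: avoid the copy
// only when (offset, length) covers the whole backing array.
public class CopyIfNecessarySketch {
  public static byte[] copyBytesIfNecessary(byte[] bytes, int offset, int length) {
    if (bytes != null && offset == 0 && length == bytes.length) {
      return bytes; // fully representative: no copy needed
    }
    return Arrays.copyOfRange(bytes, offset, offset + length);
  }

  public static void main(String[] args) {
    byte[] full = { 1, 2, 3 };
    assert copyBytesIfNecessary(full, 0, 3) == full; // same instance returned
    byte[] slice = copyBytesIfNecessary(full, 1, 2);
    assert slice != full && Arrays.equals(slice, new byte[] { 2, 3 });
  }
}
```

Callers that only read the result benefit from the skipped allocation; callers that mutate it must not use this shortcut, since the returned array may be shared.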
diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
index 5fdb34f1593b..09801994b436 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/util/CommonFSUtils.java
@@ -328,9 +328,9 @@ public static Path getRootDir(final Configuration c) throws IOException {
*/
public static Path getOriginalRootDir(final Configuration c) throws IOException {
return getRootDir(c,
- c.get(HConstants.HBASE_ORIGINAL_DIR) == null
+ c.get(HConstants.HBASE_ORIGINAL_ROOT_DIR) == null
? HConstants.HBASE_DIR
- : HConstants.HBASE_ORIGINAL_DIR);
+ : HConstants.HBASE_ORIGINAL_ROOT_DIR);
}
/**
@@ -349,8 +349,8 @@ public static Path getRootDir(final Configuration c, final String rootDirProp)
public static void setRootDir(final Configuration c, final Path root) {
// Keep track of the original root dir.
- if (c.get(HConstants.HBASE_ORIGINAL_DIR) == null && c.get(HConstants.HBASE_DIR) != null) {
- c.set(HConstants.HBASE_ORIGINAL_DIR, c.get(HConstants.HBASE_DIR));
+ if (c.get(HConstants.HBASE_ORIGINAL_ROOT_DIR) == null && c.get(HConstants.HBASE_DIR) != null) {
+ c.set(HConstants.HBASE_ORIGINAL_ROOT_DIR, c.get(HConstants.HBASE_DIR));
}
c.set(HConstants.HBASE_DIR, root.toString());
}
diff --git a/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/MockManagedKeyProvider.java b/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/MockManagedKeyProvider.java
index 9e24e93c9edb..138fe4500eb0 100644
--- a/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/MockManagedKeyProvider.java
+++ b/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/MockManagedKeyProvider.java
@@ -24,6 +24,7 @@
import java.util.Map;
import javax.crypto.KeyGenerator;
import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
import org.apache.hadoop.hbase.util.Bytes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -63,27 +64,31 @@ public ManagedKeyData getSystemKey(byte[] systemId) throws IOException {
}
@Override
- public ManagedKeyData getManagedKey(byte[] key_cust, String key_namespace) throws IOException {
+ public ManagedKeyData getManagedKey(ManagedKeyIdentity keyIdentity) throws IOException {
if (shouldThrowExceptionOnGetManagedKey) {
throw new IOException("Test exception on getManagedKey");
}
- String alias = Bytes.toString(key_cust);
+ String key_namespace = keyIdentity.getNamespaceString();
+ Bytes custView = keyIdentity.getCustodianView();
+ String alias = custView.toString();
+ byte[] key_cust = custView.copyBytesIfNecessary();
return getKey(key_cust, alias, key_namespace);
}
@Override
- public ManagedKeyData unwrapKey(String keyMetadata, byte[] wrappedKey) throws IOException {
+ public ManagedKeyData unwrapKey(ManagedKeyIdentity keyIdentity, String keyMetadata,
+ byte[] wrappedKey) throws IOException {
String[] meta_toks = keyMetadata.split(":");
Preconditions.checkArgument(meta_toks.length >= 3, "Invalid key metadata: %s", keyMetadata);
if (allGeneratedKeys.containsKey(keyMetadata)) {
ManagedKeyState keyState = this.keyState.get(meta_toks[1]);
- ManagedKeyData managedKeyData =
- new ManagedKeyData(meta_toks[0].getBytes(), meta_toks[2], allGeneratedKeys.get(keyMetadata),
- keyState == null ? ManagedKeyState.ACTIVE : keyState, keyMetadata);
+ ManagedKeyData managedKeyData = new ManagedKeyData(meta_toks[0].getBytes(),
+ Bytes.toBytes(meta_toks[2]), allGeneratedKeys.get(keyMetadata),
+ keyState == null ? ManagedKeyState.ACTIVE : keyState, keyMetadata);
return registerKeyData(meta_toks[1], managedKeyData);
}
- return new ManagedKeyData(meta_toks[0].getBytes(), meta_toks[2], null, ManagedKeyState.FAILED,
- keyMetadata);
+ return new ManagedKeyData(meta_toks[0].getBytes(), Bytes.toBytes(meta_toks[2]), null,
+ ManagedKeyState.FAILED, keyMetadata);
}
public ManagedKeyData getLastGeneratedKeyData(String alias, String keyNamespace) {
@@ -175,7 +180,7 @@ private ManagedKeyData getKey(byte[] key_cust, String alias, String key_namespac
String keyMetadata = partialMetadata + ":" + key_namespace + ":" + checksum;
allGeneratedKeys.put(partialMetadata, key);
allGeneratedKeys.put(keyMetadata, key);
- ManagedKeyData managedKeyData = new ManagedKeyData(key_cust, key_namespace, key,
+ ManagedKeyData managedKeyData = new ManagedKeyData(key_cust, Bytes.toBytes(key_namespace), key,
keyState == null ? ManagedKeyState.ACTIVE : keyState, keyMetadata);
return registerKeyData(alias, managedKeyData);
}
diff --git a/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyData.java b/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyData.java
index 66a2bfd9344b..166d24b9abda 100644
--- a/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyData.java
+++ b/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyData.java
@@ -25,11 +25,17 @@
import static org.junit.Assert.assertThrows;
import static org.junit.Assert.assertTrue;
+import java.lang.reflect.Field;
import java.security.Key;
-import java.security.NoSuchAlgorithmException;
import java.util.Base64;
+import java.util.List;
import javax.crypto.KeyGenerator;
+import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.keymeta.KeyIdentityPrefixBytesBacked;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.testclassification.MiscTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.apache.hadoop.hbase.util.Bytes;
@@ -46,21 +52,24 @@ public class TestManagedKeyData {
private byte[] keyCust;
private String keyNamespace;
+ private byte[] keyNamespaceBytes;
private Key theKey;
private ManagedKeyState keyState;
private String keyMetadata;
private ManagedKeyData managedKeyData;
@Before
- public void setUp() throws NoSuchAlgorithmException {
+ public void setUp() throws Exception {
+ resetDigestAlgosCache();
keyCust = "testCustodian".getBytes();
keyNamespace = "testNamespace";
+ keyNamespaceBytes = Bytes.toBytes(keyNamespace);
KeyGenerator keyGen = KeyGenerator.getInstance("AES");
keyGen.init(256);
theKey = keyGen.generateKey();
keyState = ManagedKeyState.ACTIVE;
keyMetadata = "testMetadata";
- managedKeyData = new ManagedKeyData(keyCust, keyNamespace, theKey, keyState, keyMetadata);
+ managedKeyData = new ManagedKeyData(keyCust, keyNamespaceBytes, theKey, keyState, keyMetadata);
}
@Test
@@ -76,30 +85,33 @@ public void testConstructor() {
@Test
public void testConstructorNullChecks() {
assertThrows(NullPointerException.class,
- () -> new ManagedKeyData(null, keyNamespace, theKey, keyState, keyMetadata));
+ () -> new ManagedKeyData(null, keyNamespaceBytes, theKey, keyState, keyMetadata));
assertThrows(NullPointerException.class,
() -> new ManagedKeyData(keyCust, null, theKey, keyState, keyMetadata));
assertThrows(NullPointerException.class,
- () -> new ManagedKeyData(keyCust, keyNamespace, theKey, null, keyMetadata));
+ () -> new ManagedKeyData(keyCust, keyNamespaceBytes, theKey, null, keyMetadata));
assertThrows(NullPointerException.class,
- () -> new ManagedKeyData(keyCust, keyNamespace, theKey, ManagedKeyState.ACTIVE, null));
+ () -> new ManagedKeyData(keyCust, keyNamespaceBytes, theKey, ManagedKeyState.ACTIVE, null));
}
@Test
public void testConstructorWithFailedEncryptionStateAndNullMetadata() {
- ManagedKeyData keyData = new ManagedKeyData(keyCust, keyNamespace, ManagedKeyState.FAILED);
+ ManagedKeyData keyData = new ManagedKeyData(
+ new KeyIdentityPrefixBytesBacked(keyCust, keyNamespaceBytes), ManagedKeyState.FAILED);
assertNotNull(keyData);
assertEquals(ManagedKeyState.FAILED, keyData.getKeyState());
assertNull(keyData.getKeyMetadata());
- assertNull(keyData.getKeyMetadataHash());
+ assertNull(keyData.getPartialIdentity());
assertNull(keyData.getTheKey());
}
@Test
public void testConstructorWithRefreshTimestamp() {
long refreshTimestamp = System.currentTimeMillis();
+ ManagedKeyIdentity fullIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(new Bytes(keyCust), new Bytes(keyNamespaceBytes), keyMetadata);
ManagedKeyData keyDataWithTimestamp =
- new ManagedKeyData(keyCust, keyNamespace, theKey, keyState, keyMetadata, refreshTimestamp);
+ new ManagedKeyData(fullIdentity, theKey, keyState, keyMetadata, refreshTimestamp);
assertEquals(refreshTimestamp, keyDataWithTimestamp.getRefreshTimestamp());
}
@@ -108,10 +120,10 @@ public void testCloneWithoutKey() {
ManagedKeyData cloned = managedKeyData.createClientFacingInstance();
assertNull(cloned.getTheKey());
assertNull(cloned.getKeyMetadata());
- assertEquals(managedKeyData.getKeyCustodian(), cloned.getKeyCustodian());
+ assertTrue(Bytes.equals(managedKeyData.getKeyCustodian(), cloned.getKeyCustodian()));
assertEquals(managedKeyData.getKeyNamespace(), cloned.getKeyNamespace());
assertEquals(managedKeyData.getKeyState(), cloned.getKeyState());
- assertTrue(Bytes.equals(managedKeyData.getKeyMetadataHash(), cloned.getKeyMetadataHash()));
+ assertTrue(Bytes.equals(managedKeyData.getPartialIdentity(), cloned.getPartialIdentity()));
}
@Test
@@ -128,7 +140,7 @@ public void testGetKeyChecksum() {
// Test with null key
ManagedKeyData nullKeyData =
- new ManagedKeyData(keyCust, keyNamespace, null, keyState, keyMetadata);
+ new ManagedKeyData(keyCust, keyNamespaceBytes, null, keyState, keyMetadata);
assertEquals(0, nullKeyData.getKeyChecksum());
}
@@ -140,24 +152,24 @@ public void testConstructKeyChecksum() {
}
@Test
- public void testGetKeyMetadataHash() {
- byte[] hash = managedKeyData.getKeyMetadataHash();
- assertNotNull(hash);
- assertEquals(16, hash.length); // MD5 hash is 16 bytes long
+ public void testGetPartialIdentity() {
+ byte[] partialIdentity = managedKeyData.getPartialIdentity();
+ assertNotNull(partialIdentity);
+ assertEquals(9, partialIdentity.length); // algo selector (1) + XXH3 (8 bytes), default digest
}
@Test
- public void testGetKeyMetadataHashEncoded() {
- String encodedHash = managedKeyData.getKeyMetadataHashEncoded();
- assertNotNull(encodedHash);
- assertEquals(24, encodedHash.length()); // Base64 encoded MD5 hash is 24 characters long
+ public void testGetPartialIdentityEncoded() {
+ String encoded = managedKeyData.getPartialIdentityEncoded();
+ assertNotNull(encoded);
+ assertEquals(12, encoded.length()); // Base64 of algo selector + XXH3 (9 bytes) = 12 chars
}
@Test
public void testConstructMetadataHash() {
- byte[] hash = ManagedKeyData.constructMetadataHash(keyMetadata);
- assertNotNull(hash);
- assertEquals(16, hash.length); // MD5 hash is 16 bytes long
+ byte[] partialIdentity = ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata);
+ assertNotNull(partialIdentity);
+ assertEquals(9, partialIdentity.length); // default xxh3: algo selector (1) + 8 bytes
}
@Test
@@ -173,11 +185,12 @@ public void testToString() {
@Test
public void testEquals() {
- ManagedKeyData same = new ManagedKeyData(keyCust, keyNamespace, theKey, keyState, keyMetadata);
+ ManagedKeyData same =
+ new ManagedKeyData(keyCust, keyNamespaceBytes, theKey, keyState, keyMetadata);
assertEquals(managedKeyData, same);
- ManagedKeyData different =
- new ManagedKeyData("differentCust".getBytes(), keyNamespace, theKey, keyState, keyMetadata);
+ ManagedKeyData different = new ManagedKeyData("differentCust".getBytes(), keyNamespaceBytes,
+ theKey, keyState, keyMetadata);
assertNotEquals(managedKeyData, different);
}
@@ -193,18 +206,126 @@ public void testEqualsWithDifferentClass() {
@Test
public void testHashCode() {
- ManagedKeyData same = new ManagedKeyData(keyCust, keyNamespace, theKey, keyState, keyMetadata);
+ ManagedKeyData same =
+ new ManagedKeyData(keyCust, keyNamespaceBytes, theKey, keyState, keyMetadata);
assertEquals(managedKeyData.hashCode(), same.hashCode());
- ManagedKeyData different =
- new ManagedKeyData("differentCust".getBytes(), keyNamespace, theKey, keyState, keyMetadata);
+ ManagedKeyData different = new ManagedKeyData("differentCust".getBytes(), keyNamespaceBytes,
+ theKey, keyState, keyMetadata);
assertNotEquals(managedKeyData.hashCode(), different.hashCode());
}
@Test
public void testConstants() {
assertEquals("*", ManagedKeyData.KEY_SPACE_GLOBAL);
- assertEquals(ManagedKeyProvider.encodeToStr(ManagedKeyData.KEY_SPACE_GLOBAL.getBytes()),
- ManagedKeyData.KEY_GLOBAL_CUSTODIAN);
+ assertEquals(ManagedKeyProvider.encodeToStr(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.copyBytes()),
+ ManagedKeyData.GLOBAL_CUST_ENCODED);
+ }
+
+ /**
+ * Resets the cached digest algorithms so initDigestAlgos() can be tested with different configs.
+ */
+ private static void resetDigestAlgosCache() throws Exception {
+ Field field = ManagedKeyIdentityUtils.class.getDeclaredField("DIGEST_ALGOS");
+ field.setAccessible(true);
+ field.set(null, null);
+ }
+
+ @Test
+ public void testInitDigestAlgosWithNullConf() throws Exception {
+ resetDigestAlgosCache();
+ ManagedKeyIdentityUtils.initDigestAlgos(null);
+ List algos = ManagedKeyIdentityUtils.getDigestAlgos();
+ assertNotNull(algos);
+ assertEquals(1, algos.size());
+ assertEquals(DigestAlgorithms.XXH3, algos.get(0));
+ }
+
+ @Test
+ public void testInitDigestAlgosWithDefaultConfig() throws Exception {
+ resetDigestAlgosCache();
+ Configuration conf = new Configuration();
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ List algos = ManagedKeyIdentityUtils.getDigestAlgos();
+ assertNotNull(algos);
+ assertEquals(1, algos.size());
+ assertEquals(DigestAlgorithms.XXH3, algos.get(0));
+ }
+
+ @Test
+ public void testInitDigestAlgosWithSingleAlgoConfig() throws Exception {
+ resetDigestAlgosCache();
+ Configuration conf = new Configuration();
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY, "md5");
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ List algos = ManagedKeyIdentityUtils.getDigestAlgos();
+ assertNotNull(algos);
+ assertEquals(1, algos.size());
+ assertEquals(DigestAlgorithms.MD5, algos.get(0));
+ }
+
+ @Test
+ public void testInitDigestAlgosWithTwoAlgosSortedByBitPosition() throws Exception {
+ resetDigestAlgosCache();
+ Configuration conf = new Configuration();
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY, "md5,xxh3");
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ List algos = ManagedKeyIdentityUtils.getDigestAlgos();
+ assertNotNull(algos);
+ assertEquals(2, algos.size());
+ assertEquals(DigestAlgorithms.XXH3, algos.get(0));
+ assertEquals(DigestAlgorithms.MD5, algos.get(1));
+ }
+
+ @Test
+ public void testInitDigestAlgosWithThreeAlgosUsesFirstTwo() throws Exception {
+ resetDigestAlgosCache();
+ Configuration conf = new Configuration();
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY,
+ "xxh3,xxhash64,md5");
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ List algos = ManagedKeyIdentityUtils.getDigestAlgos();
+ assertNotNull(algos);
+ assertEquals(2, algos.size());
+ assertEquals(DigestAlgorithms.XXH3, algos.get(0));
+ assertEquals(DigestAlgorithms.XXHASH64, algos.get(1));
+ }
+
+ @Test
+ public void testInitDigestAlgosDedupesAndIgnoresUnknown() throws Exception {
+ resetDigestAlgosCache();
+ Configuration conf = new Configuration();
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY,
+ "xxh3,unknown,xxh3");
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ List algos = ManagedKeyIdentityUtils.getDigestAlgos();
+ assertNotNull(algos);
+ assertEquals(1, algos.size());
+ assertEquals(DigestAlgorithms.XXH3, algos.get(0));
+ }
+
+ @Test
+ public void testInitDigestAlgosEmptyConfigFallsBackToXxh3() throws Exception {
+ resetDigestAlgosCache();
+ Configuration conf = new Configuration();
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY, "");
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ List algos = ManagedKeyIdentityUtils.getDigestAlgos();
+ assertNotNull(algos);
+ assertEquals(1, algos.size());
+ assertEquals(DigestAlgorithms.XXH3, algos.get(0));
+ }
+
+ @Test
+ public void testInitDigestAlgosIdempotentAfterFirstCall() throws Exception {
+ resetDigestAlgosCache();
+ Configuration conf = new Configuration();
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY, "md5");
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ assertEquals(DigestAlgorithms.MD5, ManagedKeyIdentityUtils.getDigestAlgos().get(0));
+ // Second call with different config should not change cached value
+ conf.set(HConstants.CRYPTO_MANAGED_KEY_METADATA_DIGEST_ALGORITHMS_CONF_KEY, "xxh3");
+ ManagedKeyIdentityUtils.initDigestAlgos(conf);
+ assertEquals(DigestAlgorithms.MD5, ManagedKeyIdentityUtils.getDigestAlgos().get(0));
}
}
diff --git a/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyProvider.java b/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyProvider.java
index 2ec1bc718623..3fc9ed7cb969 100644
--- a/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyProvider.java
+++ b/hbase-common/src/test/java/org/apache/hadoop/hbase/io/crypto/TestManagedKeyProvider.java
@@ -17,7 +17,7 @@
*/
package org.apache.hadoop.hbase.io.crypto;
-import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL_BYTES;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyStoreKeyProvider.KEY_METADATA_ALIAS;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyStoreKeyProvider.KEY_METADATA_CUST;
import static org.junit.Assert.assertEquals;
@@ -39,6 +39,10 @@
import org.apache.hadoop.hbase.HBaseCommonTestingUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.keymeta.KeyIdentityBytesBacked;
+import org.apache.hadoop.hbase.keymeta.KeyIdentityPrefixBytesBacked;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.testclassification.MiscTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.apache.hadoop.hbase.util.Bytes;
@@ -92,6 +96,17 @@ public static Collection parameters() {
private String clusterId;
private byte[] systemKey;
+ private static ManagedKeyIdentity managedKeyPrefixId(Bytes custodian, String namespace) {
+ if (namespace == null) {
+ return new KeyIdentityPrefixBytesBacked(custodian, KEY_SPACE_GLOBAL_BYTES);
+ }
+ return new KeyIdentityPrefixBytesBacked(custodian, new Bytes(Bytes.toBytes(namespace)));
+ }
+
+ private static ManagedKeyIdentity managedKeyPrefixId(byte[] custodian, String namespace) {
+ return managedKeyPrefixId(new Bytes(custodian), namespace);
+ }
+
@Before
public void setUp() throws Exception {
String providerParams = KeymetaTestUtils.setupTestKeyStore(TEST_UTIL, withPasswordOnAlias,
@@ -153,8 +168,8 @@ public void testMissingConfig() throws Exception {
@Test
public void testGetManagedKey() throws Exception {
for (Bytes cust : cust2key.keySet()) {
- ManagedKeyData keyData =
- managedKeyProvider.getManagedKey(cust.get(), ManagedKeyData.KEY_SPACE_GLOBAL);
+ ManagedKeyData keyData = managedKeyProvider
+ .getManagedKey(managedKeyPrefixId(cust, ManagedKeyData.KEY_SPACE_GLOBAL));
assertKeyData(keyData, ManagedKeyState.ACTIVE, cust2key.get(cust).get(), cust.get(),
cust2alias.get(cust));
}
@@ -162,11 +177,11 @@ public void testGetManagedKey() throws Exception {
@Test
public void testGetGlobalCustodianKey() throws Exception {
- byte[] globalCustodianKey = cust2key.get(new Bytes(KEY_GLOBAL_CUSTODIAN_BYTES)).get();
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(KEY_GLOBAL_CUSTODIAN_BYTES,
- ManagedKeyData.KEY_SPACE_GLOBAL);
- assertKeyData(keyData, ManagedKeyState.ACTIVE, globalCustodianKey, KEY_GLOBAL_CUSTODIAN_BYTES,
- "global-cust-alias");
+ byte[] globalCustodianKey = cust2key.get(KEY_SPACE_GLOBAL_BYTES).get();
+ ManagedKeyData keyData = managedKeyProvider
+ .getManagedKey(managedKeyPrefixId(KEY_SPACE_GLOBAL_BYTES, ManagedKeyData.KEY_SPACE_GLOBAL));
+ assertKeyData(keyData, ManagedKeyState.ACTIVE, globalCustodianKey,
+ KEY_SPACE_GLOBAL_BYTES.get(), "global-cust-alias");
}
@Test
@@ -175,8 +190,8 @@ public void testGetInactiveKey() throws Exception {
String encCust = Base64.getEncoder().encodeToString(firstCust.get());
conf.set(HConstants.CRYPTO_MANAGED_KEY_STORE_CONF_KEY_PREFIX + encCust + ".*.active",
"false");
- ManagedKeyData keyData =
- managedKeyProvider.getManagedKey(firstCust.get(), ManagedKeyData.KEY_SPACE_GLOBAL);
+ ManagedKeyData keyData = managedKeyProvider
+ .getManagedKey(managedKeyPrefixId(firstCust, ManagedKeyData.KEY_SPACE_GLOBAL));
assertNotNull(keyData);
assertKeyData(keyData, ManagedKeyState.INACTIVE, cust2key.get(firstCust).get(),
firstCust.get(), cust2alias.get(firstCust));
@@ -185,8 +200,8 @@ public void testGetInactiveKey() throws Exception {
@Test
public void testGetInvalidKey() throws Exception {
byte[] invalidCustBytes = "invalid".getBytes();
- ManagedKeyData keyData =
- managedKeyProvider.getManagedKey(invalidCustBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ ManagedKeyData keyData = managedKeyProvider
+ .getManagedKey(managedKeyPrefixId(invalidCustBytes, ManagedKeyData.KEY_SPACE_GLOBAL));
assertNotNull(keyData);
assertKeyData(keyData, ManagedKeyState.FAILED, null, invalidCustBytes, null);
}
@@ -200,8 +215,8 @@ public void testGetDisabledKey() throws Exception {
"disabled-alias");
conf.set(HConstants.CRYPTO_MANAGED_KEY_STORE_CONF_KEY_PREFIX + invalidCustEnc + ".*.active",
"false");
- ManagedKeyData keyData =
- managedKeyProvider.getManagedKey(invalidCust, ManagedKeyData.KEY_SPACE_GLOBAL);
+ ManagedKeyData keyData = managedKeyProvider
+ .getManagedKey(managedKeyPrefixId(invalidCust, ManagedKeyData.KEY_SPACE_GLOBAL));
assertNotNull(keyData);
assertKeyData(keyData, ManagedKeyState.DISABLED, null, invalidCust, "disabled-alias");
}
@@ -227,7 +242,7 @@ public void testUnwrapInvalidKey() throws Exception {
String invalidCustEnc = ManagedKeyProvider.encodeToStr(invalidCust);
String invalidMetadata =
ManagedKeyStoreKeyProvider.generateKeyMetadata(invalidAlias, invalidCustEnc);
- ManagedKeyData keyData = managedKeyProvider.unwrapKey(invalidMetadata, null);
+ ManagedKeyData keyData = managedKeyProvider.unwrapKey(null, invalidMetadata, null);
assertNotNull(keyData);
assertKeyData(keyData, ManagedKeyState.FAILED, null, invalidCust, invalidAlias);
}
@@ -241,11 +256,69 @@ public void testUnwrapDisabledKey() throws Exception {
"false");
String invalidMetadata = ManagedKeyStoreKeyProvider.generateKeyMetadata(invalidAlias,
invalidCustEnc, ManagedKeyData.KEY_SPACE_GLOBAL);
- ManagedKeyData keyData = managedKeyProvider.unwrapKey(invalidMetadata, null);
+ ManagedKeyData keyData = managedKeyProvider.unwrapKey(null, invalidMetadata, null);
assertNotNull(keyData);
assertKeyData(keyData, ManagedKeyState.DISABLED, null, invalidCust, invalidAlias);
}
+ @Test
+ public void testUnwrapKeyIdentityNullBuildsFullIdentityFromMetadata() throws Exception {
+ Bytes cust = cust2key.keySet().iterator().next();
+ ManagedKeyData fromGetManagedKey =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(cust, ManagedKeyData.KEY_SPACE_GLOBAL));
+ String keyMetadata = fromGetManagedKey.getKeyMetadata();
+
+ ManagedKeyData unwrapped = managedKeyProvider.unwrapKey(null, keyMetadata, null);
+
+ assertNotNull(unwrapped);
+ assertEquals(ManagedKeyState.ACTIVE, unwrapped.getKeyState());
+ assertEquals(keyMetadata, unwrapped.getKeyMetadata());
+ assertEquals(ManagedKeyData.KEY_SPACE_GLOBAL, unwrapped.getKeyNamespace());
+ assertTrue(Arrays.equals(cust.get(), unwrapped.getKeyCustodian()));
+ assertTrue(Arrays.equals(ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata),
+ unwrapped.getPartialIdentity()));
+ }
+
+ @Test
+ public void testUnwrapKeyIdentityPartialExpandsToFullUsingMetadataHash() throws Exception {
+ Bytes cust = cust2key.keySet().iterator().next();
+ ManagedKeyIdentity partialIdentity =
+ managedKeyPrefixId(cust, ManagedKeyData.KEY_SPACE_GLOBAL);
+ ManagedKeyData fromGetManagedKey =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(cust, ManagedKeyData.KEY_SPACE_GLOBAL));
+ String keyMetadata = fromGetManagedKey.getKeyMetadata();
+
+ ManagedKeyData unwrapped = managedKeyProvider.unwrapKey(partialIdentity, keyMetadata, null);
+
+ assertNotNull(unwrapped);
+ assertEquals(ManagedKeyState.ACTIVE, unwrapped.getKeyState());
+ assertEquals(keyMetadata, unwrapped.getKeyMetadata());
+ assertTrue(Arrays.equals(cust.get(), unwrapped.getKeyCustodian()));
+ assertEquals(ManagedKeyData.KEY_SPACE_GLOBAL, unwrapped.getKeyNamespace());
+ assertTrue(Arrays.equals(ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata),
+ unwrapped.getPartialIdentity()));
+ }
+
+ @Test
+ public void testUnwrapKeyIdentityFullIsKeptAsProvided() throws Exception {
+ Bytes cust = cust2key.keySet().iterator().next();
+ ManagedKeyData fromGetManagedKey =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(cust, ManagedKeyData.KEY_SPACE_GLOBAL));
+ String keyMetadata = fromGetManagedKey.getKeyMetadata();
+ byte[] customPartialIdentity = "custom-partial-identity".getBytes();
+ ManagedKeyIdentity fullIdentity =
+ new KeyIdentityBytesBacked(cust, KEY_SPACE_GLOBAL_BYTES, new Bytes(customPartialIdentity));
+
+ ManagedKeyData unwrapped = managedKeyProvider.unwrapKey(fullIdentity, keyMetadata, null);
+
+ assertNotNull(unwrapped);
+ assertEquals(ManagedKeyState.ACTIVE, unwrapped.getKeyState());
+ assertEquals(keyMetadata, unwrapped.getKeyMetadata());
+ assertTrue(Arrays.equals(cust.get(), unwrapped.getKeyCustodian()));
+ assertEquals(ManagedKeyData.KEY_SPACE_GLOBAL, unwrapped.getKeyNamespace());
+ assertTrue(Arrays.equals(customPartialIdentity, unwrapped.getPartialIdentity()));
+ }
+
@Test
public void testGetManagedKeyWithCustomNamespace() throws Exception {
String customNamespace1 = "table1/cf1";
@@ -253,7 +326,8 @@ public void testGetManagedKeyWithCustomNamespace() throws Exception {
int index = 0;
for (Bytes cust : namespaceCust2key.keySet()) {
String namespace = (index == 0) ? customNamespace1 : customNamespace2;
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(cust.get(), namespace);
+ ManagedKeyData keyData =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(cust, namespace));
assertKeyDataWithNamespace(keyData, ManagedKeyState.ACTIVE,
namespaceCust2key.get(cust).get(), cust.get(), namespaceCust2alias.get(cust), namespace);
index++;
@@ -269,7 +343,8 @@ public void testGetManagedKeyWithCustomNamespaceInactive() throws Exception {
conf.set(HConstants.CRYPTO_MANAGED_KEY_STORE_CONF_KEY_PREFIX + encCust + "." + customNamespace
+ ".active", "false");
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(firstCust.get(), customNamespace);
+ ManagedKeyData keyData =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(firstCust, customNamespace));
assertNotNull(keyData);
assertKeyDataWithNamespace(keyData, ManagedKeyState.INACTIVE,
namespaceCust2key.get(firstCust).get(), firstCust.get(), namespaceCust2alias.get(firstCust),
@@ -280,7 +355,8 @@ public void testGetManagedKeyWithCustomNamespaceInactive() throws Exception {
public void testGetManagedKeyWithInvalidCustomNamespace() throws Exception {
byte[] invalidCustBytes = "invalid".getBytes();
String customNamespace = "invalid/namespace";
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(invalidCustBytes, customNamespace);
+ ManagedKeyData keyData =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(invalidCustBytes, customNamespace));
assertNotNull(keyData);
assertKeyDataWithNamespace(keyData, ManagedKeyState.FAILED, null, invalidCustBytes, null,
customNamespace);
@@ -294,7 +370,7 @@ public void testNamespaceMismatchReturnsFailedKey() throws Exception {
// Request key with different namespace - should fail
ManagedKeyData keyData =
- managedKeyProvider.getManagedKey(firstCust.get(), requestedNamespace);
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(firstCust, requestedNamespace));
assertNotNull(keyData);
assertEquals(ManagedKeyState.FAILED, keyData.getKeyState());
@@ -310,7 +386,7 @@ public void testNamespaceMatchReturnsKey() throws Exception {
String configuredNamespace = "table1/cf1"; // This matches our test setup
ManagedKeyData keyData =
- managedKeyProvider.getManagedKey(firstCust.get(), configuredNamespace);
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(firstCust, configuredNamespace));
assertNotNull(keyData);
assertEquals(ManagedKeyState.ACTIVE, keyData.getKeyState());
@@ -325,7 +401,8 @@ public void testGlobalKeyAccessedWithWrongNamespaceFails() throws Exception {
// Try to access it with a custom namespace - should fail
String wrongNamespace = "table1/cf1";
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(globalCust.get(), wrongNamespace);
+ ManagedKeyData keyData =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(globalCust, wrongNamespace));
assertNotNull(keyData);
assertEquals(ManagedKeyState.FAILED, keyData.getKeyState());
@@ -339,8 +416,8 @@ public void testNamespaceKeyAccessedAsGlobalFails() throws Exception {
Bytes namespaceCust = namespaceCust2key.keySet().iterator().next();
// Try to access it as global - should fail
- ManagedKeyData keyData =
- managedKeyProvider.getManagedKey(namespaceCust.get(), ManagedKeyData.KEY_SPACE_GLOBAL);
+ ManagedKeyData keyData = managedKeyProvider
+ .getManagedKey(managedKeyPrefixId(namespaceCust, ManagedKeyData.KEY_SPACE_GLOBAL));
assertNotNull(keyData);
assertEquals(ManagedKeyState.FAILED, keyData.getKeyState());
@@ -357,13 +434,13 @@ public void testMultipleNamespacesForSameCustodianFail() throws Exception {
// Verify we can access with configured namespace
ManagedKeyData keyData1 =
- managedKeyProvider.getManagedKey(namespaceCust.get(), configuredNamespace);
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(namespaceCust, configuredNamespace));
assertEquals(ManagedKeyState.ACTIVE, keyData1.getKeyState());
assertEquals(configuredNamespace, keyData1.getKeyNamespace());
// But accessing with different namespace should fail (even though it's the same custodian)
ManagedKeyData keyData2 =
- managedKeyProvider.getManagedKey(namespaceCust.get(), differentNamespace);
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(namespaceCust, differentNamespace));
assertEquals(ManagedKeyState.FAILED, keyData2.getKeyState());
assertEquals(differentNamespace, keyData2.getKeyNamespace());
}
@@ -373,8 +450,9 @@ public void testNullNamespaceDefaultsToGlobal() throws Exception {
// Get a global key (one from cust2key)
Bytes globalCust = cust2key.keySet().iterator().next();
- // Call getManagedKey with null namespace - should default to global and succeed
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(globalCust.get(), null);
+ // Null namespace in the old API defaulted to global; express the same with a global prefix id
+ ManagedKeyData keyData =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(globalCust, null));
assertNotNull(keyData);
assertEquals(ManagedKeyState.ACTIVE, keyData.getKeyState());
@@ -389,7 +467,8 @@ public void testFailedKeyContainsProperMetadataWithAlias() throws Exception {
String wrongNamespace = "wrong/namespace";
// Request with wrong namespace - should fail but have proper metadata
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(firstCust.get(), wrongNamespace);
+ ManagedKeyData keyData =
+ managedKeyProvider.getManagedKey(managedKeyPrefixId(firstCust, wrongNamespace));
assertNotNull(keyData);
assertEquals(ManagedKeyState.FAILED, keyData.getKeyState());
@@ -483,7 +562,7 @@ private void assertKeyDataWithNamespace(ManagedKeyData keyData, ManagedKeyState
assertMetadataMatches(keyData.getKeyMetadata(), alias, encodedCust, expectedNamespace);
assertTrue(Bytes.equals(custBytes, keyData.getKeyCustodian()));
- assertEquals(keyData, managedKeyProvider.unwrapKey(keyData.getKeyMetadata(), null));
+ assertEquals(keyData, managedKeyProvider.unwrapKey(null, keyData.getKeyMetadata(), null));
}
}
diff --git a/hbase-common/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyIdentity.java b/hbase-common/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyIdentity.java
new file mode 100644
index 000000000000..22637fbc03ea
--- /dev/null
+++ b/hbase-common/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyIdentity.java
@@ -0,0 +1,1429 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.keymeta;
+
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNotSame;
+import static org.junit.Assert.assertSame;
+import static org.junit.Assert.assertThrows;
+import static org.junit.Assert.assertTrue;
+
+import org.apache.hadoop.hbase.HBaseClassTestRule;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
+import org.apache.hadoop.hbase.testclassification.MasterTests;
+import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.BlockJUnit4ClassRunner;
+import org.junit.runners.Suite;
+
+/**
+ * Tests for {@link ManagedKeyIdentity} implementations: {@link KeyIdentityBytesBacked},
+ * {@link KeyIdentitySingleArrayBacked}, and {@link KeyIdentityPrefixBytesBacked} (custodian +
+ * namespace only; partial identity is absent and matches
+ * {@link ManagedKeyIdentity#KEY_NULL_IDENTITY_BYTES} for equality).
+ *
+ * Structure:
+ * <ul>
+ * <li>{@link TestCrossClassEquality} — verifies that instances of different concrete classes with
+ * the same data are equal and have the same hashCode (content-based, interoperable).</li>
+ * <li>{@link TestInteroperabilityAsHashKeys} — verifies that types are interchangeable as map keys
+ * (put with one type, get with another, etc.).</li>
+ * <li>{@link AbstractTestFullKeyIdentity} — abstract base holding all interface-contract tests.
+ * Subclasses implement a single {@code create()} factory method; JUnit's inheritance mechanism runs
+ * each abstract test against every concrete implementation automatically.</li>
+ * <li>{@link TestBytesBacked}, {@link TestSingleArrayBacked} — concrete subclasses that override
+ * {@code create()} and add implementation-specific tests.</li>
+ * <li>{@link TestKeyIdentityPrefixBytesBacked} — contract and construction tests for the
+ * prefix-only implementation (mirrors {@link AbstractTestFullKeyIdentity} where applicable).</li>
+ * </ul>
+ * <p>
+ * Note: {@link KeyIdentityBytesBacked}'s {@code getCustodianEncoded()},
+ * {@code getNamespaceString()}, and {@code getPartialIdentityEncoded()} call {@code Bytes.get()}
+ * which returns the full backing array. These methods produce incorrect output when the
+ * {@link Bytes} objects have non-zero offsets. The implementation-specific tests here deliberately
+ * avoid that path to keep the suite green; the issue is captured in comments in
+ * {@link TestBytesBacked#testConstructionWithBytesHavingOffset()}.
+ */
+@RunWith(Suite.class)
+@Suite.SuiteClasses({ TestManagedKeyIdentity.TestCrossClassEquality.class,
+ TestManagedKeyIdentity.TestInteroperabilityAsHashKeys.class,
+ TestManagedKeyIdentity.TestBytesBacked.class, TestManagedKeyIdentity.TestSingleArrayBacked.class,
+ TestManagedKeyIdentity.TestKeyIdentityPrefixBytesBacked.class })
+@Category({ MasterTests.class, SmallTests.class })
+public class TestManagedKeyIdentity {
+
+ // ---------------------------------------------------------------------------
+ // Shared test data
+ // ---------------------------------------------------------------------------
+
+ static final byte[] CUSTODIAN = new byte[] { 0x01, 0x02, 0x03, 0x04 };
+ static final byte[] NAMESPACE = Bytes.toBytes("testns");
+ static final byte[] PARTIAL = new byte[] { (byte) 0xAA, (byte) 0xBB, (byte) 0xCC };
+
+ // ---------------------------------------------------------------------------
+ // Cross-class equality
+ // ---------------------------------------------------------------------------
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestCrossClassEquality {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestCrossClassEquality.class);
+
+ @Test
+ public void testDifferentImplsWithSameDataAreEqual() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentityBytesBacked bb =
+ new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), new Bytes(NAMESPACE), new Bytes(PARTIAL));
+ KeyIdentityBytesBacked bbFromRawArrays =
+ new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentitySingleArrayBacked sa = new KeyIdentitySingleArrayBacked(backing);
+
+ assertTrue("BytesBacked (Bytes ctor) must equal BytesBacked (byte[] ctor)",
+ bb.equals(bbFromRawArrays));
+ assertTrue("BytesBacked (byte[] ctor) must equal BytesBacked (Bytes ctor)",
+ bbFromRawArrays.equals(bb));
+ assertTrue("BytesBacked must equal SingleArrayBacked", bb.equals(sa));
+ assertTrue("SingleArrayBacked must equal BytesBacked", sa.equals(bb));
+ assertTrue("BytesBacked (byte[] ctor) must equal SingleArrayBacked",
+ bbFromRawArrays.equals(sa));
+ assertTrue("SingleArrayBacked must equal BytesBacked (byte[] ctor)",
+ sa.equals(bbFromRawArrays));
+ }
+
+ @Test
+ public void testDifferentImplsWithSameDataHaveSameHashCode() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentityBytesBacked bb =
+ new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), new Bytes(NAMESPACE), new Bytes(PARTIAL));
+ KeyIdentityBytesBacked bbFromRawArrays =
+ new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentitySingleArrayBacked sa = new KeyIdentitySingleArrayBacked(backing);
+
+ assertEquals("BytesBacked (Bytes ctor) and (byte[] ctor) must have same hashCode",
+ bb.hashCode(), bbFromRawArrays.hashCode());
+ assertEquals("BytesBacked and SingleArrayBacked must have same hashCode", bb.hashCode(),
+ sa.hashCode());
+ assertEquals("BytesBacked (byte[] ctor) and SingleArrayBacked must have same hashCode",
+ bbFromRawArrays.hashCode(), sa.hashCode());
+ }
+
+ /**
+ * {@link KeyIdentityPrefixBytesBacked} matches {@link KeyIdentityBytesBacked} with an empty
+ * partial segment and {@link KeyIdentitySingleArrayBacked} over the custodian+namespace marker
+ * row (no trailing partial length byte).
+ */
+ @Test
+ public void testPrefixBackedEqualsBytesBackedEmptyPartialAndCustNamespaceSingleArray() {
+ KeyIdentityPrefixBytesBacked prefix = new KeyIdentityPrefixBytesBacked(CUSTODIAN, NAMESPACE);
+ KeyIdentityBytesBacked bbEmptyPartial =
+ new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, new byte[0]);
+ KeyIdentityBytesBacked bbEmptyFromBytes = new KeyIdentityBytesBacked(new Bytes(CUSTODIAN),
+ new Bytes(NAMESPACE), new Bytes(new byte[0]));
+ byte[] markerRow =
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ KeyIdentitySingleArrayBacked sa = new KeyIdentitySingleArrayBacked(markerRow);
+
+ assertTrue(prefix.equals(bbEmptyPartial));
+ assertTrue(bbEmptyPartial.equals(prefix));
+ assertTrue(prefix.equals(bbEmptyFromBytes));
+ assertTrue(bbEmptyFromBytes.equals(prefix));
+ assertTrue(prefix.equals(sa));
+ assertTrue(sa.equals(prefix));
+
+ assertEquals(prefix.hashCode(), bbEmptyPartial.hashCode());
+ assertEquals(prefix.hashCode(), bbEmptyFromBytes.hashCode());
+ assertEquals(prefix.hashCode(), sa.hashCode());
+ }
+
+ @Test
+ public void testPrefixBackedNotEqualToIdentityWithNonEmptyPartial() {
+ KeyIdentityPrefixBytesBacked prefix = new KeyIdentityPrefixBytesBacked(CUSTODIAN, NAMESPACE);
+ KeyIdentityBytesBacked withPartial =
+ new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentitySingleArrayBacked sa = new KeyIdentitySingleArrayBacked(backing);
+
+ assertFalse(prefix.equals(withPartial));
+ assertFalse(withPartial.equals(prefix));
+ assertFalse(prefix.equals(sa));
+ assertFalse(sa.equals(prefix));
+ }
+ }
+
+ // ---------------------------------------------------------------------------
+ // Interoperability as hash keys (Map/Set)
+ // ---------------------------------------------------------------------------
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestInteroperabilityAsHashKeys {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestInteroperabilityAsHashKeys.class);
+
+ private static KeyIdentityBytesBacked createBytesBacked() {
+ return new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), new Bytes(NAMESPACE),
+ new Bytes(PARTIAL));
+ }
+
+ /**
+ * Same logical identity as {@link #createBytesBacked()} but via the {@code byte[]} constructor.
+ */
+ private static KeyIdentityBytesBacked createBytesBackedFromRawArrays() {
+ return new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, PARTIAL);
+ }
+
+ private static KeyIdentitySingleArrayBacked createSingleArrayBacked() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ return new KeyIdentitySingleArrayBacked(backing);
+ }
+
+ private static KeyIdentityPrefixBytesBacked createPrefixBacked() {
+ return new KeyIdentityPrefixBytesBacked(CUSTODIAN, NAMESPACE);
+ }
+
+ private static KeyIdentityBytesBacked createBytesBackedEmptyPartial() {
+ return new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, new byte[0]);
+ }
+
+ private static KeyIdentitySingleArrayBacked createSingleArrayCustNamespaceMarker() {
+ byte[] marker = ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ return new KeyIdentitySingleArrayBacked(marker);
+ }
+
+ @Test
+ public void testPutWithOneTypeGetWithAnother() {
+ java.util.Map<ManagedKeyIdentity, String> map = new java.util.HashMap<>();
+ String value = "value1";
+
+ map.put(createBytesBacked(), value);
+ assertEquals(value, map.get(createBytesBackedFromRawArrays()));
+ assertEquals(value, map.get(createSingleArrayBacked()));
+
+ map.clear();
+ map.put(createBytesBackedFromRawArrays(), value);
+ assertEquals(value, map.get(createBytesBacked()));
+ assertEquals(value, map.get(createSingleArrayBacked()));
+
+ map.clear();
+ map.put(createSingleArrayBacked(), value);
+ assertEquals(value, map.get(createBytesBacked()));
+ assertEquals(value, map.get(createBytesBackedFromRawArrays()));
+ }
+
+ @Test
+ public void testContainsKeyWithDifferentType() {
+ java.util.Map<ManagedKeyIdentity, String> map = new java.util.HashMap<>();
+ map.put(createBytesBacked(), "v");
+
+ assertTrue(map.containsKey(createBytesBackedFromRawArrays()));
+ assertTrue(map.containsKey(createSingleArrayBacked()));
+ }
+
+ @Test
+ public void testSetDeduplicatesAcrossTypes() {
+ java.util.Set<ManagedKeyIdentity> set = new java.util.HashSet<>();
+ set.add(createBytesBacked());
+ set.add(createBytesBackedFromRawArrays());
+ set.add(createSingleArrayBacked());
+
+ assertEquals("Set must contain exactly one entry for same logical identity", 1, set.size());
+ }
+
+ @Test
+ public void testMapSizeOneWhenSameIdentityDifferentTypes() {
+ java.util.Map<ManagedKeyIdentity, String> map = new java.util.HashMap<>();
+ map.put(createBytesBacked(), "first");
+ map.put(createBytesBackedFromRawArrays(), "second");
+ map.put(createSingleArrayBacked(), "third");
+
+ assertEquals("Map must have size 1 when all keys are same logical identity", 1, map.size());
+ assertEquals("Last put wins", "third", map.get(createBytesBacked()));
+ }
+
+ @Test
+ public void testPutPrefixGetWithBytesBackedEmptyPartialAndCustNamespaceMarker() {
+ java.util.Map<ManagedKeyIdentity, String> map = new java.util.HashMap<>();
+ String value = "prefix-interop";
+ map.put(createPrefixBacked(), value);
+ assertEquals(value, map.get(createBytesBackedEmptyPartial()));
+ assertEquals(value, map.get(createSingleArrayCustNamespaceMarker()));
+ }
+
+ @Test
+ public void testPutBytesBackedEmptyPartialGetWithPrefixAndCustNamespaceMarker() {
+ java.util.Map<ManagedKeyIdentity, String> map = new java.util.HashMap<>();
+ String value = "empty-partial-interop";
+ map.put(createBytesBackedEmptyPartial(), value);
+ assertEquals(value, map.get(createPrefixBacked()));
+ assertEquals(value, map.get(createSingleArrayCustNamespaceMarker()));
+ }
+
+ @Test
+ public void testSetDeduplicatesPrefixWithBytesBackedEmptyPartialAndMarker() {
+ java.util.Set<ManagedKeyIdentity> set = new java.util.HashSet<>();
+ set.add(createPrefixBacked());
+ set.add(createBytesBackedEmptyPartial());
+ set.add(createSingleArrayCustNamespaceMarker());
+ assertEquals("Set must collapse prefix-equivalent identities to one entry", 1, set.size());
+ }
+
+ @Test
+ public void testSingleArrayBackedWithSlice_equalsAndHashCodeWithOtherTypes() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] larger = new byte[backing.length + 6];
+ System.arraycopy(backing, 0, larger, 3, backing.length);
+ KeyIdentitySingleArrayBacked slice =
+ new KeyIdentitySingleArrayBacked(larger, 3, backing.length);
+
+ KeyIdentitySingleArrayBacked full = new KeyIdentitySingleArrayBacked(backing);
+ KeyIdentityBytesBacked bb = createBytesBacked();
+ KeyIdentityBytesBacked bbArrays = createBytesBackedFromRawArrays();
+
+ assertTrue("SingleArrayBacked slice must equal full-array SingleArrayBacked",
+ slice.equals(full));
+ assertTrue("SingleArrayBacked slice must equal BytesBacked", slice.equals(bb));
+ assertTrue("SingleArrayBacked slice must equal BytesBacked (byte[] ctor)",
+ slice.equals(bbArrays));
+
+ assertEquals("SingleArrayBacked slice must have same hashCode as full-array",
+ slice.hashCode(), full.hashCode());
+ assertEquals("SingleArrayBacked slice must have same hashCode as BytesBacked",
+ slice.hashCode(), bb.hashCode());
+ assertEquals("SingleArrayBacked slice must have same hashCode as BytesBacked (byte[] ctor)",
+ slice.hashCode(), bbArrays.hashCode());
+ }
+ }
+
+ // ---------------------------------------------------------------------------
+ // Abstract base — interface-contract tests shared by all implementations.
+ //
+ // JUnit discovers @Test methods on every concrete subclass via inheritance,
+ // running each one with that subclass's create() factory in effect (Template
+ // Method pattern). N test bodies written here execute 2×N times total.
+ // ---------------------------------------------------------------------------
+
+ public abstract static class AbstractTestFullKeyIdentity {
+
+ /**
+ * Creates an instance from the given raw segment bytes. The partial identity must be non-empty
+ * (length >= 1) because {@link KeyIdentitySingleArrayBacked} is constructed via
+ * {@link ManagedKeyIdentityUtils#constructRowKeyForIdentity(byte[], byte[], byte[])} which
+ * enforces that constraint.
+ */
+ protected abstract ManagedKeyIdentity create(byte[] cust, byte[] ns, byte[] partial);
+
+ // -- View getters --
+
+ @Test
+ public void testGetCustodianView() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertArrayEquals(CUSTODIAN, fki.getCustodianView().copyBytes());
+ }
+
+ @Test
+ public void testGetNamespaceView() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertArrayEquals(NAMESPACE, fki.getNamespaceView().copyBytes());
+ }
+
+ @Test
+ public void testGetPartialIdentityView() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertArrayEquals(PARTIAL, fki.getPartialIdentityView().copyBytes());
+ }
+
+ @Test
+ public void testGetFullIdentityView() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] expected =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertArrayEquals(expected, fki.getFullIdentityView().copyBytes());
+ }
+
+ @Test
+ public void testGetIdentityPrefixView() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] expected =
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ assertArrayEquals(expected, fki.getIdentityPrefixView().copyBytes());
+ }
+
+ @Test
+ public void testGetKeyIdentityPrefix() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ ManagedKeyIdentity prefix = fki.getKeyIdentityPrefix();
+ assertNotSame("Non-empty partial identity must produce a stripped prefix identity", fki,
+ prefix);
+ assertArrayEquals(CUSTODIAN, prefix.getCustodianView().copyBytes());
+ assertArrayEquals(NAMESPACE, prefix.getNamespaceView().copyBytes());
+ assertEquals(0, prefix.getPartialIdentityLength());
+ }
+
+ // -- Copy methods (defensive isolation) --
+
+ @Test
+ public void testCopyCustodian() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] copy = fki.copyCustodian();
+ assertArrayEquals(CUSTODIAN, copy);
+ copy[0] = (byte) 0xFF;
+ assertArrayEquals("Mutating returned copy must not affect the object", CUSTODIAN,
+ fki.copyCustodian());
+ }
+
+ @Test
+ public void testCopyNamespace() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] copy = fki.copyNamespace();
+ assertArrayEquals(NAMESPACE, copy);
+ copy[0] = (byte) 0xFF;
+ assertArrayEquals("Mutating returned copy must not affect the object", NAMESPACE,
+ fki.copyNamespace());
+ }
+
+ @Test
+ public void testCopyPartialIdentity() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] copy = fki.copyPartialIdentity();
+ assertArrayEquals(PARTIAL, copy);
+ copy[0] = (byte) 0x00;
+ assertArrayEquals("Mutating returned copy must not affect the object", PARTIAL,
+ fki.copyPartialIdentity());
+ }
+
+ // -- Length methods --
+
+ @Test
+ public void testGetCustodianLength() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(CUSTODIAN.length, fki.getCustodianLength());
+ }
+
+ @Test
+ public void testGetNamespaceLength() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(NAMESPACE.length, fki.getNamespaceLength());
+ }
+
+ @Test
+ public void testGetPartialIdentityLength() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(PARTIAL.length, fki.getPartialIdentityLength());
+ }
+
+ // -- String / encoded getters --
+
+ @Test
+ public void testGetCustodianEncoded() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(ManagedKeyProvider.encodeToStr(CUSTODIAN), fki.getCustodianEncoded());
+ }
+
+ @Test
+ public void testGetNamespaceString() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(Bytes.toString(NAMESPACE), fki.getNamespaceString());
+ }
+
+ @Test
+ public void testGetPartialIdentityEncoded() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(ManagedKeyProvider.encodeToStr(PARTIAL), fki.getPartialIdentityEncoded());
+ }
+
+ // -- equals / hashCode --
+
+ @Test
+ public void testEqualsReflexive() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertTrue(fki.equals(fki));
+ }
+
+ @Test
+ public void testEqualsSameData() {
+ ManagedKeyIdentity fki1 = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ ManagedKeyIdentity fki2 = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertTrue(fki1.equals(fki2));
+ assertTrue(fki2.equals(fki1));
+ assertEquals("equal instances must have the same hash code", fki1.hashCode(),
+ fki2.hashCode());
+ }
+
+ @Test
+ public void testEqualsDifferentCustodian() {
+ ManagedKeyIdentity fki1 = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ ManagedKeyIdentity fki2 = create(new byte[] { 0x0A, 0x0B }, NAMESPACE, PARTIAL);
+ assertFalse(fki1.equals(fki2));
+ assertFalse(fki2.equals(fki1));
+ }
+
+ @Test
+ public void testEqualsDifferentNamespace() {
+ ManagedKeyIdentity fki1 = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ ManagedKeyIdentity fki2 = create(CUSTODIAN, Bytes.toBytes("other"), PARTIAL);
+ assertFalse(fki1.equals(fki2));
+ assertFalse(fki2.equals(fki1));
+ }
+
+ @Test
+ public void testEqualsDifferentPartialIdentity() {
+ ManagedKeyIdentity fki1 = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ ManagedKeyIdentity fki2 = create(CUSTODIAN, NAMESPACE, new byte[] { 0x01 });
+ assertFalse(fki1.equals(fki2));
+ assertFalse(fki2.equals(fki1));
+ }
+
+ @Test
+ public void testEqualsNull() {
+ assertFalse(create(CUSTODIAN, NAMESPACE, PARTIAL).equals(null));
+ }
+
+ @Test
+ public void testEqualsDifferentRuntimeClass() {
+ assertFalse(create(CUSTODIAN, NAMESPACE, PARTIAL).equals("not a ManagedKeyIdentity"));
+ }
+
+ // -- clone --
+
+ @Test
+ public void testCloneIsEqualAndSameClass() {
+ ManagedKeyIdentity fki = create(CUSTODIAN, NAMESPACE, PARTIAL);
+ ManagedKeyIdentity clone = fki.clone();
+ assertNotNull(clone);
+ assertEquals("clone must be same class as original", fki.getClass(), clone.getClass());
+ assertTrue("clone must be equal to original", fki.equals(clone));
+ assertTrue("original must be equal to clone", clone.equals(fki));
+ // Ensure clone created a copy instead of just returning the same.
+ assertNotSame(fki, clone);
+ assertNotSame(fki.getCustodianView(), clone.getCustodianView());
+ assertNotSame(fki.getNamespaceView(), clone.getNamespaceView());
+ assertNotSame(fki.getPartialIdentityView(), clone.getPartialIdentityView());
+ }
+
+ @Test
+ public void testCloneHoldsCorrectData() {
+ ManagedKeyIdentity clone = create(CUSTODIAN, NAMESPACE, PARTIAL).clone();
+ assertArrayEquals(CUSTODIAN, clone.copyCustodian());
+ assertArrayEquals(NAMESPACE, clone.copyNamespace());
+ assertArrayEquals(PARTIAL, clone.copyPartialIdentity());
+ }
+
+ // -- compareCustodian --
+
+ @Test
+ public void testCompareCustodianEqual() {
+ assertEquals(0, create(CUSTODIAN, NAMESPACE, PARTIAL).compareCustodian(CUSTODIAN));
+ }
+
+ @Test
+ public void testCompareCustodianLess() {
+ // {0x01, 0x02, 0x03} < {0x01, 0x02, 0x03, 0x04} lexicographically (prefix, shorter)
+ assertTrue(
+ create(new byte[] { 0x01, 0x02, 0x03 }, NAMESPACE, PARTIAL).compareCustodian(CUSTODIAN)
+ < 0);
+ }
+
+ @Test
+ public void testCompareCustodianGreater() {
+ // {0x01, 0x02, 0x03, 0x05} > {0x01, 0x02, 0x03, 0x04}
+ assertTrue(create(new byte[] { 0x01, 0x02, 0x03, 0x05 }, NAMESPACE, PARTIAL)
+ .compareCustodian(CUSTODIAN) > 0);
+ }
+
+ @Test
+ public void testCompareCustodianWithOffsetLength() {
+ // Embed CUSTODIAN inside a larger array starting at offset 2.
+ byte[] padded = new byte[CUSTODIAN.length + 2];
+ padded[0] = (byte) 0xFF;
+ padded[1] = (byte) 0xFF;
+ System.arraycopy(CUSTODIAN, 0, padded, 2, CUSTODIAN.length);
+ assertEquals(0,
+ create(CUSTODIAN, NAMESPACE, PARTIAL).compareCustodian(padded, 2, CUSTODIAN.length));
+ }
+
+ // -- compareNamespace --
+
+ @Test
+ public void testCompareNamespaceEqual() {
+ assertEquals(0, create(CUSTODIAN, NAMESPACE, PARTIAL).compareNamespace(NAMESPACE));
+ }
+
+ @Test
+ public void testCompareNamespaceLess() {
+ assertTrue(
+ create(CUSTODIAN, Bytes.toBytes("aaa"), PARTIAL).compareNamespace(Bytes.toBytes("zzz"))
+ < 0);
+ }
+
+ @Test
+ public void testCompareNamespaceGreater() {
+ assertTrue(
+ create(CUSTODIAN, Bytes.toBytes("zzz"), PARTIAL).compareNamespace(Bytes.toBytes("aaa"))
+ > 0);
+ }
+
+ @Test
+ public void testCompareNamespaceWithOffsetLength() {
+ byte[] padded = new byte[NAMESPACE.length + 1];
+ padded[0] = (byte) 0x00;
+ System.arraycopy(NAMESPACE, 0, padded, 1, NAMESPACE.length);
+ assertEquals(0,
+ create(CUSTODIAN, NAMESPACE, PARTIAL).compareNamespace(padded, 1, NAMESPACE.length));
+ }
+
+ // -- comparePartialIdentity --
+
+ @Test
+ public void testComparePartialIdentityEqual() {
+ assertEquals(0, create(CUSTODIAN, NAMESPACE, PARTIAL).comparePartialIdentity(PARTIAL));
+ }
+
+ @Test
+ public void testComparePartialIdentityLess() {
+ // {0x00} < {0xAA, 0xBB, 0xCC}
+ assertTrue(
+ create(CUSTODIAN, NAMESPACE, new byte[] { 0x00 }).comparePartialIdentity(PARTIAL) < 0);
+ }
+
+ @Test
+ public void testComparePartialIdentityGreater() {
+ // {0xFF, 0xFF} > {0xAA, 0xBB, 0xCC}
+ assertTrue(create(CUSTODIAN, NAMESPACE, new byte[] { (byte) 0xFF, (byte) 0xFF })
+ .comparePartialIdentity(PARTIAL) > 0);
+ }
+
+ @Test
+ public void testComparePartialIdentityWithOffsetLength() {
+ byte[] padded = new byte[PARTIAL.length + 3];
+ System.arraycopy(PARTIAL, 0, padded, 3, PARTIAL.length);
+ assertEquals(0,
+ create(CUSTODIAN, NAMESPACE, PARTIAL).comparePartialIdentity(padded, 3, PARTIAL.length));
+ }
+ }
+
+ // ---------------------------------------------------------------------------
+ // BytesBacked
+ // ---------------------------------------------------------------------------
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestBytesBacked extends AbstractTestFullKeyIdentity {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestBytesBacked.class);
+
+ @Override
+ protected ManagedKeyIdentity create(byte[] cust, byte[] ns, byte[] partial) {
+ return new KeyIdentityBytesBacked(new Bytes(cust), new Bytes(ns), new Bytes(partial));
+ }
+
+ // -- Construction validation --
+
+ @Test
+ public void testConstructionWithNullCustodian_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityBytesBacked(null, new Bytes(NAMESPACE), new Bytes(PARTIAL)));
+ }
+
+ @Test
+ public void testConstructionWithNullNamespace_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), null, new Bytes(PARTIAL)));
+ }
+
+ @Test
+ public void testConstructionWithNullPartialIdentity_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), new Bytes(NAMESPACE), null));
+ }
+
+ @Test
+ public void testConstructionWithZeroLengthCustodian_throws() {
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentityBytesBacked(new Bytes(new byte[0]), new Bytes(NAMESPACE),
+ new Bytes(PARTIAL)));
+ }
+
+ @Test
+ public void testConstructionWithZeroLengthNamespace_throws() {
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), new Bytes(new byte[0]),
+ new Bytes(PARTIAL)));
+ }
+
+ // -- Empty partial identity (permitted; partialIdentity.getLength() >= 0) --
+
+ @Test
+ public void testEmptyPartialIdentity() {
+ ManagedKeyIdentity fki = new KeyIdentityBytesBacked(new Bytes(CUSTODIAN),
+ new Bytes(NAMESPACE), new Bytes(new byte[0]));
+ assertEquals(0, fki.getPartialIdentityLength());
+ assertArrayEquals(new byte[0], fki.copyPartialIdentity());
+ }
+
+ // -- Bytes with non-zero offset --
+
+ @Test
+ public void testConstructionWithBytesHavingOffset() {
+ // Build a Bytes that views into the middle of a larger array.
+ byte[] larger = new byte[CUSTODIAN.length + 4];
+ System.arraycopy(CUSTODIAN, 0, larger, 2, CUSTODIAN.length);
+ Bytes custView = new Bytes(larger, 2, CUSTODIAN.length);
+
+ KeyIdentityBytesBacked fki =
+ new KeyIdentityBytesBacked(custView, new Bytes(NAMESPACE), new Bytes(PARTIAL));
+
+ // copyCustodian() and the view use offset+length correctly.
+ assertArrayEquals(CUSTODIAN, fki.copyCustodian());
+ assertEquals(CUSTODIAN.length, fki.getCustodianLength());
+ assertArrayEquals(CUSTODIAN, fki.getCustodianView().copyBytes());
+ // compareCustodian also uses the Bytes compareTo which respects offset/length.
+ assertEquals(0, fki.compareCustodian(CUSTODIAN));
+ // Note: getCustodianEncoded() calls custodian.get() (returns full backing array)
+ // and would encode more bytes than intended when offset > 0. That is a known
+ // deficiency in BytesBacked not fixed here; the abstract-base encoding tests
+ // use full-array Bytes so they are unaffected.
+ }
+
+ @Test
+ public void testConstructionWithBytesHavingOffset_equalsAndHashCodeWithOtherTypes() {
+ // Build BytesBacked with all three segments viewing into larger arrays (non-zero offset).
+ byte[] largerCust = new byte[CUSTODIAN.length + 4];
+ System.arraycopy(CUSTODIAN, 0, largerCust, 2, CUSTODIAN.length);
+ byte[] largerNs = new byte[NAMESPACE.length + 4];
+ System.arraycopy(NAMESPACE, 0, largerNs, 2, NAMESPACE.length);
+ byte[] largerPartial = new byte[PARTIAL.length + 4];
+ System.arraycopy(PARTIAL, 0, largerPartial, 2, PARTIAL.length);
+
+ KeyIdentityBytesBacked withOffset =
+ new KeyIdentityBytesBacked(new Bytes(largerCust, 2, CUSTODIAN.length),
+ new Bytes(largerNs, 2, NAMESPACE.length), new Bytes(largerPartial, 2, PARTIAL.length));
+
+ KeyIdentityBytesBacked noOffset =
+ new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), new Bytes(NAMESPACE), new Bytes(PARTIAL));
+ KeyIdentityBytesBacked fromRawArrays =
+ new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentitySingleArrayBacked single = new KeyIdentitySingleArrayBacked(backing);
+
+ assertTrue("BytesBacked with offset must equal BytesBacked without offset",
+ withOffset.equals(noOffset));
+ assertTrue("BytesBacked without offset must equal BytesBacked with offset",
+ noOffset.equals(withOffset));
+ assertTrue("BytesBacked with offset must equal BytesBacked (byte[] ctor)",
+ withOffset.equals(fromRawArrays));
+ assertTrue("BytesBacked with offset must equal SingleArrayBacked", withOffset.equals(single));
+
+ assertEquals("BytesBacked with offset must have same hashCode as no-offset",
+ withOffset.hashCode(), noOffset.hashCode());
+ assertEquals("BytesBacked with offset must have same hashCode as BytesBacked (byte[] ctor)",
+ withOffset.hashCode(), fromRawArrays.hashCode());
+ assertEquals("BytesBacked with offset must have same hashCode as SingleArrayBacked",
+ withOffset.hashCode(), single.hashCode());
+ }
+
+ // -- getCustodianView returns the stored Bytes object (no copy on read) --
+
+ @Test
+ public void testGetCustodianViewReturnsSameReference() {
+ Bytes custBytes = new Bytes(CUSTODIAN);
+ KeyIdentityBytesBacked fki =
+ new KeyIdentityBytesBacked(custBytes, new Bytes(NAMESPACE), new Bytes(PARTIAL));
+ assertSame(custBytes, fki.getCustodianView());
+ }
+
+ @Test
+ public void testGetNamespaceViewReturnsSameReference() {
+ Bytes nsBytes = new Bytes(NAMESPACE);
+ KeyIdentityBytesBacked fki =
+ new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), nsBytes, new Bytes(PARTIAL));
+ assertSame(nsBytes, fki.getNamespaceView());
+ }
+
+ @Test
+ public void testGetPartialIdentityViewReturnsSameReference() {
+ Bytes partialBytes = new Bytes(PARTIAL);
+ KeyIdentityBytesBacked fki =
+ new KeyIdentityBytesBacked(new Bytes(CUSTODIAN), new Bytes(NAMESPACE), partialBytes);
+ assertSame(partialBytes, fki.getPartialIdentityView());
+ }
+
+ // -- byte[] constructor (wraps arrays without copying; same validation as Bytes ctor) --
+
+ @Test
+ public void testConstructionWithNullCustodian_byteArrayCtor_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityBytesBacked(null, NAMESPACE, PARTIAL));
+ }
+
+ @Test
+ public void testConstructionWithNullNamespace_byteArrayCtor_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityBytesBacked(CUSTODIAN, null, PARTIAL));
+ }
+
+ @Test
+ public void testConstructionWithNullPartialIdentity_byteArrayCtor_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, null));
+ }
+
+ @Test
+ public void testConstructionWithZeroLengthCustodian_byteArrayCtor_throws() {
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentityBytesBacked(new byte[0], NAMESPACE, PARTIAL));
+ }
+
+ @Test
+ public void testConstructionWithZeroLengthNamespace_byteArrayCtor_throws() {
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentityBytesBacked(CUSTODIAN, new byte[0], PARTIAL));
+ }
+
+ @Test
+ public void testEmptyPartialIdentity_byteArrayCtor() {
+ ManagedKeyIdentity fki = new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, new byte[0]);
+ assertEquals(0, fki.getPartialIdentityLength());
+ assertArrayEquals(new byte[0], fki.copyPartialIdentity());
+ }
+
+ /**
+ * {@link KeyIdentityBytesBacked#KeyIdentityBytesBacked(byte[], byte[], byte[])} uses
+ * {@link Bytes#Bytes(byte[])} which keeps the caller's array as backing storage; mutating that
+ * array changes the identity's logical content.
+ */
+ @Test
+ public void testByteArrayCtorSharesBackingWithInput() {
+ byte[] cust = CUSTODIAN.clone();
+ ManagedKeyIdentity fki = new KeyIdentityBytesBacked(cust, NAMESPACE, PARTIAL);
+ byte mutated = (byte) 0xFF;
+ cust[0] = mutated;
+ assertEquals(mutated, fki.copyCustodian()[0]);
+ }
+
+ @Test
+ public void testGetKeyIdentityPrefixWithPartialReusesCustodianAndNamespaceViews() {
+ Bytes custBytes = new Bytes(CUSTODIAN);
+ Bytes nsBytes = new Bytes(NAMESPACE);
+ KeyIdentityBytesBacked fki =
+ new KeyIdentityBytesBacked(custBytes, nsBytes, new Bytes(PARTIAL));
+
+ ManagedKeyIdentity prefix = fki.getKeyIdentityPrefix();
+ assertNotSame(fki, prefix);
+ assertTrue(prefix instanceof KeyIdentityPrefixBytesBacked);
+ assertSame(custBytes, prefix.getCustodianView());
+ assertSame(nsBytes, prefix.getNamespaceView());
+ assertSame(ManagedKeyIdentity.KEY_NULL_IDENTITY_BYTES, prefix.getPartialIdentityView());
+ }
+
+ @Test
+ public void testGetKeyIdentityPrefixWithNoPartialReturnsSameInstance() {
+ KeyIdentityBytesBacked fki = new KeyIdentityBytesBacked(CUSTODIAN, NAMESPACE, new byte[0]);
+ assertSame(fki, fki.getKeyIdentityPrefix());
+ }
+ }
+
+ // ---------------------------------------------------------------------------
+ // SingleArrayBacked
+ // ---------------------------------------------------------------------------
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestSingleArrayBacked extends AbstractTestFullKeyIdentity {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestSingleArrayBacked.class);
+
+ @Override
+ protected ManagedKeyIdentity create(byte[] cust, byte[] ns, byte[] partial) {
+ return new KeyIdentitySingleArrayBacked(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(cust, ns, partial));
+ }
+
+ // -- Construction validation --
+
+ @Test
+ public void testConstructorWithNullArray_throws() {
+ assertThrows(IllegalArgumentException.class, () -> new KeyIdentitySingleArrayBacked(null));
+ }
+
+ @Test
+ public void testConstructorWithArrayTooShort_throws() {
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentitySingleArrayBacked(new byte[2]));
+ }
+
+ /**
+ * ACTIVE marker row keys are {@link ManagedKeyIdentityUtils#constructRowKeyForCustNamespace} —
+ * no trailing partial length byte when partial identity is empty.
+ */
+ @Test
+ public void testCustNamespaceMarkerRowKey_emptyPartialIdentity() {
+ byte[] markerRow =
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ KeyIdentitySingleArrayBacked fromRow = new KeyIdentitySingleArrayBacked(markerRow);
+ assertEquals(0, fromRow.getPartialIdentityLength());
+ assertArrayEquals(new byte[0], fromRow.copyPartialIdentity());
+ assertArrayEquals(markerRow, fromRow.getFullIdentityView().copyBytes());
+ assertArrayEquals(markerRow, fromRow.getIdentityPrefixView().copyBytes());
+ ManagedKeyIdentity parsed = new KeyIdentitySingleArrayBacked(markerRow);
+ assertTrue(fromRow.equals(parsed));
+ assertEquals(fromRow.hashCode(), parsed.hashCode());
+ }
+
+ @Test
+ public void testComparePartialIdentity_markerRowWithNonEmptyOtherReturnsNegativeOne() {
+ byte[] markerRow =
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ KeyIdentitySingleArrayBacked fromRow = new KeyIdentitySingleArrayBacked(markerRow);
+ assertEquals(-1, fromRow.comparePartialIdentity(new byte[] { 0x01 }));
+ }
+
+ @Test
+ public void testComparePartialIdentity_markerRowWithEmptyOtherReturnsZero() {
+ byte[] markerRow =
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ KeyIdentitySingleArrayBacked fromRow = new KeyIdentitySingleArrayBacked(markerRow);
+ assertEquals(0, fromRow.comparePartialIdentity(new byte[0]));
+ }
+
+ // -- Offset+length constructor --
+
+ @Test
+ public void testOffsetLengthConstructorExtractsCorrectData() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ // Embed the identity bytes inside a larger array with a 3-byte prefix.
+ byte[] larger = new byte[backing.length + 5];
+ System.arraycopy(backing, 0, larger, 3, backing.length);
+
+ KeyIdentitySingleArrayBacked fki =
+ new KeyIdentitySingleArrayBacked(larger, 3, backing.length);
+
+ assertArrayEquals(CUSTODIAN, fki.copyCustodian());
+ assertArrayEquals(NAMESPACE, fki.copyNamespace());
+ assertArrayEquals(PARTIAL, fki.copyPartialIdentity());
+
+ byte[] expectedFull =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] expectedPrefix =
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ assertArrayEquals(expectedFull, fki.getFullIdentityView().copyBytes());
+ assertArrayEquals(expectedPrefix, fki.getIdentityPrefixView().copyBytes());
+ }
+
+ @Test
+ public void testOffsetLengthConstructorTooShortForLength_throws() {
+ // Slice length of 1 is below the 3-byte minimum identity encoding length.
+ byte[] arr = new byte[5];
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentitySingleArrayBacked(arr, 4, 1));
+ }
+
+ // -- getBackingArray / getOffset / getLength --
+
+ @Test
+ public void testGetBackingArrayReturnsSameReference() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentitySingleArrayBacked fki = new KeyIdentitySingleArrayBacked(backing);
+ assertSame(backing, fki.getBackingArray());
+ }
+
+ @Test
+ public void testGetOffsetZeroForFullArrayConstructor() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(0, new KeyIdentitySingleArrayBacked(backing).getOffset());
+ }
+
+ @Test
+ public void testGetOffsetReflectsSlice() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] larger = new byte[backing.length + 3];
+ System.arraycopy(backing, 0, larger, 3, backing.length);
+ assertEquals(3, new KeyIdentitySingleArrayBacked(larger, 3, backing.length).getOffset());
+ }
+
+ @Test
+ public void testGetLengthMatchesBackingArrayLength() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ assertEquals(backing.length, new KeyIdentitySingleArrayBacked(backing).getLength());
+ }
+
+ @Test
+ public void testGetLengthReflectsSlice() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] larger = new byte[backing.length + 10];
+ System.arraycopy(backing, 0, larger, 5, backing.length);
+ assertEquals(backing.length,
+ new KeyIdentitySingleArrayBacked(larger, 5, backing.length).getLength());
+ }
+
+ // -- Malformed format detected lazily on first accessor call --
+
+ @Test
+ public void testMalformedFormat_custodianLengthOverflows_throws() {
+ // custLen byte = 100 but array is only 5 bytes total.
+ byte[] bad = new byte[] { 100, 0x01, 0x02, 0x03, 0x04 };
+ KeyIdentitySingleArrayBacked fki = new KeyIdentitySingleArrayBacked(bad);
+ assertThrows(IllegalArgumentException.class, () -> fki.getCustodianView());
+ }
+
+ @Test
+ public void testMalformedFormat_zeroNamespaceLength_throws() {
+ // custLen=1, cust=0x01, nsLen=0 (must be >= 1).
+ byte[] bad = new byte[] { 1, 0x01, 0, 0x02, 1, (byte) 0xAA };
+ KeyIdentitySingleArrayBacked fki = new KeyIdentitySingleArrayBacked(bad);
+ assertThrows(IllegalArgumentException.class, () -> fki.getNamespaceView());
+ }
+
+ @Test
+ public void testMalformedFormat_partialLengthMismatch_throws() {
+ // custLen=1, cust=0x01, nsLen=1, ns=0x02, partialLen=5 but only 1 byte follows.
+ byte[] bad = new byte[] { 1, 0x01, 1, 0x02, 5, (byte) 0xFF };
+ KeyIdentitySingleArrayBacked fki = new KeyIdentitySingleArrayBacked(bad);
+ assertThrows(IllegalArgumentException.class, () -> fki.getPartialIdentityView());
+ }
+
+ // -- equals / hashCode: compares raw backing-array slice (format-level equality) --
+
+ @Test
+ public void testEqualsTwoInstancesFromEquivalentBackings() {
+ byte[] b1 = ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] b2 = b1.clone();
+ KeyIdentitySingleArrayBacked fki1 = new KeyIdentitySingleArrayBacked(b1);
+ KeyIdentitySingleArrayBacked fki2 = new KeyIdentitySingleArrayBacked(b2);
+ assertTrue(fki1.equals(fki2));
+ assertEquals(fki1.hashCode(), fki2.hashCode());
+ }
+
+ @Test
+ public void testEqualsFullArrayVsSlice() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] larger = new byte[backing.length + 4];
+ System.arraycopy(backing, 0, larger, 2, backing.length);
+ KeyIdentitySingleArrayBacked full = new KeyIdentitySingleArrayBacked(backing.clone());
+ KeyIdentitySingleArrayBacked slice =
+ new KeyIdentitySingleArrayBacked(larger, 2, backing.length);
+ assertTrue(full.equals(slice));
+ assertTrue(slice.equals(full));
+ assertEquals(full.hashCode(), slice.hashCode());
+ }
+
+ // -- clone returns a self-contained copy (offset 0, own array) --
+
+ @Test
+ public void testCloneFromSliceProducesStandaloneInstance() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ byte[] larger = new byte[backing.length + 6];
+ System.arraycopy(backing, 0, larger, 3, backing.length);
+ KeyIdentitySingleArrayBacked slice =
+ new KeyIdentitySingleArrayBacked(larger, 3, backing.length);
+
+ KeyIdentitySingleArrayBacked clone = (KeyIdentitySingleArrayBacked) slice.clone();
+ assertEquals(0, clone.getOffset());
+ assertEquals(backing.length, clone.getLength());
+ assertArrayEquals(CUSTODIAN, clone.copyCustodian());
+ assertArrayEquals(NAMESPACE, clone.copyNamespace());
+ assertArrayEquals(PARTIAL, clone.copyPartialIdentity());
+ }
+
+ @Test
+ public void testGetKeyIdentityPrefixWithPartialReturnsBackingSlice() {
+ byte[] backing =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(CUSTODIAN, NAMESPACE, PARTIAL);
+ KeyIdentitySingleArrayBacked fki = new KeyIdentitySingleArrayBacked(backing);
+
+ ManagedKeyIdentity prefix = fki.getKeyIdentityPrefix();
+ assertNotSame(fki, prefix);
+ assertTrue(prefix instanceof KeyIdentitySingleArrayBacked);
+ KeyIdentitySingleArrayBacked prefixSingleArray = (KeyIdentitySingleArrayBacked) prefix;
+ assertSame(backing, prefixSingleArray.getBackingArray());
+ assertEquals(fki.getOffset(), prefixSingleArray.getOffset());
+ assertEquals(
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE).length,
+ prefixSingleArray.getLength());
+ assertEquals(0, prefixSingleArray.getPartialIdentityLength());
+ }
+
+ @Test
+ public void testGetKeyIdentityPrefixWithNoPartialReturnsSameInstance() {
+ byte[] marker = ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ KeyIdentitySingleArrayBacked fki = new KeyIdentitySingleArrayBacked(marker);
+ assertSame(fki, fki.getKeyIdentityPrefix());
+ }
+ }
+
+ // ---------------------------------------------------------------------------
+ // KeyIdentityPrefixBytesBacked (custodian + namespace only)
+ // ---------------------------------------------------------------------------
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestKeyIdentityPrefixBytesBacked {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestKeyIdentityPrefixBytesBacked.class);
+
+ private static KeyIdentityPrefixBytesBacked create() {
+ return new KeyIdentityPrefixBytesBacked(CUSTODIAN, NAMESPACE);
+ }
+
+ private static KeyIdentityPrefixBytesBacked create(byte[] cust, byte[] ns) {
+ return new KeyIdentityPrefixBytesBacked(cust, ns);
+ }
+
+ // -- Construction validation --
+
+ @Test
+ public void testConstructionWithNullCustodian_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityPrefixBytesBacked(null, NAMESPACE));
+ }
+
+ @Test
+ public void testConstructionWithNullNamespace_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityPrefixBytesBacked(CUSTODIAN, null));
+ }
+
+ @Test
+ public void testConstructionWithNullCustodian_BytesCtor_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityPrefixBytesBacked(null, new Bytes(NAMESPACE)));
+ }
+
+ @Test
+ public void testConstructionWithNullNamespace_BytesCtor_throws() {
+ assertThrows(NullPointerException.class,
+ () -> new KeyIdentityPrefixBytesBacked(new Bytes(CUSTODIAN), null));
+ }
+
+ @Test
+ public void testConstructionWithZeroLengthCustodian_throws() {
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentityPrefixBytesBacked(new byte[0], NAMESPACE));
+ }
+
+ @Test
+ public void testConstructionWithZeroLengthNamespace_throws() {
+ assertThrows(IllegalArgumentException.class,
+ () -> new KeyIdentityPrefixBytesBacked(CUSTODIAN, new byte[0]));
+ }
+
+ // -- View getters (aligned with AbstractTestFullKeyIdentity for custodian / namespace / prefix)
+
+ @Test
+ public void testGetCustodianView() {
+ ManagedKeyIdentity id = create();
+ assertArrayEquals(CUSTODIAN, id.getCustodianView().copyBytes());
+ }
+
+ @Test
+ public void testGetNamespaceView() {
+ ManagedKeyIdentity id = create();
+ assertArrayEquals(NAMESPACE, id.getNamespaceView().copyBytes());
+ }
+
+ @Test
+ public void testGetPartialIdentityViewIsKeyNullIdentityBytes() {
+ ManagedKeyIdentity id = create();
+ assertSame(ManagedKeyIdentity.KEY_NULL_IDENTITY_BYTES, id.getPartialIdentityView());
+ assertEquals(0, id.getPartialIdentityView().getLength());
+ }
+
+ @Test
+ public void testGetFullIdentityView_throws() {
+ ManagedKeyIdentity id = create();
+ assertThrows(UnsupportedOperationException.class, () -> id.getFullIdentityView());
+ }
+
+ @Test
+ public void testGetIdentityPrefixView() {
+ ManagedKeyIdentity id = create();
+ byte[] expected =
+ ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ assertArrayEquals(expected, id.getIdentityPrefixView().copyBytes());
+ }
+
+ @Test
+ public void testGetKeyIdentityPrefixReturnsSameInstance() {
+ ManagedKeyIdentity id = create();
+ assertSame(id, id.getKeyIdentityPrefix());
+ }
+
+ // -- Copy methods --
+
+ @Test
+ public void testCopyCustodian() {
+ ManagedKeyIdentity id = create();
+ byte[] copy = id.copyCustodian();
+ assertArrayEquals(CUSTODIAN, copy);
+ copy[0] = (byte) 0xFF;
+ assertArrayEquals("Mutating returned copy must not affect the object", CUSTODIAN,
+ id.copyCustodian());
+ }
+
+ @Test
+ public void testCopyNamespace() {
+ ManagedKeyIdentity id = create();
+ byte[] copy = id.copyNamespace();
+ assertArrayEquals(NAMESPACE, copy);
+ copy[0] = (byte) 0xFF;
+ assertArrayEquals("Mutating returned copy must not affect the object", NAMESPACE,
+ id.copyNamespace());
+ }
+
+ @Test
+ public void testCopyPartialIdentity_throws() {
+ ManagedKeyIdentity id = create();
+ assertThrows(UnsupportedOperationException.class, () -> id.copyPartialIdentity());
+ }
+
+ // -- Length methods --
+
+ @Test
+ public void testGetCustodianLength() {
+ assertEquals(CUSTODIAN.length, create().getCustodianLength());
+ }
+
+ @Test
+ public void testGetNamespaceLength() {
+ assertEquals(NAMESPACE.length, create().getNamespaceLength());
+ }
+
+ @Test
+ public void testGetPartialIdentityLength() {
+ assertEquals(0, create().getPartialIdentityLength());
+ }
+
+ // -- String / encoded getters --
+
+ @Test
+ public void testGetCustodianEncoded() {
+ assertEquals(ManagedKeyProvider.encodeToStr(CUSTODIAN), create().getCustodianEncoded());
+ }
+
+ @Test
+ public void testGetNamespaceString() {
+ assertEquals(Bytes.toString(NAMESPACE), create().getNamespaceString());
+ }
+
+ @Test
+ public void testGetPartialIdentityEncoded_throws() {
+ assertThrows(UnsupportedOperationException.class, () -> create().getPartialIdentityEncoded());
+ }
+
+ // -- equals / hashCode --
+
+ @Test
+ public void testEqualsReflexive() {
+ ManagedKeyIdentity id = create();
+ assertTrue(id.equals(id));
+ }
+
+ @Test
+ public void testEqualsSameData_byteArrayAndBytesCtors() {
+ KeyIdentityPrefixBytesBacked fromArrays =
+ new KeyIdentityPrefixBytesBacked(CUSTODIAN, NAMESPACE);
+ KeyIdentityPrefixBytesBacked fromBytes =
+ new KeyIdentityPrefixBytesBacked(new Bytes(CUSTODIAN), new Bytes(NAMESPACE));
+ assertTrue(fromArrays.equals(fromBytes));
+ assertTrue(fromBytes.equals(fromArrays));
+ assertEquals(fromArrays.hashCode(), fromBytes.hashCode());
+ }
+
+ @Test
+ public void testEqualsSameData_twoInstances() {
+ ManagedKeyIdentity a = create();
+ ManagedKeyIdentity b = create();
+ assertTrue(a.equals(b));
+ assertTrue(b.equals(a));
+ assertEquals(a.hashCode(), b.hashCode());
+ }
+
+ @Test
+ public void testEqualsDifferentCustodian() {
+ assertFalse(create().equals(create(new byte[] { 0x0A, 0x0B }, NAMESPACE)));
+ }
+
+ @Test
+ public void testEqualsDifferentNamespace() {
+ assertFalse(create().equals(create(CUSTODIAN, Bytes.toBytes("other"))));
+ }
+
+ @Test
+ public void testEqualsNull() {
+ assertFalse(create().equals(null));
+ }
+
+ @Test
+ public void testEqualsDifferentRuntimeClass() {
+ assertFalse(create().equals("not a ManagedKeyIdentity"));
+ }
+
+ // -- clone --
+
+ @Test
+ public void testCloneIsEqualSameClassAndIndependentCustodianNamespace() {
+ KeyIdentityPrefixBytesBacked id = create();
+ KeyIdentityPrefixBytesBacked clone = id.clone();
+ assertNotNull(clone);
+ assertEquals(KeyIdentityPrefixBytesBacked.class, clone.getClass());
+ assertTrue(id.equals(clone));
+ assertNotSame(id, clone);
+ assertNotSame(id.getCustodianView(), clone.getCustodianView());
+ assertNotSame(id.getNamespaceView(), clone.getNamespaceView());
+ // Partial view is the shared empty sentinel for both instances.
+ assertSame(ManagedKeyIdentity.KEY_NULL_IDENTITY_BYTES, id.getPartialIdentityView());
+ assertSame(ManagedKeyIdentity.KEY_NULL_IDENTITY_BYTES, clone.getPartialIdentityView());
+ }
+
+ @Test
+ public void testCloneHoldsCorrectCustodianAndNamespace() {
+ KeyIdentityPrefixBytesBacked clone = create().clone();
+ assertArrayEquals(CUSTODIAN, clone.copyCustodian());
+ assertArrayEquals(NAMESPACE, clone.copyNamespace());
+ assertEquals(0, clone.getPartialIdentityLength());
+ }
+
+ // -- compareCustodian / compareNamespace --
+
+ @Test
+ public void testCompareCustodianEqual() {
+ assertEquals(0, create().compareCustodian(CUSTODIAN));
+ }
+
+ @Test
+ public void testCompareCustodianLess() {
+ assertTrue(
+ create(new byte[] { 0x01, 0x02, 0x03 }, NAMESPACE).compareCustodian(CUSTODIAN) < 0);
+ }
+
+ @Test
+ public void testCompareCustodianGreater() {
+ assertTrue(
+ create(new byte[] { 0x01, 0x02, 0x03, 0x05 }, NAMESPACE).compareCustodian(CUSTODIAN) > 0);
+ }
+
+ @Test
+ public void testCompareCustodianWithOffsetLength() {
+ byte[] padded = new byte[CUSTODIAN.length + 2];
+ padded[0] = (byte) 0xFF;
+ padded[1] = (byte) 0xFF;
+ System.arraycopy(CUSTODIAN, 0, padded, 2, CUSTODIAN.length);
+ assertEquals(0, create().compareCustodian(padded, 2, CUSTODIAN.length));
+ }
+
+ @Test
+ public void testCompareNamespaceEqual() {
+ assertEquals(0, create().compareNamespace(NAMESPACE));
+ }
+
+ @Test
+ public void testCompareNamespaceLess() {
+ assertTrue(
+ create(CUSTODIAN, Bytes.toBytes("aaa")).compareNamespace(Bytes.toBytes("zzz")) < 0);
+ }
+
+ @Test
+ public void testCompareNamespaceGreater() {
+ assertTrue(
+ create(CUSTODIAN, Bytes.toBytes("zzz")).compareNamespace(Bytes.toBytes("aaa")) > 0);
+ }
+
+ @Test
+ public void testCompareNamespaceWithOffsetLength() {
+ byte[] padded = new byte[NAMESPACE.length + 1];
+ padded[0] = (byte) 0x00;
+ System.arraycopy(NAMESPACE, 0, padded, 1, NAMESPACE.length);
+ assertEquals(0, create().compareNamespace(padded, 1, NAMESPACE.length));
+ }
+
+ @Test
+ public void testComparePartialIdentity_throws() {
+ assertThrows(UnsupportedOperationException.class,
+ () -> create().comparePartialIdentity(PARTIAL));
+ }
+
+ @Test
+ public void testComparePartialIdentityWithOffsetLength_throws() {
+ assertThrows(UnsupportedOperationException.class,
+ () -> create().comparePartialIdentity(PARTIAL, 0, PARTIAL.length));
+ }
+
+ // -- Equivalence documented on KeyIdentityPrefixBytesBacked --
+
+ @Test
+ public void testEqualsFullKeyIdentityBytesBackedWithEmptyPartial() {
+ KeyIdentityPrefixBytesBacked prefix = create();
+ KeyIdentityBytesBacked bb = new KeyIdentityBytesBacked(new Bytes(CUSTODIAN),
+ new Bytes(NAMESPACE), new Bytes(new byte[0]));
+ assertTrue(prefix.equals(bb));
+ assertTrue(bb.equals(prefix));
+ assertEquals(prefix.hashCode(), bb.hashCode());
+ }
+
+ @Test
+ public void testEqualsFullKeyIdentitySingleArrayBackedCustNamespaceMarkerRow() {
+ KeyIdentityPrefixBytesBacked prefix = create();
+ byte[] marker = ManagedKeyIdentityUtils.constructRowKeyForCustNamespace(CUSTODIAN, NAMESPACE);
+ KeyIdentitySingleArrayBacked sa = new KeyIdentitySingleArrayBacked(marker);
+ assertTrue(prefix.equals(sa));
+ assertTrue(sa.equals(prefix));
+ assertEquals(prefix.hashCode(), sa.hashCode());
+ }
+
+ @Test
+ public void testGetCustodianViewReturnsSameReferenceWhenUsingBytesCtor() {
+ Bytes cust = new Bytes(CUSTODIAN);
+ KeyIdentityPrefixBytesBacked id =
+ new KeyIdentityPrefixBytesBacked(cust, new Bytes(NAMESPACE));
+ assertSame(cust, id.getCustodianView());
+ }
+
+ @Test
+ public void testGetNamespaceViewReturnsSameReferenceWhenUsingBytesCtor() {
+ Bytes ns = new Bytes(NAMESPACE);
+ KeyIdentityPrefixBytesBacked id = new KeyIdentityPrefixBytesBacked(new Bytes(CUSTODIAN), ns);
+ assertSame(ns, id.getNamespaceView());
+ }
+ }
+}
diff --git a/hbase-protocol-shaded/src/main/protobuf/HBase.proto b/hbase-protocol-shaded/src/main/protobuf/HBase.proto
index 674619d9b5ae..f89863d3c3c7 100644
--- a/hbase-protocol-shaded/src/main/protobuf/HBase.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/HBase.proto
@@ -308,6 +308,11 @@ message ManagedKeyEntryRequest {
required bytes key_metadata_hash = 2;
}
+message SetManagedKeyRequest {
+ required ManagedKeyRequest key_cust_ns = 1;
+ required string key_metadata = 2;
+}
+
enum ManagedKeyState {
KEY_ACTIVE = 1;
KEY_DISABLED = 2;
diff --git a/hbase-protocol-shaded/src/main/protobuf/server/ManagedKeys.proto b/hbase-protocol-shaded/src/main/protobuf/server/ManagedKeys.proto
index f79badb544fc..0319b8146494 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/ManagedKeys.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/ManagedKeys.proto
@@ -41,4 +41,6 @@ service ManagedKeysService {
returns (ManagedKeyResponse);
rpc RefreshManagedKeys(ManagedKeyRequest)
returns (EmptyMsg);
+ rpc SetManagedKey(SetManagedKeyRequest)
+ returns (ManagedKeyResponse);
}
diff --git a/hbase-protocol-shaded/src/main/protobuf/server/io/HFile.proto b/hbase-protocol-shaded/src/main/protobuf/server/io/HFile.proto
index 26a343a5d04f..9c1a6063c228 100644
--- a/hbase-protocol-shaded/src/main/protobuf/server/io/HFile.proto
+++ b/hbase-protocol-shaded/src/main/protobuf/server/io/HFile.proto
@@ -51,7 +51,6 @@ message FileTrailerProto {
optional string comparator_class_name = 11;
optional uint32 compression_codec = 12;
optional bytes encryption_key = 13;
- optional string key_namespace = 14;
+ optional bytes kek_identity = 14;
optional string kek_metadata = 15;
- optional uint64 kek_checksum = 16;
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/HBaseServerBase.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/HBaseServerBase.java
index eb1502685fe6..cdb7f59c84c7 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/HBaseServerBase.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/HBaseServerBase.java
@@ -54,7 +54,7 @@
import org.apache.hadoop.hbase.ipc.RpcServerInterface;
import org.apache.hadoop.hbase.keymeta.KeyManagementService;
import org.apache.hadoop.hbase.keymeta.KeymetaAdmin;
-import org.apache.hadoop.hbase.keymeta.KeymetaAdminImpl;
+import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.keymeta.SystemKeyAccessor;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
@@ -195,7 +195,7 @@ public abstract class HBaseServerBase> extends
protected final NettyEventLoopGroupConfig eventLoopGroupConfig;
protected SystemKeyCache systemKeyCache;
- protected KeymetaAdminImpl keymetaAdmin;
+ protected KeymetaTableAccessor keymetaAccessor;
protected ManagedKeyDataCache managedKeyDataCache;
private void setupSignalHandlers() {
@@ -294,7 +294,7 @@ public HBaseServerBase(Configuration conf, String name) throws IOException {
initializeFileSystem();
- keymetaAdmin = new KeymetaAdminImpl(this);
+ keymetaAccessor = new KeymetaTableAccessor(this);
int choreServiceInitialSize =
conf.getInt(CHORE_SERVICE_INITIAL_POOL_SIZE, DEFAULT_CHORE_SERVICE_INITIAL_POOL_SIZE);
@@ -418,7 +418,7 @@ public ZKWatcher getZooKeeper() {
@Override
public KeymetaAdmin getKeymetaAdmin() {
- return keymetaAdmin;
+ throw new UnsupportedOperationException("KeymetaAdmin is not supported on region server");
}
@Override
@@ -431,10 +431,11 @@ public SystemKeyCache getSystemKeyCache() {
return systemKeyCache;
}
- protected void buildSystemKeyCache() throws IOException {
- if (systemKeyCache == null && SecurityUtil.isKeyManagementEnabled(conf)) {
- systemKeyCache = SystemKeyCache.createCache(new SystemKeyAccessor(this));
+ protected SystemKeyCache buildSystemKeyCache() throws IOException {
+ if (SecurityUtil.isKeyManagementEnabled(conf)) {
+ return SystemKeyCache.createCache(new SystemKeyAccessor(this));
}
+ return null;
}
/**
@@ -443,9 +444,7 @@ protected void buildSystemKeyCache() throws IOException {
* @throws IOException if there is an error rebuilding the cache
*/
public void rebuildSystemKeyCache() throws IOException {
- if (SecurityUtil.isKeyManagementEnabled(conf)) {
- systemKeyCache = SystemKeyCache.createCache(new SystemKeyAccessor(this));
- }
+ systemKeyCache = buildSystemKeyCache();
}
protected final void shutdownChore(ScheduledChore chore) {
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java
index 85201ccd8bdf..bd5fac1c3c45 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/HFileLink.java
@@ -174,15 +174,6 @@ public Path getMobPath() {
return this.mobPath;
}
- /**
- * Get the table name and family name from the origin path.
- * @return the table name and family name
- */
- public Pair<String, String> getTableNameAndFamilyName() {
- return new Pair<>(this.originPath.getParent().getName(),
- this.originPath.getParent().getParent().getParent().getName());
- }
-
/**
* @param path Path to check.
* @return True if the path is a HFileLink.
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
index e43f839bff08..2227038c6aa7 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/FixedFileTrailer.java
@@ -30,6 +30,7 @@
import org.apache.hadoop.hbase.InnerStoreCellComparator;
import org.apache.hadoop.hbase.MetaCellComparator;
import org.apache.hadoop.hbase.io.compress.Compression;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.monitoring.ThreadLocalServerSideScanMetrics;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
@@ -131,14 +132,9 @@ public class FixedFileTrailer {
private byte[] encryptionKey;
/**
- * The key namespace
+ * The KEK identity (full identity bytes for system/managed key lookup).
*/
- private String keyNamespace;
-
- /**
- * The KEK checksum
- */
- private long kekChecksum;
+ private byte[] kekIdentity;
/**
* The KEK metadata
@@ -226,14 +222,11 @@ HFileProtos.FileTrailerProto toProtobuf() {
if (encryptionKey != null) {
builder.setEncryptionKey(UnsafeByteOperations.unsafeWrap(encryptionKey));
}
- if (keyNamespace != null) {
- builder.setKeyNamespace(keyNamespace);
- }
if (kekMetadata != null) {
builder.setKekMetadata(kekMetadata);
}
- if (kekChecksum != 0) {
- builder.setKekChecksum(kekChecksum);
+ if (kekIdentity != null && kekIdentity.length > 0) {
+ builder.setKekIdentity(UnsafeByteOperations.unsafeWrap(kekIdentity));
}
return builder.build();
}
@@ -337,14 +330,11 @@ void deserializeFromPB(DataInputStream inputStream) throws IOException {
if (trailerProto.hasEncryptionKey()) {
encryptionKey = trailerProto.getEncryptionKey().toByteArray();
}
- if (trailerProto.hasKeyNamespace()) {
- keyNamespace = trailerProto.getKeyNamespace();
- }
if (trailerProto.hasKekMetadata()) {
kekMetadata = trailerProto.getKekMetadata();
}
- if (trailerProto.hasKekChecksum()) {
- kekChecksum = trailerProto.getKekChecksum();
+ if (trailerProto.hasKekIdentity()) {
+ kekIdentity = trailerProto.getKekIdentity().toByteArray();
}
}
@@ -394,9 +384,11 @@ public String toString() {
append(sb, "comparatorClassName=" + comparatorClassName);
if (majorVersion >= 3) {
append(sb, "encryptionKey=" + (encryptionKey != null ? "PRESENT" : "NONE"));
- }
- if (keyNamespace != null) {
- append(sb, "keyNamespace=" + keyNamespace);
+ append(sb,
+ "kekIdentity=" + (kekIdentity != null
+ ? ManagedKeyProvider.encodeToStr(kekIdentity, 0, kekIdentity.length)
+ : "NONE"));
+ append(sb, "kekMetadata=" + (kekMetadata != null ? "PRESENT" : "NONE"));
}
append(sb, "majorVersion=" + majorVersion);
append(sb, "minorVersion=" + minorVersion);
@@ -677,22 +669,6 @@ public byte[] getEncryptionKey() {
return encryptionKey;
}
- public String getKeyNamespace() {
- return keyNamespace;
- }
-
- public void setKeyNamespace(String keyNamespace) {
- this.keyNamespace = keyNamespace;
- }
-
- public void setKEKChecksum(long kekChecksum) {
- this.kekChecksum = kekChecksum;
- }
-
- public long getKEKChecksum() {
- return kekChecksum;
- }
-
public void setEncryptionKey(byte[] keyBytes) {
this.encryptionKey = keyBytes;
}
@@ -705,6 +681,14 @@ public void setKEKMetadata(String kekMetadata) {
this.kekMetadata = kekMetadata;
}
+ public byte[] getKekIdentity() {
+ return kekIdentity;
+ }
+
+ public void setKekIdentity(byte[] kekIdentity) {
+ this.kekIdentity = kekIdentity;
+ }
+
/**
* Extracts the major version for a 4-byte serialized version data. The major version is the 3
* least significant bytes
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
index 2b74d177a4fe..13f6df7a01aa 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterImpl.java
@@ -883,12 +883,11 @@ protected void finishClose(FixedFileTrailer trailer) throws IOException {
Key encKey = null;
Key wrapperKey = null;
ManagedKeyData kekData = cryptoContext.getKEKData();
- String keyNamespace = cryptoContext.getKeyNamespace();
String kekMetadata = null;
- long kekChecksum = 0;
+ byte[] kekIdentity = null;
if (kekData != null) {
kekMetadata = kekData.getKeyMetadata();
- kekChecksum = kekData.getKeyChecksum();
+ kekIdentity = kekData.getKeyIdentity().getFullIdentityView().copyBytesIfNecessary();
wrapperKey = kekData.getTheKey();
encKey = cryptoContext.getKey();
} else {
@@ -903,9 +902,8 @@ protected void finishClose(FixedFileTrailer trailer) throws IOException {
EncryptionUtil.wrapKey(cryptoContext.getConf(), wrapperSubject, encKey, wrapperKey);
trailer.setEncryptionKey(wrappedKey);
}
- trailer.setKeyNamespace(keyNamespace);
trailer.setKEKMetadata(kekMetadata);
- trailer.setKEKChecksum(kekChecksum);
+ trailer.setKekIdentity(kekIdentity);
}
// Now we can finish the close
trailer.setMetaIndexCount(metaNames.size());
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementBase.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementBase.java
index 1885790c3ca9..aed0dcc69506 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementBase.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementBase.java
@@ -59,6 +59,7 @@ public KeyManagementBase(Configuration configuration) {
throw new IllegalArgumentException("Configuration must be non-null");
}
this.configuration = configuration;
+ ManagedKeyIdentityUtils.initDigestAlgos(configuration);
}
protected KeyManagementService getKeyManagementService() {
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementService.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementService.java
index bdb76f5bbe6d..6b3698ba9dee 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementService.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementService.java
@@ -75,4 +75,12 @@ static KeyManagementService createDefault(Configuration configuration, FileSyste
/** Returns the configuration. */
public Configuration getConfiguration();
+
+ /**
+ * Rotate the system key if it has changed.
+ * @return true if the key was rotated, false otherwise
+ */
+ default boolean rotateSystemKeyIfChanged() throws IOException {
+ return false;
+ }
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementUtils.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementUtils.java
index 816d5371475a..cdbf67417bea 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementUtils.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyManagementUtils.java
@@ -19,9 +19,11 @@
import java.io.IOException;
import java.security.KeyException;
+import java.util.Objects;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -52,44 +54,49 @@ private KeyManagementUtils() {
* @throws KeyException if an error occurs
*/
public static ManagedKeyData retrieveActiveKey(ManagedKeyProvider provider,
- KeymetaTableAccessor accessor, String encKeyCust, byte[] key_cust, String keyNamespace,
+ KeymetaTableAccessor accessor, String encKeyCust, ManagedKeyIdentity custNamespacePrefix,
ManagedKeyData existingActiveKey) throws IOException, KeyException {
Preconditions.checkArgument(
existingActiveKey == null || existingActiveKey.getKeyState() == ManagedKeyState.ACTIVE,
"Expected existing active key to be null or having ACTIVE state"
+ (existingActiveKey == null ? "" : ", but got: " + existingActiveKey.getKeyState()));
+ String keyNamespaceStr = custNamespacePrefix.getNamespaceString();
ManagedKeyData keyData;
try {
- keyData = provider.getManagedKey(key_cust, keyNamespace);
+ keyData = provider.getManagedKey(custNamespacePrefix);
} catch (IOException e) {
- keyData = new ManagedKeyData(key_cust, keyNamespace, ManagedKeyState.FAILED);
+ LOG.warn(
+ "retrieveActiveKey: Failed to get managed key from provider for (custodian: {}, namespace: {})",
+ encKeyCust, keyNamespaceStr, e);
+ keyData = new ManagedKeyData(custNamespacePrefix, ManagedKeyState.FAILED);
}
if (keyData == null) {
- throw new IOException("Invalid null managed key received from key provider");
+ throw new KeyException("Invalid null managed key received from key provider for "
+ + formatScope(encKeyCust, keyNamespaceStr));
}
if (keyData.getKeyMetadata() != null && keyData.getKeyState() == ManagedKeyState.INACTIVE) {
- throw new IOException(
- "Expected key to be ACTIVE, but got an INACTIVE key with metadata hash: "
- + keyData.getKeyMetadataHashEncoded() + " for (custodian: " + encKeyCust + ", namespace: "
- + keyNamespace + ")");
+ throw new KeyException(
+ "Expected key to be ACTIVE, but got an INACTIVE key with partial identity: "
+ + keyData.getPartialIdentityEncoded() + " for "
+ + formatScope(encKeyCust, keyNamespaceStr));
}
if (existingActiveKey != null && existingActiveKey.equals(keyData)) {
+ LOG.info("retrieveActiveKey: no change in active key for (custodian: {}, namespace: {})",
- encKeyCust, keyNamespace);
+ encKeyCust, keyNamespaceStr);
return existingActiveKey;
}
LOG.info(
"retrieveActiveKey: got key with state: {} and metadata: {} for custodian: {} namespace: {}",
- keyData.getKeyState(), keyData.getKeyMetadataHashEncoded(), encKeyCust,
+ keyData.getKeyState(), keyData.getPartialIdentityEncoded(), encKeyCust,
keyData.getKeyNamespace());
if (accessor != null) {
if (keyData.getKeyMetadata() != null) {
accessor.addKey(keyData);
} else {
- accessor.addKeyManagementStateMarker(keyData.getKeyCustodian(), keyData.getKeyNamespace(),
- keyData.getKeyState());
+ accessor.addKeyManagementStateMarker(keyData.getKeyIdentity(),
+ keyData.getKeyState().getExternalState());
}
}
return keyData;
@@ -97,22 +104,24 @@ public static ManagedKeyData retrieveActiveKey(ManagedKeyProvider provider,
/**
* Retrieves a key from the key provider for the specified metadata.
- * @param provider the managed key provider
- * @param accessor the accessor to use to persist the key. If null, the key will not be
- * persisted.
- * @param encKeyCust the encoded key custodian
- * @param keyCust the key custodian
- * @param keyNamespace the key namespace
- * @param keyMetadata the key metadata
- * @param wrappedKey the wrapped key, if available, can be null.
+ * @param provider the managed key provider
+ * @param accessor the accessor to use to persist the key. If null, the key will not be
+ * persisted.
+ * @param custNamespacePrefix the custodian and namespace prefix
+ * @param keyMetadata the key metadata
+ * @param wrappedKey the wrapped key, if available, can be null.
* @return the retrieved key that is guaranteed to be not null and have non-null metadata.
* @throws IOException if an error occurs while retrieving or persisting the key
* @throws KeyException if an error occurs while retrieving or validating the key
*/
public static ManagedKeyData retrieveKey(ManagedKeyProvider provider,
- KeymetaTableAccessor accessor, String encKeyCust, byte[] keyCust, String keyNamespace,
- String keyMetadata, byte[] wrappedKey) throws IOException, KeyException {
- ManagedKeyData keyData = provider.unwrapKey(keyMetadata, wrappedKey);
+ KeymetaTableAccessor accessor, ManagedKeyIdentity custNamespacePrefix, String keyMetadata,
+ byte[] wrappedKey) throws IOException, KeyException {
+ Preconditions.checkArgument(keyMetadata != null && !keyMetadata.isEmpty(),
+ "key_metadata must not be empty");
+ String metadataHash =
+ ManagedKeyProvider.encodeToStr(ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata));
+ ManagedKeyData keyData = provider.unwrapKey(custNamespacePrefix, keyMetadata, wrappedKey);
// Do some validation of the response, as we can't trust that all providers honour the contract.
// If the key is disabled, we expect a more specific key state to be used, not the generic
// DISABLED state.
@@ -121,29 +130,29 @@ public static ManagedKeyData retrieveKey(ManagedKeyProvider provider,
|| !keyData.getKeyMetadata().equals(keyMetadata)
|| keyData.getKeyState() == ManagedKeyState.DISABLED
) {
- throw new KeyException(
- "Invalid key that is null or having invalid metadata or state received from key provider "
- + "for (custodian: " + encKeyCust + ", namespace: " + keyNamespace
- + ") and metadata hash: "
- + ManagedKeyProvider.encodeToStr(ManagedKeyData.constructMetadataHash(keyMetadata)));
+ throw new KeyException("Invalid key received from key provider (null/metadata/state) for "
+ + formatScope(custNamespacePrefix.getCustodianEncoded(),
+ custNamespacePrefix.getNamespaceString())
+ + " and metadata hash: " + metadataHash);
+ }
+ if (
+ !Bytes.equals(custNamespacePrefix.copyCustodian(), keyData.getKeyCustodian())
+ || !Objects.equals(custNamespacePrefix.getNamespaceString(), keyData.getKeyNamespace())
+ ) {
+ throw new KeyException("Unwrapped key scope does not match request: request (custodian: "
+ + custNamespacePrefix.getCustodianEncoded() + ", namespace: "
+ + custNamespacePrefix.getNamespaceString() + "), got (custodian: "
+ + keyData.getKeyCustodianEncoded() + ", namespace: " + keyData.getKeyNamespace() + ")");
}
if (LOG.isInfoEnabled()) {
LOG.info(
"retrieveKey: got key with state: {} and metadata: {} for (custodian: {}, "
+ "namespace: {}) and metadata hash: {}",
- keyData.getKeyState(), keyData.getKeyMetadata(), encKeyCust, keyNamespace,
- ManagedKeyProvider.encodeToStr(ManagedKeyData.constructMetadataHash(keyMetadata)));
+ keyData.getKeyState(), keyData.getKeyMetadata(), custNamespacePrefix.getCustodianEncoded(),
+ custNamespacePrefix.getNamespaceString(), metadataHash);
}
if (accessor != null) {
- try {
- accessor.addKey(keyData);
- } catch (IOException e) {
- LOG.warn(
- "retrieveKey: Failed to add key to L2 for metadata hash: {}, for custodian: {}, "
- + "namespace: {}",
- ManagedKeyProvider.encodeToStr(ManagedKeyData.constructMetadataHash(keyMetadata)),
- encKeyCust, keyNamespace, e);
- }
+ accessor.addKey(keyData);
}
return keyData;
}
@@ -162,10 +171,10 @@ public static ManagedKeyData refreshKey(ManagedKeyProvider provider,
KeymetaTableAccessor accessor, ManagedKeyData keyData) throws IOException, KeyException {
if (LOG.isDebugEnabled()) {
LOG.debug(
- "refreshKey: entry with keyData state: {}, metadata hash: {} for (custodian: {}, "
+ "refreshKey: entry with keyData state: {}, partial identity: {} for (custodian: {}, "
+ "namespace: {})",
- keyData.getKeyState(), keyData.getKeyMetadataHashEncoded(),
- ManagedKeyProvider.encodeToStr(keyData.getKeyCustodian()), keyData.getKeyNamespace());
+ keyData.getKeyState(), keyData.getPartialIdentityEncoded(),
+ keyData.getKeyCustodianEncoded(), keyData.getKeyNamespace());
}
Preconditions.checkArgument(keyData.getKeyMetadata() != null,
@@ -176,56 +185,64 @@ public static ManagedKeyData refreshKey(ManagedKeyProvider provider,
// Refresh key using unwrapKey
ManagedKeyData newKeyData;
try {
- newKeyData = provider.unwrapKey(keyData.getKeyMetadata(), null);
+ newKeyData = provider.unwrapKey(keyData.getKeyIdentity(), keyData.getKeyMetadata(), null);
if (LOG.isDebugEnabled()) {
LOG.debug(
- "refreshKey: unwrapped key with state: {}, metadata hash: {} for (custodian: "
+ "refreshKey: unwrapped key with state: {}, partial identity: {} for (custodian: "
+ "{}, namespace: {})",
- newKeyData.getKeyState(), newKeyData.getKeyMetadataHashEncoded(),
- ManagedKeyProvider.encodeToStr(newKeyData.getKeyCustodian()),
- newKeyData.getKeyNamespace());
+ newKeyData.getKeyState(), newKeyData.getPartialIdentityEncoded(),
+ newKeyData.getKeyCustodianEncoded(), newKeyData.getKeyNamespace());
}
} catch (IOException e) {
LOG.warn("refreshKey: Failed to unwrap key for (custodian: {}, namespace: {})",
- ManagedKeyProvider.encodeToStr(keyData.getKeyCustodian()), keyData.getKeyNamespace(), e);
- newKeyData = new ManagedKeyData(keyData.getKeyCustodian(), keyData.getKeyNamespace(), null,
- ManagedKeyState.FAILED, keyData.getKeyMetadata());
+ keyData.getKeyCustodianEncoded(), keyData.getKeyNamespace(), e);
+ newKeyData = new ManagedKeyData(keyData.getKeyCustodian(), keyData.getKeyNamespaceBytes(),
+ null, ManagedKeyState.FAILED, keyData.getKeyMetadata());
}
// Validate metadata hasn't changed
if (!keyData.getKeyMetadata().equals(newKeyData.getKeyMetadata())) {
- throw new KeyException("Key metadata changed during refresh: current metadata hash: "
- + keyData.getKeyMetadataHashEncoded() + ", got metadata hash: "
- + newKeyData.getKeyMetadataHashEncoded() + " for (custodian: "
- + ManagedKeyProvider.encodeToStr(keyData.getKeyCustodian()) + ", namespace: "
- + keyData.getKeyNamespace() + ")");
+ throw new KeyException("Key metadata changed during refresh: current partial identity: "
+ + keyData.getPartialIdentityEncoded() + ", got partial identity: "
+ + newKeyData.getPartialIdentityEncoded() + " for "
+ + formatScope(keyData.getKeyCustodianEncoded(), keyData.getKeyNamespace()));
}
// Check if state changed
if (keyData.getKeyState() == newKeyData.getKeyState()) {
- // No change, return original
+ // No change, return original but update refresh timestamp in storage
+ if (accessor != null) {
+ accessor.updateRefreshTimestamp(keyData);
+ }
result = keyData;
} else if (newKeyData.getKeyState() == ManagedKeyState.FAILED) {
// Ignore if new state is FAILED, let us just keep the existing key data as is as this is
- // most likely a transitional issue with KMS.
+ // most likely a transient issue with KMS. Still update the refresh timestamp.
+ if (accessor != null) {
+ accessor.updateRefreshTimestamp(keyData);
+ }
result = keyData;
} else {
if (newKeyData.getKeyState().getExternalState() == ManagedKeyState.DISABLED) {
// Handle DISABLED state change specially.
- accessor.disableKey(keyData);
+ if (accessor != null) {
+ accessor.disableKey(keyData);
+ }
} else {
// Rest of the state changes are only ACTIVE and INACTIVE..
- accessor.updateActiveState(keyData, newKeyData.getKeyState());
+ if (accessor != null) {
+ accessor.updateActiveState(keyData, newKeyData.getKeyState(), false);
+ }
}
result = newKeyData;
}
if (LOG.isDebugEnabled()) {
LOG.debug(
- "refreshKey: completed with result state: {}, metadata hash: {} for (custodian: "
+ "refreshKey: completed with result state: {}, partial identity: {} for (custodian: "
+ "{}, namespace: {})",
- result.getKeyState(), result.getKeyMetadataHashEncoded(),
- ManagedKeyProvider.encodeToStr(result.getKeyCustodian()), result.getKeyNamespace());
+ result.getKeyState(), result.getPartialIdentityEncoded(), result.getKeyCustodianEncoded(),
+ result.getKeyNamespace());
}
return result;
@@ -242,44 +259,43 @@ public static ManagedKeyData refreshKey(ManagedKeyProvider provider,
* @throws KeyException if an error occurs
*/
public static ManagedKeyData rotateActiveKey(ManagedKeyProvider provider,
- KeymetaTableAccessor accessor, String encKeyCust, byte[] keyCust, String keyNamespace)
+ KeymetaTableAccessor accessor, String encKeyCust, ManagedKeyIdentity custNamespacePrefix)
throws IOException, KeyException {
// Get current active key
- ManagedKeyData currentActiveKey = accessor.getKeyManagementStateMarker(keyCust, keyNamespace);
+ ManagedKeyData currentActiveKey = accessor.getKeyManagementStateMarker(custNamespacePrefix);
if (currentActiveKey == null || currentActiveKey.getKeyState() != ManagedKeyState.ACTIVE) {
throw new IOException("No active key found, key management not yet enabled for (custodian: "
- + encKeyCust + ", namespace: " + keyNamespace + ") ?");
+ + encKeyCust + ", namespace: " + custNamespacePrefix.getNamespaceString() + ") ?");
}
// Retrieve a new key from the provider. We pass a null accessor to skip the default persistence
// because a failure to rotate shouldn't make the current active key invalid.
- ManagedKeyData newKey = retrieveActiveKey(provider, null,
- ManagedKeyProvider.encodeToStr(keyCust), keyCust, keyNamespace, currentActiveKey);
+ ManagedKeyData newKey =
+ retrieveActiveKey(provider, null, encKeyCust, custNamespacePrefix, currentActiveKey);
if (newKey == null || newKey.equals(currentActiveKey)) {
LOG.warn(
"rotateActiveKey: failed to retrieve new active key for (custodian: {}, namespace: {})",
- encKeyCust, keyNamespace);
+ encKeyCust, custNamespacePrefix.getNamespaceString());
return null;
}
// If rotation succeeds in generating a new active key, persist the new key and mark the current
// active key as inactive.
if (newKey.getKeyState() == ManagedKeyState.ACTIVE) {
- try {
- accessor.addKey(newKey);
- accessor.updateActiveState(currentActiveKey, ManagedKeyState.INACTIVE);
- return newKey;
- } catch (IOException e) {
- LOG.warn("rotateActiveKey: failed to persist new active key to L2 for (custodian: {}, "
- + "namespace: {})", encKeyCust, keyNamespace, e);
- return null;
- }
+ accessor.addKey(newKey);
+ accessor.updateActiveState(currentActiveKey, ManagedKeyState.INACTIVE, true);
+ return newKey;
} else {
LOG.warn(
- "rotateActiveKey: ignoring new key with state {} without metadata hash: {} for "
+ "rotateActiveKey: ignoring new key with state {} and partial identity: {} for "
+ "(custodian: {}, namespace: {})",
- newKey.getKeyState(), newKey.getKeyMetadataHashEncoded(), encKeyCust, keyNamespace);
+ newKey.getKeyState(), newKey.getPartialIdentityEncoded(), encKeyCust,
+ custNamespacePrefix.getNamespaceString());
return null;
}
}
+
+ private static String formatScope(String encodedCustodian, String keyNamespace) {
+ return "(custodian: " + encodedCustodian + ", namespace: " + keyNamespace + ")";
+ }
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyNamespaceUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyNamespaceUtil.java
deleted file mode 100644
index f1dc239cd76f..000000000000
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeyNamespaceUtil.java
+++ /dev/null
@@ -1,93 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.keymeta;
-
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
-import org.apache.hadoop.hbase.client.TableDescriptor;
-import org.apache.hadoop.hbase.regionserver.StoreContext;
-import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
-import org.apache.yetus.audience.InterfaceAudience;
-import org.apache.yetus.audience.InterfaceStability;
-
-import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
-
-/**
- * Utility class for constructing key namespaces used in key management operations.
- */
-@InterfaceAudience.Private
-@InterfaceStability.Evolving
-public final class KeyNamespaceUtil {
- private KeyNamespaceUtil() {
- throw new UnsupportedOperationException("Cannot instantiate utility class");
- }
-
- /**
- * Construct a key namespace from a table descriptor and column family descriptor.
- * @param tableDescriptor The table descriptor
- * @param family The column family descriptor
- * @return The constructed key namespace
- */
- public static String constructKeyNamespace(TableDescriptor tableDescriptor,
- ColumnFamilyDescriptor family) {
- return tableDescriptor.getTableName().getNameAsString() + "/" + family.getNameAsString();
- }
-
- /**
- * Construct a key namespace from a store context.
- * @param storeContext The store context
- * @return The constructed key namespace
- */
- public static String constructKeyNamespace(StoreContext storeContext) {
- return storeContext.getTableName().getNameAsString() + "/"
- + storeContext.getFamily().getNameAsString();
- }
-
- /**
- * Construct a key namespace by deriving table name and family name from a store file info.
- * @param fileInfo The store file info
- * @return The constructed key namespace
- */
- public static String constructKeyNamespace(StoreFileInfo fileInfo) {
- return constructKeyNamespace(
- fileInfo.isLink() ? fileInfo.getLink().getOriginPath() : fileInfo.getPath());
- }
-
- /**
- * Construct a key namespace by deriving table name and family name from a store file path.
- * @param path The path
- * @return The constructed key namespace
- */
- public static String constructKeyNamespace(Path path) {
- return constructKeyNamespace(path.getParent().getParent().getParent().getName(),
- path.getParent().getName());
- }
-
- /**
- * Construct a key namespace from a table name and family name.
- * @param tableName The table name
- * @param family The family name
- * @return The constructed key namespace
- */
- public static String constructKeyNamespace(String tableName, String family) {
- // Add precoditions for null check
- Preconditions.checkNotNull(tableName, "tableName should not be null");
- Preconditions.checkNotNull(family, "family should not be null");
- return tableName + "/" + family;
- }
-}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminImpl.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminImpl.java
index 33b7423fcba5..c65812091c5b 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminImpl.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaAdminImpl.java
@@ -24,32 +24,51 @@
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.AsyncAdmin;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
-import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
import org.apache.hadoop.hbase.master.MasterServices;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.FutureUtils;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+/**
+ * Implementation of the {@link KeymetaAdmin} interface that builds on top of
+ * {@link KeymetaTableAccessor} to provide a management interface that avoids going through the
+ * {@link KeymetaDataCache}.
+ */
@InterfaceAudience.Private
-public class KeymetaAdminImpl extends KeymetaTableAccessor implements KeymetaAdmin {
+public class KeymetaAdminImpl extends KeyManagementBase implements KeymetaAdmin {
private static final Logger LOG = LoggerFactory.getLogger(KeymetaAdminImpl.class);
+ private final Server server;
+ private final KeymetaTableAccessor accessor;
public KeymetaAdminImpl(Server server) {
- super(server);
+ this(server, new KeymetaTableAccessor(server));
+ }
+
+ public KeymetaAdminImpl(Server server, KeymetaTableAccessor accessor) {
+ super(server.getKeyManagementService());
+ this.server = server;
+ this.accessor = accessor;
+ }
+
+ protected Server getServer() {
+ return server;
}
@Override
public ManagedKeyData enableKeyManagement(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
assertKeyManagementEnabled();
- String encodedCust = ManagedKeyProvider.encodeToStr(keyCust);
+ ManagedKeyIdentity custNamespacePrefix =
+ new KeyIdentityPrefixBytesBacked(new Bytes(keyCust), new Bytes(Bytes.toBytes(keyNamespace)));
+ String encodedCust = custNamespacePrefix.getCustodianEncoded();
LOG.info("Trying to enable key management on custodian: {} under namespace: {}", encodedCust,
keyNamespace);
// Check if (cust, namespace) pair is already enabled and has an active key.
- ManagedKeyData markerKey = getKeyManagementStateMarker(keyCust, keyNamespace);
+ ManagedKeyData markerKey = accessor.getKeyManagementStateMarker(custNamespacePrefix);
if (markerKey != null && markerKey.getKeyState() == ManagedKeyState.ACTIVE) {
LOG.info(
"enableManagedKeys: specified (custodian: {}, namespace: {}) already has "
@@ -62,19 +81,21 @@ public ManagedKeyData enableKeyManagement(byte[] keyCust, String keyNamespace)
// Retrieve an active key from provider if this is the first time enabling key management or
// the previous attempt left it in a non-ACTIVE state. This may or may not succeed. When fails,
// it can leave the key management state in FAILED or DISABLED.
- return KeyManagementUtils.retrieveActiveKey(getKeyProvider(), this, encodedCust, keyCust,
- keyNamespace, null);
+ return KeyManagementUtils.retrieveActiveKey(getKeyProvider(), accessor, encodedCust,
+ custNamespacePrefix, null);
}
@Override
public List<ManagedKeyData> getManagedKeys(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
assertKeyManagementEnabled();
+ ManagedKeyIdentity custNamespacePrefix =
+ new KeyIdentityPrefixBytesBacked(new Bytes(keyCust), new Bytes(Bytes.toBytes(keyNamespace)));
if (LOG.isInfoEnabled()) {
LOG.info("Getting key statuses for custodian: {} under namespace: {}",
- ManagedKeyProvider.encodeToStr(keyCust), keyNamespace);
+ custNamespacePrefix.getCustodianEncoded(), keyNamespace);
}
- return getAllKeys(keyCust, keyNamespace, false);
+ return accessor.getAllKeys(custNamespacePrefix, false);
}
@Override
@@ -153,37 +174,63 @@ public void clearManagedKeyDataCache() throws IOException {
public ManagedKeyData disableKeyManagement(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
assertKeyManagementEnabled();
- String encodedCust = LOG.isInfoEnabled() ? ManagedKeyProvider.encodeToStr(keyCust) : null;
+ Bytes keyCustBytes = new Bytes(keyCust);
+ Bytes keyNamespaceBytes = new Bytes(Bytes.toBytes(keyNamespace));
+ ManagedKeyIdentity custNamespacePrefix =
+ new KeyIdentityPrefixBytesBacked(keyCustBytes, keyNamespaceBytes);
+ String encodedCust = LOG.isInfoEnabled() ? custNamespacePrefix.getCustodianEncoded() : null;
LOG.info("disableKeyManagement started for custodian: {} under namespace: {}", encodedCust,
keyNamespace);
- ManagedKeyData markerKey = getKeyManagementStateMarker(keyCust, keyNamespace);
- if (markerKey != null && markerKey.getKeyState() == ManagedKeyState.ACTIVE) {
- updateActiveState(markerKey, ManagedKeyState.INACTIVE);
- LOG.info("disableKeyManagement completed for custodian: {} under namespace: {}", encodedCust,
- keyNamespace);
+ // First add the DISABLED state marker so no new ACTIVE key is retrieved for this scope.
+ accessor.addKeyManagementStateMarker(custNamespacePrefix, ManagedKeyState.DISABLED);
+
+ // Enumerate all keys for this (cust, namespace) and mark each as INACTIVE,
+ // but without deleting the cust+namespace row so that the above update doesn't get cleared.
+ List<ManagedKeyData> allKeys = accessor.getAllKeys(custNamespacePrefix, false);
+ for (ManagedKeyData keyData : allKeys) {
+ if (
+ keyData.getKeyMetadata() == null
+ || keyData.getKeyState().getExternalState() == ManagedKeyState.DISABLED
+ ) {
+ continue;
+ }
+ LOG
+ .debug(
"disableKeyManagement: marking key as INACTIVE with partial identity: {} for custodian: {} "
+ + "under namespace: {}",
+ keyData.getPartialIdentityEncoded(), encodedCust, keyNamespace);
+ accessor.updateActiveState(keyData, ManagedKeyState.INACTIVE, true);
+ if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ ejectManagedKeyDataCacheEntry(keyCust, keyNamespace, keyData.getKeyMetadata());
+ }
}
- // Add key management state marker for the specified (keyCust, keyNamespace) combination
- addKeyManagementStateMarker(keyCust, keyNamespace, ManagedKeyState.DISABLED);
+ ManagedKeyData marker = accessor.getKeyManagementStateMarker(custNamespacePrefix);
+ // If read-back is null, return a synthetic DISABLED marker so callers never see null.
+ // Diagnostic logging above (KeymetaTableAccessor.getKey / parseFromResult) will indicate why.
+ if (marker == null) {
+ marker = new ManagedKeyData(custNamespacePrefix, ManagedKeyState.DISABLED);
+ }
LOG.info("disableKeyManagement completed for custodian: {} under namespace: {}", encodedCust,
keyNamespace);
- return getKeyManagementStateMarker(keyCust, keyNamespace);
+ return marker;
}
@Override
public ManagedKeyData disableManagedKey(byte[] keyCust, String keyNamespace,
byte[] keyMetadataHash) throws IOException, KeyException {
assertKeyManagementEnabled();
- String encodedCust = LOG.isInfoEnabled() ? ManagedKeyProvider.encodeToStr(keyCust) : null;
- String encodedHash =
- LOG.isInfoEnabled() ? ManagedKeyProvider.encodeToStr(keyMetadataHash) : null;
+ ManagedKeyIdentity fullKeyIdentity = new KeyIdentityBytesBacked(new Bytes(keyCust),
+ new Bytes(Bytes.toBytes(keyNamespace)), new Bytes(keyMetadataHash));
+ String encodedCust = LOG.isInfoEnabled() ? fullKeyIdentity.getCustodianEncoded() : null;
+ String encodedHash = LOG.isInfoEnabled() ? fullKeyIdentity.getPartialIdentityEncoded() : null;
LOG.info("Disabling managed key with metadata hash: {} for custodian: {} under namespace: {}",
encodedHash, encodedCust, keyNamespace);
// First retrieve the key to verify it exists and get the full metadata for cache ejection
- ManagedKeyData existingKey = getKey(keyCust, keyNamespace, keyMetadataHash);
+ ManagedKeyData existingKey = accessor.getKey(fullKeyIdentity);
if (existingKey == null) {
throw new IOException("Key not found for (custodian: " + encodedCust + ", namespace: "
+ keyNamespace + ") with metadata hash: " + encodedHash);
@@ -193,14 +240,14 @@ public ManagedKeyData disableManagedKey(byte[] keyCust, String keyNamespace,
+ ", namespace: " + keyNamespace + ") with metadata hash: " + encodedHash);
}
- disableKey(existingKey);
+ accessor.disableKey(existingKey);
// Eject from cache on all region servers (requires full metadata)
ejectManagedKeyDataCacheEntry(keyCust, keyNamespace, existingKey.getKeyMetadata());
LOG.info("Successfully disabled managed key with metadata hash: {} for custodian: {} under "
+ "namespace: {}", encodedHash, encodedCust, keyNamespace);
// Retrieve and return the disabled key
- ManagedKeyData disabledKey = getKey(keyCust, keyNamespace, keyMetadataHash);
+ ManagedKeyData disabledKey = accessor.getKey(fullKeyIdentity);
return disabledKey;
}
@@ -208,24 +255,51 @@ public ManagedKeyData disableManagedKey(byte[] keyCust, String keyNamespace,
public ManagedKeyData rotateManagedKey(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
assertKeyManagementEnabled();
- String encodedCust = ManagedKeyProvider.encodeToStr(keyCust);
+ ManagedKeyIdentity custNamespacePrefix =
+ new KeyIdentityPrefixBytesBacked(new Bytes(keyCust), new Bytes(Bytes.toBytes(keyNamespace)));
+ String encodedCust = custNamespacePrefix.getCustodianEncoded();
LOG.info("Rotating managed key for custodian: {} under namespace: {}", encodedCust,
keyNamespace);
// Attempt rotation
- return KeyManagementUtils.rotateActiveKey(getKeyProvider(), this, encodedCust, keyCust,
- keyNamespace);
+ return KeyManagementUtils.rotateActiveKey(getKeyProvider(), accessor, encodedCust,
+ custNamespacePrefix);
+ }
+
+ @Override
+ public ManagedKeyData setManagedKey(byte[] keyCust, String keyNamespace, String keyMetadata)
+ throws IOException, KeyException {
+ assertKeyManagementEnabled();
+ ManagedKeyIdentity fullKeyIdentity = ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(
+ new Bytes(keyCust), new Bytes(Bytes.toBytes(keyNamespace)), keyMetadata);
+ ManagedKeyIdentity custNamespacePrefix = fullKeyIdentity.getKeyIdentityPrefix();
+ if (LOG.isInfoEnabled()) {
+ LOG.info("setManagedKey for custodian: {} namespace: {} metadata hash (encoded): {}",
+ fullKeyIdentity.getCustodianEncoded(), keyNamespace,
+ fullKeyIdentity.getPartialIdentityEncoded());
+ }
+ ManagedKeyData currentActiveKey = accessor.getKeyManagementStateMarker(custNamespacePrefix);
+ ManagedKeyData keyData = KeyManagementUtils.retrieveKey(getKeyProvider(), accessor,
+ custNamespacePrefix, keyMetadata, null);
+ if (keyData.getKeyState() == ManagedKeyState.ACTIVE && currentActiveKey != null) {
+ ejectManagedKeyDataCacheEntry(keyCust, keyNamespace, currentActiveKey.getKeyMetadata());
+ }
+ LOG.info("setManagedKey completed for custodian: {} namespace: {}",
+ fullKeyIdentity.getCustodianEncoded(), keyNamespace);
+ return keyData;
}
@Override
public void refreshManagedKeys(byte[] keyCust, String keyNamespace)
throws IOException, KeyException {
assertKeyManagementEnabled();
- String encodedCust = ManagedKeyProvider.encodeToStr(keyCust);
+ ManagedKeyIdentity custNamespacePrefix =
+ new KeyIdentityPrefixBytesBacked(new Bytes(keyCust), new Bytes(Bytes.toBytes(keyNamespace)));
+ String encodedCust = LOG.isInfoEnabled() ? custNamespacePrefix.getCustodianEncoded() : null;
LOG.info("refreshManagedKeys started for custodian: {} under namespace: {}", encodedCust,
keyNamespace);
- ManagedKeyData markerKey = getKeyManagementStateMarker(keyCust, keyNamespace);
+ ManagedKeyData markerKey = accessor.getKeyManagementStateMarker(custNamespacePrefix);
if (markerKey != null && markerKey.getKeyState() == ManagedKeyState.DISABLED) {
LOG.info(
"refreshManagedKeys skipping since key management is disabled for custodian: {} under "
@@ -235,7 +309,7 @@ public void refreshManagedKeys(byte[] keyCust, String keyNamespace)
}
// First, get all keys for the specified custodian and namespace and refresh those that have a
// non-null metadata.
- List<ManagedKeyData> allKeys = getAllKeys(keyCust, keyNamespace, false);
+ List<ManagedKeyData> allKeys = accessor.getAllKeys(custNamespacePrefix, false);
IOException refreshException = null;
for (ManagedKeyData keyData : allKeys) {
if (keyData.getKeyMetadata() == null) {
@@ -244,15 +318,15 @@ public void refreshManagedKeys(byte[] keyCust, String keyNamespace)
LOG.debug(
"refreshManagedKeys: Refreshing key with metadata hash: {} for custodian: {} under "
+ "namespace: {} with state: {}",
- keyData.getKeyMetadataHashEncoded(), encodedCust, keyNamespace, keyData.getKeyState());
+ keyData.getPartialIdentityEncoded(), encodedCust, keyNamespace, keyData.getKeyState());
try {
ManagedKeyData refreshedKey =
- KeyManagementUtils.refreshKey(getKeyProvider(), this, keyData);
+ KeyManagementUtils.refreshKey(getKeyProvider(), accessor, keyData);
if (refreshedKey == keyData) {
LOG.debug(
"refreshManagedKeys: Key with metadata hash: {} for custodian: {} under "
+ "namespace: {} is unchanged",
- keyData.getKeyMetadataHashEncoded(), encodedCust, keyNamespace);
+ keyData.getPartialIdentityEncoded(), encodedCust, keyNamespace);
} else {
if (refreshedKey.getKeyState().getExternalState() == ManagedKeyState.DISABLED) {
LOG.info("refreshManagedKeys: Refreshed key is DISABLED, ejecting from cache");
@@ -261,14 +335,14 @@ public void refreshManagedKeys(byte[] keyCust, String keyNamespace)
LOG.info(
"refreshManagedKeys: Successfully refreshed key with metadata hash: {} for "
+ "custodian: {} under namespace: {}",
- refreshedKey.getKeyMetadataHashEncoded(), encodedCust, keyNamespace);
+ refreshedKey.getPartialIdentityEncoded(), encodedCust, keyNamespace);
}
}
} catch (IOException | KeyException e) {
LOG.error(
"refreshManagedKeys: Failed to refresh key with metadata hash: {} for custodian: {} "
+ "under namespace: {}",
- keyData.getKeyMetadataHashEncoded(), encodedCust, keyNamespace, e);
+ keyData.getPartialIdentityEncoded(), encodedCust, keyNamespace, e);
if (refreshException == null) {
refreshException = new IOException("Key refresh failed for (custodian: " + encodedCust
+ ", namespace: " + keyNamespace + ")", e);
@@ -281,9 +355,10 @@ public void refreshManagedKeys(byte[] keyCust, String keyNamespace)
}
if (markerKey != null && markerKey.getKeyState() == ManagedKeyState.FAILED) {
- LOG.info("refreshManagedKeys: Found FAILED marker for (custodian: " + encodedCust
- + ", namespace: " + keyNamespace
- + ") indicating previous attempt to enable, reattempting to enable key management");
+ LOG.info(
+ "refreshManagedKeys: Found FAILED marker for custodian: {} under namespace: {} "
+ + "indicating previous attempt to enable, reattempting to enable key management",
+ encodedCust, keyNamespace);
enableKeyManagement(keyCust, keyNamespace);
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaServiceEndpoint.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaServiceEndpoint.java
index 94539100aab1..1bddef052969 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaServiceEndpoint.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaServiceEndpoint.java
@@ -27,6 +27,7 @@
import org.apache.hadoop.hbase.coprocessor.HasMasterServices;
import org.apache.hadoop.hbase.coprocessor.MasterCoprocessor;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
import org.apache.hadoop.hbase.master.MasterServices;
import org.apache.yetus.audience.InterfaceAudience;
@@ -45,6 +46,7 @@
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyResponse;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyState;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.SetManagedKeyRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ManagedKeysProtos;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ManagedKeysProtos.ManagedKeysService;
@@ -114,8 +116,9 @@ public void enableKeyManagement(RpcController controller, ManagedKeyRequest requ
ManagedKeyData managedKeyState = master.getKeymetaAdmin()
.enableKeyManagement(request.getKeyCust().toByteArray(), request.getKeyNamespace());
response = generateKeyStateResponse(managedKeyState, builder);
- } catch (IOException | KeyException e) {
- CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ } catch (IOException | KeyException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("enableKeyManagement", e));
builder.setKeyState(ManagedKeyState.KEY_FAILED);
}
if (response == null) {
@@ -134,8 +137,9 @@ public void getManagedKeys(RpcController controller, ManagedKeyRequest request,
List<ManagedKeyData> managedKeyStates = master.getKeymetaAdmin()
.getManagedKeys(request.getKeyCust().toByteArray(), request.getKeyNamespace());
keyStateResponse = generateKeyStateResponse(managedKeyStates, builder);
- } catch (IOException | KeyException e) {
- CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ } catch (IOException | KeyException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("getManagedKeys", e));
}
if (keyStateResponse == null) {
keyStateResponse = GetManagedKeysResponse.getDefaultInstance();
@@ -156,8 +160,9 @@ public void rotateSTK(RpcController controller, EmptyMsg request,
boolean rotated;
try {
rotated = master.getKeymetaAdmin().rotateSTK();
- } catch (IOException e) {
- CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ } catch (IOException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("rotateSTK", e));
rotated = false;
}
done.run(BooleanMsg.newBuilder().setBoolMsg(rotated).build());
@@ -179,8 +184,9 @@ public void disableKeyManagement(RpcController controller, ManagedKeyRequest req
ManagedKeyData managedKeyState = master.getKeymetaAdmin()
.disableKeyManagement(request.getKeyCust().toByteArray(), request.getKeyNamespace());
response = generateKeyStateResponse(managedKeyState, builder);
- } catch (IOException | KeyException e) {
- CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ } catch (IOException | KeyException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("disableKeyManagement", e));
builder.setKeyState(ManagedKeyState.KEY_FAILED);
}
if (response == null) {
@@ -203,16 +209,17 @@ public void disableManagedKey(RpcController controller, ManagedKeyEntryRequest r
ManagedKeyResponse.Builder builder = ManagedKeyResponse.newBuilder();
try {
initManagedKeyResponseBuilder(controller, request.getKeyCustNs(), builder);
- // Convert hash to metadata by looking up the key first
- byte[] keyMetadataHash = request.getKeyMetadataHash().toByteArray();
+ // Convert partial identity to key lookup
+ byte[] partialIdentity = request.getKeyMetadataHash().toByteArray();
byte[] keyCust = request.getKeyCustNs().getKeyCust().toByteArray();
String keyNamespace = request.getKeyCustNs().getKeyNamespace();
ManagedKeyData managedKeyState =
- master.getKeymetaAdmin().disableManagedKey(keyCust, keyNamespace, keyMetadataHash);
+ master.getKeymetaAdmin().disableManagedKey(keyCust, keyNamespace, partialIdentity);
response = generateKeyStateResponse(managedKeyState, builder);
- } catch (IOException | KeyException e) {
- CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ } catch (IOException | KeyException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("disableManagedKey", e));
builder.setKeyState(ManagedKeyState.KEY_FAILED);
}
if (response == null) {
@@ -236,9 +243,15 @@ public void rotateManagedKey(RpcController controller, ManagedKeyRequest request
initManagedKeyResponseBuilder(controller, request, builder);
ManagedKeyData managedKeyState = master.getKeymetaAdmin()
.rotateManagedKey(request.getKeyCust().toByteArray(), request.getKeyNamespace());
+ if (managedKeyState == null) {
+ throw new IOException("Failed to rotate managed key for (custodian: "
+ + ManagedKeyProvider.encodeToStr(request.getKeyCust().toByteArray()) + ", namespace: "
+ + request.getKeyNamespace() + ")");
+ }
response = generateKeyStateResponse(managedKeyState, builder);
- } catch (IOException | KeyException e) {
- CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ } catch (IOException | KeyException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("rotateManagedKey", e));
builder.setKeyState(ManagedKeyState.KEY_FAILED);
}
if (response == null) {
@@ -261,11 +274,40 @@ public void refreshManagedKeys(RpcController controller, ManagedKeyRequest reque
initManagedKeyResponseBuilder(controller, request, ManagedKeyResponse.newBuilder());
master.getKeymetaAdmin().refreshManagedKeys(request.getKeyCust().toByteArray(),
request.getKeyNamespace());
- } catch (IOException | KeyException e) {
- CoprocessorRpcUtils.setControllerException(controller, new DoNotRetryIOException(e));
+ } catch (IOException | KeyException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("refreshManagedKeys", e));
}
done.run(EmptyMsg.getDefaultInstance());
}
+
+ /**
+ * Unwraps key metadata from the provider and persists it for the given custodian and namespace.
+ */
+ @Override
+ public void setManagedKey(RpcController controller, SetManagedKeyRequest request,
+ RpcCallback done) {
+ ManagedKeyResponse response = null;
+ ManagedKeyResponse.Builder builder = ManagedKeyResponse.newBuilder();
+ try {
+ initManagedKeyResponseBuilder(controller, request.getKeyCustNs(), builder);
+ if (request.getKeyMetadata().isEmpty()) {
+ throw new IOException("key_metadata must not be empty");
+ }
+ ManagedKeyData managedKeyState =
+ master.getKeymetaAdmin().setManagedKey(request.getKeyCustNs().getKeyCust().toByteArray(),
+ request.getKeyCustNs().getKeyNamespace(), request.getKeyMetadata());
+ response = generateKeyStateResponse(managedKeyState, builder);
+ } catch (IOException | KeyException | RuntimeException e) {
+ CoprocessorRpcUtils.setControllerException(controller,
+ toControllerException("setManagedKey", e));
+ builder.setKeyState(ManagedKeyState.KEY_FAILED);
+ }
+ if (response == null) {
+ response = builder.build();
+ }
+ done.run(response);
+ }
}
@InterfaceAudience.Private
@@ -301,12 +343,19 @@ private static ManagedKeyResponse generateKeyStateResponse(ManagedKeyData keyDat
.setRefreshTimestamp(keyData.getRefreshTimestamp())
.setKeyNamespace(keyData.getKeyNamespace());
- // Set metadata hash if available
- byte[] metadataHash = keyData.getKeyMetadataHash();
- if (metadataHash != null) {
- builder.setKeyMetadataHash(ByteString.copyFrom(metadataHash));
+ // Set partial identity (key_metadata_hash in proto) if available
+ if (keyData.getKeyIdentity().getPartialIdentityLength() > 0) {
+ builder.setKeyMetadataHash(ByteString.copyFrom(keyData.getPartialIdentity()));
}
return builder.build();
}
+
+ private static DoNotRetryIOException toControllerException(String operation, Exception e) {
+ if (e instanceof RuntimeException) {
+ return new DoNotRetryIOException(
+ operation + " failed with unexpected runtime exception in KeymetaServiceEndpoint", e);
+ }
+ return new DoNotRetryIOException(e);
+ }
}
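Reviewer note: the `toControllerException` pattern above (annotating unexpected `RuntimeException`s with the failing operation's name before surfacing them as non-retriable errors) can be exercised outside HBase with a small stand-in. In this sketch a plain `IOException` replaces `DoNotRetryIOException`, and the class/method names are illustrative, not HBase's:

```java
import java.io.IOException;

/**
 * Minimal sketch of the exception-mapping idea: unexpected RuntimeExceptions are
 * wrapped with the failing operation's name so the client sees a non-retriable
 * error with context, while checked exceptions pass through without an extra message.
 */
public final class ControllerExceptionSketch {
  private ControllerExceptionSketch() {
  }

  public static IOException toClientException(String operation, Exception e) {
    if (e instanceof RuntimeException) {
      // Runtime exceptions indicate a server-side bug; annotate with the operation name.
      return new IOException(operation + " failed with unexpected runtime exception", e);
    }
    return new IOException(e);
  }
}
```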
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java
index 4a291ff39d89..aa9305bdc9d8 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/KeymetaTableAccessor.java
@@ -24,6 +24,7 @@
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
+import java.util.stream.Collectors;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.Server;
@@ -104,16 +105,18 @@ public Server getServer() {
/**
* Add the specified key to the keymeta table.
* @param keyData The key data.
- * @throws IOException when there is an underlying IOException.
+ * @throws IOException when there is an underlying IOException.
+ * @throws IllegalArgumentException if key custodian or key namespace exceeds max length (255).
*/
public void addKey(ManagedKeyData keyData) throws IOException {
assertKeyManagementEnabled();
List puts = new ArrayList<>(2);
if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
- puts.add(addMutationColumns(new Put(constructRowKeyForCustNamespace(keyData)), keyData));
+ puts.add(addMutationColumns(
+ new Put(keyData.getKeyIdentity().getIdentityPrefixView().copyBytesIfNecessary()), keyData));
}
- final Put putForMetadata =
- addMutationColumns(new Put(constructRowKeyForMetadata(keyData)), keyData);
+ final Put putForMetadata = addMutationColumns(
+ new Put(keyData.getKeyIdentity().getFullIdentityView().copyBytesIfNecessary()), keyData);
puts.add(putForMetadata);
Connection connection = getServer().getConnection();
try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
@@ -123,18 +126,17 @@ public void addKey(ManagedKeyData keyData) throws IOException {
/**
* Get all the keys for the specified keyCust and key_namespace.
- * @param keyCust The key custodian.
- * @param keyNamespace The namespace
- * @param includeMarkers Whether to include key management state markers in the result.
+ * @param fullKeyIdentity The full key identity.
+ * @param includeMarkers Whether to include key management state markers in the result.
* @return a list of key data, one for each key, can be empty when none were found.
* @throws IOException when there is an underlying IOException.
* @throws KeyException when there is an underlying KeyException.
*/
- public List getAllKeys(byte[] keyCust, String keyNamespace,
- boolean includeMarkers) throws IOException, KeyException {
+ public List getAllKeys(ManagedKeyIdentity fullKeyIdentity, boolean includeMarkers)
+ throws IOException, KeyException {
assertKeyManagementEnabled();
Connection connection = getServer().getConnection();
- byte[] prefixForScan = constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ byte[] prefixForScan = fullKeyIdentity.getIdentityPrefixView().copyBytesIfNecessary();
PrefixFilter prefixFilter = new PrefixFilter(prefixForScan);
Scan scan = new Scan();
scan.setFilter(prefixFilter);
@@ -145,12 +147,12 @@ public List getAllKeys(byte[] keyCust, String keyNamespace,
Set allKeys = new LinkedHashSet<>();
for (Result result : scanner) {
ManagedKeyData keyData =
- parseFromResult(getKeyManagementService(), keyCust, keyNamespace, result);
+ parseFromResult(getKeyManagementService(), fullKeyIdentity, result);
if (keyData != null && (includeMarkers || keyData.getKeyMetadata() != null)) {
allKeys.add(keyData);
}
}
- return allKeys.stream().toList();
+ return allKeys.stream().collect(Collectors.toList());
}
}
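Reviewer note: the `getAllKeys()` accumulation pattern above (a `LinkedHashSet` to drop duplicate scan results while preserving first-seen row order, snapshotted to a list with `collect(Collectors.toList())` so the code also builds on pre-Java-16 toolchains) can be sketched in isolation; the class name here is illustrative:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public final class ScanDedupSketch {
  private ScanDedupSketch() {
  }

  // Accumulate results in a LinkedHashSet to drop duplicates while keeping
  // first-seen order, then snapshot to a List. collect(Collectors.toList())
  // is used instead of Stream.toList(), which only exists on Java 16+.
  public static List<String> dedupInOrder(List<String> scanned) {
    Set<String> unique = new LinkedHashSet<>(scanned);
    return unique.stream().collect(Collectors.toList());
  }
}
```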
@@ -162,30 +164,29 @@ public List getAllKeys(byte[] keyCust, String keyNamespace,
* @throws IOException when there is an underlying IOException.
* @throws KeyException when there is an underlying KeyException.
*/
- public ManagedKeyData getKeyManagementStateMarker(byte[] keyCust, String keyNamespace)
+ public ManagedKeyData getKeyManagementStateMarker(ManagedKeyIdentity keyIdentity)
throws IOException, KeyException {
- return getKey(keyCust, keyNamespace, null);
+ Preconditions.checkArgument(keyIdentity.getPartialIdentityLength() == 0,
+ "keyIdentity must have no partial identity");
+ return getKey(keyIdentity);
}
/**
- * Get the specific key identified by keyCust, keyNamespace and keyMetadataHash.
- * @param keyCust The prefix.
- * @param keyNamespace The namespace.
- * @param keyMetadataHash The metadata hash.
+ * Get the specific key identified by the given full key identity.
+ * @param keyIdentity The full key identity.
* @return the key or {@code null}
* @throws IOException when there is an underlying IOException.
* @throws KeyException when there is an underlying KeyException.
*/
- public ManagedKeyData getKey(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash)
- throws IOException, KeyException {
+ public ManagedKeyData getKey(ManagedKeyIdentity keyIdentity) throws IOException, KeyException {
assertKeyManagementEnabled();
Connection connection = getServer().getConnection();
try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
- byte[] rowKey = keyMetadataHash != null
- ? constructRowKeyForMetadata(keyCust, keyNamespace, keyMetadataHash)
- : constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ byte[] rowKey = keyIdentity.getPartialIdentityLength() > 0
+ ? keyIdentity.getFullIdentityView().copyBytesIfNecessary()
+ : keyIdentity.getIdentityPrefixView().copyBytesIfNecessary();
Result result = table.get(new Get(rowKey));
- return parseFromResult(getKeyManagementService(), keyCust, keyNamespace, result);
+ return parseFromResult(getKeyManagementService(), keyIdentity, result);
}
}
@@ -195,24 +196,39 @@ public ManagedKeyData getKey(byte[] keyCust, String keyNamespace, byte[] keyMeta
* @throws IOException when there is an underlying IOException.
*/
public void disableKey(ManagedKeyData keyData) throws IOException {
+ disableKey(keyData, true);
+ }
+
+ /**
+ * Disables a key by removing the wrapped key and updating its state to DISABLED. When
+ * {@code deleteCustNamespaceRow} is true and the key is ACTIVE, the (custodian, namespace) row is
+ * deleted so no key is marked ACTIVE for that scope. When false, that row is left unchanged (e.g.
+ * when disabling all keys as part of disableKeyManagement, the DISABLED marker has already been
+ * written to that row).
+ * @param keyData The key data to disable.
+ * @param deleteCustNamespaceRow If true, delete the (custodian, namespace) row when key is
+ * ACTIVE; if false, skip that delete.
+ * @throws IOException when there is an underlying IOException.
+ */
+ public void disableKey(ManagedKeyData keyData, boolean deleteCustNamespaceRow)
+ throws IOException {
assertKeyManagementEnabled();
Preconditions.checkNotNull(keyData.getKeyMetadata(), "Key metadata cannot be null");
- byte[] keyCust = keyData.getKeyCustodian();
- String keyNamespace = keyData.getKeyNamespace();
- byte[] keyMetadataHash = keyData.getKeyMetadataHash();
List mutations = new ArrayList<>(3); // Max possible mutations.
- if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ if (deleteCustNamespaceRow && keyData.getKeyState() == ManagedKeyState.ACTIVE) {
// Delete the CustNamespace row
- byte[] rowKeyForCustNamespace = constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ byte[] rowKeyForCustNamespace =
+ keyData.getKeyIdentity().getIdentityPrefixView().copyBytesIfNecessary();
mutations.add(new Delete(rowKeyForCustNamespace).setDurability(Durability.SKIP_WAL)
.setPriority(HConstants.SYSTEMTABLE_QOS));
}
- // Update state to DISABLED and timestamp on Metadata row
- byte[] rowKeyForMetadata = constructRowKeyForMetadata(keyCust, keyNamespace, keyMetadataHash);
- addMutationsForKeyDisabled(mutations, rowKeyForMetadata, keyData.getKeyMetadata(),
+ // Update state to DISABLED and timestamp on Identity row
+ byte[] rowKeyForIdentity =
+ keyData.getKeyIdentity().getFullIdentityView().copyBytesIfNecessary();
+ addMutationsForKeyDisabled(mutations, rowKeyForIdentity, keyData.getKeyMetadata(),
keyData.getKeyState() == ManagedKeyState.ACTIVE
? ManagedKeyState.ACTIVE_DISABLED
: ManagedKeyState.INACTIVE_DISABLED,
@@ -227,6 +243,7 @@ public void disableKey(ManagedKeyData keyData) throws IOException {
}
}
+ // Also used for adding key management state marker.
private void addMutationsForKeyDisabled(List mutations, byte[] rowKey, String metadata,
ManagedKeyState targetState, ManagedKeyState currentState) {
Put put = new Put(rowKey);
@@ -243,6 +260,9 @@ private void addMutationsForKeyDisabled(List mutations, byte[] rowKey,
.addColumns(KEY_META_INFO_FAMILY, DEK_CHECKSUM_QUAL_BYTES)
.addColumns(KEY_META_INFO_FAMILY, DEK_WRAPPED_BY_STK_QUAL_BYTES)
.addColumns(KEY_META_INFO_FAMILY, STK_CHECKSUM_QUAL_BYTES);
+ if (metadata == null) {
+ deleteWrappedKey.addColumns(KEY_META_INFO_FAMILY, DEK_METADATA_QUAL_BYTES);
+ }
mutations.add(deleteWrappedKey);
}
}
@@ -257,13 +277,13 @@ private void addMutationsForKeyDisabled(List mutations, byte[] rowKey,
* @param state The key management state to add.
* @throws IOException when there is an underlying IOException.
*/
- public void addKeyManagementStateMarker(byte[] keyCust, String keyNamespace,
- ManagedKeyState state) throws IOException {
+ public void addKeyManagementStateMarker(ManagedKeyIdentity keyIdentity, ManagedKeyState state)
+ throws IOException {
assertKeyManagementEnabled();
Preconditions.checkArgument(ManagedKeyState.isKeyManagementState(state),
"State must be a key management state, got: " + state);
List mutations = new ArrayList<>(2);
- byte[] rowKey = constructRowKeyForCustNamespace(keyCust, keyNamespace);
+ byte[] rowKey = keyIdentity.getIdentityPrefixView().copyBytesIfNecessary();
addMutationsForKeyDisabled(mutations, rowKey, null, state, null);
Connection connection = getServer().getConnection();
try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
@@ -277,12 +297,22 @@ public void addKeyManagementStateMarker(byte[] keyCust, String keyNamespace,
/**
* Updates the state of a key to one of the ACTIVE or INACTIVE states. The current state can be
* any state, but if it is the same, it becomes a no-op.
- * @param keyData The key data.
- * @param newState The new state (must be ACTIVE or INACTIVE).
+ *
+ * When transitioning from ACTIVE to INACTIVE, this method normally deletes the (custodian,
+ * namespace) row so that no key is marked ACTIVE for that scope. During key rotation and
+ * disablement, the caller has already written the marker key before demoting the old key; in that
+ * case pass {@code true} for {@code skipActiveRowDelete} to skip the delete and avoid removing
+ * the new
+ * active key's row.
+ * @param keyData The key data.
+ * @param newState The new state (must be ACTIVE or INACTIVE).
+ * @param skipActiveRowDelete If true, do not delete the (custodian, namespace) row when
+ * transitioning from ACTIVE to INACTIVE. Use only when the caller has
+ * already updated that row with a new ACTIVE key (e.g. during
+ * rotateActiveKey) to avoid removing the new active key's row.
* @throws IOException when there is an underlying IOException.
*/
- public void updateActiveState(ManagedKeyData keyData, ManagedKeyState newState)
- throws IOException {
+ public void updateActiveState(ManagedKeyData keyData, ManagedKeyState newState,
+ boolean skipActiveRowDelete) throws IOException {
assertKeyManagementEnabled();
ManagedKeyState currentState = keyData.getKeyState();
@@ -298,27 +328,31 @@ public void updateActiveState(ManagedKeyData keyData, ManagedKeyState newState)
}
List mutations = new ArrayList<>(2);
- byte[] rowKeyForCustNamespace = constructRowKeyForCustNamespace(keyData);
- byte[] rowKeyForMetadata = constructRowKeyForMetadata(keyData);
+ Bytes rowKeyForCustNamespace = keyData.getKeyIdentity().getIdentityPrefixView();
// First take care of the active key specific row.
if (newState == ManagedKeyState.ACTIVE) {
// INACTIVE -> ACTIVE: Add CustNamespace row and update Metadata row
- mutations.add(addMutationColumns(new Put(rowKeyForCustNamespace), keyData));
+ mutations
+ .add(addMutationColumns(new Put(rowKeyForCustNamespace.copyBytesIfNecessary()), keyData));
}
- if (currentState == ManagedKeyState.ACTIVE) {
- mutations.add(new Delete(rowKeyForCustNamespace).setDurability(Durability.SKIP_WAL)
- .setPriority(HConstants.SYSTEMTABLE_QOS));
+ if (currentState == ManagedKeyState.ACTIVE && !skipActiveRowDelete) {
+ // ACTIVE -> INACTIVE: Delete CustNamespace row (unless it was already overwritten by
+ // a new ACTIVE key, e.g. during rotation).
+ mutations.add(new Delete(rowKeyForCustNamespace.copyBytesIfNecessary())
+ .setDurability(Durability.SKIP_WAL).setPriority(HConstants.SYSTEMTABLE_QOS));
}
- // Now take care of the key specific row (for point gets by metadata).
+ Bytes rowKeyForIdentity = keyData.getKeyIdentity().getFullIdentityView();
+ // Now take care of the key specific row (for point gets by identity).
if (!ManagedKeyState.isUsable(currentState)) {
// For DISABLED and FAILED keys, we don't expect cached key material, so add all columns
// similar to what addKey() does.
- mutations.add(addMutationColumns(new Put(rowKeyForMetadata), keyData));
+ mutations.add(addMutationColumns(new Put(rowKeyForIdentity.copyBytesIfNecessary()), keyData));
} else {
// We expect cached key material, so only update the state and timestamp columns.
- mutations.add(addMutationColumnsForState(new Put(rowKeyForMetadata), newState));
+ mutations.add(
+ addMutationColumnsForState(new Put(rowKeyForIdentity.copyBytesIfNecessary()), newState));
}
Connection connection = getServer().getConnection();
@@ -343,6 +377,40 @@ private Put addMutationColumnsForState(Put put, ManagedKeyState newState, long t
.addColumn(KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES, Bytes.toBytes(timestamp));
}
+ /**
+ * Updates only the refresh timestamp column for the given key to the current time. Used when a
+ * key is refreshed but unchanged, so that the stored timestamp reflects the last refresh time.
+ * @param keyData The key data whose refresh timestamp should be updated.
+ * @throws IOException when there is an underlying IOException.
+ */
+ public void updateRefreshTimestamp(ManagedKeyData keyData) throws IOException {
+ assertKeyManagementEnabled();
+ Preconditions.checkNotNull(keyData.getKeyMetadata(), "Key metadata cannot be null");
+ long now = EnvironmentEdgeManager.currentTime();
+ List mutations = new ArrayList<>(2);
+ byte[] rowKeyForIdentity =
+ keyData.getKeyIdentity().getFullIdentityView().copyBytesIfNecessary();
+ Put putMetadata = new Put(rowKeyForIdentity).setDurability(Durability.SKIP_WAL)
+ .setPriority(HConstants.SYSTEMTABLE_QOS)
+ .addColumn(KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES, Bytes.toBytes(now));
+ mutations.add(putMetadata);
+ if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ byte[] rowKeyForCustNamespace =
+ keyData.getKeyIdentity().getIdentityPrefixView().copyBytesIfNecessary();
+ Put putCustNamespace = new Put(rowKeyForCustNamespace).setDurability(Durability.SKIP_WAL)
+ .setPriority(HConstants.SYSTEMTABLE_QOS)
+ .addColumn(KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES, Bytes.toBytes(now));
+ mutations.add(putCustNamespace);
+ }
+ Connection connection = getServer().getConnection();
+ try (Table table = connection.getTable(KEY_META_TABLE_NAME)) {
+ table.batch(mutations, null);
+ } catch (InterruptedException e) {
+ Thread.currentThread().interrupt();
+ throw new IOException("Interrupted while updating refresh timestamp", e);
+ }
+ }
+
/**
* Add the mutation columns to the given Put that are derived from the keyData.
*/
@@ -357,7 +425,7 @@ private Put addMutationColumns(Put put, ManagedKeyData keyData) throws IOExcepti
Bytes.toBytes(keyData.getKeyChecksum()))
.addColumn(KEY_META_INFO_FAMILY, DEK_WRAPPED_BY_STK_QUAL_BYTES, dekWrappedBySTK)
.addColumn(KEY_META_INFO_FAMILY, STK_CHECKSUM_QUAL_BYTES,
- Bytes.toBytes(latestSystemKey.getKeyChecksum()));
+ latestSystemKey.getKeyIdentity().getFullIdentityView().copyBytesIfNecessary());
}
Put result =
addMutationColumnsForState(put, keyData.getKeyState(), keyData.getRefreshTimestamp());
@@ -371,34 +439,15 @@ private Put addMutationColumns(Put put, ManagedKeyData keyData) throws IOExcepti
return result;
}
- @InterfaceAudience.Private
- public static byte[] constructRowKeyForMetadata(ManagedKeyData keyData) {
- Preconditions.checkNotNull(keyData.getKeyMetadata(), "Key metadata cannot be null");
- return constructRowKeyForMetadata(keyData.getKeyCustodian(), keyData.getKeyNamespace(),
- keyData.getKeyMetadataHash());
- }
-
- @InterfaceAudience.Private
- public static byte[] constructRowKeyForMetadata(byte[] keyCust, String keyNamespace,
- byte[] keyMetadataHash) {
- return Bytes.add(constructRowKeyForCustNamespace(keyCust, keyNamespace), keyMetadataHash);
- }
-
- @InterfaceAudience.Private
- public static byte[] constructRowKeyForCustNamespace(ManagedKeyData keyData) {
- return constructRowKeyForCustNamespace(keyData.getKeyCustodian(), keyData.getKeyNamespace());
- }
-
- @InterfaceAudience.Private
- public static byte[] constructRowKeyForCustNamespace(byte[] keyCust, String keyNamespace) {
- int custLength = keyCust.length;
- return Bytes.add(Bytes.toBytes(custLength), keyCust, Bytes.toBytes(keyNamespace));
- }
-
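Reviewer note: the removed helpers fixed the keymeta row-key layout that `ManagedKeyIdentity`'s prefix/full views now encapsulate. A rough self-contained sketch of that layout follows; the 4-byte big-endian length prefix mirrors `Bytes.toBytes(int)` and the UTF-8 namespace encoding mirrors `Bytes.toBytes(String)`, but the exact encoding details are assumptions of this sketch, not HBase's implementation:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of the row-key layout: a 4-byte custodian length, the custodian bytes,
 * the namespace bytes, and (for the full identity row) the trailing
 * partial-identity / metadata-hash bytes.
 */
public final class KeymetaRowKeySketch {
  private KeymetaRowKeySketch() {
  }

  // Prefix row key: length-prefixed custodian followed by the namespace.
  public static byte[] prefix(byte[] custodian, String namespace) {
    byte[] ns = namespace.getBytes(StandardCharsets.UTF_8);
    return ByteBuffer.allocate(4 + custodian.length + ns.length)
      .putInt(custodian.length).put(custodian).put(ns).array();
  }

  // Full identity row key: the prefix plus the partial identity bytes.
  public static byte[] full(byte[] custodian, String namespace, byte[] partialIdentity) {
    byte[] p = prefix(custodian, namespace);
    return ByteBuffer.allocate(p.length + partialIdentity.length)
      .put(p).put(partialIdentity).array();
  }
}
```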
@InterfaceAudience.Private
public static ManagedKeyData parseFromResult(KeyManagementService keyManagementService,
- byte[] keyCust, String keyNamespace, Result result) throws IOException, KeyException {
+ ManagedKeyIdentity originalFullKeyIdentity, Result result) throws IOException, KeyException {
if (result == null || result.isEmpty()) {
+ LOG.warn(
+ "parseFromResult: returning null because result is null or empty (keyCust={}, keyCust length={}, "
+ + "keyNamespace={})",
+ originalFullKeyIdentity.getCustodianEncoded(), originalFullKeyIdentity.getCustodianLength(),
+ originalFullKeyIdentity.getNamespaceString());
return null;
}
ManagedKeyState keyState =
@@ -414,13 +463,16 @@ public static ManagedKeyData parseFromResult(KeyManagementService keyManagementS
}
Key dek = null;
if (dekWrappedByStk != null) {
- long stkChecksum =
- Bytes.toLong(result.getValue(KEY_META_INFO_FAMILY, STK_CHECKSUM_QUAL_BYTES));
+ byte[] stkIdentityBytes = result.getValue(KEY_META_INFO_FAMILY, STK_CHECKSUM_QUAL_BYTES);
ManagedKeyData clusterKey =
- keyManagementService.getSystemKeyCache().getSystemKeyByChecksum(stkChecksum);
+ keyManagementService.getSystemKeyCache().getSystemKeyByIdentity(stkIdentityBytes);
if (clusterKey == null) {
- LOG.error("Dropping key with metadata: {} as STK with checksum: {} is unavailable",
- dekMetadata, stkChecksum);
+ LOG.error(
+ "Dropping key with metadata: {} as STK with identity is unavailable "
+ + "(keyCust={}, keyCust length={}, keyNamespace={})",
+ dekMetadata, originalFullKeyIdentity.getCustodianEncoded(),
+ originalFullKeyIdentity.getCustodianLength(),
+ originalFullKeyIdentity.getNamespaceString());
return null;
}
dek = EncryptionUtil.unwrapKey(keyManagementService.getConfiguration(), null, dekWrappedByStk,
@@ -430,22 +482,40 @@ public static ManagedKeyData parseFromResult(KeyManagementService keyManagementS
Bytes.toLong(result.getValue(KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES));
ManagedKeyData dekKeyData;
if (dekMetadata != null) {
+ // We do not have a partial identity in the originalFullKeyIdentity in the following cases
+ // and we need to create a full identity from the metadata:
+ // - When it is a point get of an ACTIVE marker
+ // - When it is a scan for all keys and the key state is usable.
+ ManagedKeyIdentity fullKeyIdentity = originalFullKeyIdentity.getPartialIdentityLength() > 0
+ ? originalFullKeyIdentity
+ : ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(
+ originalFullKeyIdentity.getCustodianView(), originalFullKeyIdentity.getNamespaceView(),
+ dekMetadata);
dekKeyData =
- new ManagedKeyData(keyCust, keyNamespace, dek, keyState, dekMetadata, refreshedTimestamp);
+ new ManagedKeyData(fullKeyIdentity, dek, keyState, dekMetadata, refreshedTimestamp);
if (dek != null) {
long dekChecksum =
Bytes.toLong(result.getValue(KEY_META_INFO_FAMILY, DEK_CHECKSUM_QUAL_BYTES));
if (dekKeyData.getKeyChecksum() != dekChecksum) {
LOG.error(
"Dropping key, current key checksum: {} didn't match the expected checksum: {}"
- + " for key with metadata: {}",
- dekKeyData.getKeyChecksum(), dekChecksum, dekMetadata);
+ + " for key with metadata: {} (keyCust={}, keyCust length={}, keyNamespace={})",
+ dekKeyData.getKeyChecksum(), dekChecksum, dekMetadata,
+ fullKeyIdentity.getCustodianEncoded(), fullKeyIdentity.getCustodianLength(),
+ fullKeyIdentity.getNamespaceString());
dekKeyData = null;
}
}
} else {
- // Key management marker.
- dekKeyData = new ManagedKeyData(keyCust, keyNamespace, keyState, refreshedTimestamp);
+ if (originalFullKeyIdentity.getPartialIdentityLength() != 0) {
+ throw new IllegalArgumentException(
+ "Partial identity length must be 0 for key management marker, got: "
+ + originalFullKeyIdentity.getPartialIdentityLength() + " (keyCust="
+ + originalFullKeyIdentity.getCustodianEncoded() + ", keyCust length="
+ + originalFullKeyIdentity.getCustodianLength() + ", keyNamespace="
+ + originalFullKeyIdentity.getNamespaceString() + ")");
+ }
+ dekKeyData = new ManagedKeyData(originalFullKeyIdentity, keyState, refreshedTimestamp);
}
return dekKeyData;
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java
index f93706690ded..e3d0497db1f3 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/ManagedKeyDataCache.java
@@ -21,67 +21,37 @@
import com.github.benmanes.caffeine.cache.Caffeine;
import java.io.IOException;
import java.security.KeyException;
-import java.util.Objects;
import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicReference;
-import java.util.function.Function;
import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.HBaseInterfaceAudience;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
-import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+
/**
* In-memory cache for ManagedKeyData entries, using key metadata hash as the cache key. Uses two
* independent Caffeine caches: one for general key data and one for active keys only with
* hierarchical structure for efficient single key retrieval.
+ *
+ * Cache keys are {@link ManagedKeyIdentity}; any implementation (e.g.
+ * {@link KeyIdentityBytesBacked}, {@link KeyIdentitySingleArrayBacked}) is supported and
+ * interoperable: lookups match entries regardless of concrete type because equality and hashCode are
+ * content-based (see {@link ManagedKeyIdentity#contentEquals contentEquals}/
+ * {@link ManagedKeyIdentity#contentHashCode contentHashCode}).
*/
@InterfaceAudience.Private
public class ManagedKeyDataCache extends KeyManagementBase {
private static final Logger LOG = LoggerFactory.getLogger(ManagedKeyDataCache.class);
- private Cache cacheByMetadataHash; // Key is Bytes wrapper around hash
- private Cache activeKeysCache;
+ private Cache cacheByIdentity;
+ private Cache activeKeysCache;
private final KeymetaTableAccessor keymetaAccessor;
- /**
- * Composite key for active keys cache containing custodian and namespace. NOTE: Pair won't work
- * out of the box because it won't work with byte[] as is.
- */
- @InterfaceAudience.LimitedPrivate({ HBaseInterfaceAudience.UNITTEST })
- public static class ActiveKeysCacheKey {
- private final byte[] custodian;
- private final String namespace;
-
- public ActiveKeysCacheKey(byte[] custodian, String namespace) {
- this.custodian = custodian;
- this.namespace = namespace;
- }
-
- @Override
- public boolean equals(Object obj) {
- if (this == obj) {
- return true;
- }
- if (obj == null || getClass() != obj.getClass()) {
- return false;
- }
- ActiveKeysCacheKey cacheKey = (ActiveKeysCacheKey) obj;
- return Bytes.equals(custodian, cacheKey.custodian)
- && Objects.equals(namespace, cacheKey.namespace);
- }
-
- @Override
- public int hashCode() {
- return Objects.hash(Bytes.hashCode(custodian), namespace);
- }
- }
-
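Reviewer note: the removed `ActiveKeysCacheKey` existed because `byte[]` has identity-based `equals`/`hashCode` and so cannot key a map or cache directly; the replacement `ManagedKeyIdentity` keys rely on the same content-based-equality idea. A self-contained illustration (names here are illustrative, not HBase's):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public final class ByteArrayKeySketch {
  // byte[] uses identity-based equals/hashCode, so two equal-content arrays are
  // distinct HashMap keys; a wrapper with content-based equality fixes this.
  static final class ContentKey {
    private final byte[] bytes;

    ContentKey(byte[] bytes) {
      this.bytes = bytes;
    }

    @Override
    public boolean equals(Object o) {
      return o instanceof ContentKey && Arrays.equals(bytes, ((ContentKey) o).bytes);
    }

    @Override
    public int hashCode() {
      return Arrays.hashCode(bytes);
    }
  }

  // Lookup with a different array of equal content misses the entry.
  public static boolean rawLookupHits(byte[] a, byte[] b) {
    Map<byte[], String> m = new HashMap<>();
    m.put(a, "v");
    return m.containsKey(b);
  }

  // Wrapping both arrays makes the lookup hit on content equality.
  public static boolean wrappedLookupHits(byte[] a, byte[] b) {
    Map<ContentKey, String> m = new HashMap<>();
    m.put(new ContentKey(a), "v");
    return m.containsKey(new ContentKey(b));
  }
}
```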
/**
* Constructs the ManagedKeyDataCache with the given configuration and keymeta accessor. When
* keymetaAccessor is null, L2 lookup is disabled and dynamic lookup is enabled.
@@ -100,100 +70,95 @@ public ManagedKeyDataCache(Configuration conf, KeymetaTableAccessor keymetaAcces
int activeKeysMaxEntries =
conf.getInt(HConstants.CRYPTO_MANAGED_KEYS_L1_ACTIVE_CACHE_MAX_NS_ENTRIES_CONF_KEY,
HConstants.CRYPTO_MANAGED_KEYS_L1_ACTIVE_CACHE_MAX_NS_ENTRIES_DEFAULT);
- this.cacheByMetadataHash = Caffeine.newBuilder().maximumSize(maxEntries).build();
+ this.cacheByIdentity = Caffeine.newBuilder().maximumSize(maxEntries).build();
this.activeKeysCache = Caffeine.newBuilder().maximumSize(activeKeysMaxEntries).build();
}
/**
- * Retrieves an entry from the cache, if it already exists, otherwise a null is returned. No
- * attempt will be made to load from L2 or provider.
- * @return the corresponding ManagedKeyData entry, or null if not found
- */
- public ManagedKeyData getEntry(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash)
- throws IOException, KeyException {
- Bytes metadataHashKey = new Bytes(keyMetadataHash);
- // Return the entry if it exists in the generic cache or active keys cache, otherwise return
- // null.
- ManagedKeyData entry = cacheByMetadataHash.get(metadataHashKey, hashKey -> {
- return getFromActiveKeysCache(keyCust, keyNamespace, keyMetadataHash);
- });
- return entry;
- }
-
- /**
- * Retrieves an entry from the cache, loading it from L2 if KeymetaTableAccessor is available.
- * When L2 is not available, it will try to load from provider, unless dynamic lookup is disabled.
- * @param keyCust the key custodian
- * @param keyNamespace the key namespace
- * @param keyMetadata the key metadata of the entry to be retrieved
- * @param wrappedKey The DEK key material encrypted with the corresponding KEK, if available.
+ * Retrieves an entry from the cache, loading from L2 or provider if not found. At least one of
+ * partialIdentity or keyMetadata must be non-null. When keyMetadata is null, the provider is not
+ * consulted and resolution is from cache and L2 only. When no entry is found and keyMetadata is
+ * null, this method returns null and does not cache a placeholder so a future call can retry.
+ *
+ * Uses Caffeine's get() so that the result is always cached in cacheByIdentity, whether the
+ * entry was found in the active-key cache, L2, or provider.
+ * @param fullKeyIdentity the full key identity
+ * @param keyMetadata the key metadata string, or null if partialIdentity is provided
+ * @param wrappedKey the DEK key material encrypted with the corresponding KEK, if available
* @return the corresponding ManagedKeyData entry, or null if not found
* @throws IOException if an error occurs while loading from KeymetaTableAccessor
* @throws KeyException if an error occurs while loading from KeymetaTableAccessor
*/
- public ManagedKeyData getEntry(byte[] keyCust, String keyNamespace, String keyMetadata,
+ public ManagedKeyData getEntry(ManagedKeyIdentity fullKeyIdentity, String keyMetadata,
byte[] wrappedKey) throws IOException, KeyException {
- // Compute hash and use it as cache key
- byte[] metadataHashBytes = ManagedKeyData.constructMetadataHash(keyMetadata);
- Bytes metadataHashKey = new Bytes(metadataHashBytes);
+ Preconditions.checkArgument(
+ fullKeyIdentity.getPartialIdentityLength() > 0 || keyMetadata != null,
+ "At least one of partialIdentity or keyMetadata must be provided");
- ManagedKeyData entry = cacheByMetadataHash.get(metadataHashKey, hashKey -> {
- // First check if it's in the active keys cache
- ManagedKeyData keyData = getFromActiveKeysCache(keyCust, keyNamespace, metadataHashBytes);
+ if (fullKeyIdentity.getPartialIdentityLength() <= 0) {
+ fullKeyIdentity = ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(
+ fullKeyIdentity.getCustodianView(), fullKeyIdentity.getNamespaceView(), keyMetadata);
+ }
+
+ ManagedKeyData entry = cacheByIdentity.getIfPresent(fullKeyIdentity);
+ if (entry != null) {
+ // Treat FAILED as "not found" so callers get null when L2 failed or key was missing
+ return entry.getKeyState() == ManagedKeyState.FAILED ? null : entry;
+ }
- // Try to load from L2
- if (keyData == null && keymetaAccessor != null) {
+ // Technically we don't need to clone fullKeyIdentity, as all existing execution paths ensure
+ // that the underlying byte[] arrays are not reused; cloning guards against hard-to-track bugs
+ // should a new code path ever violate that practice. Since the clone is lazy, most of the
+ // extra cost is avoided anyway.
+ fullKeyIdentity = fullKeyIdentity.clone();
+ entry = cacheByIdentity.get(fullKeyIdentity, keyIdentity -> {
+ // Try active keys cache first (same as L2/provider path so result is cached via get())
+ ManagedKeyData keyData = getFromActiveKeysCache(keyIdentity);
+ if (keyData != null) {
+ return keyData;
+ }
+
+ // L2 + provider
+ if (keymetaAccessor != null) {
try {
- keyData = keymetaAccessor.getKey(keyCust, keyNamespace, metadataHashBytes);
- } catch (IOException | KeyException e) {
+ keyData = keymetaAccessor.getKey(keyIdentity);
+ } catch (IOException | KeyException | RuntimeException e) {
LOG.warn(
- "Failed to load key from L2 for (custodian: {}, namespace: {}) with metadata hash: {}",
- ManagedKeyProvider.encodeToStr(keyCust), keyNamespace,
- ManagedKeyProvider.encodeToStr(metadataHashBytes), e);
+ "Failed to load key from L2 for (custodian: {}, namespace: {}) with partial identity: {}",
+ keyIdentity.getCustodianEncoded(), keyIdentity.getNamespaceString(),
+ keyIdentity.getPartialIdentityEncoded(), e);
}
}
- // If not found in L2 and dynamic lookup is enabled, try with Key Provider
- if (keyData == null && isDynamicLookupEnabled()) {
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
+ if (keyData == null && isDynamicLookupEnabled() && keyMetadata != null) {
try {
- keyData = KeyManagementUtils.retrieveKey(getKeyProvider(), keymetaAccessor, encKeyCust,
- keyCust, keyNamespace, keyMetadata, wrappedKey);
+ keyData = KeyManagementUtils.retrieveKey(getKeyProvider(), keymetaAccessor, keyIdentity,
+ keyMetadata, wrappedKey);
} catch (IOException | KeyException | RuntimeException e) {
LOG.warn(
"Failed to retrieve key from provider for (custodian: {}, namespace: {}) with "
- + "metadata hash: {}", ManagedKeyProvider.encodeToStr(keyCust), keyNamespace,
- ManagedKeyProvider.encodeToStr(metadataHashBytes), e);
+ + "partial identity: {}",
+ keyIdentity.getCustodianEncoded(), keyIdentity.getNamespaceString(),
+ keyIdentity.getPartialIdentityEncoded(), e);
}
}
if (keyData == null) {
- keyData =
- new ManagedKeyData(keyCust, keyNamespace, null, ManagedKeyState.FAILED, keyMetadata);
+ if (keyMetadata == null) {
+ return null;
+ }
+ keyData = new ManagedKeyData(keyIdentity, null, ManagedKeyState.FAILED, keyMetadata);
}
// Also update activeKeysCache if relevant and is missing.
if (keyData.getKeyState() == ManagedKeyState.ACTIVE) {
- activeKeysCache.asMap().putIfAbsent(new ActiveKeysCacheKey(keyCust, keyNamespace), keyData);
+ activeKeysCache.asMap().putIfAbsent(new KeyIdentityPrefixBytesBacked(
+ keyIdentity.getCustodianView(), keyIdentity.getNamespaceView()), keyData);
}
-
return keyData;
});
-
- // Verify custodian/namespace match to guard against hash collisions
if (entry != null && ManagedKeyState.isUsable(entry.getKeyState())) {
- if (
- Bytes.equals(entry.getKeyCustodian(), keyCust)
- && entry.getKeyNamespace().equals(keyNamespace)
- ) {
- return entry;
- }
- LOG.warn(
- "Hash collision or incorrect/mismatched custodian/namespace detected for metadata hash: "
- + "{} - custodian/namespace mismatch expected: ({}, {}), actual: ({}, {})",
- ManagedKeyProvider.encodeToStr(metadataHashBytes), ManagedKeyProvider.encodeToStr(keyCust),
- keyNamespace, ManagedKeyProvider.encodeToStr(entry.getKeyCustodian()),
- entry.getKeyNamespace());
+ return entry;
}
return null;
}
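The load-through pattern above (consult the active-key cache, then L2, then the provider, caching even a FAILED placeholder so repeated misses stay cheap) can be sketched with a plain `ConcurrentHashMap` standing in for Caffeine; all names below are illustrative, not the HBase API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative load-through cache; FAILED placeholders make repeated misses cheap. */
class LoadThroughCacheDemo {
  enum State { ACTIVE, FAILED }
  record KeyData(String identity, State state) {}

  private final Map<String, KeyData> cache = new ConcurrentHashMap<>();

  /** Simulates L2/provider resolution; returns null when the key is unknown. */
  private KeyData resolve(String identity) {
    return identity.startsWith("known:") ? new KeyData(identity, State.ACTIVE) : null;
  }

  /** Caches a FAILED marker on miss so later callers see null without re-resolving. */
  KeyData getEntry(String identity) {
    KeyData entry = cache.computeIfAbsent(identity, id -> {
      KeyData resolved = resolve(id);
      // Loader never returns null, so the (possibly FAILED) result is always cached.
      return resolved != null ? resolved : new KeyData(id, State.FAILED);
    });
    return entry.state() == State.FAILED ? null : entry;
  }

  int size() {
    return cache.size();
  }
}
```

As in the patch, a FAILED placeholder keyed by a metadata-bearing identity is cached, while a pure identity lookup that finds nothing returns null without caching, so a later call can retry.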
@@ -202,65 +167,48 @@ public ManagedKeyData getEntry(byte[] keyCust, String keyNamespace, String keyMe
* Retrieves an existing key from the active keys cache.
* @param keyCust the key custodian
* @param keyNamespace the key namespace
- * @param keyMetadataHash the key metadata hash
+ * @param partialIdentity the partial identity (digest of key metadata)
* @return the ManagedKeyData if found, null otherwise
*/
- private ManagedKeyData getFromActiveKeysCache(byte[] keyCust, String keyNamespace,
- byte[] keyMetadataHash) {
- ActiveKeysCacheKey cacheKey = new ActiveKeysCacheKey(keyCust, keyNamespace);
+ private ManagedKeyData getFromActiveKeysCache(ManagedKeyIdentity fullKeyIdentity) {
+ KeyIdentityPrefixBytesBacked cacheKey = new KeyIdentityPrefixBytesBacked(
+ fullKeyIdentity.getCustodianView(), fullKeyIdentity.getNamespaceView());
ManagedKeyData keyData = activeKeysCache.getIfPresent(cacheKey);
- if (keyData != null && Bytes.equals(keyData.getKeyMetadataHash(), keyMetadataHash)) {
+ if (
+ keyData != null
+ && fullKeyIdentity.getPartialIdentityView().equals(keyData.getPartialIdentity())
+ ) {
return keyData;
}
return null;
}
/**
- * Eject the key identified by the given custodian, namespace and metadata from both the active
- * keys cache and the generic cache.
- * @param keyCust the key custodian
- * @param keyNamespace the key namespace
- * @param keyMetadataHash the key metadata hash
- * @return true if the key was ejected from either cache, false otherwise
+ * Ejects the key with specified full key identity from all the caches. Ejects from the active
+ * keys cache if a key exists for the specified prefix and the full identity matches.
+ * @param fullKeyIdentity the full key identity
+ * @return true if the key was ejected from either cache, false otherwise
*/
- public boolean ejectKey(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash) {
- Bytes keyMetadataHashKey = new Bytes(keyMetadataHash);
- ActiveKeysCacheKey cacheKey = new ActiveKeysCacheKey(keyCust, keyNamespace);
+ public boolean ejectKey(ManagedKeyIdentity fullKeyIdentity) {
+ KeyIdentityPrefixBytesBacked cacheKey = new KeyIdentityPrefixBytesBacked(
+ fullKeyIdentity.getCustodianView(), fullKeyIdentity.getNamespaceView());
AtomicBoolean ejected = new AtomicBoolean(false);
- AtomicReference<ManagedKeyData> rejectedValue = new AtomicReference<>(null);
- Function<ManagedKeyData, ManagedKeyData> conditionalCompute = (value) -> {
- if (rejectedValue.get() != null) {
- return value;
- }
- if (
- Bytes.equals(value.getKeyMetadataHash(), keyMetadataHash)
- && Bytes.equals(value.getKeyCustodian(), keyCust)
- && value.getKeyNamespace().equals(keyNamespace)
- ) {
+ // Eject from active keys cache only when partial identity matches.
+ // Custodian and namespace are already matched by computeIfPresent(cacheKey, ...).
+ activeKeysCache.asMap().computeIfPresent(cacheKey, (key, value) -> {
+ if (fullKeyIdentity.comparePartialIdentity(value.getPartialIdentity()) == 0) {
ejected.set(true);
return null;
}
- rejectedValue.set(value);
return value;
- };
-
- // Try to eject from active keys cache by matching hash with collision check
- activeKeysCache.asMap().computeIfPresent(cacheKey,
- (key, value) -> conditionalCompute.apply(value));
-
- // Also remove from generic cache by hash, with collision check
- cacheByMetadataHash.asMap().computeIfPresent(keyMetadataHashKey,
- (hash, value) -> conditionalCompute.apply(value));
+ });
- if (rejectedValue.get() != null) {
- LOG.warn(
- "Hash collision or incorrect/mismatched custodian/namespace detected for metadata "
- + "hash: {} - custodian/namespace mismatch expected: ({}, {}), actual: ({}, {})",
- ManagedKeyProvider.encodeToStr(keyMetadataHash), ManagedKeyProvider.encodeToStr(keyCust),
- keyNamespace, ManagedKeyProvider.encodeToStr(rejectedValue.get().getKeyCustodian()),
- rejectedValue.get().getKeyNamespace());
- }
+ // Also remove from the generic cache by full identity; no collision check is needed here.
+ cacheByIdentity.asMap().computeIfPresent(fullKeyIdentity, (hash, value) -> {
+ ejected.set(true);
+ return null;
+ });
return ejected.get();
}
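The conditional eviction above relies on `Map.computeIfPresent` returning null to remove an entry atomically, and only when the stored value matches. A minimal stdlib sketch of the same idiom (names hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

/** Atomically removes a cache entry only when its value matches an expected token. */
class ConditionalEvictDemo {
  private final Map<String, String> cache = new ConcurrentHashMap<>();

  void put(String key, String token) {
    cache.put(key, token);
  }

  /** Returns true iff the entry existed with the expected token and was removed. */
  boolean ejectIfMatches(String key, String expectedToken) {
    AtomicBoolean ejected = new AtomicBoolean(false);
    cache.computeIfPresent(key, (k, v) -> {
      if (v.equals(expectedToken)) {
        ejected.set(true);
        return null; // returning null removes the mapping atomically
      }
      return v; // keep the existing value on mismatch
    });
    return ejected.get();
  }
}
```

The remapping function runs under the map's internal lock for that key, so the match-then-remove is race-free without explicit synchronization.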
@@ -269,7 +217,7 @@ public boolean ejectKey(byte[] keyCust, String keyNamespace, byte[] keyMetadataH
* Clear all the cached entries.
*/
public void clearCache() {
- cacheByMetadataHash.invalidateAll();
+ cacheByIdentity.invalidateAll();
activeKeysCache.invalidateAll();
}
@@ -278,7 +226,7 @@ public void clearCache() {
* by key metadata hash.
*/
public int getGenericCacheEntryCount() {
- return (int) cacheByMetadataHash.estimatedSize();
+ return (int) cacheByIdentity.estimatedSize();
}
/** Returns the approximate number of entries in the active keys cache */
@@ -290,23 +238,30 @@ public int getActiveCacheEntryCount() {
* Retrieves the active entry from the cache based on its key custodian and key namespace. This
* method also loads active keys from provider if not found in cache.
* @param keyCust The key custodian.
- * @param keyNamespace the key namespace to search for
+ * @param keyNamespace the key namespace to search for (as Bytes)
* @return the ManagedKeyData entry with the given custodian and ACTIVE status, or null if not
* found
*/
- public ManagedKeyData getActiveEntry(byte[] keyCust, String keyNamespace) {
- ActiveKeysCacheKey cacheKey = new ActiveKeysCacheKey(keyCust, keyNamespace);
+ public ManagedKeyData getActiveEntry(final Bytes keyCust, final Bytes keyNamespace) {
+ ManagedKeyIdentity cacheKey = new KeyIdentityPrefixBytesBacked(keyCust, keyNamespace);
+ ManagedKeyData keyData = activeKeysCache.getIfPresent(cacheKey);
+ if (keyData != null && keyData.getKeyState() == ManagedKeyState.ACTIVE) {
+ return keyData;
+ }
+
+ // Lazily clone the key custodian and namespace for long-term storage in the cache.
+ cacheKey = new KeyIdentityPrefixBytesBacked(keyCust.clone(), keyNamespace.clone());
- ManagedKeyData keyData = activeKeysCache.get(cacheKey, key -> {
+ keyData = activeKeysCache.get(cacheKey, key -> {
ManagedKeyData retrievedKey = null;
// Try to load from KeymetaTableAccessor if not found in cache
if (keymetaAccessor != null) {
try {
- retrievedKey = keymetaAccessor.getKeyManagementStateMarker(keyCust, keyNamespace);
+ retrievedKey = keymetaAccessor.getKeyManagementStateMarker(key);
} catch (IOException | KeyException | RuntimeException e) {
LOG.warn("Failed to load active key from KeymetaTableAccessor for custodian: {} "
- + "namespace: {}", ManagedKeyProvider.encodeToStr(keyCust), keyNamespace, e);
+ + "namespace: {}", key.getCustodianEncoded(), key.getNamespaceString(), e);
}
}
@@ -314,17 +269,16 @@ public ManagedKeyData getActiveEntry(byte[] keyCust, String keyNamespace) {
// standalone tools.
if (retrievedKey == null && isDynamicLookupEnabled()) {
try {
- String keyCustEnc = ManagedKeyProvider.encodeToStr(keyCust);
retrievedKey = KeyManagementUtils.retrieveActiveKey(getKeyProvider(), keymetaAccessor,
- keyCustEnc, keyCust, keyNamespace, null);
+ key.getCustodianEncoded(), key, null);
} catch (IOException | KeyException | RuntimeException e) {
LOG.warn("Failed to load active key from provider for custodian: {} namespace: {}",
- ManagedKeyProvider.encodeToStr(keyCust), keyNamespace, e);
+ key.getCustodianEncoded(), key.getNamespaceString(), e);
}
}
if (retrievedKey == null) {
- retrievedKey = new ManagedKeyData(keyCust, keyNamespace, ManagedKeyState.FAILED);
+ retrievedKey = new ManagedKeyData(key, ManagedKeyState.FAILED);
}
return retrievedKey;
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java
index 8de01319e25b..fd1cf2adf2f2 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyAccessor.java
@@ -95,7 +95,7 @@ public List<Path> getAllSystemKeyFiles() throws IOException {
public ManagedKeyData loadSystemKey(Path keyPath) throws IOException {
ManagedKeyProvider provider = getKeyProvider();
- ManagedKeyData keyData = provider.unwrapKey(loadKeyMetadata(keyPath), null);
+ ManagedKeyData keyData = provider.unwrapKey(null, loadKeyMetadata(keyPath), null);
if (keyData == null) {
throw new RuntimeException("Failed to load system key from: " + keyPath);
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java
index b01af650d764..2b465b41d541 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/keymeta/SystemKeyCache.java
@@ -18,13 +18,14 @@
package org.apache.hadoop.hbase.keymeta;
import java.io.IOException;
+import java.util.HashMap;
import java.util.List;
import java.util.Map;
-import java.util.TreeMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
@@ -35,7 +36,7 @@ public class SystemKeyCache {
private static final Logger LOG = LoggerFactory.getLogger(SystemKeyCache.class);
private final ManagedKeyData latestSystemKey;
- private final Map<Long, ManagedKeyData> systemKeys;
+ private final Map<Bytes, ManagedKeyData> systemKeys;
/**
* Create a SystemKeyCache from the specified configuration and file system.
@@ -63,23 +64,23 @@ public static SystemKeyCache createCache(SystemKeyAccessor accessor) throws IOEx
return null;
}
ManagedKeyData latestSystemKey = null;
- Map<Long, ManagedKeyData> systemKeys = new TreeMap<>();
+ Map<Bytes, ManagedKeyData> systemKeys = new HashMap<>();
for (Path keyPath : allSystemKeyFiles) {
ManagedKeyData keyData = accessor.loadSystemKey(keyPath);
LOG.info(
- "Loaded system key with (custodian: {}, namespace: {}), checksum: {} and metadata hash: {} "
+ "Loaded system key with (custodian: {}, namespace: {}) and partial identity: {}"
+ " from file: {}",
- keyData.getKeyCustodianEncoded(), keyData.getKeyNamespace(), keyData.getKeyChecksum(),
- keyData.getKeyMetadataHashEncoded(), keyPath);
+ keyData.getKeyCustodianEncoded(), keyData.getKeyNamespace(),
+ keyData.getPartialIdentityEncoded(), keyPath);
if (latestSystemKey == null) {
latestSystemKey = keyData;
}
- systemKeys.put(keyData.getKeyChecksum(), keyData);
+ systemKeys.put(keyData.getKeyIdentity().getFullIdentityView(), keyData);
}
return new SystemKeyCache(systemKeys, latestSystemKey);
}
- private SystemKeyCache(Map<Long, ManagedKeyData> systemKeys, ManagedKeyData latestSystemKey) {
+ private SystemKeyCache(Map<Bytes, ManagedKeyData> systemKeys, ManagedKeyData latestSystemKey) {
this.systemKeys = systemKeys;
this.latestSystemKey = latestSystemKey;
}
@@ -88,7 +89,17 @@ public ManagedKeyData getLatestSystemKey() {
return latestSystemKey;
}
- public ManagedKeyData getSystemKeyByChecksum(long checksum) {
- return systemKeys.get(checksum);
+ /**
+ * Look up a system key by its full identity (row key bytes).
+ * @param fullIdentity full identity bytes such as from
+ * {@link ManagedKeyIdentity#getFullIdentityView()} or
+ * {@link ManagedKeyIdentityUtils#constructRowKeyForIdentity(byte[], byte[], byte[])}
+ * @return the cached system key, or null if not found
+ */
+ public ManagedKeyData getSystemKeyByIdentity(byte[] fullIdentity) {
+ if (fullIdentity == null || fullIdentity.length == 0) {
+ return null;
+ }
+ return systemKeys.get(new Bytes(fullIdentity));
}
}
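The switch from a checksum-keyed map to an identity-keyed one works only because the key is wrapped: a raw `byte[]` uses identity equality, so it cannot serve as a `HashMap` key, which is why HBase's `Bytes` wrapper is used above. A stdlib-only illustration (wrapper and demo class are hypothetical, not HBase code):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/** Minimal value-semantics wrapper over byte[], analogous in spirit to HBase's Bytes. */
final class ByteKey {
  private final byte[] bytes;

  ByteKey(byte[] bytes) {
    this.bytes = bytes.clone(); // defensive copy so the key is immutable
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof ByteKey other && Arrays.equals(bytes, other.bytes);
  }

  @Override
  public int hashCode() {
    return Arrays.hashCode(bytes);
  }
}

/** Lookup table keyed by full-identity bytes, with the same null/empty guard as above. */
final class SystemKeyLookupDemo {
  private final Map<ByteKey, String> systemKeys = new HashMap<>();

  void put(byte[] fullIdentity, String key) {
    systemKeys.put(new ByteKey(fullIdentity), key);
  }

  /** Null-safe lookup mirroring the guard in getSystemKeyByIdentity. */
  String get(byte[] fullIdentity) {
    if (fullIdentity == null || fullIdentity.length == 0) {
      return null;
    }
    return systemKeys.get(new ByteKey(fullIdentity));
  }
}
```

A distinct array with equal content finds the entry, because equals/hashCode compare contents rather than references.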
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
index 8902c0f91174..25d8ec8bb338 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
@@ -123,6 +123,8 @@
import org.apache.hadoop.hbase.ipc.CoprocessorRpcUtils;
import org.apache.hadoop.hbase.ipc.RpcServer;
import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
+import org.apache.hadoop.hbase.keymeta.KeymetaAdmin;
+import org.apache.hadoop.hbase.keymeta.KeymetaAdminImpl;
import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor;
import org.apache.hadoop.hbase.log.HBaseMarkers;
import org.apache.hadoop.hbase.master.MasterRpcServices.BalanceSwitchMode;
@@ -361,6 +363,7 @@ public class HMaster extends HBaseServerBase implements Maste
private MasterFileSystem fileSystemManager;
private MasterWalManager walManager;
private SystemKeyManager systemKeyManager;
+ private KeymetaAdmin keymetaAdmin;
// manager to manage procedure-based WAL splitting, can be null if current
// is zk-based WAL splitting. SplitWALManager will replace SplitLogManager
@@ -528,6 +531,7 @@ public HMaster(final Configuration conf) throws IOException {
super(conf, "Master");
final Span span = TraceUtil.createSpan("HMaster.cxtor");
try (Scope ignored = span.makeCurrent()) {
+ this.keymetaAdmin = new KeymetaAdminImpl(this, keymetaAccessor);
if (conf.getBoolean(MAINTENANCE_MODE, false)) {
LOG.info("Detected {}=true via configuration.", MAINTENANCE_MODE);
maintenanceMode = true;
@@ -803,6 +807,11 @@ public MetricsMaster getMasterMetrics() {
return metricsMaster;
}
+ @Override
+ public KeymetaAdmin getKeymetaAdmin() {
+ return keymetaAdmin;
+ }
+
/**
* Initialize all ZK based system trackers. But do not include {@link RegionServerTracker}, it
* should have already been initialized along with {@link ServerManager}.
@@ -1000,7 +1009,7 @@ private void finishActiveMasterInitialization() throws IOException, InterruptedE
systemKeyManager = new SystemKeyManager(this);
systemKeyManager.ensureSystemKeyInitialized();
- buildSystemKeyCache();
+ systemKeyCache = buildSystemKeyCache();
// Precaution. Put in place the old hbck1 lock file to fence out old hbase1s running their
// hbck1s against an hbase2 cluster; it could do damage. To skip this behavior, set
@@ -1731,8 +1740,7 @@ public MasterWalManager getMasterWalManager() {
public boolean rotateSystemKeyIfChanged() throws IOException {
ManagedKeyData newKey = this.systemKeyManager.rotateSystemKeyIfChanged();
if (newKey != null) {
- this.systemKeyCache = null;
- buildSystemKeyCache();
+ systemKeyCache = buildSystemKeyCache();
return true;
}
return false;
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
index 2161ed9c5ad5..3d6033b31ab6 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
@@ -1453,8 +1453,8 @@ protected void handleReportForDutyResponse(final RegionServerStartupResponse c)
initializeFileSystem();
}
- buildSystemKeyCache();
- managedKeyDataCache = new ManagedKeyDataCache(this.getConfiguration(), keymetaAdmin);
+ systemKeyCache = buildSystemKeyCache();
+ managedKeyDataCache = new ManagedKeyDataCache(this.getConfiguration(), keymetaAccessor);
// hack! Maps DFSClient => RegionServer for logs. HDFS made this
// config param for task trackers, but we can piggyback off of it.
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
index 0fb5c2e5f940..9b0723b06af0 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStoreFile.java
@@ -17,8 +17,6 @@
*/
package org.apache.hadoop.hbase.regionserver;
-import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL;
-
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
@@ -45,7 +43,6 @@
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.ReaderContext;
import org.apache.hadoop.hbase.io.hfile.ReaderContext.ReaderType;
-import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTracker;
@@ -219,8 +216,6 @@ public long getMaxMemStoreTS() {
*/
private final BloomType cfBloomType;
- private String keyNamespace;
-
private SystemKeyCache systemKeyCache;
private final ManagedKeyDataCache managedKeyDataCache;
@@ -242,7 +237,7 @@ public long getMaxMemStoreTS() {
*/
public HStoreFile(FileSystem fs, Path p, Configuration conf, CacheConfig cacheConf,
BloomType cfBloomType, boolean primaryReplica, StoreFileTracker sft) throws IOException {
- this(sft.getStoreFileInfo(p, primaryReplica), cfBloomType, cacheConf, null, null,
+ this(sft.getStoreFileInfo(p, primaryReplica), cfBloomType, cacheConf, null,
SecurityUtil.isKeyManagementEnabled(conf) ? SystemKeyCache.createCache(conf, fs) : null,
SecurityUtil.isKeyManagementEnabled(conf) ? new ManagedKeyDataCache(conf, null) : null);
}
@@ -260,7 +255,7 @@ public HStoreFile(FileSystem fs, Path p, Configuration conf, CacheConfig cacheCo
*/
public HStoreFile(StoreFileInfo fileInfo, BloomType cfBloomType, CacheConfig cacheConf)
throws IOException {
- this(fileInfo, cfBloomType, cacheConf, null, KeyNamespaceUtil.constructKeyNamespace(fileInfo),
+ this(fileInfo, cfBloomType, cacheConf, null,
SecurityUtil.isKeyManagementEnabled(fileInfo.getConf())
? SystemKeyCache.createCache(fileInfo.getConf(), fileInfo.getFileSystem())
: null,
@@ -282,12 +277,11 @@ public HStoreFile(StoreFileInfo fileInfo, BloomType cfBloomType, CacheConfig cac
* @param metrics Tracks bloom filter requests and results. May be null.
*/
public HStoreFile(StoreFileInfo fileInfo, BloomType cfBloomType, CacheConfig cacheConf,
- BloomFilterMetrics metrics, String keyNamespace, SystemKeyCache systemKeyCache,
+ BloomFilterMetrics metrics, SystemKeyCache systemKeyCache,
ManagedKeyDataCache managedKeyDataCache) {
this.fileInfo = fileInfo;
this.cacheConf = cacheConf;
this.metrics = metrics;
- this.keyNamespace = keyNamespace != null ? keyNamespace : KEY_SPACE_GLOBAL;
this.systemKeyCache = systemKeyCache;
this.managedKeyDataCache = managedKeyDataCache;
if (BloomFilterFactory.isGeneralBloomEnabled(fileInfo.getConf())) {
@@ -419,7 +413,7 @@ private void open() throws IOException {
fileInfo.initHDFSBlocksDistribution();
long readahead = fileInfo.isNoReadahead() ? 0L : -1L;
ReaderContext context = fileInfo.createReaderContext(false, readahead, ReaderType.PREAD,
- keyNamespace, systemKeyCache, managedKeyDataCache);
+ systemKeyCache, managedKeyDataCache);
fileInfo.initHFileInfo(context);
StoreFileReader reader = fileInfo.preStoreFileReaderOpen(context, cacheConf);
if (reader == null) {
@@ -568,7 +562,7 @@ private StoreFileReader createStreamReader(boolean canUseDropBehind) throws IOEx
initReader();
final boolean doDropBehind = canUseDropBehind && cacheConf.shouldDropBehindCompaction();
ReaderContext context = fileInfo.createReaderContext(doDropBehind, -1, ReaderType.STREAM,
- keyNamespace, systemKeyCache, managedKeyDataCache);
+ systemKeyCache, managedKeyDataCache);
StoreFileReader reader = fileInfo.preStoreFileReaderOpen(context, cacheConf);
if (reader == null) {
reader = fileInfo.createReader(context, cacheConf);
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index 61bd92821de7..4590bcc53009 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -104,6 +104,7 @@
import org.apache.hadoop.hbase.ipc.RpcServerInterface;
import org.apache.hadoop.hbase.ipc.ServerNotRunningYetException;
import org.apache.hadoop.hbase.ipc.ServerRpcController;
+import org.apache.hadoop.hbase.keymeta.KeyIdentityBytesBacked;
import org.apache.hadoop.hbase.monitoring.ThreadLocalServerSideScanMetrics;
import org.apache.hadoop.hbase.net.Address;
import org.apache.hadoop.hbase.procedure2.RSProcedureCallable;
@@ -4104,19 +4105,20 @@ public BooleanMsg ejectManagedKeyDataCacheEntry(final RpcController controller,
requestCount.increment();
byte[] keyCustodian = request.getKeyCustNs().getKeyCust().toByteArray();
String keyNamespace = request.getKeyCustNs().getKeyNamespace();
- byte[] keyMetadataHash = request.getKeyMetadataHash().toByteArray();
+ byte[] partialIdentity = request.getKeyMetadataHash().toByteArray();
if (LOG.isInfoEnabled()) {
String keyCustodianEncoded = ManagedKeyProvider.encodeToStr(keyCustodian);
- String keyMetadataHashEncoded = ManagedKeyProvider.encodeToStr(keyMetadataHash);
+ String partialIdentityEncoded = ManagedKeyProvider.encodeToStr(partialIdentity);
LOG.info(
"Received EjectManagedKeyDataCacheEntry request for key custodian: {}, namespace: {}, "
- + "metadata hash: {}",
- keyCustodianEncoded, keyNamespace, keyMetadataHashEncoded);
+ + "partial identity: {}",
+ keyCustodianEncoded, keyNamespace, partialIdentityEncoded);
}
boolean ejected = server.getKeyManagementService().getManagedKeyDataCache()
- .ejectKey(keyCustodian, keyNamespace, keyMetadataHash);
+ .ejectKey(new KeyIdentityBytesBacked(new Bytes(keyCustodian),
+ new Bytes(Bytes.toBytes(keyNamespace)), new Bytes(partialIdentity)));
return BooleanMsg.newBuilder().setBoolMsg(ejected).build();
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java
index 08e710826358..99bba77b592f 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreEngine.java
@@ -41,7 +41,6 @@
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.conf.ConfigKey;
import org.apache.hadoop.hbase.io.hfile.BloomFilterMetrics;
-import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
import org.apache.hadoop.hbase.log.HBaseMarkers;
@@ -237,8 +236,7 @@ public HStoreFile createStoreFileAndReader(Path p) throws IOException {
public HStoreFile createStoreFileAndReader(StoreFileInfo info) throws IOException {
info.setRegionCoprocessorHost(coprocessorHost);
HStoreFile storeFile = new HStoreFile(info, ctx.getFamily().getBloomFilterType(),
- ctx.getCacheConf(), bloomFilterMetrics, KeyNamespaceUtil.constructKeyNamespace(ctx),
- systemKeyCache, managedKeyDataCache);
+ ctx.getCacheConf(), bloomFilterMetrics, systemKeyCache, managedKeyDataCache);
storeFile.initReader();
return storeFile;
}
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
index 1184f39da66a..2a5bfb5a6813 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
@@ -297,8 +297,7 @@ public StoreFileReader createReader(ReaderContext context, CacheConfig cacheConf
}
ReaderContext createReaderContext(boolean doDropBehind, long readahead, ReaderType type,
- String keyNamespace, SystemKeyCache systemKeyCache, ManagedKeyDataCache managedKeyDataCache)
- throws IOException {
+ SystemKeyCache systemKeyCache, ManagedKeyDataCache managedKeyDataCache) throws IOException {
FSDataInputStreamWrapper in;
FileStatus status;
if (this.link != null) {
diff --git a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java
index 5fff2a417ebc..6313eb5359f4 100644
--- a/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java
+++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/security/SecurityUtil.java
@@ -29,9 +29,11 @@
import org.apache.hadoop.hbase.io.crypto.Encryption;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.io.hfile.FixedFileTrailer;
-import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
+import org.apache.hadoop.hbase.keymeta.KeyIdentitySingleArrayBacked;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.yetus.audience.InterfaceAudience;
import org.apache.yetus.audience.InterfaceStability;
import org.slf4j.Logger;
@@ -85,7 +87,6 @@ public static Encryption.Context createEncryptionContext(Configuration conf,
Encryption.Context cryptoContext = Encryption.Context.NONE;
boolean isKeyManagementEnabled = isKeyManagementEnabled(conf);
String cipherName = family.getEncryptionType();
- String keyNamespace = null; // Will be set by fallback logic
if (LOG.isDebugEnabled()) {
LOG.debug("Creating encryption context for table: {} and column family: {}",
tableDescriptor.getTableName().getNameAsString(), family.getNameAsString());
@@ -103,7 +104,7 @@ public static Encryption.Context createEncryptionContext(Configuration conf,
ManagedKeyData kekKeyData =
isKeyManagementEnabled ? systemKeyCache.getLatestSystemKey() : null;
- // Scenario 1: If family has a key, unwrap it and use that as DEK.
+ // Scenario 1: If family has a key, unwrap it and use that as CEK.
byte[] familyKeyBytes = family.getEncryptionKey();
if (familyKeyBytes != null) {
try {
@@ -114,8 +115,8 @@ public static Encryption.Context createEncryptionContext(Configuration conf,
// Scenario 1b: If key management is disabled, unwrap the key using master key.
key = EncryptionUtil.unwrapKey(conf, familyKeyBytes);
}
- LOG.debug("Scenario 1: Use family key for namespace {} cipher: {} "
- + "key management enabled: {}", keyNamespace, cipherName, isKeyManagementEnabled);
+ LOG.debug("Scenario 1: Use family key for cipher: {} key management enabled: {}",
+ cipherName, isKeyManagementEnabled);
} catch (KeyException e) {
throw new IOException(e);
}
@@ -124,66 +125,67 @@ public static Encryption.Context createEncryptionContext(Configuration conf,
boolean localKeyGenEnabled =
conf.getBoolean(HConstants.CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_ENABLED_CONF_KEY,
HConstants.CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_DEFAULT_ENABLED);
- // Implement 4-step fallback logic for key namespace resolution in the order of
+ Bytes keyNamespaceCFAttribute = family.getEncryptionKeyNamespaceBytes();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Looking for active key for table: {} and column family: {} with "
+          + "encryption key namespace: {} and global namespace (custodian: {}, namespace: {})",
+ tableDescriptor.getTableName().getNameAsString(), family.getNameAsString(),
+ keyNamespaceCFAttribute, ManagedKeyData.GLOBAL_CUST_ENCODED,
+ ManagedKeyData.KEY_SPACE_GLOBAL);
+ }
+ // Implement 2-step fallback logic for key namespace resolution in the order of
// 1. CF KEY_NAMESPACE attribute
- // 2. Constructed namespace
- // 3. Table name
- // 4. Global namespace
- String[] candidateNamespaces = { family.getEncryptionKeyNamespace(),
- KeyNamespaceUtil.constructKeyNamespace(tableDescriptor, family),
- tableDescriptor.getTableName().getNameAsString(), ManagedKeyData.KEY_SPACE_GLOBAL };
-
- ManagedKeyData activeKeyData = null;
- for (String candidate : candidateNamespaces) {
- if (candidate != null) {
- // Log information on the table and column family we are looking for the active key in
- if (LOG.isDebugEnabled()) {
- LOG.debug(
- "Looking for active key for table: {} and column family: {} with "
- + "(custodian: {}, namespace: {})",
- tableDescriptor.getTableName().getNameAsString(), family.getNameAsString(),
- ManagedKeyData.KEY_GLOBAL_CUSTODIAN, candidate);
- }
- activeKeyData = managedKeyDataCache
- .getActiveEntry(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES, candidate);
- if (activeKeyData != null) {
- keyNamespace = candidate;
- break;
- }
- }
+ // 2. Global namespace
+ ManagedKeyData activeKeyData = keyNamespaceCFAttribute != null
+ ? managedKeyDataCache.getActiveEntry(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES,
+ keyNamespaceCFAttribute)
+ : null;
+ if (activeKeyData == null) {
+ activeKeyData = managedKeyDataCache.getActiveEntry(
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES, ManagedKeyData.KEY_SPACE_GLOBAL_BYTES);
}
// Scenario 2: There is an active key
if (activeKeyData != null) {
if (!localKeyGenEnabled) {
- // Scenario 2a: Use active key as DEK and latest STK as KEK
+ // Scenario 2a: Use active key as CEK and latest STK as KEK
key = activeKeyData.getTheKey();
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Scenario 2a: Use active key as CEK with (custodian: {}, namespace: {}) and "
+              + "STK as KEK for cipher: {} for table: {} and column family: {}",
+ activeKeyData.getKeyCustodianEncoded(), activeKeyData.getKeyNamespace(),
+ cipherName, tableDescriptor.getTableName().getNameAsString(),
+ family.getNameAsString());
+ }
} else {
- // Scenario 2b: Use active key as KEK and generate local key as DEK
+ // Scenario 2b: Use active key (DEK) as KEK and let a local key be generated as CEK
+ // later.
kekKeyData = activeKeyData;
// TODO: Use the active key as a seed to generate the local key instead of
// random generation
cipher = getCipherIfValid(conf, cipherName, activeKeyData.getTheKey(),
family.getNameAsString());
- }
- if (LOG.isDebugEnabled()) {
- LOG.debug(
- "Scenario 2: Use active key with (custodian: {}, namespace: {}) for cipher: {} "
- + "localKeyGenEnabled: {} for table: {} and column family: {}",
- activeKeyData.getKeyCustodianEncoded(), activeKeyData.getKeyNamespace(), cipherName,
- localKeyGenEnabled, tableDescriptor.getTableName().getNameAsString(),
- family.getNameAsString());
+ if (LOG.isDebugEnabled()) {
+ LOG.debug(
+ "Scenario 2b: Use random key as CEK and active key as KEK with (custodian: {}, "
+ + "namespace: {}) for cipher: {} for table: {} and column family: {}",
+ activeKeyData.getKeyCustodianEncoded(), activeKeyData.getKeyNamespace(),
+ cipherName, tableDescriptor.getTableName().getNameAsString(),
+ family.getNameAsString());
+ }
}
} else {
if (LOG.isDebugEnabled()) {
LOG.debug("Scenario 3a: No active key found for table: {} and column family: {}",
tableDescriptor.getTableName().getNameAsString(), family.getNameAsString());
}
- // Scenario 3a: Do nothing, let a random key be generated as DEK and if key management
+ // Scenario 3a: Do nothing, let a random key be generated as CEK and if key management
// is enabled, let STK be used as KEK.
}
} else {
- // Scenario 3b: Do nothing, let a random key be generated as DEK, let STK be used as KEK.
+ // Scenario 3b: Do nothing, let a random key be generated as CEK, let STK be used as KEK.
if (LOG.isDebugEnabled()) {
LOG.debug(
"Scenario 3b: Key management is disabled and no ENCRYPTION_KEY attribute "
@@ -196,7 +198,7 @@ public static Encryption.Context createEncryptionContext(Configuration conf,
LOG.debug(
-      "Usigng KEK with (custodian: {}, namespace: {}), checksum: {} and metadata " + "hash: {}",
+      "Using KEK with (custodian: {}, namespace: {}), checksum: {} and partial identity: {}",
kekKeyData.getKeyCustodianEncoded(), kekKeyData.getKeyNamespace(),
-      kekKeyData.getKeyChecksum(), kekKeyData.getKeyMetadataHashEncoded());
+      kekKeyData.getKeyChecksum(), kekKeyData.getPartialIdentityEncoded());
}
if (cipher == null) {
@@ -209,7 +211,6 @@ public static Encryption.Context createEncryptionContext(Configuration conf,
cryptoContext = Encryption.newContext(conf);
cryptoContext.setCipher(cipher);
cryptoContext.setKey(key);
- cryptoContext.setKeyNamespace(keyNamespace);
cryptoContext.setKEKData(kekKeyData);
}
return cryptoContext;
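The 2-step fallback above (the column family's KEY_NAMESPACE attribute first, then the global namespace) can be sketched outside of HBase as follows. `resolveActiveKey`, the `Map`-backed cache, and the `"*"` global marker are illustrative stand-ins, not the actual `ManagedKeyDataCache` API:

```java
import java.util.HashMap;
import java.util.Map;

public class NamespaceFallbackSketch {
  // Hypothetical marker for the global namespace; the real code uses
  // ManagedKeyData.KEY_SPACE_GLOBAL.
  static final String GLOBAL_NAMESPACE = "*";

  /**
   * Resolve the active key: first try the column family's KEY_NAMESPACE
   * attribute (if set), then fall back to the global namespace. Returns
   * null when neither lookup hits, which maps to Scenario 3a above.
   */
  static String resolveActiveKey(String cfNamespace, Map<String, String> activeKeys) {
    if (cfNamespace != null) {
      String key = activeKeys.get(cfNamespace);
      if (key != null) {
        return key; // Step 1: CF attribute hit
      }
    }
    return activeKeys.get(GLOBAL_NAMESPACE); // Step 2: global namespace
  }

  public static void main(String[] args) {
    Map<String, String> activeKeys = new HashMap<>();
    activeKeys.put("ns1", "key-ns1");
    activeKeys.put(GLOBAL_NAMESPACE, "key-global");
    System.out.println(resolveActiveKey("ns1", activeKeys)); // CF attribute hit
    System.out.println(resolveActiveKey("ns2", activeKeys)); // falls back to global
    System.out.println(resolveActiveKey(null, activeKeys));  // no attribute, global
  }
}
```

A miss on both steps leaves `activeKeyData` null, at which point a random CEK is generated and, with key management enabled, the STK serves as KEK.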
@@ -239,52 +240,50 @@ public static Encryption.Context createEncryptionContext(Configuration conf, Pat
// When there is key material, determine the appropriate KEK
boolean isKeyManagementEnabled = isKeyManagementEnabled(conf);
- if (((trailer.getKEKChecksum() != 0L) || isKeyManagementEnabled) && systemKeyCache == null) {
+ byte[] kekIdentity = trailer.getKekIdentity();
+ boolean hasKekIdentity = (kekIdentity != null && kekIdentity.length > 0);
+ if ((hasKekIdentity || isKeyManagementEnabled) && systemKeyCache == null) {
throw new IOException("SystemKeyCache can't be null when using key management feature");
}
- if ((trailer.getKEKChecksum() != 0L && !isKeyManagementEnabled)) {
+ if (hasKekIdentity && !isKeyManagementEnabled) {
throw new IOException(
- "Seeing newer trailer with KEK checksum, but key management is disabled");
+        "Trailer has a KEK identity, but key management is disabled");
}
- // Try STK lookup first if checksum is available.
- if (trailer.getKEKChecksum() != 0L) {
- LOG.debug("Looking for System Key with checksum: {}", trailer.getKEKChecksum());
- ManagedKeyData systemKeyData =
- systemKeyCache.getSystemKeyByChecksum(trailer.getKEKChecksum());
+ // Try STK lookup first if full identity is available.
+ if (hasKekIdentity) {
+ LOG.debug("Looking for System Key by identity (length: {})", kekIdentity.length);
+ ManagedKeyData systemKeyData = systemKeyCache.getSystemKeyByIdentity(kekIdentity);
if (systemKeyData != null) {
kek = systemKeyData.getTheKey();
kekKeyData = systemKeyData;
if (LOG.isDebugEnabled()) {
LOG.debug(
- "Found System Key with (custodian: {}, namespace: {}), checksum: {} and "
- + "metadata hash: {}",
+            "Found System Key with (custodian: {}, namespace: {}) and partial identity: {}",
systemKeyData.getKeyCustodianEncoded(), systemKeyData.getKeyNamespace(),
- systemKeyData.getKeyChecksum(), systemKeyData.getKeyMetadataHashEncoded());
+ systemKeyData.getPartialIdentityEncoded());
}
}
}
- // If STK lookup failed or no checksum available, try managed key lookup using metadata
- if (kek == null && trailer.getKEKMetadata() != null) {
- if (managedKeyDataCache == null) {
- throw new IOException("KEK metadata is available, but ManagedKeyDataCache is null");
- }
- Throwable cause = null;
+ // If STK lookup failed or no identity available, try managed key lookup by full identity
+ if (kek == null && hasKekIdentity && managedKeyDataCache == null) {
+ throw new IOException("KEK identity is available, but ManagedKeyDataCache is null");
+ }
+ if (kek == null && hasKekIdentity && managedKeyDataCache != null) {
try {
- kekKeyData = managedKeyDataCache.getEntry(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES,
- trailer.getKeyNamespace(), trailer.getKEKMetadata(), keyBytes);
- } catch (KeyException | IOException e) {
- cause = e;
- }
- // When getEntry returns null we treat it the same as exception case.
- if (kekKeyData == null) {
- throw new IOException(
- "Failed to get key data for KEK metadata: " + trailer.getKEKMetadata(), cause);
+ ManagedKeyIdentity keyIdentity = new KeyIdentitySingleArrayBacked(kekIdentity);
+ kekKeyData = managedKeyDataCache.getEntry(keyIdentity, trailer.getKEKMetadata(), null);
+ if (kekKeyData != null) {
+ kek = kekKeyData.getTheKey();
+ }
+ } catch (KeyException e) {
+ throw new IOException("Failed to resolve KEK from trailer full identity", e);
}
- kek = kekKeyData.getTheKey();
- } else if (kek == null && isKeyManagementEnabled) {
- // No checksum or metadata available, fall back to latest system key for backwards
+ }
+
+ if (kek == null && isKeyManagementEnabled) {
+ // No identity or metadata available, fall back to latest system key for backwards
// compatibility
ManagedKeyData systemKeyData = systemKeyCache.getLatestSystemKey();
if (systemKeyData == null) {
@@ -299,8 +298,8 @@ public static Encryption.Context createEncryptionContext(Configuration conf, Pat
try {
key = EncryptionUtil.unwrapKey(conf, null, keyBytes, kek);
} catch (KeyException | IOException e) {
- throw new IOException("Failed to unwrap key with KEK checksum: "
- + trailer.getKEKChecksum() + ", metadata: " + trailer.getKEKMetadata(), e);
+ throw new IOException("Failed to unwrap key with KEK identity (length: "
+ + (kekIdentity != null ? kekIdentity.length : 0) + ")", e);
}
} else {
key = EncryptionUtil.unwrapKey(conf, keyBytes);
@@ -309,7 +308,6 @@ public static Encryption.Context createEncryptionContext(Configuration conf, Pat
Cipher cipher = getCipherIfValid(conf, key.getAlgorithm(), key, null);
cryptoContext.setCipher(cipher);
cryptoContext.setKey(key);
- cryptoContext.setKeyNamespace(trailer.getKeyNamespace());
cryptoContext.setKEKData(kekKeyData);
}
return cryptoContext;
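The read-path resolution order in the hunk above (full-identity STK lookup, then the managed key cache, then the latest system key for backwards compatibility) can be sketched as below. The string-keyed maps and `resolveKek` are hypothetical simplifications of `SystemKeyCache` and `ManagedKeyDataCache`, and an unchecked exception stands in for the `IOException` the real code throws:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

public class KekResolutionSketch {
  /**
   * Resolve the KEK for an HFile at read time:
   *  1. system key cache by full identity,
   *  2. managed key cache by full identity,
   *  3. latest system key as a backwards-compatibility fallback for
   *     trailers written without a KEK identity.
   */
  static String resolveKek(byte[] kekIdentity, Map<String, String> stkByIdentity,
      Map<String, String> managedByIdentity, String latestStk) {
    boolean hasIdentity = kekIdentity != null && kekIdentity.length > 0;
    if (hasIdentity) {
      String id = new String(kekIdentity, StandardCharsets.UTF_8);
      String stk = stkByIdentity.get(id);
      if (stk != null) {
        return stk; // Step 1: STK cache hit
      }
      String managed = managedByIdentity.get(id);
      if (managed != null) {
        return managed; // Step 2: managed key cache hit
      }
    }
    if (latestStk == null) {
      throw new IllegalStateException("No system key available");
    }
    return latestStk; // Step 3: latest STK fallback
  }
}
```

The ordering matters: STK lookups are cheap and local, while a managed key cache miss may reach out to the external key provider.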
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
index b37c2bf7e75f..ee18c8b861a6 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestFixedFileTrailer.java
@@ -70,7 +70,7 @@ public class TestFixedFileTrailer {
* The number of used fields by version. Indexed by version minus two. Min version that we support
* is V2
*/
- private static final int[] NUM_FIELDS_BY_VERSION = new int[] { 14, 15 };
+ private static final int[] NUM_FIELDS_BY_VERSION = new int[] { 14, 17 };
private HBaseTestingUtil util = new HBaseTestingUtil();
private FileSystem fs;
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java
index c91539b7ed68..73252bf04fb4 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyProviderInterceptor.java
@@ -40,8 +40,8 @@ public void initConfig(Configuration conf, String providerParameters) {
}
@Override
- public ManagedKeyData getManagedKey(byte[] custodian, String namespace) throws IOException {
- return spy.getManagedKey(custodian, namespace);
+ public ManagedKeyData getManagedKey(ManagedKeyIdentity keyIdentity) throws IOException {
+ return spy.getManagedKey(keyIdentity);
}
@Override
@@ -50,8 +50,9 @@ public ManagedKeyData getSystemKey(byte[] systemId) throws IOException {
}
@Override
- public ManagedKeyData unwrapKey(String keyMetadata, byte[] wrappedKey) throws IOException {
- return spy.unwrapKey(keyMetadata, wrappedKey);
+ public ManagedKeyData unwrapKey(ManagedKeyIdentity keyIdentity, String keyMetadata,
+ byte[] wrappedKey) throws IOException {
+ return spy.unwrapKey(keyIdentity, keyMetadata, wrappedKey);
}
@Override
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java
index 9f2381e849bb..39830113dbfd 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/ManagedKeyTestBase.java
@@ -43,6 +43,7 @@ public void setUp() throws Exception {
TEST_UTIL.getConfiguration().set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
TEST_UTIL.getConfiguration().set("hbase.coprocessor.master.classes",
KeymetaServiceEndpoint.class.getName());
+ configureKeyProvider();
}
// Start the minicluster if needed
@@ -118,6 +119,9 @@ protected TableName getSystemTableNameToWaitFor() {
return KeymetaTableAccessor.KEY_META_TABLE_NAME;
}
+ protected void configureKeyProvider() {
+ }
+
/**
* Useful hook to enable setting a breakpoint while debugging ruby tests, just log a message and
* you can even have a conditional breakpoint.
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java
index bfd8be319895..9e2a512b31a3 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementService.java
@@ -41,6 +41,7 @@
import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
import org.apache.hadoop.hbase.testclassification.MiscTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.junit.Before;
import org.junit.ClassRule;
@@ -68,7 +69,7 @@ public void setUp() throws Exception {
conf.set(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, "true");
conf.set(HConstants.CRYPTO_MANAGED_KEYPROVIDER_CONF_KEY,
MockManagedKeyProvider.class.getName());
- conf.set(HConstants.HBASE_ORIGINAL_DIR, "/tmp/hbase");
+ conf.set(HConstants.HBASE_ORIGINAL_ROOT_DIR, "/tmp/hbase");
}
@Test
@@ -78,7 +79,8 @@ public void testDefaultKeyManagementServiceCreation() throws IOException {
MockManagedKeyProvider provider =
(MockManagedKeyProvider) Encryption.getManagedKeyProvider(conf);
ManagedKeyData keyData =
- provider.getManagedKey("system".getBytes(), ManagedKeyData.KEY_SPACE_GLOBAL);
+ provider.getManagedKey(new KeyIdentityPrefixBytesBacked(new Bytes("system".getBytes()),
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES));
String fileName = SYSTEM_KEY_FILE_PREFIX + "1";
Path systemKeyDir = CommonFSUtils.getSystemKeyDir(conf);
FileStatus mockFileStatus = KeymetaTestUtils.createMockFile(fileName);
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java
index 36df6a32ccd8..940007a349d0 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyManagementUtils.java
@@ -21,9 +21,13 @@
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertThrows;
import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
+import java.io.IOException;
import java.security.Key;
import java.security.KeyException;
import javax.crypto.KeyGenerator;
@@ -33,6 +37,7 @@
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
import org.apache.hadoop.hbase.testclassification.MasterTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
@@ -49,8 +54,9 @@ public class TestKeyManagementUtils {
private ManagedKeyProvider mockProvider;
private KeymetaTableAccessor mockAccessor;
+ private ManagedKeyIdentity custNamespacePrefix;
private byte[] keyCust;
- private String keyNamespace;
+ private byte[] keyNamespace;
private String keyMetadata;
private byte[] wrappedKey;
private Key testKey;
@@ -60,7 +66,8 @@ public void setUp() throws Exception {
mockProvider = mock(ManagedKeyProvider.class);
mockAccessor = mock(KeymetaTableAccessor.class);
keyCust = "testCustodian".getBytes();
- keyNamespace = "testNamespace";
+ keyNamespace = Bytes.toBytes("testNamespace");
+ custNamespacePrefix = new KeyIdentityPrefixBytesBacked(keyCust, keyNamespace);
keyMetadata = "testMetadata";
wrappedKey = new byte[] { 1, 2, 3, 4 };
@@ -71,78 +78,80 @@ public void setUp() throws Exception {
@Test
public void testRetrieveKeyWithNullResponse() throws Exception {
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(null);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any())).thenReturn(null);
KeyException exception = assertThrows(KeyException.class, () -> {
- KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
- keyMetadata, wrappedKey);
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, custNamespacePrefix, keyMetadata,
+ wrappedKey);
});
assertNotNull(exception.getMessage());
- assertEquals(true, exception.getMessage().contains("Invalid key that is null"));
+ assertEquals(exception.getMessage(), true, exception.getMessage()
+ .contains("Invalid key received from key provider (null/metadata/state)"));
}
@Test
public void testRetrieveKeyWithNullMetadata() throws Exception {
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
// Create a mock that returns null for getKeyMetadata()
ManagedKeyData mockKeyData = mock(ManagedKeyData.class);
when(mockKeyData.getKeyMetadata()).thenReturn(null);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(mockKeyData);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(mockKeyData);
KeyException exception = assertThrows(KeyException.class, () -> {
- KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
- keyMetadata, wrappedKey);
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, custNamespacePrefix, keyMetadata,
+ wrappedKey);
});
assertNotNull(exception.getMessage());
- assertEquals(true, exception.getMessage().contains("Invalid key that is null"));
+ assertEquals(exception.getMessage(), true, exception.getMessage()
+ .contains("Invalid key received from key provider (null/metadata/state)"));
}
@Test
public void testRetrieveKeyWithMismatchedMetadata() throws Exception {
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
String differentMetadata = "differentMetadata";
ManagedKeyData keyDataWithDifferentMetadata =
new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, differentMetadata);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithDifferentMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(keyDataWithDifferentMetadata);
KeyException exception = assertThrows(KeyException.class, () -> {
- KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
- keyMetadata, wrappedKey);
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, custNamespacePrefix, keyMetadata,
+ wrappedKey);
});
assertNotNull(exception.getMessage());
- assertEquals(true, exception.getMessage().contains("invalid metadata"));
+ assertEquals(exception.getMessage(), true,
+ exception.getMessage().contains("Invalid key received from key provider"));
}
@Test
public void testRetrieveKeyWithDisabledState() throws Exception {
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
ManagedKeyData keyDataWithDisabledState =
new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.DISABLED, keyMetadata);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithDisabledState);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(keyDataWithDisabledState);
KeyException exception = assertThrows(KeyException.class, () -> {
- KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust, keyCust, keyNamespace,
- keyMetadata, wrappedKey);
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, custNamespacePrefix, keyMetadata,
+ wrappedKey);
});
assertNotNull(exception.getMessage());
- assertEquals(true,
- exception.getMessage().contains("Invalid key that is null or having invalid metadata"));
+ assertEquals(exception.getMessage(), true,
+ exception.getMessage().contains("Invalid key received from key provider"));
}
@Test
public void testRetrieveKeySuccess() throws Exception {
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
ManagedKeyData validKeyData =
new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(validKeyData);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(validKeyData);
- ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust,
- keyCust, keyNamespace, keyMetadata, wrappedKey);
+ ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor,
+ custNamespacePrefix, keyMetadata, wrappedKey);
assertNotNull(result);
assertEquals(keyMetadata, result.getKeyMetadata());
@@ -152,13 +161,13 @@ public void testRetrieveKeySuccess() throws Exception {
@Test
public void testRetrieveKeyWithFailedState() throws Exception {
// FAILED state is allowed (unlike DISABLED), so this should succeed
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
ManagedKeyData keyDataWithFailedState =
new ManagedKeyData(keyCust, keyNamespace, null, ManagedKeyState.FAILED, keyMetadata);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithFailedState);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(keyDataWithFailedState);
- ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust,
- keyCust, keyNamespace, keyMetadata, wrappedKey);
+ ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor,
+ custNamespacePrefix, keyMetadata, wrappedKey);
assertNotNull(result);
assertEquals(ManagedKeyState.FAILED, result.getKeyState());
@@ -167,15 +176,180 @@ public void testRetrieveKeyWithFailedState() throws Exception {
@Test
public void testRetrieveKeyWithInactiveState() throws Exception {
// INACTIVE state is allowed, so this should succeed
- String encKeyCust = ManagedKeyProvider.encodeToStr(keyCust);
ManagedKeyData keyDataWithInactiveState =
new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.INACTIVE, keyMetadata);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(keyDataWithInactiveState);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(keyDataWithInactiveState);
- ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, encKeyCust,
- keyCust, keyNamespace, keyMetadata, wrappedKey);
+ ManagedKeyData result = KeyManagementUtils.retrieveKey(mockProvider, mockAccessor,
+ custNamespacePrefix, keyMetadata, wrappedKey);
assertNotNull(result);
assertEquals(ManagedKeyState.INACTIVE, result.getKeyState());
}
+
+ @Test
+ public void testRetrieveKeyWithMismatchedCustodian() throws Exception {
+ ManagedKeyData keyDataWithWrongCustodian = new ManagedKeyData("otherCustodian".getBytes(),
+ keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(keyDataWithWrongCustodian);
+
+ KeyException exception = assertThrows(KeyException.class, () -> {
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, custNamespacePrefix, keyMetadata,
+ wrappedKey);
+ });
+
+ assertNotNull(exception.getMessage());
+ assertEquals(true, exception.getMessage().contains("scope does not match request"));
+ verify(mockAccessor, never()).addKey(any());
+ }
+
+ @Test
+ public void testRetrieveKeyWithMismatchedNamespace() throws Exception {
+ ManagedKeyData keyDataWithWrongNamespace = new ManagedKeyData(keyCust,
+ Bytes.toBytes("otherNamespace"), testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(keyDataWithWrongNamespace);
+
+ KeyException exception = assertThrows(KeyException.class, () -> {
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, custNamespacePrefix, keyMetadata,
+ wrappedKey);
+ });
+
+ assertNotNull(exception.getMessage());
+ assertEquals(true, exception.getMessage().contains("scope does not match request"));
+ verify(mockAccessor, never()).addKey(any());
+ }
+
+ @Test
+ public void testRetrieveKeyPersistenceFailure_ThrowsIOException() throws Exception {
+ ManagedKeyData validKeyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(validKeyData);
+ doThrow(new IOException("persist failed")).when(mockAccessor).addKey(any());
+
+ IOException exception = assertThrows(IOException.class, () -> {
+ KeyManagementUtils.retrieveKey(mockProvider, mockAccessor, custNamespacePrefix, keyMetadata,
+ wrappedKey);
+ });
+
+ assertEquals("persist failed", exception.getMessage());
+ }
+
+ @Test
+ public void testRefreshKeyWithNoChange_NullAccessor() throws Exception {
+ doTestRefreshKey_OnNoChange(null);
+ }
+
+ @Test
+ public void testRefreshKey_updateRefreshTimestamp_OnNoChange() throws Exception {
+ doTestRefreshKey_OnNoChange(mockAccessor);
+ }
+
+ private void doTestRefreshKey_OnNoChange(KeymetaTableAccessor accessor) throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any())).thenReturn(keyData);
+
+ ManagedKeyData result = KeyManagementUtils.refreshKey(mockProvider, accessor, keyData);
+ assertNotNull(result);
+ assertEquals(keyData, result);
+ if (accessor != null) {
+ verify(accessor).updateRefreshTimestamp(keyData);
+ }
+ }
+
+ @Test
+ public void testRefreshKeyWithChange_NullAccessor() throws Exception {
+ doTestRefreshKeyWith_StateChange(null);
+ }
+
+ @Test
+ public void testRefreshKeyWith_StateChange() throws Exception {
+ doTestRefreshKeyWith_StateChange(mockAccessor);
+ }
+
+ private void doTestRefreshKeyWith_StateChange(KeymetaTableAccessor accessor) throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ ManagedKeyData newKeyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.INACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(newKeyData);
+
+ ManagedKeyData result = KeyManagementUtils.refreshKey(mockProvider, accessor, keyData);
+ assertNotNull(result);
+ assertEquals(newKeyData, result);
+ if (accessor != null) {
+ verify(accessor).updateActiveState(keyData, ManagedKeyState.INACTIVE, false);
+ }
+ }
+
+ /** Test that when refresh fails and returns a FAILED key, we keep the current good key intact. */
+ @Test
+ public void testRefreshKeyFailedState_NullAccessor() throws Exception {
+ doTestRefreshKey_OnFailedState(null);
+ }
+
+ @Test
+ public void testRefreshKeyFailedState() throws Exception {
+ doTestRefreshKey_OnFailedState(mockAccessor);
+ }
+
+ private void doTestRefreshKey_OnFailedState(KeymetaTableAccessor accessor) throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ ManagedKeyData newKeyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.FAILED, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(newKeyData);
+
+ ManagedKeyData result = KeyManagementUtils.refreshKey(mockProvider, accessor, keyData);
+ assertNotNull(result);
+ assertEquals(keyData, result);
+ if (accessor != null) {
+ verify(accessor).updateRefreshTimestamp(keyData);
+ }
+ }
+
+ /** Test that when refresh throws an exception, we keep the current good key intact. */
+ @Test
+ public void testRefreshKeyException_NullAccessor() throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenThrow(new IOException("Test exception"));
+
+ ManagedKeyData result = KeyManagementUtils.refreshKey(mockProvider, null, keyData);
+ assertNotNull(result);
+ assertEquals(keyData, result);
+ }
+
+ @Test
+ public void testRefreshKey_KeyDisabled_NullAccessor() throws Exception {
+ doTestRefreshKey_KeyDisabled(null);
+ }
+
+ @Test
+ public void testRefreshKey_KeyDisabled() throws Exception {
+ doTestRefreshKey_KeyDisabled(mockAccessor);
+ }
+
+ private void doTestRefreshKey_KeyDisabled(KeymetaTableAccessor accessor) throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.ACTIVE, keyMetadata);
+ ManagedKeyData newKeyData =
+ new ManagedKeyData(keyCust, keyNamespace, testKey, ManagedKeyState.DISABLED, keyMetadata);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(newKeyData);
+
+ ManagedKeyData result = KeyManagementUtils.refreshKey(mockProvider, accessor, keyData);
+ assertNotNull(result);
+ assertEquals(newKeyData, result);
+ if (accessor != null) {
+ verify(accessor).disableKey(keyData);
+ }
+ }
}
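The refresh semantics these tests pin down can be summarized in a small sketch (names assumed, not the actual `KeyManagementUtils.refreshKey` signature): a refreshed key replaces the cached one on any state change, including DISABLED and INACTIVE, except that a FAILED result (or an exception from the provider) keeps the current good key:

```java
public class RefreshSketch {
  enum State { ACTIVE, INACTIVE, DISABLED, FAILED }

  /**
   * Pick the key state to keep after a refresh: a FAILED (or missing)
   * refresh result keeps the current good key; any other result wins,
   * including DISABLED and INACTIVE transitions.
   */
  static State pickAfterRefresh(State current, State refreshed) {
    if (refreshed == null || refreshed == State.FAILED) {
      return current; // refresh failed: keep the known-good key
    }
    return refreshed; // state change (or no change): accept the new copy
  }
}
```

This matches the intent of the tests above: a transient provider failure must not evict a usable key, while an explicit DISABLED answer must take effect immediately.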
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java
deleted file mode 100644
index 1012d2b5a08f..000000000000
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeyNamespaceUtil.java
+++ /dev/null
@@ -1,126 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.hadoop.hbase.keymeta;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertThrows;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hbase.HBaseClassTestRule;
-import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
-import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
-import org.apache.hadoop.hbase.client.RegionInfo;
-import org.apache.hadoop.hbase.client.RegionInfoBuilder;
-import org.apache.hadoop.hbase.client.TableDescriptor;
-import org.apache.hadoop.hbase.io.HFileLink;
-import org.apache.hadoop.hbase.io.crypto.KeymetaTestUtils;
-import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
-import org.apache.hadoop.hbase.regionserver.StoreContext;
-import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
-import org.apache.hadoop.hbase.testclassification.MiscTests;
-import org.apache.hadoop.hbase.testclassification.SmallTests;
-import org.junit.ClassRule;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
-
-@Category({ MiscTests.class, SmallTests.class })
-public class TestKeyNamespaceUtil {
- @ClassRule
- public static final HBaseClassTestRule CLASS_RULE =
- HBaseClassTestRule.forClass(TestKeyNamespaceUtil.class);
-
- @Test
- public void testConstructKeyNamespace_FromTableDescriptorAndFamilyDescriptor() {
- TableDescriptor tableDescriptor = mock(TableDescriptor.class);
- ColumnFamilyDescriptor familyDescriptor = mock(ColumnFamilyDescriptor.class);
- when(tableDescriptor.getTableName()).thenReturn(TableName.valueOf("test"));
- when(familyDescriptor.getNameAsString()).thenReturn("family");
- String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(tableDescriptor, familyDescriptor);
- assertEquals("test/family", keyNamespace);
- }
-
- @Test
- public void testConstructKeyNamespace_FromStoreContext() {
- // Test store context path construction
- TableName tableName = TableName.valueOf("test");
- RegionInfo regionInfo = RegionInfoBuilder.newBuilder(tableName).build();
- HRegionFileSystem regionFileSystem = mock(HRegionFileSystem.class);
- when(regionFileSystem.getRegionInfo()).thenReturn(regionInfo);
-
- ColumnFamilyDescriptor familyDescriptor = ColumnFamilyDescriptorBuilder.of("family");
-
- StoreContext storeContext = StoreContext.getBuilder().withRegionFileSystem(regionFileSystem)
- .withColumnFamilyDescriptor(familyDescriptor).build();
-
- String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(storeContext);
- assertEquals("test/family", keyNamespace);
- }
-
- @Test
- public void testConstructKeyNamespace_FromStoreFileInfo_RegularFile() {
- // Test both regular files and linked files
- StoreFileInfo storeFileInfo = mock(StoreFileInfo.class);
- when(storeFileInfo.isLink()).thenReturn(false);
- Path path = KeymetaTestUtils.createMockPath("test", "family");
- when(storeFileInfo.getPath()).thenReturn(path);
- String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(storeFileInfo);
- assertEquals("test/family", keyNamespace);
- }
-
- @Test
- public void testConstructKeyNamespace_FromStoreFileInfo_LinkedFile() {
- // Test both regular files and linked files
- StoreFileInfo storeFileInfo = mock(StoreFileInfo.class);
- HFileLink link = mock(HFileLink.class);
- when(storeFileInfo.isLink()).thenReturn(true);
- Path path = KeymetaTestUtils.createMockPath("test", "family");
- when(link.getOriginPath()).thenReturn(path);
- when(storeFileInfo.getLink()).thenReturn(link);
- String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(storeFileInfo);
- assertEquals("test/family", keyNamespace);
- }
-
- @Test
- public void testConstructKeyNamespace_FromPath() {
- // Test path parsing with different HBase directory structures
- Path path = KeymetaTestUtils.createMockPath("test", "family");
- String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(path);
- assertEquals("test/family", keyNamespace);
- }
-
- @Test
- public void testConstructKeyNamespace_FromStrings() {
- // Test string-based construction
- String tableName = "test";
- String family = "family";
- String keyNamespace = KeyNamespaceUtil.constructKeyNamespace(tableName, family);
- assertEquals("test/family", keyNamespace);
- }
-
- @Test
- public void testConstructKeyNamespace_NullChecks() {
- // Test null inputs for both table name and family
- assertThrows(NullPointerException.class,
- () -> KeyNamespaceUtil.constructKeyNamespace(null, "family"));
- assertThrows(NullPointerException.class,
- () -> KeyNamespaceUtil.constructKeyNamespace("test", null));
- }
-}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java
index 0e9c0eae2393..ee1c12f5f37c 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaEndpoint.java
@@ -66,6 +66,7 @@
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyRequest;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyResponse;
import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ManagedKeyState;
+import org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.SetManagedKeyRequest;
@Category({ MasterTests.class, SmallTests.class })
public class TestKeymetaEndpoint {
@@ -76,6 +77,7 @@ public class TestKeymetaEndpoint {
private static final String KEY_CUST = "keyCust";
private static final String KEY_NAMESPACE = "keyNamespace";
+ private static final byte[] KEY_NAMESPACE_BYTES = Bytes.toBytes(KEY_NAMESPACE);
private static final String KEY_METADATA1 = "keyMetadata1";
private static final String KEY_METADATA2 = "keyMetadata2";
@@ -94,6 +96,8 @@ public class TestKeymetaEndpoint {
@Mock
private RpcCallback<ManagedKeyResponse> rotateManagedKeyDone;
@Mock
+ private RpcCallback<ManagedKeyResponse> setManagedKeyDone;
+ @Mock
private RpcCallback<EmptyMsg> refreshManagedKeysDone;
KeymetaServiceEndpoint keymetaServiceEndpoint;
@@ -119,9 +123,9 @@ public void setUp() throws Exception {
responseBuilder = ManagedKeyResponse.newBuilder().setKeyState(ManagedKeyState.KEY_ACTIVE);
requestBuilder =
ManagedKeyRequest.newBuilder().setKeyNamespace(ManagedKeyData.KEY_SPACE_GLOBAL);
- keyData1 = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE,
+ keyData1 = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE_BYTES,
new SecretKeySpec("key1".getBytes(), "AES"), ACTIVE, KEY_METADATA1);
- keyData2 = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE,
+ keyData2 = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE_BYTES,
new SecretKeySpec("key2".getBytes(), "AES"), ACTIVE, KEY_METADATA2);
when(master.getKeymetaAdmin()).thenReturn(keymetaAdmin);
}
@@ -299,7 +303,8 @@ public void testDisableKeyManagement_Success() throws Exception {
// Arrange
ManagedKeyRequest request =
requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build();
- ManagedKeyData disabledKey = new ManagedKeyData(KEY_CUST.getBytes(), KEY_NAMESPACE, DISABLED);
+ ManagedKeyData disabledKey = new ManagedKeyData(
+ new KeyIdentityPrefixBytesBacked(KEY_CUST.getBytes(), KEY_NAMESPACE_BYTES), DISABLED);
when(keymetaAdmin.disableKeyManagement(any(), any())).thenReturn(disabledKey);
// Act
keyMetaAdminService.disableKeyManagement(controller, request, disableKeyManagementDone);
@@ -368,7 +373,7 @@ public void testDisableManagedKey_Success() throws Exception {
// Arrange
ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
.setKeyCustNs(requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build())
- .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getPartialIdentity())).build();
when(keymetaAdmin.disableManagedKey(any(), any(), any())).thenReturn(keyData1);
// Act
@@ -394,7 +399,7 @@ private void doTestDisableManagedKeyError(Class<? extends Exception> exType) thr
when(keymetaAdmin.disableManagedKey(any(), any(), any())).thenThrow(exType);
ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
.setKeyCustNs(requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes())).build())
- .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getPartialIdentity())).build();
// Act
keyMetaAdminService.disableManagedKey(controller, request, disableManagedKeyDone);
@@ -412,7 +417,7 @@ public void testDisableManagedKey_InvalidCust() throws Exception {
ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
.setKeyCustNs(
requestBuilder.setKeyCust(ByteString.EMPTY).setKeyNamespace(KEY_NAMESPACE).build())
- .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getPartialIdentity())).build();
keyMetaAdminService.disableManagedKey(controller, request, disableManagedKeyDone);
@@ -428,7 +433,7 @@ public void testDisableManagedKey_InvalidNamespace() throws Exception {
ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
.setKeyCustNs(requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes()))
.setKeyNamespace("").build())
- .setKeyMetadataHash(ByteString.copyFrom(keyData1.getKeyMetadataHash())).build();
+ .setKeyMetadataHash(ByteString.copyFrom(keyData1.getPartialIdentity())).build();
keyMetaAdminService.disableManagedKey(controller, request, disableManagedKeyDone);
@@ -558,4 +563,50 @@ public void testRefreshManagedKeys_InvalidNamespace() throws Exception {
verify(keymetaAdmin, never()).refreshManagedKeys(any(), any());
verify(refreshManagedKeysDone).run(EmptyMsg.getDefaultInstance());
}
+
+ @Test
+ public void testSetManagedKey_Success() throws Exception {
+ ManagedKeyRequest custNs = requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes()))
+ .setKeyNamespace(KEY_NAMESPACE).build();
+ SetManagedKeyRequest request =
+ SetManagedKeyRequest.newBuilder().setKeyCustNs(custNs).setKeyMetadata(KEY_METADATA1).build();
+ when(keymetaAdmin.setManagedKey(any(), anyString(), anyString())).thenReturn(keyData1);
+
+ keyMetaAdminService.setManagedKey(controller, request, setManagedKeyDone);
+
+ verify(setManagedKeyDone).run(any());
+ verify(controller, never()).setFailed(anyString());
+ verify(keymetaAdmin).setManagedKey(KEY_CUST.getBytes(), KEY_NAMESPACE, KEY_METADATA1);
+ }
+
+ @Test
+ public void testSetManagedKey_InvalidMetadata() throws Exception {
+ ManagedKeyRequest custNs = requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes()))
+ .setKeyNamespace(KEY_NAMESPACE).build();
+ SetManagedKeyRequest request =
+ SetManagedKeyRequest.newBuilder().setKeyCustNs(custNs).setKeyMetadata("").build();
+
+ keyMetaAdminService.setManagedKey(controller, request, setManagedKeyDone);
+
+ verify(controller).setFailed(contains("key_metadata must not be empty"));
+ verify(keymetaAdmin, never()).setManagedKey(any(), any(), any());
+ verify(setManagedKeyDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
+
+ @Test
+ public void testSetManagedKey_KeyException() throws Exception {
+ when(keymetaAdmin.setManagedKey(any(), anyString(), anyString()))
+ .thenThrow(new KeyException("bad key"));
+ ManagedKeyRequest custNs = requestBuilder.setKeyCust(ByteString.copyFrom(KEY_CUST.getBytes()))
+ .setKeyNamespace(KEY_NAMESPACE).build();
+ SetManagedKeyRequest request =
+ SetManagedKeyRequest.newBuilder().setKeyCustNs(custNs).setKeyMetadata(KEY_METADATA1).build();
+
+ keyMetaAdminService.setManagedKey(controller, request, setManagedKeyDone);
+
+ verify(controller).setFailed(contains("KeyException"));
+ verify(setManagedKeyDone)
+ .run(argThat(response -> response.getKeyState() == ManagedKeyState.KEY_FAILED));
+ }
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java
index fde1d81481c1..4f2c32a53a00 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestKeymetaTableAccessor.java
@@ -17,7 +17,7 @@
*/
package org.apache.hadoop.hbase.keymeta;
-import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL_BYTES;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE_DISABLED;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.DISABLED;
@@ -31,18 +31,16 @@
import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.KEY_STATE_QUAL_BYTES;
import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.REFRESHED_TIMESTAMP_QUAL_BYTES;
import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.STK_CHECKSUM_QUAL_BYTES;
-import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.constructRowKeyForCustNamespace;
-import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.constructRowKeyForMetadata;
import static org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor.parseFromResult;
+import static org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils.constructRowKeyForCustNamespace;
+import static org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils.constructRowKeyForIdentity;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertThrows;
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.any;
-import static org.mockito.Mockito.anyLong;
import static org.mockito.Mockito.eq;
-import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;
@@ -61,6 +59,7 @@
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Durability;
@@ -79,6 +78,8 @@
import org.apache.hadoop.hbase.testclassification.MasterTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
+import org.apache.hadoop.hbase.util.ManualEnvironmentEdge;
import org.junit.After;
import org.junit.Before;
import org.junit.ClassRule;
@@ -99,13 +100,23 @@
@Suite.SuiteClasses({ TestKeymetaTableAccessor.TestAdd.class,
TestKeymetaTableAccessor.TestAddWithNullableFields.class, TestKeymetaTableAccessor.TestGet.class,
TestKeymetaTableAccessor.TestDisableKey.class,
- TestKeymetaTableAccessor.TestUpdateActiveState.class, })
+ TestKeymetaTableAccessor.TestUpdateActiveState.class,
+ TestKeymetaTableAccessor.TestCustodianNamespaceLength.class,
+ TestKeymetaTableAccessor.TestRefreshTimestamp.class })
@Category({ MasterTests.class, SmallTests.class })
public class TestKeymetaTableAccessor {
protected static final String ALIAS = "custId1";
protected static final byte[] CUST_ID = ALIAS.getBytes();
+ protected static final Bytes CUST_ID_BYTES = new Bytes(CUST_ID);
protected static final String KEY_NAMESPACE = "namespace";
+ protected static final Bytes KEY_NAMESPACE_BYTES = new Bytes(KEY_NAMESPACE.getBytes());
protected static String KEY_METADATA = "metadata1";
+ protected static ManagedKeyIdentity KEY_IDENTITY_PREFIX =
+ new KeyIdentityPrefixBytesBacked(CUST_ID_BYTES, KEY_NAMESPACE_BYTES);
+ protected static ManagedKeyIdentity KEY_IDENTITY_FULL = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(CUST_ID_BYTES, KEY_NAMESPACE_BYTES, KEY_METADATA);
+ protected static ManagedKeyIdentity CUST_GLOBAL_ID =
+ new KeyIdentityPrefixBytesBacked(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
@Mock
protected MasterServices server;
@@ -149,7 +160,7 @@ public void setUp() throws Exception {
latestSystemKey = managedKeyProvider.getSystemKey("system-id".getBytes());
when(systemKeyCache.getLatestSystemKey()).thenReturn(latestSystemKey);
- when(systemKeyCache.getSystemKeyByChecksum(anyLong())).thenReturn(latestSystemKey);
+ when(systemKeyCache.getSystemKeyByIdentity(any(byte[].class))).thenReturn(latestSystemKey);
}
@After
@@ -177,7 +188,7 @@ public static Collection data() {
@Test
public void testAddKey() throws Exception {
managedKeyProvider.setMockedKeyState(ALIAS, keyState);
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData keyData = managedKeyProvider.getManagedKey(CUST_GLOBAL_ID);
accessor.addKey(keyData);
@@ -185,10 +196,13 @@ public void testAddKey() throws Exception {
List<Put> puts = putCaptor.getValue();
assertEquals(keyState == ACTIVE ? 2 : 1, puts.size());
if (keyState == ACTIVE) {
- assertPut(keyData, puts.get(0), constructRowKeyForCustNamespace(keyData), ACTIVE);
- assertPut(keyData, puts.get(1), constructRowKeyForMetadata(keyData), ACTIVE);
+ assertPut(keyData, puts.get(0), constructRowKeyForCustNamespace(keyData.getKeyCustodian(),
+ keyData.getKeyNamespaceBytes()), ACTIVE);
+ assertPut(keyData, puts.get(1), constructRowKeyForIdentity(keyData.getKeyCustodian(),
+ keyData.getKeyNamespaceBytes(), keyData.getPartialIdentity()), ACTIVE);
} else {
- assertPut(keyData, puts.get(0), constructRowKeyForMetadata(keyData), keyState);
+ assertPut(keyData, puts.get(0), constructRowKeyForIdentity(keyData.getKeyCustodian(),
+ keyData.getKeyNamespaceBytes(), keyData.getPartialIdentity()), keyState);
}
}
}
@@ -202,14 +216,15 @@ public static class TestAddWithNullableFields extends TestKeymetaTableAccessor {
@Captor
private ArgumentCaptor> batchCaptor;
+ @Captor
+ private ArgumentCaptor<List<Put>> putCaptor;
@Test
public void testAddKeyManagementStateMarker() throws Exception {
managedKeyProvider.setMockedKeyState(ALIAS, FAILED);
- ManagedKeyData keyData = new ManagedKeyData(CUST_ID, KEY_SPACE_GLOBAL, FAILED);
+ ManagedKeyData keyData = new ManagedKeyData(KEY_IDENTITY_PREFIX, FAILED);
- accessor.addKeyManagementStateMarker(keyData.getKeyCustodian(), keyData.getKeyNamespace(),
- keyData.getKeyState());
+ accessor.addKeyManagementStateMarker(keyData.getKeyIdentity(), keyData.getKeyState());
verify(table).batch(batchCaptor.capture(), any());
List mutations = batchCaptor.getValue();
@@ -221,8 +236,8 @@ public void testAddKeyManagementStateMarker() throws Exception {
Put put = (Put) mutation1;
Delete delete = (Delete) mutation2;
- // Verify the row key uses state value for metadata hash
- byte[] expectedRowKey = constructRowKeyForCustNamespace(CUST_ID, KEY_SPACE_GLOBAL);
+ // Row key must match KEY_IDENTITY_PREFIX (custodian + namespace), not global key space
+ byte[] expectedRowKey = constructRowKeyForCustNamespace(CUST_ID, KEY_NAMESPACE_BYTES.get());
assertEquals(0, Bytes.compareTo(expectedRowKey, put.getRow()));
Map<Bytes, Bytes> valueMap = getValueMap(put);
@@ -246,7 +261,23 @@ public void testAddKeyManagementStateMarker() throws Exception {
// Verify the row key is correct for a failure marker
assertEquals(0, Bytes.compareTo(expectedRowKey, delete.getRow()));
// Verify the key checksum, wrapped key, and STK checksum columns are deleted
- assertDeleteColumns(delete);
+ assertDeleteColumns(delete, true);
+ }
+
+ @Test
+ public void testAddKeyWithoutMetadataSkipsMetadataColumn() throws Exception {
+ ManagedKeyData keyData = new ManagedKeyData(KEY_IDENTITY_FULL, FAILED);
+
+ accessor.addKey(keyData);
+
+ verify(table).put(putCaptor.capture());
+ List<Put> puts = putCaptor.getValue();
+ assertEquals(1, puts.size());
+ Map<Bytes, Bytes> valueMap = getValueMap(puts.get(0));
+ assertNull(valueMap.get(new Bytes(DEK_METADATA_QUAL_BYTES)));
+ assertEquals(new Bytes(new byte[] { FAILED.getVal() }),
+ valueMap.get(new Bytes(KEY_STATE_QUAL_BYTES)));
+ assertNotNull(valueMap.get(new Bytes(REFRESHED_TIMESTAMP_QUAL_BYTES)));
}
}
@@ -269,6 +300,10 @@ public void setUp() throws Exception {
when(result1.isEmpty()).thenReturn(false);
when(result2.isEmpty()).thenReturn(false);
+ when(result1.getRow())
+ .thenReturn(KEY_IDENTITY_FULL.getFullIdentityView().copyBytesIfNecessary());
+ when(result2.getRow())
+ .thenReturn(KEY_IDENTITY_FULL.getFullIdentityView().copyBytesIfNecessary());
when(result1.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
.thenReturn(new byte[] { ACTIVE.getVal() });
when(result2.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
@@ -287,39 +322,50 @@ public void setUp() throws Exception {
@Test
public void testParseEmptyResult() throws Exception {
- Result result = mock(Result.class);
- when(result.isEmpty()).thenReturn(true);
+ assertNull(parseFromResult(server, KEY_IDENTITY_FULL, null));
+ assertNull(parseFromResult(server, KEY_IDENTITY_FULL, Result.EMPTY_RESULT));
+ }
- assertNull(parseFromResult(server, CUST_ID, KEY_NAMESPACE, null));
- assertNull(parseFromResult(server, CUST_ID, KEY_NAMESPACE, result));
+ @Test
+ public void testParseMarkerResultWithNonZeroPartialIdentityLength() throws Exception {
+ byte[] row = KEY_IDENTITY_FULL.getFullIdentityView().copyBytesIfNecessary();
+ Result result = Result.create(Arrays.asList(
+ new KeyValue(row, KEY_META_INFO_FAMILY, KEY_STATE_QUAL_BYTES, new byte[] { FAILED.getVal() }),
+ new KeyValue(row, KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES, Bytes.toBytes(0L))));
+
+ IllegalArgumentException ex = assertThrows(IllegalArgumentException.class,
+ () -> parseFromResult(server, KEY_IDENTITY_FULL, result));
+ assertTrue(ex.getMessage().contains("Partial identity length must be 0"));
+ assertTrue(ex.getMessage().contains("got: " + KEY_IDENTITY_FULL.getPartialIdentityLength()));
}
@Test
public void testGetActiveKeyMissingWrappedKey() throws Exception {
- Result result = mock(Result.class);
- when(table.get(any(Get.class))).thenReturn(result);
- when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
+ when(table.get(any(Get.class))).thenReturn(result1);
+ when(result1.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
.thenReturn(new byte[] { ACTIVE.getVal() }, new byte[] { INACTIVE.getVal() });
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(KEY_METADATA);
IOException ex;
- ex = assertThrows(IOException.class,
- () -> accessor.getKey(CUST_ID, KEY_SPACE_GLOBAL, keyMetadataHash));
+ ex = assertThrows(IOException.class, () -> accessor.getKey(KEY_IDENTITY_FULL));
assertEquals("ACTIVE key must have a wrapped key", ex.getMessage());
- ex = assertThrows(IOException.class,
- () -> accessor.getKey(CUST_ID, KEY_SPACE_GLOBAL, keyMetadataHash));
+ ex = assertThrows(IOException.class, () -> accessor.getKey(KEY_IDENTITY_FULL));
assertEquals("INACTIVE key must have a wrapped key", ex.getMessage());
}
@Test
public void testGetKeyMissingSTK() throws Exception {
when(result1.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_WRAPPED_BY_STK_QUAL_BYTES)))
.thenReturn(new byte[] { 0 });
- when(systemKeyCache.getSystemKeyByChecksum(anyLong())).thenReturn(null);
+ when(systemKeyCache.getSystemKeyByIdentity(any(byte[].class))).thenReturn(null);
when(table.get(any(Get.class))).thenReturn(result1);
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(KEY_METADATA);
- ManagedKeyData result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+ ManagedKeyData result = accessor.getKey(KEY_IDENTITY_FULL);
assertNull(result);
}
@@ -328,8 +374,7 @@ public void testGetKeyMissingSTK() throws Exception {
public void testGetKeyWithWrappedKey() throws Exception {
ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyData.getKeyMetadata());
- ManagedKeyData result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+ ManagedKeyData result = accessor.getKey(KEY_IDENTITY_FULL);
verify(table).get(any(Get.class));
assertNotNull(result);
@@ -341,7 +386,7 @@ public void testGetKeyWithWrappedKey() throws Exception {
assertEquals(ACTIVE, result.getKeyState());
// When DEK checksum doesn't match, we expect a null value.
- result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+ result = accessor.getKey(KEY_IDENTITY_FULL);
assertNull(result);
}
@@ -349,8 +394,7 @@ public void testGetKeyWithWrappedKey() throws Exception {
public void testGetKeyWithoutWrappedKey() throws Exception {
when(table.get(any(Get.class))).thenReturn(result2);
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata2);
- ManagedKeyData result = accessor.getKey(CUST_ID, KEY_NAMESPACE, keyMetadataHash);
+ ManagedKeyData result = accessor.getKey(KEY_IDENTITY_FULL);
verify(table).get(any(Get.class));
assertNotNull(result);
@@ -365,10 +409,10 @@ public void testGetKeyWithoutWrappedKey() throws Exception {
public void testGetAllKeys() throws Exception {
ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
- when(scanner.iterator()).thenReturn(List.of(result1, result2).iterator());
+ when(scanner.iterator()).thenReturn(Arrays.asList(result1, result2).iterator());
when(table.getScanner(any(Scan.class))).thenReturn(scanner);
- List<ManagedKeyData> allKeys = accessor.getAllKeys(CUST_ID, KEY_NAMESPACE, true);
+ List<ManagedKeyData> allKeys = accessor.getAllKeys(KEY_IDENTITY_PREFIX, true);
assertEquals(2, allKeys.size());
assertEquals(keyData.getKeyMetadata(), allKeys.get(0).getKeyMetadata());
@@ -376,28 +420,94 @@ public void testGetAllKeys() throws Exception {
verify(table).getScanner(any(Scan.class));
}
+ @Test
+ public void testGetAllKeysExcludeMarkersWhenIncludeMarkersFalse() throws Exception {
+ ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
+ byte[] markerRow = KEY_IDENTITY_PREFIX.getIdentityPrefixView().copyBytesIfNecessary();
+ Result markerResult = Result.create(Arrays.asList(
+ new KeyValue(markerRow, KEY_META_INFO_FAMILY, KEY_STATE_QUAL_BYTES, new byte[] { FAILED.getVal() }),
+ new KeyValue(markerRow, KEY_META_INFO_FAMILY, REFRESHED_TIMESTAMP_QUAL_BYTES, Bytes.toBytes(0L))));
+
+ when(scanner.iterator()).thenReturn(Arrays.asList(result1, markerResult).iterator());
+ when(table.getScanner(any(Scan.class))).thenReturn(scanner);
+
+ List<ManagedKeyData> allKeys = accessor.getAllKeys(KEY_IDENTITY_PREFIX, false);
+
+ assertEquals(1, allKeys.size());
+ assertEquals(keyData.getKeyMetadata(), allKeys.get(0).getKeyMetadata());
+ verify(table).getScanner(any(Scan.class));
+ }
+
+ @Test
+ public void testGetAllKeysSkipsNullParsedResults() throws Exception {
+ ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
+ Result missingStkResult = Mockito.mock(Result.class);
+ when(missingStkResult.isEmpty()).thenReturn(false);
+ when(missingStkResult.getRow())
+ .thenReturn(KEY_IDENTITY_FULL.getFullIdentityView().copyBytesIfNecessary());
+ when(missingStkResult.getValue(eq(KEY_META_INFO_FAMILY), eq(KEY_STATE_QUAL_BYTES)))
+ .thenReturn(new byte[] { ACTIVE.getVal() });
+ when(missingStkResult.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_METADATA_QUAL_BYTES)))
+ .thenReturn("metadata-missing-stk".getBytes());
+ when(missingStkResult.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_WRAPPED_BY_STK_QUAL_BYTES)))
+ .thenReturn(new byte[] { 1 });
+ when(missingStkResult.getValue(eq(KEY_META_INFO_FAMILY), eq(STK_CHECKSUM_QUAL_BYTES)))
+ .thenReturn(Bytes.toBytes(0L));
+ when(missingStkResult.getValue(eq(KEY_META_INFO_FAMILY), eq(REFRESHED_TIMESTAMP_QUAL_BYTES)))
+ .thenReturn(Bytes.toBytes(0L));
+ when(systemKeyCache.getSystemKeyByIdentity(any(byte[].class)))
+ .thenReturn(latestSystemKey, (ManagedKeyData) null);
+
+ when(scanner.iterator()).thenReturn(Arrays.asList(result1, missingStkResult).iterator());
+ when(table.getScanner(any(Scan.class))).thenReturn(scanner);
+
+ List<ManagedKeyData> allKeys = accessor.getAllKeys(KEY_IDENTITY_PREFIX, true);
+
+ assertEquals(1, allKeys.size());
+ assertEquals(keyData.getKeyMetadata(), allKeys.get(0).getKeyMetadata());
+ verify(table).getScanner(any(Scan.class));
+ }
+
@Test
public void testGetActiveKey() throws Exception {
ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
- when(scanner.iterator()).thenReturn(List.of(result1).iterator());
+ when(scanner.iterator()).thenReturn(Arrays.asList(result1).iterator());
when(table.get(any(Get.class))).thenReturn(result1);
- ManagedKeyData activeKey = accessor.getKeyManagementStateMarker(CUST_ID, KEY_NAMESPACE);
+ ManagedKeyData activeKey = accessor.getKeyManagementStateMarker(KEY_IDENTITY_PREFIX);
assertNotNull(activeKey);
assertEquals(keyData, activeKey);
verify(table).get(any(Get.class));
}
+ @Test
+ public void testGetKeyManagementStateMarkerWithFullIdentityCoversFalseBranch() throws Exception {
+ ManagedKeyData keyData = setupActiveKey(CUST_ID, result1);
+
+ when(table.get(any(Get.class))).thenReturn(result1);
+
+ ManagedKeyData activeKey = accessor.getKeyManagementStateMarker(KEY_IDENTITY_FULL);
+
+ assertNotNull(activeKey);
+ assertEquals(keyData.getKeyMetadata(), activeKey.getKeyMetadata());
+ assertEquals(ACTIVE, activeKey.getKeyState());
+ verify(table).get(any(Get.class));
+ }
+
private ManagedKeyData setupActiveKey(byte[] custId, Result result) throws Exception {
- ManagedKeyData keyData = managedKeyProvider.getManagedKey(custId, KEY_NAMESPACE);
+ ManagedKeyData keyData = managedKeyProvider.getManagedKey(KEY_IDENTITY_PREFIX);
byte[] dekWrappedBySTK =
EncryptionUtil.wrapKey(conf, null, keyData.getTheKey(), latestSystemKey.getTheKey());
when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_WRAPPED_BY_STK_QUAL_BYTES)))
.thenReturn(dekWrappedBySTK);
when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_CHECKSUM_QUAL_BYTES)))
.thenReturn(Bytes.toBytes(keyData.getKeyChecksum()), Bytes.toBytes(0L));
+ when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(STK_CHECKSUM_QUAL_BYTES))).thenReturn(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(latestSystemKey.getKeyCustodian(),
+ latestSystemKey.getKeyNamespaceBytes(), latestSystemKey.getPartialIdentity()));
// Update the mock to return the correct metadata from the keyData
when(result.getValue(eq(KEY_META_INFO_FAMILY), eq(DEK_METADATA_QUAL_BYTES)))
.thenReturn(keyData.getKeyMetadata().getBytes());
@@ -417,7 +527,10 @@ protected void assertPut(ManagedKeyData keyData, Put put, byte[] rowKey,
if (keyData.getTheKey() != null) {
assertNotNull(valueMap.get(new Bytes(DEK_CHECKSUM_QUAL_BYTES)));
assertNotNull(valueMap.get(new Bytes(DEK_WRAPPED_BY_STK_QUAL_BYTES)));
- assertEquals(new Bytes(Bytes.toBytes(latestSystemKey.getKeyChecksum())),
+ assertEquals(
+ new Bytes(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(latestSystemKey.getKeyCustodian(),
+ latestSystemKey.getKeyNamespaceBytes(), latestSystemKey.getPartialIdentity())),
valueMap.get(new Bytes(STK_CHECKSUM_QUAL_BYTES)));
} else {
assertNull(valueMap.get(new Bytes(DEK_CHECKSUM_QUAL_BYTES)));
@@ -432,12 +545,12 @@ protected void assertPut(ManagedKeyData keyData, Put put, byte[] rowKey,
}
// Verify the key checksum, wrapped key, and STK checksum columns are deleted
- private static void assertDeleteColumns(Delete delete) {
+ private static void assertDeleteColumns(Delete delete, boolean includeMetadata) {
Map<byte[], List<Cell>> familyCellMap = delete.getFamilyCellMap();
assertTrue(familyCellMap.containsKey(KEY_META_INFO_FAMILY));
List<Cell> cells = familyCellMap.get(KEY_META_INFO_FAMILY);
- assertEquals(3, cells.size());
+ assertEquals(includeMetadata ? 4 : 3, cells.size());
// Verify each column is present in the delete
Set<byte[]> qualifiers =
@@ -446,6 +559,9 @@ private static void assertDeleteColumns(Delete delete) {
assertTrue(qualifiers.stream().anyMatch(q -> Bytes.equals(q, DEK_CHECKSUM_QUAL_BYTES)));
assertTrue(qualifiers.stream().anyMatch(q -> Bytes.equals(q, DEK_WRAPPED_BY_STK_QUAL_BYTES)));
assertTrue(qualifiers.stream().anyMatch(q -> Bytes.equals(q, STK_CHECKSUM_QUAL_BYTES)));
+ if (includeMetadata) {
+ assertTrue(qualifiers.stream().anyMatch(q -> Bytes.equals(q, DEK_METADATA_QUAL_BYTES)));
+ }
}
private static Map<Bytes, Bytes> getValueMap(Mutation mutation) {
@@ -485,8 +601,10 @@ public static Collection data() {
@Test
public void testDisableKey() throws Exception {
+ ManagedKeyIdentity fullKeyIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(CUST_ID_BYTES, KEY_NAMESPACE_BYTES, "testMetadata");
ManagedKeyData keyData =
- new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, keyState, "testMetadata");
+ new ManagedKeyData(fullKeyIdentity, null, keyState, "testMetadata", 123L);
accessor.disableKey(keyData);
@@ -496,21 +614,46 @@ public void testDisableKey() throws Exception {
int putIndex = 0;
ManagedKeyState targetState = keyState == ACTIVE ? ACTIVE_DISABLED : INACTIVE_DISABLED;
if (keyState == ACTIVE) {
- assertTrue(
- Bytes.compareTo(constructRowKeyForCustNamespace(keyData), mutations.get(0).getRow())
- == 0);
+ assertTrue(Bytes.compareTo(constructRowKeyForCustNamespace(keyData.getKeyCustodian(),
+ keyData.getKeyNamespaceBytes()), mutations.get(0).getRow()) == 0);
++putIndex;
}
- assertPut(keyData, (Put) mutations.get(putIndex), constructRowKeyForMetadata(keyData),
+ assertPut(keyData, (Put) mutations.get(putIndex),
+ constructRowKeyForIdentity(keyData.getKeyCustodian(), keyData.getKeyNamespaceBytes(),
+ keyData.getPartialIdentity()),
targetState);
if (keyState == INACTIVE) {
- assertTrue(
- Bytes.compareTo(constructRowKeyForMetadata(keyData), mutations.get(putIndex + 1).getRow())
- == 0);
+ assertTrue(Bytes.compareTo(constructRowKeyForIdentity(keyData.getKeyCustodian(),
+ keyData.getKeyNamespaceBytes(), keyData.getPartialIdentity()),
+ mutations.get(putIndex + 1).getRow()) == 0);
// Verify the key checksum, wrapped key, and STK checksum columns are deleted
- assertDeleteColumns((Delete) mutations.get(putIndex + 1));
+ assertDeleteColumns((Delete) mutations.get(putIndex + 1), false);
}
}
+
+ @Test
+ public void testDisableKeySkipCustNamespaceRowWhenActive() throws Exception {
+ ManagedKeyIdentity fullKeyIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(CUST_ID_BYTES, KEY_NAMESPACE_BYTES, "testMetadata");
+ ManagedKeyData keyData =
+ new ManagedKeyData(fullKeyIdentity, null, ACTIVE, "testMetadata", 123L);
+
+ accessor.disableKey(keyData, false);
+
+ verify(table).batch(mutationsCaptor.capture(), any());
+ List<Mutation> mutations = mutationsCaptor.getValue();
+ // When deleteCustNamespaceRow=false, ACTIVE key should not delete cust+namespace row.
+ assertEquals(2, mutations.size());
+ assertTrue(mutations.get(0) instanceof Put);
+ assertTrue(mutations.get(1) instanceof Delete);
+ assertTrue(Bytes.compareTo(constructRowKeyForIdentity(keyData.getKeyCustodian(),
+ keyData.getKeyNamespaceBytes(), keyData.getPartialIdentity()), mutations.get(0).getRow())
+ == 0);
+ assertTrue(Bytes.compareTo(constructRowKeyForIdentity(keyData.getKeyCustodian(),
+ keyData.getKeyNamespaceBytes(), keyData.getPartialIdentity()), mutations.get(1).getRow())
+ == 0);
+ assertDeleteColumns((Delete) mutations.get(1), false);
+ }
}
/**
@@ -529,12 +672,13 @@ public static class TestUpdateActiveState extends TestKeymetaTableAccessor {
@Test
public void testUpdateActiveStateFromInactiveToActive() throws Exception {
ManagedKeyData keyData =
- new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, INACTIVE, "metadata", 123L);
- ManagedKeyData systemKey =
- new ManagedKeyData(new byte[] { 1 }, KEY_SPACE_GLOBAL, null, ACTIVE, "syskey", 100L);
+ new ManagedKeyData(KEY_IDENTITY_FULL, null, INACTIVE, "metadata", 123L);
+ ManagedKeyIdentity fullKeyIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(new Bytes(new byte[] { 1 }), KEY_NAMESPACE_BYTES, "syskey");
+ ManagedKeyData systemKey = new ManagedKeyData(fullKeyIdentity, null, ACTIVE, "syskey", 100L);
when(systemKeyCache.getLatestSystemKey()).thenReturn(systemKey);
- accessor.updateActiveState(keyData, ACTIVE);
+ accessor.updateActiveState(keyData, ACTIVE, false);
verify(table).batch(mutationsCaptor.capture(), any());
List<Mutation> mutations = mutationsCaptor.getValue();
@@ -544,21 +688,34 @@ public void testUpdateActiveStateFromInactiveToActive() throws Exception {
@Test
public void testUpdateActiveStateFromActiveToInactive() throws Exception {
ManagedKeyData keyData =
- new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, ACTIVE, "metadata", 123L);
+ new ManagedKeyData(KEY_IDENTITY_FULL, null, ACTIVE, "metadata", 123L);
- accessor.updateActiveState(keyData, INACTIVE);
+ accessor.updateActiveState(keyData, INACTIVE, false);
verify(table).batch(mutationsCaptor.capture(), any());
List<Mutation> mutations = mutationsCaptor.getValue();
assertEquals(2, mutations.size());
}
+ @Test
+ public void testUpdateActiveStateFromActiveToInactiveSkipDelete() throws Exception {
+ ManagedKeyData keyData =
+ new ManagedKeyData(KEY_IDENTITY_FULL, null, ACTIVE, "metadata", 123L);
+
+ accessor.updateActiveState(keyData, INACTIVE, true);
+
+ verify(table).batch(mutationsCaptor.capture(), any());
+ List<Mutation> mutations = mutationsCaptor.getValue();
+ assertEquals(1, mutations.size());
+ assertTrue(mutations.get(0) instanceof Put);
+ }
+
@Test
public void testUpdateActiveStateNoOp() throws Exception {
ManagedKeyData keyData =
- new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, ACTIVE, "metadata", 123L);
+ new ManagedKeyData(KEY_IDENTITY_FULL, null, ACTIVE, "metadata", 123L);
- accessor.updateActiveState(keyData, ACTIVE);
+ accessor.updateActiveState(keyData, ACTIVE, false);
verify(table, Mockito.never()).batch(any(), any());
}
@@ -566,12 +723,13 @@ public void testUpdateActiveStateNoOp() throws Exception {
@Test
public void testUpdateActiveStateFromDisabledToActive() throws Exception {
ManagedKeyData keyData =
- new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, DISABLED, "metadata", 123L);
- ManagedKeyData systemKey =
- new ManagedKeyData(new byte[] { 1 }, KEY_SPACE_GLOBAL, null, ACTIVE, "syskey", 100L);
+ new ManagedKeyData(KEY_IDENTITY_FULL, null, DISABLED, "metadata", 123L);
+ ManagedKeyIdentity fullKeyIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(new Bytes(new byte[] { 1 }), KEY_NAMESPACE_BYTES, "syskey");
+ ManagedKeyData systemKey = new ManagedKeyData(fullKeyIdentity, null, ACTIVE, "syskey", 100L);
when(systemKeyCache.getLatestSystemKey()).thenReturn(systemKey);
- accessor.updateActiveState(keyData, ACTIVE);
+ accessor.updateActiveState(keyData, ACTIVE, false);
verify(table).batch(mutationsCaptor.capture(), any());
List<Mutation> mutations = mutationsCaptor.getValue();
@@ -582,10 +740,129 @@ public void testUpdateActiveStateFromDisabledToActive() throws Exception {
@Test
public void testUpdateActiveStateInvalidNewState() {
ManagedKeyData keyData =
- new ManagedKeyData(CUST_ID, KEY_NAMESPACE, null, ACTIVE, "metadata", 123L);
+ new ManagedKeyData(KEY_IDENTITY_FULL, null, ACTIVE, "metadata", 123L);
assertThrows(IllegalArgumentException.class,
- () -> accessor.updateActiveState(keyData, DISABLED));
+ () -> accessor.updateActiveState(keyData, DISABLED, false));
+ }
+ }
+
+ /**
+ * Tests for custodian and namespace length limits (max 255 bytes / 255 chars) and row key format.
+ */
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestCustodianNamespaceLength extends TestKeymetaTableAccessor {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestCustodianNamespaceLength.class);
+
+ @Test
+ public void testConstructRowKeyUsesSingleByteForCustodianLength() {
+ byte[] keyCust = new byte[] { 1, 2, 3 };
+ String keyNamespace = "ns";
+ byte[] keyNamespaceBytes = Bytes.toBytes(keyNamespace);
+ byte[] rowKey = constructRowKeyForCustNamespace(keyCust, keyNamespaceBytes);
+ // Format: 1 byte custLen + custodian + 1 byte nsLen + namespace UTF-8 bytes
+ int nsLen = keyNamespaceBytes.length;
+ assertEquals(1 + keyCust.length + 1 + nsLen, rowKey.length);
+ assertEquals(keyCust.length, rowKey[0] & 0xFF);
+ assertTrue("Row key should contain custodian at offset 1",
+ Bytes.equals(keyCust, Bytes.copy(rowKey, 1, keyCust.length)));
+ int offsetAfterCust = 1 + keyCust.length;
+ int storedNsLen = rowKey[offsetAfterCust] & 0xFF;
+ assertEquals(keyNamespace, Bytes.toString(rowKey, offsetAfterCust + 1, storedNsLen));
+ }
+
+ @Test
+ public void testMaxLengthCustodianAndNamespaceAllowed() throws Exception {
+ byte[] maxCust = new byte[ManagedKeyData.MAX_UNSIGNED_BYTE];
+ for (int i = 0; i < maxCust.length; i++) {
+ maxCust[i] = (byte) ('a' + (i % 26));
+ }
+ StringBuilder nsSb = new StringBuilder(ManagedKeyData.MAX_UNSIGNED_BYTE);
+ for (int i = 0; i < ManagedKeyData.MAX_UNSIGNED_BYTE; i++) {
+ nsSb.append('n');
+ }
+ String maxNamespace = nsSb.toString();
+ byte[] maxNamespaceBytes = Bytes.toBytes(maxNamespace);
+
+ byte[] rowKey = constructRowKeyForCustNamespace(maxCust, maxNamespaceBytes);
+ assertNotNull(rowKey);
+ assertEquals(1 + maxCust.length + 1 + maxNamespaceBytes.length, rowKey.length);
+
+ ManagedKeyIdentity maxCustIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(new Bytes(maxCust), new Bytes(maxNamespaceBytes), "meta");
+ ManagedKeyData keyData = new ManagedKeyData(maxCustIdentity, null, INACTIVE, "meta", 0L);
+ accessor.addKey(keyData);
+ }
+
+ @Test
+ public void testEmptyCustodianRejected() {
+ byte[] emptyCust = new byte[0];
+ byte[] keyNamespaceBytes = Bytes.toBytes("ns");
+ IllegalArgumentException ex = assertThrows(IllegalArgumentException.class,
+ () -> constructRowKeyForCustNamespace(emptyCust, keyNamespaceBytes));
+ assertTrue(ex.getMessage(), ex.getMessage().contains("custodian length must be 1-255"));
+ }
+
+ @Test
+ public void testEmptyNamespaceRejected() {
+ byte[] keyCust = new byte[] { 1 };
+ byte[] emptyNamespaceBytes = Bytes.toBytes("");
+ IllegalArgumentException ex = assertThrows(IllegalArgumentException.class,
+ () -> constructRowKeyForCustNamespace(keyCust, emptyNamespaceBytes));
+ assertTrue(ex.getMessage(), ex.getMessage().contains("namespace length must be 1-255"));
+ }
+ }
+
+ @RunWith(BlockJUnit4ClassRunner.class)
+ @Category({ MasterTests.class, SmallTests.class })
+ public static class TestRefreshTimestamp extends TestKeymetaTableAccessor {
+ @ClassRule
+ public static final HBaseClassTestRule CLASS_RULE =
+ HBaseClassTestRule.forClass(TestRefreshTimestamp.class);
+
+ @Captor
+ private ArgumentCaptor<List<Mutation>> mutationsCaptor;
+
+ @Test
+ public void testUpdateRefreshTimestamp_ActiveState() throws Exception {
+ doTestUpdateRefreshTimestamp(ACTIVE);
+ }
+
+ @Test
+ public void testUpdateRefreshTimestamp_InactiveState() throws Exception {
+ doTestUpdateRefreshTimestamp(INACTIVE);
+ }
+
+ public void doTestUpdateRefreshTimestamp(ManagedKeyState state) throws Exception {
+ ManagedKeyData keyData = new ManagedKeyData(KEY_IDENTITY_FULL, null, state, "metadata", 123L);
+
+ // Inject timestamp via environment edge manager and verify that it is used in the mutation.
+ long newTimestamp = System.currentTimeMillis();
+ ManualEnvironmentEdge edge = new ManualEnvironmentEdge();
+ EnvironmentEdgeManager.injectEdge(edge);
+ edge.setValue(newTimestamp);
+ try {
+ accessor.updateRefreshTimestamp(keyData);
+ } finally {
+ EnvironmentEdgeManager.reset();
+ }
+
+ // Verify that the batch contains 2 mutations for ACTIVE (1 for INACTIVE) and that each
+ // mutation contains only the refreshed timestamp column.
+ verify(table).batch(mutationsCaptor.capture(), any());
+ List<Mutation> mutations = mutationsCaptor.getValue();
+ assertEquals(state == ACTIVE ? 2 : 1, mutations.size());
+ Map<Bytes, Bytes> valueMap0 = getValueMap(mutations.get(0));
+ assertTrue(Bytes.compareTo(Bytes.toBytes(newTimestamp),
+ valueMap0.get(new Bytes(REFRESHED_TIMESTAMP_QUAL_BYTES)).copyBytes()) == 0);
+ if (state == ACTIVE) {
+ Map<Bytes, Bytes> valueMap1 = getValueMap(mutations.get(1));
+ assertTrue(Bytes.compareTo(Bytes.toBytes(newTimestamp),
+ valueMap1.get(new Bytes(REFRESHED_TIMESTAMP_QUAL_BYTES)).copyBytes()) == 0);
+ }
}
}
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java
index 0b00df9e57b6..a3a08859e4eb 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeyDataCache.java
@@ -17,7 +17,8 @@
*/
package org.apache.hadoop.hbase.keymeta;
-import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL_BYTES;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.DISABLED;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.FAILED;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.INACTIVE;
@@ -26,8 +27,8 @@
import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertThrows;
import static org.junit.Assert.assertTrue;
-import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.clearInvocations;
import static org.mockito.Mockito.doReturn;
@@ -41,6 +42,7 @@
import java.io.IOException;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
+import java.security.Key;
import java.util.Arrays;
import java.util.stream.Collectors;
import net.bytebuddy.ByteBuddy;
@@ -61,6 +63,7 @@
import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
import org.apache.hadoop.hbase.testclassification.MasterTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.ClassRule;
@@ -82,8 +85,30 @@
public class TestManagedKeyDataCache {
private static final String ALIAS = "cust1";
private static final byte[] CUST_ID = ALIAS.getBytes();
+ private static final Bytes CUST_ID_BYTES = new Bytes(CUST_ID);
+ private static final ManagedKeyIdentity CUST_KEY_SPACE_GLOBAL_ID =
+ new KeyIdentityPrefixBytesBacked(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
+ private static final ManagedKeyIdentity CUST_NAMESPACE1_ID =
+ new KeyIdentityPrefixBytesBacked(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace1")));
private static Class<? extends MockManagedKeyProvider> providerClass;
+ /**
+ * Build FullKeyIdentity for use with getEntry(FullKeyIdentity, String, byte[]). Uses
+ * {@link KeyIdentityBytesBacked} for convenience; any {@link ManagedKeyIdentity} implementation
+ * is interchangeable as cache keys because equality and hashCode are content-based (see
+ * {@link ManagedKeyIdentity#contentEquals
+ * contentEquals}/{@link ManagedKeyIdentity#contentHashCode contentHashCode}).
+ * {@link TestManagedKeyIdentity} verifies cross-type equality, hashCode consistency, and map/set
+ * interoperability, so parameterizing these tests by concrete type is not required.
+ */
+ private static ManagedKeyIdentity fullKeyIdentity(byte[] cust, byte[] ns, byte[] partial) {
+ if (partial == null) {
+ return new KeyIdentityPrefixBytesBacked(new Bytes(cust), new Bytes(ns));
+ } else {
+ return new KeyIdentityBytesBacked(new Bytes(cust), new Bytes(ns), new Bytes(partial));
+ }
+ }
+
@Mock
private Server server;
@Spy
@@ -153,49 +178,6 @@ public void testEmptyCache() throws Exception {
assertEquals(0, cache.getGenericCacheEntryCount());
assertEquals(0, cache.getActiveCacheEntryCount());
}
-
- @Test
- public void testActiveKeysCacheKeyEqualsAndHashCode() {
- byte[] custodian1 = new byte[] { 1, 2, 3 };
- byte[] custodian2 = new byte[] { 1, 2, 3 };
- byte[] custodian3 = new byte[] { 4, 5, 6 };
- String namespace1 = "ns1";
- String namespace2 = "ns2";
-
- // Reflexive
- ManagedKeyDataCache.ActiveKeysCacheKey key1 =
- new ManagedKeyDataCache.ActiveKeysCacheKey(custodian1, namespace1);
- assertTrue(key1.equals(key1));
-
- // Symmetric and consistent for equal content
- ManagedKeyDataCache.ActiveKeysCacheKey key2 =
- new ManagedKeyDataCache.ActiveKeysCacheKey(custodian2, namespace1);
- assertTrue(key1.equals(key2));
- assertTrue(key2.equals(key1));
- assertEquals(key1.hashCode(), key2.hashCode());
-
- // Different custodian
- ManagedKeyDataCache.ActiveKeysCacheKey key3 =
- new ManagedKeyDataCache.ActiveKeysCacheKey(custodian3, namespace1);
- assertFalse(key1.equals(key3));
- assertFalse(key3.equals(key1));
-
- // Different namespace
- ManagedKeyDataCache.ActiveKeysCacheKey key4 =
- new ManagedKeyDataCache.ActiveKeysCacheKey(custodian1, namespace2);
- assertFalse(key1.equals(key4));
- assertFalse(key4.equals(key1));
-
- // Null and different class
- assertFalse(key1.equals(null));
- assertFalse(key1.equals("not a key"));
-
- // Both fields different
- ManagedKeyDataCache.ActiveKeysCacheKey key5 =
- new ManagedKeyDataCache.ActiveKeysCacheKey(custodian3, namespace2);
- assertFalse(key1.equals(key5));
- assertFalse(key5.equals(key1));
- }
}
@RunWith(BlockJUnit4ClassRunner.class)
@@ -213,121 +195,132 @@ public void setUp() {
@Test
public void testGenericCacheForInvalidMetadata() throws Exception {
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
- verify(testProvider).unwrapKey(any(String.class), any());
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ "test-metadata", null));
+ verify(testProvider).unwrapKey(any(ManagedKeyIdentity.class), any(String.class), any());
}
@Test
public void testWithInvalidProvider() throws Exception {
- ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- doThrow(new IOException("Test exception")).when(testProvider).unwrapKey(any(String.class),
- any());
+ ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ doThrow(new IOException("Test exception")).when(testProvider)
+ .unwrapKey(any(ManagedKeyIdentity.class), any(String.class), any());
// With no L2 and invalid provider, there will be no entry.
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey1.getKeyMetadata(), null));
- verify(testProvider).unwrapKey(any(String.class), any());
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ globalKey1.getKeyMetadata(), null));
+ verify(testProvider).unwrapKey(any(ManagedKeyIdentity.class), any(String.class), any());
clearInvocations(testProvider);
// A second call to getEntry should not result in a call to the provider due to -ve entry.
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey1.getKeyMetadata(), null));
- verify(testProvider, never()).unwrapKey(any(String.class), any());
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ globalKey1.getKeyMetadata(), null));
+ verify(testProvider, never()).unwrapKey(any(ManagedKeyIdentity.class), any(String.class),
+ any());
//
- doThrow(new IOException("Test exception")).when(testProvider).getManagedKey(any(),
- any(String.class));
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider).getManagedKey(any(), any(String.class));
+ doThrow(new IOException("Test exception")).when(testProvider).getManagedKey(any());
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
// A second call to getActiveEntry should not result in a call to the provider due to -ve
// entry.
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider, never()).getManagedKey(any());
}
@Test
public void testGenericCache() throws Exception {
- ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
assertEquals(globalKey1,
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey1.getKeyMetadata(), null));
- verify(testProvider).getManagedKey(any(), any(String.class));
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ globalKey1.getKeyMetadata(), null));
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
- ManagedKeyData globalKey2 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData globalKey2 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
assertEquals(globalKey2,
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey2.getKeyMetadata(), null));
- verify(testProvider).getManagedKey(any(), any(String.class));
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ globalKey2.getKeyMetadata(), null));
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
- ManagedKeyData globalKey3 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData globalKey3 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
assertEquals(globalKey3,
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, globalKey3.getKeyMetadata(), null));
- verify(testProvider).getManagedKey(any(), any(String.class));
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ globalKey3.getKeyMetadata(), null));
+ verify(testProvider).getManagedKey(any());
}
@Test
public void testActiveKeysCache() throws Exception {
- assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider).getManagedKey(any(), any(String.class));
+ assertNotNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
- ManagedKeyData activeKey = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData activeKey = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(activeKey);
- assertEquals(activeKey, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ assertEquals(activeKey, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider, never()).getManagedKey(any());
}
@Test
public void testGenericCacheOperations() throws Exception {
- ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- ManagedKeyData nsKey1 = testProvider.getManagedKey(CUST_ID, "namespace1");
+ ManagedKeyData globalKey1 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ ManagedKeyData nsKey1 = testProvider.getManagedKey(CUST_NAMESPACE1_ID);
assertGenericCacheEntries(nsKey1, globalKey1);
- ManagedKeyData globalKey2 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData globalKey2 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
assertGenericCacheEntries(globalKey2, nsKey1, globalKey1);
- ManagedKeyData nsKey2 = testProvider.getManagedKey(CUST_ID, "namespace1");
+ ManagedKeyData nsKey2 = testProvider.getManagedKey(CUST_NAMESPACE1_ID);
assertGenericCacheEntries(nsKey2, globalKey2, nsKey1, globalKey1);
}
@Test
public void testActiveKeyGetNoActive() throws Exception {
testProvider.setMockedKeyState(ALIAS, FAILED);
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider).getManagedKey(any(), any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider, never()).getManagedKey(any());
}
@Test
public void testActiveKeysCacheOperations() throws Exception {
- assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- assertNotNull(cache.getActiveEntry(CUST_ID, "namespace1"));
+ assertNotNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ assertNotNull(cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace1"))));
assertEquals(2, cache.getActiveCacheEntryCount());
cache.clearCache();
assertEquals(0, cache.getActiveCacheEntryCount());
- assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertNotNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
assertEquals(1, cache.getActiveCacheEntryCount());
}
@Test
public void testGenericCacheUsingActiveKeysCacheOverProvider() throws Exception {
- ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key);
- assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
- verify(testProvider, never()).unwrapKey(any(String.class), any());
+ assertEquals(key, cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key.getKeyMetadata(), null));
+ verify(testProvider, never()).unwrapKey(any(ManagedKeyIdentity.class), any(String.class),
+ any());
}
@Test
public void testThatActiveKeysCache_SkipsProvider_WhenLoadedViaGenericCache() throws Exception {
- ManagedKeyData key1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- assertEquals(key1, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null));
- ManagedKeyData key2 = testProvider.getManagedKey(CUST_ID, "namespace1");
- assertEquals(key2, cache.getEntry(CUST_ID, "namespace1", key2.getKeyMetadata(), null));
- verify(testProvider, times(2)).getManagedKey(any(), any(String.class));
+ ManagedKeyData key1 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ assertEquals(key1, cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), key1.getKeyMetadata(), null));
+ ManagedKeyData key2 = testProvider.getManagedKey(CUST_NAMESPACE1_ID);
+ assertEquals(key2, cache.getEntry(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace1"), null),
+ key2.getKeyMetadata(), null));
+ verify(testProvider, times(2)).getManagedKey(any());
assertEquals(2, cache.getActiveCacheEntryCount());
clearInvocations(testProvider);
- assertEquals(key1, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- assertEquals(key2, cache.getActiveEntry(CUST_ID, "namespace1"));
+ assertEquals(key1, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ assertEquals(key2,
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace1"))));
// ACTIVE keys are automatically added to activeKeysCache when loaded
// via getEntry, so getActiveEntry will find them there and won't call the provider
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ verify(testProvider, never()).getManagedKey(any());
cache.clearCache();
assertEquals(0, cache.getActiveCacheEntryCount());
}
@@ -335,63 +328,75 @@ public void testThatActiveKeysCache_SkipsProvider_WhenLoadedViaGenericCache() th
@Test
public void testThatNonActiveKey_IsIgnored_WhenLoadedViaGenericCache() throws Exception {
testProvider.setMockedKeyState(ALIAS, FAILED);
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key.getKeyMetadata(), null));
assertEquals(0, cache.getActiveCacheEntryCount());
testProvider.setMockedKeyState(ALIAS, DISABLED);
- key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key.getKeyMetadata(), null));
assertEquals(0, cache.getActiveCacheEntryCount());
testProvider.setMockedKeyState(ALIAS, INACTIVE);
- key = testProvider.getManagedKey(CUST_ID, "namespace1");
- assertEquals(key, cache.getEntry(CUST_ID, "namespace1", key.getKeyMetadata(), null));
+ key = testProvider.getManagedKey(CUST_NAMESPACE1_ID);
+ assertEquals(key, cache.getEntry(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace1"), null),
+ key.getKeyMetadata(), null));
assertEquals(0, cache.getActiveCacheEntryCount());
}
@Test
public void testActiveKeysCacheWithMultipleCustodiansInGenericCache() throws Exception {
- ManagedKeyData key1 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- assertNotNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null));
+ ManagedKeyData key1 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ assertNotNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key1.getKeyMetadata(), null));
String alias2 = "cust2";
byte[] cust_id2 = alias2.getBytes();
- ManagedKeyData key2 = testProvider.getManagedKey(cust_id2, KEY_SPACE_GLOBAL);
- assertNotNull(cache.getEntry(cust_id2, KEY_SPACE_GLOBAL, key2.getKeyMetadata(), null));
- assertNotNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ ManagedKeyData key2 = testProvider.getManagedKey(
+ new KeyIdentityPrefixBytesBacked(new Bytes(cust_id2), KEY_SPACE_GLOBAL_BYTES));
+ assertNotNull(cache.getEntry(fullKeyIdentity(cust_id2, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key2.getKeyMetadata(), null));
+ assertNotNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
// ACTIVE keys are automatically added to activeKeysCache when loaded.
assertEquals(2, cache.getActiveCacheEntryCount());
}
@Test
public void testActiveKeysCacheWithMultipleNamespaces() throws Exception {
- ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key1);
- assertEquals(key1, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- ManagedKeyData key2 = cache.getActiveEntry(CUST_ID, "namespace1");
+ assertEquals(key1, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ ManagedKeyData key2 =
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace1")));
assertNotNull(key2);
- assertEquals(key2, cache.getActiveEntry(CUST_ID, "namespace1"));
- ManagedKeyData key3 = cache.getActiveEntry(CUST_ID, "namespace2");
+ assertEquals(key2,
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace1"))));
+ ManagedKeyData key3 =
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace2")));
assertNotNull(key3);
- assertEquals(key3, cache.getActiveEntry(CUST_ID, "namespace2"));
- verify(testProvider, times(3)).getManagedKey(any(), any(String.class));
+ assertEquals(key3,
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace2"))));
+ verify(testProvider, times(3)).getManagedKey(any());
assertEquals(3, cache.getActiveCacheEntryCount());
}
@Test
public void testEjectKey_ActiveKeysCacheOnly() throws Exception {
// Load a key into the active keys cache
- ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key);
assertEquals(1, cache.getActiveCacheEntryCount());
// Eject the key - should remove from active keys cache
- boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejected = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertTrue("Key should be ejected when metadata matches", ejected);
assertEquals(0, cache.getActiveCacheEntryCount());
// Try to eject again - should return false since it's already gone from active keys cache
- boolean ejectedAgain = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejectedAgain = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertFalse("Should return false when key is already ejected", ejectedAgain);
assertEquals(0, cache.getActiveCacheEntryCount());
}
@@ -399,18 +404,21 @@ public void testEjectKey_ActiveKeysCacheOnly() throws Exception {
@Test
public void testEjectKey_GenericCacheOnly() throws Exception {
// Load a key into the generic cache
- ManagedKeyData key = cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL,
- testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL).getKeyMetadata(), null);
+ ManagedKeyData key =
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID).getKeyMetadata(), null);
assertNotNull(key);
assertEquals(1, cache.getGenericCacheEntryCount());
// Eject the key - should remove from generic cache
- boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejected = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertTrue("Key should be ejected when metadata matches", ejected);
assertEquals(0, cache.getGenericCacheEntryCount());
// Try to eject again - should return false since it's already gone from generic cache
- boolean ejectedAgain = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejectedAgain = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertFalse("Should return false when key is already ejected", ejectedAgain);
assertEquals(0, cache.getGenericCacheEntryCount());
}
@@ -418,24 +426,27 @@ public void testEjectKey_GenericCacheOnly() throws Exception {
@Test
public void testEjectKey_Success() throws Exception {
// Load a key into the active keys cache
- ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key);
String metadata = key.getKeyMetadata();
assertEquals(1, cache.getActiveCacheEntryCount());
// Also load into the generic cache
- ManagedKeyData keyFromGeneric = cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, metadata, null);
+ ManagedKeyData keyFromGeneric = cache
+ .getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), metadata, null);
assertNotNull(keyFromGeneric);
assertEquals(1, cache.getGenericCacheEntryCount());
// Eject the key with matching metadata - should remove from both caches
- boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejected = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertTrue("Key should be ejected when metadata matches", ejected);
assertEquals(0, cache.getActiveCacheEntryCount());
assertEquals(0, cache.getGenericCacheEntryCount());
// Try to eject again - should return false since it's already gone from active keys cache
- boolean ejectedAgain = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejectedAgain = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertFalse("Should return false when key is already ejected", ejectedAgain);
assertEquals(0, cache.getActiveCacheEntryCount());
assertEquals(0, cache.getGenericCacheEntryCount());
@@ -444,34 +455,39 @@ public void testEjectKey_Success() throws Exception {
@Test
public void testEjectKey_MetadataMismatch() throws Exception {
// Load a key into both caches
- ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key);
assertEquals(1, cache.getActiveCacheEntryCount());
// Also load into the generic cache
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()), null,
+ null);
assertEquals(1, cache.getGenericCacheEntryCount());
// Try to eject with wrong metadata - should not eject from either cache
String wrongMetadata = "wrong-metadata";
- boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL,
- ManagedKeyData.constructMetadataHash(wrongMetadata));
+ boolean ejected = cache.ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(),
+ ManagedKeyIdentityUtils.constructMetadataHash(wrongMetadata)));
assertFalse("Key should not be ejected when metadata doesn't match", ejected);
assertEquals(1, cache.getActiveCacheEntryCount());
assertEquals(1, cache.getGenericCacheEntryCount());
// Verify the key is still in both caches
- assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertEquals(key, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
assertEquals(key.getKeyMetadata(),
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash()).getKeyMetadata());
+ cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()), null,
+ null).getKeyMetadata());
}
@Test
public void testEjectKey_KeyNotPresent() throws Exception {
// Try to eject a key that doesn't exist in the cache
String nonExistentMetadata = "non-existent-metadata";
- boolean ejected = cache.ejectKey(CUST_ID, "non-existent-namespace",
- ManagedKeyData.constructMetadataHash(nonExistentMetadata));
+ boolean ejected =
+ cache.ejectKey(fullKeyIdentity(CUST_ID, Bytes.toBytes("non-existent-namespace"),
+ ManagedKeyIdentityUtils.constructMetadataHash(nonExistentMetadata)));
assertFalse("Should return false when key is not present", ejected);
assertEquals(0, cache.getActiveCacheEntryCount());
}
@@ -479,41 +495,51 @@ public void testEjectKey_KeyNotPresent() throws Exception {
@Test
public void testEjectKey_MultipleKeys() throws Exception {
// Load multiple keys into both caches
- ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
- ManagedKeyData key2 = cache.getActiveEntry(CUST_ID, "namespace1");
- ManagedKeyData key3 = cache.getActiveEntry(CUST_ID, "namespace2");
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
+ ManagedKeyData key2 =
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace1")));
+ ManagedKeyData key3 =
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace2")));
assertNotNull(key1);
assertNotNull(key2);
assertNotNull(key3);
assertEquals(3, cache.getActiveCacheEntryCount());
// Also load all keys into the generic cache
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null);
- cache.getEntry(CUST_ID, "namespace1", key2.getKeyMetadata(), null);
- cache.getEntry(CUST_ID, "namespace2", key3.getKeyMetadata(), null);
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key1.getKeyMetadata(), null);
+ cache.getEntry(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace1"), null),
+ key2.getKeyMetadata(), null);
+ cache.getEntry(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace2"), null),
+ key3.getKeyMetadata(), null);
assertEquals(3, cache.getGenericCacheEntryCount());
// Eject only the middle key from both caches
- boolean ejected = cache.ejectKey(CUST_ID, "namespace1", key2.getKeyMetadataHash());
+ boolean ejected = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace1"), key2.getPartialIdentity()));
assertTrue("Key should be ejected from both caches", ejected);
assertEquals(2, cache.getActiveCacheEntryCount());
assertEquals(2, cache.getGenericCacheEntryCount());
// Verify only key2 was ejected - key1 and key3 should still be there
clearInvocations(testProvider);
- assertEquals(key1, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- assertEquals(key3, cache.getActiveEntry(CUST_ID, "namespace2"));
+ assertEquals(key1, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ assertEquals(key3,
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace2"))));
// These getActiveEntry() calls should not trigger provider calls since keys are still cached
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ verify(testProvider, never()).getManagedKey(any());
// Verify generic cache still has key1 and key3
assertEquals(key1.getKeyMetadata(),
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null).getKeyMetadata());
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key1.getKeyMetadata(), null).getKeyMetadata());
assertEquals(key3.getKeyMetadata(),
- cache.getEntry(CUST_ID, "namespace2", key3.getKeyMetadata(), null).getKeyMetadata());
+ cache.getEntry(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace2"), null),
+ key3.getKeyMetadata(), null).getKeyMetadata());
// Try to eject key2 again - should return false since it's already gone from both caches
- boolean ejectedAgain = cache.ejectKey(CUST_ID, "namespace1", key2.getKeyMetadataHash());
+ boolean ejectedAgain = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace1"), key2.getPartialIdentity()));
assertFalse("Should return false when key is already ejected", ejectedAgain);
assertEquals(2, cache.getActiveCacheEntryCount());
assertEquals(2, cache.getGenericCacheEntryCount());
@@ -522,39 +548,42 @@ public void testEjectKey_MultipleKeys() throws Exception {
@Test
public void testEjectKey_DifferentCustodian() throws Exception {
// Load a key for one custodian into both caches
- ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key);
String metadata = key.getKeyMetadata();
assertEquals(1, cache.getActiveCacheEntryCount());
// Also load into the generic cache
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()), null,
+ null);
assertEquals(1, cache.getGenericCacheEntryCount());
// Try to eject with a different custodian - should not eject from either cache
byte[] differentCustodian = "different-cust".getBytes();
- boolean ejected =
- cache.ejectKey(differentCustodian, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejected = cache.ejectKey(fullKeyIdentity(differentCustodian,
+ KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertFalse("Should not eject key for different custodian", ejected);
assertEquals(1, cache.getActiveCacheEntryCount());
assertEquals(1, cache.getGenericCacheEntryCount());
// Verify the original key is still in both caches
- assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
+ assertEquals(key, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
assertEquals(metadata,
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, metadata, null).getKeyMetadata());
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), metadata, null)
+ .getKeyMetadata());
}
@Test
public void testEjectKey_AfterClearCache() throws Exception {
// Load a key into both caches
- ManagedKeyData key = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key);
String metadata = key.getKeyMetadata();
assertEquals(1, cache.getActiveCacheEntryCount());
// Also load into the generic cache
- cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, metadata, null);
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), metadata, null);
assertEquals(1, cache.getGenericCacheEntryCount());
// Clear both caches
@@ -563,92 +592,13 @@ public void testEjectKey_AfterClearCache() throws Exception {
assertEquals(0, cache.getGenericCacheEntryCount());
// Try to eject the key after both caches are cleared
- boolean ejected = cache.ejectKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash());
+ boolean ejected = cache
+ .ejectKey(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key.getPartialIdentity()));
assertFalse("Should return false when both caches are empty", ejected);
assertEquals(0, cache.getActiveCacheEntryCount());
assertEquals(0, cache.getGenericCacheEntryCount());
}
- @Test
- public void testGetEntry_HashCollisionOrMismatchDetection() throws Exception {
- // Create a key and get it into the cache
- ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
- assertNotNull(key1);
-
- // Now simulate a hash collision by trying to get an entry with the same hash
- // but different custodian/namespace
- byte[] differentCust = "different-cust".getBytes();
- String differentNamespace = "different-namespace";
-
- // This should return null due to custodian/namespace mismatch (collision detection)
- ManagedKeyData result =
- cache.getEntry(differentCust, differentNamespace, key1.getKeyMetadata(), null);
-
- // Result should be null because of hash collision detection
- // The cache finds an entry with the same metadata hash, but custodian/namespace don't match
- assertNull("Should return null when hash collision is detected", result);
- }
-
- @Test
- public void testEjectKey_HashCollisionOrMismatchProtection() throws Exception {
- // Create two keys with potential hash collision scenario
- byte[] cust1 = "cust1".getBytes();
- byte[] cust2 = "cust2".getBytes();
- String namespace1 = "namespace1";
-
- // Load a key for cust1
- ManagedKeyData key1 = cache.getActiveEntry(cust1, namespace1);
- assertNotNull(key1);
- assertEquals(1, cache.getActiveCacheEntryCount());
-
- // Try to eject using same metadata hash but different custodian
- // This should not eject the key due to custodian mismatch protection
- boolean ejected = cache.ejectKey(cust2, namespace1, key1.getKeyMetadataHash());
- assertFalse("Should not eject key with different custodian even if hash matches", ejected);
- assertEquals(1, cache.getActiveCacheEntryCount());
-
- // Verify the original key is still there
- assertEquals(key1, cache.getActiveEntry(cust1, namespace1));
- }
-
- @Test
- public void testEjectKey_HashCollisionInBothCaches() throws Exception {
- // This test covers the scenario where rejectedValue is set during the first cache check
- // (activeKeysCache) and then the second cache check (cacheByMetadataHash) takes the
- // early return path because rejectedValue is already set.
- byte[] cust1 = "cust1".getBytes();
- byte[] cust2 = "cust2".getBytes();
- String namespace1 = "namespace1";
-
- // Load a key for cust1 - this will put it in BOTH activeKeysCache and cacheByMetadataHash
- ManagedKeyData key1 = cache.getActiveEntry(cust1, namespace1);
- assertNotNull(key1);
-
- // Also access via generic cache to ensure it's in both caches
- ManagedKeyData key1viaGeneric =
- cache.getEntry(cust1, namespace1, key1.getKeyMetadata(), null);
- assertNotNull(key1viaGeneric);
- assertEquals(key1, key1viaGeneric);
-
- // Verify both cache counts
- assertEquals(1, cache.getActiveCacheEntryCount());
- assertEquals(1, cache.getGenericCacheEntryCount());
-
- // Try to eject using same metadata hash but different custodian
- // This will trigger the collision detection in BOTH caches:
- // 1. First check in activeKeysCache will detect mismatch and set rejectedValue
- // 2. Second check in cacheByMetadataHash should take early return (line 234)
- boolean ejected = cache.ejectKey(cust2, namespace1, key1.getKeyMetadataHash());
- assertFalse("Should not eject key with different custodian even if hash matches", ejected);
-
- // Verify both caches still have the entry
- assertEquals(1, cache.getActiveCacheEntryCount());
- assertEquals(1, cache.getGenericCacheEntryCount());
-
- // Verify the original key is still accessible
- assertEquals(key1, cache.getActiveEntry(cust1, namespace1));
- assertEquals(key1, cache.getEntry(cust1, namespace1, key1.getKeyMetadata(), null));
- }
}
@RunWith(BlockJUnit4ClassRunner.class)
@@ -668,61 +618,69 @@ public void setUp() {
@Test
public void testGenericCacheNonExistentKeyInL2Cache() throws Exception {
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
- verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ "test-metadata", null));
+ verify(mockL2).getKey(any(ManagedKeyIdentity.class));
clearInvocations(mockL2);
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
- verify(mockL2, never()).getKey(any(), any(String.class), any(byte[].class));
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ "test-metadata", null));
+ verify(mockL2, never()).getKey(any(ManagedKeyIdentity.class));
}
@Test
public void testGenericCacheRetrievalFromL2Cache() throws Exception {
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- when(mockL2.getKey(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadataHash())).thenReturn(key);
- assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
- verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ ManagedKeyIdentity l2LookupIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES, key.getKeyMetadata());
+ when(mockL2.getKey(l2LookupIdentity)).thenReturn(key);
+ assertEquals(key, cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key.getKeyMetadata(), null));
+ verify(mockL2).getKey(any(ManagedKeyIdentity.class));
}
@Test
public void testActiveKeysCacheNonExistentKeyInL2Cache() throws Exception {
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2).getKeyManagementStateMarker(any());
clearInvocations(mockL2);
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2, never()).getKeyManagementStateMarker(any(), any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2, never()).getKeyManagementStateMarker(any());
}
@Test
public void testActiveKeysCacheRetrievalFromL2Cache() throws Exception {
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- when(mockL2.getKeyManagementStateMarker(CUST_ID, KEY_SPACE_GLOBAL)).thenReturn(key);
- assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ when(mockL2.getKeyManagementStateMarker(
+ new KeyIdentityPrefixBytesBacked(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES))).thenReturn(key);
+ assertEquals(key, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2).getKeyManagementStateMarker(any(ManagedKeyIdentity.class));
clearInvocations(mockL2);
- assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2, never()).getKeyManagementStateMarker(any(), any(String.class));
+ assertEquals(key, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2, never()).getKeyManagementStateMarker(any(ManagedKeyIdentity.class));
}
@Test
public void testGenericCacheWithKeymetaAccessorException() throws Exception {
- when(mockL2.getKey(eq(CUST_ID), eq(KEY_SPACE_GLOBAL), any(byte[].class)))
- .thenThrow(new IOException("Test exception"));
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
- verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ ManagedKeyIdentity keyIdentity = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES, "test-metadata");
+ when(mockL2.getKey(keyIdentity)).thenThrow(new IOException("Test exception"));
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ "test-metadata", null));
+ verify(mockL2).getKey(keyIdentity);
clearInvocations(mockL2);
- assertNull(cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, "test-metadata", null));
- verify(mockL2, never()).getKey(any(), any(String.class), any(byte[].class));
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ "test-metadata", null));
+ verify(mockL2, never()).getKey(keyIdentity);
}
@Test
public void testGetActiveEntryWithKeymetaAccessorException() throws Exception {
- when(mockL2.getKeyManagementStateMarker(CUST_ID, KEY_SPACE_GLOBAL))
- .thenThrow(new IOException("Test exception"));
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ when(mockL2.getKeyManagementStateMarker(any())).thenThrow(new IOException("Test exception"));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2).getKeyManagementStateMarker(any());
clearInvocations(mockL2);
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2, never()).getKeyManagementStateMarker(any(), any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2, never()).getKeyManagementStateMarker(any());
}
@Test
@@ -730,13 +688,14 @@ public void testActiveKeysCacheUsesKeymetaAccessorWhenGenericCacheEmpty() throws
// Ensure generic cache is empty
cache.clearCache();
- // Mock the keymetaAccessor to return a key
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- when(mockL2.getKeyManagementStateMarker(CUST_ID, KEY_SPACE_GLOBAL)).thenReturn(key);
+ // Mock L2: getActiveEntry uses prefix identity (clone of custodian/namespace), not full key
+ // identity
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ when(mockL2.getKeyManagementStateMarker(any(ManagedKeyIdentity.class))).thenReturn(key);
// Get the active entry - it should call keymetaAccessor since generic cache is empty
- assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ assertEquals(key, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2).getKeyManagementStateMarker(any());
}
}
@@ -757,105 +716,168 @@ public void setUp() {
@Test
public void testGenericCacheRetrivalFromProviderWhenKeyNotFoundInL2Cache() throws Exception {
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- doReturn(key).when(testProvider).unwrapKey(any(String.class), any());
- assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
- verify(mockL2).getKey(any(), any(String.class), any(byte[].class));
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ doReturn(key).when(testProvider).unwrapKey(any(ManagedKeyIdentity.class), any(String.class),
+ any());
+ assertEquals(key, cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key.getKeyMetadata(), null));
+ verify(mockL2).getKey(any(ManagedKeyIdentity.class));
verify(mockL2).addKey(any(ManagedKeyData.class));
}
@Test
public void testAddKeyFailure() throws Exception {
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- doReturn(key).when(testProvider).unwrapKey(any(String.class), any());
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ doReturn(key).when(testProvider).unwrapKey(any(ManagedKeyIdentity.class), any(String.class),
+ any());
doThrow(new IOException("Test exception")).when(mockL2).addKey(any(ManagedKeyData.class));
- assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
+ assertNull(cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key.getKeyMetadata(), null));
verify(mockL2).addKey(any(ManagedKeyData.class));
}
@Test
public void testActiveKeysCacheDynamicLookupWithUnexpectedException() throws Exception {
- doThrow(new RuntimeException("Test exception")).when(testProvider).getManagedKey(any(),
- any(String.class));
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider).getManagedKey(any(), any(String.class));
+ doThrow(new RuntimeException("Test exception")).when(testProvider).getManagedKey(any());
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
// A 2nd invocation should not result in a call to the provider.
- assertNull(cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ assertNull(cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider, never()).getManagedKey(any());
}
@Test
public void testActiveKeysCacheRetrivalFromProviderWhenKeyNotFoundInL2Cache() throws Exception {
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- doReturn(key).when(testProvider).getManagedKey(any(), any(String.class));
- assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(mockL2).getKeyManagementStateMarker(any(), any(String.class));
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ doReturn(key).when(testProvider).getManagedKey(any());
+ assertEquals(key, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(mockL2).getKeyManagementStateMarker(any());
}
@Test
public void testGenericCacheUsesActiveKeysCacheFirst() throws Exception {
// First populate the active keys cache with an active key
- ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
- verify(testProvider).getManagedKey(any(), any(String.class));
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
// Now get the generic cache entry - it should use the active keys cache first, not call
// keymetaAccessor
- assertEquals(key1, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key1.getKeyMetadata(), null));
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ assertEquals(key1, cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), key1.getKeyMetadata(), null));
+ verify(testProvider, never()).getManagedKey(any());
    // Look up a different key.
- ManagedKeyData key2 = cache.getActiveEntry(CUST_ID, "namespace1");
+ ManagedKeyData key2 =
+ cache.getActiveEntry(CUST_ID_BYTES, new Bytes(Bytes.toBytes("namespace1")));
assertNotEquals(key1, key2);
- verify(testProvider).getManagedKey(any(), any(String.class));
+ verify(testProvider).getManagedKey(any());
clearInvocations(testProvider);
// Now get the generic cache entry - it should use the active keys cache first, not call
// keymetaAccessor
- assertEquals(key2, cache.getEntry(CUST_ID, "namespace1", key2.getKeyMetadata(), null));
- verify(testProvider, never()).getManagedKey(any(), any(String.class));
+ assertEquals(key2, cache.getEntry(fullKeyIdentity(CUST_ID, Bytes.toBytes("namespace1"), null),
+ key2.getKeyMetadata(), null));
+ verify(testProvider, never()).getManagedKey(any());
}
@Test
public void testGetOlderEntryFromGenericCache() throws Exception {
// Get one version of the key in to ActiveKeysCache
- ManagedKeyData key1 = cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL);
+ ManagedKeyData key1 = cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES);
assertNotNull(key1);
clearInvocations(testProvider);
    // Now try to look up another version of the key; it should look up and discard the active key.
- ManagedKeyData key2 = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- assertEquals(key2, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key2.getKeyMetadata(), null));
- verify(testProvider).unwrapKey(any(String.class), any());
+ ManagedKeyData key2 = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ assertEquals(key2, cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), key2.getKeyMetadata(), null));
+ verify(testProvider).unwrapKey(any(ManagedKeyIdentity.class), any(String.class), any());
}
@Test
public void testThatActiveKeysCache_PopulatedByGenericCache() throws Exception {
// First populate the generic cache with an active key
- ManagedKeyData key = testProvider.getManagedKey(CUST_ID, KEY_SPACE_GLOBAL);
- assertEquals(key, cache.getEntry(CUST_ID, KEY_SPACE_GLOBAL, key.getKeyMetadata(), null));
- verify(testProvider).unwrapKey(any(String.class), any());
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ assertEquals(key, cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null),
+ key.getKeyMetadata(), null));
+ verify(testProvider).unwrapKey(any(ManagedKeyIdentity.class), any(String.class), any());
// Clear invocations to reset the mock state
clearInvocations(testProvider);
// Now get the active entry - it should already be there due to the generic cache first
- assertEquals(key, cache.getActiveEntry(CUST_ID, KEY_SPACE_GLOBAL));
- verify(testProvider, never()).unwrapKey(any(String.class), any());
+ assertEquals(key, cache.getActiveEntry(CUST_ID_BYTES, KEY_SPACE_GLOBAL_BYTES));
+ verify(testProvider, never()).unwrapKey(any(ManagedKeyIdentity.class), any(String.class),
+ any());
+ }
+
+ @Test
+ public void testGetEntry_BothPartialIdentityAndKeyMetadataNull_Throws() {
+ assertThrows(IllegalArgumentException.class, () -> {
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), null, null);
+ });
+ }
+
+ @Test
+ public void testGetEntry_BothPartialIdentityAndKeyMetadataNonNull_ExistingEntry()
+ throws Exception {
+ ManagedKeyData key = testProvider.getManagedKey(CUST_KEY_SPACE_GLOBAL_ID);
+ byte[] partialIdentity = key.getPartialIdentity();
+ String keyMetadata = key.getKeyMetadata();
+ cache.getEntry(fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), null), keyMetadata,
+ null);
+ assertEquals(1, cache.getGenericCacheEntryCount());
+ ManagedKeyData found = cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), partialIdentity), keyMetadata, null);
+ assertEquals(key, found);
+ }
+
+ @Test
+ public void testGetEntry_BothPartialIdentityAndKeyMetadataNonNull_NoExistingEntry()
+ throws Exception {
+ // Use explicit keyMetadata so retrieveKey validation sees matching metadata.
+ String keyMetadata = "cust1:cust1:*:0";
+ Key key = MockManagedKeyProvider.generateSecretKey();
+ ManagedKeyData keyFromProvider =
+ new ManagedKeyData(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), key, ACTIVE, keyMetadata);
+ byte[] partialIdentity = ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata);
+ cache.clearCache();
+ doReturn(keyFromProvider).when(testProvider).unwrapKey(any(ManagedKeyIdentity.class),
+ any(String.class), any());
+ ManagedKeyData found = cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), partialIdentity), keyMetadata, null);
+ assertNotNull(found);
+ assertEquals(keyFromProvider, found);
+ }
+
+ @Test
+ public void testGetEntry_WithPartialIdentityOnly_ReturnsNullWithoutProviderLookup()
+ throws Exception {
+ byte[] partialIdentity = ManagedKeyIdentityUtils.constructMetadataHash("missing-metadata");
+
+ ManagedKeyData found = cache.getEntry(
+ fullKeyIdentity(CUST_ID, KEY_SPACE_GLOBAL_BYTES.get(), partialIdentity), null, null);
+
+ assertNull(found);
+ verify(mockL2).getKey(any(ManagedKeyIdentity.class));
+ verify(testProvider, never()).unwrapKey(any(ManagedKeyIdentity.class), any(String.class),
+ any());
}
}
protected void assertGenericCacheEntries(ManagedKeyData... keys) throws Exception {
for (ManagedKeyData key : keys) {
assertEquals(key,
- cache.getEntry(key.getKeyCustodian(), key.getKeyNamespace(), key.getKeyMetadata(), null));
+ cache.getEntry(fullKeyIdentity(key.getKeyCustodian(), key.getKeyNamespaceBytes(), null),
+ key.getKeyMetadata(), null));
}
assertEquals(keys.length, cache.getGenericCacheEntryCount());
int activeKeysCount =
Arrays.stream(keys).filter(key -> key.getKeyState() == ManagedKeyState.ACTIVE)
- .map(key -> new ManagedKeyDataCache.ActiveKeysCacheKey(key.getKeyCustodian(),
- key.getKeyNamespace()))
+ .map(key -> new KeyIdentityPrefixBytesBacked(new Bytes(key.getKeyCustodian()),
+ new Bytes(key.getKeyNamespaceBytes())))
.collect(Collectors.toSet()).size();
assertEquals(activeKeysCount, cache.getActiveCacheEntryCount());
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java
index d04dee3853e9..7948498309f3 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestManagedKeymeta.java
@@ -106,15 +106,16 @@ private void doTestEnable(KeymetaAdmin adminClient) throws IOException, KeyExcep
// should get the same key even after ejecting it.
HRegionServer regionServer = TEST_UTIL.getHBaseCluster().getRegionServer(0);
ManagedKeyDataCache managedKeyDataCache = regionServer.getManagedKeyDataCache();
- ManagedKeyData activeEntry =
- managedKeyDataCache.getActiveEntry(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ ManagedKeyData activeEntry = managedKeyDataCache.getActiveEntry(new Bytes(custBytes),
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES);
assertNotNull(activeEntry);
- assertTrue(Bytes.equals(managedKey.getKeyMetadataHash(), activeEntry.getKeyMetadataHash()));
- assertTrue(managedKeyDataCache.ejectKey(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL,
- managedKey.getKeyMetadataHash()));
- activeEntry = managedKeyDataCache.getActiveEntry(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertTrue(Bytes.equals(managedKey.getPartialIdentity(), activeEntry.getPartialIdentity()));
+ assertTrue(managedKeyDataCache.ejectKey(new KeyIdentityBytesBacked(new Bytes(custBytes),
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES, new Bytes(managedKey.getPartialIdentity()))));
+ activeEntry = managedKeyDataCache.getActiveEntry(new Bytes(custBytes),
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES);
assertNotNull(activeEntry);
- assertTrue(Bytes.equals(managedKey.getKeyMetadataHash(), activeEntry.getKeyMetadataHash()));
+ assertTrue(Bytes.equals(managedKey.getPartialIdentity(), activeEntry.getPartialIdentity()));
List<ManagedKeyData> managedKeys =
adminClient.getManagedKeys(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
@@ -252,16 +253,16 @@ private void doTestWithClientSideServiceException(SetupFunction setupFunction,
public void testDisableKeyManagementLocal() throws Exception {
HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
- doTestDisableKeyManagement(keymetaAdmin);
+ doTestDisableKeyManagement(keymetaAdmin, false);
}
@Test
public void testDisableKeyManagementOverRPC() throws Exception {
KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
- doTestDisableKeyManagement(adminClient);
+ doTestDisableKeyManagement(adminClient, true);
}
- private void doTestDisableKeyManagement(KeymetaAdmin adminClient)
+ private void doTestDisableKeyManagement(KeymetaAdmin adminClient, boolean overRPC)
throws IOException, KeyException {
String cust = "cust2";
byte[] custBytes = cust.getBytes();
@@ -277,6 +278,12 @@ private void doTestDisableKeyManagement(KeymetaAdmin adminClient)
adminClient.disableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
assertNotNull(disabledKey);
assertEquals(ManagedKeyState.DISABLED, disabledKey.getKeyState().getExternalState());
+
+ List<ManagedKeyData> managedKeys =
+ adminClient.getManagedKeys(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(managedKeys);
+ assertEquals(1, managedKeys.size());
+ assertEquals(ManagedKeyState.INACTIVE, managedKeys.get(0).getKeyState());
}
@Test
@@ -309,11 +316,11 @@ private void doTestDisableManagedKey(KeymetaAdmin adminClient) throws IOExceptio
adminClient.enableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
assertNotNull(managedKey);
assertKeyDataSingleKey(managedKey, ManagedKeyState.ACTIVE);
- byte[] keyMetadataHash = managedKey.getKeyMetadataHash();
+ byte[] partialIdentity = managedKey.getPartialIdentity();
// Now disable the specific key
ManagedKeyData disabledKey =
- adminClient.disableManagedKey(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL, keyMetadataHash);
+ adminClient.disableManagedKey(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL, partialIdentity);
assertNotNull(disabledKey);
assertEquals(ManagedKeyState.DISABLED, disabledKey.getKeyState().getExternalState());
}
@@ -340,6 +347,52 @@ public void testRefreshManagedKeysWithClientSideServiceException() throws Except
(client) -> client.refreshManagedKeys(new byte[0], "namespace"));
}
+ @Test
+ public void testSetManagedKeyLocal() throws Exception {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ KeymetaAdmin keymetaAdmin = master.getKeymetaAdmin();
+ doTestSetManagedKey(keymetaAdmin);
+ }
+
+ @Test
+ public void testSetManagedKeyOverRPC() throws Exception {
+ KeymetaAdmin adminClient = new KeymetaAdminClient(TEST_UTIL.getConnection());
+ doTestSetManagedKey(adminClient);
+ }
+
+ private void doTestSetManagedKey(KeymetaAdmin adminClient) throws IOException, KeyException {
+ String cust = "custSetMk";
+ byte[] custBytes = cust.getBytes();
+ ManagedKeyData enabled =
+ adminClient.enableKeyManagement(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(enabled);
+ assertEquals(ManagedKeyState.ACTIVE, enabled.getKeyState());
+ // RPC client rebuilds ManagedKeyData without full metadata string; resolve from test provider.
+ String keyMetadata = enabled.getKeyMetadata();
+ if (keyMetadata == null) {
+ HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
+ MockManagedKeyProvider managedKeyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(master.getConfiguration());
+ ManagedKeyData fromProvider =
+ managedKeyProvider.getLastGeneratedKeyData(cust, ManagedKeyData.KEY_SPACE_GLOBAL);
+ assertNotNull(fromProvider);
+ keyMetadata = fromProvider.getKeyMetadata();
+ }
+ assertNotNull(keyMetadata);
+ ManagedKeyData setAgain =
+ adminClient.setManagedKey(custBytes, ManagedKeyData.KEY_SPACE_GLOBAL, keyMetadata);
+ assertEquals(ManagedKeyState.ACTIVE, setAgain.getKeyState());
+ assertTrue(Bytes.equals(enabled.getPartialIdentity(), setAgain.getPartialIdentity()));
+ }
+
+ @Test
+ public void testSetManagedKeyWithClientSideServiceException() throws Exception {
+ doTestWithClientSideServiceException(
+ (mockStub, networkError) -> when(mockStub.setManagedKey(any(), any()))
+ .thenThrow(networkError),
+ (client) -> client.setManagedKey(new byte[0], "namespace", "meta"));
+ }
+
@Test
public void testRotateManagedKeyLocal() throws Exception {
HMaster master = TEST_UTIL.getHBaseCluster().getMaster();
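The tests above derive a partial identity from key metadata via `ManagedKeyIdentityUtils.constructMetadataHash`, and `doTestSetManagedKey` depends on that derivation being deterministic: calling `setManagedKey` again with the same metadata must produce the same partial identity. A sketch of one plausible implementation, assuming a fixed-length digest of the metadata string; the actual encoding inside `ManagedKeyIdentityUtils` may differ:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

class MetadataHashDemo {
  /**
   * Hypothetical partial-identity derivation: a fixed-length digest of the
   * provider's metadata string. Stand-in for constructMetadataHash.
   */
  static byte[] metadataHash(String keyMetadata) {
    try {
      MessageDigest md = MessageDigest.getInstance("SHA-256");
      return md.digest(keyMetadata.getBytes(StandardCharsets.UTF_8));
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e); // SHA-256 is mandatory on every JVM
    }
  }

  public static void main(String[] args) {
    byte[] h1 = metadataHash("metadata-1");
    byte[] h2 = metadataHash("metadata-1");
    byte[] h3 = metadataHash("metadata-2");
    // Same metadata -> same partial identity; the digest is stable across
    // processes, so an RPC client holding only the metadata (or only the
    // hash) can still address the same cache entry.
    System.out.println(java.util.Arrays.equals(h1, h2)); // true
    System.out.println(java.util.Arrays.equals(h1, h3)); // false
    System.out.println(h1.length); // 32
  }
}
```

A stable hash also explains the `testSetManagedKey` assertion that `enabled.getPartialIdentity()` equals `setAgain.getPartialIdentity()` when the same metadata is resubmitted.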
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java
index f541d4bac18c..930bc69c5e68 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/keymeta/TestSystemKeyCache.java
@@ -17,7 +17,6 @@
*/
package org.apache.hadoop.hbase.keymeta;
-import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertSame;
@@ -39,6 +38,7 @@
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
import org.apache.hadoop.hbase.testclassification.MasterTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
@@ -60,8 +60,9 @@ public class TestSystemKeyCache {
@Mock
private SystemKeyAccessor mockAccessor;
- private static final byte[] TEST_CUSTODIAN = "test-custodian".getBytes();
+ private static final Bytes TEST_CUSTODIAN = new Bytes("test-custodian".getBytes());
private static final String TEST_NAMESPACE = "test-namespace";
+ private static final Bytes TEST_NAMESPACE_BYTES = new Bytes(TEST_NAMESPACE.getBytes());
private static final String TEST_METADATA_1 = "metadata-1";
private static final String TEST_METADATA_2 = "metadata-2";
private static final String TEST_METADATA_3 = "metadata-3";
@@ -86,12 +87,15 @@ public void setUp() {
testKey3 = new SecretKeySpec("test-key-3-bytes".getBytes(), "AES");
// Create test key data with different checksums
- keyData1 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, testKey1, ManagedKeyState.ACTIVE,
- TEST_METADATA_1, 1000L);
- keyData2 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, testKey2, ManagedKeyState.ACTIVE,
- TEST_METADATA_2, 2000L);
- keyData3 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, testKey3, ManagedKeyState.ACTIVE,
- TEST_METADATA_3, 3000L);
+ keyData1 = new ManagedKeyData(ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(TEST_CUSTODIAN, TEST_NAMESPACE_BYTES, TEST_METADATA_1), testKey1,
+ ManagedKeyState.ACTIVE, TEST_METADATA_1, 1000L);
+ keyData2 = new ManagedKeyData(ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(TEST_CUSTODIAN, TEST_NAMESPACE_BYTES, TEST_METADATA_2), testKey2,
+ ManagedKeyState.ACTIVE, TEST_METADATA_2, 2000L);
+ keyData3 = new ManagedKeyData(ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(TEST_CUSTODIAN, TEST_NAMESPACE_BYTES, TEST_METADATA_3), testKey3,
+ ManagedKeyState.ACTIVE, TEST_METADATA_3, 3000L);
// Create test paths
keyPath1 = new Path("/system/keys/key1");
@@ -112,8 +116,11 @@ public void testCreateCacheWithSingleSystemKey() throws Exception {
// Verify
assertNotNull(cache);
assertSame(keyData1, cache.getLatestSystemKey());
- assertSame(keyData1, cache.getSystemKeyByChecksum(keyData1.getKeyChecksum()));
- assertNull(cache.getSystemKeyByChecksum(999L)); // Non-existent checksum
+ assertSame(keyData1,
+ cache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(keyData1.getKeyCustodian(),
+ keyData1.getKeyNamespaceBytes(), keyData1.getPartialIdentity())));
+ assertNull(cache.getSystemKeyByIdentity(new byte[] { 1, 2, 3 })); // Non-existent identity
verify(mockAccessor).getAllSystemKeyFiles();
verify(mockAccessor).loadSystemKey(keyPath1);
@@ -135,13 +142,22 @@ public void testCreateCacheWithMultipleSystemKeys() throws Exception {
assertNotNull(cache);
assertSame(keyData1, cache.getLatestSystemKey()); // First key becomes latest
- // All keys should be accessible by checksum
- assertSame(keyData1, cache.getSystemKeyByChecksum(keyData1.getKeyChecksum()));
- assertSame(keyData2, cache.getSystemKeyByChecksum(keyData2.getKeyChecksum()));
- assertSame(keyData3, cache.getSystemKeyByChecksum(keyData3.getKeyChecksum()));
-
- // Non-existent checksum should return null
- assertNull(cache.getSystemKeyByChecksum(999L));
+ // All keys should be accessible by full identity
+ assertSame(keyData1,
+ cache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(keyData1.getKeyCustodian(),
+ keyData1.getKeyNamespaceBytes(), keyData1.getPartialIdentity())));
+ assertSame(keyData2,
+ cache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(keyData2.getKeyCustodian(),
+ keyData2.getKeyNamespaceBytes(), keyData2.getPartialIdentity())));
+ assertSame(keyData3,
+ cache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(keyData3.getKeyCustodian(),
+ keyData3.getKeyNamespaceBytes(), keyData3.getPartialIdentity())));
+
+ // Non-existent identity should return null
+ assertNull(cache.getSystemKeyByIdentity(new byte[] { 1, 2, 3 }));
verify(mockAccessor).getAllSystemKeyFiles();
verify(mockAccessor).loadSystemKey(keyPath1);
@@ -196,7 +212,7 @@ public void testGetLatestSystemKeyConsistency() throws Exception {
}
@Test
- public void testGetSystemKeyByChecksumWithDifferentKeys() throws Exception {
+ public void testGetSystemKeyByIdentityWithDifferentKeys() throws Exception {
// Setup
List<Path> keyPaths = Arrays.asList(keyPath1, keyPath2, keyPath3);
when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
@@ -207,24 +223,27 @@ public void testGetSystemKeyByChecksumWithDifferentKeys() throws Exception {
// Execute
SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
- // Verify each key can be retrieved by its unique checksum
- long checksum1 = keyData1.getKeyChecksum();
- long checksum2 = keyData2.getKeyChecksum();
- long checksum3 = keyData3.getKeyChecksum();
-
- // Checksums should be different
- assert checksum1 != checksum2;
- assert checksum2 != checksum3;
- assert checksum1 != checksum3;
-
- // Each key should be retrievable by its checksum
- assertSame(keyData1, cache.getSystemKeyByChecksum(checksum1));
- assertSame(keyData2, cache.getSystemKeyByChecksum(checksum2));
- assertSame(keyData3, cache.getSystemKeyByChecksum(checksum3));
+ // Verify each key can be retrieved by its unique full identity
+ byte[] identity1 = ManagedKeyIdentityUtils.constructRowKeyForIdentity(
+ keyData1.getKeyCustodian(), keyData1.getKeyNamespaceBytes(), keyData1.getPartialIdentity());
+ byte[] identity2 = ManagedKeyIdentityUtils.constructRowKeyForIdentity(
+ keyData2.getKeyCustodian(), keyData2.getKeyNamespaceBytes(), keyData2.getPartialIdentity());
+ byte[] identity3 = ManagedKeyIdentityUtils.constructRowKeyForIdentity(
+ keyData3.getKeyCustodian(), keyData3.getKeyNamespaceBytes(), keyData3.getPartialIdentity());
+
+ // Identities should be different
+ assert !Bytes.equals(identity1, identity2);
+ assert !Bytes.equals(identity2, identity3);
+ assert !Bytes.equals(identity1, identity3);
+
+ // Each key should be retrievable by its full identity
+ assertSame(keyData1, cache.getSystemKeyByIdentity(identity1));
+ assertSame(keyData2, cache.getSystemKeyByIdentity(identity2));
+ assertSame(keyData3, cache.getSystemKeyByIdentity(identity3));
}
@Test
- public void testGetSystemKeyByChecksumWithNonExistentChecksum() throws Exception {
+ public void testGetSystemKeyByIdentityWithNonExistentIdentity() throws Exception {
// Setup
List<Path> keyPaths = Collections.singletonList(keyPath1);
when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
@@ -236,14 +255,16 @@ public void testGetSystemKeyByChecksumWithNonExistentChecksum() throws Exception
// Verify
assertNotNull(cache);
- // Test various non-existent checksums
- assertNull(cache.getSystemKeyByChecksum(0L));
- assertNull(cache.getSystemKeyByChecksum(-1L));
- assertNull(cache.getSystemKeyByChecksum(Long.MAX_VALUE));
- assertNull(cache.getSystemKeyByChecksum(Long.MIN_VALUE));
+ // Test various non-existent identities
+ assertNull(cache.getSystemKeyByIdentity(null));
+ assertNull(cache.getSystemKeyByIdentity(new byte[0]));
+ assertNull(cache.getSystemKeyByIdentity(new byte[] { 1, 2, 3 }));
- // But the actual checksum should work
- assertSame(keyData1, cache.getSystemKeyByChecksum(keyData1.getKeyChecksum()));
+ // But the actual full identity should work
+ assertSame(keyData1,
+ cache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(keyData1.getKeyCustodian(),
+ keyData1.getKeyNamespaceBytes(), keyData1.getPartialIdentity())));
}
@Test(expected = IOException.class)
@@ -267,18 +288,25 @@ public void testCreateCacheWithLoadSystemKeyIOException() throws Exception {
}
@Test
- public void testCacheWithKeysHavingSameChecksum() throws Exception {
- // Setup - create two keys that will have the same checksum (same content)
+ public void testCacheWithKeysHavingSameFullIdentity() throws Exception {
+ // Setup - two keys with same custodian/namespace/metadata so same full identity
Key sameKey1 = new SecretKeySpec("identical-bytes".getBytes(), "AES");
Key sameKey2 = new SecretKeySpec("identical-bytes".getBytes(), "AES");
-
- ManagedKeyData sameManagedKey1 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, sameKey1,
- ManagedKeyState.ACTIVE, "metadata-A", 1000L);
- ManagedKeyData sameManagedKey2 = new ManagedKeyData(TEST_CUSTODIAN, TEST_NAMESPACE, sameKey2,
- ManagedKeyState.ACTIVE, "metadata-B", 2000L);
-
- // Verify they have the same checksum
- assertEquals(sameManagedKey1.getKeyChecksum(), sameManagedKey2.getKeyChecksum());
+ String sameMetadata = "same-metadata";
+
+ ManagedKeyData sameManagedKey1 =
+ new ManagedKeyData(ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(TEST_CUSTODIAN,
+ TEST_NAMESPACE_BYTES, sameMetadata), sameKey1, ManagedKeyState.ACTIVE, sameMetadata, 1000L);
+ ManagedKeyData sameManagedKey2 =
+ new ManagedKeyData(ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(TEST_CUSTODIAN,
+ TEST_NAMESPACE_BYTES, sameMetadata), sameKey2, ManagedKeyState.ACTIVE, sameMetadata, 2000L);
+
+ // Same custodian/namespace/metadata -> same partial identity -> same full identity
+ assertTrue(Bytes.equals(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(sameManagedKey1.getKeyCustodian(),
+ sameManagedKey1.getKeyNamespaceBytes(), sameManagedKey1.getPartialIdentity()),
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(sameManagedKey2.getKeyCustodian(),
+ sameManagedKey2.getKeyNamespaceBytes(), sameManagedKey2.getPartialIdentity())));
List<Path> keyPaths = Arrays.asList(keyPath1, keyPath2);
when(mockAccessor.getAllSystemKeyFiles()).thenReturn(keyPaths);
@@ -288,13 +316,14 @@ public void testCacheWithKeysHavingSameChecksum() throws Exception {
// Execute
SystemKeyCache cache = SystemKeyCache.createCache(mockAccessor);
- // Verify - second key should overwrite first in the map due to same checksum
+ // Verify - second key should overwrite first in the map due to same full identity
assertNotNull(cache);
assertSame(sameManagedKey1, cache.getLatestSystemKey()); // First is still latest
- // The map should contain the second key for the shared checksum
- ManagedKeyData retrievedKey = cache.getSystemKeyByChecksum(sameManagedKey1.getKeyChecksum());
- assertSame(sameManagedKey2, retrievedKey); // Last one wins in TreeMap
+ ManagedKeyData retrievedKey = cache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(sameManagedKey1.getKeyCustodian(),
+ sameManagedKey1.getKeyNamespaceBytes(), sameManagedKey1.getPartialIdentity()));
+ assertSame(sameManagedKey2, retrievedKey); // Last one wins
}
@Test
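The `SystemKeyCache` changes above move from `long` checksum lookups to byte-array identities built by `ManagedKeyIdentityUtils.constructRowKeyForIdentity(custodian, namespace, partialIdentity)`. Whatever the real encoding is, a composite row key must keep component boundaries unambiguous: plain concatenation would let ("ab", "c") collide with ("a", "bc"). A hypothetical length-prefixed sketch of such a builder; the actual HBase encoding may differ:

```java
import java.io.ByteArrayOutputStream;

class RowKeyDemo {
  /**
   * Hypothetical composite row-key builder. Each component is written with a
   * 4-byte big-endian length prefix followed by its bytes, so different
   * component splits can never produce the same encoded key.
   */
  static byte[] rowKey(byte[]... components) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (byte[] c : components) {
      out.write((c.length >>> 24) & 0xFF);
      out.write((c.length >>> 16) & 0xFF);
      out.write((c.length >>> 8) & 0xFF);
      out.write(c.length & 0xFF);
      out.write(c, 0, c.length);
    }
    return out.toByteArray();
  }

  public static void main(String[] args) {
    byte[] a = rowKey("ab".getBytes(), "c".getBytes());
    byte[] b = rowKey("a".getBytes(), "bc".getBytes());
    // Plain concatenation would make these identical; prefixes keep them apart.
    System.out.println(java.util.Arrays.equals(a, b)); // false
  }
}
```

An unambiguous encoding is what makes the tests' "non-existent identity" probes (`new byte[] { 1, 2, 3 }`, empty, null) safe to treat as simple cache misses.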
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java
index 9c3e5991c6e7..04e0bb8fa69f 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestKeymetaAdminImpl.java
@@ -18,6 +18,7 @@
package org.apache.hadoop.hbase.master;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL_BYTES;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.ACTIVE;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.DISABLED;
import static org.apache.hadoop.hbase.io.crypto.ManagedKeyState.FAILED;
@@ -31,6 +32,7 @@
import static org.junit.Assume.assumeTrue;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyBoolean;
+import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.clearInvocations;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;
@@ -57,18 +59,21 @@
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.Server;
import org.apache.hadoop.hbase.client.AsyncAdmin;
-import org.apache.hadoop.hbase.client.AsyncClusterConnection;
import org.apache.hadoop.hbase.io.crypto.Encryption;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.keymeta.KeyIdentityPrefixBytesBacked;
import org.apache.hadoop.hbase.keymeta.KeyManagementService;
import org.apache.hadoop.hbase.keymeta.KeymetaAdminImpl;
import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.testclassification.MasterTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FutureUtils;
import org.junit.After;
import org.junit.Before;
import org.junit.ClassRule;
@@ -95,6 +100,9 @@ public class TestKeymetaAdminImpl {
private static final String CUST = "cust1";
private static final byte[] CUST_BYTES = CUST.getBytes();
+ private static final ManagedKeyIdentity CUST_GLOBAL_PREFIX =
+ new KeyIdentityPrefixBytesBacked(new Bytes(CUST_BYTES), KEY_SPACE_GLOBAL_BYTES);
+ private static final String KEY_METADATA = "metadata1";
protected final HBaseTestingUtil TEST_UTIL = new HBaseTestingUtil();
@@ -105,11 +113,22 @@ public class TestKeymetaAdminImpl {
protected Path testRootDir;
protected FileSystem fs;
+ protected AsyncAdmin mockAdmin = mock(AsyncAdmin.class);
protected FileSystem mockFileSystem = mock(FileSystem.class);
protected MasterServices mockServer = mock(MasterServices.class);
protected KeymetaAdminImplForTest keymetaAdmin;
KeymetaTableAccessor keymetaAccessor = mock(KeymetaTableAccessor.class);
+ /**
+ * Builds ManagedKeyData with metadata and refresh timestamp using FullKeyIdentity constructor.
+ */
+ private static ManagedKeyData keyData(byte[] cust, byte[] ns, ManagedKeyState state,
+ String metadata, long refreshTs) {
+ ManagedKeyIdentity id =
+ ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(new Bytes(cust), new Bytes(ns), metadata);
+ return new ManagedKeyData(id, null, state, metadata, refreshTs);
+ }
+
@Before
public void setUp() throws Exception {
conf = TEST_UTIL.getConfiguration();
@@ -148,9 +167,11 @@ public void setUp() throws Exception {
@Test
public void testDisabled() throws Exception {
assertThrows(IOException.class, () -> keymetaAdmin
- .enableKeyManagement(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES, KEY_SPACE_GLOBAL));
+ .enableKeyManagement(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.get(), KEY_SPACE_GLOBAL));
assertThrows(IOException.class, () -> keymetaAdmin
- .getManagedKeys(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES, KEY_SPACE_GLOBAL));
+ .getManagedKeys(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.get(), KEY_SPACE_GLOBAL));
+ assertThrows(IOException.class,
+ () -> keymetaAdmin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, KEY_METADATA));
}
}
@@ -180,16 +201,18 @@ public void testEnableAndGet() throws Exception {
MockManagedKeyProvider managedKeyProvider =
(MockManagedKeyProvider) Encryption.getManagedKeyProvider(conf);
managedKeyProvider.setMockedKeyState(CUST, keyState);
- when(keymetaAccessor.getKeyManagementStateMarker(CUST.getBytes(), keySpace))
- .thenReturn(managedKeyProvider.getManagedKey(CUST.getBytes(), keySpace));
+ ManagedKeyIdentity custNamespacePrefix =
+ new KeyIdentityPrefixBytesBacked(new Bytes(CUST_BYTES), new Bytes(Bytes.toBytes(keySpace)));
+ when(keymetaAccessor.getKeyManagementStateMarker(custNamespacePrefix))
+ .thenReturn(managedKeyProvider.getManagedKey(custNamespacePrefix));
ManagedKeyData managedKey = keymetaAdmin.enableKeyManagement(CUST_BYTES, keySpace);
assertNotNull(managedKey);
assertEquals(keyState, managedKey.getKeyState());
- verify(keymetaAccessor).getKeyManagementStateMarker(CUST.getBytes(), keySpace);
+ verify(keymetaAccessor).getKeyManagementStateMarker(custNamespacePrefix);
keymetaAdmin.getManagedKeys(CUST_BYTES, keySpace);
- verify(keymetaAccessor).getAllKeys(CUST.getBytes(), keySpace, false);
+ verify(keymetaAccessor).getAllKeys(custNamespacePrefix, false);
}
@Test
@@ -237,32 +260,21 @@ public void test() throws Exception {
String cust = "invalidcust1";
byte[] custBytes = cust.getBytes();
managedKeyProvider.setMockedKey(cust, null, keySpace);
- IOException ex = assertThrows(IOException.class,
+ KeyException ex = assertThrows(KeyException.class,
() -> keymetaAdmin.enableKeyManagement(custBytes, keySpace));
- assertEquals("Invalid null managed key received from key provider", ex.getMessage());
+ assertTrue(ex.getMessage(),
+ ex.getMessage().contains("Invalid null managed key received from key provider"));
}
}
private class KeymetaAdminImplForTest extends KeymetaAdminImpl {
public KeymetaAdminImplForTest(MasterServices mockServer, KeymetaTableAccessor mockAccessor) {
- super(mockServer);
- }
-
- @Override
- public void addKey(ManagedKeyData keyData) throws IOException {
- keymetaAccessor.addKey(keyData);
- }
-
- @Override
- public List<ManagedKeyData> getAllKeys(byte[] key_cust, String keyNamespace,
- boolean includeMarkers) throws IOException, KeyException {
- return keymetaAccessor.getAllKeys(key_cust, keyNamespace, includeMarkers);
+ super(mockServer, mockAccessor);
}
@Override
- public ManagedKeyData getKeyManagementStateMarker(byte[] key_cust, String keyNamespace)
- throws IOException, KeyException {
- return keymetaAccessor.getKeyManagementStateMarker(key_cust, keyNamespace);
+ protected AsyncAdmin getAsyncAdmin(MasterServices master) {
+ return mockAdmin;
}
}
@@ -292,17 +304,17 @@ public static class TestMiscAPIs extends TestKeymetaAdminImpl {
HBaseClassTestRule.forClass(TestMiscAPIs.class);
private ServerManager mockServerManager = mock(ServerManager.class);
- private AsyncClusterConnection mockConnection;
- private AsyncAdmin mockAsyncAdmin;
@Override
public void setUp() throws Exception {
super.setUp();
- mockConnection = mock(AsyncClusterConnection.class);
- mockAsyncAdmin = mock(AsyncAdmin.class);
when(mockServer.getServerManager()).thenReturn(mockServerManager);
- when(mockServer.getAsyncClusterConnection()).thenReturn(mockConnection);
- when(mockConnection.getAdmin()).thenReturn(mockAsyncAdmin);
+ when(mockAdmin.refreshSystemKeyCacheOnServers(any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+ when(mockAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+ when(mockAdmin.clearManagedKeyDataCacheOnServers(any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
}
@Test
@@ -310,10 +322,10 @@ public void testEnableWithInactiveKey() throws Exception {
MockManagedKeyProvider managedKeyProvider =
(MockManagedKeyProvider) Encryption.getManagedKeyProvider(conf);
managedKeyProvider.setMockedKeyState(CUST, INACTIVE);
- when(keymetaAccessor.getKeyManagementStateMarker(CUST.getBytes(), KEY_SPACE_GLOBAL))
- .thenReturn(managedKeyProvider.getManagedKey(CUST.getBytes(), KEY_SPACE_GLOBAL));
+ when(keymetaAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX))
+ .thenReturn(managedKeyProvider.getManagedKey(CUST_GLOBAL_PREFIX));
- IOException exception = assertThrows(IOException.class,
+ KeyException exception = assertThrows(KeyException.class,
() -> keymetaAdmin.enableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL));
assertTrue(exception.getMessage(),
exception.getMessage().contains("Expected key to be ACTIVE, but got an INACTIVE key"));
@@ -383,9 +395,6 @@ public void testRotateSTKWithNewKey() throws Exception {
// Mock SystemKeyManager to return a new key (non-null)
when(mockServer.rotateSystemKeyIfChanged()).thenReturn(true);
- when(mockAsyncAdmin.refreshSystemKeyCacheOnServers(any()))
- .thenReturn(CompletableFuture.completedFuture(null));
-
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
// Call rotateSTK - should return true since new key was detected
@@ -396,7 +405,7 @@ public void testRotateSTKWithNewKey() throws Exception {
// Verify that rotateSystemKeyIfChanged was called
verify(mockServer).rotateSystemKeyIfChanged();
- verify(mockAsyncAdmin).refreshSystemKeyCacheOnServers(any());
+ verify(mockAdmin).refreshSystemKeyCacheOnServers(any());
}
/**
@@ -444,21 +453,22 @@ public void testRotateSTKWithFailedServerRefresh() throws Exception {
// Mock SystemKeyManager to return a new key (non-null)
when(mockServer.rotateSystemKeyIfChanged()).thenReturn(true);
- CompletableFuture<Void> failedFuture = new CompletableFuture<>();
- failedFuture.completeExceptionally(new IOException("refresh failed"));
- when(mockAsyncAdmin.refreshSystemKeyCacheOnServers(any())).thenReturn(failedFuture);
+ when(mockAdmin.refreshSystemKeyCacheOnServers(any()))
+ .thenReturn(FutureUtils.failedFuture(new IOException("refresh failed")));
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
// Call rotateSTK and expect IOException
IOException ex = assertThrows(IOException.class, () -> admin.rotateSTK());
- assertTrue(ex.getMessage()
- .contains("Failed to initiate System Key cache refresh on one or more region servers"));
+ assertEquals("Failed to initiate System Key cache refresh on one or more region servers",
+ ex.getMessage());
+ assertTrue(ex.getCause() instanceof IOException);
+ assertEquals("refresh failed", ex.getCause().getMessage());
// Verify that rotateSystemKeyIfChanged was called
verify(mockServer).rotateSystemKeyIfChanged();
- verify(mockAsyncAdmin).refreshSystemKeyCacheOnServers(any());
+ verify(mockAdmin).refreshSystemKeyCacheOnServers(any());
}
@Test
@@ -545,16 +555,13 @@ public void testEjectManagedKeyDataCacheEntry() throws Exception {
String keyNamespace = "testNamespace";
String keyMetadata = "testMetadata";
- when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
- .thenReturn(CompletableFuture.completedFuture(null));
-
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
// Call the method
admin.ejectManagedKeyDataCacheEntry(keyCustodian, keyNamespace, keyMetadata);
// Verify the AsyncAdmin method was called
- verify(mockAsyncAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ verify(mockAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
}
/**
@@ -566,19 +573,15 @@ public void testEjectManagedKeyDataCacheEntryWithFailure() throws Exception {
String keyNamespace = "testNamespace";
String keyMetadata = "testMetadata";
- CompletableFuture<Void> failedFuture = new CompletableFuture<>();
- failedFuture.completeExceptionally(new IOException("eject failed"));
- when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
- .thenReturn(failedFuture);
+ when(mockAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(FutureUtils.failedFuture(new IOException("eject failed")));
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
// Call the method and expect IOException
IOException ex = assertThrows(IOException.class,
() -> admin.ejectManagedKeyDataCacheEntry(keyCustodian, keyNamespace, keyMetadata));
-
assertTrue(ex.getMessage().contains("eject failed"));
- verify(mockAsyncAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
}
/**
@@ -586,16 +589,13 @@ public void testEjectManagedKeyDataCacheEntryWithFailure() throws Exception {
*/
@Test
public void testClearManagedKeyDataCache() throws Exception {
- when(mockAsyncAdmin.clearManagedKeyDataCacheOnServers(any()))
- .thenReturn(CompletableFuture.completedFuture(null));
-
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
// Call the method
admin.clearManagedKeyDataCache();
// Verify the AsyncAdmin method was called
- verify(mockAsyncAdmin).clearManagedKeyDataCacheOnServers(any());
+ verify(mockAdmin).clearManagedKeyDataCacheOnServers(any());
}
/**
@@ -603,9 +603,8 @@ public void testClearManagedKeyDataCache() throws Exception {
*/
@Test
public void testClearManagedKeyDataCacheWithFailure() throws Exception {
- CompletableFuture<Void> failedFuture = new CompletableFuture<>();
- failedFuture.completeExceptionally(new IOException("clear failed"));
- when(mockAsyncAdmin.clearManagedKeyDataCacheOnServers(any())).thenReturn(failedFuture);
+ when(mockAdmin.clearManagedKeyDataCacheOnServers(any()))
+ .thenReturn(FutureUtils.failedFuture(new IOException("clear failed")));
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockServer, keymetaAccessor);
@@ -613,7 +612,7 @@ public void testClearManagedKeyDataCacheWithFailure() throws Exception {
IOException ex = assertThrows(IOException.class, () -> admin.clearManagedKeyDataCache());
assertTrue(ex.getMessage().contains("clear failed"));
- verify(mockAsyncAdmin).clearManagedKeyDataCacheOnServers(any());
+ verify(mockAdmin).clearManagedKeyDataCacheOnServers(any());
}
}
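The hunks above replace the hand-rolled "create a future, then `completeExceptionally`" setup with `FutureUtils.failedFuture`. The helper is essentially a one-liner; a self-contained stand-in sketch (on Java 9+, `CompletableFuture.failedFuture` is the equivalent built-in):

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

class FailedFutureDemo {
  /**
   * Sketch of a failed-future factory: returns a future that is already
   * completed exceptionally with the given cause. Hypothetical stand-in for
   * HBase's FutureUtils.failedFuture.
   */
  static <T> CompletableFuture<T> failedFuture(Throwable t) {
    CompletableFuture<T> f = new CompletableFuture<>();
    f.completeExceptionally(t);
    return f;
  }

  public static void main(String[] args) {
    CompletableFuture<Void> f = failedFuture(new IOException("refresh failed"));
    try {
      f.join();
    } catch (CompletionException e) {
      // join() wraps the original cause in a CompletionException
      System.out.println(e.getCause().getMessage()); // refresh failed
    }
  }
}
```

Because the returned future is already completed, mocks stubbed with it fail synchronously, which is what lets the tests assert on the wrapped cause right after the admin call.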
@@ -622,7 +621,7 @@ public void testClearManagedKeyDataCacheWithFailure() throws Exception {
*/
@RunWith(BlockJUnit4ClassRunner.class)
@Category({ MasterTests.class, SmallTests.class })
- public static class TestNewKeyManagementAdminMethods {
+ public static class TestNewKeyManagementAdminMethods extends TestKeymetaAdminImpl {
@ClassRule
public static final HBaseClassTestRule CLASS_RULE =
HBaseClassTestRule.forClass(TestNewKeyManagementAdminMethods.class);
@@ -630,10 +629,6 @@ public static class TestNewKeyManagementAdminMethods {
@Mock
private MasterServices mockMasterServices;
@Mock
- private AsyncAdmin mockAsyncAdmin;
- @Mock
- private AsyncClusterConnection mockAsyncClusterConnection;
- @Mock
private ServerManager mockServerManager;
@Mock
private KeymetaTableAccessor mockAccessor;
@@ -645,8 +640,6 @@ public static class TestNewKeyManagementAdminMethods {
@Before
public void setUp() throws Exception {
MockitoAnnotations.openMocks(this);
- when(mockMasterServices.getAsyncClusterConnection()).thenReturn(mockAsyncClusterConnection);
- when(mockAsyncClusterConnection.getAdmin()).thenReturn(mockAsyncAdmin);
when(mockMasterServices.getServerManager()).thenReturn(mockServerManager);
when(mockServerManager.getOnlineServersList()).thenReturn(new ArrayList<>());
@@ -655,64 +648,153 @@ public void setUp() throws Exception {
conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
when(mockKeyManagementService.getConfiguration()).thenReturn(conf);
when(mockMasterServices.getKeyManagementService()).thenReturn(mockKeyManagementService);
+ when(mockAdmin.refreshSystemKeyCacheOnServers(any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+ when(mockAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
+ when(mockAdmin.clearManagedKeyDataCacheOnServers(any()))
+ .thenReturn(CompletableFuture.completedFuture(null));
}
@Test
public void testDisableKeyManagement() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
- ManagedKeyData activeKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
- ManagedKeyState.ACTIVE, "metadata1", 123L);
+ ManagedKeyData activeKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, KEY_METADATA, 123L);
ManagedKeyData disabledMarker =
- new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, ManagedKeyState.DISABLED);
+ new ManagedKeyData(CUST_GLOBAL_PREFIX, ManagedKeyState.DISABLED);
- when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(activeKey)
- .thenReturn(disabledMarker);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(Arrays.asList(activeKey));
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(disabledMarker);
ManagedKeyData result = admin.disableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL);
assertNotNull(result);
assertEquals(ManagedKeyState.DISABLED, result.getKeyState());
- verify(mockAccessor, times(2)).getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL);
- verify(mockAccessor).updateActiveState(activeKey, ManagedKeyState.INACTIVE);
-
- // Repeat the call for idempotency check.
+ verify(mockAccessor).addKeyManagementStateMarker(CUST_GLOBAL_PREFIX, DISABLED);
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ verify(mockAccessor).updateActiveState(eq(activeKey), eq(ManagedKeyState.INACTIVE), eq(true));
+ verify(mockAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), eq(CUST_BYTES),
+ eq(KEY_SPACE_GLOBAL), eq(KEY_METADATA));
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+
+ // Repeat the call for idempotency check: already-disabled keys are skipped.
clearInvocations(mockAccessor);
- when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(disabledMarker);
+ ManagedKeyData inactiveDisabledKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.INACTIVE_DISABLED, KEY_METADATA, 123L);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false))
+ .thenReturn(Arrays.asList(inactiveDisabledKey));
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(disabledMarker);
result = admin.disableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL);
assertNotNull(result);
assertEquals(ManagedKeyState.DISABLED, result.getKeyState());
- verify(mockAccessor, times(2)).getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL);
- verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor).addKeyManagementStateMarker(CUST_GLOBAL_PREFIX, DISABLED);
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ // Keys already in DISABLED external state are skipped
+ verify(mockAccessor, never()).disableKey(any(ManagedKeyData.class), anyBoolean());
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ }
+
+ @Test
+ public void testDisableKeyManagementWithMultipleKeys() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData activeKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ ManagedKeyData inactiveKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.INACTIVE, "metadata2", 124L);
+ ManagedKeyData disabledMarker =
+ new ManagedKeyData(CUST_GLOBAL_PREFIX, ManagedKeyState.DISABLED);
+
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false))
+ .thenReturn(Arrays.asList(activeKey, inactiveKey));
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(disabledMarker);
+
+ ManagedKeyData result = admin.disableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ assertNotNull(result);
+ assertEquals(ManagedKeyState.DISABLED, result.getKeyState());
+ verify(mockAccessor).addKeyManagementStateMarker(CUST_GLOBAL_PREFIX, DISABLED);
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ verify(mockAccessor).updateActiveState(eq(activeKey), eq(ManagedKeyState.INACTIVE), eq(true));
+ verify(mockAccessor).updateActiveState(eq(inactiveKey), eq(ManagedKeyState.INACTIVE),
+ eq(true));
+ verify(mockAdmin, times(1)).ejectManagedKeyDataCacheEntryOnServers(any(), eq(CUST_BYTES),
+ eq(KEY_SPACE_GLOBAL), any());
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ }
+
+ @Test
+ public void testDisableKeyManagementNoKeys() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData disabledMarker =
+ new ManagedKeyData(CUST_GLOBAL_PREFIX, ManagedKeyState.DISABLED);
+
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(new ArrayList<>());
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(disabledMarker);
+
+ ManagedKeyData result = admin.disableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ assertNotNull(result);
+ assertEquals(ManagedKeyState.DISABLED, result.getKeyState());
+ verify(mockAccessor).addKeyManagementStateMarker(CUST_GLOBAL_PREFIX, DISABLED);
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ verify(mockAccessor, never()).disableKey(any(ManagedKeyData.class), anyBoolean());
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ }
+
+ @Test
+ public void testDisableKeyManagementSkipsNullMetadataAndReturnsSyntheticMarker() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ // FAILED marker-style key with null metadata should be skipped by disable loop.
+ ManagedKeyData nullMetadataKey =
+ new ManagedKeyData(CUST_GLOBAL_PREFIX, ManagedKeyState.FAILED, 123L);
+
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false))
+ .thenReturn(Arrays.asList(nullMetadataKey));
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(null);
+
+ ManagedKeyData result = admin.disableKeyManagement(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ assertNotNull(result);
+ assertEquals(ManagedKeyState.DISABLED, result.getKeyState());
+ verify(mockAccessor).addKeyManagementStateMarker(CUST_GLOBAL_PREFIX, DISABLED);
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ verify(mockAccessor, never()).updateActiveState(any(), any(), anyBoolean());
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
}
@Test
public void testDisableManagedKey() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
- ManagedKeyData disabledKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData disabledKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.DISABLED, "metadata1", 123L);
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash("metadata1");
- when(mockAccessor.getKey(any(), any(), any())).thenReturn(disabledKey);
-
- CompletableFuture<Void> successFuture = CompletableFuture.completedFuture(null);
- when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
- .thenReturn(successFuture);
+ when(mockAccessor.getKey(disabledKey.getKeyIdentity())).thenReturn(disabledKey);
+ byte[] keyMetadataHash = disabledKey.getPartialIdentity();
IOException exception = assertThrows(IOException.class,
() -> admin.disableManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, keyMetadataHash));
assertTrue(exception.getMessage(),
exception.getMessage().contains("Key is already disabled"));
verify(mockAccessor, never()).disableKey(any(ManagedKeyData.class));
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
}
@Test
public void testDisableManagedKeyNotFound() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash("metadata1");
+ ManagedKeyIdentity id = ManagedKeyIdentityUtils
+ .fullKeyIdentityFromMetadata(new Bytes(CUST_BYTES), KEY_SPACE_GLOBAL_BYTES, "metadata1");
+ byte[] keyMetadataHash = id.getPartialIdentityView().copyBytesIfNecessary();
// Return null to simulate key not found
- when(mockAccessor.getKey(any(), any(), any())).thenReturn(null);
+ when(mockAccessor.getKey(id)).thenReturn(null);
IOException exception = assertThrows(IOException.class,
() -> admin.disableManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, keyMetadataHash));
@@ -720,7 +802,7 @@ public void testDisableManagedKeyNotFound() throws Exception {
exception.getMessage()
.contains("Key not found for (custodian: Y3VzdDE=, namespace: *) with metadata hash: "
+ ManagedKeyProvider.encodeToStr(keyMetadataHash)));
- verify(mockAccessor).getKey(CUST_BYTES, KEY_SPACE_GLOBAL, keyMetadataHash);
+ verify(mockAccessor).getKey(id);
}
@Test
@@ -728,33 +810,33 @@ public void testRotateManagedKeyNoActiveKey() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
// Return null to simulate no active key exists
- when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(null);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(null);
IOException exception =
assertThrows(IOException.class, () -> admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL));
assertTrue(exception.getMessage().contains("No active key found"));
- verify(mockAccessor).getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL);
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
}
@Test
public void testRotateManagedKey() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
- ManagedKeyData currentKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData currentKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
- ManagedKeyData newKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData newKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata2", 124L);
- when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(currentKey);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(currentKey);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.getManagedKey(any(), any())).thenReturn(newKey);
+ when(mockProvider.getManagedKey(any(ManagedKeyIdentity.class))).thenReturn(newKey);
ManagedKeyData result = admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL);
assertNotNull(result);
assertEquals(newKey, result);
verify(mockAccessor).addKey(newKey);
- verify(mockAccessor).updateActiveState(currentKey, ManagedKeyState.INACTIVE);
+ verify(mockAccessor).updateActiveState(currentKey, ManagedKeyState.INACTIVE, true);
}
@Test
@@ -762,18 +844,18 @@ public void testRefreshManagedKeysWithNoStateChange() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
List<ManagedKeyData> keys = new ArrayList<>();
- ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData key1 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
keys.add(key1);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(key1);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any())).thenReturn(key1);
admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
- verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
- verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ verify(mockAccessor, never()).updateActiveState(any(), any(), anyBoolean());
verify(mockAccessor, never()).disableKey(any());
}
@@ -781,24 +863,21 @@ public void testRefreshManagedKeysWithNoStateChange() throws Exception {
public void testRotateManagedKeyIgnoresFailedKey() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
- ManagedKeyData currentKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData currentKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
- ManagedKeyData newKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData newKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.FAILED, "metadata1", 124L);
- when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(currentKey);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(currentKey);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.getManagedKey(any(), any())).thenReturn(newKey);
- // Mock the AsyncAdmin for ejectManagedKeyDataCacheEntry
- when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
- .thenReturn(CompletableFuture.completedFuture(null));
+ when(mockProvider.getManagedKey(any(ManagedKeyIdentity.class))).thenReturn(newKey);
ManagedKeyData result = admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL);
assertNull(result);
// Verify that the active key was not marked as inactive
verify(mockAccessor, never()).addKey(any());
- verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).updateActiveState(any(), any(), anyBoolean());
}
@Test
@@ -806,20 +885,19 @@ public void testRotateManagedKeyNoRotation() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
// Current and new keys have the same metadata hash, so no rotation should occur
- ManagedKeyData currentKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData currentKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
- when(mockAccessor.getKeyManagementStateMarker(any(), any())).thenReturn(currentKey);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(currentKey);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.getManagedKey(any(), any())).thenReturn(currentKey);
+ when(mockProvider.getManagedKey(any(ManagedKeyIdentity.class))).thenReturn(currentKey);
ManagedKeyData result = admin.rotateManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL);
assertNull(result);
- verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).updateActiveState(any(), any(), anyBoolean());
verify(mockAccessor, never()).addKey(any());
- verify(mockAsyncAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(),
- any());
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
}
@Test
@@ -827,22 +905,38 @@ public void testRefreshManagedKeysWithStateChange() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
List<ManagedKeyData> keys = new ArrayList<>();
- ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData key1 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
keys.add(key1);
// Refreshed key has a different state (INACTIVE)
- ManagedKeyData refreshedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData refreshedKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.INACTIVE, "metadata1", 123L);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(refreshedKey);
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(refreshedKey);
+
+ admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
+
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ verify(mockAccessor).updateActiveState(key1, ManagedKeyState.INACTIVE, false);
+ }
+
+ @Test
+ public void testRefreshManagedKeysReturnsEarlyWhenDisabledMarkerPresent() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData disabledMarker = new ManagedKeyData(CUST_GLOBAL_PREFIX, ManagedKeyState.DISABLED);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(disabledMarker);
admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
- verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
- verify(mockAccessor).updateActiveState(key1, ManagedKeyState.INACTIVE);
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ verify(mockAccessor, never()).getAllKeys(any(), anyBoolean());
+ verify(mockAccessor, never()).getKeyProvider();
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
}
@Test
@@ -850,27 +944,25 @@ public void testRefreshManagedKeysWithDisabledState() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
List<ManagedKeyData> keys = new ArrayList<>();
- ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData key1 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
keys.add(key1);
// Refreshed key is DISABLED
- ManagedKeyData disabledKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData disabledKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.DISABLED, "metadata1", 123L);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.unwrapKey(any(), any())).thenReturn(disabledKey);
- // Mock the ejectManagedKeyDataCacheEntry to cover line 263
- when(mockAsyncAdmin.ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any()))
- .thenReturn(CompletableFuture.completedFuture(null));
+ when(mockProvider.unwrapKey(any(ManagedKeyIdentity.class), any(), any()))
+ .thenReturn(disabledKey);
admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
- verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
verify(mockAccessor).disableKey(key1);
// Verify cache ejection was called (line 263)
- verify(mockAsyncAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ verify(mockAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
}
@Test
@@ -878,22 +970,24 @@ public void testRefreshManagedKeysWithException() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
List<ManagedKeyData> keys = new ArrayList<>();
- ManagedKeyData key1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData key1 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
- ManagedKeyData key2 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData key2 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata2", 124L);
- ManagedKeyData refreshedKey1 = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData refreshedKey1 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.INACTIVE, "metadata1", 123L);
keys.add(key1);
keys.add(key2);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
// First key throws IOException, second key should still be refreshed
doThrow(new IOException("Simulated error")).when(mockAccessor)
- .updateActiveState(any(ManagedKeyData.class), any(ManagedKeyState.class));
- when(mockProvider.unwrapKey(key1.getKeyMetadata(), null)).thenReturn(refreshedKey1);
- when(mockProvider.unwrapKey(key2.getKeyMetadata(), null)).thenReturn(key2);
+ .updateActiveState(any(ManagedKeyData.class), any(ManagedKeyState.class), anyBoolean());
+ when(mockProvider.unwrapKey(key1.getKeyIdentity(), key1.getKeyMetadata(), null))
+ .thenReturn(refreshedKey1);
+ when(mockProvider.unwrapKey(key2.getKeyIdentity(), key2.getKeyMetadata(), null))
+ .thenReturn(key2);
// Refresh should continue past the failing key; the first failure is rethrown at the end
IOException exception = assertThrows(IOException.class,
@@ -902,29 +996,64 @@ public void testRefreshManagedKeysWithException() throws Exception {
assertTrue(exception.getCause() instanceof IOException);
assertTrue(exception.getCause().getMessage(),
exception.getCause().getMessage().contains("Simulated error"));
- verify(mockAccessor).getAllKeys(CUST_BYTES, KEY_SPACE_GLOBAL, false);
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
verify(mockAccessor, never()).disableKey(any());
- verify(mockProvider).unwrapKey(key1.getKeyMetadata(), null);
- verify(mockProvider).unwrapKey(key2.getKeyMetadata(), null);
- verify(mockAsyncAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(),
- any());
+ verify(mockProvider).unwrapKey(key1.getKeyIdentity(), key1.getKeyMetadata(), null);
+ verify(mockProvider).unwrapKey(key2.getKeyIdentity(), key2.getKeyMetadata(), null);
+ verify(mockAccessor).updateRefreshTimestamp(key2);
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ }
+
+ @Test
+ public void testRefreshManagedKeysWithMultipleExceptionsKeepsFirstException() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+
+ ManagedKeyData key1 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, "metadata1", 123L);
+ ManagedKeyData key2 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, "metadata2", 124L);
+ ManagedKeyData refreshedKey1 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.INACTIVE, "metadata1", 125L);
+ ManagedKeyData refreshedKey2 = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.INACTIVE, "metadata2", 126L);
+ List<ManagedKeyData> keys = Arrays.asList(key1, key2);
+
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ doThrow(new IOException("first refresh failure"), new IOException("second refresh failure"))
+ .when(mockAccessor).updateActiveState(any(ManagedKeyData.class), any(ManagedKeyState.class),
+ anyBoolean());
+ when(mockProvider.unwrapKey(key1.getKeyIdentity(), key1.getKeyMetadata(), null))
+ .thenReturn(refreshedKey1);
+ when(mockProvider.unwrapKey(key2.getKeyIdentity(), key2.getKeyMetadata(), null))
+ .thenReturn(refreshedKey2);
+
+ IOException exception = assertThrows(IOException.class,
+ () -> admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL));
+
+ assertNotNull(exception.getCause());
+ assertTrue(exception.getCause().getMessage().contains("first refresh failure"));
+ verify(mockAccessor).getAllKeys(CUST_GLOBAL_PREFIX, false);
+ verify(mockProvider).unwrapKey(key1.getKeyIdentity(), key1.getKeyMetadata(), null);
+ verify(mockProvider).unwrapKey(key2.getKeyIdentity(), key2.getKeyMetadata(), null);
}
@Test
public void testRefreshKeyWithMetadataValidationFailure() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
- ManagedKeyData originalKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData originalKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
// Refreshed key has different metadata (which should not happen and indicates a serious
// error)
- ManagedKeyData refreshedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData refreshedKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata2", 124L);
List<ManagedKeyData> keys = Arrays.asList(originalKey);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.unwrapKey(originalKey.getKeyMetadata(), null)).thenReturn(refreshedKey);
+ when(mockProvider.unwrapKey(originalKey.getKeyIdentity(), originalKey.getKeyMetadata(), null))
+ .thenReturn(refreshedKey);
// The metadata mismatch triggers a KeyException which gets wrapped in an IOException
IOException exception = assertThrows(IOException.class,
@@ -932,33 +1061,38 @@ public void testRefreshKeyWithMetadataValidationFailure() throws Exception {
assertTrue(exception.getCause() instanceof KeyException);
assertTrue(exception.getCause().getMessage(),
exception.getCause().getMessage().contains("Key metadata changed during refresh"));
- verify(mockProvider).unwrapKey(originalKey.getKeyMetadata(), null);
+ verify(mockProvider).unwrapKey(originalKey.getKeyIdentity(), originalKey.getKeyMetadata(),
+ null);
// No state updates should happen due to the exception
- verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).updateActiveState(any(), any(), anyBoolean());
verify(mockAccessor, never()).disableKey(any());
+ verify(mockAccessor, never()).updateRefreshTimestamp(any());
}
@Test
public void testRefreshKeyWithFailedStateIgnored() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
- ManagedKeyData originalKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData originalKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 123L);
// Refreshed key is in FAILED state (provider issue)
- ManagedKeyData failedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData failedKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.FAILED, "metadata1", 124L);
List<ManagedKeyData> keys = Arrays.asList(originalKey);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockProvider.unwrapKey(originalKey.getKeyMetadata(), null)).thenReturn(failedKey);
+ when(mockProvider.unwrapKey(originalKey.getKeyIdentity(), originalKey.getKeyMetadata(), null))
+ .thenReturn(failedKey);
admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
// Should not update state when refreshed key is FAILED
- verify(mockAccessor, never()).updateActiveState(any(), any());
+ verify(mockAccessor, never()).updateActiveState(any(), any(), anyBoolean());
verify(mockAccessor, never()).disableKey(any());
- verify(mockProvider).unwrapKey(originalKey.getKeyMetadata(), null);
+ verify(mockProvider).unwrapKey(originalKey.getKeyIdentity(), originalKey.getKeyMetadata(),
+ null);
+ verify(mockAccessor).updateRefreshTimestamp(originalKey);
}
@Test
@@ -966,24 +1100,24 @@ public void testRefreshKeyRecoveryFromPriorEnableFailure() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
// FAILED key with null metadata (lines 119-135 in KeyManagementUtils)
- ManagedKeyData failedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, FAILED, 123L);
+ ManagedKeyData failedKey = new ManagedKeyData(
+ new KeyIdentityPrefixBytesBacked(new Bytes(CUST_BYTES), KEY_SPACE_GLOBAL_BYTES), FAILED,
+ 123L);
// Provider returns a recovered key
- ManagedKeyData recoveredKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, null,
+ ManagedKeyData recoveredKey = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
ManagedKeyState.ACTIVE, "metadata1", 124L);
List<ManagedKeyData> keys = Arrays.asList(failedKey);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockAccessor.getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL))
- .thenReturn(failedKey);
- when(mockProvider.getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace()))
- .thenReturn(recoveredKey);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(failedKey);
+ when(mockProvider.getManagedKey(eq(failedKey.getKeyIdentity()))).thenReturn(recoveredKey);
admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
// Should call getManagedKey for FAILED key with null metadata (line 125)
- verify(mockProvider).getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace());
+ verify(mockProvider).getManagedKey(eq(failedKey.getKeyIdentity()));
// Should add recovered key (line 130)
verify(mockAccessor).addKey(recoveredKey);
}
@@ -993,24 +1127,141 @@ public void testRefreshKeyNoRecoveryFromPriorEnableFailure() throws Exception {
KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
// FAILED key with null metadata
- ManagedKeyData failedKey = new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, FAILED, 123L);
+ ManagedKeyData failedKey = new ManagedKeyData(
+ new KeyIdentityPrefixBytesBacked(new Bytes(CUST_BYTES), KEY_SPACE_GLOBAL_BYTES), FAILED,
+ 123L);
// Provider returns another FAILED key (recovery didn't work)
- ManagedKeyData stillFailedKey =
- new ManagedKeyData(CUST_BYTES, KEY_SPACE_GLOBAL, ManagedKeyState.FAILED, 124L);
+ ManagedKeyData stillFailedKey = new ManagedKeyData(
+ new KeyIdentityPrefixBytesBacked(new Bytes(CUST_BYTES), KEY_SPACE_GLOBAL_BYTES),
+ ManagedKeyState.FAILED, 124L);
List<ManagedKeyData> keys = Arrays.asList(failedKey);
- when(mockAccessor.getAllKeys(any(), any(), anyBoolean())).thenReturn(keys);
+ when(mockAccessor.getAllKeys(CUST_GLOBAL_PREFIX, false)).thenReturn(keys);
when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
- when(mockAccessor.getKeyManagementStateMarker(CUST_BYTES, KEY_SPACE_GLOBAL))
- .thenReturn(failedKey);
- when(mockProvider.getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace()))
- .thenReturn(stillFailedKey);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(failedKey);
+ when(mockProvider.getManagedKey(eq(failedKey.getKeyIdentity()))).thenReturn(stillFailedKey);
admin.refreshManagedKeys(CUST_BYTES, KEY_SPACE_GLOBAL);
// Should call getManagedKey for FAILED key with null metadata
- verify(mockProvider).getManagedKey(failedKey.getKeyCustodian(), failedKey.getKeyNamespace());
+ verify(mockProvider).getManagedKey(eq(failedKey.getKeyIdentity()));
+ verify(mockAccessor, never()).addKey(any());
+ verify(mockAccessor, never()).updateRefreshTimestamp(any());
+ }
+
+ @Test
+ public void testSetManagedKeySuccess() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+ ManagedKeyData unwrapped = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, KEY_METADATA, 123L);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null)).thenReturn(unwrapped);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(null);
+
+ ManagedKeyData result = admin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, KEY_METADATA);
+
+ assertEquals(unwrapped, result);
+ verify(mockAccessor).addKey(unwrapped);
+ verify(mockProvider).unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null);
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ }
+
+ @Test
+ public void testSetManagedKeyUnchangedActiveKeyAddsAndEjectsCurrent() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+ ManagedKeyData unwrapped = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, KEY_METADATA, 123L);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null)).thenReturn(unwrapped);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(unwrapped);
+
+ ManagedKeyData result = admin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, KEY_METADATA);
+
+ assertEquals(unwrapped, result);
+ verify(mockAccessor).addKey(unwrapped);
+ verify(mockProvider).unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null);
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ verify(mockAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), eq(CUST_BYTES),
+ eq(KEY_SPACE_GLOBAL), eq(KEY_METADATA));
+ }
+
+ @Test
+ public void testSetManagedKeyDifferentInactiveKeyNoCacheEject() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+ ManagedKeyData currentActive = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, "metadata0", 122L);
+ ManagedKeyData unwrapped = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.INACTIVE, KEY_METADATA, 123L);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null)).thenReturn(unwrapped);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(currentActive);
+
+ ManagedKeyData result = admin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, KEY_METADATA);
+
+ assertEquals(unwrapped, result);
+ verify(mockAccessor).addKey(unwrapped);
+ verify(mockProvider).unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null);
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ verify(mockAdmin, never()).ejectManagedKeyDataCacheEntryOnServers(any(), any(), any(), any());
+ }
+
+ @Test
+ public void testSetManagedKeyDifferentActiveKeyEjectsCurrentMetadata() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+ ManagedKeyData currentActive = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, "metadata0", 122L);
+ ManagedKeyData unwrapped = keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(),
+ ManagedKeyState.ACTIVE, KEY_METADATA, 123L);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null)).thenReturn(unwrapped);
+ when(mockAccessor.getKeyManagementStateMarker(CUST_GLOBAL_PREFIX)).thenReturn(currentActive);
+
+ ManagedKeyData result = admin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, KEY_METADATA);
+
+ assertEquals(unwrapped, result);
+ verify(mockAccessor).addKey(unwrapped);
+ verify(mockProvider).unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null);
+ verify(mockAccessor).getKeyManagementStateMarker(CUST_GLOBAL_PREFIX);
+ verify(mockAdmin).ejectManagedKeyDataCacheEntryOnServers(any(), eq(CUST_BYTES),
+ eq(KEY_SPACE_GLOBAL), eq("metadata0"));
+ }
+
+ @Test
+ public void testSetManagedKeyScopeMismatch() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+ ManagedKeyData wrongNs =
+ keyData(CUST_BYTES, Bytes.toBytes("otherNs"), ManagedKeyState.ACTIVE, KEY_METADATA, 123L);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null)).thenReturn(wrongNs);
+
+ KeyException ex = assertThrows(KeyException.class,
+ () -> admin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, KEY_METADATA));
+ assertTrue(ex.getMessage().contains("scope does not match request"));
+ verify(mockAccessor, never()).addKey(any());
+ }
+
+ @Test
+ public void testSetManagedKeyEmptyMetadata() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ IllegalArgumentException ex = assertThrows(IllegalArgumentException.class,
+ () -> admin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, ""));
+ assertTrue(ex.getMessage().contains("key_metadata must not be empty"));
+ verify(mockProvider, never()).unwrapKey(any(ManagedKeyIdentity.class), any(), any());
+ }
+
+ @Test
+ public void testSetManagedKeyInvalidFromProvider() throws Exception {
+ KeymetaAdminImplForTest admin = new KeymetaAdminImplForTest(mockMasterServices, mockAccessor);
+ ManagedKeyData disabled =
+ keyData(CUST_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(), DISABLED, KEY_METADATA, 123L);
+ when(mockAccessor.getKeyProvider()).thenReturn(mockProvider);
+ when(mockProvider.unwrapKey(CUST_GLOBAL_PREFIX, KEY_METADATA, null)).thenReturn(disabled);
+
+ assertThrows(KeyException.class,
+ () -> admin.setManagedKey(CUST_BYTES, KEY_SPACE_GLOBAL, KEY_METADATA));
verify(mockAccessor, never()).addKey(any());
}
@@ -1019,59 +1270,19 @@ private class KeymetaAdminImplForTest extends KeymetaAdminImpl {
public KeymetaAdminImplForTest(MasterServices server, KeymetaTableAccessor accessor)
throws IOException {
- super(server);
+ super(server, accessor);
this.accessor = accessor;
}
@Override
protected AsyncAdmin getAsyncAdmin(MasterServices master) {
- return mockAsyncAdmin;
- }
-
- @Override
- public List<ManagedKeyData> getAllKeys(byte[] keyCust, String keyNamespace,
- boolean includeMarkers) throws IOException, KeyException {
- return accessor.getAllKeys(keyCust, keyNamespace, includeMarkers);
- }
-
- @Override
- public ManagedKeyData getKey(byte[] keyCust, String keyNamespace, byte[] keyMetadataHash)
- throws IOException, KeyException {
- return accessor.getKey(keyCust, keyNamespace, keyMetadataHash);
- }
-
- @Override
- public void disableKey(ManagedKeyData keyData) throws IOException {
- accessor.disableKey(keyData);
- }
-
- @Override
- public ManagedKeyData getKeyManagementStateMarker(byte[] keyCust, String keyNamespace)
- throws IOException, KeyException {
- return accessor.getKeyManagementStateMarker(keyCust, keyNamespace);
- }
-
- @Override
- public void addKeyManagementStateMarker(byte[] keyCust, String keyNamespace,
- ManagedKeyState state) throws IOException {
- accessor.addKeyManagementStateMarker(keyCust, keyNamespace, state);
+ return mockAdmin;
}
@Override
public ManagedKeyProvider getKeyProvider() {
return accessor.getKeyProvider();
}
-
- @Override
- public void addKey(ManagedKeyData keyData) throws IOException {
- accessor.addKey(keyData);
- }
-
- @Override
- public void updateActiveState(ManagedKeyData keyData, ManagedKeyState newState)
- throws IOException {
- accessor.updateActiveState(keyData, newState);
- }
}
}
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java
index 09e409b11e7d..ed95be56f9a5 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyAccessorAndManager.java
@@ -49,9 +49,12 @@
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.keymeta.SystemKeyAccessor;
import org.apache.hadoop.hbase.testclassification.MasterTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.apache.hadoop.hbase.util.Pair;
import org.junit.After;
@@ -362,12 +365,14 @@ public void testLoadSystemKeySuccess() throws Exception {
// Create test key data
Key testKey = new SecretKeySpec("test-key-bytes".getBytes(), "AES");
- ManagedKeyData testKeyData = new ManagedKeyData("custodian".getBytes(), "namespace", testKey,
- ManagedKeyState.ACTIVE, testMetadata, 1000L);
+ ManagedKeyIdentity fullKeyIdentity = ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(
+ new Bytes("custodian".getBytes()), new Bytes("namespace".getBytes()), testMetadata);
+ ManagedKeyData testKeyData =
+ new ManagedKeyData(fullKeyIdentity, testKey, ManagedKeyState.ACTIVE, testMetadata, 1000L);
// Mock key provider
ManagedKeyProvider realProvider = mock(ManagedKeyProvider.class);
- when(realProvider.unwrapKey(testMetadata, null)).thenReturn(testKeyData);
+ when(realProvider.unwrapKey(null, testMetadata, null)).thenReturn(testKeyData);
// Create testable SystemKeyAccessor that overrides both loadKeyMetadata and getKeyProvider
SystemKeyAccessor testAccessor = new SystemKeyAccessor(mockMaster) {
@@ -387,7 +392,7 @@ public ManagedKeyProvider getKeyProvider() {
assertEquals(testKeyData, result);
// Verify the key provider was called correctly
- verify(realProvider).unwrapKey(testMetadata, null);
+ verify(realProvider).unwrapKey(null, testMetadata, null);
}
@Test(expected = RuntimeException.class)
@@ -397,7 +402,7 @@ public void testLoadSystemKeyNullResult() throws Exception {
// Mock key provider to return null
ManagedKeyProvider realProvider = mock(ManagedKeyProvider.class);
- when(realProvider.unwrapKey(testMetadata, null)).thenReturn(null);
+ when(realProvider.unwrapKey(null, testMetadata, null)).thenReturn(null);
SystemKeyAccessor testAccessor = new SystemKeyAccessor(mockMaster) {
@Override
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java
index 54bfb5e0a120..b72ffd16346e 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestSystemKeyManager.java
@@ -30,6 +30,7 @@
import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider;
import org.apache.hadoop.hbase.io.crypto.ManagedKeyState;
import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.keymeta.ManagedKeyTestBase;
import org.apache.hadoop.hbase.keymeta.SystemKeyAccessor;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
@@ -78,7 +79,9 @@ public void testSystemKeyInitializationAndRotation() throws Exception {
assertEquals(initialSystemKey,
systemKeyAccessor.loadSystemKey(systemKeyAccessor.getAllSystemKeyFiles().get(1)));
assertEquals(initialSystemKey,
- systemKeyCache.getSystemKeyByChecksum(initialSystemKey.getKeyChecksum()));
+ systemKeyCache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(initialSystemKey.getKeyCustodian(),
+ initialSystemKey.getKeyNamespaceBytes(), initialSystemKey.getPartialIdentity())));
}
@Test
@@ -102,7 +105,10 @@ private ManagedKeyData validateInitialState(HMaster master, MockManagedKeyProvid
assertNotNull(systemKeyCache);
ManagedKeyData clusterKey = systemKeyCache.getLatestSystemKey();
assertEquals(pbeKeyProvider.getSystemKey(master.getClusterId().getBytes()), clusterKey);
- assertEquals(clusterKey, systemKeyCache.getSystemKeyByChecksum(clusterKey.getKeyChecksum()));
+ assertEquals(clusterKey,
+ systemKeyCache.getSystemKeyByIdentity(
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(clusterKey.getKeyCustodian(),
+ clusterKey.getKeyNamespaceBytes(), clusterKey.getPartialIdentity())));
return clusterKey;
}
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java
index 9efce81d9573..0baf67669782 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRSRpcServices.java
@@ -36,11 +36,12 @@
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
-import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.ipc.RpcCall;
import org.apache.hadoop.hbase.ipc.RpcServer;
import org.apache.hadoop.hbase.keymeta.KeyManagementService;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentity;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.testclassification.MediumTests;
import org.apache.hadoop.hbase.testclassification.RegionServerTests;
import org.apache.hadoop.hbase.util.Bytes;
@@ -232,7 +233,7 @@ public void testEjectManagedKeyDataCacheEntry() throws Exception {
when(mockServer.getKeyManagementService()).thenReturn(mockKeyService);
when(mockKeyService.getManagedKeyDataCache()).thenReturn(mockCache);
// Mock the ejectKey to return true
- when(mockCache.ejectKey(any(), any(), any())).thenReturn(true);
+ when(mockCache.ejectKey(any(ManagedKeyIdentity.class))).thenReturn(true);
// Create RSRpcServices
RSRpcServices rpcServices = new RSRpcServices(mockServer);
@@ -241,7 +242,7 @@ public void testEjectManagedKeyDataCacheEntry() throws Exception {
byte[] keyCustodian = Bytes.toBytes("testCustodian");
String keyNamespace = "testNamespace";
String keyMetadata = "testMetadata";
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata);
+ byte[] keyMetadataHash = ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata);
ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
.setKeyCustNs(ManagedKeyRequest.newBuilder().setKeyCust(ByteString.copyFrom(keyCustodian))
@@ -258,7 +259,8 @@ public void testEjectManagedKeyDataCacheEntry() throws Exception {
assertTrue("Response should indicate key was ejected", response.getBoolMsg());
// Verify that ejectKey was called on the cache
- verify(mockCache).ejectKey(keyCustodian, keyNamespace, keyMetadataHash);
+ verify(mockCache).ejectKey(ManagedKeyIdentityUtils.fullKeyIdentityFromMetadata(
+ new Bytes(keyCustodian), new Bytes(keyNamespace.getBytes()), keyMetadata));
LOG.info("ejectManagedKeyDataCacheEntry test completed successfully");
}
@@ -287,7 +289,7 @@ public void testEjectManagedKeyDataCacheEntryWhenServerStopped() throws Exceptio
byte[] keyCustodian = Bytes.toBytes("testCustodian");
String keyNamespace = "testNamespace";
String keyMetadata = "testMetadata";
- byte[] keyMetadataHash = ManagedKeyData.constructMetadataHash(keyMetadata);
+ byte[] keyMetadataHash = ManagedKeyIdentityUtils.constructMetadataHash(keyMetadata);
ManagedKeyEntryRequest request = ManagedKeyEntryRequest.newBuilder()
.setKeyCustNs(ManagedKeyRequest.newBuilder().setKeyCust(ByteString.copyFrom(keyCustodian))
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
index 29040ad58bec..d08045a590b6 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
@@ -122,8 +122,7 @@ public void testOpenErrorMessageReference() throws IOException {
storeFileTrackerForTest.createReference(r, p);
StoreFileInfo sfi = storeFileTrackerForTest.getStoreFileInfo(p, true);
try {
- ReaderContext context =
- sfi.createReaderContext(false, 1000, ReaderType.PREAD, null, null, null);
+ ReaderContext context = sfi.createReaderContext(false, 1000, ReaderType.PREAD, null, null);
sfi.createReader(context, null);
throw new IllegalStateException();
} catch (FileNotFoundException fnfe) {
diff --git a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java
index e648d8a1c217..8591e7b0f31d 100644
--- a/hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java
+++ b/hbase-server/src/test/java/org/apache/hadoop/hbase/security/TestSecurityUtil.java
@@ -17,12 +17,14 @@
*/
package org.apache.hadoop.hbase.security;
+import static org.apache.hadoop.hbase.io.crypto.ManagedKeyData.KEY_SPACE_GLOBAL_BYTES;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertNull;
import static org.junit.Assert.assertThrows;
import static org.junit.Assert.assertTrue;
+import static org.mockito.ArgumentMatchers.argThat;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
@@ -49,11 +51,13 @@
import org.apache.hadoop.hbase.io.crypto.ManagedKeyData;
import org.apache.hadoop.hbase.io.crypto.MockAesKeyProvider;
import org.apache.hadoop.hbase.io.hfile.FixedFileTrailer;
-import org.apache.hadoop.hbase.keymeta.KeyNamespaceUtil;
+import org.apache.hadoop.hbase.keymeta.KeyIdentitySingleArrayBacked;
import org.apache.hadoop.hbase.keymeta.ManagedKeyDataCache;
+import org.apache.hadoop.hbase.keymeta.ManagedKeyIdentityUtils;
import org.apache.hadoop.hbase.keymeta.SystemKeyCache;
import org.apache.hadoop.hbase.testclassification.SecurityTests;
import org.apache.hadoop.hbase.testclassification.SmallTests;
+import org.apache.hadoop.hbase.util.Bytes;
import org.junit.Before;
import org.junit.ClassRule;
import org.junit.Test;
@@ -78,10 +82,15 @@ public class TestSecurityUtil {
// Test constants to eliminate magic strings and improve maintainability
protected static final String TEST_NAMESPACE = "test-namespace";
+ protected static final byte[] TEST_NAMESPACE_BYTES = Bytes.toBytes(TEST_NAMESPACE);
protected static final String TEST_FAMILY = "test-family";
+ /** Namespace used in read-path tests (trailer KEK identity and cache lookup). */
+ protected static final String READ_PATH_TEST_NAMESPACE = "test:table/" + TEST_FAMILY;
+ protected static final byte[] READ_PATH_TEST_NAMESPACE_BYTES =
+ Bytes.toBytes(READ_PATH_TEST_NAMESPACE);
protected static final String HBASE_KEY = "hbase";
protected static final String TEST_KEK_METADATA = "test-kek-metadata";
- protected static final long TEST_KEK_CHECKSUM = 12345L;
+ protected static final byte[] TEST_KEK_IDENTITY = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 };
protected static final String TEST_KEY_16_BYTE = "test-key-16-byte";
protected static final String TEST_DEK_16_BYTE = "test-dek-16-byte";
protected static final String INVALID_KEY_DATA = "invalid-key-data";
@@ -103,7 +112,6 @@ public class TestSecurityUtil {
protected Key testKey;
protected byte[] testWrappedKey;
protected Key kekKey;
- protected String testTableNamespace;
/**
* Configuration builder for setting up different encryption test scenarios.
@@ -171,41 +179,65 @@ protected void setUpEncryptionConfigWithNullCipher() {
// ==== Mock Setup Helpers ====
- protected void setupManagedKeyDataCache(String namespace, ManagedKeyData keyData) {
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
+ protected void setupManagedKeyDataCache(byte[] namespace, ManagedKeyData keyData) {
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES),
+ eq(new Bytes(namespace)))).thenReturn(keyData);
+ }
+
+ protected void setupManagedKeyDataCache(Bytes namespace, ManagedKeyData keyData) {
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES),
eq(namespace))).thenReturn(keyData);
}
- protected void setupManagedKeyDataCache(String namespace, String globalSpace,
+ protected void setupManagedKeyDataCache(byte[] namespace, byte[] globalSpace,
ManagedKeyData keyData) {
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(namespace))).thenReturn(null);
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(globalSpace))).thenReturn(keyData);
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES),
+ eq(new Bytes(namespace)))).thenReturn(null);
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES),
+ eq(new Bytes(globalSpace)))).thenReturn(keyData);
}
- protected void setupTrailerMocks(byte[] keyBytes, String metadata, Long checksum,
+ protected void setupTrailerMocks(byte[] keyBytes, String metadata, byte[] kekIdentity,
String namespace) {
when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
when(mockTrailer.getKEKMetadata()).thenReturn(metadata);
- if (checksum != null) {
- when(mockTrailer.getKEKChecksum()).thenReturn(checksum);
+ if (kekIdentity != null && kekIdentity.length > 0) {
+ when(mockTrailer.getKekIdentity()).thenReturn(kekIdentity);
}
- when(mockTrailer.getKeyNamespace()).thenReturn(namespace);
}
- protected void setupSystemKeyCache(Long checksum, ManagedKeyData keyData) {
- when(mockSystemKeyCache.getSystemKeyByChecksum(checksum)).thenReturn(keyData);
+ protected void setupSystemKeyCache(byte[] fullIdentity, ManagedKeyData keyData) {
+ when(mockSystemKeyCache.getSystemKeyByIdentity(fullIdentity)).thenReturn(keyData);
}
protected void setupSystemKeyCache(ManagedKeyData latestKey) {
when(mockSystemKeyCache.getLatestSystemKey()).thenReturn(latestKey);
}
- protected void setupManagedKeyDataCacheEntry(String namespace, String metadata, byte[] keyBytes,
+ /**
+ * Mocks managed key cache getEntry for read path: 3-arg getEntry(FullKeyIdentity, metadata,
+ * null). Matches when the identity equals the one built from {@code kekIdentity}.
+ */
+ protected void setupManagedKeyDataCacheEntry(byte[] kekIdentity, String metadata,
ManagedKeyData keyData) throws IOException, KeyException {
- when(mockManagedKeyDataCache.getEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(namespace), eq(metadata), eq(keyBytes))).thenReturn(keyData);
+ when(mockManagedKeyDataCache.getEntry(
+ argThat(identity -> new KeyIdentitySingleArrayBacked(kekIdentity).equals(identity)),
+ eq(metadata), eq(null))).thenReturn(keyData);
+ }
+
+ /**
+ * Mocks managed key cache getEntry for read path when using full identity: 3-arg getEntry
+ * (FullKeyIdentity, metadata, null). Builds kekIdentity from custodian, namespace,
+ * partialIdentity.
+ */
+ protected void setupManagedKeyDataCacheEntryForFullIdentity(byte[] custodian, byte[] namespace,
+ byte[] partialIdentity, String metadata, ManagedKeyData keyData)
+ throws IOException, KeyException {
+ byte[] kekIdentity =
+ ManagedKeyIdentityUtils.constructRowKeyForIdentity(custodian, namespace, partialIdentity);
+ when(mockManagedKeyDataCache.getEntry(
+ argThat(identity -> new KeyIdentitySingleArrayBacked(kekIdentity).equals(identity)),
+ eq(metadata), eq(null))).thenReturn(keyData);
}
// ==== Exception Testing Helpers ====
@@ -261,13 +293,12 @@ public void setUp() throws Exception {
// Configure mocks
when(mockFamily.getEncryptionType()).thenReturn(AES_CIPHER);
when(mockFamily.getNameAsString()).thenReturn(TEST_FAMILY);
- when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // Default to null for fallback
- // logic
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(null); // Default to null for
+ // fallback
+ // logic
when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf("test:table"));
when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
- testTableNamespace = KeyNamespaceUtil.constructKeyNamespace(mockTableDescriptor, mockFamily);
-
// Set up default encryption config
setUpEncryptionConfig();
@@ -359,8 +390,9 @@ public void testWithEncryptionDisabled() throws IOException {
@Test
public void testWithKeyManagement_LocalKeyGen() throws IOException {
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(new Bytes(TEST_NAMESPACE_BYTES));
configBuilder().withKeyManagement(true).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, mockManagedKeyData);
Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
@@ -373,7 +405,7 @@ public void testWithKeyManagement_NoActiveKey_NoSystemKeyCache() throws IOExcept
// Test backwards compatibility: when no active key found and system cache is null, should
// throw
configBuilder().withKeyManagement(false).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, ManagedKeyData.KEY_SPACE_GLOBAL, null);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(), null);
when(mockFamily.getEncryptionKey()).thenReturn(null);
// With null system key cache, should still throw IOException
@@ -390,7 +422,7 @@ public void testWithKeyManagement_NoActiveKey_WithSystemKeyCache() throws IOExce
// Test backwards compatibility: when no active key found but system cache available, should
// work
configBuilder().withKeyManagement(false).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, ManagedKeyData.KEY_SPACE_GLOBAL, null);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(), null);
setupSystemKeyCache(mockManagedKeyData);
when(mockFamily.getEncryptionKey()).thenReturn(null);
@@ -409,8 +441,9 @@ public void testWithKeyManagement_LocalKeyGen_WithUnknownKeyCipher() throws IOEx
when(unknownKey.getAlgorithm()).thenReturn(UNKNOWN_CIPHER);
when(mockManagedKeyData.getTheKey()).thenReturn(unknownKey);
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(new Bytes(TEST_NAMESPACE_BYTES));
configBuilder().withKeyManagement(true).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, mockManagedKeyData);
assertEncryptionContextThrowsForWrites(RuntimeException.class,
"Cipher 'UNKNOWN_CIPHER' is not");
}
@@ -421,8 +454,9 @@ public void testWithKeyManagement_LocalKeyGen_WithKeyAlgorithmMismatch() throws
when(desKey.getAlgorithm()).thenReturn(DES_CIPHER);
when(mockManagedKeyData.getTheKey()).thenReturn(desKey);
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(new Bytes(TEST_NAMESPACE_BYTES));
configBuilder().withKeyManagement(true).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, mockManagedKeyData);
assertEncryptionContextThrowsForWrites(IllegalStateException.class,
"Encryption for family 'test-family' configured with type 'AES' but key specifies "
+ "algorithm 'DES'");
@@ -430,8 +464,9 @@ public void testWithKeyManagement_LocalKeyGen_WithKeyAlgorithmMismatch() throws
@Test
public void testWithKeyManagement_UseSystemKeyWithNSSpecificActiveKey() throws IOException {
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(new Bytes(TEST_NAMESPACE_BYTES));
configBuilder().withKeyManagement(false).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, mockManagedKeyData);
setupSystemKeyCache(mockManagedKeyData);
Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
@@ -443,8 +478,7 @@ public void testWithKeyManagement_UseSystemKeyWithNSSpecificActiveKey() throws I
@Test
public void testWithKeyManagement_UseSystemKeyWithoutNSSpecificActiveKey() throws IOException {
configBuilder().withKeyManagement(false).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, ManagedKeyData.KEY_SPACE_GLOBAL,
- mockManagedKeyData);
+ setupManagedKeyDataCache(KEY_SPACE_GLOBAL_BYTES, mockManagedKeyData);
setupSystemKeyCache(mockManagedKeyData);
when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
@@ -508,8 +542,9 @@ public void testBackwardsCompatibility_Scenario1_FamilyKeyWithKeyManagement()
public void testBackwardsCompatibility_Scenario2a_ActiveKeyAsDeK() throws IOException {
// Scenario 2a: Active key exists, local key gen disabled -> use active key as DEK, latest STK
// as KEK
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(new Bytes(TEST_NAMESPACE_BYTES));
configBuilder().withKeyManagement(false).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, mockManagedKeyData);
ManagedKeyData mockSystemKey = mock(ManagedKeyData.class);
when(mockSystemKey.getTheKey()).thenReturn(kekKey);
setupSystemKeyCache(mockSystemKey);
@@ -529,8 +564,9 @@ public void testBackwardsCompatibility_Scenario2b_ActiveKeyAsKekWithLocalKeyGen(
throws IOException {
// Scenario 2b: Active key exists, local key gen enabled -> use active key as KEK, generate
// random DEK
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(new Bytes(TEST_NAMESPACE_BYTES));
configBuilder().withKeyManagement(true).apply(conf);
- setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, mockManagedKeyData);
when(mockFamily.getEncryptionKey()).thenReturn(null);
Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
@@ -547,8 +583,9 @@ public void testBackwardsCompatibility_Scenario3a_NoActiveKeyGenerateLocalKey()
throws IOException {
// Scenario 3: No active key -> generate random DEK, latest STK as KEK
configBuilder().withKeyManagement(false).apply(conf);
- setupManagedKeyDataCache(TEST_NAMESPACE, ManagedKeyData.KEY_SPACE_GLOBAL, null); // No active
- // key
+ setupManagedKeyDataCache(TEST_NAMESPACE_BYTES, KEY_SPACE_GLOBAL_BYTES.copyBytes(), null); // No
+ // active
+ // key
setupSystemKeyCache(mockManagedKeyData);
when(mockFamily.getEncryptionKey()).thenReturn(null);
@@ -569,63 +606,19 @@ public void testWithoutKeyManagement_Scenario3b_WithRandomKeyGeneration() throws
mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
verifyContext(result, false);
- // Here system key with a local key gen, so no namespace is set.
- assertNull(result.getKeyNamespace());
}
@Test
- public void testFallbackRule1_CFKeyNamespaceAttribute() throws IOException {
+ public void test_CFKeyNamespaceAttribute() throws IOException {
- // Test Rule 1: Column family has KEY_NAMESPACE attribute
+ // Column family has KEY_NAMESPACE attribute
String cfKeyNamespace = "cf-specific-namespace";
- when(mockFamily.getEncryptionKeyNamespace()).thenReturn(cfKeyNamespace);
+ when(mockFamily.getEncryptionKeyNamespaceBytes())
+ .thenReturn(new Bytes(Bytes.toBytes(cfKeyNamespace)));
when(mockFamily.getEncryptionKey()).thenReturn(null);
configBuilder().withKeyManagement(false).apply(conf);
// Mock managed key data cache to return active key only for CF namespace
- setupManagedKeyDataCache(cfKeyNamespace, mockManagedKeyData);
- setupSystemKeyCache(mockManagedKeyData);
- when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
-
- Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
- mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
-
- verifyContext(result);
- // Verify that CF-specific namespace was used
- assertEquals(cfKeyNamespace, result.getKeyNamespace());
- }
-
- @Test
- public void testFallbackRule2_ConstructedNamespace() throws IOException {
- when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // No CF namespace
- when(mockFamily.getEncryptionKey()).thenReturn(null);
- setupManagedKeyDataCache(testTableNamespace, mockManagedKeyData);
- configBuilder().withKeyManagement(false).apply(conf);
- setupSystemKeyCache(mockManagedKeyData);
-
- Encryption.Context result = SecurityUtil.createEncryptionContext(conf, mockTableDescriptor,
- mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
-
- verifyContext(result);
- // Verify that constructed namespace was used
- assertEquals(testTableNamespace, result.getKeyNamespace());
- }
-
- @Test
- public void testFallbackRule3_TableNameAsNamespace() throws IOException {
- // Test Rule 3: Use table name as namespace when CF namespace and constructed namespace fail
- when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // No CF namespace
- when(mockFamily.getEncryptionKey()).thenReturn(null);
- configBuilder().withKeyManagement(false).apply(conf);
-
- String tableName = "test:table";
- when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf(tableName));
-
- // Mock cache to fail for CF and constructed namespace, succeed for table name
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(testTableNamespace))).thenReturn(null); // Constructed namespace fails
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(tableName))).thenReturn(mockManagedKeyData); // Table name succeeds
-
+ setupManagedKeyDataCache(Bytes.toBytes(cfKeyNamespace), mockManagedKeyData);
setupSystemKeyCache(mockManagedKeyData);
when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
@@ -633,28 +626,16 @@ public void testFallbackRule3_TableNameAsNamespace() throws IOException {
mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
verifyContext(result);
- // Verify that table name was used as namespace
- assertEquals(tableName, result.getKeyNamespace());
}
@Test
- public void testFallbackRule4_GlobalNamespace() throws IOException {
- // Test Rule 4: Fall back to global namespace when all other rules fail
- when(mockFamily.getEncryptionKeyNamespace()).thenReturn(null); // No CF namespace
+ public void testFallback_GlobalNamespace() throws IOException {
+ // Fall back to global namespace when CF namespace is null or has no key
+ when(mockFamily.getEncryptionKeyNamespaceBytes()).thenReturn(null); // No CF namespace
when(mockFamily.getEncryptionKey()).thenReturn(null);
configBuilder().withKeyManagement(false).apply(conf);
- String tableName = "test:table";
- when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf(tableName));
-
- // Mock cache to fail for all specific namespaces, succeed only for global
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(testTableNamespace))).thenReturn(null); // Constructed namespace fails
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(tableName))).thenReturn(null); // Table name fails
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(ManagedKeyData.KEY_SPACE_GLOBAL))).thenReturn(mockManagedKeyData); // Global succeeds
-
+ setupManagedKeyDataCache(KEY_SPACE_GLOBAL_BYTES, mockManagedKeyData);
setupSystemKeyCache(mockManagedKeyData);
when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
@@ -662,28 +643,23 @@ public void testFallbackRule4_GlobalNamespace() throws IOException {
mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
verifyContext(result);
- // Verify that global namespace was used
- assertEquals(ManagedKeyData.KEY_SPACE_GLOBAL, result.getKeyNamespace());
}
@Test
public void testFallbackRuleOrder() throws IOException {
- // Test that the rules are tried in the correct order
+ // Test that candidates are tried in order: CF namespace first, then global
String cfKeyNamespace = "cf-namespace";
- String tableName = "test:table";
- when(mockFamily.getEncryptionKeyNamespace()).thenReturn(cfKeyNamespace);
+ when(mockFamily.getEncryptionKeyNamespaceBytes())
+ .thenReturn(new Bytes(Bytes.toBytes(cfKeyNamespace)));
when(mockFamily.getEncryptionKey()).thenReturn(null);
- when(mockTableDescriptor.getTableName()).thenReturn(TableName.valueOf(tableName));
configBuilder().withKeyManagement(false).apply(conf);
- // Set up mocks so that CF namespace fails but table name would succeed
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(cfKeyNamespace))).thenReturn(null); // CF namespace fails
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(testTableNamespace))).thenReturn(mockManagedKeyData); // Constructed namespace succeeds
- when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq(tableName))).thenReturn(mockManagedKeyData); // Table name would also succeed
+ // CF namespace fails, global succeeds
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES),
+ eq(new Bytes(Bytes.toBytes(cfKeyNamespace))))).thenReturn(null);
+ when(mockManagedKeyDataCache.getActiveEntry(eq(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES),
+ eq(KEY_SPACE_GLOBAL_BYTES))).thenReturn(mockManagedKeyData);
setupSystemKeyCache(mockManagedKeyData);
when(mockManagedKeyData.getTheKey()).thenReturn(testKey);
@@ -692,8 +668,6 @@ public void testFallbackRuleOrder() throws IOException {
mockFamily, mockManagedKeyDataCache, mockSystemKeyCache);
verifyContext(result);
- // Verify that constructed namespace was used (Rule 2), not table name (Rule 3)
- assertEquals(testTableNamespace, result.getKeyNamespace());
}
@Test
@@ -742,7 +716,6 @@ public void testWithKeyManagement_FamilyKey_UnwrapKeyException() throws Exceptio
@Test
public void testWithNoKeyMaterial() throws IOException {
when(mockTrailer.getEncryptionKey()).thenReturn(null);
- when(mockTrailer.getKeyNamespace()).thenReturn(TEST_NAMESPACE);
Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
mockManagedKeyDataCache, mockSystemKeyCache);
@@ -763,17 +736,16 @@ public static class TestCreateEncryptionContext_ForReads extends TestSecurityUti
public void testWithKEKMetadata_STKLookupFirstThenManagedKey() throws Exception {
// Test new logic: STK lookup happens first, then metadata lookup if STK fails
// Set up scenario where both checksum and metadata are available
- setupTrailerMocks(testWrappedKey, TEST_KEK_METADATA, TEST_KEK_CHECKSUM, null);
+ setupTrailerMocks(testWrappedKey, TEST_KEK_METADATA, TEST_KEK_IDENTITY, null);
configBuilder().withKeyManagement(false).apply(conf);
// STK lookup should succeed and be used (first priority)
ManagedKeyData stkKeyData = mock(ManagedKeyData.class);
when(stkKeyData.getTheKey()).thenReturn(kekKey);
- setupSystemKeyCache(TEST_KEK_CHECKSUM, stkKeyData);
+ setupSystemKeyCache(TEST_KEK_IDENTITY, stkKeyData);
// Also set up managed key cache (but it shouldn't be used since STK succeeds)
- setupManagedKeyDataCacheEntry(testTableNamespace, TEST_KEK_METADATA, testWrappedKey,
- mockManagedKeyData);
+ setupManagedKeyDataCacheEntry(TEST_KEK_IDENTITY, TEST_KEK_METADATA, mockManagedKeyData);
when(mockManagedKeyData.getTheKey())
.thenThrow(new RuntimeException("This should not be called"));
@@ -787,63 +759,62 @@ public void testWithKEKMetadata_STKLookupFirstThenManagedKey() throws Exception
@Test
public void testWithKEKMetadata_STKFailsThenManagedKeySucceeds() throws Exception {
- // Test fallback: STK lookup fails, metadata lookup succeeds
- setupTrailerMocks(testWrappedKey, TEST_KEK_METADATA, TEST_KEK_CHECKSUM, testTableNamespace);
- configBuilder().withKeyManagement(false).apply(conf);
-
- // STK lookup should fail (returns null)
- when(mockSystemKeyCache.getSystemKeyByChecksum(TEST_KEK_CHECKSUM)).thenReturn(null);
+ // Test fallback: STK lookup fails, managed key lookup by full identity succeeds
+ byte[] partialIdentity = ManagedKeyIdentityUtils.constructMetadataHash(TEST_KEK_METADATA);
+ byte[] kekIdentity = ManagedKeyIdentityUtils.constructRowKeyForIdentity(
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.get(), READ_PATH_TEST_NAMESPACE_BYTES,
+ partialIdentity);
+ setupTrailerMocks(testWrappedKey, TEST_KEK_METADATA, kekIdentity, READ_PATH_TEST_NAMESPACE);
+ configBuilder().withKeyManagement(true).apply(conf);
- // Managed key lookup should succeed
- setupManagedKeyDataCacheEntry(testTableNamespace, TEST_KEK_METADATA, testWrappedKey,
- mockManagedKeyData);
+ when(mockSystemKeyCache.getSystemKeyByIdentity(kekIdentity)).thenReturn(null);
+ setupManagedKeyDataCacheEntryForFullIdentity(ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.get(),
+ READ_PATH_TEST_NAMESPACE_BYTES, partialIdentity, TEST_KEK_METADATA, mockManagedKeyData);
when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
mockManagedKeyDataCache, mockSystemKeyCache);
verifyContext(result);
- // Should use managed key data since STK failed
assertEquals(mockManagedKeyData, result.getKEKData());
}
@Test
public void testWithKeyManagement_KEKMetadataAndChecksumFailure()
throws IOException, KeyException {
- // Test scenario where both STK lookup and managed key lookup fail
+ // Test scenario: STK fails, managed key getEntry returns null, latest system key is null
byte[] keyBytes = "test-encrypted-key".getBytes();
String kekMetadata = "test-kek-metadata";
- configBuilder().withKeyManagement(false).apply(conf);
+ byte[] partialIdentity = ManagedKeyIdentityUtils.constructMetadataHash(kekMetadata);
+ byte[] kekIdentity = ManagedKeyIdentityUtils.constructRowKeyForIdentity(
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.get(), Bytes.toBytes("test-namespace"),
+ partialIdentity);
+ configBuilder().withKeyManagement(true).apply(conf);
when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
when(mockTrailer.getKEKMetadata()).thenReturn(kekMetadata);
- when(mockTrailer.getKEKChecksum()).thenReturn(TEST_KEK_CHECKSUM);
- when(mockTrailer.getKeyNamespace()).thenReturn("test-namespace");
+ when(mockTrailer.getKekIdentity()).thenReturn(kekIdentity);
- // STK lookup should fail
- when(mockSystemKeyCache.getSystemKeyByChecksum(TEST_KEK_CHECKSUM)).thenReturn(null);
-
- // Managed key lookup should also fail
- when(mockManagedKeyDataCache.getEntry(eq(ManagedKeyData.KEY_GLOBAL_CUSTODIAN_BYTES),
- eq("test-namespace"), eq(kekMetadata), eq(keyBytes)))
- .thenThrow(new IOException("Key not found"));
+ when(mockSystemKeyCache.getSystemKeyByIdentity(kekIdentity)).thenReturn(null);
+ when(mockManagedKeyDataCache.getEntry(
+ argThat(identity -> new KeyIdentitySingleArrayBacked(kekIdentity).equals(identity)),
+ eq(kekMetadata), eq(null))).thenReturn(null);
+ when(mockSystemKeyCache.getLatestSystemKey()).thenReturn(null);
IOException exception = assertThrows(IOException.class, () -> {
SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
mockSystemKeyCache);
});
- assertTrue(
- exception.getMessage().contains("Failed to get key data for KEK metadata: " + kekMetadata));
- assertTrue(exception.getCause().getMessage().contains("Key not found"));
+ assertTrue(exception.getMessage().contains("Failed to get latest system key"));
}
@Test
public void testWithKeyManagement_UseSystemKey() throws IOException {
- // Test STK lookup by checksum (first priority in new logic)
- setupTrailerMocks(testWrappedKey, null, TEST_KEK_CHECKSUM, null);
+ // Test STK lookup by identity (first priority in new logic)
+ setupTrailerMocks(testWrappedKey, null, TEST_KEK_IDENTITY, null);
configBuilder().withKeyManagement(false).apply(conf);
- setupSystemKeyCache(TEST_KEK_CHECKSUM, mockManagedKeyData);
+ setupSystemKeyCache(TEST_KEK_IDENTITY, mockManagedKeyData);
when(mockManagedKeyData.getTheKey()).thenReturn(kekKey);
Encryption.Context result = SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer,
@@ -878,7 +849,8 @@ public void testBackwardsCompatibility_WithKeyManagement_LatestSystemKeyNotFound
@Test
public void testBackwardsCompatibility_FallbackToLatestSystemKey() throws IOException {
- // Test fallback to latest system key when both checksum and metadata are unavailable
+ // Test fallback to latest system key when both identity and metadata are unavailable
- setupTrailerMocks(testWrappedKey, null, 0L, TEST_NAMESPACE); // No checksum, no metadata
+ setupTrailerMocks(testWrappedKey, null, new byte[0], TEST_NAMESPACE); // No identity, no
+ // metadata
configBuilder().withKeyManagement(false).apply(conf);
ManagedKeyData latestSystemKey = mock(ManagedKeyData.class);
@@ -960,12 +932,16 @@ public void testCreateEncryptionContext_WithKeyManagement_NullKeyManagementCache
throws IOException {
byte[] keyBytes = "test-encrypted-key".getBytes();
String kekMetadata = "test-kek-metadata";
+ byte[] partialIdentity = ManagedKeyIdentityUtils.constructMetadataHash(kekMetadata);
+ byte[] kekIdentity = ManagedKeyIdentityUtils.constructRowKeyForIdentity(
+ ManagedKeyData.KEY_SPACE_GLOBAL_BYTES.get(), Bytes.toBytes("test-namespace"),
+ partialIdentity);
when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
when(mockTrailer.getKEKMetadata()).thenReturn(kekMetadata);
- when(mockTrailer.getKeyNamespace()).thenReturn("test-namespace");
+ when(mockTrailer.getKekIdentity()).thenReturn(kekIdentity);
+ when(mockSystemKeyCache.getSystemKeyByIdentity(kekIdentity)).thenReturn(null);
- // Enable key management
conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
IOException exception = assertThrows(IOException.class, () -> {
@@ -982,7 +958,6 @@ public void testCreateEncryptionContext_WithKeyManagement_NullSystemKeyCache()
when(mockTrailer.getEncryptionKey()).thenReturn(keyBytes);
when(mockTrailer.getKEKMetadata()).thenReturn(null);
- when(mockTrailer.getKeyNamespace()).thenReturn("test-namespace");
// Enable key management
conf.setBoolean(HConstants.CRYPTO_MANAGED_KEYS_ENABLED_CONF_KEY, true);
@@ -1019,9 +994,8 @@ public void testWithDEK() throws IOException, KeyException {
MockAesKeyProvider keyProvider = (MockAesKeyProvider) Encryption.getKeyProvider(conf);
keyProvider.clearKeys(); // Let a new key be instantiated and cause an unwrap failure.
- setupTrailerMocks(wrappedKey, null, 0L, null);
- setupManagedKeyDataCacheEntry(TEST_NAMESPACE, TEST_KEK_METADATA, wrappedKey,
- mockManagedKeyData);
+ // No KEK identity so getEntry is never called; key comes from config and unwrap fails
+ setupTrailerMocks(wrappedKey, null, new byte[0], null);
IOException exception = assertThrows(IOException.class, () -> {
SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
@@ -1038,17 +1012,17 @@ public void testWithSystemKey() throws IOException {
// Use invalid key bytes to trigger unwrapping failure
byte[] invalidKeyBytes = INVALID_SYSTEM_KEY_DATA.getBytes();
- setupTrailerMocks(invalidKeyBytes, null, TEST_KEK_CHECKSUM, null);
+ setupTrailerMocks(invalidKeyBytes, null, TEST_KEK_IDENTITY, null);
configBuilder().withKeyManagement(false).apply(conf);
- setupSystemKeyCache(TEST_KEK_CHECKSUM, mockManagedKeyData);
+ setupSystemKeyCache(TEST_KEK_IDENTITY, mockManagedKeyData);
IOException exception = assertThrows(IOException.class, () -> {
SecurityUtil.createEncryptionContext(conf, testPath, mockTrailer, mockManagedKeyDataCache,
mockSystemKeyCache);
});
- assertTrue(exception.getMessage().contains(
- "Failed to unwrap key with KEK checksum: " + TEST_KEK_CHECKSUM + ", metadata: null"));
+ assertTrue(
+ exception.getMessage().contains("Failed to unwrap key with KEK identity (length: 8)"));
// The root cause should be some kind of parsing/unwrapping exception
assertNotNull(exception.getCause());
}
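The read-path tests above assemble a full KEK identity from the global key space (custodian), the key namespace, and a hash of the KEK metadata via `ManagedKeyIdentityUtils.constructMetadataHash` and `constructRowKeyForIdentity`. The sketch below only illustrates that three-part shape; the `metadata_hash` and `kek_identity` helpers are hypothetical stand-ins (SHA-256 and plain concatenation), not the real internal byte layout:

```ruby
require 'digest'

# Hypothetical stand-in for ManagedKeyIdentityUtils.constructMetadataHash:
# a fixed-width "partial identity" derived from the KEK metadata string.
def metadata_hash(metadata)
  Digest::SHA256.digest(metadata)
end

# Hypothetical stand-in for constructRowKeyForIdentity: custodian bytes,
# then namespace bytes, then the partial identity. The actual encoding
# (separators, length prefixes) is an HBase internal and is assumed here.
def kek_identity(custodian_bytes, namespace_bytes, partial_identity)
  custodian_bytes.b + namespace_bytes.b + partial_identity
end

id = kek_identity('*', 'test-namespace', metadata_hash('test-kek-metadata'))
puts id.bytesize # → 47 (1 custodian byte + 14 namespace bytes + 32 hash bytes)
```

This mirrors how `testWithKEKMetadata_STKFailsThenManagedKeySucceeds` builds `kekIdentity` before stubbing the caches, with the identity treated as an opaque byte string by the trailer.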
diff --git a/hbase-shell/pom.xml b/hbase-shell/pom.xml
index 7b689d3dc6ee..d7d79769c3ab 100644
--- a/hbase-shell/pom.xml
+++ b/hbase-shell/pom.xml
@@ -65,6 +65,26 @@
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-hadoop-compat</artifactId>
  </dependency>
+  <dependency>
+    <groupId>io.opentelemetry</groupId>
+    <artifactId>opentelemetry-api</artifactId>
+  </dependency>
+  <dependency>
+    <groupId>io.opentelemetry.semconv</groupId>
+    <artifactId>opentelemetry-semconv</artifactId>
+  </dependency>
+  <dependency>
+    <groupId>net.openhft</groupId>
+    <artifactId>zero-allocation-hashing</artifactId>
+  </dependency>
+  <dependency>
+    <groupId>org.apache.hbase.thirdparty</groupId>
+    <artifactId>hbase-shaded-gson</artifactId>
+  </dependency>
+  <dependency>
+    <groupId>org.apache.hbase.thirdparty</groupId>
+    <artifactId>hbase-unsafe</artifactId>
+  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-testing-util</artifactId>
@@ -95,6 +115,11 @@
    <artifactId>junit-jupiter-params</artifactId>
    <scope>test</scope>
  </dependency>
+  <dependency>
+    <groupId>org.junit.vintage</groupId>
+    <artifactId>junit-vintage-engine</artifactId>
+    <scope>test</scope>
+  </dependency>
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>jcl-over-slf4j</artifactId>
diff --git a/hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb b/hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb
index 98a57766831a..9e42782d62ed 100644
--- a/hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb
+++ b/hbase-shell/src/main/ruby/shell/commands/keymeta_command_base.rb
@@ -23,7 +23,7 @@ module Commands
# KeymetaCommandBase is a base class for all key management commands.
class KeymetaCommandBase < Command
def print_key_statuses(statuses)
- formatter.header(%w[ENCODED-KEY NAMESPACE STATUS METADATA METADATA-HASH REFRESH-TIMESTAMP])
+ formatter.header(%w[ENCODED-KEY NAMESPACE STATUS KEY-IDENTITY REFRESH-TIMESTAMP])
statuses.each { |status| formatter.row(format_status_row(status)) }
formatter.footer(statuses.size)
end
@@ -35,8 +35,7 @@ def format_status_row(status)
status.getKeyCustodianEncoded,
status.getKeyNamespace,
status.getKeyState.toString,
- status.getKeyMetadata,
- status.getKeyMetadataHashEncoded,
+ status.getPartialIdentityEncoded,
status.getRefreshTimestamp
]
end
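`print_key_statuses` above now emits rows as `ENCODED-KEY NAMESPACE STATUS KEY-IDENTITY REFRESH-TIMESTAMP`, and the Ruby tests later in this patch pull the KEY-IDENTITY column back out of captured shell output. A standalone sketch of that extraction, assuming the five-column layout; the helper name and the sample row values are made up:

```ruby
# Find the row for the given custodian/namespace in captured show_key_status
# output and return the 4th whitespace-separated column (KEY-IDENTITY), or
# nil when no matching row exists.
def extract_key_identity(output, cust, namespace)
  key_line = output.split("\n").find { |l| l.include?(cust) && l.include?(namespace) }
  return nil if key_line.nil?
  key_line.split[3] # KEY-IDENTITY is the 4th column under the new header
end

sample = "abcd== ns1 ACTIVE a2V5LWlk 1734500000000"
puts extract_key_identity(sample, 'abcd==', 'ns1') # → a2V5LWlk
```

This is the same column-index-3 parse that `admin_keymeta_test.rb` performs before handing the encoded identity to `disable_managed_key`.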
diff --git a/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java
index cc4aabe4ff4e..f2e3f07be4a3 100644
--- a/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java
+++ b/hbase-shell/src/test/java/org/apache/hadoop/hbase/client/TestKeymetaMockProviderShell.java
@@ -19,6 +19,8 @@
import org.apache.hadoop.hbase.HBaseClassTestRule;
import org.apache.hadoop.hbase.HBaseTestingUtil;
+import org.apache.hadoop.hbase.io.crypto.Encryption;
+import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider;
import org.apache.hadoop.hbase.keymeta.ManagedKeyTestBase;
import org.apache.hadoop.hbase.testclassification.ClientTests;
import org.apache.hadoop.hbase.testclassification.IntegrationTests;
@@ -40,7 +42,7 @@ public class TestKeymetaMockProviderShell extends ManagedKeyTestBase implements
@Override
public void setUp() throws Exception {
// Enable to be able to debug without timing out.
- // final Configuration conf = TEST_UTIL.getConfiguration();
+ // final org.apache.hadoop.conf.Configuration conf = TEST_UTIL.getConfiguration();
// conf.set("zookeeper.session.timeout", "6000000");
// conf.set("hbase.rpc.timeout", "6000000");
// conf.set("hbase.rpc.read.timeout", "6000000");
@@ -73,11 +75,18 @@ public ScriptingContainer getJRuby() {
@Override
public String getSuitePattern() {
- return "**/*_keymeta_mock_provider_test.rb";
+ return "**/*_mock_provider_test.rb";
}
@Test
public void testRunShellTests() throws Exception {
RubyShellTest.testRunShellTests(this);
}
+
+ @Override
+ protected void configureKeyProvider() {
+ MockManagedKeyProvider keyProvider =
+ (MockManagedKeyProvider) Encryption.getManagedKeyProvider(TEST_UTIL.getConfiguration());
+ keyProvider.setMultikeyGenMode(false);
+ }
}
diff --git a/hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb b/hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb
index 061e3fc71230..146721d31f23 100644
--- a/hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb
+++ b/hbase-shell/src/test/ruby/shell/admin_keymeta_mock_provider_test.rb
@@ -37,13 +37,13 @@ class KeymetaAdminMockProviderTest < Test::Unit::TestCase
def setup
setup_hbase
- @key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
+ key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
# Enable multikey generation mode for dynamic key creation on rotate
- @key_provider.setMultikeyGenMode(true)
+ key_provider.setMultikeyGenMode(true)
# Set up custodian variables
@glob_cust = '*'
- @glob_cust_encoded = ManagedKeyProvider.encodeToStr(@glob_cust.bytes.to_a)
+ @glob_cust_encoded = ManagedKeyProvider.encodeToStr(@glob_cust.bytes.to_a.to_java(:byte))
end
define_test 'Test rotate managed key operation' do
@@ -83,6 +83,7 @@ def test_rotate_key(cust, namespace)
"Expected 2 keys after rotation, got: #{output}")
# 4. Rotate again to test multiple rotations
+ $TEST.logMessage("Rotating again to test multiple rotations")
output = capture_stdout { @shell.command('rotate_managed_key', cust_and_namespace) }
puts "rotate_managed_key (second) output: #{output}"
assert(output.include?("#{cust} #{namespace}"),
diff --git a/hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb b/hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb
index ab413ecbb0bb..32babd9fb968 100644
--- a/hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb
+++ b/hbase-shell/src/test/ruby/shell/admin_keymeta_test.rb
@@ -90,18 +90,17 @@ def test_key_operations(cust, namespace)
assert(output.include?("#{cust} #{namespace} ACTIVE"),
"Expected ACTIVE key after enable, got: #{output}")
- # 2. Get the initial key metadata hash for use in disable_managed_key test
+ # 2. Get the initial KEY-IDENTITY (partial identity encoded) for use in disable_managed_key test
output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
puts "show_key_status output: #{output}"
- # Extract the metadata hash from the output (it's in the 5th column)
- # Output format: ENCODED-KEY NAMESPACE STATUS METADATA METADATA-HASH REFRESH-TIMESTAMP
+ # Output format: ENCODED-KEY NAMESPACE STATUS KEY-IDENTITY REFRESH-TIMESTAMP
lines = output.split("\n")
key_line = lines.find { |line| line.include?(cust) && line.include?(namespace) }
assert_not_nil(key_line, "Could not find key line in output")
- # Parse the key metadata hash (Base64 encoded)
- key_metadata_hash = key_line.split[3]
- assert_not_nil(key_metadata_hash, "Could not extract key metadata hash")
- puts "Extracted key metadata hash: #{key_metadata_hash}"
+ # KEY-IDENTITY is the 4th column (index 3)
+ key_identity_encoded = key_line.split[3]
+ assert_not_nil(key_identity_encoded, "Could not extract KEY-IDENTITY column value")
+ puts "Extracted KEY-IDENTITY (partial identity encoded): #{key_identity_encoded}"
# 3. Refresh managed keys
output = capture_stdout { @shell.command('refresh_managed_keys', cust_and_namespace) }
@@ -113,9 +112,9 @@ def test_key_operations(cust, namespace)
puts "show_key_status after refresh: #{output}"
assert(output.include?('ACTIVE'), "Expected ACTIVE key after refresh, got: #{output}")
- # 4. Disable a specific managed key
+ # 4. Disable a specific managed key (use KEY-IDENTITY column value)
output = capture_stdout do
- @shell.command('disable_managed_key', cust_and_namespace, key_metadata_hash)
+ @shell.command('disable_managed_key', cust_and_namespace, key_identity_encoded)
end
puts "disable_managed_key output: #{output}"
assert(output.include?("#{cust} #{namespace} DISABLED"),
@@ -132,15 +131,20 @@ def test_key_operations(cust, namespace)
output = capture_stdout { @shell.command('disable_key_management', cust_and_namespace) }
puts "disable_key_management output: #{output}"
assert(output.include?("#{cust} #{namespace} DISABLED"),
- "Expected DISABLED keys, got: #{output}")
- # Verify all keys are now INACTIVE
+ "Expected DISABLED marker, got: #{output}")
+ # Verify all keys are now DISABLED
output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
puts "show_key_status after disable_key_management: #{output}"
- # All rows should show INACTIVE state
+ # All rows should show DISABLED (or INACTIVE) state, and never ACTIVE or FAILED
lines = output.split("\n")
key_lines = lines.select { |line| line.include?(cust) && line.include?(namespace) }
key_lines.each do |line|
- assert(line.include?('INACTIVE'), "Expected all keys to be INACTIVE, but found: #{line}")
+ assert(
+ (line.include?('DISABLED') || line.include?('INACTIVE')) &&
+ !line.match?(/\bACTIVE\b/) &&
+ !line.include?('FAILED'),
+ "Expected all keys to be INACTIVE or DISABLED, but found: #{line}"
+ )
end
# 7. Refresh shouldn't do anything since the key management is disabled.
@@ -155,7 +159,7 @@ def test_key_operations(cust, namespace)
# 7. Enable key management again
@shell.command('enable_key_management', cust_and_namespace)
- # 8. Get the key metadata hash for the enabled key
+ # 8. Get the KEY-IDENTITY for the enabled key
output = capture_stdout { @shell.command('show_key_status', cust_and_namespace) }
puts "show_key_status after enable_key_management: #{output}"
assert(output.include?('ACTIVE'), "Expected ACTIVE key after enable_key_management, got: #{output}")
@@ -174,7 +178,7 @@ def test_key_operations(cust, namespace)
end
define_test 'Test disable operations error handling' do
- # Test disable_managed_key with invalid metadata hash
+ # Test disable_managed_key with invalid KEY-IDENTITY (partial identity encoded)
cust_and_namespace = "#{$CUST1_ENCODED}:*"
error = assert_raises(ArgumentError) do
@shell.command('disable_managed_key', cust_and_namespace, '!!!invalid!!!')
diff --git a/hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb b/hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb
index 35ad85785e0f..237bf06a945f 100644
--- a/hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb
+++ b/hbase-shell/src/test/ruby/shell/encrypted_table_keymeta_test.rb
@@ -33,10 +33,14 @@
java_import org.apache.hadoop.hbase.io.crypto.Encryption
java_import org.apache.hadoop.hbase.io.crypto.ManagedKeyProvider
java_import org.apache.hadoop.hbase.io.crypto.MockManagedKeyProvider
+java_import org.apache.hadoop.hbase.keymeta.KeyIdentityPrefixBytesBacked
java_import org.apache.hadoop.hbase.io.hfile.CorruptHFileException
java_import org.apache.hadoop.hbase.io.hfile.FixedFileTrailer
java_import org.apache.hadoop.hbase.io.hfile.HFile
java_import org.apache.hadoop.hbase.io.hfile.CacheConfig
+java_import org.apache.hadoop.hbase.keymeta.KeyIdentitySingleArrayBacked
+java_import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor
+
java_import org.apache.hadoop.hbase.util.Bytes
module Hbase
@@ -50,27 +54,55 @@ def setup
@connection = $TEST_CLUSTER.connection
end
- define_test 'Test table put/get with encryption' do
+ define_test 'Test table put/get with encryption (scenario 2a)' do
# Custodian is currently not supported, so this will end up falling back to local key
# generation.
- test_table_put_get_with_encryption($CUST1_ENCODED, '*',
+ run_table_put_get_with_encryption(
{ 'NAME' => 'f', 'ENCRYPTION' => 'AES' },
- true)
+ false)
end
define_test 'Test table with custom namespace attribute in Column Family' do
custom_namespace = 'test_global_namespace'
- test_table_put_get_with_encryption(
- $GLOB_CUST_ENCODED, custom_namespace,
+ run_table_put_get_with_encryption(
{ 'NAME' => 'f', 'ENCRYPTION' => 'AES', 'ENCRYPTION_KEY_NAMESPACE' => custom_namespace },
false
)
end
- def test_table_put_get_with_encryption(cust, namespace, table_attrs, fallback_scenario)
- cust_and_namespace = "#{cust}:#{namespace}"
- output = capture_stdout { @shell.command('enable_key_management', cust_and_namespace) }
- assert(output.include?("#{cust} #{namespace} ACTIVE"))
+ define_test 'Test table put/get with encryption and local key gen per file (scenario 2b)' do
+ # Enable local key gen per file so SecurityUtil uses scenario 2b: active key (DEK) as KEK,
+ # locally generated key as CEK per file.
+ conf_key = HConstants::CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_ENABLED_CONF_KEY
+ $TEST_CLUSTER.getConfiguration.setBoolean(conf_key, true)
+ $TEST.restartMiniCluster(KeymetaTableAccessor::KEY_META_TABLE_NAME)
+ setup_hbase
+ begin
+ custom_namespace = 'test_global_namespace'
+ run_table_put_get_with_encryption(
+ { 'NAME' => 'f', 'ENCRYPTION' => 'AES', 'ENCRYPTION_KEY_NAMESPACE' => custom_namespace },
+ true
+ )
+ ensure
+ # Restore default so other tests or future runs are not affected
+ $TEST_CLUSTER.getConfiguration.setBoolean(conf_key, false)
+ $TEST.restartMiniCluster(KeymetaTableAccessor::KEY_META_TABLE_NAME)
+ setup_hbase
+ end
+ end
+
+ def run_table_put_get_with_encryption(table_attrs, local_key_gen_scenario)
+ cust = $GLOB_CUST_ENCODED
+ has_namespace = table_attrs.has_key?('ENCRYPTION_KEY_NAMESPACE')
+ if has_namespace
+ expected_ns = table_attrs['ENCRYPTION_KEY_NAMESPACE']
+ cust_and_namespace = "#{cust}:#{expected_ns}"
+
+ output = capture_stdout { @shell.command('enable_key_management', cust_and_namespace) }
+ assert(output.include?("#{cust} #{expected_ns} ACTIVE"),
+ "Expected cust #{cust} and namespace #{expected_ns} to be ACTIVE, got: #{output}")
+ else
+ expected_ns = '*'
+ end
@shell.command(:create, @test_table, table_attrs)
test_table = table(@test_table)
test_table.put('1', 'f:a', '2')
@@ -91,19 +123,33 @@ def test_table_put_get_with_encryption(cust, namespace, table_attrs, fallback_sc
assert_not_nil(hfile_info)
live_trailer = hfile_info.getTrailer
assert_trailer(live_trailer)
- assert_equal(namespace, live_trailer.getKeyNamespace)
-
- # When active key is supposed to be used, we can valiate the key bytes in the context against
- # the actual key from provider.
- unless fallback_scenario
- encryption_context = hfile_info.getHFileContext.getEncryptionContext
- assert_not_nil(encryption_context)
- assert_not_nil(encryption_context.getKeyBytes)
- key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
- key_data = key_provider.getManagedKey(ManagedKeyProvider.decodeToBytes(cust), namespace)
- assert_not_nil(key_data)
- assert_equal(namespace, key_data.getKeyNamespace)
- assert_equal(key_data.getTheKey.getEncoded, encryption_context.getKeyBytes)
+ # When we have an active key (2a or 2b), KEK identity in trailer should reflect our namespace
+
+ encryption_context = hfile_info.getHFileContext.getEncryptionContext
+ assert_not_nil(encryption_context)
+ assert_not_nil(encryption_context.getKeyBytes)
+ key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
+ cluster_id = $TEST_CLUSTER.getMiniHBaseCluster.getMaster.getClusterId
+ system_key = key_provider.getSystemKey(cluster_id.bytes)
+ dek_data = key_provider.getManagedKey(KeyIdentityPrefixBytesBacked.new(
+ Bytes.new(ManagedKeyProvider.decodeToBytes($GLOB_CUST_ENCODED)),
+ Bytes.new(Bytes.toBytes(expected_ns))))
+ assert_not_nil(dek_data)
+ parsed_namespace = parse_namespace_from_kek_identity(live_trailer.getKekIdentity)
+ # When active key is the CEK (scenario 2a), validate key bytes in context match
+ # provider. For scenario 2b (local key gen), CEK is generated per file so key bytes differ.
+ if local_key_gen_scenario
+ # Scenario 2b: CEK is locally generated per file, so it must not equal the provider key
+ # (which is used as KEK).
+ assert_not_equal(dek_data.getTheKey.getEncoded, encryption_context.getKeyBytes)
+ assert_not_equal(system_key.getTheKey.getEncoded, encryption_context.getKeyBytes)
+ assert_equal(encryption_context.getKEKData().getKeyNamespaceBytes(), parsed_namespace)
+ assert_equal(has_namespace ? dek_data : system_key, encryption_context.getKEKData())
+ else
+ # Scenario 2a: active key is used as CEK directly
+ assert_equal(dek_data.getTheKey.getEncoded, encryption_context.getKeyBytes)
+ assert_equal(system_key, encryption_context.getKEKData())
+ assert_equal(system_key.getKeyNamespaceBytes(), parsed_namespace)
end
## Disable table to ensure that the stores are not cached.
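
The scenario 2a/2b assertions above hinge on an envelope-encryption invariant: in 2a the provider key is the content encryption key (CEK) itself, while in 2b a fresh CEK is generated per file and wrapped under the provider key acting as KEK, so the file's key bytes can never equal the provider key. A minimal stand-alone Ruby sketch of that invariant (hypothetical, plain `openssl`, not HBase code):

```ruby
require 'openssl'
require 'securerandom'

kek = SecureRandom.random_bytes(32) # the provider-supplied key

# Scenario 2a: the provider key is used directly as the CEK.
cek_2a = kek

# Scenario 2b: a fresh CEK per file, wrapped under the KEK.
cek_2b = SecureRandom.random_bytes(32)
cipher = OpenSSL::Cipher.new('aes-256-gcm').encrypt
cipher.key = kek
iv = cipher.random_iv
wrapped_cek = cipher.update(cek_2b) + cipher.final
auth_tag = cipher.auth_tag

# Unwrapping recovers the per-file CEK from the wrapped form.
d = OpenSSL::Cipher.new('aes-256-gcm').decrypt
d.key = kek
d.iv = iv
d.auth_tag = auth_tag
unwrapped = d.update(wrapped_cek) + d.final

raise unless cek_2a == kek      # 2a: key bytes match the provider key
raise unless cek_2b != kek      # 2b: key bytes differ from the KEK
raise unless unwrapped == cek_2b
```

This mirrors what the test asserts: `assert_equal` on key bytes in scenario 2a, `assert_not_equal` against both the provider key and the system key in scenario 2b.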
@@ -163,15 +209,25 @@ def assert_trailer(offline_trailer, live_trailer = nil)
assert_not_nil(offline_trailer)
assert_not_nil(offline_trailer.getEncryptionKey)
assert_not_nil(offline_trailer.getKEKMetadata)
- assert_not_nil(offline_trailer.getKEKChecksum)
- assert_not_nil(offline_trailer.getKeyNamespace)
+ assert_not_nil(offline_trailer.getKekIdentity)
+ assert_true(offline_trailer.getKekIdentity.length > 0)
+ parsed_namespace = parse_namespace_from_kek_identity(offline_trailer.getKekIdentity)
+ assert_not_nil(parsed_namespace)
return unless live_trailer
assert_equal(live_trailer.getEncryptionKey, offline_trailer.getEncryptionKey)
assert_equal(live_trailer.getKEKMetadata, offline_trailer.getKEKMetadata)
- assert_equal(live_trailer.getKEKChecksum, offline_trailer.getKEKChecksum)
- assert_equal(live_trailer.getKeyNamespace, offline_trailer.getKeyNamespace)
+ assert_equal(live_trailer.getKekIdentity.to_a, offline_trailer.getKekIdentity.to_a)
+ assert_equal(parse_namespace_from_kek_identity(live_trailer.getKekIdentity),
+ parse_namespace_from_kek_identity(offline_trailer.getKekIdentity))
+ end
+
+ # Returns the key namespace string parsed from KEK identity bytes, or nil if not present.
+ def parse_namespace_from_kek_identity(kek_identity)
+ return nil if kek_identity.nil? || kek_identity.length == 0
+
+ KeyIdentitySingleArrayBacked.new(kek_identity).getNamespaceView().copyBytes()
end
end
end
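
As an aside, the `parse_namespace_from_kek_identity` helper above delegates to the Java `KeyIdentitySingleArrayBacked` class. A pure-Ruby sketch of the same idea, assuming a simple hypothetical layout of a 2-byte big-endian length prefix before each component (the real wire format is defined in the Java sources, not here):

```ruby
# Hypothetical: identity = [prefix_len][prefix bytes][ns_len][namespace bytes],
# lengths as 2-byte big-endian. Returns the namespace, or nil if absent.
def parse_namespace(identity)
  return nil if identity.nil? || identity.empty?

  prefix_len = identity[0, 2].unpack1('n')
  offset = 2 + prefix_len
  ns_len = identity[offset, 2].unpack1('n')
  identity[offset + 2, ns_len]
end

encoded = [4].pack('n') + 'cust' + [2].pack('n') + 'ns'
parse_namespace(encoded) # => "ns"
```

The test's real helper also guards against nil/empty identity bytes, which is why `validate_hfile_trailer` checks `getKekIdentity.length > 0` before parsing.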
diff --git a/hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb b/hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb
index d527eea8240c..b798f463cb75 100644
--- a/hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb
+++ b/hbase-shell/src/test/ruby/shell/key_provider_keymeta_migration_test.rb
@@ -44,6 +44,7 @@
java_import org.apache.hadoop.hbase.util.Bytes
java_import org.apache.hadoop.hbase.keymeta.KeymetaServiceEndpoint
java_import org.apache.hadoop.hbase.keymeta.KeymetaTableAccessor
+java_import org.apache.hadoop.hbase.keymeta.KeyIdentitySingleArrayBacked
java_import org.apache.hadoop.hbase.security.EncryptionUtil
java_import java.security.KeyStore
java_import java.security.MessageDigest
@@ -263,6 +264,9 @@ def setup_new_key_provider
HConstants::CRYPTO_MANAGED_KEY_STORE_SYSTEM_KEY_NAME_CONF_KEY,
'system_key'
)
+ $TEST_CLUSTER.getConfiguration.set(
+ HConstants::CRYPTO_MANAGED_KEYS_LOCAL_KEY_GEN_PER_FILE_ENABLED_CONF_KEY, 'true'
+ )
# Setup key configurations for ManagedKeyStoreKeyProvider
# Shared key configuration
@@ -275,7 +279,7 @@ def setup_new_key_provider
true
)
- # Table-level key configuration - let system determine namespace automatically
+ # Table-level key configuration (namespace set on CF at migration via ENCRYPTION_KEY_NAMESPACE)
$TEST_CLUSTER.getConfiguration.set(
"hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_table_key}.alias",
"#{@table_table_key}_key"
@@ -285,7 +289,7 @@ def setup_new_key_provider
true
)
- # CF-level key configurations - let system determine namespace automatically
+ # CF-level key configurations (namespace set on each CF at migration via ENCRYPTION_KEY_NAMESPACE)
$TEST_CLUSTER.getConfiguration.set(
"hbase.crypto.managed_key_store.cust.#{$GLOB_CUST_ENCODED}.#{@table_cf_keys}/cf1.alias",
"#{@table_cf_keys}_cf1_key"
@@ -379,8 +383,9 @@ def migrate_table_level_key
"Expected ACTIVE status for table key, got: #{output}")
puts ' >> Enabled key management for table-level key'
- # Migrate the table - no namespace attribute, let system auto-determine
- migrate_table_to_managed_key(@table_table_key, 'f', @table_table_key)
+ # Migrate the table: set ENCRYPTION_KEY_NAMESPACE on CF so server resolves via CF attribute
+ migrate_table_to_managed_key(@table_table_key, 'f', @table_table_key,
+ use_namespace_attribute: true)
end
def migrate_cf_level_keys
@@ -402,11 +407,13 @@ def migrate_cf_level_keys
"Expected ACTIVE status for CF2 key, got: #{output}")
puts ' >> Enabled key management for CF2'
- # Migrate CF1
- migrate_table_to_managed_key(@table_cf_keys, 'cf1', cf1_namespace)
+ # Migrate CF1: set ENCRYPTION_KEY_NAMESPACE on CF so server resolves via CF attribute
+ migrate_table_to_managed_key(@table_cf_keys, 'cf1', cf1_namespace,
+ use_namespace_attribute: true)
- # Migrate CF2
- migrate_table_to_managed_key(@table_cf_keys, 'cf2', cf2_namespace)
+ # Migrate CF2: set ENCRYPTION_KEY_NAMESPACE on CF so server resolves via CF attribute
+ migrate_table_to_managed_key(@table_cf_keys, 'cf2', cf2_namespace,
+ use_namespace_attribute: true)
end
def migrate_table_to_managed_key(table_name, cf_name, namespace,
@@ -416,13 +423,13 @@ def migrate_table_to_managed_key(table_name, cf_name, namespace,
# Use atomic alter operation to remove ENCRYPTION_KEY and optionally add
# ENCRYPTION_KEY_NAMESPACE
if use_namespace_attribute
- # For shared key tables: remove ENCRYPTION_KEY and add ENCRYPTION_KEY_NAMESPACE atomically
+ # Remove ENCRYPTION_KEY and set ENCRYPTION_KEY_NAMESPACE so server resolves key via CF attribute
command(:alter, table_name,
{ 'NAME' => cf_name,
'CONFIGURATION' => { 'ENCRYPTION_KEY' => '',
'ENCRYPTION_KEY_NAMESPACE' => namespace } })
else
- # For table/CF level keys: just remove ENCRYPTION_KEY, let system auto-determine namespace
+ # Remove ENCRYPTION_KEY only (server falls back to resolving via the global namespace)
command(:alter, table_name,
{ 'NAME' => cf_name, 'CONFIGURATION' => { 'ENCRYPTION_KEY' => '' } })
end
@@ -507,17 +514,19 @@ def validate_hfile_trailer(table_name, cf_name, is_post_migration, is_key_manage
if is_key_management_enabled
assert_not_nil(trailer.getKEKMetadata)
- assert_not_equal(0, trailer.getKEKChecksum)
+ assert_not_nil(trailer.getKekIdentity)
+ assert_true(trailer.getKekIdentity.length > 0)
else
assert_nil(trailer.getKEKMetadata)
- assert_equal(0, trailer.getKEKChecksum)
+ assert_true(trailer.getKekIdentity.nil? || trailer.getKekIdentity.length == 0)
end
if is_post_migration
- assert_equal(expected_namespace, trailer.getKeyNamespace)
- puts " >> Trailer validation passed - namespace: #{trailer.getKeyNamespace}"
+ parsed = KeyIdentitySingleArrayBacked.new(trailer.getKekIdentity)
+ parsed_namespace = parsed.getNamespaceView().copyBytes()
+ assert_equal(expected_namespace.bytes, parsed_namespace) if expected_namespace
+ puts " >> Trailer validation passed - namespace: #{String.from_java_bytes(parsed_namespace)}"
else
- assert_nil(trailer.getKeyNamespace)
puts ' >> Trailer validation passed - using legacy key format'
end
end
diff --git a/hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb b/hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb
index 77a2a339552e..7df08f21ee1c 100644
--- a/hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb
+++ b/hbase-shell/src/test/ruby/shell/rotate_stk_keymeta_mock_provider_test.rb
@@ -39,13 +39,15 @@ def setup
define_test 'Test rotate_stk command' do
puts 'Testing rotate_stk command'
+ key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
+ key_provider.setMultikeyGenMode(false)
+
# this should return false (no rotation performed)
output = capture_stdout { @shell.command(:rotate_stk) }
puts "rotate_stk output: #{output}"
assert(output.include?('No System Key change was detected'),
"Expected output to contain rotation status message, but got: #{output}")
- key_provider = Encryption.getManagedKeyProvider($TEST_CLUSTER.getConfiguration)
# Once we enable multikeyGenMode on MockManagedKeyProvider, every call should return a new key
# which should trigger a rotation.
key_provider.setMultikeyGenMode(true)
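
The rotation check above relies on the mock provider's multi-key generation mode: with it off, repeated system-key fetches return the same key and no rotation is detected; with it on, every fetch yields a fresh key, which triggers rotation. A hypothetical sketch of that behaviour (not the actual `MockManagedKeyProvider` Java class):

```ruby
require 'securerandom'

# Sketch of a provider whose system key is stable unless multi-key mode is on.
class SketchMockProvider
  def initialize
    @multikey = false
    @fixed = SecureRandom.random_bytes(32)
  end

  def set_multikey_gen_mode(flag)
    @multikey = flag
  end

  # Returns the same key every call, unless multi-key mode forces a fresh one.
  def system_key
    @multikey ? SecureRandom.random_bytes(32) : @fixed
  end
end

p = SketchMockProvider.new
stable  = p.system_key == p.system_key # stable key: no rotation detected
p.set_multikey_gen_mode(true)
rotated = p.system_key != p.system_key # fresh key each call: rotation detected
```

This is why the test first calls `setMultikeyGenMode(false)` before the initial `rotate_stk`, ensuring the "no change detected" path is exercised deterministically.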
diff --git a/pom.xml b/pom.xml
index 6b76dcaa083a..282ca0ddcde6 100644
--- a/pom.xml
+++ b/pom.xml
@@ -973,6 +973,7 @@
2.0.31.11.01.10.4
+ 0.161.1.10.41.5.7-2