Example 31 with LockRequest

Use of org.apache.hadoop.hive.metastore.api.LockRequest in project hive by apache, from class TestLockRequestBuilder, method testExSRTable.

// Test that existing exclusive table with new shared_read coalesces to
// exclusive
@Test
public void testExSRTable() {
    LockRequestBuilder bldr = new LockRequestBuilder();
    LockComponent comp = new LockComponent(LockType.EXCLUSIVE, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    bldr.addLockComponent(comp);
    comp = new LockComponent(LockType.SHARED_READ, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    bldr.addLockComponent(comp).setUser("fred");
    LockRequest req = bldr.build();
    List<LockComponent> locks = req.getComponent();
    Assert.assertEquals(1, locks.size());
    Assert.assertEquals(LockType.EXCLUSIVE, locks.get(0).getType());
}
Also used: LockComponent (org.apache.hadoop.hive.metastore.api.LockComponent), LockRequest (org.apache.hadoop.hive.metastore.api.LockRequest), Test (org.junit.Test)

Example 32 with LockRequest

Use of org.apache.hadoop.hive.metastore.api.LockRequest in project hive by apache, from class TestLockRequestBuilder, method testSWSWDb.

// Test that existing shared_write db with new shared_write coalesces to
// shared_write
@Test
public void testSWSWDb() {
    LockRequestBuilder bldr = new LockRequestBuilder();
    LockComponent comp = new LockComponent(LockType.SHARED_WRITE, LockLevel.DB, "mydb");
    bldr.addLockComponent(comp);
    comp = new LockComponent(LockType.SHARED_WRITE, LockLevel.DB, "mydb");
    bldr.addLockComponent(comp).setUser("fred");
    LockRequest req = bldr.build();
    List<LockComponent> locks = req.getComponent();
    Assert.assertEquals(1, locks.size());
    Assert.assertEquals(LockType.SHARED_WRITE, locks.get(0).getType());
}
Also used: LockComponent (org.apache.hadoop.hive.metastore.api.LockComponent), LockRequest (org.apache.hadoop.hive.metastore.api.LockRequest), Test (org.junit.Test)

Example 33 with LockRequest

Use of org.apache.hadoop.hive.metastore.api.LockRequest in project hive by apache, from class TestLockRequestBuilder, method testSRExDb.

// Test that existing shared_read db with new exclusive coalesces to
// exclusive
@Test
public void testSRExDb() {
    LockRequestBuilder bldr = new LockRequestBuilder();
    LockComponent comp = new LockComponent(LockType.SHARED_READ, LockLevel.DB, "mydb");
    bldr.addLockComponent(comp);
    comp = new LockComponent(LockType.EXCLUSIVE, LockLevel.DB, "mydb");
    bldr.addLockComponent(comp).setUser("fred");
    LockRequest req = bldr.build();
    List<LockComponent> locks = req.getComponent();
    Assert.assertEquals(1, locks.size());
    Assert.assertEquals(LockType.EXCLUSIVE, locks.get(0).getType());
}
Also used: LockComponent (org.apache.hadoop.hive.metastore.api.LockComponent), LockRequest (org.apache.hadoop.hive.metastore.api.LockRequest), Test (org.junit.Test)

Example 34 with LockRequest

Use of org.apache.hadoop.hive.metastore.api.LockRequest in project hive by apache, from class TestLockRequestBuilder, method testTwoSeparatePartitions.

// Test that 2 separate partitions don't coalesce.
@Test
public void testTwoSeparatePartitions() {
    LockRequestBuilder bldr = new LockRequestBuilder();
    LockComponent comp = new LockComponent(LockType.EXCLUSIVE, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    comp.setPartitionname("mypart");
    bldr.addLockComponent(comp);
    comp = new LockComponent(LockType.EXCLUSIVE, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    comp.setPartitionname("yourpart");
    bldr.addLockComponent(comp).setUser("fred");
    LockRequest req = bldr.build();
    List<LockComponent> locks = req.getComponent();
    Assert.assertEquals(2, locks.size());
}
Also used: LockComponent (org.apache.hadoop.hive.metastore.api.LockComponent), LockRequest (org.apache.hadoop.hive.metastore.api.LockRequest), Test (org.junit.Test)

Example 35 with LockRequest

Use of org.apache.hadoop.hive.metastore.api.LockRequest in project hive by apache, from class TestLockRequestBuilder, method testSRSWTable.

// Test that existing shared_read table with new shared_write coalesces to
// shared_write
@Test
public void testSRSWTable() {
    LockRequestBuilder bldr = new LockRequestBuilder();
    LockComponent comp = new LockComponent(LockType.SHARED_READ, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    bldr.addLockComponent(comp);
    comp = new LockComponent(LockType.SHARED_WRITE, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    bldr.addLockComponent(comp).setUser("fred");
    LockRequest req = bldr.build();
    List<LockComponent> locks = req.getComponent();
    Assert.assertEquals(1, locks.size());
    Assert.assertEquals(LockType.SHARED_WRITE, locks.get(0).getType());
}
Also used: LockComponent (org.apache.hadoop.hive.metastore.api.LockComponent), LockRequest (org.apache.hadoop.hive.metastore.api.LockRequest), Test (org.junit.Test)
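
Taken together, the examples above suggest that LockRequestBuilder coalesces components on the same object with the precedence EXCLUSIVE over SHARED_WRITE over SHARED_READ. The sketch below is not part of the Hive test suite; it is a hypothetical test (the method name testAllThreeTypesTable is invented here) illustrating how all three lock types on one table would presumably collapse to a single EXCLUSIVE component, assuming the pairwise behaviour shown above composes.

// Hypothetical sketch (not from TestLockRequestBuilder): mixing SHARED_READ,
// SHARED_WRITE and EXCLUSIVE on the same table should presumably coalesce to
// a single EXCLUSIVE lock, given the pairwise tests above.
@Test
public void testAllThreeTypesTable() {
    LockRequestBuilder bldr = new LockRequestBuilder();
    LockComponent comp = new LockComponent(LockType.SHARED_READ, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    bldr.addLockComponent(comp);
    comp = new LockComponent(LockType.SHARED_WRITE, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    bldr.addLockComponent(comp);
    comp = new LockComponent(LockType.EXCLUSIVE, LockLevel.DB, "mydb");
    comp.setTablename("mytable");
    bldr.addLockComponent(comp).setUser("fred");
    LockRequest req = bldr.build();
    List<LockComponent> locks = req.getComponent();
    // Expected (assumed): the three components collapse into one EXCLUSIVE lock.
    Assert.assertEquals(1, locks.size());
    Assert.assertEquals(LockType.EXCLUSIVE, locks.get(0).getType());
}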

Aggregations

LockRequest (org.apache.hadoop.hive.metastore.api.LockRequest): 96
LockComponent (org.apache.hadoop.hive.metastore.api.LockComponent): 94
Test (org.junit.Test): 94
LockResponse (org.apache.hadoop.hive.metastore.api.LockResponse): 59
ArrayList (java.util.ArrayList): 57
CheckLockRequest (org.apache.hadoop.hive.metastore.api.CheckLockRequest): 32
Table (org.apache.hadoop.hive.metastore.api.Table): 24
ShowCompactRequest (org.apache.hadoop.hive.metastore.api.ShowCompactRequest): 22
ShowCompactResponse (org.apache.hadoop.hive.metastore.api.ShowCompactResponse): 22
ShowCompactResponseElement (org.apache.hadoop.hive.metastore.api.ShowCompactResponseElement): 17
CommitTxnRequest (org.apache.hadoop.hive.metastore.api.CommitTxnRequest): 15
AbortTxnRequest (org.apache.hadoop.hive.metastore.api.AbortTxnRequest): 11
Partition (org.apache.hadoop.hive.metastore.api.Partition): 10
CompactionRequest (org.apache.hadoop.hive.metastore.api.CompactionRequest): 6
UnlockRequest (org.apache.hadoop.hive.metastore.api.UnlockRequest): 5
NoSuchLockException (org.apache.hadoop.hive.metastore.api.NoSuchLockException): 4
CompactionInfo (org.apache.hadoop.hive.metastore.txn.CompactionInfo): 4
GetOpenTxnsResponse (org.apache.hadoop.hive.metastore.api.GetOpenTxnsResponse): 3
OpenTxnRequest (org.apache.hadoop.hive.metastore.api.OpenTxnRequest): 3
HashMap (java.util.HashMap): 2
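
Several of the classes aggregated above (LockResponse, CheckLockRequest, UnlockRequest) appear together with LockRequest because a built request is normally submitted to the metastore, polled until granted, and later released. The following is a minimal sketch of that flow, assuming an already-connected IMetaStoreClient; the class and helper names (LockFlowSketch, acquireTableLock, releaseLock) and the one-second polling loop are illustrative choices, not taken from the Hive sources.

import org.apache.hadoop.hive.metastore.IMetaStoreClient;
import org.apache.hadoop.hive.metastore.LockRequestBuilder;
import org.apache.hadoop.hive.metastore.api.LockComponent;
import org.apache.hadoop.hive.metastore.api.LockLevel;
import org.apache.hadoop.hive.metastore.api.LockRequest;
import org.apache.hadoop.hive.metastore.api.LockResponse;
import org.apache.hadoop.hive.metastore.api.LockState;
import org.apache.hadoop.hive.metastore.api.LockType;
import org.apache.thrift.TException;

public class LockFlowSketch {

    // Hypothetical helper: build a SHARED_READ table lock, submit it, and wait
    // until the metastore reports it as acquired. Returns the lock id so the
    // caller can release it later.
    static long acquireTableLock(IMetaStoreClient client, String db, String table, String user)
            throws TException, InterruptedException {
        LockComponent comp = new LockComponent(LockType.SHARED_READ, LockLevel.DB, db);
        comp.setTablename(table);
        LockRequestBuilder bldr = new LockRequestBuilder();
        bldr.addLockComponent(comp).setUser(user);
        LockRequest req = bldr.build();

        LockResponse res = client.lock(req);
        // If another client holds a conflicting lock, the response is WAITING;
        // poll checkLock() on the returned lock id until it is granted.
        while (res.getState() == LockState.WAITING) {
            Thread.sleep(1000);
            res = client.checkLock(res.getLockid());
        }
        return res.getLockid();
    }

    // Release a previously acquired lock by id.
    static void releaseLock(IMetaStoreClient client, long lockId) throws TException {
        client.unlock(lockId);
    }
}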