Available in VPC
Overview
Create a Cloud Hadoop instance.
Request
| Parameter name | Required | Type | Restrictions | Description |
| --- | --- | --- | --- | --- |
| regionCode | No | String | | Region code<br>- Determine the Region in which the Cloud Hadoop instance will be created.<br>- regionCode can be obtained through the getRegionList action.<br>- Default: Select the first Region of the getRegionList query result. |
| vpcNo | Yes | String | | VPC number<br>- Determine the VPC in which the Cloud Hadoop instance will be created.<br>- vpcNo can be obtained through the getVpcList action. |
| cloudHadoopImageProductCode | No | String | | Cloud Hadoop image product code<br>- cloudHadoopImageProductCode can be obtained through productCode of the getCloudHadoopImageProductList action.<br>- It is created with the default value if not entered.<br>- Default: latest version of Cloud Hadoop |
| masterNodeProductCode | No | String | | Cloud Hadoop master server product code<br>- Determine the server specifications for the Cloud Hadoop instance to create.<br>- masterNodeProductCode can be obtained when calling the getCloudHadoopProductList action by setting its infraResourceDetailTypeCode to "MSTDT."<br>- Default: The minimum specification is selected.<br>- The minimum specification is based on 1. memory and 2. CPU. |
| edgeNodeProductCode | No | String | | Cloud Hadoop edge server product code<br>- Determine the server specifications for the Cloud Hadoop instance to create.<br>- edgeNodeProductCode can be obtained when calling the getCloudHadoopProductList action by setting its infraResourceDetailTypeCode to "EDGND."<br>- Default: The minimum specification is selected.<br>- The minimum specification is based on 1. memory and 2. CPU. |
| workerNodeProductCode | No | String | | Cloud Hadoop worker server product code<br>- Determine the server specifications for the Cloud Hadoop instance to create.<br>- workerNodeProductCode can be obtained when calling the getCloudHadoopProductList action by setting its infraResourceDetailTypeCode to "MSTDT."<br>- Default: The minimum specification is selected.<br>- The minimum specification is based on 1. memory and 2. CPU. |
| cloudHadoopClusterName | Yes | String | Only English letters, numbers, hyphens (-), and Korean letters can be used.<br>- The first and last character must be a lowercase English letter or number.<br>- Min: 3<br>- Max: 15 | Cloud Hadoop cluster name |
| cloudHadoopClusterTypeCode | Yes | String | | Cloud Hadoop cluster type code |
| cloudHadoopAddOnCodeList | No | List<String> | | Cloud Hadoop add-on list<br>- Can only be used with Cloud Hadoop version 1.5 or later.<br>- cloudHadoopAddOnCode can be obtained through the getCloudHadoopAddOnList action.<br>- Example: cloudHadoopAddOnCodeList=PRESTO,HBASE |
| cloudHadoopAdminUserName | Yes | String | Only lowercase English letters, numbers, and hyphens (-) can be used.<br>- The first and last character must be a lowercase English letter or number.<br>- Min: 3<br>- Max: 15 | Cluster management key username<br>- Admin account required to access the Ambari management console |
| cloudHadoopAdminUserPassword | Yes | String | Must contain at least one uppercase English letter, one special character, and one number.<br>- Spaces or the following special characters can't be included: single quotes (' '), double quotes (" "), KRW symbols (₩), slashes (/), ampersands (&), or backticks (`)<br>- Min: 8<br>- Max: 20 | Cluster admin password<br>- Admin account password required to access the Ambari management console |
| loginKeyName | Yes | String | | Authentication key name<br>- loginKeyName can be obtained through the getCloudHadoopLoginKeyList action.<br>- Set the SSH authentication key required to connect directly to the node. |
| edgeNodeSubnetNo | Yes | String | | Subnet number of the edge node<br>- Select the subnet on which the edge node will be located.<br>- Edge nodes are located in private/public subnets.<br>- edgeNodeSubnetNo can be obtained through the getSubnetList action. |
| masterNodeSubnetNo | Yes | String | | Subnet number of the master node<br>- Select the subnet on which the master node will be located.<br>- Master nodes are located in private/public subnets.<br>- masterNodeSubnetNo can be obtained through the getSubnetList action. |
| bucketName | Yes | String | | Bucket name |
| workerNodeSubnetNo | Yes | String | | Subnet number of the worker node<br>- Select the subnet on which the worker node will be located.<br>- Worker nodes are only located in private subnets.<br>- workerNodeSubnetNo can be obtained through the getSubnetList action. |
| masterNodeDataStorageTypeCode | No | String | SSD \| HDD | Master node's data storage type code<br>- Data storage type can't be changed after installation.<br>- Options: SSD \| HDD<br>- Default: SSD |
| workerNodeDataStorageTypeCode | No | String | SSD \| HDD | Worker node's data storage type code<br>- Data storage type can't be changed after installation.<br>- Options: SSD \| HDD<br>- Default: SSD |
| masterNodeDataStorageSize | Yes | Integer | Min: 100<br>Max: 2000 | Master node's data storage size<br>- Can be entered in 10 GB increments from 100 GB to 2000 GB.<br>- 4000 GB or 6000 GB can also be used. |
| workerNodeDataStorageSize | Yes | Integer | Min: 100<br>Max: 2000 | Worker node's data storage size<br>- Can be entered in 10 GB increments from 100 GB to 2000 GB.<br>- 4000 GB or 6000 GB can also be used. |
| workerNodeCount | No | Integer | | Number of worker nodes<br>- The number of worker nodes can be selected from 2 to 8.<br>- Default: 2 |
| useKdc | No | Boolean | | Whether to enable Kerberos authentication configuration<br>- Configure a secure Hadoop cluster using Kerberos.<br>- Default: false |
| kdcRealm | Conditional | String | | KDC's realm information<br>- Enter only if useKdc is true. It is ignored if useKdc is false.<br>- Only domains that follow the realm format are allowed. |
| kdcPassword | Conditional | String | | KDC admin account's password<br>- Enter only if useKdc is true.<br>- It is ignored if useKdc is false. |
| useBootstrapScript | No | Boolean | | Whether to use Cloud Hadoop bootstrap scripts<br>- Default: false |
| bootstrapScript | Conditional | String | Only English letters are supported, and spaces or special characters are not allowed.<br>- Maximum length: 1024 bytes | Cloud Hadoop bootstrap script<br>- Enter only if useBootstrapScript is true. It is ignored if useBootstrapScript is false.<br>- Scripts can only be run with buckets that are integrated with Cloud Hadoop.<br>- Enter the folder and file name, excluding the bucket name.<br>- Example: init-script/example.sh |
| useDataCatalog | No | Boolean | | Whether to use Cloud Hadoop Data Catalog<br>- Cloud Hadoop Hive metastores are served using catalogs from the Data Catalog service.<br>- Integration is possible only if the catalog status of the Data Catalog service is normal.<br>- Only Cloud Hadoop version 2.0 or later is supported.<br>- Default: false |
| engineVersionCode | Conditional | String | | Engine version code |
| output | No | String | | Response result's format type<br>- Options: xml \| json<br>- Default: json |
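
The following is a minimal, unofficial sketch of calling this action directly over HTTPS from Python. The gateway host, the /vhadoop/v2 path, and the x-ncp-apigw-signature-v2 HMAC-SHA256 signing scheme are assumptions based on common NCP API Gateway conventions rather than part of this reference, and every credential and parameter value below is a placeholder.

```python
# Minimal sketch (not an official SDK sample): calling createCloudHadoopInstance
# over HTTPS. The gateway host, the /vhadoop/v2 path, and the signing scheme are
# assumptions; adjust them to your environment.
import base64
import hashlib
import hmac
import time
import urllib.parse
import urllib.request

API_HOST = "https://ncloud.apigw.ntruss.com"   # assumed gateway host
ACCESS_KEY = "YOUR_ACCESS_KEY"                 # placeholder credential
SECRET_KEY = "YOUR_SECRET_KEY"                 # placeholder credential


def call_action(action: str, params: dict) -> bytes:
    """Sign and send a GET request for the given vhadoop action."""
    uri = f"/vhadoop/v2/{action}?" + urllib.parse.urlencode(params)
    timestamp = str(int(time.time() * 1000))
    # Signature base string: "{method} {uri}\n{timestamp}\n{access key}"
    message = f"GET {uri}\n{timestamp}\n{ACCESS_KEY}"
    signature = base64.b64encode(
        hmac.new(SECRET_KEY.encode(), message.encode(), hashlib.sha256).digest()
    ).decode()
    request = urllib.request.Request(
        API_HOST + uri,
        headers={
            "x-ncp-apigw-timestamp": timestamp,
            "x-ncp-iam-access-key": ACCESS_KEY,
            "x-ncp-apigw-signature-v2": signature,
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.read()


# Required parameters from the table above (values are placeholders),
# plus output=json to receive the JSON form of the response.
params = {
    "vpcNo": "1665",
    "cloudHadoopClusterName": "test123",
    "cloudHadoopClusterTypeCode": "CORE_HADOOP_WITH_SPARK",
    "cloudHadoopAdminUserName": "test-admin",
    "cloudHadoopAdminUserPassword": "********",
    "loginKeyName": "my-login-key",
    "edgeNodeSubnetNo": "1101",
    "masterNodeSubnetNo": "1101",
    "workerNodeSubnetNo": "1202",
    "bucketName": "my-bucket",
    "masterNodeDataStorageSize": 100,
    "workerNodeDataStorageSize": 100,
    "output": "json",
}
print(call_action("createCloudHadoopInstance", params).decode())
```

Optional parameters from the table (workerNodeCount, useKdc, bootstrapScript, and so on) can be appended to the same dictionary; the CLI call in the Examples section below passes the equivalent values as command-line options.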
Response
Response data type
- CloudHadoopInstanceList type
| CloudHadoopInstanceList extends CommonResponse |
| --- |
| private Integer totalRows; |
| private List<CloudHadoopInstance> cloudHadoopInstanceList = new ArrayList<>(); |

| CloudHadoopInstance |
| --- |
| private String cloudHadoopInstanceNo; |
| private String cloudHadoopClusterName; |
| private String cloudHadoopInstanceStatusName; |
| private CommonCode cloudHadoopInstanceStatus; |
| private CommonCode cloudHadoopInstanceOperation; |
| private CloudHadoopClusterType cloudHadoopClusterType; |
| private CloudHadoopVersion cloudHadoopVersion; |
| private List<CloudHadoopAddOn> cloudHadoopAddOnList; |
| private String ambariServerHost; |
| private String clusterDirectAccessAccount; |
| private String loginKey; |
| private String objectStorageBucket; |
| private String kdcRealm; |
| private String cloudHadoopImageProductCode; |
| private Boolean isHa; |
| private String domain; |
| private AccessControlGroupNoList accessControlGroupNoList; |
| private Date createDate; |
| private Boolean useDataCatalog; |
| private List<CloudHadoopServerInstance> cloudHadoopServerInstanceList; |

| CloudHadoopAddOn |
| --- |
| private String code; |
| private String codeName; |

| CloudHadoopClusterType |
| --- |
| private String code; |
| private String codeName; |

| CloudHadoopVersion |
| --- |
| private String code; |
| private String codeName; |

| AccessControlGroupNoList |
| --- |
| private List<String> accessControlGroupNoList = new ArrayList<>(); |

| CloudHadoopServerInstance |
| --- |
| private String cloudHadoopServerInstanceNo; |
| private String cloudHadoopServerName; |
| private String cloudHadoopServerInstanceStatusName; |
| private CommonCode cloudHadoopServerInstanceStatus; |
| private CommonCode cloudHadoopServerInstanceOperation; |
| private CommonCode cloudHadoopServerRole; |
| private String regionCode; |
| private String vpcNo; |
| private String vpcName; |
| private String subnetNo; |
| private String subnetName; |
| private String privateIp; |
| private Date createDate; |
| private Date uptime; |
| private String zoneCode; |
| private Long memorySize; |
| private Integer cpuCount; |
| private Boolean isPublicSubnet; |
| private Long dataStorageSize; |
| private String cloudHadoopProductCode; |
| private CommonCode dataStorageType; |
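
As a quick illustration of the structure above, here is a minimal, hypothetical helper that walks the documented fields in the JSON form of the response; the helper itself is not part of the API.

```python
# Minimal sketch: reading the documented fields of createCloudHadoopInstanceResponse
# in its JSON form. The helper and variable names are illustrative only.
import json


def summarize_response(payload: str) -> None:
    body = json.loads(payload)["createCloudHadoopInstanceResponse"]
    print("returnCode:", body["returnCode"], "| totalRows:", body["totalRows"])
    for instance in body["cloudHadoopInstanceList"]:
        print(
            "cluster:", instance["cloudHadoopClusterName"],
            "| instanceNo:", instance["cloudHadoopInstanceNo"],
            "| status:", instance["cloudHadoopInstanceStatus"]["code"],
        )
        for server in instance.get("cloudHadoopServerInstanceList", []):
            print(
                "  server:", server["cloudHadoopServerName"],
                "| role:", server["cloudHadoopServerRole"]["code"],
                "| status:", server["cloudHadoopServerInstanceStatus"]["code"],
            )
```

Note that in the JSON example below the edge node entry omits dataStorageType while the other server entries include it, so per-field access should tolerate missing keys.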
Examples
Call
ncloud vhadoop createCloudHadoopInstance --regionCode KR --vpcNo **65 --cloudHadoopImageProductCode SW.VCHDP.LNX64.CNTOS.0708.HDP.15.B050 --masterNodeProductCode SVR.VCHDP.MSTDT.HIMEM.C004.M032.NET.HDD.B050.G002 --edgeNodeProductCode SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002 --workerNodeProductCode SVR.VCHDP.MSTDT.HICPU.C008.M016.NET.HDD.B050.G002 --cloudHadoopClusterName test*** --cloudHadoopClusterTypeCode CORE_HADOOP_WITH_SPARK --cloudHadoopAddOnCodeList PRESTO --cloudHadoopAdminUserName test-*** --cloudHadoopAdminUserPassword ******* --loginKeyName key**** --bucketName buc*** --edgeNodeSubnetNo 11** --masterNodeSubnetNo 11** --workerNodeSubnetNo 12** --masterNodeDataStorageTypeCode SSD --workerNodeDataStorageTypeCode SSD --masterNodeDataStorageSize 100 --workerNodeDataStorageSize 100 --workerNodeCount 2 --useKdc true --kdcRealm EX**LE.COM --kdcPassword ********* --useBootstrapScript true --bootstrapScript init-script/example.sh --useDataCatalog true
Response
{
"createCloudHadoopInstanceResponse": {
"totalRows": 1,
"cloudHadoopInstanceList": [
{
"cloudHadoopInstanceNo": "2775778",
"cloudHadoopClusterName": "test123",
"cloudHadoopInstanceStatusName": "creating",
"cloudHadoopInstanceStatus": {
"code": "INIT",
"codeName": "CLOUD DATABASE(VPC) Init State"
},
"cloudHadoopInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Creat OP"
},
"cloudHadoopClusterType": {
"code": "CORE_HADOOP_WITH_SPARK",
"codeName": "Core Hadoop with Spark : HDFS(3.1.1), YARN(3.1.1), Zookeeper(3.4.9), Ranger(2.0.0), HIVE(3.1.2), Hue(4.8.0), Zeppelin Notebook(0.10.1), Spark (2.4.8)"
},
"cloudHadoopVersion": {
"code": "HADOOP2.0",
"codeName": "Cloud Hadoop 2.0"
},
"cloudHadoopAddOnList": [],
"ambariServerHost": "e-001-test123-15iv-hd",
"clusterDirectAccessAccount": "sshuser",
"loginKey": "newkey",
"objectStorageBucket": "ffdd",
"cloudHadoopImageProductCode": "SW.VCHDP.LNX64.CNTOS.0708.HDP.20.B050",
"isHa": true,
"createDate": "2023-02-08T21:26:09+0900",
"accessControlGroupNoList": [],
"cloudHadoopServerInstanceList": [
{
"cloudHadoopServerName": "e-001-test123-15iv-hd",
"cloudHadoopServerRole": {
"code": "E",
"codeName": "Edge Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"vpcName": "vpcTestName",
"subnetNo": "5746",
"subnetName": "subnetTestName",
"privateIp": "192.168.***.***",
"isPublicSubnet": false,
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "m-001-test123-15it-hd",
"cloudHadoopServerRole": {
"code": "M",
"codeName": "Master Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"vpcName": "vpcTestName",
"subnetNo": "5746",
"subnetName": "subnetTestName",
"privateIp": "192.168.***.***",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "m-002-test123-15iu-hd",
"cloudHadoopServerRole": {
"code": "M",
"codeName": "Master Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"vpcName": "vpcTestName",
"subnetNo": "5746",
"subnetName": "subnetTestName",
"privateIp": "192.168.***.***",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "d-001-test123-15iw-hd",
"cloudHadoopServerRole": {
"code": "D",
"codeName": "Date Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"vpcName": "vpcTestName",
"subnetNo": "5746",
"subnetName": "subnetTestName",
"privateIp": "192.168.***.***",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
},
{
"cloudHadoopServerName": "d-002-test123-15ix-hd",
"cloudHadoopServerRole": {
"code": "D",
"codeName": "Date Node"
},
"cloudHadoopServerInstanceStatusName": "creating",
"cloudHadoopServerInstanceStatus": {
"code": "PEND",
"codeName": "CLOUD DATABASE(VPC) Server Pending State"
},
"cloudHadoopServerInstanceOperation": {
"code": "CREAT",
"codeName": "CLOUD DATABASE(VPC) Server Create OP"
},
"cloudHadoopProductCode": "SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002",
"regionCode": "KR",
"zoneCode": "KR-2",
"vpcNo": "1665",
"vpcName": "vpcTestName",
"subnetNo": "5746",
"subnetName": "subnetTestName",
"privateIp": "192.168.***.***",
"isPublicSubnet": false,
"dataStorageType": {
"code": "SSD",
"codeName": "SSD"
},
"dataStorageSize": 107374182400,
"cpuCount": 4,
"memorySize": 17179869184,
"createDate": "2023-02-08T21:26:09+0900"
}
]
}
],
"requestId": "aa789745-34de-416c-a4c2-159482eaa9ed",
"returnCode": "0",
"returnMessage": "success"
}
}
<createCloudHadoopInstanceResponse>
<requestId>b8828eca-c3f8-4ddb-86dd-3355026b4b94</requestId>
<returnCode>0</returnCode>
<returnMessage>success</returnMessage>
<totalRows>1</totalRows>
<cloudHadoopInstanceList>
<cloudHadoopInstance>
<cloudHadoopInstanceNo>***4904</cloudHadoopInstanceNo>
<cloudHadoopClusterName>test***</cloudHadoopClusterName>
<cloudHadoopInstanceStatusName>creating</cloudHadoopInstanceStatusName>
<cloudHadoopInstanceStatus>
<code>INIT</code>
<codeName>CLOUD DATABASE(VPC) Init State</codeName>
</cloudHadoopInstanceStatus>
<cloudHadoopInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Creat OP</codeName>
</cloudHadoopInstanceOperation>
<cloudHadoopClusterType>
<code>CORE_HADOOP_WITH_SPARK</code>
<codeName>Core Hadoop with Spark : HDFS(3.1.1), YARN(3.1.1), Zookeeper(3.4.6), Ranger(1.2.0), HIVE(3.1.0), Hue(4.3.0), Zepplin Notebook(0.8.0), Spark(2.4.8)</codeName>
</cloudHadoopClusterType>
<cloudHadoopVersion>
<code>HADOOP1.6</code>
<codeName>Cloud Hadoop 1.6</codeName>
</cloudHadoopVersion>
<ambariServerHost>e-001-dasfxc-mel-hd</ambariServerHost>
<clusterDirectAccessAccount>***user</clusterDirectAccessAccount>
<loginKey>****</loginKey>
<objectStorageBucket>****</objectStorageBucket>
<cloudHadoopImageProductCode>SW.VCHDP.LNX64.CNTOS.0708.HDP.16.B050</cloudHadoopImageProductCode>
<isHa>true</isHa>
<createDate>2021-11-09T18:55:29+0900</createDate>
<accessControlGroupNoList>
<accessControlGroupNo>11728</accessControlGroupNo>
</accessControlGroupNoList>
<cloudHadoopServerInstanceList>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>E</code>
<codeName>Edge Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.EDGND.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<vpcName>*****</vpcName>
<subnetNo>**23</subnetNo>
<subnetName>*****</subnetName>
<privateIp>***.***.***.***</privateIp>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>M</code>
<codeName>Master Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<vpcName>*****</vpcName>
<subnetNo>**23</subnetNo>
<subnetName>*****</subnetName>
<privateIp>***.***.***.***</privateIp>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>M</code>
<codeName>Master Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<vpcName>*****</vpcName>
<subnetNo>**23</subnetNo>
<subnetName>*****</subnetName>
<privateIp>***.***.***.***</privateIp>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>D</code>
<codeName>Date Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<vpcName>*****</vpcName>
<subnetNo>**23</subnetNo>
<subnetName>*****</subnetName>
<privateIp>***.***.***.***</privateIp>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
<cloudHadoopServerInstance>
<cloudHadoopServerName>****</cloudHadoopServerName>
<cloudHadoopServerRole>
<code>D</code>
<codeName>Date Node</codeName>
</cloudHadoopServerRole>
<cloudHadoopServerInstanceStatusName>creating</cloudHadoopServerInstanceStatusName>
<cloudHadoopServerInstanceStatus>
<code>PEND</code>
<codeName>CLOUD DATABASE(VPC) Server Pending State</codeName>
</cloudHadoopServerInstanceStatus>
<cloudHadoopServerInstanceOperation>
<code>CREAT</code>
<codeName>CLOUD DATABASE(VPC) Server Create OP</codeName>
</cloudHadoopServerInstanceOperation>
<cloudHadoopProductCode>SVR.VCHDP.MSTDT.STAND.C004.M016.NET.HDD.B050.G002</cloudHadoopProductCode>
<regionCode>KR</regionCode>
<zoneCode>KR-2</zoneCode>
<vpcNo>**65</vpcNo>
<vpcName>*****</vpcName>
<subnetNo>**23</subnetNo>
<subnetName>*****</subnetName>
<privateIp>***.***.***.***</privateIp>
<isPublicSubnet>false</isPublicSubnet>
<dataStorageType>
<code>SSD</code>
<codeName>SSD</codeName>
</dataStorageType>
<dataStorageSize>107374182400</dataStorageSize>
<cpuCount>4</cpuCount>
<memorySize>17179869184</memorySize>
<createDate>2021-11-09T18:55:29+0900</createDate>
</cloudHadoopServerInstance>
</cloudHadoopServerInstanceList>
</cloudHadoopInstance>
</cloudHadoopInstanceList>
</createCloudHadoopInstanceResponse>