VMware catalog creation
Published On Dec 17, 2024 - 12:57 PM

Integrate Enterprise Marketplace with VMware Aria Automation (formerly VMware vRealize Automation).
VMware Aria Automation can be integrated as a private cloud provider. After you have performed this integration, services provided by VMware can be used to create catalogs in the Enterprise Marketplace Catalog. This page covers the integration of VMware as a provider.
Integration allows you to onboard your VMware blueprints into the Enterprise Marketplace Catalog so that users can order Aria Automation blueprints. This section covers preparing Aria Automation to support the service blueprints that you provide to your users in the Enterprise Marketplace Catalog.
The service blueprints can range from a single, simple machine with no guest operating system to complex custom application stacks delivered on multiple machines under a load balancer. Depending on the service blueprints that you provide, the preparation might include configuring your environment for integration with Aria Automation and ensuring that your tenants and resources can support your environment.
After VMware is onboarded as a service provider, Aria Automation can be used to design and publish catalogs that meet the needs of your users. This configuration allows the customer to achieve automated fulfillment using the standard Aria Automation adapter.
For more information about creating catalogs, see Provider management.
VMware Aria Automation used to be called vRealize Automation (vRA). That abbreviation is used throughout the legacy code as a reference to Aria Automation.

Supported versions

The following versions of Aria Automation are supported:
  • Aria Automation 8.12.2
  • Aria Automation 8.12.1
  • Aria Automation 8.12.0

Onboarding VMware as a service provider

Before it can be used to create catalogs, VMware must be onboarded as a service provider. The following steps describe how to do so.

1. Enable the VMware provider for catalog discovery using the API

To enable the VMware provider, issue the following command:
curl -X PUT \
  https://consume_host:consumeport/catalog/v3/providers/vra \
  -H 'apikey: xxxxxxxxxxxxxxxxxxxxx' \
  -H 'content-type: application/json' \
  -H 'username: [email protected]' \
  -d '{
    "adapter": "http://cb-catalog-int-api:3959/catalog/content/vra/v1",
    "provider": "VRA",
    "discoverContent": true
  }'

2. Assign user to team with the Catalog Administrator role

The user needed for this process must have the Catalog Administrator role. If you have not done so already, assign that user to a team that has the Catalog Administrator role by completing the following steps:
  1. Navigate to User Access. To learn more about navigating to the different services from each tenant, refer to Landing page navigation or Kyndryl Bridge Landing page navigation.
  2. Select the team that you want to add the user to and click Assign.
If you do not have a team that includes the Catalog Administrator role, you can create a new team by using the following steps:
  1. Navigate to User Access. To learn more about navigating to the different services from each tenant, refer to Landing page navigation or Kyndryl Bridge Landing page navigation.
  2. Click Teams if needed.
  3. Click the Actions icon for the team that you want to add roles to and select Edit.
  4. Click the Roles tab.
  5. Click Assign Roles.
  6. In the Role Name field, select Catalog Administrator. You can enter search terms in the field to narrow down the options.
  7. Select the entity type that you want to add the role to from the Choose Entity menu.
  8. For each entity type that you select, select the proper group from the Select Team or Organization menu.
  9. Repeat steps 6 through 8 for every entity that you want to add.
  10. Click Save when you are finished.

3. Verify that the account has been created

Verify that the account has been created by executing the following API in Postman:
  • Method name: GET
  • URL: https://consume_host:consumeport/cb-credential-service/api/v2.0/serviceProviders
  • Headers:
    • Accept: application/json
    • Cache-Control: no-cache
    • Content-Type: application/json
    • apikey: API key for the system
    • username: Name of the user that you are logged in as
If you want to get the API key, navigate to the User menu and select API Key. The key is displayed in a pop-up window. To learn more about navigating to the different services from each tenant, refer to Landing page navigation or Kyndryl Bridge Landing page navigation.

4. Verify that the provider account is enabled for catalog discovery

If you want to verify that the provider account is enabled for catalog discovery, complete the following steps:
  1. On your tenant, navigate to Catalog Admin. To learn more about navigating to the different services from each tenant, refer to Landing page navigation or Kyndryl Bridge Landing page navigation.
  2. Click Start Catalog Discovery.
If you can select VMware Aria Automation, your provider account is enabled.

5. Configure policy for custom property during content pull

During the content pull, custom properties are converted to configurations based on the policy configuration. If type = @ContextPropertyDefinition is present in the property definition of a catalog item, it must be converted into a configuration of the catalog according to the policy configuration.
The policy must be uploaded before you run the pull content process.
The policy configuration looks like the following example:
{
  "customProperty": {
    "configTemplate": [
      {
        "selector": ["templateList.param2"],
        "inputType": "freetext/selectOne/numberInput",
        "values": [
          { "valueId": "", "value": "" }
        ],
        "range": {
          "min": 0,
          "max": n,
          "step": i
        },
        "configEditable": true,
        "configGroupCode": ""
      }
    ],
    "binding": [
      {
        "urn": "",
        "selector": ["templateList.param1"]
      }
    ]
  }
}
Note the following about this configuration:
  • configTemplate.selector is applied to generate a config according to the template. The selector is applied to the name of the custom property and supports regular expressions. More than one selector can be specified.
  • values is required only if inputType = selectOne.
  • range is required only if inputType = numberInput.
  • binding.urn can be present if generateConfigForParams is set to false. It takes the urn to refer to different contextual values during runtime.
The policy is not available in the system by default. It is your responsibility to upload one by using the core service API "/core/configuration/v1/configvalues/vra_pullcontent_policy". This policy is relevant to custom properties with input parameters only. Other attributes continue to function as is.
The configTemplate.selector section contains a list of contextproperties input parameters that need to be overridden. You need to alter the input types, range, and so on to match your schema.
The binding.selector section contains the list of input parameters for which configparam generation is not required. This list must be mutually exclusive with configTemplate.selector. If duplicates exist, configTemplate.selector is given preference.
If binding.urn is empty for a set of selectors, no action is taken.
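The selector rules described above can be sketched in Python. This is an illustrative helper only, not part of the product; the function name classify_property and the policy shape are assumptions based on the policy example, and it shows configTemplate.selector taking precedence over binding.selector when both match:

```python
import re

def classify_property(name, policy):
    """Illustrative only: decide how a custom property is handled during
    pull content. A configTemplate.selector match generates a config; a
    binding.selector match skips config generation; on duplicates,
    configTemplate wins, as the policy rules state."""
    custom = policy["customProperty"]
    # Selectors apply to the custom property name and support regex.
    for tpl in custom.get("configTemplate", []):
        if any(re.fullmatch(sel, name) for sel in tpl["selector"]):
            return "config"
    for b in custom.get("binding", []):
        if any(re.fullmatch(sel, name) for sel in b["selector"]):
            return "binding"
    return "none"

policy = {
    "customProperty": {
        "configTemplate": [{"selector": [r"templateList\.param2"], "inputType": "freetext"}],
        "binding": [{"urn": "", "selector": [r"templateList\.param1"]}],
    }
}
kind = classify_property("templateList.param2", policy)  # "config"
```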

6. Load config value domain dynamically during run time

If the configvaluedomain of a specific configuration for a catalog must be loaded dynamically at runtime rather than at policy import, complete the following steps:
  1. Import the catalog using the policy.
  2. Get the config of the catalog by using the following API:
    • URL: https://<host>/catalog/content/<providerCode>/<version>/services/<serviceId>/config
    • Method: GET
    • Headers: username, password (for Enterprise Marketplace)
  3. Select the specific config that needs to be loaded dynamically at runtime and make a PUT request to update the config with the following values:
    • URL: https://<host>/catalog/content/admin/<providerCode>/<version>/services/<serviceId>/config/<config Id of dynamic loaded config>
    • Method: PUT
    • Headers: username, password (for Enterprise Marketplace)
    • Body: The config object that you retrieved, along with the following parameters:
      allowedValues: {
        "type": "on-demand",
        "params": <the query parameters that need to be passed with the request (must be JSON)>,
        "resource-path": <resource path of the URL to fetch data>,
        "baseurl": <base URL of the provider>,
        "method": <the type of API call (supported methods: GET and POST)>,
        "return": {
          "id": <JSON path for the provision value>,
          "label": <JSON path for the label that is displayed to the user>
        },
        "body": <required only when the method is POST>,
        "tokenURL": <if calling the API requires an authentication token, the token URL used to fetch the token>,
        "authentication": {
          "module": <basic || vra>
        },
        "handleByAdapter": <if true, the API call to fetch the data happens through the adapter; if false, the call is made directly>
      }
Getting values for a config varies based on whether the config is dependent on another config:
  • If the config is not dependent on any other config, providing allowedValues in the previously specified format is sufficient to fetch values dynamically.
  • If the config is dependent (in a parent-child relationship) on another config, you must complete the following additional steps:
    1. Pass an additional attribute called derivedFrom that consists of an array of all the parent configs that this child config depends on (this attribute is at the allowedValues level).
    2. Reference the parent config as ${parentConfigID} in the allowedValues body if the method is POST, or in the allowedValues params section if the method is GET.
If no reference to a parent config is specified, the parentConfigID does not matter because the child config values will be the same regardless of the parent value.
The following is an example of a parent-child config value. In this example, the child config with ID "1" is generated based on a parent config with ID "2". This is coded as "derivedFrom": ["2"], and you pass the configId of the parent in the allowedValues body as a reference.
{
  "id": "1",
  "name": "SELECT TEMPLATE",
  "description": "SELECT TEMPLATE",
  "default": "Centos6.6-7.3-5GB-WithAgent",
  "binding": "$.data.CentOS.data.cloneFrom",
  "required": true,
  "sequence": 2,
  "editable": true,
  "hidden": false,
  "inputType": "selectOne",
  "group": "Group1",
  "derivedFrom": ["2"],
  "allowedValues": {
    "type": "on-demand",
    "resource-path": "/properties-service/api/propertydefinitions/cloneFrom/values",
    "baseurl": "https://10.154.23.177",
    "method": "POST",
    "return": {
      "id": "$.values..underlyingValue.value",
      "label": "$.values..underlyingValue.value"
    },
    "body": {
      "id": ${2}, // the key name can be anything that the actual REST call expects; the value should be ${parentConfigId}
      "tenantId": "vsphere.local",
      "dependencyValues": { "entries": [] },
      "associateValue": null
    },
    "tokenURL": "https://10.154.23.177/identity/api/tokens",
    "authentication": { "module": "vra" },
    "handleByAdapter": true
  },
  "errorMessage": "string"
}
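The ${parentConfigID} substitution can be illustrated with a small sketch. resolve_parent_refs is a hypothetical helper, not a product API; it replaces placeholder strings such as "${2}" in an on-demand request body with the values the user selected for the parent configs:

```python
import json
import re

def resolve_parent_refs(body, selected_values):
    """Replace "${parentConfigID}" placeholders in an allowedValues
    body with the values selected for the referenced parent configs."""
    text = json.dumps(body)
    text = re.sub(
        r'"\$\{(\w+)\}"',
        lambda m: json.dumps(selected_values[m.group(1)]),
        text,
    )
    return json.loads(text)

# Child config "1" derives from parent config "2"; the placeholder is
# replaced with whatever value the user selected for config "2".
body = {"id": "${2}", "tenantId": "vsphere.local"}
resolved = resolve_parent_refs(body, {"2": "Centos6.6-7.3-5GB-WithAgent"})
```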

Onboarding catalogs

After you have created your catalogs, they need to be uploaded into the tenant. Generally, you should do this by using the Catalog Sync feature, because this technique minimizes human intervention and therefore the chance of error. You can also onboard the catalogs manually. Both techniques are explained here:

Onboarding catalogs using Catalog Sync

If you want to onboard your catalogs using the Catalog Sync feature, complete the following steps:
  1. Navigate to the Catalog page at https://consumehost:consumeport/catalogAdmin.
  2. Click Start Catalog Discovery.
  3. Select the provider that you want to add, in this case Private Cloud.
    The Started Catalog Discovery Successfully window is displayed. After the discovery is complete, the message Last Discovered <timestamp> is displayed in the Catalog Admin window.
  4. Verify that the VMware blueprints defined on your Aria Automation instance appear on the page. The blueprints will be visible in Draft state.
    There are three stages of an onboarded catalog's lifecycle:
    • Draft: The catalog was just discovered.
    • Work in Progress: The catalog has been edited.
    • Published: The catalog has been made available to users in Enterprise Marketplace. Do not publish a catalog without uploading pricing for it first.
  5. Edit an offering to publish it.
  6. Confirm that all blueprint attributes are listed in the Select Configurable Attributes section.
    The Catalog Admin edit page for the catalog displays the same attributes that are defined in the VMware blueprint.
  7. Select the appropriate category from the drop-down menu.
  8. Upload the pricing file.
  9. Click Enable support for multi-quantity.
  10. Select the maximum quantity that the user is allowed to select by moving the slider.
  11. Select the attributes that will be displayed in the Enterprise Marketplace.
    Attributes that are defined as mandatory on the VMware blueprint are set to Display in Enterprise Marketplace automatically. You must add any other attributes manually.
  12. Click Save to change the status of the catalog to Work in Progress.
  13. Verify that the status of the catalog is Work in Progress and that the draft version is not listed in the table.
  14. Complete further configuration and click Publish.
    The Publish Successful window is displayed.
  15. Verify that the new catalog appears as available to order from the Catalog page in Enterprise Marketplace.
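The three lifecycle stages above can be summarized as a simple state machine. This sketch is illustrative only; the stage names come from the list above, but the transition rules are an assumption based on the steps in this procedure (edit and save to reach Work in Progress, then publish):

```python
# Illustrative sketch of the catalog lifecycle stages described above.
ALLOWED_TRANSITIONS = {
    "Draft": {"Work in Progress"},      # saving an edited catalog
    "Work in Progress": {"Published"},  # publishing after pricing is uploaded
    "Published": set(),
}

def can_transition(current, target):
    """Return True if the catalog may move from `current` to `target`."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```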

Onboarding catalogs manually

Use this method only in unusual circumstances, such as when you want to quickly onboard a new blueprint without rerunning catalog sync. Because Catalog Sync has a lower chance of error, it is generally preferred, even when onboarding a single blueprint.
If you want to onboard catalogs manually, complete the following steps:
  1. Upload the config value domains used in the catalog configs. Config value domains are the values that appear in the drop-down menu of a config; the corresponding IDs are referenced by the config.
    POST https://<consumehost:consumeport>/catalog/content/admin/vra/v1/configvaluedomain
    Use the POST command with the following JSON file:
    [ { "id": "CPU-1-CENTOS", "value": "1", "provisionValue": "1" }, { "id": "CPU-2-CENTOS", "value": "2", "provisionValue": "2" }, { "id":"MEM-512-CENTOS", "value": "512", "provisionValue": "512" }, { "id":"MEM-1024-CENTOS", "value": "1024", "provisionValue": "1024" }, { "id":"MEM-2048-CENTOS", "value": "2048", "provisionValue": "2048" }, { "id":"MEM-3072-CENTOS", "value": "3072", "provisionValue": "3072" }, { "id":"MEM-4096-CENTOS", "value": "4096", "provisionValue": "4096" }, { "id": "HD-8-CENTOS", "value": "8", "provisionValue": "8" } ]
    The corresponding
    curl
    command is as follows:
    curl -X POST \
      https://<consumehost:consumeport>/catalog/content/admin/vra/v1/configvaluedomain \
      -H 'apikey: xxxxxxxxxxxxxxxx' \
      -H 'content-type: application/json' \
      -H 'username: xxxxxxxxxxxxxxx' \
      -d '[ { "id": "CPU-1-CENTOS", "value": "1", "provisionValue": "1" }, { "id": "CPU-2-CENTOS", "value": "2", "provisionValue": "2" }, { "id":"MEM-512-CENTOS", "value": "512", "provisionValue": "512" }, { "id":"MEM-1024-CENTOS", "value": "1024", "provisionValue": "1024" }, { "id":"MEM-2048-CENTOS", "value": "2048", "provisionValue": "2048" }, { "id":"MEM-3072-CENTOS", "value": "3072", "provisionValue": "3072" }, { "id":"MEM-4096-CENTOS", "value": "4096", "provisionValue": "4096" }, { "id": "HD-8-CENTOS", "value": "8", "provisionValue": "8" } ]'
  2. Execute the following REST API to onboard the service.
    POST https://<consumehost:port>/catalog/content/admin/vra/v1/services
    Use the following request body:
    [ { "id": "SingleVM-CentOS", "refid": "f98f1cf1-8633-4a59-9a38-4daadef9ae09", "name": "SingleVM-CentOS", "description": "SingleVM-CentOS", "descriptionHTML": "SingleVM-CentOS", "isEditable":true, "tenantId": "vsphere.local", "serviceType": "Composite Blueprint", "labels": [ "vsphere.local", "Deployment" ], "version": 0, "status": "PUBLISHED", "updatedAt": "2017-09-14T07:24:53.639Z", "serviceCategory": [ { "id": "compute", "name": "Compute", "description" : "Compute" } ], "provider": {"name":"vra","code":"vra"}, "terms": "string" } ]
    The corresponding
    curl
    command looks like the following:
    curl -X POST \
      https://<consumehost:consumeport>/catalog/content/admin/vra/v1/services \
      -H 'apikey: xxxxxxxxxx' \
      -H 'content-type: application/json' \
      -H 'username: xxxxxxxxxxxxxxxxxxxx' \
      -d '[ { "id": "SingleVM-CentOS", "refid": "f98f1cf1-8633-4a59-9a38-4daadef9ae09", "name": "SingleVM-CentOS", "description": "SingleVM-CentOS", "descriptionHTML": "SingleVM-CentOS", "facets": [ { "key": "location", "values": [ "London" ] } ], "isEditable":true, "tenantId": "vsphere.local", "serviceType": "Composite Blueprint", "labels": [ "vsphere.local", "Deployment" ], "version": 0, "status": "PUBLISHED", "updatedAt": "2017-09-14T07:24:53.639Z", "serviceCategory": [ { "id": "compute", "name": "Compute", "description" : "Compute" } ], "provider": {"name":"vra","code":"vra"}, "terms": "string" } ]'
  3. Upload the configs by using the static drop-down menu or the on-demand API:
    • To upload configs with the static drop-down menu, execute the following REST API. Configs are the corresponding attributes in VMware blueprints.
      POST https://<consumehost:consumeport>/catalog/content/admin/vra/v1/services/<serviceid>/config
      The service ID is the ID used in the body of the upload service REST API. Execute the API with the following request body:
      [ { "id": "1", "name": "CPU", "description": "CPU", "default": "1", "binding": "$.data.CentOS70.data.cpu", "required": true, "editable": true, "hidden": false, "inputType": "selectOne", "group": "string", "domainValueIds": [ "CPU-1-CENTOS", "CPU-2-CENTOS" ], "errorMessage": "string" }, { "id": "2", "name": "Memory (MB)", "description": "Memory (MB)", "default": "512", "binding": "$.data.CentOS70.data.memory", "required": true, "editable": true, "hidden": false, "inputType": "selectOne", "group": "string", "domainValueIds": [ "MEM-512-CENTOS", "MEM-1024-CENTOS", "MEM-2048-CENTOS", "MEM-3072-CENTOS", "MEM-4096-CENTOS" ], "errorMessage": "string" }, { "id": "3", "name": "Storage (GB)", "description": "Storage (GB)", "default": "8", "binding": "$.data.CentOS70.data.storage", "required": true, "editable": true, "hidden": false, "inputType": "selectOne", "group": "string", "domainValueIds": [ "HD-8-CENTOS" ], "errorMessage": "string" } ]
      The corresponding
      curl
      command is the following:
      curl -X POST \
        https://<consumehost:consumeport>/catalog/content/admin/vra/v1/services/SingleVM-CentOS/config \
        -H 'apikey: xxxxxxxxxxxxxxxxxxxxxx' \
        -H 'content-type: application/json' \
        -H 'username: xxxxxxxxxxxxxxx' \
        -d '[ { "id": "1", "name": "CPU", "description": "CPU", "default": "1", "binding": "$.data.CentOS70.data.cpu", "required": true, "editable": true, "hidden": false, "inputType": "selectOne", "group": "string", "domainValueIds": [ "CPU-1-CENTOS", "CPU-2-CENTOS" ], "errorMessage": "string" }, { "id": "2", "name": "Memory (MB)", "description": "Memory (MB)", "default": "512", "binding": "$.data.CentOS70.data.memory", "required": true, "editable": true, "hidden": false, "inputType": "selectOne", "group": "string", "domainValueIds": [ "MEM-512-CENTOS", "MEM-1024-CENTOS", "MEM-2048-CENTOS", "MEM-3072-CENTOS", "MEM-4096-CENTOS" ], "errorMessage": "string" }, { "id": "3", "name": "Storage (GB)", "description": "Storage (GB)", "default": "8", "binding": "$.data.CentOS70.data.storage", "required": true, "editable": true, "hidden": false, "inputType": "selectOne", "group": "string", "domainValueIds": [ "HD-8-CENTOS" ], "errorMessage": "string" } ]'
    • You can also upload config values by using the on-demand API, which is an external API that you can call to fetch the possible values for the config. The following is an example dynamic pricing JSON.
      { "id":"data-inCatalogItemName", "name":"Select Template", "description":"Select Template", "required":true, "default":"", "editable":true, "group":"Dynamic", "binding":"$.data.inCatalogItemName", "pattern":" ", "isSelected":true, "hidden":false, "inputType":"freetext", "derives":[ ], "allowedValues":{ "type":"on-demand", "resource-path":"/properties-service/api/propertydefinitions/cloneFrom/values", "baseurl":"https://10.154.23.177", "method":"POST", "return":{ "id":"$.values..underlyingValue.value", "label":"$.values..underlyingValue.value" }, "body":{ "tenantId":"vsphere.local", "dependencyValues":{ "entries":[ ] }, "associateValue":null }, "tokenURL":"https://10.154.23.177/identity/api/tokens", "authentication":{ "module":"vra" }, "handleByAdapter":true }, "errorMessage":"string" }
The most important part of the previous sample JSON is the allowedValues property. It includes the following parameters:
  • The "baseurl" and "resource-path" together form the API to be called.
  • The "method" is generally GET or POST.
  • "return" is the JSON path of the property from the response JSON.
  • The field "type" should be "on-demand-api".
  • The URL also supports query parameters through a field called "params", which can contain only key-value pairs.
  • The field "authentication" contains "module" and "providerAccountId" for authentication purposes.
  • If the field "handleByAdapter" is set to true, then the external API call is handled by the adapter itself.
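The recursive-descent paths used in "return" (for example, "$.values..underlyingValue.value") can be illustrated with a minimal extractor. This is a simplified stand-in for a real JSONPath library, written only to show how the id and label values are pulled from a response:

```python
def collect(obj, key):
    """Recursively collect every value stored under `key` anywhere in
    `obj`, mimicking the $..key recursive-descent pattern."""
    found = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                found.append(v)
            found.extend(collect(v, key))
    elif isinstance(obj, list):
        for item in obj:
            found.extend(collect(item, key))
    return found

# Response shape assumed by "$.values..underlyingValue.value".
response = {"values": [
    {"underlyingValue": {"value": "CentOS65"}},
    {"underlyingValue": {"value": "CentOS70"}},
]}
labels = [u["value"] for u in collect(response, "underlyingValue")]
```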

Creating pricing

After you have uploaded the blueprints, you must upload the pricing for each of them into the Enterprise Marketplace Catalog. That process is explained here.

Available pricing models

The following pricing models are available for VMware:
  • Price calculation based on configuration:
    In this pricing model, the price is calculated based on a Base Price and chargeAdd prices based on the amount of CPU, memory, and storage that your customers use during a specific time period. For example, the chargeAdd prices could be calculated based on the following rates:
    • Compute Rate per vCPU Daily ($)
    • Compute Rate per GB RAM Daily ($)
    • Storage Rate per GB daily ($)
    The Estimated Cost is calculated based on the customer's use of CPU, memory, and storage as monthly recurring charges (chargeAdd) plus any one-time charges (Base Price).
  • Service Offering based pricing:
    In this pricing model, the blueprint price is a static charge, encoded as the Base Price, that is charged at a specific interval. The Estimated Cost is calculated using the Base Price plus any One-time charges.
  • RU based pricing:
    In this pricing model, the cost of each blueprint is based on a Request Unit (RU), which is a standardized size of deployment. Each blueprint can have multiple RUs, such as S, M, L, and XL. The cost for each blueprint is calculated based on the amount of each resource as defined by the RU and the price for that resource. Because the RUs are standardized, this amount can be pre-calculated.
  • Leased based pricing for RUs:
    In this pricing model, the pricing is based on standardized RUs that are charged at a set interval, such as per day, per month, or per year. There can be multiple levels of RUs for a blueprint. The estimated cost is computed based on the end date of the catalog configured with the leased price, plus any one-time charge. This model is supported with Custom Pricing using an External Pricing API.
  • RU Based Pricing - Complex Yearly:
    In this pricing model, each catalog is constructed using one or multiple RUs. For example, a user can order Cloud Technology Stacks (CTS) on the vRealize provider (CTS = VMware blueprint), or multiple RUs from a traditional IT provider, such as mainframe CPU, network device, middleware, and so on. All the RUs have prices defined per year. For example, the price for a mainframe CPU is defined for 2017, 2018, 2019, and so on. These prices decrease every year for a component, taking inflation into account. There are two charges defined for every RU:
    • Additional Resource Charges (ARC):
      For utilization of any Resource Type above the relevant Resource Baseline (Additional RU), an additional resource unit charge (ARC) is charged for each Additional RU.
    • Reducing Resource Charges (RRC):
      For utilization of any Resource Type below the relevant Resource Baseline (Reduced RU), a reduced RRC is issued by the supplier for each Reduced RU.
    This is supported with Custom Pricing using an External Pricing API.
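As a worked example of the first model (price calculation based on configuration), the estimated cost is a one-time base price plus per-unit recurring charges. The rates below are made-up illustration values, not product defaults:

```python
# Hypothetical daily rates for the configuration-based pricing model.
RATE_PER_VCPU_DAILY = 0.50        # Compute Rate per vCPU Daily ($), example value
RATE_PER_GB_RAM_DAILY = 0.10      # Compute Rate per GB RAM Daily ($), example value
RATE_PER_GB_STORAGE_DAILY = 0.02  # Storage Rate per GB daily ($), example value
BASE_PRICE = 10.0                 # one-time charge

def estimated_cost(vcpus, ram_gb, storage_gb, days=30):
    """One-time base price plus recurring chargeAdd costs for the period."""
    charge_add = days * (vcpus * RATE_PER_VCPU_DAILY
                         + ram_gb * RATE_PER_GB_RAM_DAILY
                         + storage_gb * RATE_PER_GB_STORAGE_DAILY)
    return BASE_PRICE + charge_add

cost = estimated_cost(vcpus=2, ram_gb=4, storage_gb=50)  # 10 + 30 * 2.4 = 82.0
```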

Uploading a static price

If you want to upload a static price, you need to create a pricing JSON file. This file includes the following parameters:
  • id (string, required): A unique string that identifies the file in the system.
  • pricingPlan (string, required; default: Hourly): The frequency at which the price is charged (Hourly or Monthly).
  • serviceId (string, required): The ID for the catalog (which is part of your URL). This ID is the same for all the configs in the catalog.
  • offer (object, required): Indicates that an offer object is included.
  • offer.name (string, required): Provides a name for the offer object.
  • offer.description (string, required): Provides a more detailed description of the offer object.
  • offer.fareStrategy (string, required): Either "FIXED" or "RATE", depending on whether the price varies based on the value of the configuration.
  • offer.configValues (object, required): Object that includes configId, uom, and value.
  • configValues.configId (string, required): Provides the config ID of the configuration.
  • configValues.uom (Boolean, optional): Set this to true if the configuration is value dependent.
  • configValues.value (string, required): Sets the value for the configuration.
  • price (object, required): Object that includes any one-time charge, the usage charge or recurring charge as applicable, and the currency code.
  • price.oneTimeCharge: Sets the one-time charge, which is typically the setup cost of the catalog.
  • price.description (string): Provides a more detailed description for the charge.
  • price.currencyCode (string): Sets the currency in use, such as USD for US Dollar.
  • price.usageCharge (object, optional): Sets the usage-based charge, which describes the per-unit price for a given frequency. For example, 20 is chargeable per GB per month: "usageCharge": { "uom": { "value": 1, "code": "GB" }, "frequency": { "value": 1, "code": "MONTH" }, "value": 20 }
  • price.recurringCharge (object, optional): Sets a recurring cost incurred for a config that does not depend on the value selected for the config, so uom is not needed. The frequency can be monthly or hourly, such as 10 per month: "recurringCharge": { "value": 10, "frequency": { "value": 1, "code": "MONTH" } }
  • basePrice (Boolean, optional): Can be true or false, depending on whether the catalog includes a base price.
The following is a sample pricing JSON file.
[ { "id": "CentOS65-00SSAS3", "pricingPlan": "Monthly", "serviceId": "SystemConfig_QA74", "offer": { "name": "CPU", "description": "Pricing for CPU", "fareStrategy": "RATE", "configValues": [ { "configId": "1", "uom":true, "value": "1" } ] }, "price": { "oneTimeCharge": 10, "description": "CPU Price", "usageCharge": { "uom": { "value": 1, "code": "SIZE" } , "frequency": { "value": 1, "code": "MONTH" } , "value": 20 }, "currencyCode": "USD" }, "basePrice": true }, { "id": "CentOS65-00SSMQ2", "pricingPlan": "Monthly", "serviceId": "SystemConfig_QA74", "offer": { "name": "MEMORY", "description": "Pricing for Memory", "fareStrategy": "RATE", "configValues": [ { "configId": "2", "uom":true, "value": "512" } ] }, "price": { "oneTimeCharge": 10, "description": "RAM Price", "usageCharge": { "uom": { "value": 512, "code": "SIZE" } , "frequency": { "value": 512, "code": "MONTH" } , "value": 0.5 }, "currencyCode": "USD" }, "basePrice": false }, { "id": "CentOS65-00SSMQ1", "pricingPlan": "Monthly", "serviceId": "SystemConfig_QA74", "offer": { "name": "Storage", "description": "Pricing for Storage", "fareStrategy": "RATE", "configValues": [ { "configId": "3", "uom":true, "value": "5" } ] }, "price": { "oneTimeCharge": 10, "description": "Storage Price", "usageCharge": { "uom": { "value": 5, "code": "SIZE" } , "frequency": { "value": 5, "code": "MONTH" } , "value": 10 }, "currencyCode": "USD" }, "basePrice": false } ]
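Before uploading, you can sanity-check each entry of the pricing file against the required keys listed above. validate_pricing_entry is a hypothetical helper for illustration, not a product tool:

```python
REQUIRED_TOP_LEVEL = ["id", "pricingPlan", "serviceId", "offer", "price"]
REQUIRED_OFFER = ["name", "description", "fareStrategy", "configValues"]

def validate_pricing_entry(entry):
    """Return a list of problems found in one pricing entry."""
    problems = [k for k in REQUIRED_TOP_LEVEL if k not in entry]
    offer = entry.get("offer", {})
    problems += ["offer." + k for k in REQUIRED_OFFER if k not in offer]
    if offer.get("fareStrategy") not in (None, "FIXED", "RATE"):
        problems.append("offer.fareStrategy must be FIXED or RATE")
    return problems

entry = {
    "id": "CentOS65-00SSAS3",
    "pricingPlan": "Monthly",
    "serviceId": "SystemConfig_QA74",
    "offer": {"name": "CPU", "description": "Pricing for CPU",
              "fareStrategy": "RATE",
              "configValues": [{"configId": "1", "uom": True, "value": "1"}]},
    "price": {"oneTimeCharge": 10, "currencyCode": "USD"},
}
problems = validate_pricing_entry(entry)  # [] -> the entry is well formed
```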

Manual upload by calling the API

You can use several different methods to upload the pricing JSON. If you want to upload the file manually by calling the API, complete the following steps:
  1. Upload the pricing JSON for your catalog by using a curl command or Postman with the following parameters:
    • Method: POST
    • API: https://consumehost:consumeport/catalog/content/admin/vra/v1/services/<serviceId>/offeredservices
    • Headers:
      • Content-Type: application/json
      • apikey: XXXXX-XXXX-XX-XXXXXX-XXXXXXXXXX
      • Accept: application/json
    The following example shows the sample curl command:
    curl -X POST \
      https://myminikube.info:300913/catalog/content/admin/vra/v1/services/SystemConfig_QA74/offeredservices \
      -H 'accept: application/json' \
      -H 'apikey: XXXXX-XXXX-XX-XXXXXX-XXXXXXXXXX' \
      -H 'username: [email protected]' \
      -d '@<name of JSON file>'
  2. If you are using Postman, click the Body tab. The pane should show the contents of your pricing JSON. Select Raw and click Send.
  3. Refresh your Enterprise Marketplace window and make sure that the base price is visible.

Uploading the pricing JSON using catalog sync/pull content

Before you can use this method, you need to run catalog sync. For more information, see VMware Aria Automation integration. After you have onboarded the blueprints, the catalogs are available without price (price as N/A) in the Draft status. If you want to upload the pricing JSON, complete the following steps:
  1. Discover where the catalog is by invoking the following API and finding the ID of your catalog in the output:
    • Method: GET
    • API: https://consumehost:consumeport/catalog/content/vra/v1/services?status=draft
    • Headers:
    The following is a sample of the curl command:
    curl -X GET \
      'https://cb-customer1-api.gravitant.net/catalog/content/vra/v1/services?status=draft' \
      -H 'apikey: XXXXXXXXXXXXXXXXXX' \
      -H 'username: [email protected]'
  2. Use the ID from the last step to call the following API to fetch the config for your service:
    • Method: GET
    • API: https://consumehost:consumeport/catalog/content/vra/v1/services/<id>/config
    • Headers:
    The following example is a sample response from this API:
    { "configs":[ { "id":"data-CentOS65-data-memory", "name":"Memory (MB)", "description":"The amount of RAM allocated to the machine in megabytes.", "required":false, "default":1024, "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data.memory", "pattern":null, "isSelected":false, "hidden":false, "inputType":"integer", "derives":[ ], "allowedValues":[ ] }, { "id":"data-CentOS65-data-description", "name":"Description", "description":"Description", "required":false, "default":"Testing vRealize 7.2 with vRA 1.1 build", "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data.description", "pattern":null, "isSelected":false, "hidden":false, "inputType":"freetext", "derives":[ ], "allowedValues":[ ] }, { "id":"data-CentOS65-data-storage", "name":"Storage (GB)", "description":"Amount of total storage in gigabytes.", "required":false, "default":7, "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data.storage", "pattern":null, "isSelected":false, "hidden":false, "inputType":"integer", "derives":[ ], "allowedValues":[ ] }, { "id":"data-CentOS65-data-_hasChildren", "name":"Has Children", "description":"Indicates if current component is serving as a container to other components in the blueprint.", "required":false, "default":false, "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data._hasChildren", "pattern":null, "isSelected":false, "hidden":false, "inputType":"boolean", "derives":[ ], "allowedValues":[ ] }, { "id":"data-CentOS65-data-image", "name":"Image", "description":"Configure build information for vSphere machine components in the blueprint.", "required":true, "default":"ValueSet.CentOS66", "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data.image", "pattern":null, "isSelected":true, "hidden":false, "inputType":"selectOne", "derives":[ ], "allowedValues":[ { "id":"ValueSet.CentOS66", "value":"CentOS65", "provisionValue":"ValueSet.CentOS66" } ] }, { "id":"data-CentOS65-data-cpu", 
"name":"CPUs", "description":"Number of virtual CPUs.", "required":false, "default":3, "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data.cpu", "pattern":null, "isSelected":false, "hidden":false, "inputType":"integer", "derives":[ ], "allowedValues":[ ] }, { "id":"data-CentOS65-data-_cluster", "name":"_cluster", "description":null, "required":false, "default":1, "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data._cluster", "pattern":null, "isSelected":false, "range":{ "min":1, "max":5 }, "hidden":false, "inputType":"numberinput", "derives":[ ], "allowedValues":[ ] }, { "id":"data-CentOS65-data-size", "name":"Size", "description":"Configure CPU, memory, and storage sizing for vSphere machine components in the blueprint.", "required":true, "default":"ValueSet.large", "editable":true, "group":"CentOS65", "binding":"$.data.CentOS65.data.size", "pattern":null, "isSelected":true, "hidden":false, "inputType":"selectOne", "derives":[ ], "allowedValues":[ { "id":"ValueSet.large", "value":"large", "provisionValue":"ValueSet.large" }, { "id":"ValueSet.medium", "value":"medium", "provisionValue":"ValueSet.medium" }, { "id":"ValueSet.small", "value":"small", "provisionValue":"ValueSet.small" } ] }, { "id":"data-_leaseDays", "name":"Lease days", "description":"Indicates for how many days the deployed blueprint will be leased.", "required":false, "default":1, "editable":true, "group":null, "binding":"$.data._leaseDays", "pattern":null, "isSelected":false, "range":{ "min":1, "max":1 }, "hidden":false, "inputType":"numberinput", "derives":[ ], "allowedValues":[ ] } ] }
  3. In the pricing JSON file that you created, update the configId in each configuration with the ID value that you discovered in the previous step. Keep serviceId as "" (empty string) and ensure that each ID inside your JSON is unique so that the pricing is linked to the correct config. Pricing with an incorrect configId is not displayed in the Enterprise Marketplace Catalog.
    The following example shows the updated sample pricing:
    [{ "id": "CentOS65-00SSAS3", "pricingPlan": "Monthly", "serviceId": "", "offer": { "name": "CPU", "description": "Pricing for CPU", "fareStrategy": "RATE", "configValues": [ { "configId": "data-CentOS65-data-cpu", "uom":true, "value": "1" } ] }, "price": { "oneTimeCharge": 10, "description": "CPU Price", "usageCharge": { "uom": { "value": 1, "code": "SIZE" } , "frequency": { "value": 1, "code": "MONTH" } , "value": 20 }, "currencyCode": "USD" }, "basePrice": true }, { "id": "CentOS65-00SSMQ2", "pricingPlan": "Monthly", "serviceId": "", "offer": { "name": "MEMORY", "description": "Pricing for Memory", "fareStrategy": "RATE", "configValues": [ { "configId": "data-CentOS65-data-memory", "uom":true, "value": "512" } ] }, "price": { "oneTimeCharge": 10, "description": "RAM Price", "usageCharge": { "uom": { "value": 512, "code": "SIZE" } , "frequency": { "value": 512, "code": "MONTH" } , "value": 0.5 }, "currencyCode": "USD" }, "basePrice": false }, { "id": "CentOS65-00SSMQ1", "pricingPlan": "Monthly", "serviceId": "", "offer": { "name": "Storage", "description": "Pricing for Storage", "fareStrategy": "RATE", "configValues": [ { "configId": "data-CentOS65-data-storage", "uom":true, "value": "5" } ] }, "price": { "oneTimeCharge": 10, "description": "Storage Price", "usageCharge": { "uom": { "value": 5, "code": "SIZE" } , "frequency": { "value": 5, "code": "MONTH" } , "value": 10 }, "currencyCode": "USD" }, "basePrice": false } ]
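The cross-check in step 3 can be scripted. The following is a minimal sketch that uses the shapes from the samples above; the function name and the inline sample data are illustrative, not part of the product:

```python
def find_pricing_errors(configs, pricing):
    """Return the problems that would stop prices from displaying:
    duplicate pricing IDs and configIds that do not exist in the catalog."""
    config_ids = {c["id"] for c in configs}
    errors = []
    seen = set()
    for entry in pricing:
        if entry["id"] in seen:
            errors.append(f"duplicate pricing id: {entry['id']}")
        seen.add(entry["id"])
        for cv in entry["offer"]["configValues"]:
            if cv["configId"] not in config_ids:
                errors.append(f"unknown configId: {cv['configId']}")
    return errors

# Trimmed-down versions of the configs response and pricing file above.
configs = [{"id": "data-CentOS65-data-cpu"}, {"id": "data-CentOS65-data-memory"}]
pricing = [
    {"id": "CentOS65-00SSAS3", "serviceId": "",
     "offer": {"configValues": [{"configId": "data-CentOS65-data-cpu"}]}},
    {"id": "CentOS65-00SSMQ2", "serviceId": "",
     "offer": {"configValues": [{"configId": "data-CentOS65-data-disk"}]}},
]
print(find_pricing_errors(configs, pricing))
# → ['unknown configId: data-CentOS65-data-disk']
```

Running this before the upload step catches mismatched configIds, which otherwise only surface as prices silently missing from the Catalog.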
  4. Click the
    Actions
    icon for your catalog and select
    Edit
    .
  5. Select the appropriate category for your catalog and click
    Upload
    .
  6. Browse to the pricing file that you saved and upload it, ensuring that the file is valid JSON and has a .json extension.
  7. Click the
    Actions
    icon for each config for which you have uploaded the price and select
    Display in Enterprise Marketplace
    .
  8. Click
    Save
    .
    After you save the catalog, the status will change to Work in Progress.
  9. Click the
    Actions
    icon and select
    Publish
    .
After receiving a success response, check your catalog under the category you selected to make sure that the pricing is visible.

Uploading a dynamic price

Before you can upload a dynamic price, you need to have an external pricing adapter running and accessible from your system. Then, you need to create a pricing JSON file. This file includes the following parameters:
  • id (string; required): A unique string that identifies the file in the system.
  • serviceId (string; required): The service ID for the catalog (part of the catalog URL); it is the same for all the configs in the catalog.
  • offer (object; required): An object that includes the name.
  • offer.name (string; required): The name of the config.
  • pricingfn (object; required): An object that includes the resource-path, baseurl, body, authentication, and so on.
  • pricingfn.resource-path (string; required): The endpoint of the external pricing adapter. This can be any API URI, including variables that refer to a config ID as a path variable. Anything starting with ${} refers to the ID of a config; during the API call, it is replaced by the runtime value of that config, which is passed as payload by the price API. The variables can be used in the params, the body, and the return. During the API call, the provider account ID, user ID, team ID, and so on are passed to the API as part of the header; specifically, whatever Enterprise Marketplace custom headers are added by the Catalog and the content server are sent as part of the header.
  • pricingfn.method (string; required): Indicates which HTTP method is used.
  • pricingfn.body (object; required): The object that holds all references to configIds. These IDs are replaced with their corresponding values before the call is made.
  • pricingfn.authentication (object; required): An object that holds the type of authentication supported and the provider account to use for it.
  • authentication.module (string; required): Currently, Enterprise Marketplace supports only basic authentication.
  • authentication.providerAccountId (string; optional): If this parameter is provided, authentication is done against the credentials of this account. This value overrides the providerAccountId provided in the header.
  • pricingfn.handleByAdapter (Boolean; required): If this parameter is set to false, the system fetches the price from the external pricing adapter; otherwise, the price is fetched from inside Enterprise Marketplace. The pricing API is called either by the content server or by the catalog adapter, based on this flag.
The pricing of a catalog can be either static or dynamic.
The following example shows a sample pricing JSON for a dynamic price:
[ { "id": "CentOS65-00SSAS3", "serviceId": "CentOS65-0a2", "offer": { "name": "CPU" }, "pricingfn": { "resource-path": "/catalog/content/price", "method": "POST", "baseurl": "http://localhost:3005", "body": { "1": "${1}", "2": "${2}", "3": "${3}", "4": "${4}", "5": "${5}", "6": "${6}", "config_123457": "${systemConfigs::config_123457}" }, "authentication": { "module": "basic", "providerAccountId": "" }, "handleByAdapter": false } } ]
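The ${} substitution described for pricingfn can be illustrated with a short sketch. The resolution logic shown here is a simplified assumption about the behavior, limited to plain ${configId} references:

```python
import re

def resolve_placeholders(template: str, values: dict) -> str:
    """Replace each ${name} in a string with the runtime value of that config."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: str(values[m.group(1)]), template)

# Hypothetical runtime config values passed as payload by the price API.
runtime = {"cpu": 4, "memory": 2048}

# Substitution in the resource-path...
assert resolve_placeholders("/price/${cpu}/${memory}", runtime) == "/price/4/2048"

# ...and the same substitution applied to every string in the body object.
body = {"cpu": "${cpu}", "memory": "${memory}"}
resolved = {k: resolve_placeholders(v, runtime) for k, v in body.items()}
print(resolved)  # {'cpu': '4', 'memory': '2048'}
```

Scoped references such as ${systemConfigs::config_123457} follow the same replace-at-call-time pattern, with the name resolved from the system configuration instead of the user's config values.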
If you want to upload a pricing JSON for a dynamic price, call the following API:
  • Method:
    POST
  • API:
    https://consumehost:consumeport/catalog/content/admin/vra/v1/services/<serviceId>/offeredservices
The following example shows the
curl
command:
curl -X POST \ https://consumehost:consumeport/catalog/content/admin/vra/v1/services/<svcid>/offeredservices \ -H 'apikey: consumeapikey' \ -H 'content-type: application/json' \ -H 'username: consumeusername' \ -d '[{ "id": "New External dynamic-pricing", "serviceId": "83bf8d70-cb5f-11e7-86ae-9d02361394c2", "offer": { "name": "Linux Virtual service4" }, "pricingfn": { "resource-path": "/catalog/content/price", "method": "POST", "baseurl": "EndpointofDynamicPriceFunction", "body": { "cpu": "${cpu}", "memory": "${memory}" }, "authentication": { "module": "basic", "providerAccountId": "" }, "handleByAdapter": false } }]'

Day2Ops

After you upload catalogs into Enterprise Marketplace, they transition to Day 2 Operations (Day2Ops), the management and monitoring phase of Enterprise Marketplace. Actions that are configured for the change management process can be linked to the change management workflows defined in IBM Information Technology Service Management (ITSM) or to a custom workflow in VMware vRealize Orchestrator (vRO). These actions can be seen and used by the user in the
Service Inventory
window. For more information, see Provider management.
Day2Ops can be fulfilled by the following methods:
  • Standard provider API (available out of the box)
  • Custom VMware APIs
  • External API (can invoke a new UI or custom API as needed)
  • ITSM or MSOP SR details such as ServiceNow SR details
Enterprise Marketplace can now onboard custom Day2Ops for a catalog. The onboarding operation involves the following steps:
  1. Upload operation definition.
  2. Upload operation template.
  3. Upload operation config definition, if any.
  4. Upload price of operation, if any.
  5. Subscribe the operation endpoint, if fulfillment is by a custom adapter or if the default VMware adapter installation script is not used.
The Enterprise Marketplace Admin can add operations based on the catalog items and can also invoke change management workflows. Various methods are available to integrate operations into Aria Automation; this section covers vRO-based actions as custom operations. The following example explains how to add a custom action (Add Disk to a Virtual Machine) as a custom Day2Ops operation for Aria Automation. The custom action is created by using a vRO workflow, and the example can be used as a reference for developing custom vRO-based Day2Ops for Aria Automation.
Before you can perform these steps, you need to enable Day2Ops for catalogs onboarded to Enterprise Marketplace.

Sample workflow for adding a disk

The following is the workflow that you will be creating in this example:
Begin > addDisk > vlm3WaitTaskEnd > waitForTaskAndUpdate
This sample workflow for the custom action AddDisk is designed to add additional disks to a virtual machine (VM). It requires the following scripts:
  • addDisk
    var allVms = VcPlugin.getAllVirtualMachines(); var virtualMachine = ""; System.log("virtualMachineName:"+virtualMachineName); // Check if the VM match the regexp for (var i in allVms) { if (allVms[i].name.equals(virtualMachineName)) { virtualMachine = allVms[i]; System.log("virtualMachine for given name:"+virtualMachine); break; } } var vcacHost = Server.findAllForType("vCAC:VCACHost")[0]; System.log("vcacHost:"+vcacHost); var vcacvm= Server.findAllForType("vCAC:VirtualMachine", "VirtualMachineName eq '"+virtualMachineName+"'")[0]; var vcacEntity = vcacvm.getEntity(); System.log("vcacvm:"+vcacvm); var datastores = virtualMachine.getDatastore(); var fileName = ""; for each (var datastore in datastores) { System.log("datastore:"+datastore); System.log("datastore value:"+datastore.name); fileName = "["+datastore.name+"]"; } var devices = virtualMachine.config.hardware.device; for each (controller in devices) { var is_scsi = controller instanceof VcVirtualBusLogicController || controller instanceof VcVirtualLsiLogicController || controller instanceof VcParaVirtualSCSIController || controller instanceof VcVirtualLsiLogicSASController; if (!is_scsi) { continue; } var controller_label = controller.deviceInfo.label; System.log("SCSI controller found: " + controller_label); for each (device in devices) { if (device.controllerKey == controller.key) { var scsi_id = controller.busNumber + ":" + device.unitNumber; System.log(" device found: '" + device.deviceInfo.label + "' 'SCSI (" + scsi_id + ")'"); System.log("device controllerkey:"+device.controllerKey); } } } System.log("diskUnitNumber:"+diskUnitNumber); // Create Disk BackingInfo var diskBackingInfo = new VcVirtualDiskFlatVer2BackingInfo(); diskBackingInfo.diskMode = "persistent"; System.log("fileName no disk backing:"+fileName); diskBackingInfo.fileName = fileName; diskBackingInfo.thinProvisioned = true; var diskSize = 1048576 * sizeInGB; // Create VirtualDisk var disk = new VcVirtualDisk(); disk.backing = 
diskBackingInfo; disk.controllerKey = 1000; disk.unitNumber = diskUnitNumber; disk.capacityInKB = diskSize; // Create Disk ConfigSpec var deviceConfigSpec = new VcVirtualDeviceConfigSpec(); deviceConfigSpec.device = disk; deviceConfigSpec.fileOperation = VcVirtualDeviceConfigSpecFileOperation.create; deviceConfigSpec.operation = VcVirtualDeviceConfigSpecOperation.add; var deviceConfigSpecs = []; deviceConfigSpecs.push(deviceConfigSpec); // List of devices var configSpec = new VcVirtualMachineConfigSpec(); configSpec.deviceChange = deviceConfigSpecs; return virtualMachine.reconfigVM_Task(configSpec);
  • waitForTaskAndUpdate
    var vcacvm= Server.findAllForType("vCAC:VirtualMachine", "VirtualMachineName eq '"+virtualMachineName+"'")[0]; var vcacEntity = vcacvm.getEntity(); System.log("vcacvm:"+vcacvm); var hostid = vcacEntity.hostId; System.log("vcacHost:"+hostid); System.log("updating entities"); var modelName = 'ManagementModelEntities.svc'; var entitySetName = 'DataCollectionStatuses'; //Read a list of entities var entities = vCACEntityManager.readModelEntitiesByCustomFilter(hostid, modelName, entitySetName, null, null); for each (entity in entities) { var entityKey = entity.keyString; System.log("Updating entity with key: " + entityKey); var links = null; var headers = null; var updateProperties = { "LastCollectedTime":null }; vCACEntityManager.updateModelEntityBySerializedKey(hostid, modelName, entitySetName, entityKey, updateProperties, links, headers); } System.sleep(10000); System.log("Finished updating entities");

Upload operation definition

Upload the operation definition by calling the following API. This example shows the structure of the operation definition for the previously described Aria Automation workflow, onboarded for the ServiceOffering CentOS65:
  • API:
    https://<host>:<port>/catalog/content/admin/vra/v1/services
    <host> here refers to the Consume env host.
  • Headers:
    • Content-Type:
      application/json
    • Username:
      'consume-username'
    • Apikey:
      'consume-apikey'
  • Body:
    [{ "id": "addingDisktoVMJun19", "name": "Add Disk to VM", "providerCode": "vra", "apiDocVersion": "v3", "provider":{ "name":"vra" }, "version": "1.0.0", "serviceType": "ServiceOperation", "isPricingEnabled": true, "isConfigurationEnabled": true, "associatedTo": { "type": "SOC", "serviceOfferingRefs": [ { "id": "CentOS65_CustomOp", "providerSOCTypeId": "Virtual Machine" } ] }, "context": { "organization": [ ], "team": [ ], "application": [ ], "environment": [ ] }, "status": "PUBLISHED", "isCustom": true, "operationCode": "addDisktoVM" }]
In the previous payload, the line
"id": "CentOS65_CustomOp",
is crucial when onboarding the operation because it holds the serviceOfferingId of the catalog for which you are onboarding the operation, that is, the serviceOfferingId of the corresponding Enterprise Marketplace offering. After this catalog is provisioned, the operation that you are about to onboard is visible in the
Inventory
window for the stack of the same type at the stack component level.

Linking to the catalog in Enterprise Marketplace

Complete the following steps to link to the catalog in Enterprise Marketplace:
  1. In Enterprise Marketplace, click the
    Catalog
    tab.
  2. In the
    Search, Select and Configure
    window, click the catalog that you want to link to.
  3. Copy the last part of the URL for that catalog.
  4. In the payload of the operation definition, set this value on the "id" line.
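The linking steps above can be sketched as follows; the catalog URL is hypothetical, and only its trailing segment matters:

```python
from urllib.parse import urlparse

def service_offering_id(catalog_url: str) -> str:
    """The last path segment of the catalog URL is the serviceOfferingId."""
    return urlparse(catalog_url).path.rstrip("/").split("/")[-1]

# Hypothetical catalog URL copied from the Search, Select and Configure window.
url = "https://consumehost/marketplace/catalog/CentOS65_CustomOp"

# Patch the copied ID into the operation-definition payload from the previous step.
payload = {"associatedTo": {"type": "SOC",
           "serviceOfferingRefs": [{"id": service_offering_id(url),
                                    "providerSOCTypeId": "Virtual Machine"}]}}
print(payload["associatedTo"]["serviceOfferingRefs"][0]["id"])  # CentOS65_CustomOp
```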

Upload operation configuration

The sample VMware workflow includes the following input parameters:
  • virtualMachineName
    : This parameter is automatically detected by the system and sent to the vRO.
  • sizeInGB
    : This input is provided by the user.
  • diskUnitNumber
    : This input is provided by the user.
Therefore, you need to define two configurations for the operation. The metadata for the configuration definitions is like the configuration metadata for any normal catalog. Use the following API to upload the configuration of the operation:
  • API:
    https://<host>:<port>/catalog/content/admin/vra/v1/services/<OPERATION_ID>/config
    <host> here refers to the Consume env host.
    In the previous URL, <OPERATION_ID> refers to the ID ("id":"addingDisktoVMJun19") of the operation definition that you onboarded in the previous step.
  • Headers:
    • Content-Type:
      application/json
    • Username:
      'consume-username'
    • Apikey:
      'consume-apikey'
  • Body:
    [{ "id":"diskSize", "name":"size of the disk", "description":"Size of the Disk", "default":"1", "binding":"diskSize", "required":true, "sequence":1, "configGroupCode":"main", "configGroupName":"main", "configGroupSequence":1, "editable":true, "hidden":false, "inputType":"numberinput", "group":"string", "range": { "min":1, "max":500 }, "errorMessage":"string" }, { "id":"numberOfUnits", "name":"Number of Disk Units", "description":"Number of Disk Units", "default":"1", "binding":"numberOfUnits", "required":true, "sequence":2, "configGroupCode":"main", "configGroupName":"main", "configGroupSequence":1, "editable":true, "hidden":false, "inputType":"numberinput", "group":"string", "range": { "min":0, "max":15 }, "errorMessage":"string" }]
In the
"binding":"diskSize",
line, the binding attribute is used to map the configuration value to the specific template parameter. The value of the binding must match the parameter name defined in the execute operation template. For more information, see VMware Aria Automation integration.

Upload operation template

An operation template consists of the following types of templates. All of these templates are written in the COT schema.
  1. Execution Template:
    This template is used to execute the operation or to make a request to the provider for the operation. The basic assumption is that the operation to be executed should expose an API call with a callback ID to retrieve the status of the operation later. For execution templates, the following types of parameters can be passed:
    • System Parameters:
      These system parameters are always available in the runtime context to be used during execution. To use them, explicitly declare them in the template with a fixed name.
      • _resourceInfo: Contains the identification data related to the resource (stack component) on which the operation is going to be executed.
      • _configInfo: Contains the configuration information of the operation.
      • _trackingInfo: Contains the trackingInfo present in the stack associated with the operation.
      • _additionalInfo: Contains the additionalInfo present in the stack associated with the operation.
    • User Defined Parameters:
      These are the parameters that the user must provide as input during operation execution. These parameters are invariably mapped to configurations of the operation definition. The binding attribute value of the operation config must match the name of the user-defined parameter for it to be used in the template.
  2. Status Template:
    This template is used to retrieve the status of the operation that you executed. Keep in mind the following guidelines when preparing the status template:
    • The names of the parameters in the status template should be the same as the names of the output variables declared in the execution template so that the correct values are used during status template execution. During runtime, the output of the execution template is passed to the status template as is.
    • The output of the status template must contain a variable called
      status
      that can have the following values:
      • Completed
      • Failed
      • InProgress
  3. Failure Template:
    This template is executed in failure scenarios, and it is optional. If the operation is a multi-step procedure and it fails during runtime, this template can be used to compensate for the work already done. The failure template should not contain any input parameters. All the parameters and internal variables of the execution template are passed to the failure template during execution. If a variable is not resolved when the failure occurs, its value is passed as UNRESOLVED.
The example operation does not require a failure template.
The following template comprises the executionTemplate and the statusTemplate. These templates are executed by the template engine to perform the custom operations.
{ "executionTemplate": { "$schema": "http://github.kyndryl.net/cb-consume-schema/cs/op-template.json", "contentVersion": "1.0", "name": "sampletemplate", "parameters": { "_resourceInfo": { "type": "object" }, "_configInfo": { "type": "object" }, "_orderId":{ "type":"string" }, "_trackingInfo":{ "type":"object" } }, "resourceOperations": { "vragetXaasTemplate": { "catalogItemId": "${catalogItemId}" }, "vrapostXaasTemplate": { "catalogItemId": "${catalogItemId}", "template": "${template}" } }, "variables": { "resourceId": "${_resourceInfo}.name", "catalogItemId": "'6a20e304-f563-4977-9165-d632df0b2e57'", "diskSize": "(function(configInfo){var finalvalue=''; configlist = configInfo[0].config; for(var i in configlist){if(configlist[i].configId == 'diskSize'){if(configlist[i].values[0].value) {finalvalue = configlist[i].values[0].value;}else {finalvalue = configlist[i].values[0].valueId;}}}return finalvalue;})(${_configInfo})", "numberOfUnits": "(function(configInfo){var finalvalue=''; configlist = configInfo[0].config; for(var i in configlist){if(configlist[i].configId == 'numberOfUnits'){if(configlist[i].values[0].value) {finalvalue = configlist[i].values[0].value;}else {finalvalue = configlist[i].values[0].valueId;}}}return finalvalue;})(${_configInfo})", "template": "(function(template){ template.data.virtualMachineName=${resourceId}; template.data.diskUnitNumber= ${numberOfUnits}; template.data.sizeInGB = ${diskSize}; return template;})(${vragetXaasTemplate})" }, "output": { "tracking": "${vrapostXaasTemplate}.id" } }, "statusTemplate": { "$schema": "http://github.kyndryl.net/cb-consume-schema/cs/op-template.json", "contentVersion": "1.0", "name": "sampletemplate", "parameters": { "tracking": { "type": "string" } }, "resourceOperations": { "vragetResources": { "requestId": "${tracking}" } }, "output": { "status": "function(status){ if(status.phase === 'SUCCESSFUL') {return 'Completed'} else if(status.phase === 'FAILED'){return 'Failed'} else {return 
'InProgress'}}(${vragetResources})", "comments": "function(status){if(status.phase === 'FAILED'){if(status.requestCompletion.completionDetails){return status.requestCompletion.completionDetails} else {return 'error on vRA side'}}}(${vragetResources})" } }, "failureTemplate": {} }
This example uses relatively simple JavaScript concepts. It assumes that the code in the variables section will be optimized and refined as the template is developed.
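As a quick reference, the mapping that the statusTemplate's output function performs can be mirrored outside the template; the following is a sketch of the same logic in Python:

```python
def map_status(phase: str) -> str:
    """Mirror of the statusTemplate output function:
    provider request phase -> one of the three required status values."""
    if phase == "SUCCESSFUL":
        return "Completed"
    if phase == "FAILED":
        return "Failed"
    return "InProgress"

assert map_status("SUCCESSFUL") == "Completed"
assert map_status("FAILED") == "Failed"
assert map_status("IN_PROGRESS") == "InProgress"  # any other phase stays InProgress
```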
Post this template against the operation that you onboarded by calling the following API:
  • API:
    https://<host>:<port>/catalog/content/admin/vra/v1/services/<OPERATION_ID>/template
    <host> here refers to the Consume env host.
    In the previous URL, "<OPERATION_ID>" refers to the ID ("id":"addingDisktoVMJun19") of the operation definition that you onboarded. The following body also contains the reference "<OPERATION_ID>", which must be replaced by the same ID ("addingDisktoVMJun19").
  • Headers:
    • Content-Type:
      application/json
    • Username:
      'consume-username'
    • Apikey:
      'consume-apikey'
  • Body:
    { "serviceId": "<OPERATION_ID>", "refId":"<OPERATION_ID>", "description": "<<base64 encoded format of the template json>>" }
    The following is the sample payload. The template JSON shown earlier in this section must be encoded to base64, and the encoded string goes in the description field:
    { "serviceId": "addingDisktoVMJun19", "refId":"addingDisktoVMJun19", "description": "<<base64 encoded format of the template json>>" }
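The base64 encoding for the description field takes only a few lines. This sketch uses a minimal stand-in template rather than the full one shown earlier:

```python
import base64
import json

# Minimal stand-in for the full operation template shown earlier.
template = {"executionTemplate": {"name": "sampletemplate"},
            "statusTemplate": {"name": "sampletemplate"},
            "failureTemplate": {}}

# Encode the template JSON to base64 and place it in the description field.
encoded = base64.b64encode(json.dumps(template).encode("utf-8")).decode("ascii")
payload = {"serviceId": "addingDisktoVMJun19",
           "refId": "addingDisktoVMJun19",
           "description": encoded}

# Round-trip check: decoding the description must give back the original template.
assert json.loads(base64.b64decode(payload["description"])) == template
```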

Dependencies on the operation template: API definitions

The operation template composes all the API calls and other operations that you need to perform on the provider side, and it is run by the ICB Template Engine.
This example uses three APIs in the operation template. The resourceOperations attribute in the executionTemplate refers to the APIs that need to be invoked using some reference ID (for example, vragetXaasTemplate and vrapostXaasTemplate). Therefore, before performing the operation, make sure that the API references present in the resourceOperations have definitions registered in Provider API Registry.
If you want to register the API definitions in the API Registry, run the following API:
  • API:
    https://<host>:<port>/apiregistry/providers/vra/v1/apis
    <host> here refers to the Enterprise Marketplace env host.
  • Headers:
    • Content-Type:
      application/json
    • Username:
      'consume-username'
    • Apikey:
      'consume-apikey'
  • Body:
    [ { "contentVersion": "1.0", "id": "vragetXaasTemplate", "providerTypeId": "vra", "params": { "catalogItemId": { "type": "string" } }, "resourcepath": "/catalog-service/api/consumer/entitledCatalogItems/${catalogItemId}/requests/template", "method": "GET" }, { "contentVersion": "1.0", "id": "vrapostXaasTemplate", "providerTypeId": "vra", "body": "${template}", "resourcepath": "/catalog-service/api/consumer/entitledCatalogItems/${catalogItemId}/requests", "method": "POST" } ]
In the previous payload, observe the 'id' keys (vragetXaasTemplate and vrapostXaasTemplate): each matches a reference in the resourceOperations attribute of the executionTemplate. For each reference in the operation template, a corresponding API definition must exist.
Similarly, use this API to pass the API definitions for the status template:
  • API:
    https://<host>:<port>/apiregistry/providers/vra/v1/apis
<host> here refers to the consume env host.
The following example shows the Status API.json:
[{ "contentVersion": "1.0", "id": "vragetResources", "providerTypeId": "vra", "resourcepath": "/catalog-service/api/consumer/requests/${requestId}", "method": "GET" }]
In the statusTemplate attribute of the operation template, you can find the reference in resourceOperations that matches the ID in the previous JSON.
These API definitions are fetched by the template engine during the template execution. Therefore, the presence of these definitions in the API registry before template execution is vital. The person who is posting the API definitions needs to be a part of a team that has the Service Integrator role.
The template used here is only an example. You will need to build your own templates as required.

Variables in the template

As you can see in the executionTemplate, the variables section contains JavaScript code. You can embed JavaScript functions in the template, which keeps the provisioning adapter generic and lets you configure the template to your needs. In the previous variables, logic is included to fetch the value of a configuration based on its configId. The configuration payload arrives as the request payload to the Aria Automation provisioning adapter at runtime. The whole config object is passed as an input parameter to the function in the variable; the function parses the object and returns the final value.
In the example template, diskSize and numberOfUnits are the variables that are reused in the function of the variable template.
The value obtained for a template variable is used wherever that variable is referred to. If you look at the API definitions, you can find the template variables referenced in the API definition with the id vrapostXaasTemplate.
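The config-lookup logic embedded in the variables section can be mirrored outside the template, which is handy for checking payloads before onboarding. The following is a sketch, assuming the _configInfo shape that the template's JavaScript walks:

```python
def config_value(config_info, config_id):
    """Mirror of the template's variable function: fetch a config value by configId,
    preferring 'value' and falling back to 'valueId'."""
    for item in config_info[0]["config"]:
        if item["configId"] == config_id:
            first = item["values"][0]
            return first.get("value") or first.get("valueId")
    return ""

# Hypothetical runtime payload in the shape used by the template above.
config_info = [{"config": [
    {"configId": "diskSize", "values": [{"value": "20"}]},
    {"configId": "numberOfUnits", "values": [{"valueId": "2"}]},
]}]
assert config_value(config_info, "diskSize") == "20"
assert config_value(config_info, "numberOfUnits") == "2"
```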

Upload price of operation

If you need to charge for the operation, you can upload prices for the operation just as you would for any other catalog. However, currently the prices of the operation must be defined separately from the prices of the associated catalog; the pricing calculation does not reference the pricing of the associated catalog. Run the following API:
  • API:
    https://<host>:<port>/catalog/content/admin/vra/v1/services/<OPERATION_ID>/offeredservices
    In the previous URL, <OPERATION_ID> refers to the ID ("id":"addingDisktoVMJun19") of the operation definition.
  • Headers:
    • Content-Type:
      application/json
    • Username:
      'consume-username'
    • Apikey:
      'consume-apikey'
The following example shows a sample
Price.json
file:
[ { "id": "<<uniqueId>>", "pricingPlan": "Monthly", "serviceId": "addingDisktoVMJun19", "offer": { "name": "DiskSize", "description": "", "fareStrategy": "RATE", "configValues": [ { "configId": "diskSize", "uom": true, "value": "1" } ] }, "price": { "oneTimeCharge": 10, "description": "Size of the Disk", "usageCharge": { "uom": { "value": 1, "code": "GB" }, "frequency": { "value": 1, "code": "MONTH" }, "value": 50 }, "currencyCode": "USD" }, "basePrice": true } ]
In the previous payload,
"addingDisktoVMJun19"
is the operation ID for the operation that is being onboarded.

Subscribe to operation endpoint

The provisioning adapter that handles the execution operation from Enterprise Marketplace needs to be subscribed to the OperationFulfillment and OperationStatus exchanges before the operation execution starts. Use the same routingKey that you used while onboarding the operation definition.
This technique should only be used if the
vra-adapter-installer
script is not used to set up the adapters. If you used this script, this step has already been completed.
If you want to subscribe to OperationFulfillment, run the following API:
  • API:
    https://<host>:<port>/msgsubscription/v1/subscriber
  • Headers:
    • Content-Type:
      application/json
    • Username:
      'consume-username'
    • Apikey:
      'consume-apikey'
If you want to subscribe to OperationStatus, call this API:
  • API:
    https://<host>:<port>/msgsubscription/v1/subscriber
  • Headers:
    • Content-Type:
      application/json
    • Username:
      'consume-username'
    • Apikey:
      'consume-apikey'
The following is an example
SubscriptionOperationStatusEndPoint.json
file:
{ "name": "operation_fulfillment_vra", "topicName": "OperationFulfillment", "queue_name": "operation_fulfillment_vra_queue", "queue_ttl": 3600000, "routing_key": [ "vra" ], "subscription_type": "Push", "headers": { "adp-user": "{system_userid}", "adp-api-key": "{systemapikey}" }, "callback": { "url": "{nginxhost}/vRA/v1/executeOperation", "certs": { "endpoint_ca_cer": "{certificate_name}", "endpoint_cer": "{endpoint_cer_name}", "endpoint_key": "{endpoint_key_name}" }, "mutual_auth": "{mutual_auth_string}", "connection_protocol": "{connection_protocol}", "retry_count": 1, "requiredForwardProxy": true } }

Fetching the bearer token

The VMware APIs can be accessed from Postman by generating a token and selecting Bearer Token as the Auth type. To get the token, run the following API:
curl -X POST \ https://dal09vra73.cloudmatrix.local/identity/api/tokens \ -H 'cache-control: no-cache' \ -H 'content-type: application/json' \ -H 'postman-token: 6bd8f4e2-72fd-f7f1-257a-ccfa38a0f4a8' \ -d '{"username":"*********", "password":"**********", "tenant":"*****************" }'
The response is in the format shown in the following example; the id field contains the token:
{ "expires": "", "id": "******************", "tenant": "vsphere.local" }
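A small helper for the token flow above might look like this. The request is only constructed here, not sent, and the host and credentials are placeholders:

```python
import json
import urllib.request

def token_request(host, username, password, tenant):
    """Build the POST /identity/api/tokens request (not sent here)."""
    body = json.dumps({"username": username, "password": password,
                       "tenant": tenant}).encode("utf-8")
    return urllib.request.Request(
        f"https://{host}/identity/api/tokens", data=body,
        headers={"content-type": "application/json"}, method="POST")

def bearer_header(token_response: dict) -> dict:
    """The 'id' field of the token response is the bearer token."""
    return {"Authorization": f"Bearer {token_response['id']}"}

req = token_request("dal09vra73.cloudmatrix.local", "user", "pass", "vsphere.local")
print(req.full_url)  # https://dal09vra73.cloudmatrix.local/identity/api/tokens
print(bearer_header({"id": "abc123", "tenant": "vsphere.local"}))
```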

Troubleshooting

Whenever the provisioning adapter or template engine logs show the error message
401 - "User [email protected] cannot access route /apiregistry/providers/:providerId/:versionId/apis"
make sure that the username used while running the adapter installer belongs to a team that has the Service Integrator role and has the following permission:
{ "businessfunction": "API", "level": "CRUD" }
If you want to check whether the Service Integrator role has that permission, make a GET call to fetch the permissions for the Service Integrator role:
  • API:
    https://<<consumehost>>:<<consumeport>>/authorization/v2/roles/Service Integrator
  • Headers:
    • Username:
      <<consume username>>
    • Apikey:
      <<consume apikey>>
This GET call generates the following response:
{
  "role_validations": {
    "mandatoryContextsFieldExtensionAllowed": false,
    "optionalContextsFieldOverrideAllowed": false,
    "contextMetadata": {
      "optionalContexts": { "contexts": [], "required": 0 },
      "mandatoryContexts": [
        { "contextName": "org", "contextValue": [ "ALL" ] }
      ]
    }
  },
  "icbapplication": "core",
  "name": "Service Integrator",
  "permissions": [
    { "businessfunction": "IBUS_MESSAGE", "level": "EDIT" },
    { "businessfunction": "IBUS_SUBSCRIBER", "level": "CRUD" },
    { "businessfunction": "SYSTEM_ACCOUNT_CREDENTIAL", "level": "VIEW" },
    { "businessfunction": "PROVIDER_ACCOUNT_CREDENTIAL", "level": "VIEW" },
    { "businessfunction": "AUDIT_LOG", "level": "CREATE" },
    { "businessfunction": "API", "level": "CRUD" }
  ]
}
Make sure that the following permission is found in the permissions array:
{ "businessfunction": "API", "level": "CRUD" }
If it is not, make the following PUT call to update the Service Integrator role:
  • API:
    https://<<consumehost>>:<<consumeport>>/authorization/v2/roles/Service Integrator
  • Headers:
    • Username:
      <<consume username>>
    • Apikey:
      <<consume apikey>>
    • Content-Type:
      application/json
  • Body:
    {
      "role_validations": {
        "mandatoryContextsFieldExtensionAllowed": false,
        "optionalContextsFieldOverrideAllowed": false,
        "contextMetadata": {
          "optionalContexts": { "contexts": [], "required": 0 },
          "mandatoryContexts": [
            { "contextName": "org", "contextValue": [ "ALL" ] }
          ]
        }
      },
      "icbapplication": "core",
      "name": "Service Integrator",
      "permissions": [
        { "businessfunction": "IBUS_MESSAGE", "level": "EDIT" },
        { "businessfunction": "IBUS_SUBSCRIBER", "level": "CRUD" },
        { "businessfunction": "SYSTEM_ACCOUNT_CREDENTIAL", "level": "VIEW" },
        { "businessfunction": "PROVIDER_ACCOUNT_CREDENTIAL", "level": "VIEW" },
        { "businessfunction": "AUDIT_LOG", "level": "CREATE" },
        { "businessfunction": "API", "level": "CRUD" }
      ]
    }
After updating the Service Integrator role with these permissions, perform another GET call to verify that the permissions have changed. A few minutes after the update, you can access the
/apiregistry/providers/:providerId/:versionId/apis
route.
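The permission check described above can also be scripted. A minimal sketch, assuming the role JSON has the shape returned by the GET call:

```python
def has_api_crud(role):
    """Return True if the role JSON (as returned by the GET call above)
    grants the API business function at CRUD level."""
    return any(
        p.get("businessfunction") == "API" and p.get("level") == "CRUD"
        for p in role.get("permissions", [])
    )
```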

Sample payload for the system parameters

The following examples show sample payloads for the execution template:
  • _resourceInfo
    { "status": "On", "name": "vm904", "providerName": "", "tags": "", "resourceType": "Virtual Machine", "parentId": "PYTVY4H94B", "connectionInstructions": "", "primaryInfo": [ { "type": null, "name": "ip_address", "value": "10.155.251.132" }, { "type": null, "name": "MachineName", "value": "vm904" } ], "provider": "PrivateCloud", "id": "3939ee68-ed01-48b0-acba-a55d69b03781", "templateOutputProperties": [ { "type": null, "name": "MachineDailyCost", "value": 0 }, { "type": null, "name": "MachineDestructionDate", "value": "2018-06-28T10:24:19.743Z" }, { "type": null, "name": "NETWORK LIST", "value": [ { "classId": "dynamicops.api.model.NetworkViewModel", "componentTypeId": "com.vmware.csp.component.iaas.proxy.provider", "typeFilter": null, "data": { "NETWORK_MAC_ADDRESS": "00:50:56:96:d6:13", "NETWORK_NAME": "mgmt2", "NETWORK_ADDRESS": "10.155.251.132" }, "componentId": null } ] }, { "type": null, "name": "MachineReservationName", "value": "BFAST" }, { "type": null, "name": "PowerOff", "value": true }, { "type": null, "name": "IS COMPONENT MACHINE", "value": false }, { "type": null, "name": "Destroy", "value": true }, { "type": null, "name": "MachineStorage", "value": 42 }, { "type": null, "name": "MachineType", "value": "Virtual" }, { "type": null, "name": "Component", "value": "CentOS65" }, { "type": null, "name": "ChangeOwner", "value": true }, { "type": null, "name": "Reprovision", "value": true }, { "type": null, "name": "Expire", "value": true }, { "type": null, "name": "SNAPSHOT LIST", "value": [] }, { "type": null, "name": "Reconfigure", "value": true }, { "type": null, "name": "ConnectViaVmrc", "value": true }, { "type": null, "name": "MachineInterfaceType", "value": "vSphere" }, { "type": null, "name": "Reset", "value": true }, { "type": null, "name": "EXTERNAL REFERENCE ID", "value": "vm-7382" }, { "type": null, "name": "MachineMemory", "value": 1024 }, { "type": null, "name": "MachineGroupName", "value": "Configuration Administrators" }, { "type": null, 
"name": "Reboot", "value": true }, { "type": null, "name": "InstallTools", "value": true }, { "type": null, "name": "DISK VOLUMES", "value": [ { "classId": "dynamicops.api.model.DiskInputModel", "componentTypeId": "com.vmware.csp.component.iaas.proxy.provider", "typeFilter": null, "data": { "DISK_CAPACITY": 5, "DISK_INPUT_ID": "DISK_INPUT_ID1" }, "componentId": null }, { "classId": "dynamicops.api.model.DiskInputModel", "componentTypeId": "com.vmware.csp.component.iaas.proxy.provider", "typeFilter": null, "data": { "DISK_CAPACITY": 5, "DISK_INPUT_ID": "DISK_INPUT_ID2" }, "componentId": null }, { "classId": "dynamicops.api.model.DiskInputModel", "componentTypeId": "com.vmware.csp.component.iaas.proxy.provider", "typeFilter": null, "data": { "DISK_CAPACITY": 32, "DISK_INPUT_ID": "DISK_INPUT_ID3" }, "componentId": null } ] }, { "type": null, "name": "MachineCPU", "value": 3 }, { "type": null, "name": "ConnectViaNativeVmrc", "value": true }, { "type": null, "name": "Suspend", "value": true }, { "type": null, "name": "MachineInterfaceDisplayName", "value": "vSphere (vCenter)" }, { "type": null, "name": "MachineBlueprintName", "value": "CentOS65" }, { "type": null, "name": "VirtualMachine Admin UUID", "value": "5016af6c-ef57-002e-b301-be85d56fdd5d" }, { "type": null, "name": "MachineId", "value": "730b7215-ffd2-46d3-a7cc-33bfbdc3f892" }, { "type": null, "name": "MachineGuestOperatingSystem", "value": "CentOS 4/5/6/7 (64-bit)" }, { "type": null, "name": "MachineExpirationDate", "value": "2018-06-28T10:24:19.743Z" }, { "type": null, "name": "Shutdown", "value": true }, { "type": null, "name": "EndpointExternalReferenceId", "value": "90aacc2c-0315-4af4-aea2-251fbafa8e4f" }, { "type": null, "name": "ChangeLease", "value": true }, { "type": null, "name": "GetExpirationReminder", "value": true }, { "type": null, "name": "CreateSnapshot", "value": true } ] }
  • _configInfo
    [ { "configGroup": "Configure Region", "sectionCode": "", "config": [ { "description": "Size of the Disk", "isChange": "", "values": [ { "valueId": "2", "binding": "diskSize", "value": "" } ], "configId": "diskSize", "type": "", "uom": "", "name": "Size of the disk" }, { "description": "Number of Disk Units", "isChange": "", "values": [ { "valueId": "2", "binding": "numberOfUnits", "value": "" } ], "configId": "numberOfUnits", "type": "", "uom": "", "name": "Number of Disk Units" } ], "sectionName": "" } ]
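Values such as the IP address can be pulled out of the primaryInfo array of a _resourceInfo payload like the sample above. A minimal sketch, assuming that payload shape:

```python
def primary_info(resource_info, field):
    """Look up a value (for example, 'ip_address') in the primaryInfo
    list of a _resourceInfo payload like the sample above."""
    for entry in resource_info.get("primaryInfo", []):
        if entry.get("name") == field:
            return entry.get("value")
    return None
```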

Operation template for PowerOn/PowerOff operations

The following example shows a sample template for powering on and powering off VMs.
{ "executionTemplate": { "$schema": "http:\/\/github.kyndryl.net\/cb-consume-schema\/cs\/op-template.json", "contentVersion": "1.0", "name": "sampletemplate", "parameters": { "_resourceInfo": { "type": "object" } }, "resourceOperations": { "vragetActions": { "resourceId": "${resourceId}" }, "vragetTemplate": { "actionId": "${actionId}", "resourceId": "${resourceId}" }, "vrapostTemplate": { "actionId": "${actionId}", "resourceId": "${resourceId}", "vragetTemplate": "${vragetTemplate}" } }, "variables": { "resourceId": "${_resourceInfo}.id", "actionName": "'Reboot'", "actionId": "(function(actionName,listOfActions){return listOfActions.actionId[0].filter(word => word.name==actionName)})(${actionName},${vragetActions})[0]['id']", "location": "(function(location){return location.location[0].substr(location.location[0].lastIndexOf('\/')+1,location.location[0].length)})(${vrapostTemplate})" }, "output": { "tracking": "${location}" } }, "statusTemplate": { "$schema": "http:\/\/github.kyndryl.net\/cb-consume-schema\/cs\/op-template.json", "contentVersion": "1.0", "name": "sampletemplate", "parameters": { "tracking": { "type": "string" } }, "resourceOperations": { "vragetResources": { "requestId": "${tracking}" } }, "output": { "status": "function(status){ if(status.phase === 'SUCCESSFUL') {return 'Completed'} else if(status.phase === 'FAILED'){return 'Failed'} else {return 'InProgress'}}(${vragetResources})", "comments": "function(status){if(status.phase === 'FAILED'){if(status.requestCompletion.completionDetails){return status.requestCompletion.completionDetails} else {return 'error on vRA side'}}}(${vragetResources})" } }, "failureTemplate": { } }
The following example shows the corresponding API definitions:
[ { "contentVersion": "1.0", "id": "vragetActions", "providerTypeId": "icb", "resourcepath": "\/catalog-service\/api\/consumer\/resources\/${resourceId}\/actions", "method": "GET", "returnPath": { "actionId": "$.content" } }, { "contentVersion": "1.0", "id": "vragetTemplate", "providerTypeId": "vra", "resourcepath": "\/catalog-service\/api\/consumer\/resources\/${resourceId}\/actions\/${actionId}\/requests\/template", "method": "GET" }, { "contentVersion": "1.0", "id": "vrapostTemplate", "providerTypeId": "vra", "body": "${vragetTemplate}", "resourcepath": "\/catalog-service\/api\/consumer\/resources\/${resourceId}\/actions\/${actionId}\/requests", "method": "POST", "returnPath": { "location": "responseHeaders:$.location" } }, { "schema": "http:\/\/github.kyndryl.net\/cb-consume-schema\/cs\/api.json", "contentVersion": "1.0", "id": "vragetResources", "providerTypeId": "icb", "resourcepath": "\/catalog-service\/api\/consumer\/requests\/${requestId}", "method": "GET" } ]
The preceding API definitions include both the execution template and status template API definitions.
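The phase-to-status mapping embedded in the statusTemplate above can be expressed in Python for clarity. This mirrors the template's logic but is not part of the adapter itself:

```python
def map_status(vra_request):
    """Translate the vRA request phase (as returned by vragetResources)
    into the adapter states used in the statusTemplate above."""
    phase = vra_request.get("phase")
    if phase == "SUCCESSFUL":
        return "Completed"
    if phase == "FAILED":
        return "Failed"
    return "InProgress"
```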

Integrating with Cost & Asset Management

Cost & Asset Management (CAM) is a monitoring and asset-tracking system available in Enterprise Marketplace. Before you can track VMware Aria Automation assets in your system, perform the following steps to integrate your content into the Cost & Asset Management system.

Registering as a private cloud

First, register a vSphere endpoint as a private cloud by clicking
Provider Accounts
and selecting
Private Cloud Registration
. In the window that is displayed, you can enter details about the private cloud such as location and the catalogs that the endpoint offers.

Uploading the GPD file

After this information is registered, you can manually upload the GPD file into the Cost & Asset Management tool for ingestion against this private cloud. Download the vSphere data extractor tool from Cost & Asset Management. You can use the extractor tool to collect asset information and performance utilization details about the vSphere environment. This data is extracted as part of the GPD file creation.
The vSphere Data Collector also computes cost data for the assets based on the rate cards and costs that were configured before the tool was run.

Using the cost calculator

Cost & Asset Management allows you to create your own custom rate cards and integrate them with the private cloud costing and pricing for vSphere environments. The data extractor has a detailed readme document that helps you build custom rate cards and import them into Cost & Asset Management. If you want to use the cost calculator, complete the following steps:
  1. Download the extractor from Cost & Asset Management.
  2. Create a new cost plugin by implementing the custom cost calculation in Python and placing it in the rate card directory.
  3. Configure the newly created cost plugin by following the directions in the readme file present in the extract, and then generate the GPD file.
  4. Upload the GPD file.
    You can now analyze and view the cost in Cost & Asset Management.
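A cost plugin might look like the following sketch. The function name, asset fields, and rate-card keys are illustrative assumptions, not the extractor's documented plugin interface; the readme in the extract defines the real one:

```python
def compute_cost(asset, rate_card):
    """Hypothetical cost calculation: price an asset from a rate card.

    'MachineCPU', 'MachineMemory' (MB), and 'MachineStorage' (GB) follow
    the _resourceInfo property names seen earlier; the rate-card keys
    are placeholders for this sketch.
    """
    cpu = asset.get("MachineCPU", 0) * rate_card.get("cpu_per_core", 0.0)
    mem = asset.get("MachineMemory", 0) / 1024 * rate_card.get("mem_per_gb", 0.0)
    disk = asset.get("MachineStorage", 0) * rate_card.get("disk_per_gb", 0.0)
    return round(cpu + mem + disk, 2)
```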

Importing data into Cost & Asset Management

This section details how to import asset, cost, and utilization data from your VMware-based private cloud into Cost & Asset Management.

Prerequisites

The following prerequisites must be met before you can import data into Cost & Asset Management:
  • vSphere Portal managing a vCenter Server v5.5 or later
  • vRealize Suite v7.2 or later
  • VMware Tools installed on all servers
  • Statistics Level 1 and default collection intervals configured in the vCenter Server for performance monitoring
  • Python scripts can be run against the vCenter Server
  • PowerCLI scripts can be run against the vCenter Server

Preliminary steps

Prepare your environment and gather resources by taking the following steps:
  1. Download the VMware community-based API with Python bindings called
    pyvmomi
    .
  2. Go through the documentation and look up
    VirtualMachine.rst
    and
    PerformanceManager.rst
    because those are some of the classes you will be dealing with when bringing back asset and utilization data later.
  3. Download a separate repository called
    pyvmomi-community-samples
    that contains many more example scripts.
  4. Extract the repository and make sure that you can run some test scripts against the server. A typical call looks like the following (-s for server, -u for username, and -p for password):
    python getallvms.py -s 10.154.23.151 -u [email protected] -p Broker@345
  5. Download the
    gpd-format
    repository and extract it to a folder.

Getting the data

Now that the resources and environments are set up, you need to gather data about assets, their utilization, and costs. After you collect this data and the metadata about the private cloud that you are using, you can upload it to Cost & Asset Management. You need to retrieve the following data:

Asset data

Asset data from the private cloud can be retrieved from your vSphere instance to get the most accurate data. This data can then be compiled to form a list of the assets for a given time frame. If you want to fetch this data, you need to have the proper credentials to access the vCenter Server using the vSphere Web Client or the API.
After you have logged in and can view the different VMs available, use the VMware community-based API with Python bindings called pyvmomi to access this data using a sample Python file. You can also use the wide range of samples from pyvmomi-community-samples that show you how to do many different types of activities using the API, including getting a list of all VMs.
If you want to retrieve the asset data, complete the following steps:
  1. Run the
    getallvms.py
    file to get a list of VMs. After this Python file is edited, you can have it bring back all the resources and output that data to a CSV file.
  2. Edit the
    getallvms.py
    file as shown in the following segment. These changes allow it to bring back all objects, not just the VMs mentioned in the file name. You can then build multiple functions that handle the properties of those objects and output that data. The following edited segment of
    getallvms.py
    fetches multiple entity types:
    container = content.rootFolder  # starting point to look into
    viewType = [vim.ManagedEntity]  # object types to look for
    recursive = True  # whether we should look into it recursively
    containerView = content.viewManager.CreateContainerView(container, viewType, recursive)
    children = containerView.view
    for child in children:
        # print_vm_to_csv(child)
        item = str(child)
        if "Datacenter" in item:
            print_datacenter_info(child)
        elif "Folder" in item:
            print_folder_info(child)
        elif "Cluster" in item:
            print_cluster_info(child)
        elif "ResourcePool" in item:
            print_respool_info(child)
        elif "VirtualMachine" in item:
            print_vm_info(child)
  3. Make sure that you look at the GPD format for assets and check the GPD asset mapping for vSphere files located in the folder on the repository. Bring back the relevant details, starting with the mandatory data, as in the sample data pulled through the API with an edited version of
    getallvms.py
    .
  4. After you have successfully pulled this data and compiled it for a specific time, make sure it is in the sample asset CSV format as prescribed in the gpd-format guidelines.
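The compiled records can be written out with the standard csv module. A sketch with placeholder column names; use the column names prescribed by the gpd-format guidelines:

```python
import csv
import io

def write_asset_csv(assets, fieldnames):
    """Write pulled asset records to CSV text, one row per asset.

    'fieldnames' must follow the gpd-format asset guidelines; the names
    used in the test below are placeholders for this sketch.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for asset in assets:
        # Missing fields are written as empty cells.
        writer.writerow({k: asset.get(k, "") for k in fieldnames})
    return buf.getvalue()
```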
The process of pulling this data needs to be done multiple times a day because each pull captures a specific point in time. The more pulls you run, the more accurate the asset data will be, because frequent pulls also account for VMs that might have run for only a day or a few hours.
The API cannot retrieve the tags. If you want to fetch the tags, you need to pull them using a custom PowerCLI script that is connected to the same instance.

Utilization data

This data can be fetched using a method similar to the one used for the asset data. Run the
vm_perf_example.py
file in the
pyvmomi-community-samples/samples
folder to use vSphere's statistics data. You can specify a time interval by editing it in the Python file instead of just bringing back the latest sample (maxSample = 1).
If you want to report on and select specific metrics as needed, first fetch all the performance metric data and select what you want to use. Then go back to the gpd-format and determine which fields are required and how you can populate those fields. You can then put the data together to match the sample csv for utilization data as described in the gpd-format guidelines.
This fetch needs to be run hourly or daily to keep the statistics up to date.
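The raw samples returned by the statistics API can be reduced to the minimum, maximum, and average values that the utilization format expects. A minimal sketch:

```python
def aggregate_samples(values):
    """Reduce raw performance samples to the minValue / maxValue /
    avgValue fields used by the utilization format."""
    if not values:
        return {"minValue": None, "maxValue": None, "avgValue": None}
    return {
        "minValue": min(values),
        "maxValue": max(values),
        "avgValue": sum(values) / len(values),
    }
```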
The following table details the generic format for the utilization data. Sample data is available on the repository.

| Data field | Format | Required | Description |
|---|---|---|---|
| assetAccountId | varchar(255) | NOT NULL | Account ID on the provider. This can be the budgetary unit for private cloud. |
| providerAssetId | varchar(255) | NOT NULL | Provider-supplied identifier for the asset. This must uniquely identify an asset. |
| startTime | datetime | DEFAULT NULL | Date and time this asset was provisioned. |
| endTime | datetime | DEFAULT NULL | Date and time this asset was deprovisioned. This can be derived from the asset not showing up in subsequent data extractions. |
| minValue | varchar(255) | DEFAULT NULL | Minimum metric value. |
| maxValue | varchar(255) | DEFAULT NULL | Maximum metric value. |
| avgValue | varchar(255) | DEFAULT NULL | Average metric value. |
| unitOfMeasure | varchar(255) | DEFAULT NULL | Unit of measure for your values, such as Number, Percent, and GB. |

Cost data

Cost data can be imported into GPD using the following methods:
  • If the client's private cloud instance includes vRealize Business for Cloud as part of the vRealize Suite, multiple reports can be run to bring back the cost data by VM or other managed object type for a certain time. This cost data might then need to be reconciled with the asset data to supply some of the fields that the prescribed gpd-format requires.
  • Pull the bill from a managed service provider and then break down that data into the format necessary for GPD upload.
  • Create the costs file using a rate card approach based on the pricing policies that vRealize Business for Cloud uses. You can then calculate the costs using the asset and utilization data if necessary.

Provider metadata

For CAM to understand the context of the data coming in, it requires a certain basic level of details regarding the generic provider. These are details that you will already have, like the name of the provider, types of assets (VMs, datastores, and so on), service categories that you would map these assets to (compute, storage, and so on), and location data. The sample YAML file provided in the repository is a great place to start. However, you might have to make some decisions on how you want to categorize the data center location and how it will be viewed in different zoom levels on the CAM application.

Final upload package

The following is the final set of files that need to be uploaded:
  • The .zip file should contain one
    provider_metadata.yml
    in the root folder, with one or multiple folders per period. The folder names should be in YYYYMM format.
  • Each "period" folder must have one or multiple folders per billing account.
  • Each "billing account" folder must have one CSV (cost file) and may have additional folders for asset accounts. Empty billing account folders generate an error.
  • If an "asset account" folder is present, it should contain an asset CSV, a utilization CSV, or both. Empty asset folders generate an error.
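The YYYYMM period-folder naming convention can be validated before packaging the upload. A minimal sketch:

```python
import re

def check_period_folder(name):
    """Validate a period folder name against the YYYYMM convention
    described above (for example, '202412')."""
    return bool(re.fullmatch(r"\d{4}(0[1-9]|1[0-2])", name))
```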